Numerical study on the sequential Bayesian approach for radioactive materials detection
NASA Astrophysics Data System (ADS)
Qingpei, Xiang; Dongfeng, Tian; Jianyu, Zhu; Fanhua, Hao; Ge, Ding; Jun, Zeng
2013-01-01
A new detection method, based on the sequential Bayesian approach proposed by Candy et al., opens new horizons for research on radioactive materials detection. Compared with commonly adopted detection methods based on classical statistical theory, the sequential Bayesian approach offers the advantage of shorter verification times when analyzing spectra that contain low total counts, especially for complex radionuclide compositions. In this paper, a simulation experiment platform implementing the sequential Bayesian methodology was developed. Event sequences of γ-rays matching the true parameters of a LaBr3(Ce) detector were produced by an event-sequence generator based on Monte Carlo sampling to study the performance of the sequential Bayesian approach. The numerical experimental results are in accordance with those of Candy. Moreover, the relationship between the detection model and the event generator, represented respectively by the expected detection rate (Am) and the tested detection rate (Gm), is investigated. To achieve optimal performance of this processor, the interval of the tested detection rate as a function of the expected detection rate is also presented.
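A minimal sketch of the per-event Bayesian update underlying this kind of processor, assuming (hypothetically) exponential interarrival-time models with a background-only rate under H0 and a source-plus-background rate under H1; the rates and data below are illustrative, not the paper's:

```python
import math

def sequential_posterior(interarrivals, rate_bkg, rate_src, prior=0.5):
    """Update P(source present) photon by photon, modeling interarrival
    times as exponential with the background rate (H0) or the combined
    source-plus-background rate (H1)."""
    post = prior
    history = []
    for t in interarrivals:
        lik_src = rate_src * math.exp(-rate_src * t)  # likelihood under H1
        lik_bkg = rate_bkg * math.exp(-rate_bkg * t)  # likelihood under H0
        num = lik_src * post
        post = num / (num + lik_bkg * (1.0 - post))   # Bayes update
        history.append(post)
    return history
```

With events arriving at the source rate, the posterior climbs toward 1 within a handful of photons, which is why low-count spectra can still yield short verification times.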
A Bayesian sequential processor approach to spectroscopic portal system decisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sale, K; Candy, J; Breitfeller, E
The development of faster, more reliable techniques to detect radioactive contraband in a portal-type scenario is an extremely important problem, especially in this era of constant terrorist threats. Toward this goal, the development of a model-based Bayesian sequential data processor for the detection problem is discussed. In the sequential processor, each datum (detector energy deposit and pulse arrival time) is used to update the posterior probability distribution over the space of model parameters. The nature of the sequential processor approach is that a detection is produced as soon as it is statistically justified by the data, rather than waiting for a fixed counting interval before any analysis is performed. In this paper, the Bayesian model-based approach, the physics and signal processing models, and the decision functions are discussed, along with the first results of our research.
Sequential Probability Ratio Test for Collision Avoidance Maneuver Decisions
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis
2010-01-01
When facing a conjunction between space objects, decision makers must choose whether or not to maneuver for collision avoidance. We apply a well-known decision procedure, the sequential probability ratio test, to this problem. We propose two approaches to the problem solution, one based on a frequentist method and the other on a Bayesian method. The frequentist method does not require any prior knowledge concerning the conjunction, while the Bayesian method assumes knowledge of prior probability densities. Our results show that both methods achieve the desired missed detection rates, but the frequentist method's false alarm performance is inferior to the Bayesian method's.
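The sequential probability ratio test applied here can be sketched generically; the Gaussian mean-shift log-likelihood ratio in the test below is an illustrative assumption, and the thresholds are Wald's standard approximations from the desired error rates:

```python
import math

def sprt(samples, log_lr, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test: accumulate log-likelihood
    ratios until one of the two thresholds is crossed."""
    upper = math.log((1.0 - beta) / alpha)   # cross upward: accept H1
    lower = math.log(beta / (1.0 - alpha))   # cross downward: accept H0
    s, n = 0.0, 0
    for x in samples:
        n += 1
        s += log_lr(x)
        if s >= upper:
            return "H1", n
        if s <= lower:
            return "H0", n
    return "undecided", n
```

The test stops as soon as the evidence is sufficient, which is the same "decide when statistically justified" property exploited by the portal-detection processors above.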
Physics-based, Bayesian sequential detection method and system for radioactive contraband
Candy, James V; Axelrod, Michael C; Breitfeller, Eric F; Chambers, David H; Guidry, Brian L; Manatt, Douglas R; Meyer, Alan W; Sale, Kenneth E
2014-03-18
A distributed sequential method and system for detecting and identifying radioactive contraband from highly uncertain (noisy), low-count radionuclide measurements, i.e., an event mode sequence (EMS), using a statistical approach based on Bayesian inference and physics-model-based signal processing, with each radionuclide represented as a decomposition into monoenergetic sources. For a given photon event of the EMS, the appropriate monoenergy processing channel is determined using a confidence-interval condition-based discriminator for the energy amplitude and interarrival time, and parameter estimates are used to update a measured probability density function estimate for a target radionuclide. A sequential likelihood ratio test is then used to determine whether one of two threshold conditions is satisfied, signifying that the EMS either is or is not identified as the target radionuclide; if neither, the process is repeated for the next sequential photon event of the EMS until one of the two threshold conditions is satisfied.
Adaptive sequential Bayesian classification using Page's test
NASA Astrophysics Data System (ADS)
Lynch, Robert S., Jr.; Willett, Peter K.
2002-03-01
In this paper, the previously introduced Mean-Field Bayesian Data Reduction Algorithm is extended to adaptive sequential hypothesis testing utilizing Page's test. In general, Page's test is well understood as a method of detecting a permanent change in distribution associated with a sequence of observations. However, the relationship between detecting a change in distribution with Page's test and the problems of classification and feature fusion is not well understood. Thus, the contribution of this work is a method of classifying an unlabeled vector of fused features (i.e., detecting a change to an active statistical state) as quickly as possible given an acceptable mean time between false alerts. In this case, the developed classification test can be thought of as equivalent to performing a sequential probability ratio test repeatedly until a class is decided, with the lower log-threshold of each test set to zero and the upper log-threshold determined by the expected distance between false alerts. It is of interest to estimate the delay (or related stopping time) to a classification decision (the number of time samples it takes to classify the target), and the mean time between false alerts, as a function of feature selection and fusion by the Mean-Field Bayesian Data Reduction Algorithm. Results are demonstrated by plotting the delay to declaring the target class versus the mean time between false alerts, and are shown using both different numbers of simulated training data and different numbers of relevant features for each class.
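Page's test as described here, a repeated SPRT whose lower log-threshold is zero, can be sketched as follows; the log-likelihood-ratio function and threshold in the test are illustrative assumptions:

```python
def page_test(samples, log_lr, threshold):
    """Page's (CUSUM) test: the running log-likelihood-ratio statistic
    resets at zero, and an alarm fires when it crosses the upper
    threshold, which is set from the acceptable mean time between
    false alerts."""
    g = 0.0
    for n, x in enumerate(samples, 1):
        g = max(0.0, g + log_lr(x))  # reset at the zero lower threshold
        if g >= threshold:
            return n  # alarm (classification) time
    return None
```

Before the change, negative log-likelihood ratios keep the statistic pinned at zero; after the change, it drifts upward until the alarm.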
NASA Astrophysics Data System (ADS)
Noh, Hae Young; Rajagopal, Ram; Kiremidjian, Anne S.
2012-04-01
This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method for the cases where the post-damage feature distribution is unknown a priori. This algorithm extracts features from structural vibration data using time-series analysis and then declares damage using the change point detection method. The change point detection method asymptotically minimizes detection delay for a given false alarm rate. The conventional method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori. Therefore, our algorithm estimates and updates this distribution as data are collected using the maximum likelihood and the Bayesian methods. We also applied an approximate method to reduce the computation load and memory requirement associated with the estimation. The algorithm is validated using multiple sets of simulated data and a set of experimental data collected from a four-story steel special moment-resisting frame. Our algorithm was able to estimate the post-damage distribution consistently and resulted in detection delays only a few seconds longer than the delays from the conventional method that assumes we know the post-damage feature distribution. We confirmed that the Bayesian method is particularly efficient in declaring damage with minimal memory requirement, but the maximum likelihood method provides an insightful heuristic approach.
Bayesian design of decision rules for failure detection
NASA Technical Reports Server (NTRS)
Chow, E. Y.; Willsky, A. S.
1984-01-01
The formulation of the decision making process of a failure detection algorithm as a Bayes sequential decision problem provides a simple conceptualization of the decision rule design problem. As the optimal Bayes rule is not computable, a methodology that is based on the Bayesian approach and aimed at a reduced computational requirement is developed for designing suboptimal rules. A numerical algorithm is constructed to facilitate the design and performance evaluation of these suboptimal rules. The result of applying this design methodology to an example shows that this approach is potentially a useful one.
A Bayesian sequential design using alpha spending function to control type I error.
Zhu, Han; Yu, Qingzhao
2017-10-01
We propose in this article a Bayesian sequential design using alpha spending functions to control the overall type I error in phase III clinical trials. We provide algorithms to calculate critical values, power, and sample sizes for the proposed design. Sensitivity analysis is implemented to check the effects of different prior distributions, and conservative priors are recommended. We compare the power and actual sample sizes of the proposed Bayesian sequential design with different alpha spending functions through simulations. We also compare the power of the proposed method with that of a frequentist sequential design using the same alpha spending function. Simulations show that, at the same sample size, the proposed method provides larger power than the corresponding frequentist sequential design. It also has larger power than a traditional Bayesian sequential design that sets equal critical values for all interim analyses. Among the alpha spending functions compared, the O'Brien-Fleming function has the largest power and is the most conservative, in the sense that at the same sample size the null hypothesis is the least likely to be rejected at an early stage of the trial. Finally, we show that adding a stop-for-futility step to the Bayesian sequential design can reduce both the overall type I error and the actual sample sizes.
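The conservatism of the O'Brien-Fleming function can be made concrete with the standard Lan-DeMets spending forms, sketched below with the stdlib only; the information fractions are illustrative:

```python
import math
from statistics import NormalDist

def obf_spending(t, alpha=0.05):
    """Lan-DeMets O'Brien-Fleming-type spending function (two-sided):
    cumulative type I error spent by information fraction t."""
    z = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    return 2.0 * (1.0 - NormalDist().cdf(z / math.sqrt(t)))

def pocock_spending(t, alpha=0.05):
    """Lan-DeMets Pocock-type spending function."""
    return alpha * math.log(1.0 + (math.e - 1.0) * t)
```

At the halfway interim, the O'Brien-Fleming form has spent well under 1% of a 5% budget while the Pocock form has spent about 3%, which is the sense in which O'Brien-Fleming is "most conservative" at early looks; both spend the full alpha at t = 1.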
Sequential structural damage diagnosis algorithm using a change point detection method
NASA Astrophysics Data System (ADS)
Noh, H.; Rajagopal, R.; Kiremidjian, A. S.
2013-11-01
This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method. The general change point detection method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori, unless we are looking for a known specific type of damage. Therefore, we introduce an additional algorithm that estimates and updates this distribution as data are collected using the maximum likelihood and the Bayesian methods. We also applied an approximate method to reduce the computation load and memory requirement associated with the estimation. The algorithm is validated using a set of experimental data collected from a four-story steel special moment-resisting frame and multiple sets of simulated data. Various features of different dimensions were explored, and the algorithm was able to identify damage, particularly when using multidimensional damage-sensitive features, with lower false alarm rates when the post-damage feature distribution was known. For unknown feature distribution cases, the post-damage distribution was consistently estimated, and the detection delays were only a few time steps longer than the delays from the general method that assumes the post-damage feature distribution is known. We confirmed that the Bayesian method is particularly efficient in declaring damage with minimal memory requirement, but the maximum likelihood method provides an insightful heuristic approach.
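One way to realize the estimate-and-update idea is sketched below under simplifying assumptions: Gaussian features with known variance and a normal-normal conjugate Bayesian update for the unknown post-damage mean. This is a sketch of the concept, not the paper's exact estimator:

```python
def adaptive_cusum(samples, mu0, sigma, prior_mean, prior_var, threshold):
    """CUSUM change detection in which the unknown post-change mean is
    replaced by its current Bayesian posterior-mean estimate, updated as
    data are collected (normal-normal conjugacy)."""
    g, m, v = 0.0, prior_mean, prior_var
    for n, x in enumerate(samples, 1):
        mu1 = m  # plug-in estimate of the post-change mean
        llr = (x - mu0) * (mu1 - mu0) / sigma**2 \
            - (mu1 - mu0) ** 2 / (2.0 * sigma**2)
        g = max(0.0, g + llr)
        if g >= threshold:
            return n  # damage declared at sample n
        # conjugate posterior update for the post-change mean
        v_new = 1.0 / (1.0 / v + 1.0 / sigma**2)
        m = v_new * (m / v + x / sigma**2)
        v = v_new
    return None
```

As shifted data accrue, the estimated post-change mean sharpens toward the truth, so the detection delay approaches that of the known-distribution CUSUM.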
A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.
Yu, Qingzhao; Zhu, Lin; Zhu, Han
2017-11-01
Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to more efficiently assign newly recruited patients to treatment arms. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence of changing the prior distributions on the design. Simulation studies are applied to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when the total sample size is fixed, the proposed design can obtain greater power and/or require a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes; the proposed method further reduces the required sample size. Copyright © 2017 John Wiley & Sons, Ltd.
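For a two-arm difference-in-means statistic, the variance-minimizing randomization rate is the classical Neyman allocation. A sketch, assuming the arm standard deviations s1 and s2 are known (in an adaptive design they would be re-estimated as data accrue):

```python
def neyman_rate(s1, s2):
    """Randomization rate to arm 1 that minimizes the variance of the
    difference-in-means statistic for a fixed total sample size
    (Neyman allocation: proportional to the arm standard deviation)."""
    return s1 / (s1 + s2)

def diff_variance(s1, s2, n, r):
    """Variance of xbar1 - xbar2 when a fraction r of the n patients
    is allocated to arm 1."""
    return s1**2 / (n * r) + s2**2 / (n * (1.0 - r))
```

With equal variances this reduces to 1:1 randomization; with unequal variances, tilting the allocation toward the noisier arm strictly lowers the variance of the test statistic at the same total sample size.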
Win-Stay, Lose-Sample: a simple sequential algorithm for approximating Bayesian inference.
Bonawitz, Elizabeth; Denison, Stephanie; Gopnik, Alison; Griffiths, Thomas L
2014-11-01
People can behave in a way that is consistent with Bayesian models of cognition, despite the fact that performing exact Bayesian inference is computationally challenging. What algorithms could people be using to make this possible? We show that a simple sequential algorithm "Win-Stay, Lose-Sample", inspired by the Win-Stay, Lose-Shift (WSLS) principle, can be used to approximate Bayesian inference. We investigate the behavior of adults and preschoolers on two causal learning tasks to test whether people might use a similar algorithm. These studies use a "mini-microgenetic method", investigating how people sequentially update their beliefs as they encounter new evidence. Experiment 1 investigates a deterministic causal learning scenario and Experiments 2 and 3 examine how people make inferences in a stochastic scenario. The behavior of adults and preschoolers in these experiments is consistent with our Bayesian version of the WSLS principle. This algorithm provides both a practical method for performing Bayesian inference and a new way to understand people's judgments. Copyright © 2014 Elsevier Inc. All rights reserved.
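The algorithm itself is short. The sketch below uses discrete hypotheses, with one simple reading of the principle: stay with the current hypothesis with probability equal to the new datum's likelihood under it, and otherwise resample from the current posterior (the hypothesis space and likelihood function are illustrative):

```python
import random

def win_stay_lose_sample(hypotheses, prior, likelihood, data, rng=None):
    """Win-Stay, Lose-Sample: keep the current hypothesis on a "win"
    (with probability equal to the new datum's likelihood under it);
    on a "lose", resample a hypothesis from the posterior so far."""
    rng = rng or random.Random(0)

    def posterior_weights(seen):
        w = []
        for h in hypotheses:
            p = prior[h]
            for d in seen:
                p *= likelihood(h, d)
            w.append(p)
        return w

    current = rng.choices(hypotheses, weights=[prior[h] for h in hypotheses])[0]
    seen = []
    for d in data:
        seen.append(d)
        if rng.random() >= likelihood(current, d):  # "lose": resample
            current = rng.choices(hypotheses, weights=posterior_weights(seen))[0]
    return current
```

In a deterministic causal scenario (likelihoods of 0 or 1), a hypothesis contradicted by the data is abandoned immediately, and the learner settles on the consistent one, which is the behavior the experiments probe.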
Pressman, Alice R.; Avins, Andrew L.; Hubbard, Alan; Satariano, William A.
2011-07-01
Background: There is a paucity of literature comparing Bayesian analytic techniques with traditional approaches for analyzing clinical trials using real trial data. Methods: We compared Bayesian and frequentist group sequential methods using data from two published clinical trials. We chose two widely accepted frequentist rules, O'Brien-Fleming and Lan-DeMets, and conjugate Bayesian priors. Using the nonparametric bootstrap, we estimated a sampling distribution of stopping times for each method. Because current practice dictates the preservation of an experiment-wise false positive rate (Type I error), we approximated these error rates for our Bayesian and frequentist analyses with the posterior probability of detecting an effect in a simulated null sample. Thus, for the data-generated distribution represented by these trials, we were able to compare the relative performance of these techniques. Results: No final outcomes differed from those of the original trials. However, the timing of trial termination differed substantially by method and varied by trial. For one trial, group sequential designs of either type dictated early stopping of the study. In the other, stopping times were dependent upon the choice of spending function and prior distribution. Conclusions: Results indicate that trialists ought to consider Bayesian methods in addition to traditional approaches for analysis of clinical trials. Though findings from this small sample did not demonstrate either method to consistently outperform the other, they did suggest the need to replicate these comparisons using data from varied clinical trials in order to determine the conditions under which the different methods would be most efficient. PMID:21453792
Optimal Sequential Rules for Computer-Based Instruction.
ERIC Educational Resources Information Center
Vos, Hans J.
1998-01-01
Formulates sequential rules for adapting the appropriate amount of instruction to learning needs in the context of computer-based instruction. Topics include Bayesian decision theory, threshold and linear-utility structure, psychometric model, optimal sequential number of test questions, and an empirical example of sequential instructional…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Ray-Bing; Wang, Weichung; Jeff Wu, C. F.
2017-04-12
A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction of sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. Numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.
Signal Detection and Monitoring Based on Longitudinal Healthcare Data
Suling, Marc; Pigeot, Iris
2012-01-01
Post-marketing detection and surveillance of potential safety hazards are crucial tasks in pharmacovigilance. To uncover such safety risks, a wide set of techniques has been developed for spontaneous reporting data and, more recently, for longitudinal data. This paper gives a broad overview of the signal detection process and introduces some types of data sources typically used. The most commonly applied signal detection algorithms are presented, covering simple frequentist methods like the proportional reporting rate or the reporting odds ratio, more advanced Bayesian techniques for spontaneous and longitudinal data, e.g., the Bayesian Confidence Propagation Neural Network or the Multi-item Gamma-Poisson Shrinker, and methods developed for longitudinal data only, like the IC temporal pattern detection. Additionally, the problem of adjustment for underlying confounding is discussed, and the most common strategies to automatically identify false-positive signals are addressed. A drug monitoring technique based on Wald's sequential probability ratio test is presented. For each method, a real-life application is given, and a wide set of literature for further reading is referenced. PMID:24300373
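The two simple frequentist measures mentioned, the proportional reporting rate (often called the proportional reporting ratio, PRR) and the reporting odds ratio (ROR), reduce to ratios over a 2x2 table of report counts; a sketch with illustrative counts:

```python
def prr(a, b, c, d):
    """Proportional reporting ratio from the 2x2 spontaneous-report table:
    a = target drug & target event, b = target drug & other events,
    c = other drugs & target event, d = other drugs & other events."""
    return (a / (a + b)) / (c / (c + d))

def ror(a, b, c, d):
    """Reporting odds ratio from the same 2x2 table."""
    return (a / b) / (c / d)
```

A value well above 1 means the event is reported disproportionately often with the target drug, which is what flags a candidate signal for further review.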
Unmasking the masked Universe: the 2M++ catalogue through Bayesian eyes
NASA Astrophysics Data System (ADS)
Lavaux, Guilhem; Jasche, Jens
2016-01-01
This work describes a full Bayesian analysis of the Nearby Universe as traced by galaxies of the 2M++ survey. The analysis is run in two sequential steps. The first step self-consistently derives the luminosity-dependent galaxy biases, the power spectrum of matter fluctuations, and the matter density fields within a Gaussian statistics approximation. The second step makes a detailed analysis of the three-dimensional large-scale structures, assuming a fixed bias model and a fixed cosmology. This second step allows for the reconstruction of both the final density field and the initial conditions at z = 1000. From these, we derive fields that self-consistently extrapolate the observed large-scale structures. We give two examples of these extrapolations and their utility for the detection of structures: the visibility of the Sloan Great Wall, and the detection and characterization of the Local Void using DIVA, a Lagrangian-based technique to classify structures.
Decision Making and Learning while Taking Sequential Risks
ERIC Educational Resources Information Center
Pleskac, Timothy J.
2008-01-01
A sequential risk-taking paradigm used to identify real-world risk takers invokes both learning and decision processes. This article expands the paradigm to a larger class of tasks with different stochastic environments and different learning requirements. Generalizing a Bayesian sequential risk-taking model to the larger set of tasks clarifies…
A Bayesian approach to tracking patients having changing pharmacokinetic parameters
NASA Technical Reports Server (NTRS)
Bayard, David S.; Jelliffe, Roger W.
2004-01-01
This paper considers the updating of Bayesian posterior densities for pharmacokinetic models associated with patients having changing parameter values. For estimation purposes it is proposed to use the Interacting Multiple Model (IMM) estimation algorithm, which is currently a popular algorithm in the aerospace community for tracking maneuvering targets. The IMM algorithm is described, and compared to the multiple model (MM) and Maximum A-Posteriori (MAP) Bayesian estimation methods, which are presently used for posterior updating when pharmacokinetic parameters do not change. Both the MM and MAP Bayesian estimation methods are used in their sequential forms, to facilitate tracking of changing parameters. Results indicate that the IMM algorithm is well suited for tracking time-varying pharmacokinetic parameters in acutely ill and unstable patients, incurring only about half of the integrated error compared to the sequential MM and MAP methods on the same example.
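The key difference from static multiple-model (MM) updating is the IMM mixing step, which passes the model probabilities through a Markov switching matrix before each measurement update so that no model's probability collapses irrecoverably; a sketch of both steps for an illustrative two-model case (model probabilities only, omitting the per-model state filters):

```python
def mm_update(probs, likelihoods):
    """Static multiple-model (MM) Bayes update: reweight the model
    probabilities by the likelihood of the new measurement."""
    w = [p * l for p, l in zip(probs, likelihoods)]
    tot = sum(w)
    return [x / tot for x in w]

def imm_mix(probs, trans):
    """IMM mixing step: propagate the model probabilities through the
    Markov model-switching matrix trans[i][j] = P(model j | model i),
    keeping the filter responsive when the true parameters change."""
    n = len(probs)
    return [sum(trans[i][j] * probs[i] for i in range(n)) for j in range(n)]
```

Even when one model's probability has decayed to near zero, the mixing step restores a floor set by the switching probabilities, which is what lets the IMM track abrupt pharmacokinetic parameter changes that the sequential MM and MAP methods follow only slowly.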
A Bayesian Theory of Sequential Causal Learning and Abstract Transfer
ERIC Educational Resources Information Center
Lu, Hongjing; Rojas, Randall R.; Beckers, Tom; Yuille, Alan L.
2016-01-01
Two key research issues in the field of causal learning are how people acquire causal knowledge when observing data that are presented sequentially, and the level of abstraction at which learning takes place. Does sequential causal learning solely involve the acquisition of specific cause-effect links, or do learners also acquire knowledge about…
Biochemical transport modeling, estimation, and detection in realistic environments
NASA Astrophysics Data System (ADS)
Ortner, Mathias; Nehorai, Arye
2006-05-01
Early detection and estimation of the spread of a biochemical contaminant are major issues for homeland security applications. We present an integrated approach combining the measurements given by an array of biochemical sensors with a physical model of the dispersion and statistical analysis to solve these problems and provide system performance measures. We approximate the dispersion model of the contaminant in a realistic environment through numerical simulations of reflected stochastic diffusions describing the microscopic transport phenomena due to wind and chemical diffusion using the Feynman-Kac formula. We consider arbitrary complex geometries and account for wind turbulence. Localizing the dispersive sources is useful for decontamination purposes and estimation of the cloud evolution. To solve the associated inverse problem, we propose a Bayesian framework based on a random field that is particularly powerful for localizing multiple sources with small amounts of measurements. We also develop a sequential detector using the numerical transport model we propose. Sequential detection allows on-line analysis and the detection of whether a change has occurred. We first focus on the formulation of a suitable sequential detector that overcomes the presence of unknown parameters (e.g., release time, intensity, and location). We compute a bound on the expected delay before false detection in order to decide the threshold of the test. For a fixed false-alarm rate, we obtain the detection probability of a substance release as a function of its location and initial concentration. Numerical examples are presented for two real-world scenarios: an urban area and an indoor ventilation duct.
2012-05-30
Dafflon, B.; Barrash, W.
Fragmentary record (received 13 May 2011; revised 12 March 2012; accepted 17 April 2012) on simulated annealing-based or Bayesian sequential simulation approaches: withheld porosity logs are used to test the estimation process at two wells having locally variable stratigraphy, with borehole locations given at the bottom of each log comparison panel and contacts between Units 1 to 4 marked for comparison with stratigraphy at the BHRS.
Effective Online Bayesian Phylogenetics via Sequential Monte Carlo with Guided Proposals
Fourment, Mathieu; Claywell, Brian C; Dinh, Vu; McCoy, Connor; Matsen IV, Frederick A; Darling, Aaron E
2018-01-01
Modern infectious disease outbreak surveillance produces continuous streams of sequence data which require phylogenetic analysis as data arrives. Current software packages for Bayesian phylogenetic inference are unable to quickly incorporate new sequences as they become available, making them less useful for dynamically unfolding evolutionary stories. This limitation can be addressed by applying a class of Bayesian statistical inference algorithms called sequential Monte Carlo (SMC) to conduct online inference, wherein new data can be continuously incorporated to update the estimate of the posterior probability distribution. In this article, we describe and evaluate several different online phylogenetic sequential Monte Carlo (OPSMC) algorithms. We show that proposing new phylogenies with a density similar to the Bayesian prior suffers from poor performance, and we develop “guided” proposals that better match the proposal density to the posterior. Furthermore, we show that the simplest guided proposals can exhibit pathological behavior in some situations, leading to poor results, and that the situation can be resolved by heating the proposal density. The results demonstrate that relative to the widely used MCMC-based algorithm implemented in MrBayes, the total time required to compute a series of phylogenetic posteriors as sequences arrive can be significantly reduced by the use of OPSMC, without incurring a significant loss in accuracy. PMID:29186587
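The core SMC update that makes online inference possible, reweighting particles by the likelihood of newly arrived data and resampling when the effective sample size degenerates, can be sketched generically (the phylogenetic proposal machinery is omitted; the particle values and likelihood function are illustrative):

```python
import math
import random

def smc_step(particles, weights, log_like, rng=None):
    """One online-SMC update: reweight particles by the likelihood of
    the newly arrived data, then resample when the effective sample
    size drops below half the particle count."""
    rng = rng or random.Random(1)
    w = [wt * math.exp(log_like(p)) for p, wt in zip(particles, weights)]
    tot = sum(w)
    w = [x / tot for x in w]
    ess = 1.0 / sum(x * x for x in w)  # effective sample size
    if ess < 0.5 * len(particles):
        particles = rng.choices(particles, weights=w, k=len(particles))
        w = [1.0 / len(particles)] * len(particles)
    return particles, w
```

Each arriving sequence triggers one such step over the particle population of trees, which is why the posterior can be kept current without rerunning a full MCMC chain.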
Bayesian randomized clinical trials: From fixed to adaptive design.
Yin, Guosheng; Lam, Chi Kin; Shi, Haolun
2017-08-01
Randomized controlled studies are the gold standard for phase III clinical trials. Using α-spending functions to control the overall type I error rate, group sequential methods are well established and have dominated phase III studies. Bayesian randomized design, on the other hand, can be viewed as a complement to, rather than a competitor of, the frequentist methods. For the fixed Bayesian design, the hypothesis testing can be cast in the posterior probability or Bayes factor framework, which has a direct link to the frequentist type I error rate. Bayesian group sequential design relies upon Bayesian decision-theoretic approaches based on backward induction, which is often computationally intensive. Compared with the frequentist approaches, Bayesian methods have several advantages. The posterior predictive probability serves as a useful and convenient tool for trial monitoring, and can be updated at any time as the data accrue during the trial. The Bayesian decision-theoretic framework possesses a direct link to decision making in the practical setting, and can be modeled more realistically to reflect the actual cost-benefit analysis during the drug development process. Other merits include the possibility of hierarchical modeling and the use of informative priors, which lead to a more comprehensive utilization of information from both historical and longitudinal data. From fixed to adaptive design, we focus on Bayesian randomized controlled clinical trials and make extensive comparisons with frequentist counterparts through numerical studies. Copyright © 2017 Elsevier Inc. All rights reserved.
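The Bayesian monitoring quantity is easy to sketch for a binary endpoint with a conjugate Beta prior: the posterior probability that the response rate exceeds a target, recomputed whenever data accrue. The prior, target rate, and cutoffs below are illustrative, and the Beta tail probability is integrated numerically with the stdlib:

```python
import math

def beta_tail(a, b, theta0, steps=20000):
    """P(theta > theta0) under a Beta(a, b) posterior, by midpoint
    integration of the density (stdlib only)."""
    log_B = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    h = (1.0 - theta0) / steps
    total = 0.0
    for i in range(steps):
        t = theta0 + (i + 0.5) * h
        total += math.exp((a - 1.0) * math.log(t)
                          + (b - 1.0) * math.log(1.0 - t) - log_B) * h
    return total

def monitor(successes, n, a=1.0, b=1.0, theta0=0.5):
    """Posterior probability that the response rate exceeds theta0,
    recomputed each time the data accrue -- the kind of quantity a
    Bayesian monitor checks at (or between) interim looks."""
    return beta_tail(a + successes, b + n - successes, theta0)
```

Because the posterior is a coherent summary at any sample size, this quantity can legitimately be examined continuously, which is the monitoring convenience the abstract highlights.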
Friston, Karl J.; Dolan, Raymond J.
2017-01-01
Normative models of human cognition often appeal to Bayesian filtering, which provides optimal online estimates of unknown or hidden states of the world, based on previous observations. However, in many cases it is necessary to optimise beliefs about sequences of states rather than just the current state. Importantly, Bayesian filtering and sequential inference strategies make different predictions about beliefs and subsequent choices, rendering them behaviourally dissociable. Taking data from a probabilistic reversal task we show that subjects’ choices provide strong evidence that they are representing short sequences of states. Between-subject measures of this implicit sequential inference strategy had a neurobiological underpinning and correlated with grey matter density in prefrontal and parietal cortex, as well as the hippocampus. Our findings provide, to our knowledge, the first evidence for sequential inference in human cognition, and by exploiting between-subject variation in this measure we provide pointers to its neuronal substrates. PMID:28486504
ERIC Educational Resources Information Center
Si, Yajuan; Reiter, Jerome P.
2013-01-01
In many surveys, the data comprise a large number of categorical variables that suffer from item nonresponse. Standard methods for multiple imputation, like log-linear models or sequential regression imputation, can fail to capture complex dependencies and can be difficult to implement effectively in high dimensions. We present a fully Bayesian,…
NASA Astrophysics Data System (ADS)
Kopka, P.; Wawrzynczak, A.; Borysiewicz, M.
2015-09-01
In many areas of application, a central problem is the solution of an inverse problem, especially the estimation of unknown model parameters so that the underlying dynamics of a physical system can be modeled precisely. In this situation, Bayesian inference is a powerful tool for combining observed data with prior knowledge to obtain the probability distribution of the searched parameters. We have applied the modern methodology named Sequential Approximate Bayesian Computation (S-ABC) to the problem of tracing an atmospheric contaminant source. ABC is a technique commonly used in the Bayesian analysis of complex models and dynamic systems. Sequential methods can significantly increase the efficiency of the ABC. In the presented algorithm, the input data are the on-line arriving concentrations of the released substance registered by a distributed sensor network from the OVER-LAND ATMOSPHERIC DISPERSION (OLAD) experiment. The algorithm outputs are the probability distributions of the contamination source parameters, i.e., its location, release rate, speed and direction of movement, start time and duration. The stochastic approach presented in this paper is completely general and can be used in other fields where the parameters of a model best fitted to the observable data must be found.
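A toy version of sequential ABC conveys the idea: propose from the current population, simulate, and accept when the simulated summary statistic falls within a shrinking tolerance of the observed one. The Gaussian toy model, perturbation kernel, and tolerance schedule below are illustrative assumptions (a full ABC-SMC would also carry importance weights), not the source-term model of the paper.

```python
import random

def sequential_abc(observed, simulate, prior_sample, tolerances,
                   n_particles=200, seed=0):
    """Simplified sequential ABC: at each tolerance level, propose by
    perturbing a member of the previous population and accept when the
    simulated summary is within tolerance of the observed summary."""
    rng = random.Random(seed)
    particles = [prior_sample(rng) for _ in range(n_particles)]
    for eps in tolerances:
        accepted = []
        while len(accepted) < n_particles:
            theta = rng.choice(particles) + rng.gauss(0.0, 0.2)  # perturb
            if abs(simulate(theta, rng) - observed) < eps:
                accepted.append(theta)
        particles = accepted
    return particles

# Toy inverse problem: recover a Gaussian mean from its sample mean.
def simulate(theta, rng):
    return sum(rng.gauss(theta, 1.0) for _ in range(20)) / 20

def prior_sample(rng):
    return rng.uniform(-5.0, 5.0)

posterior = sequential_abc(2.0, simulate, prior_sample,
                           tolerances=[2.0, 1.0, 0.5])
estimate = sum(posterior) / len(posterior)
```

Shrinking the tolerance sequence is what makes the sequential variant more efficient than one-shot rejection ABC: each level starts from a population already concentrated near the data.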
Contamination Event Detection with Multivariate Time-Series Data in Agricultural Water Monitoring †
Mao, Yingchi; Qi, Hai; Ping, Ping; Li, Xiaofang
2017-01-01
Time series data of multiple water quality parameters are obtained from water sensor networks deployed in the agricultural water supply network. When pollution occurs, the accurate and efficient detection of contamination events, and timely warning to prevent the pollution from spreading, are among the most important issues. In order to comprehensively reduce event detection deviation, a spatial–temporal-based event detection approach with multivariate time-series data for water quality monitoring (M-STED) was proposed. The M-STED approach includes three parts. First, M-STED adopts a Rule K algorithm to select backbone nodes, which form the connected dominating set (CDS) and forward the sensed data of multiple water parameters. Second, the state of each backbone node in the current timestamp is determined with back-propagation neural network models and sequential Bayesian analysis. Third, a spatial model with Bayesian networks is established to estimate the state of the backbones in the next timestamp and trace the "outlier" node to its neighborhoods to detect a contamination event. The experimental results indicate that the average detection rate is more than 80% with M-STED and the false detection rate is lower than 9%. The M-STED approach can improve the rate of detection by about 40% and reduce the false alarm rate by about 45%, compared with S-STED, an event detection algorithm using a single water parameter. Moreover, the proposed M-STED exhibits better performance in terms of detection delay and scalability. PMID:29207535
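The sequential Bayesian analysis used to decide a node's state can be illustrated with a recursive Bayes update over a binary "contamination event" hypothesis. This is a minimal sketch, not the M-STED implementation; the likelihood values are illustrative assumptions.

```python
def sequential_bayes_update(prior, likelihood_event, likelihood_normal,
                            observations):
    """Recursively update P(event) as sensor classifications arrive.
    likelihood_event / likelihood_normal map an observation label to
    P(obs | event) and P(obs | normal), respectively."""
    p = prior
    history = []
    for obs in observations:
        num = likelihood_event[obs] * p
        den = num + likelihood_normal[obs] * (1.0 - p)
        p = num / den          # posterior becomes the next prior
        history.append(p)
    return history
```

With a 1% prior and three consecutive "alarm" readings (true-positive rate 0.9, false-positive rate 0.05), the event probability climbs well past 0.9, which is the sense in which sequential evidence accumulation suppresses single-sample false alarms.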
Environmentally adaptive processing for shallow ocean applications: A sequential Bayesian approach.
Candy, J V
2015-09-01
The shallow ocean is a changing environment primarily due to temperature variations in its upper layers directly affecting sound propagation throughout. The need to develop processors capable of tracking these changes implies a stochastic as well as an environmentally adaptive design. Bayesian techniques have evolved to enable a class of processors capable of performing in such an uncertain, nonstationary (varying statistics), non-Gaussian, variable shallow ocean environment. A solution to this problem is addressed by developing a sequential Bayesian processor capable of providing a joint solution to the modal function tracking and environmental adaptivity problem. Here, the focus is on the development of both a particle filter and an unscented Kalman filter capable of providing reasonable performance for this problem. These processors are applied to hydrophone measurements obtained from a vertical array. The adaptivity problem is attacked by allowing the modal coefficients and/or wavenumbers to be jointly estimated from the noisy measurement data along with tracking of the modal functions while simultaneously enhancing the noisy pressure-field measurements.
Sequential bearings-only-tracking initiation with particle filtering method.
Liu, Bin; Hao, Chengpeng
2013-01-01
The tracking initiation problem is examined in the context of autonomous bearings-only-tracking (BOT) of a single appearing/disappearing target in the presence of clutter measurements. In general, this problem suffers from a combinatorial explosion in the number of potential tracks resulting from the uncertainty in the linkage between the target and the measurement (a.k.a. the data association problem). In addition, the nonlinear measurements lead to a non-Gaussian posterior probability density function (pdf) in the optimal Bayesian sequential estimation framework. The consequence of this nonlinear/non-Gaussian context is the absence of a closed-form solution. This paper models the linkage uncertainty and the nonlinear/non-Gaussian estimation problem jointly within a solid Bayesian formalism. A particle filtering (PF) algorithm is derived for estimating the model's parameters in a sequential manner. Numerical results show that the proposed solution provides a significant benefit over the most commonly used methods, IPDA and IMMPDA. The posterior Cramér-Rao bounds are also used for performance evaluation.
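The bootstrap particle filter at the heart of such sequential estimators can be sketched on a deliberately simple model. The 1-D random-walk state and linear Gaussian measurement below are illustrative assumptions, not the nonlinear bearings-only model of the paper, but the propagate–weight–resample cycle is the same machinery.

```python
import math
import random

def bootstrap_pf(observations, n_particles=500, q=0.1, r=0.5, seed=1):
    """Bootstrap particle filter for x_t = x_{t-1} + N(0, q^2),
    observed as y_t = x_t + N(0, r^2). Returns the posterior mean
    of the state after each observation."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in observations:
        # Propagate each particle through the motion model.
        particles = [x + rng.gauss(0.0, q) for x in particles]
        # Weight by the Gaussian measurement likelihood.
        weights = [math.exp(-0.5 * ((y - x) / r) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Stratified resampling keeps the particle count fixed.
        cum, acc = [], 0.0
        for w in weights:
            acc += w
            cum.append(acc)
        resampled, i = [], 0
        for k in range(n_particles):
            u = (k + rng.random()) / n_particles
            while i < n_particles - 1 and cum[i] < u:
                i += 1
            resampled.append(particles[i])
        particles = resampled
        means.append(sum(particles) / n_particles)
    return means
```

Because the weighting step accepts any likelihood function, the same loop handles the non-Gaussian posteriors that rule out closed-form (Kalman-type) solutions.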
Chan, Cheng Leng; Rudrappa, Sowmya; Ang, Pei San; Li, Shu Chuen; Evans, Stephen J W
2017-08-01
The ability to detect safety concerns from spontaneous adverse drug reaction reports in a timely and efficient manner remains important in public health. This paper explores the behaviour of the Sequential Probability Ratio Test (SPRT) and its ability to detect signals of disproportionate reporting (SDRs) in the Singapore context. We used SPRT with a combination of two hypothesised relative risks (hRRs) of 2 and 4.1 to detect signals of both common and rare adverse events in our small database. We compared SPRT with other methods in terms of the number of signals detected and whether labelled adverse drug reactions were detected or the reaction terms were considered serious. The other methods used were the reporting odds ratio (ROR), Bayesian Confidence Propagation Neural Network (BCPNN) and Gamma Poisson Shrinker (GPS). The SPRT produced 2187 signals in common with all methods, 268 unique signals, and 70 signals in common with at least one other method; it did not produce signals in 178 cases where two other methods detected them, and there were 403 signals unique to one of the other methods. In terms of sensitivity, ROR performed better than the other methods, but the SPRT method found more new signals. The performances of the methods were similar for negative predictive value and specificity. Using a combination of hRRs for SPRT could be a useful screening tool for regulatory agencies, and more detailed investigation of the medical utility of the system is merited.
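A minimal Wald-style SPRT for a Poisson-distributed report count can be sketched as follows; the hypothesised relative risk plays the role of the hRR above. The error rates and the simple one-shot decision rule are illustrative assumptions, not the configuration used by the authors.

```python
import math

def sprt_poisson(observed, expected, h_rr, alpha=0.05, beta=0.20):
    """Wald SPRT for a Poisson count: H0 rate = expected versus
    H1 rate = h_rr * expected. Returns the current decision."""
    # Log-likelihood ratio of H1 vs H0 for a Poisson observation.
    llr = observed * math.log(h_rr) - expected * (h_rr - 1.0)
    upper = math.log((1.0 - beta) / alpha)   # accept H1 (signal)
    lower = math.log(beta / (1.0 - alpha))   # accept H0 (no signal)
    if llr >= upper:
        return "signal"
    if llr <= lower:
        return "no signal"
    return "continue"
```

In surveillance use the test is re-evaluated as reports accumulate, so "continue" simply means the database has not yet accrued enough evidence either way.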
A Bayesian Theory of Sequential Causal Learning and Abstract Transfer.
Lu, Hongjing; Rojas, Randall R; Beckers, Tom; Yuille, Alan L
2016-03-01
Two key research issues in the field of causal learning are how people acquire causal knowledge when observing data that are presented sequentially, and the level of abstraction at which learning takes place. Does sequential causal learning solely involve the acquisition of specific cause-effect links, or do learners also acquire knowledge about abstract causal constraints? Recent empirical studies have revealed that experience with one set of causal cues can dramatically alter subsequent learning and performance with entirely different cues, suggesting that learning involves abstract transfer, and such transfer effects involve sequential presentation of distinct sets of causal cues. It has been demonstrated that pre-training (or even post-training) can modulate classic causal learning phenomena such as forward and backward blocking. To account for these effects, we propose a Bayesian theory of sequential causal learning. The theory assumes that humans are able to consider and use several alternative causal generative models, each instantiating a different causal integration rule. Model selection is used to decide which integration rule to use in a given learning environment in order to infer causal knowledge from sequential data. Detailed computer simulations demonstrate that humans rely on the abstract characteristics of outcome variables (e.g., binary vs. continuous) to select a causal integration rule, which in turn alters causal learning in a variety of blocking and overshadowing paradigms. When the nature of the outcome variable is ambiguous, humans select the model that yields the best fit with the recent environment, and then apply it to subsequent learning tasks. Based on sequential patterns of cue-outcome co-occurrence, the theory can account for a range of phenomena in sequential causal learning, including various blocking effects, primacy effects in some experimental conditions, and apparently abstract transfer of causal knowledge. 
Copyright © 2015 Cognitive Science Society, Inc.
Aksoy, Ozan; Weesie, Jeroen
2014-05-01
In this paper, using a within-subjects design, we estimate the utility weights that subjects attach to the outcome of their interaction partners in four decision situations: (1) binary Dictator Games (DG), second player's role in the sequential Prisoner's Dilemma (PD) after the first player (2) cooperated and (3) defected, and (4) first player's role in the sequential Prisoner's Dilemma game. We find that the average weights in these four decision situations have the following order: (1)>(2)>(4)>(3). Moreover, the average weight is positive in (1) but negative in (2), (3), and (4). Our findings indicate the existence of strong negative and small positive reciprocity for the average subject, but there is also high interpersonal variation in the weights in these four nodes. We conclude that the PD frame makes subjects more competitive than the DG frame. Using hierarchical Bayesian modeling, we simultaneously analyze beliefs of subjects about others' utility weights in the same four decision situations. We compare several alternative theoretical models on beliefs, e.g., rational beliefs (Bayesian-Nash equilibrium) and a consensus model. Our results on beliefs strongly support the consensus effect and refute rational beliefs: there is a strong relationship between own preferences and beliefs and this relationship is relatively stable across the four decision situations. Copyright © 2014 Elsevier Inc. All rights reserved.
Temporally consistent probabilistic detection of new multiple sclerosis lesions in brain MRI.
Elliott, Colm; Arnold, Douglas L; Collins, D Louis; Arbel, Tal
2013-08-01
Detection of new Multiple Sclerosis (MS) lesions on magnetic resonance imaging (MRI) is important as a marker of disease activity and as a potential surrogate for relapses. We propose an approach where sequential scans are jointly segmented, to provide a temporally consistent tissue segmentation while remaining sensitive to newly appearing lesions. The method uses a two-stage classification process: 1) a Bayesian classifier provides a probabilistic brain tissue classification at each voxel of reference and follow-up scans, and 2) a random-forest based lesion-level classification provides a final identification of new lesions. Generative models are learned based on 364 scans from 95 subjects from a multi-center clinical trial. The method is evaluated on sequential brain MRI of 160 subjects from a separate multi-center clinical trial, and is compared to 1) semi-automatically generated ground truth segmentations and 2) fully manual identification of new lesions generated independently by nine expert raters on a subset of 60 subjects. For new lesions greater than 0.15 cc in size, the classifier has near perfect performance (99% sensitivity, 2% false detection rate), as compared to ground truth. The proposed method was also shown to exceed the performance of any one of the nine expert manual identifications.
Automated Monitoring with a BSP Fault-Detection Test
NASA Technical Reports Server (NTRS)
Bickford, Randall L.; Herzog, James P.
2003-01-01
The figure schematically illustrates a method and procedure for automated monitoring of an asset, as well as a hardware- and-software system that implements the method and procedure. As used here, asset could signify an industrial process, power plant, medical instrument, aircraft, or any of a variety of other systems that generate electronic signals (e.g., sensor outputs). In automated monitoring, the signals are digitized and then processed in order to detect faults and otherwise monitor operational status and integrity of the monitored asset. The major distinguishing feature of the present method is that the fault-detection function is implemented by use of a Bayesian sequential probability (BSP) technique. This technique is superior to other techniques for automated monitoring because it affords sensitivity, not only to disturbances in the mean values, but also to very subtle changes in the statistical characteristics (variance, skewness, and bias) of the monitored signals.
Mining patterns in persistent surveillance systems with smart query and visual analytics
NASA Astrophysics Data System (ADS)
Habibi, Mohammad S.; Shirkhodaie, Amir
2013-05-01
In Persistent Surveillance Systems (PSS), the ability to detect and characterize events geospatially helps analysts take pre-emptive steps to counter an adversary's actions. The interactive visual analytics (VA) model offers a platform for pattern investigation and reasoning to comprehend and/or predict such occurrences. Identifying and offsetting these threats requires collecting information from diverse sources, which brings with it increasingly abstract data. These abstract semantic data have a degree of inherent uncertainty and imprecision, and require filtration before being processed further. In this paper, we introduce an approach based on the Vector Space Modeling (VSM) technique for classification of spatiotemporal sequential patterns of group activities. The feature vectors consist of an array of attributes extracted from semantically annotated messages generated by the sensors. To facilitate proper similarity matching and detection of time-varying spatiotemporal patterns, a temporal Dynamic Time Warping (DTW) method with a Gaussian Mixture Model (GMM) fitted by Expectation Maximization (EM) is introduced. DTW is intended for detection of event patterns from neighborhood-proximity semantic frames derived from an established ontology. GMM with EM, on the other hand, is employed as a Bayesian probabilistic model to estimate the probability of events associated with a detected spatiotemporal pattern. We also present a new visual analytic tool for testing and evaluating group activities detected under this control scheme. Experimental results demonstrate the effectiveness of the proposed approach for discovery and matching of subsequences within the sequentially generated pattern space of our experiments.
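The dynamic time warping step can be illustrated with the classic O(nm) recurrence. This is a generic DTW sketch over numeric sequences with absolute difference as the local cost, not the semantic-frame variant described above.

```python
def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two numeric
    sequences, using absolute difference as the local cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    # d[i][j] = minimal cumulative cost aligning a[:i] with b[:j].
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]
```

Because the warping path may repeat elements, a sequence and a time-stretched copy of it score a distance of zero, which is what makes DTW suitable for time-varying pattern matching.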
EEG-fMRI Bayesian framework for neural activity estimation: a simulation study
NASA Astrophysics Data System (ADS)
Croce, Pierpaolo; Basti, Alessio; Marzetti, Laura; Zappasodi, Filippo; Del Gratta, Cosimo
2016-12-01
Objective. Due to the complementary nature of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), and given the possibility of simultaneous acquisition, the joint data analysis can afford a better understanding of the underlying neural activity estimation. In this simulation study we want to show the benefit of the joint EEG-fMRI neural activity estimation in a Bayesian framework. Approach. We built a dynamic Bayesian framework in order to perform joint EEG-fMRI neural activity time course estimation. The neural activity is originated by a given brain area and detected by means of both measurement techniques. We have chosen a resting state neural activity situation to address the worst case in terms of the signal-to-noise ratio. To infer information by EEG and fMRI concurrently we used a tool belonging to the sequential Monte Carlo (SMC) methods: the particle filter (PF). Main results. First, despite a high computational cost, we showed the feasibility of such an approach. Second, we obtained an improvement in neural activity reconstruction when using both EEG and fMRI measurements. Significance. The proposed simulation shows the improvements in neural activity reconstruction with EEG-fMRI simultaneous data. The application of such an approach to real data allows a better comprehension of the neural dynamics.
NASA Technical Reports Server (NTRS)
Sankararaman, Shankar
2016-01-01
This paper presents a computational framework for uncertainty characterization and propagation, and sensitivity analysis under the presence of aleatory and epistemic uncertainty, and develops a rigorous methodology for efficient refinement of epistemic uncertainty by identifying important epistemic variables that significantly affect the overall performance of an engineering system. The proposed methodology is illustrated using the NASA Langley Uncertainty Quantification Challenge (NASA-LUQC) problem that deals with uncertainty analysis of a generic transport model (GTM). First, Bayesian inference is used to infer subsystem-level epistemic quantities using the subsystem-level model and corresponding data. Second, tools of variance-based global sensitivity analysis are used to identify four important epistemic variables (this limitation specified in the NASA-LUQC is reflective of practical engineering situations where not all epistemic variables can be refined due to time/budget constraints) that significantly affect system-level performance. The most significant contribution of this paper is the development of the sequential refinement methodology, where epistemic variables for refinement are not identified all at once. Instead, only one variable is first identified; then, Bayesian inference and global sensitivity calculations are repeated to identify the next important variable. This procedure is continued until all four variables are identified and the refinement in the system-level performance is computed. The advantages of the proposed sequential refinement methodology over the all-at-once uncertainty refinement approach are explained, and the methodology is then applied to the NASA Langley Uncertainty Quantification Challenge problem.
Update on Bayesian Blocks: Segmented Models for Sequential Data
NASA Technical Reports Server (NTRS)
Scargle, Jeff
2017-01-01
The Bayesian Blocks algorithm, in wide use in astronomy and other areas, has been improved in several ways. The model for block shape has been generalized to include models other than a constant signal rate, e.g., linear, exponential, or other parametric models. In addition, the computational efficiency has been improved, so that instead of O(N^2) the basic algorithm is O(N) in most cases. Other improvements in the theory and application of segmented representations will be described.
Sequential Inverse Problems Bayesian Principles and the Logistic Map Example
NASA Astrophysics Data System (ADS)
Duan, Lian; Farmer, Chris L.; Moroz, Irene M.
2010-09-01
Bayesian statistics provides a general framework for solving inverse problems, but is not without interpretation and implementation problems. This paper discusses difficulties arising from the fact that forward models are always in error to some extent. Using a simple example based on the one-dimensional logistic map, we argue that, when implementation problems are minimal, the Bayesian framework is quite adequate. In this paper the Bayesian Filter is shown to be able to recover excellent state estimates in the perfect model scenario (PMS) and to distinguish the PMS from the imperfect model scenario (IMS). Through a quantitative comparison of the way in which the observations are assimilated in both the PMS and the IMS scenarios, we suggest that one can, sometimes, measure the degree of imperfection.
Studies in Astronomical Time Series Analysis. VI. Bayesian Block Representations
NASA Technical Reports Server (NTRS)
Scargle, Jeffrey D.; Norris, Jay P.; Jackson, Brad; Chiang, James
2013-01-01
This paper addresses the problem of detecting and characterizing local variability in time series and other forms of sequential data. The goal is to identify and characterize statistically significant variations, at the same time suppressing the inevitable corrupting observational errors. We present a simple nonparametric modeling technique and an algorithm implementing it, an improved and generalized version of Bayesian Blocks [Scargle 1998], that finds the optimal segmentation of the data in the observation interval. The structure of the algorithm allows it to be used in either a real-time trigger mode, or a retrospective mode. Maximum likelihood or marginal posterior functions to measure model fitness are presented for events, binned counts, and measurements at arbitrary times with known error distributions. Problems addressed include those connected with data gaps, variable exposure, extension to piecewise linear and piecewise exponential representations, multivariate time series data, analysis of variance, data on the circle, other data modes, and dispersed data. Simulations provide evidence that the detection efficiency for weak signals is close to a theoretical asymptotic limit derived by [Arias-Castro, Donoho and Huo 2003]. In the spirit of Reproducible Research [Donoho et al. (2008)] all of the code and data necessary to reproduce all of the figures in this paper are included as auxiliary material.
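The core of Bayesian Blocks is a dynamic program over all possible positions of the last change point. A sketch for binned counts, assuming a piecewise-constant rate with fitness N ln(N/T) per block and a fixed prior penalty per block (both simplifications of the paper's formulation), is:

```python
import math

def bayesian_blocks_binned(counts, ncp_prior=4.0):
    """O(N^2) optimal-partition dynamic program for binned counts
    under a piecewise-constant rate model. Returns the sorted left
    edges (bin indices) of the optimal blocks."""
    n = len(counts)
    prefix = [0.0] * (n + 1)           # prefix sums -> O(1) block fitness
    for i, c in enumerate(counts):
        prefix[i + 1] = prefix[i] + c

    def fitness(i, j):                 # block covering bins i..j-1
        total = prefix[j] - prefix[i]
        width = j - i
        return total * math.log(total / width) if total > 0 else 0.0

    best = [0.0] * (n + 1)             # best[j]: optimum for first j bins
    last = [0] * (n + 1)               # start of the last block
    for j in range(1, n + 1):
        best[j], last[j] = max(
            (best[i] + fitness(i, j) - ncp_prior, i) for i in range(j)
        )
    edges, j = [], n                   # backtrack to recover change points
    while j > 0:
        edges.append(last[j])
        j = last[j]
    return sorted(edges)
```

The `last` array is what permits both modes mentioned in the abstract: in retrospective mode one backtracks once at the end, while in trigger mode one watches `last[j]` move as each new bin arrives.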
NASA Technical Reports Server (NTRS)
Chapman, G. M. (Principal Investigator); Carnes, J. G.
1981-01-01
Several techniques which use clusters generated by a new clustering algorithm, CLASSY, are proposed as alternatives to random sampling to obtain greater precision in crop proportion estimation: (1) Proportional Allocation/Relative Count Estimator (PA/RCE) uses proportional allocation of dots to clusters on the basis of cluster size and a relative count cluster-level estimate; (2) Proportional Allocation/Bayes Estimator (PA/BE) uses proportional allocation of dots to clusters and a Bayesian cluster-level estimate; and (3) Bayes Sequential Allocation/Bayes Estimator (BSA/BE) uses sequential allocation of dots to clusters and a Bayesian cluster-level estimate. Clustering is an effective method for making proportion estimates. It is estimated that, to obtain the same precision with random sampling as obtained by the proportional sampling of 50 dots with an unbiased estimator, samples of 85 or 166 would need to be taken if dot sets with AI labels (integrated procedure) or ground truth labels, respectively, were input. Dot reallocation provides dot sets that are unbiased. It is recommended that these proportion estimation techniques be maintained, particularly the PA/BE because it provides the greatest precision.
NASA Astrophysics Data System (ADS)
Umehara, Hiroaki; Okada, Masato; Naruse, Yasushi
2018-03-01
The estimation of angular time series data is a widespread issue relating to various situations involving rotational motion and moving objects. There are two kinds of problem settings: the estimation of wrapped angles, which are principal values in a circular coordinate system (e.g., the direction of an object), and the estimation of unwrapped angles in an unbounded coordinate system such as for the positioning and tracking of moving objects measured by the signal-wave phase. Wrapped angles have been estimated in previous studies by sequential Bayesian filtering; however, the hyperparameters that are to be solved and that control the properties of the estimation model were given a priori. The present study establishes a procedure of hyperparameter estimation from the observation data of angles only, using the framework of Bayesian inference completely as the maximum likelihood estimation. Moreover, the filter model is modified to estimate the unwrapped angles. It is proved that without noise our model reduces to the existing algorithm of Itoh's unwrapping transform. It is numerically confirmed that our model is an extension of unwrapping estimation from Itoh's unwrapping transform to the case with noise.
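In the noise-free limit the model reduces to Itoh's unwrapping transform, which can be stated in a few lines: integrate the wrapped first differences after mapping each into [-π, π). This is a sketch of that classical algorithm, not of the paper's Bayesian filter.

```python
import math

def unwrap(phases):
    """Itoh's unwrapping transform: accumulate wrapped phase
    differences, each mapped into [-pi, pi). Exact whenever the true
    per-sample phase increment stays below pi in magnitude."""
    out = [phases[0]]
    for p_prev, p in zip(phases, phases[1:]):
        d = p - p_prev
        d = (d + math.pi) % (2.0 * math.pi) - math.pi  # wrap increment
        out.append(out[-1] + d)
    return out
```

The Bayesian extension in the paper matters precisely because measurement noise can push an observed increment past π, where this deterministic rule makes an unrecoverable 2π error.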
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konomi, Bledar A.; Karagiannis, Georgios; Sarkar, Avik
2014-05-16
Computer experiments (numerical simulations) are widely used in scientific research to study and predict the behavior of complex systems, which usually have responses consisting of a set of distinct outputs. High-resolution simulations are often computationally expensive and become impractical for parametric studies at different input values. To overcome these difficulties we develop a Bayesian treed multivariate Gaussian process (BTMGP) as an extension of the Bayesian treed Gaussian process (BTGP) in order to model and evaluate a multivariate process. A suitable choice of covariance function and the prior distributions facilitates the different Markov chain Monte Carlo (MCMC) movements. We utilize this model to sequentially sample the input space for the most informative values, taking into account model uncertainty and expertise gained. A simulation study demonstrates the use of the proposed method and compares it with alternative approaches. We apply the sequential sampling technique and BTMGP to model the multiphase flow in a full-scale regenerator of a carbon capture unit. The application presented in this paper is an important tool for research into carbon dioxide emissions from thermal power plants.
The multicategory case of the sequential Bayesian pixel selection and estimation procedure
NASA Technical Reports Server (NTRS)
Pore, M. D.; Dennis, T. B. (Principal Investigator)
1980-01-01
A Bayesian technique for stratified proportion estimation and a sampling scheme based on minimizing the mean squared error of this estimator were developed and tested on LANDSAT multispectral scanner data, using the beta density function to model the prior distribution in the two-class case. An extension of this procedure to the k-class case is considered. A generalization of the beta density is shown to be a density function for the general case, which allows the procedure to be extended.
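The k-class generalization of the beta prior is the Dirichlet, whose posterior mean under multinomial counts is immediate. A minimal sketch (the uniform prior used in the example is an assumption for illustration):

```python
def dirichlet_posterior_mean(prior_alpha, counts):
    """Posterior mean of class proportions under a Dirichlet prior,
    the k-class generalization of the beta prior: each class gets
    (alpha_i + n_i) / (sum of all alphas and counts)."""
    post = [a + n for a, n in zip(prior_alpha, counts)]
    total = sum(post)
    return [p / total for p in post]

# Three classes, uniform Dirichlet(1, 1, 1) prior, labeled dot counts.
proportions = dirichlet_posterior_mean([1.0, 1.0, 1.0], [7, 2, 0])
```

Conjugacy is what makes the sequential dot-allocation practical: after each labeled dot the posterior stays Dirichlet, so the update is a single increment of one count.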
NASA Astrophysics Data System (ADS)
Croce, Pierpaolo; Zappasodi, Filippo; Merla, Arcangelo; Chiarelli, Antonio Maria
2017-08-01
Objective. Electrical and hemodynamic brain activity are linked through the neurovascular coupling process and they can be simultaneously measured through integration of electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS). Thanks to the lack of electro-optical interference, the two procedures can be easily combined and, whereas EEG provides electrophysiological information, fNIRS can provide measurements of two hemodynamic variables, such as oxygenated and deoxygenated hemoglobin. A Bayesian sequential Monte Carlo approach (particle filter, PF) was applied to simulated recordings of electrical and neurovascular mediated hemodynamic activity, and the advantages of a unified framework were shown. Approach. Multiple neural activities and hemodynamic responses were simulated in the primary motor cortex of a subject brain. EEG and fNIRS recordings were obtained by means of forward models of volume conduction and light propagation through the head. A state space model of combined EEG and fNIRS data was built and its dynamic evolution was estimated through a Bayesian sequential Monte Carlo approach (PF). Main results. We showed the feasibility of the procedure and the improvements in both electrical and hemodynamic brain activity reconstruction when using the PF on combined EEG and fNIRS measurements. Significance. The investigated procedure allows one to combine the information provided by the two methodologies, and, by taking advantage of a physical model of the coupling between electrical and hemodynamic response, to obtain a better estimate of brain activity evolution. Despite the high computational demand, application of such an approach to in vivo recordings could fully exploit the advantages of this combined brain imaging technology.
Tenan, Matthew S; Tweedell, Andrew J; Haynes, Courtney A
2017-01-01
The timing of muscle activity is a commonly applied analytic method to understand how the nervous system controls movement. This study systematically evaluates six classes of standard and statistical algorithms to determine muscle onset in both experimental surface electromyography (EMG) and simulated EMG with a known onset time. Eighteen participants had EMG collected from the biceps brachii and vastus lateralis while performing a biceps curl or knee extension, respectively. Three established methods and three statistical methods for EMG onset were evaluated. Linear envelope, Teager-Kaiser energy operator + linear envelope and sample entropy were the established methods evaluated while general time series mean/variance, sequential and batch processing of parametric and nonparametric tools, and Bayesian changepoint analysis were the statistical techniques used. Visual EMG onset (experimental data) and objective EMG onset (simulated data) were compared with algorithmic EMG onset via root mean square error and linear regression models for stepwise elimination of inferior algorithms. The top algorithms for both data types were analyzed for their mean agreement with the gold standard onset and evaluation of 95% confidence intervals. The top algorithms were all Bayesian changepoint analysis iterations where the parameter of the prior (p0) was zero. The best performing Bayesian algorithms were p0 = 0 and a posterior probability for onset determination at 60-90%. While existing algorithms performed reasonably, the Bayesian changepoint analysis methodology provides greater reliability and accuracy when determining the singular onset of EMG activity in a time series. Further research is needed to determine if this class of algorithms perform equally well when the time series has multiple bursts of muscle activity.
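A minimal Bayesian changepoint posterior for a single onset, assuming known pre- and post-onset signal levels and noise standard deviation (a strong simplification of the full EMG analysis), can be written as:

```python
import math

def changepoint_posterior(signal, mu0=0.0, mu1=1.0, sigma=0.5):
    """Posterior over the index of a single mean shift from mu0 to mu1
    (known levels, known Gaussian noise sd), with a uniform prior over
    onset locations. Entry t-1 corresponds to onset at sample t."""
    n = len(signal)
    log_post = []
    for t in range(1, n):
        ll = sum(-0.5 * ((x - mu0) / sigma) ** 2 for x in signal[:t])
        ll += sum(-0.5 * ((x - mu1) / sigma) ** 2 for x in signal[t:])
        log_post.append(ll)
    m = max(log_post)                       # stabilize before exponentiating
    post = [math.exp(v - m) for v in log_post]
    z = sum(post)
    return [p / z for p in post]
```

Reporting the onset only when the posterior mass at the best candidate exceeds a probability threshold mirrors the 60-90% posterior criterion that performed best above.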
Ma, Jianzhong; Amos, Christopher I; Warwick Daw, E
2007-09-01
Although extended pedigrees are often sampled through probands with extreme levels of a quantitative trait, Markov chain Monte Carlo (MCMC) methods for segregation and linkage analysis have not been able to perform ascertainment corrections. Further, the extent to which ascertainment of pedigrees leads to biases in the estimation of segregation and linkage parameters has not been previously studied for MCMC procedures. In this paper, we studied these issues with a Bayesian MCMC approach for joint segregation and linkage analysis, as implemented in the package Loki. We first simulated pedigrees ascertained through individuals with extreme values of a quantitative trait, in the spirit of the sequential sampling theory of Cannings and Thompson [1977, Clin. Genet. 12:208-212]. Using our simulated data, we detected no bias in estimates of the trait locus location. However, when the ascertainment threshold was higher than or close to the true value of the highest genotypic mean, we found bias not only in the allele frequencies but also in the estimate of this genotypic mean. When there were multiple trait loci, this bias destroyed the additivity of the effects of the trait loci and caused biases in the estimation of all genotypic means when a purely additive model was used for analyzing the data. To account for pedigree ascertainment with sequential sampling, we developed a Bayesian ascertainment approach and implemented Metropolis-Hastings updates in the MCMC samplers used in Loki. Ascertainment correction greatly reduced biases in parameter estimates. Our method is designed for multiple, but a fixed number of, trait loci. Copyright (c) 2007 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Murakami, H.; Chen, X.; Hahn, M. S.; Over, M. W.; Rockhold, M. L.; Vermeul, V.; Hammond, G. E.; Zachara, J. M.; Rubin, Y.
2010-12-01
Subsurface characterization for predicting groundwater flow and contaminant transport requires us to integrate large and diverse datasets in a consistent manner and quantify the associated uncertainty. In this study, we sequentially assimilated multiple types of datasets for characterizing a three-dimensional heterogeneous hydraulic conductivity field at the Hanford 300 Area. The datasets included constant-rate injection tests, electromagnetic borehole flowmeter tests, a lithology profile and tracer tests. We used the method of anchored distributions (MAD), which is a modular-structured Bayesian geostatistical inversion method. MAD has two major advantages over other inversion methods. First, it can directly infer a joint distribution of parameters, which can be used as an input in stochastic simulations for prediction. In MAD, in addition to typical geostatistical structural parameters, the parameter vector includes multiple point values of the heterogeneous field, called anchors, which capture local trends and reduce uncertainty in the prediction. Second, MAD allows us to integrate the datasets sequentially in a Bayesian framework such that the posterior distribution is updated as each new dataset is included. The sequential assimilation can decrease the computational burden significantly. We applied MAD to assimilate different combinations of the datasets, and then compared the inversion results. For the injection and tracer test assimilation, we calculated temporal moments of pressure build-up and breakthrough curves, respectively, to reduce the data dimension. The massively parallel flow and transport code PFLOTRAN was used for simulating the tracer test. For comparison, we used different metrics based on breakthrough curves not used in the inversion, such as mean arrival time, peak concentration and early arrival time. This comparison is intended to establish the combined data worth, i.e. which combination of the datasets is most effective for a given metric, which will be useful for guiding further characterization effort at the site and also future characterization projects at other sites.
Zhang, Jian; Hou, Dibo; Wang, Ke; Huang, Pingjie; Zhang, Guangxin; Loáiciga, Hugo
2017-05-01
The detection of organic contaminants in water distribution systems is essential to protect public health from potentially harmful compounds resulting from accidental spills or intentional releases. Existing methods for detecting organic contaminants are based on quantitative analyses such as chemical testing and gas/liquid chromatography, which are time- and reagent-consuming and involve costly maintenance. This study proposes a novel procedure based on the discrete wavelet transform and principal component analysis for detecting organic contamination events from ultraviolet spectral data. First, the spectrum of each observation is transformed using a discrete wavelet with a coiflet mother wavelet to capture abrupt changes along the wavelength. Principal component analysis is then employed to approximate the spectra based on capture and fusion features. Hotelling's T² statistic is calculated and its significance used to detect outliers. A contamination-event alarm is triggered by sequential Bayesian analysis when the outliers appear continuously in several observations. The effectiveness of the proposed procedure is tested on-line using a pilot-scale setup and experimental data.
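The sequential Bayesian alarm step described above (triggering only when outliers persist across several observations) can be sketched as a recursive posterior update on binary outlier flags. The flag probabilities and threshold below are illustrative assumptions, not values from the study.

```python
def sequential_alarm(flags, p_prior=0.01, p_flag_event=0.9,
                     p_flag_normal=0.05, threshold=0.99):
    """Sequentially update P(contamination event) from a stream of binary
    outlier flags (1 = T^2 outlier); trigger once the posterior crosses
    `threshold`. Returns (trigger index from 1, posterior) or (None, posterior)."""
    p = p_prior
    for t, flag in enumerate(flags, start=1):
        like_event = p_flag_event if flag else 1.0 - p_flag_event
        like_normal = p_flag_normal if flag else 1.0 - p_flag_normal
        # Bayes' rule on the two hypotheses: event vs. normal operation
        p = p * like_event / (p * like_event + (1.0 - p) * like_normal)
        if p >= threshold:
            return t, p
    return None, p

# With these rates, four consecutive outliers confirm the alarm
trigger, posterior = sequential_alarm([1, 1, 1, 1, 1])
```

A single isolated outlier barely moves the posterior, which is exactly the behaviour wanted here: the alarm fires only for sustained anomalies, suppressing one-off spectral glitches.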
The utility of Bayesian predictive probabilities for interim monitoring of clinical trials
Connor, Jason T.; Ayers, Gregory D; Alvarez, JoAnn
2014-01-01
Background Bayesian predictive probabilities can be used for interim monitoring of clinical trials to estimate the probability of observing a statistically significant treatment effect if the trial were to continue to its predefined maximum sample size. Purpose We explore settings in which Bayesian predictive probabilities are advantageous for interim monitoring compared to Bayesian posterior probabilities, p-values, conditional power, or group sequential methods. Results For interim analyses that address prediction hypotheses, such as futility monitoring and efficacy monitoring with lagged outcomes, only predictive probabilities properly account for the amount of data remaining to be observed in a clinical trial and have the flexibility to incorporate additional information via auxiliary variables. Limitations Computational burdens limit the feasibility of predictive probabilities in many clinical trial settings. The specification of prior distributions brings additional challenges for regulatory approval. Conclusions The use of Bayesian predictive probabilities enables the choice of logical interim stopping rules that closely align with the clinical decision making process. PMID:24872363
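For a binary endpoint, the interim predictive probability reduces to a Beta-Binomial tail sum. The sketch below assumes a single-arm trial with a Beta(1, 1) prior and a success criterion expressed as a minimum total number of responders; all numbers are illustrative.

```python
from math import comb, exp, lgamma

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def beta_binom_pmf(y, m, a, b):
    """P(y successes among m future patients) under a Beta(a, b) posterior."""
    return comb(m, y) * exp(log_beta(a + y, b + m - y) - log_beta(a, b))

def predictive_probability(x, n, N, s_crit, a=1.0, b=1.0):
    """Probability that the completed trial (N patients total) reaches at
    least s_crit responders, given x responders among the first n patients
    and a Beta(a, b) prior on the response rate."""
    a_post, b_post = a + x, b + n - x
    m = N - n
    return sum(beta_binom_pmf(y, m, a_post, b_post)
               for y in range(max(0, s_crit - x), m + 1))

# Interim look: 18/30 responders; trial succeeds with >= 36/60 responders
pp = predictive_probability(x=18, n=30, N=60, s_crit=36)
```

A futility rule then stops the trial when `pp` falls below a preset floor (say 0.05), which is the "amount of data remaining" logic that posterior probabilities and p-values do not capture.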
Uncertainty aggregation and reduction in structure-material performance prediction
NASA Astrophysics Data System (ADS)
Hu, Zhen; Mahadevan, Sankaran; Ao, Dan
2018-02-01
An uncertainty aggregation and reduction framework is presented for structure-material performance prediction. Different types of uncertainty sources, structural analysis model, and material performance prediction model are connected through a Bayesian network for systematic uncertainty aggregation analysis. To reduce the uncertainty in the computational structure-material performance prediction model, Bayesian updating using experimental observation data is investigated based on the Bayesian network. It is observed that the Bayesian updating results will have large error if the model cannot accurately represent the actual physics, and that this error will be propagated to the predicted performance distribution. To address this issue, this paper proposes a novel uncertainty reduction method by integrating Bayesian calibration with model validation adaptively. The observation domain of the quantity of interest is first discretized into multiple segments. An adaptive algorithm is then developed to perform model validation and Bayesian updating over these observation segments sequentially. Only information from observation segments where the model prediction is highly reliable is used for Bayesian updating; this is found to increase the effectiveness and efficiency of uncertainty reduction. A composite rotorcraft hub component fatigue life prediction model, which combines a finite element structural analysis model and a material damage model, is used to demonstrate the proposed method.
Failure detection system design methodology. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chow, E. Y.
1980-01-01
The design of a failure detection and identification (FDI) system consists of designing a robust residual generation process and a high-performance decision-making process. The design of these two processes is examined separately. Residual generation is based on analytical redundancy. Redundancy relations that are insensitive to modelling errors and noise effects are important for designing robust residual generation processes. The characterization of the concept of analytical redundancy in terms of a generalized parity space provides a framework in which a systematic approach to the determination of robust redundancy relations is developed. The Bayesian approach is adopted for the design of high-performance decision processes. The FDI decision problem is formulated as a Bayes sequential decision problem. Since the optimal decision rule is incomputable, a methodology for designing suboptimal rules is proposed. A numerical algorithm is developed to facilitate the design and performance evaluation of suboptimal rules.
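The Bayes sequential decision idea can be illustrated with Wald's sequential probability ratio test, the classical computable special case for deciding between two simple hypotheses on a residual stream. The Gaussian fault model below is an illustrative assumption, not the thesis's residual model.

```python
import math
import random

def sprt(stream, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.001, beta=0.001):
    """Wald's sequential probability ratio test between N(mu0, sigma^2)
    (no fault) and N(mu1, sigma^2) (fault): stop as soon as the
    accumulated log-likelihood ratio crosses either threshold."""
    upper = math.log((1.0 - beta) / alpha)   # decide H1 (fault) above this
    lower = math.log(beta / (1.0 - alpha))   # decide H0 (no fault) below this
    llr, k = 0.0, 0
    for k, y in enumerate(stream, start=1):
        # Gaussian log-likelihood-ratio increment for one residual sample
        llr += (mu1 - mu0) * (y - 0.5 * (mu0 + mu1)) / sigma ** 2
        if llr >= upper:
            return "H1", k
        if llr <= lower:
            return "H0", k
    return "undecided", k

random.seed(2)
faulty_residuals = (random.gauss(1.0, 1.0) for _ in range(10000))
decision, n_used = sprt(faulty_residuals)
```

The thresholds follow Wald's bounds from the target error rates, so the stopping time adapts to the data: clear faults are declared after a handful of samples, marginal ones take longer.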
Decision theory, reinforcement learning, and the brain.
Dayan, Peter; Daw, Nathaniel D
2008-12-01
Decision making is a core competence for animals and humans acting and surviving in environments they only partially comprehend, gaining rewards and punishments for their troubles. Decision-theoretic concepts permeate experiments and computational models in ethology, psychology, and neuroscience. Here, we review a well-known, coherent Bayesian approach to decision making, showing how it unifies issues in Markovian decision problems, signal detection psychophysics, sequential sampling, and optimal exploration and discuss paradigmatic psychological and neural examples of each problem. We discuss computational issues concerning what subjects know about their task and how ambitious they are in seeking optimal solutions; we address algorithmic topics concerning model-based and model-free methods for making choices; and we highlight key aspects of the neural implementation of decision making.
Bayesian approach to inverse statistical mechanics.
Habeck, Michael
2014-05-01
Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.
Bayesian analyses of time-interval data for environmental radiation monitoring.
Luo, Peng; Sharp, Julia L; DeVol, Timothy A
2013-01-01
Time-interval (time difference between two consecutive pulses) analysis based on the principles of Bayesian inference was investigated for online radiation monitoring. Using experimental and simulated data, Bayesian analysis of time-interval data [Bayesian (ti)] was compared with Bayesian and conventional frequentist analyses of counts in a fixed count time [Bayesian (cnt) and single interval test (SIT), respectively]. The performances of the three methods were compared in terms of average run length (ARL) and detection probability for several simulated detection scenarios. Experimental data were acquired with a DGF-4C system in list mode. Simulated data were obtained using Monte Carlo techniques to obtain a random sampling of the Poisson distribution. All statistical algorithms were developed using the R Project for statistical computing. Bayesian analysis of time-interval information provided a detection probability similar to that of Bayesian analysis of count information, but allowed a decision to be made with fewer pulses at relatively high radiation levels. In addition, for cases in which the source is present only briefly (less than the count time), time-interval information is more sensitive for detecting a change than count information, since the source counts are averaged with the background counts over the entire count time. The relationships of the source time, change points, and modifications to the Bayesian approach for increasing detection probability are presented.
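A minimal sketch of the time-interval [Bayesian (ti)] idea: each inter-pulse interval is exponentially distributed, with a higher total rate when a source is present, and the posterior probability of "source present" is updated one interval at a time. The background and source rates below are illustrative, not from the study.

```python
import math
import random

def update_source_prob(prior, interval, lam_bg, lam_src):
    """One-interval Bayes update: is this inter-pulse interval more
    consistent with background alone (rate lam_bg) or with background
    plus source (rate lam_bg + lam_src)? Rates in counts per second."""
    lam1 = lam_bg + lam_src
    like_bg = lam_bg * math.exp(-lam_bg * interval)
    like_src = lam1 * math.exp(-lam1 * interval)
    return prior * like_src / (prior * like_src + (1.0 - prior) * like_bg)

random.seed(4)
lam_bg, lam_src = 5.0, 15.0     # illustrative count rates
p = 0.5                          # prior probability that a source is present
pulses_needed = 0
for pulses_needed in range(1, 1001):
    dt = random.expovariate(lam_bg + lam_src)   # simulate: source truly present
    p = update_source_prob(p, dt, lam_bg, lam_src)
    if p > 0.99:                 # decide as soon as statistically justified
        break
```

Because each pulse carries its own evidence, the decision arrives after however many pulses the data actually require, rather than after a fixed count time.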
A path-level exact parallelization strategy for sequential simulation
NASA Astrophysics Data System (ADS)
Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.
2018-01-01
Sequential simulation is a well-known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, the Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or classes defined by K different thresholds (continuous case). Similarly, the Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelizing the SIS and SGS methods is presented. A first stage of re-arrangement of the simulation path is performed, followed by a second stage of parallel simulation for non-conflicting nodes. A key advantage of the proposed parallelization method is that it generates realizations identical to those of the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains, with many categories and maximum kriging neighbours in each case, achieving high speedups in the best scenarios using 16 threads of execution on a single machine.
Neutron multiplicity counting: Confidence intervals for reconstruction parameters
Verbeke, Jerome M.
2016-03-09
From nuclear materials accountability to homeland security, the need for improved nuclear material detection, assay, and authentication has grown over the past decades. Starting in the 1940s, neutron multiplicity counting techniques have enabled quantitative evaluation of masses and multiplications of fissile materials. In this paper, we propose a new method to compute uncertainties on these parameters using a model-based sequential Bayesian processor, resulting in credible regions in the fissile material mass and multiplication space. These uncertainties will enable us to quantitatively evaluate proposed improvements to the theoretical fission chain model. Additionally, because the processor can calculate uncertainties in real time, it is a useful tool in applications such as portal monitoring: monitoring can stop as soon as a preset confidence of non-threat is reached.
BATSE gamma-ray burst line search. 2: Bayesian consistency methodology
NASA Technical Reports Server (NTRS)
Band, D. L.; Ford, L. A.; Matteson, J. L.; Briggs, M.; Paciesas, W.; Pendleton, G.; Preece, R.; Palmer, D.; Teegarden, B.; Schaefer, B.
1994-01-01
We describe a Bayesian methodology to evaluate the consistency between the reported Ginga and Burst and Transient Source Experiment (BATSE) detections of absorption features in gamma-ray burst spectra. Currently no features have been detected by BATSE, but this methodology will still be applicable if and when such features are discovered. The Bayesian methodology permits the comparison of hypotheses regarding the two detectors' observations and makes explicit the subjective aspects of our analysis (e.g., the quantification of our confidence in detector performance). We also present non-Bayesian consistency statistics. Based on preliminary calculations of line detectability, we find that both the Bayesian and non-Bayesian techniques show that the BATSE and Ginga observations are consistent given our understanding of these detectors.
A Memory-Based Theory of Verbal Cognition
ERIC Educational Resources Information Center
Dennis, Simon
2005-01-01
The syntagmatic paradigmatic model is a distributed, memory-based account of verbal processing. Built on a Bayesian interpretation of string edit theory, it characterizes the control of verbal cognition as the retrieval of sets of syntagmatic and paradigmatic constraints from sequential and relational long-term memory and the resolution of these…
Sequential Bayesian geoacoustic inversion for mobile and compact source-receiver configuration.
Carrière, Olivier; Hermand, Jean-Pierre
2012-04-01
Geoacoustic characterization of wide areas through inversion requires easily deployable configurations, including free-drifting platforms, underwater gliders and autonomous vehicles, typically performing repeated transmissions during their course. In this paper, the inverse problem is formulated as sequential Bayesian filtering to take advantage of repeated transmission measurements. Nonlinear Kalman filters implement a random-walk model for geometry and environment, with an acoustic propagation code in the measurement model. The approach is tested on data from the MREA/BP07 sea trials, consisting of multitone and frequency-modulated signals (bands: 0.25-0.8 and 0.8-1.6 kHz) received on a shallow vertical array of four hydrophones spaced 5 m apart, drifting over 0.7-1.6 km in range. Space- and time-coherent processing are applied to the respective signal types. Kalman filter outputs are compared to a sequence of global optimizations performed independently on each received signal. For both signal types, the sequential approach is not only more accurate but also more efficient. Due to frequency diversity, the processing of modulated signals produces more stable tracking. Although an extended Kalman filter provides comparable estimates of the tracked parameters, the ensemble Kalman filter is necessary to properly assess uncertainty. In spite of mild range dependence and a simplified bottom model, all tracked geoacoustic parameters are consistent with high-resolution seismic profiling, core-logging P-wave velocity, and previous inversion results with fixed geometries.
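The random-walk state model with a Kalman update can be illustrated in the scalar case. This sketch tracks a slowly drifting parameter through noisy measurements and stands in for neither the acoustic propagation code nor the ensemble filter used in the paper; all values are illustrative.

```python
import random

def kalman_random_walk(measurements, q=0.01, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter with a random-walk state, x_t = x_{t-1} + N(0, q),
    and direct noisy measurements, y_t = x_t + N(0, r) (q and r are variances).
    Returns the filtered estimates and their posterior variances."""
    x, p = x0, p0
    estimates, variances = [], []
    for y in measurements:
        p = p + q                 # predict: random-walk growth of uncertainty
        k = p / (p + r)           # Kalman gain
        x = x + k * (y - x)       # update with the innovation
        p = (1.0 - k) * p
        estimates.append(x)
        variances.append(p)
    return estimates, variances

# Noisy measurements of a parameter drifting slowly from 0 toward 1
random.seed(7)
truth, ys, x = [], [], 0.0
for _ in range(200):
    x += 0.005
    truth.append(x)
    ys.append(x + random.gauss(0.0, 0.5))
est, var = kalman_random_walk(ys)
```

The random-walk variance q sets how quickly the filter forgets old transmissions: larger q tracks fast environmental change at the cost of noisier estimates.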
Evaluation of Bayesian Sequential Proportion Estimation Using Analyst Labels
NASA Technical Reports Server (NTRS)
Lennington, R. K.; Abotteen, K. M. (Principal Investigator)
1980-01-01
The author has identified the following significant results. A total of ten Large Area Crop Inventory Experiment Phase 3 blind sites and analyst-interpreter labels were used in a study to compare proportional estimates obtained by the Bayes sequential procedure with estimates obtained from simple random sampling and from Procedure 1. The analyst error rate using the Bayes technique was shown to be no greater than that for the simple random sampling. Also, the segment proportion estimates produced using this technique had smaller bias and mean squared errors than the estimates produced using either simple random sampling or Procedure 1.
Particle filters, a quasi-Monte-Carlo-solution for segmentation of coronaries.
Florin, Charles; Paragios, Nikos; Williams, Jim
2005-01-01
In this paper we propose a Particle Filter-based approach for the segmentation of coronary arteries. To this end, successive planes of the vessel are modeled as unknown states of a sequential process. Such states consist of the orientation, position, shape model and appearance (in statistical terms) of the vessel that are recovered in an incremental fashion, using a sequential Bayesian filter (Particle Filter). In order to account for bifurcations and branchings, we consider a Monte Carlo sampling rule that propagates in parallel multiple hypotheses. Promising results on the segmentation of coronary arteries demonstrate the potential of the proposed approach.
Anomaly Detection in Dynamic Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turcotte, Melissa
2014-10-14
Anomaly detection in dynamic communication networks has many important security applications. These networks can be extremely large, and so detecting any changes in their structure can be computationally challenging; hence, computationally fast, parallelisable methods for monitoring the network are paramount. For this reason the methods presented here use independent node- and edge-based models to detect locally anomalous substructures within communication networks. As a first stage, the aim is to detect changes in the data streams arising from node or edge communications. Throughout the thesis, simple, conjugate Bayesian models for counting processes are used to model these data streams. A second stage of analysis can then be performed on a much reduced subset of the network comprising the nodes and edges identified as potentially anomalous in the first stage. The first method assumes communications in a network arise from an inhomogeneous Poisson process with piecewise constant intensity. Anomaly detection is then treated as a changepoint problem on the intensities. The changepoint model is extended to incorporate seasonal behavior inherent in communication networks. This seasonal behavior is also viewed as a changepoint problem acting on a piecewise constant Poisson process. In a static time frame, inference is made on this extended model via a Gibbs sampling strategy. In a sequential time frame, where the data arrive as a stream, a novel, fast Sequential Monte Carlo (SMC) algorithm is introduced to sample from the sequence of posterior distributions of the change points over time. A second method is considered for monitoring communications in a large-scale computer network. The usage patterns in these types of networks are very bursty in nature and do not fit a Poisson process model.
For tractable inference, discrete time models are considered, where the data are aggregated into discrete time periods and probability models are fitted to the communication counts. In a sequential analysis, anomalous behavior is then identified as outlying behavior with respect to the fitted predictive probability models. Seasonality is again incorporated into the model and is treated as a changepoint model on the transition probabilities of a discrete time Markov process. Second-stage analytics are then developed which combine anomalous edges to identify anomalous substructures in the network.
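The conjugate Bayesian counting-process models used in the first stage can be sketched with a Gamma-Poisson update and a negative-binomial posterior predictive, flagging a window whose count has a tiny predictive tail probability. The rates below are illustrative, not from the thesis.

```python
import math
from math import exp, lgamma

def gamma_poisson_update(alpha, beta, count, interval=1.0):
    """Conjugate update: Gamma(alpha, beta) prior on a Poisson rate,
    after observing `count` events in `interval` time units."""
    return alpha + count, beta + interval

def predictive_tail(alpha, beta, count, interval=1.0):
    """P(next-window count >= `count`) under the negative-binomial
    posterior predictive; a tiny value flags the window as anomalous."""
    r, p = alpha, beta / (beta + interval)
    def pmf(k):
        # negative-binomial pmf with real-valued shape r
        return exp(lgamma(k + r) - lgamma(r) - lgamma(k + 1)
                   + r * math.log(p) + k * math.log(1.0 - p))
    return 1.0 - sum(pmf(k) for k in range(count))

# Learn a typical edge rate from 100 quiet windows of 2 events each...
alpha, beta = 1.0, 1.0
for c in [2] * 100:
    alpha, beta = gamma_poisson_update(alpha, beta, c)
# ...then score a sudden burst of 20 events against a normal window
p_burst = predictive_tail(alpha, beta, 20)
p_normal = predictive_tail(alpha, beta, 2)
```

Because the update is a pair of additions per edge, this scales to very large networks and parallelises trivially across edges, which is the property the thesis relies on.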
The PMHT: solutions for some of its problems
NASA Astrophysics Data System (ADS)
Wieneke, Monika; Koch, Wolfgang
2007-09-01
Tracking multiple targets in a cluttered environment is a challenging task. Probabilistic Multiple Hypothesis Tracking (PMHT) is an efficient approach for dealing with it. Essentially, PMHT is based on the method of Expectation-Maximization for handling association conflicts. Linearity in the number of targets and measurements is the main motivation for further development and extension of this methodology. Unfortunately, compared with the Probabilistic Data Association Filter (PDAF), PMHT has not yet shown its superiority in terms of track-loss statistics. Furthermore, the problem of track extraction and deletion is apparently not yet satisfactorily solved within this framework. Four properties of PMHT are responsible for its problems in track maintenance: Non-Adaptivity, Hospitality, Narcissism and Local Maxima [1, 2]. In this work we present a solution for each of them and derive an improved PMHT by integrating the solutions into the PMHT formalism. The new PMHT is evaluated by Monte Carlo simulations. A sequential likelihood-ratio (LR) test for track extraction has been developed and already integrated into the framework of traditional Bayesian Multiple Hypothesis Tracking [3]. As a multi-scan approach, the PMHT methodology also has the potential for track extraction. In this paper an analogous integration of a sequential LR test into the PMHT framework is proposed. We present an LR formula for track extraction and deletion using the PMHT update formulae. As PMHT provides all required ingredients for a sequential LR calculation, the LR is a by-product of the PMHT iteration process. Therefore the resulting update formula for the sequential LR test affords the development of Track-Before-Detect algorithms for PMHT. The approach is illustrated by a simple example.
Online Bayesian Learning with Natural Sequential Prior Distribution Used for Wind Speed Prediction
NASA Astrophysics Data System (ADS)
Cheggaga, Nawal
2017-11-01
Predicting wind speed is one of the most important and critical tasks in a wind farm. Approaches that directly describe the stochastic dynamics of the meteorological data face problems related to its non-Gaussian statistics and the presence of seasonal effects. In this paper, online Bayesian learning has been successfully applied to three-layer perceptrons used for wind speed prediction. First, a conventional transition model based on the squared norm of the difference between the current parameter vector and the previous parameter vector has been used. We noticed that this transition model does not adequately consider the difference between the current and the previous wind speed measurement. To adequately consider this difference, we use a natural sequential prior. The proposed transition model uses a Fisher information matrix to consider the difference between the observation models more naturally. The obtained results showed good agreement between the measured and predicted series. The mean relative error over the whole data set does not exceed 5%.
Incremental Bayesian Category Learning From Natural Language.
Frermann, Lea; Lapata, Mirella
2016-08-01
Models of category learning have been extensively studied in cognitive science and primarily tested on perceptual abstractions or artificial stimuli. In this paper, we focus on categories acquired from natural language stimuli, that is, words (e.g., chair is a member of the furniture category). We present a Bayesian model that, unlike previous work, learns both categories and their features in a single process. We model category induction as two interrelated subproblems: (a) the acquisition of features that discriminate among categories, and (b) the grouping of concepts into categories based on those features. Our model learns categories incrementally using particle filters, a sequential Monte Carlo method commonly used for approximate probabilistic inference that sequentially integrates newly observed data and can be viewed as a plausible mechanism for human learning. Experimental results show that our incremental learner obtains meaningful categories which yield a closer fit to behavioral data compared to related models, while at the same time acquiring features which characterize the learned categories. (An earlier version of this work was published in Frermann and Lapata.) Copyright © 2015 Cognitive Science Society, Inc.
As-built design specification for proportion estimate software subsystem
NASA Technical Reports Server (NTRS)
Obrien, S. (Principal Investigator)
1980-01-01
The Proportion Estimate Processor evaluates four estimation techniques in order to get an improved estimate of the proportion of a scene that is planted in a selected crop. The four techniques to be evaluated were provided by the techniques development section and are: (1) random sampling; (2) proportional allocation, relative count estimate; (3) proportional allocation, Bayesian estimate; and (4) sequential Bayesian allocation. The user is given two options for computation of the estimated mean square error. These are referred to as the cluster calculation option and the segment calculation option. The software for the Proportion Estimate Processor is operational on the IBM 3031 computer.
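The sequential Bayesian allocation idea, in its simplest form, is a Beta-posterior update on the scene proportion as analyst labels arrive one at a time; the label counts below are hypothetical, not from the Phase 3 blind sites.

```python
def beta_update(a, b, labels):
    """Sequentially update a Beta(a, b) prior on a scene's crop proportion
    as analyst labels (1 = crop of interest, 0 = other) arrive one at a time.
    Returns the final Beta parameters and the posterior mean after each label."""
    trace = []
    for lab in labels:
        a, b = a + lab, b + (1 - lab)
        trace.append(a / (a + b))       # posterior mean after each label
    return a, b, trace

# 120 of 200 hypothetical pixels labelled as the crop of interest
labels = [1] * 120 + [0] * 80
a, b, trace = beta_update(1.0, 1.0, labels)
```

The posterior variance a*b / ((a+b)^2 (a+b+1)) shrinks with each label, giving a running mean-squared-error estimate alongside the proportion itself.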
Variable Discretisation for Anomaly Detection using Bayesian Networks
2017-01-01
Legg, Jonathan
National Security and ISR Division, Defence Science and Technology Group, DST-Group-TR-3328
Anomaly detection is the process by which low probability events are automatically found against a... Bayesian network implementations usually require each variable to take on a finite number of mutually...
A new prior for bayesian anomaly detection: application to biosurveillance.
Shen, Y; Cooper, G F
2010-01-01
Bayesian anomaly detection computes posterior probabilities of anomalous events by combining prior beliefs and evidence from data. However, the specification of prior probabilities can be challenging. This paper describes a Bayesian prior in the context of disease outbreak detection. The goal is to provide a meaningful, easy-to-use prior that yields a posterior probability of an outbreak that performs at least as well as a standard frequentist approach. If this goal is achieved, the resulting posterior could be usefully incorporated into a decision analysis about how to act in light of a possible disease outbreak. This paper describes a Bayesian method for anomaly detection that combines learning from data with a semi-informative prior probability over patterns of anomalous events. A univariate version of the algorithm is presented here for ease of illustration of the essential ideas. The paper describes the algorithm in the context of disease-outbreak detection, but it is general and can be used in other anomaly detection applications. For this application, the semi-informative prior specifies that an increased count over baseline is expected for the variable being monitored, such as the number of respiratory chief complaints per day at a given emergency department. The semi-informative prior is derived from the baseline prior, which is estimated from historical data. The evaluation reported here used semi-synthetic data to assess the detection performance of the proposed Bayesian method and a control chart method, which is a standard frequentist algorithm that is closest to the Bayesian method in terms of the type of data it uses. The disease-outbreak detection performance of the Bayesian method was statistically significantly better than that of the control chart method when proper baseline periods were used to estimate the baseline behavior to avoid seasonal effects. When using longer baseline periods, the Bayesian method performed as well as the control chart method. The time complexity of the Bayesian algorithm is linear in the number of observed events being monitored, due to a novel, closed-form derivation that is introduced in the paper. This paper introduces a novel prior probability for Bayesian outbreak detection that is expressive, easy to apply, computationally efficient, and performs as well as or better than a standard frequentist method.
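The core two-hypothesis computation behind such a count-based Bayesian detector can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the baseline and outbreak rates, the prior outbreak probability, and the example complaint count are all hypothetical.

```python
import math

def poisson_pmf(k, lam):
    """Poisson probability mass function."""
    return math.exp(-lam) * lam**k / math.factorial(k)

def outbreak_posterior(count, baseline_rate, outbreak_rate, prior_outbreak=0.01):
    """Posterior P(outbreak | count) via Bayes' rule over two Poisson hypotheses:
    today's count came either from the baseline rate or from an elevated rate."""
    w_out = poisson_pmf(count, outbreak_rate) * prior_outbreak
    w_base = poisson_pmf(count, baseline_rate) * (1.0 - prior_outbreak)
    return w_out / (w_out + w_base)

# Hypothetical example: baseline of 20 respiratory complaints/day, an outbreak
# hypothesized to double the rate, and 35 complaints observed today.
p = outbreak_posterior(count=35, baseline_rate=20.0, outbreak_rate=40.0)
```

A control chart would instead flag the day once the count exceeds a fixed multiple of the baseline standard deviation; the Bayesian posterior additionally folds in a prior belief about how often outbreaks occur.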
Improving the Accuracy of Planet Occurrence Rates from Kepler Using Approximate Bayesian Computation
NASA Astrophysics Data System (ADS)
Hsu, Danley C.; Ford, Eric B.; Ragozzine, Darin; Morehead, Robert C.
2018-05-01
We present a new framework to characterize the occurrence rates of planet candidates identified by Kepler based on hierarchical Bayesian modeling, approximate Bayesian computing (ABC), and sequential importance sampling. For this study, we adopt a simple 2D grid in planet radius and orbital period as our model and apply our algorithm to estimate occurrence rates for Q1–Q16 planet candidates orbiting solar-type stars. We arrive at significantly increased planet occurrence rates for small planet candidates (Rp < 1.25 R⊕) at larger orbital periods (P > 80 days) compared to the rates estimated by the more common inverse detection efficiency method (IDEM). Our improved methodology estimates that the occurrence rate density of small planet candidates in the habitable zone of solar-type stars is 1.6 (+1.2/−0.5) per factor of 2 in planet radius and orbital period. Additionally, we observe a local minimum in the occurrence rate for strong planet candidates marginalized over orbital period between 1.5 and 2 R⊕ that is consistent with previous studies. For future improvements, the forward modeling approach of ABC is ideally suited to incorporating multiple populations, such as planets, astrophysical false positives, and pipeline false alarms, to provide accurate planet occurrence rates and uncertainties. Furthermore, ABC provides a practical statistical framework for answering complex questions (e.g., frequency of different planetary architectures) and providing sound uncertainties, even in the face of complex selection effects, observational biases, and follow-up strategies. In summary, ABC offers a powerful tool for accurately characterizing a wide variety of astrophysical populations.
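The forward-modeling idea behind ABC can be sketched with a toy occurrence-rate estimate. All numbers here (star count, detection efficiency, observed candidate count, acceptance tolerance) are hypothetical, and the actual analysis uses hierarchical modeling with sequential importance sampling rather than plain rejection sampling:

```python
import random

def abc_occurrence_rate(n_stars, det_eff, observed, n_draws=5000, tol=2, seed=1):
    """ABC rejection sampling: draw an occurrence rate from a uniform prior,
    forward-model planet detections, and keep draws whose simulated detection
    count falls within `tol` of the observed count."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        rate = rng.uniform(0.0, 1.0)  # uniform prior on the occurrence rate
        # Forward model: each star hosts a planet with probability `rate`;
        # each planet is detected with probability `det_eff`.
        planets = sum(rng.random() < rate for _ in range(n_stars))
        detected = sum(rng.random() < det_eff for _ in range(planets))
        if abs(detected - observed) <= tol:
            accepted.append(rate)
    return sum(accepted) / len(accepted)  # posterior mean of accepted draws

est = abc_occurrence_rate(n_stars=200, det_eff=0.3, observed=12)
```

Because the observed count is reproduced through the forward model rather than through an explicit likelihood, extra populations (false positives, pipeline false alarms) can be added to the simulator without changing the inference machinery.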
NASA Astrophysics Data System (ADS)
Tsionas, Mike G.; Michaelides, Panayotis G.
2017-09-01
We use a novel Bayesian inference procedure for the Lyapunov exponent in the dynamical system of returns and their unobserved volatility. In this dynamical system, computation of the largest Lyapunov exponent by traditional methods is impossible, as the stochastic nature of the system has to be taken explicitly into account due to the unobserved volatility. We apply the new techniques to daily stock return data for a group of six countries, namely the USA, UK, Switzerland, Netherlands, Germany and France, from 2003 to 2014, by means of Sequential Monte Carlo for Bayesian inference. The evidence suggests that there is indeed noisy chaos both before and after the recent financial crisis. However, when a much simpler model is examined, in which the interaction between returns and volatility is not considered jointly, the hypothesis of chaotic dynamics does not receive much support from the data ("neglected chaos").
Rich Analysis and Rational Models: Inferring Individual Behavior from Infant Looking Data
ERIC Educational Resources Information Center
Piantadosi, Steven T.; Kidd, Celeste; Aslin, Richard
2014-01-01
Studies of infant looking times over the past 50 years have provided profound insights about cognitive development, but their dependent measures and analytic techniques are quite limited. In the context of infants' attention to discrete sequential events, we show how a Bayesian data analysis approach can be combined with a rational cognitive…
Wu, Wei Mo; Wang, Jia Qiang; Cao, Qi; Wu, Jia Ping
2017-02-01
Accurate prediction of soil organic carbon (SOC) distribution is crucial for soil resources utilization and conservation, climate change adaptation, and ecosystem health. In this study, we selected a 1300 m×1700 m solonchak sampling area in the northern Tarim Basin, Xinjiang, China, and collected a total of 144 soil samples (5-10 cm). The objectives of this study were to build a Bayesian geostatistical model to predict SOC content, and to assess the performance of the Bayesian model by comparing it with three other geostatistical approaches [ordinary kriging (OK), sequential Gaussian simulation (SGS), and inverse distance weighting (IDW)]. In the study area, soil organic carbon contents ranged from 1.59 to 9.30 g·kg⁻¹ with a mean of 4.36 g·kg⁻¹ and a standard deviation of 1.62 g·kg⁻¹. The sample semivariogram was best fitted by an exponential model with a nugget-to-sill ratio of 0.57. Using the Bayesian geostatistical approach, we generated the SOC content map and obtained the prediction variance and the upper and lower 95% bounds of SOC content, which were then used to evaluate the prediction uncertainty. The Bayesian geostatistical approach performed better than OK, SGS and IDW, demonstrating the advantages of the Bayesian approach for SOC prediction.
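The exponential semivariogram model referred to above has the form γ(h) = c₀ + c·(1 − exp(−h/a)), where c₀ is the nugget, c₀ + c the sill, and a the range. A small sketch with hypothetical parameters chosen only to reproduce the reported nugget-to-sill ratio of 0.57 (the range value is invented):

```python
import math

def exp_semivariogram(h, nugget, partial_sill, range_a):
    """Exponential semivariogram: gamma(h) = nugget + partial_sill*(1 - exp(-h/a))."""
    return nugget + partial_sill * (1.0 - math.exp(-h / range_a))

# Hypothetical parameters: total sill scaled to 1.0, range of 400 m.
nugget, partial_sill, range_a = 0.57, 0.43, 400.0
ratio = nugget / (nugget + partial_sill)  # nugget-to-sill ratio
```

A nugget-to-sill ratio near 0.57 indicates moderate spatial dependence: roughly half of the variance is spatially unstructured, which limits how much any interpolator, Bayesian or classical, can sharpen local predictions.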
Towards a global flood detection system using social media
NASA Astrophysics Data System (ADS)
de Bruijn, Jens; de Moel, Hans; Jongman, Brenden; Aerts, Jeroen
2017-04-01
It is widely recognized that an early warning is critical in improving international disaster response. Analysis of social media in real time can provide valuable information about an event or help to detect unexpected events. For successful and reliable detection systems that work globally, it is important that sufficient data is available and that the algorithm works in both data-rich and data-poor environments. In this study, a new geotagging system and a multi-level event detection system for flood hazards were developed using Twitter data. Geotagging algorithms that regard one tweet as a single document are well studied. However, no algorithms exist that combine several sequential tweets mentioning keywords regarding a specific event type. Within the time frame of an event, multiple users use event-related keywords that refer to the same place name. This notion allows us to treat several sequential tweets posted in the last 24 hours as one document. For all these tweets, we collect a series of spatial indicators given in the tweet metadata and extract additional topological indicators from the text. Using these indicators, we can reduce ambiguity and thus better estimate which locations are tweeted about. Using these localized tweets, Bayesian change-point analysis is used to find significant increases in tweets mentioning countries, provinces or towns. In data-poor environments, detection of events on a country level is possible, while in other, data-rich, environments detection on a city level is achieved. Additionally, on a city level we analyse the spatial dependence of mentioned places: if multiple places within a limited spatial extent are mentioned, detection confidence increases. We run the algorithm on 2 years of Twitter data with flood-related keywords in 13 major languages and validate against a flood event database.
We find that the geotagging algorithm yields significantly more data than previously developed algorithms and successfully deals with ambiguous place names. In addition, we show that our detection system can both quickly and reliably detect floods, even in countries where data is scarce, while achieving high detail in countries where more data is available.
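The change-point idea used for detection can be sketched on a daily count series with a conjugate single-changepoint model: each candidate split position gets a Gamma-Poisson marginal likelihood, and a uniform prior over positions turns these into a posterior. The counts below are hypothetical stand-ins for daily localized-tweet totals; the operational system works on streaming data at multiple administrative levels.

```python
import math

def log_marginal(counts, a=1.0, b=1.0):
    """Log marginal likelihood of Poisson counts under a Gamma(a, b) rate prior."""
    s, n = sum(counts), len(counts)
    return (a * math.log(b) - math.lgamma(a)
            + math.lgamma(a + s) - (a + s) * math.log(b + n)
            - sum(math.lgamma(c + 1) for c in counts))

def changepoint_posterior(counts):
    """Posterior over single-changepoint positions (uniform prior on position)."""
    logs = [log_marginal(counts[:t]) + log_marginal(counts[t:])
            for t in range(1, len(counts))]
    m = max(logs)
    w = [math.exp(v - m) for v in logs]
    z = sum(w)
    return [x / z for x in w]

# Hypothetical daily counts: quiet baseline, then a jump suggesting a flood event.
counts = [2, 3, 1, 2, 2, 9, 11, 10, 12]
post = changepoint_posterior(counts)
```

The posterior concentrates on the split where the quiet baseline gives way to the elevated counts, which is the behavior that turns a rise in localized tweets into a detection.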
NASA Astrophysics Data System (ADS)
Svensson, Andreas; Schön, Thomas B.; Lindsten, Fredrik
2018-05-01
Probabilistic (or Bayesian) modeling and learning offers interesting possibilities for systematic representation of uncertainty using probability theory. However, probabilistic learning often leads to computationally challenging problems. Some problems of this type that were previously intractable can now be solved on standard personal computers thanks to recent advances in Monte Carlo methods. In particular, for learning of unknown parameters in nonlinear state-space models, methods based on the particle filter (a Monte Carlo method) have proven very useful. A notoriously challenging problem, however, still occurs when the observations in the state-space model are highly informative, i.e. when there is very little or no measurement noise present relative to the amount of process noise. The particle filter will then struggle to estimate one of the basic components of probabilistic learning, namely the likelihood p(data | parameters). To this end we suggest an algorithm which initially assumes that there is a substantial amount of artificial measurement noise present. The variance of this noise is sequentially decreased in an adaptive fashion such that we, in the end, recover the original problem or a very close approximation of it. The main component in our algorithm is a sequential Monte Carlo (SMC) sampler, which gives our proposed method a clear resemblance to the SMC² method. Another natural link is also made to the ideas underlying approximate Bayesian computation (ABC). We illustrate the method with numerical examples, and in particular show promising results for a challenging Wiener-Hammerstein benchmark problem.
Neuromusculoskeletal model self-calibration for on-line sequential Bayesian moment estimation
NASA Astrophysics Data System (ADS)
Bueno, Diana R.; Montano, L.
2017-04-01
Objective. Neuromusculoskeletal models involve many subject-specific physiological parameters that need to be adjusted to adequately represent muscle properties. Traditionally, neuromusculoskeletal models have been calibrated with a forward-inverse dynamic optimization, which is time-consuming and unfeasible for rehabilitation therapy. No self-calibration algorithms have previously been applied to these models. To the best of our knowledge, the algorithm proposed in this work is the first on-line calibration algorithm for muscle models that allows a generic model to be adjusted to different subjects in a few steps. Approach. In this paper we propose a reformulation of the traditional muscle models that is able to sequentially estimate the kinetics (net joint moments) and also perform full self-calibration (estimating the subject-specific internal parameters of the muscle from a set of arbitrary uncalibrated data), based on the unscented Kalman filter. The nonlinearity of the model, as well as its calibration problem, obliged us to adopt the sum-of-Gaussians filter, which is suitable for nonlinear systems. Main results. This sequential Bayesian self-calibration algorithm achieves a complete muscle model calibration using as input only a dataset of uncalibrated sEMG and kinematics data. The approach is validated experimentally using data from the upper limbs of 21 subjects. Significance. The results show the feasibility of neuromusculoskeletal model self-calibration. This study will contribute to a better understanding of the generalization of muscle models for subject-specific rehabilitation therapies. Moreover, this work is very promising for rehabilitation devices such as electromyography-driven exoskeletons or prostheses.
Gu, Hairong; Kim, Woojae; Hou, Fang; Lesmes, Luis Andres; Pitt, Mark A; Lu, Zhong-Lin; Myung, Jay I
2016-01-01
Measurement efficiency is of concern when a large number of observations are required to obtain reliable estimates for parametric models of vision. The standard entropy-based Bayesian adaptive testing procedures addressed the issue by selecting the most informative stimulus in sequential experimental trials. Noninformative, diffuse priors were commonly used in those tests. Hierarchical adaptive design optimization (HADO; Kim, Pitt, Lu, Steyvers, & Myung, 2014) further improves the efficiency of the standard Bayesian adaptive testing procedures by constructing an informative prior using data from observers who have already participated in the experiment. The present study represents an empirical validation of HADO in estimating the human contrast sensitivity function. The results show that HADO significantly improves the accuracy and precision of parameter estimates, and therefore requires many fewer observations to obtain reliable inference about contrast sensitivity, compared to the method of quick contrast sensitivity function (Lesmes, Lu, Baek, & Albright, 2010), which uses the standard Bayesian procedure. The improvement with HADO was maintained even when the prior was constructed from heterogeneous populations or a relatively small number of observers. The results of this case study support the conclusion that HADO can be used in Bayesian adaptive testing by replacing noninformative, diffuse priors with statistically justified informative priors without introducing unwanted bias.
Bayesian cloud detection for MERIS, AATSR, and their combination
NASA Astrophysics Data System (ADS)
Hollstein, A.; Fischer, J.; Carbajal Henken, C.; Preusker, R.
2015-04-01
A broad range of different Bayesian cloud detection schemes is applied to measurements from the Medium Resolution Imaging Spectrometer (MERIS), the Advanced Along-Track Scanning Radiometer (AATSR), and their combination. The cloud detection schemes were designed to be numerically efficient and suited for the processing of large volumes of data. Results from the classical and naive approaches to Bayesian cloud masking are discussed for MERIS and AATSR as well as for their combination. A sensitivity study on the resolution of multidimensional histograms, which were post-processed by Gaussian smoothing, shows how theoretically insufficient numbers of truth data can be used to set up accurate classical Bayesian cloud masks. Sets of exploited features from single and derived channels are numerically optimized, and results for naive and classical Bayesian cloud masks are presented. The application of the Bayesian approach is discussed in terms of reproducing existing algorithms, enhancing existing algorithms, increasing the robustness of existing algorithms, and setting up new classification schemes based on manually classified scenes.
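The naive variant reduces to multiplying per-channel histogram likelihoods under a conditional-independence assumption. A minimal sketch with hypothetical two-channel histograms (binned feature value mapped to relative frequency); the operational masks use many more features and smoothed multidimensional histograms:

```python
def naive_bayes_cloud_prob(features, hists_cloud, hists_clear, p_cloud=0.5):
    """Naive Bayesian cloud mask: per-channel histogram likelihoods are
    multiplied, assuming channels are conditionally independent given class."""
    w_cloud = p_cloud
    w_clear = 1.0 - p_cloud
    for f, hc, hl in zip(features, hists_cloud, hists_clear):
        w_cloud *= hc.get(f, 1e-6)  # histogram bin -> likelihood
        w_clear *= hl.get(f, 1e-6)  # (small floor for unseen bins)
    return w_cloud / (w_cloud + w_clear)

# Hypothetical histograms over two binned channels (bin index -> frequency).
hists_cloud = [{0: 0.1, 1: 0.9}, {0: 0.2, 1: 0.8}]
hists_clear = [{0: 0.8, 1: 0.2}, {0: 0.7, 1: 0.3}]
p = naive_bayes_cloud_prob((1, 1), hists_cloud, hists_clear)
```

A classical Bayesian mask would instead look up the joint multidimensional histogram directly, which is more accurate when enough truth data exist to populate the bins.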
Prediction of near-term breast cancer risk using a Bayesian belief network
NASA Astrophysics Data System (ADS)
Zheng, Bin; Ramalingam, Pandiyarajan; Hariharan, Harishwaran; Leader, Joseph K.; Gur, David
2013-03-01
Accurately predicting near-term breast cancer risk is an important prerequisite for establishing an optimal personalized breast cancer screening paradigm. In previous studies, we investigated and tested the feasibility of developing a unique near-term breast cancer risk prediction model based on a new risk factor associated with bilateral mammographic density asymmetry between the left and right breasts of a woman using a single feature. In this study we developed a multi-feature based Bayesian belief network (BBN) that combines bilateral mammographic density asymmetry with three other popular risk factors, namely (1) age, (2) family history, and (3) average breast density, to further increase the discriminatory power of our cancer risk model. A dataset involving "prior" negative mammography examinations of 348 women was used in the study. Among these women, 174 had breast cancer detected and verified in the next sequential screening examinations, and 174 remained negative (cancer-free). A BBN was applied to predict the risk of each woman having cancer detected six to 18 months later following the negative screening mammography. The prediction results were compared with those using single features. The prediction accuracy was significantly increased when using the BBN. The area under the ROC curve increased from an AUC=0.70 to 0.84 (p<0.01), while the positive predictive value (PPV) and negative predictive value (NPV) also increased from a PPV=0.61 to 0.78 and an NPV=0.65 to 0.75, respectively. This study demonstrates that a multi-feature based BBN can more accurately predict the near-term breast cancer risk than with a single feature.
Adaptive Sequential Monte Carlo for Multiple Changepoint Analysis
Heard, Nicholas A.; Turcotte, Melissa J. M.
2016-05-21
Process monitoring and control requires detection of structural changes in a data stream in real time. This paper introduces an efficient sequential Monte Carlo algorithm designed for learning unknown changepoints in continuous time. The method is intuitively simple: new changepoints for the latest window of data are proposed by conditioning only on data observed since the most recent estimated changepoint, as these observations carry most of the information about the current state of the process. The proposed method shows improved performance over the current state of the art. Another advantage of the proposed algorithm is that it can be made adaptive, varying the number of particles according to the apparent local complexity of the target changepoint probability distribution. This saves valuable computing time when changes in the changepoint distribution are negligible, and enables re-balancing of the importance weights of existing particles when a significant change in the target distribution is encountered. The plain and adaptive versions of the method are illustrated using the canonical continuous time changepoint problem of inferring the intensity of an inhomogeneous Poisson process, although the method is generally applicable to any changepoint problem. Performance is demonstrated using both conjugate and non-conjugate Bayesian models for the intensity. Lastly, appendices to the article are available online, illustrating the method on other models and applications.
Leontaridou, Maria; Gabbert, Silke; Van Ierland, Ekko C; Worth, Andrew P; Landsiedel, Robert
2016-07-01
This paper offers a Bayesian Value-of-Information (VOI) analysis for guiding the development of non-animal testing strategies, balancing information gains from testing with the expected social gains and costs from the adoption of regulatory decisions. Testing is assumed to have value, if, and only if, the information revealed from testing triggers a welfare-improving decision on the use (or non-use) of a substance. As an illustration, our VOI model is applied to a set of five individual non-animal prediction methods used for skin sensitisation hazard assessment, seven battery combinations of these methods, and 236 sequential 2-test and 3-test strategies. Their expected values are quantified and compared to the expected value of the local lymph node assay (LLNA) as the animal method. We find that battery and sequential combinations of non-animal prediction methods reveal a significantly higher expected value than the LLNA. This holds for the entire range of prior beliefs. Furthermore, our results illustrate that the testing strategy with the highest expected value does not necessarily have to follow the order of key events in the sensitisation adverse outcome pathway (AOP). 2016 FRAME.
Context-dependent decision-making: a simple Bayesian model
Lloyd, Kevin; Leslie, David S.
2013-01-01
Many phenomena in animal learning can be explained by a context-learning process whereby an animal learns about different patterns of relationship between environmental variables. Differentiating between such environmental regimes or ‘contexts’ allows an animal to rapidly adapt its behaviour when context changes occur. The current work views animals as making sequential inferences about current context identity in a world assumed to be relatively stable but also capable of rapid switches to previously observed or entirely new contexts. We describe a novel decision-making model in which contexts are assumed to follow a Chinese restaurant process with inertia and full Bayesian inference is approximated by a sequential-sampling scheme in which only a single hypothesis about current context is maintained. Actions are selected via Thompson sampling, allowing uncertainty in parameters to drive exploration in a straightforward manner. The model is tested on simple two-alternative choice problems with switching reinforcement schedules and the results compared with rat behavioural data from a number of T-maze studies. The model successfully replicates a number of important behavioural effects: spontaneous recovery, the effect of partial reinforcement on extinction and reversal, the overtraining reversal effect, and serial reversal-learning effects. PMID:23427101
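The context prior can be sketched as a Chinese restaurant process with an added inertia term, so the simulated animal tends to stay in its current context but occasionally revisits a previously seen context or opens a new one. The concentration and inertia values below are hypothetical:

```python
import random

def crp_next_context(counts, alpha, inertia, current, rng):
    """Sample the next context from a Chinese restaurant process with inertia:
    with probability `inertia` the current context persists; otherwise a
    context is drawn with probability proportional to its visit count,
    or a new context is opened with weight `alpha`."""
    if rng.random() < inertia:
        return current
    total = sum(counts) + alpha
    r = rng.random() * total
    for k, c in enumerate(counts):
        r -= c
        if r < 0:
            return k
    return len(counts)  # open a new context

# Simulate a short context trajectory (hypothetical parameter values).
rng = random.Random(7)
counts, current = [1], 0
for _ in range(20):
    nxt = crp_next_context(counts, alpha=0.5, inertia=0.8, current=current, rng=rng)
    if nxt == len(counts):
        counts.append(0)
    counts[nxt] += 1
    current = nxt
```

The inertia term is what produces stretches of stable behavior punctuated by abrupt context switches, mirroring the "relatively stable world with rapid switches" assumption in the model.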
Constrained Bayesian Active Learning of Interference Channels in Cognitive Radio Networks
NASA Astrophysics Data System (ADS)
Tsakmalis, Anestis; Chatzinotas, Symeon; Ottersten, Bjorn
2018-02-01
In this paper, a sequential probing method for interference constraint learning is proposed to allow a centralized Cognitive Radio Network (CRN) accessing the frequency band of a Primary User (PU) in an underlay cognitive scenario with a designed PU protection specification. The main idea is that the CRN probes the PU and subsequently eavesdrops the reverse PU link to acquire the binary ACK/NACK packet. This feedback indicates whether the probing-induced interference is harmful or not and can be used to learn the PU interference constraint. The cognitive part of this sequential probing process is the selection of the power levels of the Secondary Users (SUs) which aims to learn the PU interference constraint with a minimum number of probing attempts while setting a limit on the number of harmful probing-induced interference events or equivalently of NACK packet observations over a time window. This constrained design problem is studied within the Active Learning (AL) framework and an optimal solution is derived and implemented with a sophisticated, accurate and fast Bayesian Learning method, the Expectation Propagation (EP). The performance of this solution is also demonstrated through numerical simulations and compared with modified versions of AL techniques we developed in earlier work.
Sequential causal inference: Application to randomized trials of adaptive treatment strategies
Dawson, Ree; Lavori, Philip W.
2009-01-01
Clinical trials that randomize subjects to decision algorithms, which adapt treatments over time according to individual response, have gained considerable interest as investigators seek designs that directly inform clinical decision making. We consider designs in which subjects are randomized sequentially at decision points, among adaptive treatment options under evaluation. We present a sequential method to estimate the comparative effects of the randomized adaptive treatments, which are formalized as adaptive treatment strategies. Our causal estimators are derived using Bayesian predictive inference. We use analytical and empirical calculations to compare the predictive estimators to (i) the ‘standard’ approach that allocates the sequentially obtained data to separate strategy-specific groups as would arise from randomizing subjects at baseline; (ii) the semi-parametric approach of marginal mean models that, under appropriate experimental conditions, provides the same sequential estimator of causal differences as the proposed approach. Simulation studies demonstrate that sequential causal inference offers substantial efficiency gains over the standard approach to comparing treatments, because the predictive estimators can take advantage of the monotone structure of shared data among adaptive strategies. We further demonstrate that the semi-parametric asymptotic variances, which are marginal ‘one-step’ estimators, may exhibit significant bias, in contrast to the predictive variances. We show that the conditions under which the sequential method is attractive relative to the other two approaches are those most likely to occur in real studies. PMID:17914714
Hepatitis disease detection using Bayesian theory
NASA Astrophysics Data System (ADS)
Maseleno, Andino; Hidayati, Rohmah Zahroh
2017-02-01
This paper presents hepatitis disease diagnosis using Bayesian theory, with the aim of providing a better understanding of the theory. In this research, we used Bayesian theory for detecting hepatitis disease and displaying the result of the diagnosis process. Bayesian theory was rediscovered and perfected by Laplace; the basic idea is to use known prior probabilities and conditional probability densities to calculate the corresponding posterior probability via Bayes' theorem, and then to use that posterior probability to draw inferences and make decisions. Bayesian methods combine existing knowledge (prior probabilities) with additional knowledge derived from new data (the likelihood function). The initial symptoms of hepatitis include malaise, fever and headache, so we compute the probability of hepatitis given the presence of malaise, fever, and headache. The results revealed that Bayesian theory successfully identified the existence of hepatitis disease.
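Under a naive conditional-independence assumption over the three symptoms, the posterior computation is a direct application of Bayes' theorem. The prior and conditional probabilities below are hypothetical placeholders, not values from the paper:

```python
def posterior_hepatitis(prior, p_sym_given_h, p_sym_given_not_h):
    """Bayes' theorem with conditionally independent symptoms: multiply the
    prior by each symptom's likelihood, then normalize over both hypotheses."""
    w_h = prior
    w_n = 1.0 - prior
    for ph, pn in zip(p_sym_given_h, p_sym_given_not_h):
        w_h *= ph
        w_n *= pn
    return w_h / (w_h + w_n)

# Hypothetical conditionals for malaise, fever, headache, with a 1% prior.
p = posterior_hepatitis(0.01, [0.9, 0.8, 0.7], [0.2, 0.1, 0.3])
```

Even with a low 1% prior, observing all three symptoms lifts the posterior far above the prior, which is the qualitative behavior the paper relies on for diagnosis.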
Sequential Monte Carlo for inference of latent ARMA time-series with innovations correlated in time
NASA Astrophysics Data System (ADS)
Urteaga, Iñigo; Bugallo, Mónica F.; Djurić, Petar M.
2017-12-01
We consider the problem of sequential inference of latent time-series with innovations correlated in time and observed via nonlinear functions. We accommodate time-varying phenomena with diverse properties by means of a flexible mathematical representation of the data. We characterize statistically such time-series by a Bayesian analysis of their densities. The density that describes the transition of the state from time t to the next time instant t+1 is used for implementation of novel sequential Monte Carlo (SMC) methods. We present a set of SMC methods for inference of latent ARMA time-series with innovations correlated in time for different assumptions in knowledge of parameters. The methods operate in a unified and consistent manner for data with diverse memory properties. We show the validity of the proposed approach by comprehensive simulations of the challenging stochastic volatility model.
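A bootstrap particle filter for a basic stochastic volatility model illustrates the kind of SMC machinery involved. This sketch assumes known parameters and uncorrelated innovations, i.e. a simpler model than the latent ARMA with time-correlated innovations treated in the paper; all parameter values and the example return series are hypothetical.

```python
import math
import random

def particle_filter_sv(ys, phi=0.95, sigma=0.3, beta=1.0, n=500, seed=0):
    """Bootstrap particle filter for a basic stochastic volatility model:
    x_t = phi*x_{t-1} + sigma*w_t,  y_t = beta*exp(x_t/2)*v_t, w,v ~ N(0,1).
    Returns the filtered mean of the log-volatility state at each time."""
    rng = random.Random(seed)
    # Initialize particles from the stationary distribution of the AR(1) state.
    xs = [rng.gauss(0.0, sigma / math.sqrt(1.0 - phi**2)) for _ in range(n)]
    means = []
    for y in ys:
        xs = [phi * x + rng.gauss(0.0, sigma) for x in xs]  # propagate
        variances = [(beta**2) * math.exp(x) for x in xs]
        # Unnormalized Gaussian likelihood of y under each particle's variance.
        ws = [math.exp(-0.5 * y * y / v) / math.sqrt(v) for v in variances]
        tot = sum(ws)
        ws = [w / tot for w in ws]
        means.append(sum(w * x for w, x in zip(ws, xs)))  # filtered mean
        xs = rng.choices(xs, weights=ws, k=n)             # multinomial resampling
    return means

filtered = particle_filter_sv([0.5, -1.2, 0.3, 2.0])
```

The paper's methods extend this recursion so that the transition density accounts for innovations that are themselves correlated in time.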
Chung, Sukhoon; Rhee, Hyunsill; Suh, Yongmoo
2010-01-01
Objectives This study sought to find answers to the following questions: 1) Can we predict whether a patient will revisit a healthcare center? 2) Can we anticipate diseases of patients who revisit the center? Methods For the first question, we applied 5 classification algorithms (decision tree, artificial neural network, logistic regression, Bayesian networks, and Naïve Bayes) and the stacking-bagging method for building classification models. To solve the second question, we performed sequential pattern analysis. Results We determined: 1) In general, the most influential variables which impact whether a patient of a public healthcare center will revisit it or not are personal burden, insurance bill, period of prescription, age, systolic pressure, name of disease, and postal code. 2) The best plain classification model is dependent on the dataset. 3) Based on average of classification accuracy, the proposed stacking-bagging method outperformed all traditional classification models and our sequential pattern analysis revealed 16 sequential patterns. Conclusions Classification models and sequential patterns can help public healthcare centers plan and implement healthcare service programs and businesses that are more appropriate to local residents, encouraging them to revisit public health centers. PMID:21818426
Bayesian methods for outliers detection in GNSS time series
NASA Astrophysics Data System (ADS)
Qianqian, Zhang; Qingming, Gui
2013-07-01
This article is concerned with the problem of detecting outliers in GNSS time series based on Bayesian statistical theory. Firstly, a new model is proposed to simultaneously detect different types of outliers, based on the idea of introducing a classification variable for each type of outlier; the problem of outlier detection is converted into the computation of the corresponding posterior probabilities, and an algorithm for computing the posterior probabilities based on a standard Gibbs sampler is designed. Secondly, we analyze in depth the reasons for masking and swamping when detecting patches of additive outliers, and an unmasking Bayesian method for detecting additive outlier patches is proposed based on an adaptive Gibbs sampler. Thirdly, the correctness of the theories and methods proposed above is illustrated with simulated data and then by analyzing real GNSS observations, such as cycle slip detection in carrier phase data. The examples illustrate that the Bayesian methods for outlier detection in GNSS time series proposed in this paper are capable of detecting not only isolated outliers but also additive outlier patches. Furthermore, they can be successfully used to process cycle slips in phase data, which solves the problem of small cycle slips.
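The classification-variable idea can be sketched for additive outliers in a simplified setting where the clean series is white noise with known variance; the indicator's posterior probability then has a closed form, whereas the full method samples the indicators with a Gibbs sampler. The variances, the prior outlier probability, and the residual series below are all hypothetical:

```python
import math

def norm_pdf(y, var):
    """Zero-mean Gaussian density with variance `var`."""
    return math.exp(-0.5 * y * y / var) / math.sqrt(2.0 * math.pi * var)

def outlier_prob(y, noise_var=1.0, outlier_var=25.0, p_out=0.05):
    """Posterior probability that observation y carries an additive outlier:
    a two-component Gaussian mixture with a binary classification variable."""
    w_out = p_out * norm_pdf(y, noise_var + outlier_var)
    w_clean = (1.0 - p_out) * norm_pdf(y, noise_var)
    return w_out / (w_out + w_clean)

# Hypothetical residual series with one gross error.
series = [0.1, -0.4, 0.3, 6.2, 0.2]
flags = [outlier_prob(y) > 0.5 for y in series]
```

In the full model, the indicators for neighboring epochs interact through the time-series structure, which is why patches of outliers can mask one another and why the adaptive Gibbs sampler is needed.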
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marzouk, Youssef
Predictive simulation of complex physical systems increasingly rests on the interplay of experimental observations with computational models. Key inputs, parameters, or structural aspects of models may be incomplete or unknown, and must be developed from indirect and limited observations. At the same time, quantified uncertainties are needed to qualify computational predictions in support of design and decision-making. In this context, Bayesian statistics provides a foundation for inference from noisy and limited data, but at prohibitive computational expense. This project intends to make rigorous predictive modeling *feasible* in complex physical systems, via accelerated and scalable tools for uncertainty quantification, Bayesian inference, and experimental design. Specific objectives are as follows: 1. Develop adaptive posterior approximations and dimensionality reduction approaches for Bayesian inference in high-dimensional nonlinear systems. 2. Extend accelerated Bayesian methodologies to large-scale *sequential* data assimilation, fully treating nonlinear models and non-Gaussian state and parameter distributions. 3. Devise efficient surrogate-based methods for Bayesian model selection and the learning of model structure. 4. Develop scalable simulation/optimization approaches to nonlinear Bayesian experimental design, for both parameter inference and model selection. 5. Demonstrate these inferential tools on chemical kinetic models in reacting flow, constructing and refining thermochemical and electrochemical models from limited data. Demonstrate Bayesian filtering on canonical stochastic PDEs and in the dynamic estimation of inhomogeneous subsurface properties and flow fields.
The Bayesian Learning Automaton — Empirical Evaluation with Two-Armed Bernoulli Bandit Problems
NASA Astrophysics Data System (ADS)
Granmo, Ole-Christoffer
The two-armed Bernoulli bandit (TABB) problem is a classical optimization problem in which an agent sequentially pulls one of two arms attached to a gambling machine, with each pull resulting in either a reward or a penalty. The reward probabilities of the arms are unknown, and thus one must balance exploiting existing knowledge about the arms against obtaining new information.
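A Bayesian learning automaton for the TABB is closely related to Thompson sampling; a minimal Beta-Bernoulli sketch (the reward probabilities below are invented for the demonstration, not taken from the paper) is:

```python
import random

# Beta-Bernoulli posterior sampling for the two-armed Bernoulli bandit:
# keep a Beta posterior per arm and pull the arm whose posterior sample is largest.
random.seed(1)
true_p = [0.3, 0.7]              # unknown reward probabilities (invented for the demo)
wins = [1, 1]
losses = [1, 1]                  # Beta(1, 1) uniform priors

pulls = [0, 0]
for _ in range(5000):
    samples = [random.betavariate(wins[a], losses[a]) for a in (0, 1)]
    arm = 0 if samples[0] > samples[1] else 1   # posterior sampling balances explore/exploit
    reward = random.random() < true_p[arm]
    wins[arm] += reward
    losses[arm] += 1 - reward
    pulls[arm] += 1

print(pulls)
```

Early on the wide posteriors make both arms plausible (exploration); as evidence accumulates the posterior of the better arm concentrates and it is pulled almost exclusively.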
Real-Time Detection and Tracking of Multiple People in Laser Scan Frames
NASA Astrophysics Data System (ADS)
Cui, J.; Song, X.; Zhao, H.; Zha, H.; Shibasaki, R.
This chapter presents an approach to detect and track multiple people robustly in real time using laser scan frames. The detection and tracking of people in real time is a problem that arises in a variety of different contexts. Examples include intelligent surveillance for security purposes, scene analysis for service robots, and crowd behavior analysis for human behavior studies. Over the last several years, an increasing number of laser-based people-tracking systems have been developed on both mobile robotics platforms and fixed platforms using one or multiple laser scanners. It has been shown that processing laser scanner data makes the tracker much faster and more robust than a vision-only approach in complex situations. In this chapter, we present a novel robust tracker to detect and track multiple people in a crowded and open area in real time. First, raw data are obtained that measure the two legs of each person at a height of 16 cm above the horizontal ground with multiple registered laser scanners. A stable feature is extracted using the accumulated distribution of successive laser frames. In this way, the noise that generates split and merged measurements is smoothed well, and the pattern of rhythmically swinging legs is utilized to extract each leg. Second, a probabilistic tracking model is presented, and a sequential inference process using the Bayesian rule is described. The sequential inference process is difficult to compute analytically, so two strategies are presented to simplify the computation. In the case of independent tracking, the Kalman filter is used with a more efficient measurement likelihood model based on a region coherency property. Finally, to deal with trajectory fragments, we present a concise approach to fuse a small amount of visual information from a synchronized video camera into the laser data. Evaluation with real data shows that the proposed method is robust and effective, achieving a significant improvement over existing laser-based trackers.
Architectures of Kepler Planet Systems with Approximate Bayesian Computation
NASA Astrophysics Data System (ADS)
Morehead, Robert C.; Ford, Eric B.
2015-12-01
The distribution of period normalized transit duration ratios among Kepler’s multiple transiting planet systems constrains the distributions of mutual orbital inclinations and orbital eccentricities. However, degeneracies in these parameters tied to the underlying number of planets in these systems complicate their interpretation. To untangle the true architecture of planet systems, the mutual inclination, eccentricity, and underlying planet number distributions must be considered simultaneously. The complexities of target selection, transit probability, detection biases, vetting, and follow-up observations make it impractical to write an explicit likelihood function. Approximate Bayesian computation (ABC) offers an intriguing path forward. In its simplest form, ABC generates a sample of trial population parameters from a prior distribution to produce synthetic datasets via a physically-motivated forward model. Samples are then accepted or rejected based on how close they come to reproducing the actual observed dataset to some tolerance. The accepted samples form a robust and useful approximation of the true posterior distribution of the underlying population parameters. We build on the considerable progress from the field of statistics to develop sequential algorithms for performing ABC in an efficient and flexible manner. We demonstrate the utility of ABC in exoplanet populations and present new constraints on the distributions of mutual orbital inclinations, eccentricities, and the relative number of short-period planets per star. We conclude with a discussion of the implications for other planet occurrence rate calculations, such as eta-Earth.
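The simulate-compare-accept loop described above can be sketched in a few lines (the forward model and summary statistic below are a toy stand-in, not the paper's transit-geometry forward model):

```python
import random

random.seed(2)

# ABC rejection sketch with a toy forward model: infer the width parameter of a
# population of (say) mutual inclinations from one observed summary statistic.
def forward_model(scale, n=200):
    draws = [abs(random.gauss(0.0, scale)) for _ in range(n)]
    return sum(draws) / n                 # summary statistic: mean of the sample

true_scale = 2.0
observed = forward_model(true_scale)      # stands in for the real observed statistic

accepted = []
tol = 0.1
for _ in range(20000):
    candidate = random.uniform(0.1, 5.0)  # draw from the prior
    if abs(forward_model(candidate) - observed) < tol:
        accepted.append(candidate)        # keep parameters that reproduce the data

posterior_mean = sum(accepted) / len(accepted)
print(round(posterior_mean, 2))
```

The accepted draws approximate the posterior without ever writing down a likelihood; sequential ABC schemes, as used in the paper, shrink the tolerance over successive populations to make this far more efficient.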
Exoplanet Biosignatures: Future Directions.
Walker, Sara I; Bains, William; Cronin, Leroy; DasSarma, Shiladitya; Danielache, Sebastian; Domagal-Goldman, Shawn; Kacar, Betul; Kiang, Nancy Y; Lenardic, Adrian; Reinhard, Christopher T; Moore, William; Schwieterman, Edward W; Shkolnik, Evgenya L; Smith, Harrison B
2018-06-01
We introduce a Bayesian method for guiding future directions for the detection of life on exoplanets. We describe empirical and theoretical work necessary to place constraints on the relevant likelihoods, including those emerging from a better understanding of stellar environments, planetary climate and geophysics, geochemical cycling, the universalities of physics and chemistry, the contingencies of evolutionary history, the properties of life as an emergent complex system, and the mechanisms driving the emergence of life. We provide examples of how the Bayesian formalism could guide future search strategies, including determining which observations to prioritize and deciding between targeted searches and larger, lower-resolution surveys that generate ensemble statistics, and we address how a Bayesian methodology could constrain the prior probability of life with or without a positive detection. Key Words: Exoplanets-Biosignatures-Life detection-Bayesian analysis. Astrobiology 18, 779-824.
Eser, Alexander; Primas, Christian; Reinisch, Sieglinde; Vogelsang, Harald; Novacek, Gottfried; Mould, Diane R; Reinisch, Walter
2018-01-30
Despite a robust exposure-response relationship of infliximab in inflammatory bowel disease (IBD), attempts to adjust dosing to individually predicted serum concentrations of infliximab (SICs) are lacking. Compared with labor-intensive conventional software for pharmacokinetic (PK) modeling (e.g., NONMEM), dashboards are easy-to-use programs incorporating complex Bayesian statistics to determine individual pharmacokinetics. We evaluated various infliximab detection assays and the number of samples needed to precisely forecast individual SICs using a Bayesian dashboard. We assessed long-term infliximab retention in patients being dosed concordantly versus discordantly with Bayesian dashboard recommendations. Three hundred eighty-two serum samples from 117 adult IBD patients on infliximab maintenance therapy were analyzed by 3 commercially available assays. Data from each assay were modeled using NONMEM and a Bayesian dashboard. PK parameter precision and residual variability were assessed. Forecast concentrations from both systems were compared with observed concentrations. Infliximab retention was assessed by prediction for dose intensification via the Bayesian dashboard versus real-life practice. Forecast precision of SICs varied between detection assays. At least 3 SICs from a reliable assay are needed for an accurate forecast. The Bayesian dashboard performed similarly to NONMEM in predicting SICs. Patients dosed concordantly with Bayesian dashboard recommendations had a significantly longer median drug survival than those dosed discordantly (51.5 versus 4.6 months, P < .0001). The Bayesian dashboard helps to assess the diagnostic performance of infliximab detection assays. Three, not single, SICs provide sufficient information for individualized dose adjustment when incorporated into the Bayesian dashboard. Treatment adjusted to forecasted SICs is associated with longer drug retention of infliximab. © 2018, The American College of Clinical Pharmacology.
NASA Astrophysics Data System (ADS)
Bermeo Varon, L. A.; Orlande, H. R. B.; Eliçabe, G. E.
2016-09-01
Particle filter methods have been widely used to solve inverse problems with sequential Bayesian inference in dynamic models, simultaneously estimating sequential state variables and fixed model parameters. These methods approximate the sequences of probability distributions of interest using a large set of random samples, in the presence of uncertainties in the model, measurements, and parameters. In this paper the main focus is the solution of the combined parameter and state estimation problem in radiofrequency hyperthermia with nanoparticles in a complex domain. This domain contains different tissues, such as muscle, pancreas, lungs, and small intestine, as well as a tumor loaded with iron oxide nanoparticles. The results indicate that excellent agreement between estimated and exact values is obtained.
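The mechanics of such combined estimation can be illustrated with a bootstrap particle filter on a scalar toy system (everything below, including the parameter jitter used to fight sample degeneracy, is a generic sketch and not the paper's bioheat model):

```python
import numpy as np

rng = np.random.default_rng(3)

# Bootstrap particle filter sketch for combined state/parameter estimation on a
# scalar surrogate of a heating problem: x_k = a x_{k-1} + u + w_k with constant
# input u, noisy measurements y_k = x_k + v_k, and the decay parameter a unknown.
a_true, u, q_sd, r_sd, T = 0.9, 1.0, 0.5, 0.5, 100
x = np.zeros(T)
y = np.zeros(T)
for k in range(1, T):
    x[k] = a_true * x[k - 1] + u + rng.normal(0, q_sd)
    y[k] = x[k] + rng.normal(0, r_sd)

Np = 2000
px = np.zeros(Np)                      # state particles
pa = rng.uniform(0.5, 1.0, Np)         # parameter particles drawn from the prior
for k in range(1, T):
    px = pa * px + u + rng.normal(0, q_sd, Np)            # predict
    w = np.exp(-0.5 * ((y[k] - px) / r_sd) ** 2)          # weight by likelihood
    w /= w.sum()
    idx = rng.choice(Np, size=Np, p=w)                    # resample jointly
    px, pa = px[idx], pa[idx]
    pa = pa + rng.normal(0, 0.005, Np)  # small jitter keeps parameter diversity

a_hat = float(pa.mean())
print(round(a_hat, 2))
```

Because states and parameters are resampled together, particles whose parameter predicts the measurements well survive, and the parameter cloud collapses around the true value as data accumulate.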
Astrocytic tracer dynamics estimated from [1-¹¹C]-acetate PET measurements.
Arnold, Andrea; Calvetti, Daniela; Gjedde, Albert; Iversen, Peter; Somersalo, Erkki
2015-12-01
We address the problem of estimating the unknown parameters of a model of tracer kinetics from sequences of positron emission tomography (PET) scan data using a statistical sequential algorithm for the inference of magnitudes of dynamic parameters. The method, based on Bayesian statistical inference, is a modification of a recently proposed particle filtering and sequential Monte Carlo algorithm, where instead of preassigning the accuracy in the propagation of each particle, we fix the time step and account for the numerical errors in the innovation term. We apply the algorithm to PET images of [1-¹¹C]-acetate-derived tracer accumulation, estimating the transport rates in a three-compartment model of astrocytic uptake and metabolism of the tracer for a cohort of 18 volunteers from 3 groups, corresponding to healthy control individuals, cirrhotic liver and hepatic encephalopathy patients. The distribution of the parameters for the individuals and for the groups presented within the Bayesian framework support the hypothesis that the parameters for the hepatic encephalopathy group follow a significantly different distribution than the other two groups. The biological implications of the findings are also discussed. © The Authors 2014. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
NASA Astrophysics Data System (ADS)
Ruggeri, Paolo; Irving, James; Gloaguen, Erwan; Holliger, Klaus
2013-04-01
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches to the regional scale still represents a major challenge, yet is critically important for the development of groundwater flow and contaminant transport models. To address this issue, we have developed a regional-scale hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure. The objective is to simulate the regional-scale distribution of a hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, our approach first involves linking the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. We present the application of this methodology to a pertinent field scenario, where we consider collocated high-resolution measurements of the electrical conductivity, measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, estimated from EM flowmeter and slug test measurements, in combination with low-resolution exhaustive electrical conductivity estimates obtained from dipole-dipole ERT measurements.
Bayesian Peptide Peak Detection for High Resolution TOF Mass Spectrometry.
Zhang, Jianqiu; Zhou, Xiaobo; Wang, Honghui; Suffredini, Anthony; Zhang, Lin; Huang, Yufei; Wong, Stephen
2010-11-01
In this paper, we address the issue of peptide ion peak detection for high resolution time-of-flight (TOF) mass spectrometry (MS) data. A novel Bayesian peptide ion peak detection method is proposed for TOF data with resolution of 10 000-15 000 full width at half-maximum (FWHW). MS spectra exhibit distinct characteristics at this resolution, which are captured in a novel parametric model. Based on the proposed parametric model, a Bayesian peak detection algorithm based on Markov chain Monte Carlo (MCMC) sampling is developed. The proposed algorithm is tested on both simulated and real datasets. The results show a significant improvement in detection performance over a commonly employed method. The results also agree with expert's visual inspection. Moreover, better detection consistency is achieved across MS datasets from patients with identical pathological condition.
Recursive Bayesian recurrent neural networks for time-series modeling.
Mirikitani, Derrick T; Nikolaev, Nikolay
2010-02-01
This paper develops a probabilistic approach to recursive second-order training of recurrent neural networks (RNNs) for improved time-series modeling. A general recursive Bayesian Levenberg-Marquardt algorithm is derived to sequentially update the weights and the covariance (Hessian) matrix. The main strengths of the approach are a principled handling of the regularization hyperparameters that leads to better generalization, and stable numerical performance. The framework involves the adaptation of a noise hyperparameter and local weight prior hyperparameters, which represent the noise in the data and the uncertainties in the model parameters. Experimental investigations using artificial and real-world data sets show that RNNs equipped with the proposed approach outperform standard real-time recurrent learning and extended Kalman training algorithms for recurrent networks, as well as other contemporary nonlinear neural models, on time-series modeling.
NASA Astrophysics Data System (ADS)
Vrugt, J. A.
2012-12-01
In the past decade much progress has been made in the treatment of uncertainty in earth systems modeling. Whereas initial approaches focused mostly on quantification of parameter and predictive uncertainty, recent methods attempt to disentangle the effects of parameter, forcing (input) data, model structural, and calibration data errors. In this talk I will highlight some of our recent work involving the theory, concepts, and applications of Bayesian parameter and/or state estimation. In particular, new methods for sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC) simulation will be presented, with emphasis on massively parallel distributed computing and quantification of model structural errors. The theoretical and numerical developments will be illustrated using model-data synthesis problems in hydrology, hydrogeology and geophysics.
Adaptive Parameter Estimation of Person Recognition Model in a Stochastic Human Tracking Process
NASA Astrophysics Data System (ADS)
Nakanishi, W.; Fuse, T.; Ishikawa, T.
2015-05-01
This paper aims at the estimation of the parameters of person recognition models using a sequential Bayesian filtering method. In many human tracking methods, the parameters of the models used to recognize the same person in successive frames are set in advance of the tracking process. In real situations, these parameters may change according to the observation conditions and the difficulty of predicting human positions. Thus, in this paper we formulate an adaptive parameter estimation using a general state space model. First, we explain how to formulate human tracking in a general state space model with its components. Then, referring to previous research, we use the Bhattacharyya coefficient to formulate the observation model of the general state space model, which corresponds to the person recognition model. The observation model in this paper is a function of the Bhattacharyya coefficient with one unknown parameter. Finally, we sequentially estimate this parameter on a real dataset under several settings. The results show that the sequential parameter estimation was successful and consistent with observation conditions such as occlusions.
Zaikin, Alexey; Míguez, Joaquín
2017-01-01
We compare three state-of-the-art Bayesian inference methods for the estimation of the unknown parameters in a stochastic model of a genetic network. In particular, we introduce a stochastic version of the paradigmatic synthetic multicellular clock model proposed by Ullner et al. (2007). By introducing dynamical noise in the model and assuming that the partial observations of the system are contaminated by additive noise, we enable a principled mechanism to represent experimental uncertainties in the synthesis of the multicellular system and pave the way for the design of probabilistic methods for the estimation of any unknowns in the model. Within this setup, we tackle the Bayesian estimation of a subset of the model parameters. Specifically, we compare three Monte Carlo based numerical methods for the approximation of the posterior probability density function of the unknown parameters given a set of partial and noisy observations of the system. The schemes we assess are the particle Metropolis-Hastings (PMH) algorithm, the nonlinear population Monte Carlo (NPMC) method and the approximate Bayesian computation sequential Monte Carlo (ABC-SMC) scheme. We present an extensive numerical simulation study, which shows that while the three techniques can effectively solve the problem, there are significant differences both in estimation accuracy and computational efficiency. PMID:28797087
Heuristic Bayesian segmentation for discovery of coexpressed genes within genomic regions.
Pehkonen, Petri; Wong, Garry; Törönen, Petri
2010-01-01
Segmentation aims to separate homogeneous areas from sequential data, and plays a central role in data mining. It has applications ranging from finance to molecular biology, where bioinformatics tasks such as genome data analysis are active application fields. In this paper, we present a novel application of segmentation to locating genomic regions with coexpressed genes. We aim at automated discovery of such regions without requiring user-given parameters. In order to perform the segmentation within a reasonable time, we use heuristics. Most heuristic segmentation algorithms require some decision on the number of segments. This is usually accomplished by using asymptotic model selection methods such as the Bayesian information criterion. Such methods are based on simplifications, which can limit their usage. In this paper, we propose a Bayesian model selection to choose the most appropriate result from heuristic segmentation. Our Bayesian model presents a simple prior for segmentation solutions with various segment numbers and a modified Dirichlet prior for modeling multinomial data. We show with various artificial data sets in our benchmark system that our model selection criterion has the best overall performance. The application of our method to yeast cell-cycle gene expression data reveals potential active and passive regions of the genome.
Bayesian approach for assessing non-inferiority in a three-arm trial with pre-specified margin.
Ghosh, Samiran; Ghosh, Santu; Tiwari, Ram C
2016-02-28
Non-inferiority trials are becoming increasingly popular for comparative effectiveness research. However, inclusion of the placebo arm, whenever possible, gives rise to a three-arm trial, which requires less burdensome assumptions than a standard two-arm non-inferiority trial. Most past developments in three-arm trials define a pre-specified fraction of the unknown effect size of the reference drug, that is, without directly specifying a fixed non-inferiority margin. In some recent developments, however, a more direct approach is considered, with a pre-specified fixed margin, albeit in the frequentist setup. The Bayesian paradigm provides a natural path to integrate historical and current trials' information via sequential learning. In this paper, we propose a Bayesian approach for simultaneous testing of non-inferiority and assay sensitivity in a three-arm trial with normal responses. For the experimental arm, in the absence of historical information, non-informative priors are assumed under two situations, namely when (i) the variance is known and (ii) the variance is unknown. A Bayesian decision criterion is derived and compared with the frequentist method using simulation studies. Finally, several published clinical trial examples are reanalyzed to demonstrate the benefit of the proposed procedure. Copyright © 2015 John Wiley & Sons, Ltd.
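For the known-variance case, the decision quantities are simple posterior probabilities; a Monte Carlo sketch (all numbers, including the 0.975 threshold, the margin, and the sample means, are illustrative rather than taken from the paper):

```python
import random

random.seed(4)

# Known-variance sketch: with non-informative priors, the posterior of each arm
# mean is Normal(sample mean, sigma^2 / n).  Non-inferiority requires
# P(mu_E - mu_R > -margin) to be high, and assay sensitivity requires
# P(mu_R - mu_P > 0) to be high.
sigma, n, margin = 2.0, 100, 1.0
means = {"experimental": 4.9, "reference": 5.0, "placebo": 2.0}
sd_post = sigma / n ** 0.5

M = 100_000
hits_ni = hits_as = 0
for _ in range(M):
    mu_e = random.gauss(means["experimental"], sd_post)
    mu_r = random.gauss(means["reference"], sd_post)
    mu_p = random.gauss(means["placebo"], sd_post)
    hits_ni += mu_e - mu_r > -margin
    hits_as += mu_r - mu_p > 0

p_ni, p_as = hits_ni / M, hits_as / M
declare = p_ni > 0.975 and p_as > 0.975
print(round(p_ni, 3), round(p_as, 3), declare)
```

The same two posterior probabilities could be obtained in closed form here; the Monte Carlo form generalizes directly to the unknown-variance case, where the posteriors are Student-t rather than normal.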
Safner, T.; Miller, M.P.; McRae, B.H.; Fortin, M.-J.; Manel, S.
2011-01-01
Recently, techniques available for identifying clusters of individuals or boundaries between clusters using genetic data from natural populations have expanded rapidly. Consequently, there is a need to evaluate these different techniques. We used spatially-explicit simulation models to compare three spatial Bayesian clustering programs and two edge detection methods. Spatially-structured populations were simulated where a continuous population was subdivided by barriers. We evaluated the ability of each method to correctly identify boundary locations while varying: (i) time after divergence, (ii) strength of isolation by distance, (iii) level of genetic diversity, and (iv) amount of gene flow across barriers. To further evaluate the methods' effectiveness in detecting genetic clusters in natural populations, we used previously published data on North American pumas and a European shrub. Our results show that with simulated and empirical data, the Bayesian spatial clustering algorithms outperformed direct edge detection methods. All methods incorrectly detected boundaries in the presence of strong patterns of isolation by distance. Based on this finding, we support the application of Bayesian spatial clustering algorithms for boundary detection in empirical datasets, with necessary tests for the influence of isolation by distance. © 2011 by the authors; licensee MDPI, Basel, Switzerland.
Bayesian cloud detection for MERIS, AATSR, and their combination
NASA Astrophysics Data System (ADS)
Hollstein, A.; Fischer, J.; Carbajal Henken, C.; Preusker, R.
2014-11-01
A broad range of different Bayesian cloud detection schemes is applied to measurements from the Medium Resolution Imaging Spectrometer (MERIS), the Advanced Along-Track Scanning Radiometer (AATSR), and their combination. The cloud masks were designed to be numerically efficient and suited to the processing of large amounts of data. Results from the classical and naive approaches to Bayesian cloud masking are discussed for MERIS and AATSR as well as for their combination. A sensitivity study on the resolution of multidimensional histograms, which were post-processed by Gaussian smoothing, shows how theoretically insufficient amounts of truth data can be used to set up accurate classical Bayesian cloud masks. Sets of exploited features from single and derived channels are numerically optimized, and results for naive and classical Bayesian cloud masks are presented. The application of the Bayesian approach is discussed in terms of reproducing existing algorithms, enhancing existing algorithms, increasing the robustness of existing algorithms, and setting up new classification schemes based on manually classified scenes.
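A histogram-based naive Bayesian mask of this kind can be sketched as follows (two synthetic reflectance-like features stand in for real MERIS/AATSR channels; the independence assumption across features is exactly the "naive" approximation discussed above):

```python
import numpy as np

rng = np.random.default_rng(5)

# Per-feature histograms of labeled "truth" pixels approximate P(feature | cloudy)
# and P(feature | clear); features are combined under naive independence.
n = 5000
cloudy = rng.normal([0.7, 0.6], 0.1, size=(n, 2))
clear = rng.normal([0.3, 0.2], 0.1, size=(n, 2))

bins = np.linspace(-0.5, 1.5, 41)                  # 40 histogram bins per feature
h_cloudy = [np.histogram(cloudy[:, f], bins=bins, density=True)[0] + 1e-9 for f in range(2)]
h_clear = [np.histogram(clear[:, f], bins=bins, density=True)[0] + 1e-9 for f in range(2)]

def p_cloudy(x, prior=0.5):
    # posterior cloud probability for a pixel with feature vector x
    idx = np.clip(np.digitize(x, bins) - 1, 0, 39)
    like_c = prior * h_cloudy[0][idx[0]] * h_cloudy[1][idx[1]]
    like_f = (1 - prior) * h_clear[0][idx[0]] * h_clear[1][idx[1]]
    return like_c / (like_c + like_f)

print(round(float(p_cloudy(np.array([0.72, 0.63]))), 3),
      round(float(p_cloudy(np.array([0.28, 0.18]))), 3))
```

A classical (non-naive) mask would instead build one multidimensional histogram over all features jointly, which is where the abstract's resolution and smoothing trade-offs arise.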
Toribo, S.G.; Gray, B.R.; Liang, S.
2011-01-01
The N-mixture model proposed by Royle in 2004 may be used to approximate the abundance and detection probability of animal species in a given region. In 2006, Royle and Dorazio discussed the advantages of using a Bayesian approach in modelling animal abundance and occurrence using a hierarchical N-mixture model. N-mixture models assume replication on sampling sites, an assumption that may be violated when the site is not closed to changes in abundance during the survey period or when nominal replicates are defined spatially. In this paper, we studied the robustness of a Bayesian approach to fitting the N-mixture model for pseudo-replicated count data. Our simulation results showed that the Bayesian estimates for abundance and detection probability are slightly biased when the actual detection probability is small and are sensitive to the presence of extra variability within local sites.
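The structure of the N-mixture model can be made concrete with a small grid-posterior sketch (a simplified constant-p version with flat priors, not the hierarchical Bayesian fit discussed in the abstract):

```python
import math
import numpy as np

rng = np.random.default_rng(6)

# Toy N-mixture sketch (cf. Royle 2004): replicated counts y_ij ~ Binomial(N_i, p)
# with latent site abundance N_i ~ Poisson(lam); the joint posterior over (lam, p)
# is evaluated on a grid with flat priors by marginalizing out each N_i.
lam_true, p_true, sites, visits = 5.0, 0.5, 40, 3
N = rng.poisson(lam_true, sites)
y = rng.binomial(N[:, None], p_true, size=(sites, visits))

def site_lik(yi, lam, p, n_max=30):
    total = 0.0
    for n in range(int(yi.max()), n_max + 1):      # marginalize latent abundance
        pois = math.exp(-lam) * lam ** n / math.factorial(n)
        prod = 1.0
        for yv in yi:
            prod *= math.comb(n, int(yv)) * p ** int(yv) * (1 - p) ** (n - int(yv))
        total += pois * prod
    return total

lams = np.arange(2.0, 9.01, 0.25)
ps = np.arange(0.10, 0.91, 0.05)
log_post = np.zeros((len(lams), len(ps)))
for i, lam in enumerate(lams):
    for j, p in enumerate(ps):
        log_post[i, j] = sum(math.log(site_lik(yi, lam, p)) for yi in y)

i, j = np.unravel_index(log_post.argmax(), log_post.shape)
lam_hat, p_hat = float(lams[i]), float(ps[j])
print(round(lam_hat, 2), round(p_hat, 2))
```

The posterior surface shows the weak identifiability the abstract alludes to: the product lam*p is pinned down by the mean count, while lam and p individually trade off against each other, which is why small detection probabilities yield biased estimates.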
Shen, Yanna; Cooper, Gregory F
2012-09-01
This paper investigates Bayesian modeling of known and unknown causes of events in the context of disease-outbreak detection. We introduce a multivariate Bayesian approach that models multiple evidential features of every person in the population. This approach models and detects (1) known diseases (e.g., influenza and anthrax) by using informative prior probabilities and (2) unknown diseases (e.g., a new, highly contagious respiratory virus that has never been seen before) by using relatively non-informative prior probabilities. We report the results of simulation experiments which support that this modeling method can improve the detection of new disease outbreaks in a population. A contribution of this paper is that it introduces a multivariate Bayesian approach for jointly modeling both known and unknown causes of events. Such modeling has general applicability in domains where the space of known causes is incomplete. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
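At its core, the known-versus-unknown-cause modeling reduces to Bayes' rule with one deliberately vague alternative; a toy sketch with invented numbers:

```python
# Posterior over outbreak causes: informative priors and sharp likelihoods for
# known diseases, and a relatively non-informative "unknown" cause whose
# likelihood is spread thinly over many possible symptom patterns.
# All numbers are invented for illustration.
priors = {"influenza": 0.90, "anthrax": 0.05, "unknown": 0.05}
# P(observed symptom pattern | cause)
likelihood = {"influenza": 0.02, "anthrax": 0.001, "unknown": 0.10}

evidence = sum(priors[c] * likelihood[c] for c in priors)
posterior = {c: priors[c] * likelihood[c] / evidence for c in priors}
print({c: round(v, 3) for c, v in posterior.items()})
# → {'influenza': 0.781, 'anthrax': 0.002, 'unknown': 0.217}
```

When the observed pattern fits none of the known diseases well, the vague "unknown" cause absorbs posterior mass, which is the mechanism by which such a model can flag a never-before-seen outbreak.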
NASA Astrophysics Data System (ADS)
Kopka, Piotr; Wawrzynczak, Anna; Borysiewicz, Mieczyslaw
2016-11-01
In this paper the Bayesian methodology known as Approximate Bayesian Computation (ABC) is applied to the problem of atmospheric contamination source identification. The input data to the algorithm are the concentrations of the released substance arriving on-line from a distributed sensor network. This paper presents the Sequential ABC algorithm in detail and tests its efficiency in estimating the probabilistic distributions of the atmospheric release parameters of a mobile contamination source. The developed algorithms are tested using data from the Over-Land Atmospheric Diffusion (OLAD) field tracer experiment. The paper demonstrates estimation of seven parameters characterizing the contamination source, i.e.: the contamination source starting position (x,y), the direction of motion of the source (d), its velocity (v), the release rate (q), the start time of the release (ts) and its duration (td). Newly arriving concentrations dynamically update the probability distributions of the search parameters. The atmospheric dispersion Second-order Closure Integrated PUFF (SCIPUFF) model is used as the forward model to predict the concentrations at the sensor locations.
A Gibbs sampler for Bayesian analysis of site-occupancy data
Dorazio, Robert M.; Rodriguez, Daniel Taylor
2012-01-01
1. A Bayesian analysis of site-occupancy data containing covariates of species occurrence and species detection probabilities is usually completed using Markov chain Monte Carlo methods in conjunction with software programs that can implement those methods for any statistical model, not just site-occupancy models. Although these software programs are quite flexible, considerable experience is often required to specify a model and to initialize the Markov chain so that summaries of the posterior distribution can be estimated efficiently and accurately. 2. As an alternative to these programs, we develop a Gibbs sampler for Bayesian analysis of site-occupancy data that include covariates of species occurrence and species detection probabilities. This Gibbs sampler is based on a class of site-occupancy models in which probabilities of species occurrence and detection are specified as probit-regression functions of site- and survey-specific covariate measurements. 3. To illustrate the Gibbs sampler, we analyse site-occupancy data of the blue hawker, Aeshna cyanea (Odonata, Aeshnidae), a common dragonfly species in Switzerland. Our analysis includes a comparison of results based on Bayesian and classical (non-Bayesian) methods of inference. We also provide code (based on the R software program) for conducting Bayesian and classical analyses of site-occupancy data.
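A stripped-down version of such a sampler, with constant occupancy and detection probabilities and conjugate Beta priors in place of the paper's probit-regression functions, shows the two alternating steps:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simplified occupancy Gibbs sketch: constant psi (occupancy) and p (detection)
# with Beta(1, 1) priors, so both full conditionals are conjugate.
psi_true, p_true, S, J = 0.6, 0.4, 200, 5
z_true = rng.random(S) < psi_true
y = rng.binomial(J, p_true * z_true)        # detections out of J surveys per site

n_iter, burn = 2000, 500
psi, p = 0.5, 0.5
draws = []
for it in range(n_iter):
    # 1. latent occupancy: sites with detections are surely occupied; otherwise
    #    P(z = 1 | y = 0) = psi (1-p)^J / (psi (1-p)^J + 1 - psi)
    prob_z0 = psi * (1 - p) ** J / (psi * (1 - p) ** J + 1 - psi)
    z = np.where(y > 0, True, rng.random(S) < prob_z0)
    # 2. conjugate Beta updates given the latent occupancy states
    psi = rng.beta(1 + z.sum(), 1 + S - z.sum())
    p = rng.beta(1 + y[z].sum(), 1 + z.sum() * J - y[z].sum())
    if it >= burn:
        draws.append((psi, p))

psi_hat = float(np.mean([d[0] for d in draws]))
p_hat = float(np.mean([d[1] for d in draws]))
print(round(psi_hat, 2), round(p_hat, 2))
```

The probit-regression version in the paper replaces step 2 with an Albert-Chib style latent-variable update for the regression coefficients, but the alternation between latent occupancy states and parameters is the same.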
Microcomputer Network for Computerized Adaptive Testing (CAT)
1984-03-01
PRDC TR 84-33 \\Q.�d-33- \\ MICROCOMPUTER NETWOJlt FOR COMPUTERIZED ADAPTIVE TESTING ( CAT ) Baldwin Quan Thomas A . Park Gary Sandahl John H...ACCEIIION NO NPRDC TR 84-33 4. TITLE (-d Sul>tlllo) MICROCOMP UTER NETWORK FOR COMPUTERIZED ADA PTIVE TESTING ( CAT ) 1. Q B. uan T. A . Park...adaptive testing ( CAT ) Bayesian sequential testing 20. ABSTitACT (Continuo on ro•••• aide II noco .. _, _., ld-tlly ,.,. t.loclt _._.) DO Computerized
Carbon Nanotube Growth Rate Regression using Support Vector Machines and Artificial Neural Networks
2014-03-27
intensity D peak. Reprinted with permission from [38]. The SVM classifier is trained using custom written Java code leveraging the Sequential Minimal...Society Encog is a machine learning framework for Java , C++ and .Net applications that supports Bayesian Networks, Hidden Markov Models, SVMs and ANNs [13...SVM classifiers are trained using Weka libraries and leveraging custom written Java code. The data set is created as an Attribute Relationship File
2010-10-01
...bodies becomes greater as surface asperities wear down (Hutchings, 1992). We characterize friction damage by a change in the friction coefficient... points are such a set, and satisfy an additional constraint in which the skew (third moment) is minimized, which reduces the average error for a... On sequential Monte Carlo sampling methods for Bayesian filtering. Statistics and Computing, 10, 197-208. Hutchings, I. M. (1992). Tribology: Friction...
A hidden Markov model for decoding and the analysis of replay in spike trains.
Box, Marc; Jones, Matt W; Whiteley, Nick
2016-12-01
We present a hidden Markov model that describes variation in an animal's position associated with varying levels of activity in action potential spike trains of individual place cell neurons. The model incorporates a coarse-graining of position, which we find to be a more parsimonious description of the system than other models. We use a sequential Monte Carlo algorithm for Bayesian inference of model parameters, including the state space dimension, and we explain how to estimate position from spike train observations (decoding). We obtain greater accuracy over other methods in the conditions of high temporal resolution and small neuronal sample size. We also present a novel, model-based approach to the study of replay: the expression of spike train activity related to behaviour during times of motionlessness or sleep, thought to be integral to the consolidation of long-term memories. We demonstrate how we can detect the time, information content and compression rate of replay events in simulated and real hippocampal data recorded from rats in two different environments, and verify the correlation between the times of detected replay events and of sharp wave/ripples in the local field potential.
Dynamic Bayesian network modeling for longitudinal brain morphometry
Chen, Rong; Resnick, Susan M; Davatzikos, Christos; Herskovits, Edward H
2011-01-01
Identifying interactions among brain regions from structural magnetic-resonance images presents one of the major challenges in computational neuroanatomy. We propose a Bayesian data-mining approach to the detection of longitudinal morphological changes in the human brain. Our method uses a dynamic Bayesian network to represent evolving inter-regional dependencies. The major advantage of dynamic Bayesian network modeling is that it can represent complicated interactions among temporal processes. We validated our approach by analyzing a simulated atrophy study, and found that this approach requires only a small number of samples to detect the ground-truth temporal model. We further applied dynamic Bayesian network modeling to a longitudinal study of normal aging and mild cognitive impairment — the Baltimore Longitudinal Study of Aging. We found that interactions among regional volume-change rates for the mild cognitive impairment group are different from those for the normal-aging group. PMID:21963916
Multilevel Sequential Monte Carlo Samplers for Normalizing Constants
Moral, Pierre Del; Jasra, Ajay; Law, Kody J. H.; ...
2017-08-24
This article considers the sequential Monte Carlo (SMC) approximation of ratios of normalizing constants associated to posterior distributions which in principle rely on continuum models. Therefore, the Monte Carlo estimation error and the discrete approximation error must be balanced. A multilevel strategy is utilized to substantially reduce the cost to obtain a given error level in the approximation as compared to standard estimators. Two estimators are considered and relative variance bounds are given. The theoretical results are numerically illustrated for two Bayesian inverse problems arising from elliptic partial differential equations (PDEs). The examples involve the inversion of observations of the solution of (i) a 1-dimensional Poisson equation to infer the diffusion coefficient, and (ii) a 2-dimensional Poisson equation to infer the external forcing.
NASA Astrophysics Data System (ADS)
Alehosseini, Ali; A. Hejazi, Maryam; Mokhtari, Ghassem; B. Gharehpetian, Gevork; Mohammadi, Mohammad
2015-06-01
In this paper, the Bayesian classifier is used to detect and classify the radial deformation and axial displacement of transformer windings. The proposed method is tested on a transformer model for different extents of radial deformation and axial displacement. In this method, an ultra-wideband (UWB) signal is sent to a simplified model of the transformer winding. The signal received from the winding model is recorded and used for training and testing the Bayesian classifier in different axial-displacement and radial-deformation states of the winding. It is shown that the proposed method has good accuracy in detecting and classifying the axial displacement and radial deformation of the winding.
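The abstract does not specify the classifier's internals or the features extracted from the received UWB signal. A common realization is a Gaussian naive Bayes classifier; the sketch below uses synthetic two-feature data as a stand-in for real winding measurements, so all numbers are assumptions.

```python
import math
import random
from collections import defaultdict

def fit_gnb(X, labels):
    """Fit class log-priors and per-feature Gaussian (mean, variance) parameters."""
    by_class = defaultdict(list)
    for x, c in zip(X, labels):
        by_class[c].append(x)
    model = {}
    for c, rows in by_class.items():
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        varis = [sum((v - m) ** 2 for v in col) / n + 1e-6
                 for col, m in zip(zip(*rows), means)]
        model[c] = (math.log(n / len(X)), means, varis)
    return model

def predict_gnb(model, x):
    """Return the class with the highest log-posterior."""
    best, best_lp = None, -math.inf
    for c, (logprior, means, varis) in model.items():
        lp = logprior + sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
                            for xi, m, v in zip(x, means, varis))
        if lp > best_lp:
            best, best_lp = c, lp
    return best

# Synthetic stand-ins for features of the received UWB signal (hypothetical).
rng = random.Random(0)
X, labels = [], []
for c, (m1, m2) in [("axial", (1.0, 0.2)), ("radial", (3.0, 1.5))]:
    for _ in range(50):
        X.append([rng.gauss(m1, 0.4), rng.gauss(m2, 0.4)])
        labels.append(c)
model = fit_gnb(X, labels)
accuracy = sum(predict_gnb(model, x) == c for x, c in zip(X, labels)) / len(X)
```

On well-separated synthetic classes the training accuracy is essentially perfect; real winding-state classes would overlap more.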
NASA Astrophysics Data System (ADS)
Zaidi, W. H.; Faber, W. R.; Hussein, I. I.; Mercurio, M.; Roscoe, C. W. T.; Wilkins, M. P.
One of the most challenging problems in treating space debris is the characterization of the orbit of a newly detected and uncorrelated measurement. The admissible region is defined as the set of physically acceptable orbits (i.e. orbits with negative energies) consistent with one or more measurements of a Resident Space Object (RSO). Given additional constraints on the orbital semi-major axis, eccentricity, etc., the admissible region can be constrained, resulting in the constrained admissible region (CAR). Based on known statistics of the measurement process, one can replace hard constraints with a Probabilistic Admissible Region (PAR), a concept introduced in 2014 as a Monte Carlo uncertainty representation approach using topocentric spherical coordinates. Ultimately, a PAR can be used to initialize a sequential Bayesian estimator and to prioritize orbital propagations in a multiple hypothesis tracking framework such as Finite Set Statistics (FISST). To date, measurements used to build the PAR have been collected concurrently and by the same sensor. In this paper, we allow measurements to have different time stamps. We also allow for non-collocated sensor collections; optical data can be collected by one sensor at a given time and radar data collected by another sensor located elsewhere. We then revisit first principles to link asynchronous optical and radar measurements using both the conservation of specific orbital energy and specific orbital angular momentum. The result from the proposed algorithm is an implicit-Bayesian and non-Gaussian representation of orbital state uncertainty.
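The bound-orbit constraint at the core of the admissible region follows directly from the specific orbital energy. A minimal sketch of that single check, using Earth's gravitational parameter; the full PAR construction (linking angles, rates, and both conserved quantities across sensors) is far richer than this.

```python
import math

MU_EARTH = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def specific_energy(r_km, v_km_s):
    """Specific orbital energy E = v^2/2 - mu/r, in km^2/s^2."""
    return 0.5 * v_km_s ** 2 - MU_EARTH / r_km

def is_admissible(r_km, v_km_s):
    """A hypothesized range/velocity pair is physically acceptable
    (a bound orbit) if and only if E < 0."""
    return specific_energy(r_km, v_km_s) < 0.0

# A circular orbit at r = 7000 km is bound; 12 km/s there exceeds escape velocity.
v_circ = math.sqrt(MU_EARTH / 7000.0)
```

For the circular case the energy reduces to -mu/(2r), the familiar vis-viva result.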
Bayesian methods including nonrandomized study data increased the efficiency of postlaunch RCTs.
Schmidt, Amand F; Klugkist, Irene; Klungel, Olaf H; Nielen, Mirjam; de Boer, Anthonius; Hoes, Arno W; Groenwold, Rolf H H
2015-04-01
Findings from nonrandomized studies on safety or efficacy of treatment in patient subgroups may trigger postlaunch randomized clinical trials (RCTs). In the analysis of such RCTs, results from nonrandomized studies are typically ignored. This study explores the trade-off between bias and power of Bayesian RCT analysis incorporating information from nonrandomized studies. A simulation study was conducted to compare frequentist with Bayesian analyses using noninformative and informative priors in their ability to detect interaction effects. In simulated subgroups, the effect of a hypothetical treatment differed between subgroups (odds ratio 1.00 vs. 2.33). Simulations varied in sample size, proportions of the subgroups, and specification of the priors. As expected, the results for the informative Bayesian analyses were more biased than those from the noninformative Bayesian analysis or frequentist analysis. However, because of a reduction in posterior variance, informative Bayesian analyses were generally more powerful to detect an effect. In scenarios where the informative priors were in the opposite direction of the RCT data, type 1 error rates could be 100% and power 0%. Bayesian methods incorporating data from nonrandomized studies can meaningfully increase power of interaction tests in postlaunch RCTs. Copyright © 2015 Elsevier Inc. All rights reserved.
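The bias-power trade-off can be seen in a conjugate beta-binomial sketch with hypothetical numbers: encode the nonrandomized evidence about a subgroup's event rate as a Beta prior and update with the RCT data. The informative posterior has smaller variance (more power) but is pulled toward the prior (potential bias). The study's simulated interaction model is richer than this.

```python
def beta_update(a, b, events, nonevents):
    """Posterior Beta parameters after observing binomial data."""
    return a + events, b + nonevents

def beta_mean_var(a, b):
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1.0))
    return mean, var

# Hypothetical subgroup: 30 events among 100 RCT patients.
noninf = beta_update(1.0, 1.0, 30, 70)    # flat prior
inform = beta_update(21.0, 70.0, 30, 70)  # prior from a nonrandomized study (~23% rate)

mean_n, var_n = beta_mean_var(*noninf)
mean_i, var_i = beta_mean_var(*inform)
```

If the nonrandomized prior points the wrong way, the same mechanism that shrinks the variance drags the posterior mean off target, which is the failure mode the abstract warns about.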
Holmes, Tyson H.; Lewis, David B.
2014-01-01
Bayesian estimation techniques offer a systematic and quantitative approach for synthesizing data drawn from the literature to model immunological systems. As detailed here, the practitioner begins with a theoretical model and then sequentially draws information from source data sets and/or published findings to inform estimation of model parameters. Options are available to weigh these various sources of information differentially per objective measures of their corresponding scientific strengths. This approach is illustrated in depth through a carefully worked example for a model of decline in T-cell receptor excision circle content of peripheral T cells during development and aging. Estimates from this model indicate that 21 years of age is plausible for the developmental timing of mean age of onset of decline in T-cell receptor excision circle content of peripheral T cells. PMID:25179832
Application of Bayesian Methods for Detecting Fraudulent Behavior on Tests
ERIC Educational Resources Information Center
Sinharay, Sandip
2018-01-01
Producers and consumers of test scores are increasingly concerned about fraudulent behavior before and during the test. There exist several statistical or psychometric methods for detecting fraudulent behavior on tests. This paper provides a review of the Bayesian approaches among them. Four hitherto-unpublished real data examples are provided to…
Varughese, Eunice A.; Brinkman, Nichole E; Anneken, Emily M; Cashdollar, Jennifer S; Fout, G. Shay; Furlong, Edward T.; Kolpin, Dana W.; Glassmeyer, Susan T.; Keely, Scott P
2017-01-01
incorporated into a Bayesian model to more accurately determine viral load in both source and treated water. Results of the Bayesian model indicated that viruses are present in source water and treated water. By using a Bayesian framework that incorporates inhibition, as well as many other parameters that affect viral detection, this study offers an approach for more accurately estimating the occurrence of viral pathogens in environmental waters.
Cuevas Rivera, Dario; Bitzer, Sebastian; Kiebel, Stefan J.
2015-01-01
The olfactory information that is received by the insect brain is encoded in the form of spatiotemporal patterns in the projection neurons of the antennal lobe. These dense and overlapping patterns are transformed into a sparse code in Kenyon cells in the mushroom body. Although it is clear that this sparse code is the basis for rapid categorization of odors, it is yet unclear how the sparse code in Kenyon cells is computed and what information it represents. Here we show that this computation can be modeled by sequential firing rate patterns using Lotka-Volterra equations and Bayesian online inference. This new model can be understood as an ‘intelligent coincidence detector’, which robustly and dynamically encodes the presence of specific odor features. We found that the model is able to qualitatively reproduce experimentally observed activity in both the projection neurons and the Kenyon cells. In particular, the model explains mechanistically how sparse activity in the Kenyon cells arises from the dense code in the projection neurons. The odor classification performance of the model proved to be robust against noise and time jitter in the observed input sequences. As in recent experimental results, we found that recognition of an odor happened very early during stimulus presentation in the model. Critically, by using the model, we found surprising but simple computational explanations for several experimental phenomena. PMID:26451888
Probabilistic Model for Untargeted Peak Detection in LC-MS Using Bayesian Statistics.
Woldegebriel, Michael; Vivó-Truyols, Gabriel
2015-07-21
We introduce a novel Bayesian probabilistic peak detection algorithm for liquid chromatography-mass spectroscopy (LC-MS). The final probabilistic result allows the user to make a final decision about which points in a chromatogram are affected by a chromatographic peak and which ones are only affected by noise. The use of probabilities contrasts with the traditional method in which a binary answer is given, relying on a threshold. By contrast, with the Bayesian peak detection presented here, the values of probability can be further propagated into other preprocessing steps, which will increase (or decrease) the importance of chromatographic regions into the final results. The present work is based on the use of the statistical overlap theory of component overlap from Davis and Giddings (Davis, J. M.; Giddings, J. Anal. Chem. 1983, 55, 418-424) as prior probability in the Bayesian formulation. The algorithm was tested on LC-MS Orbitrap data and was able to successfully distinguish chemical noise from actual peaks without any data preprocessing.
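The point-wise decision can be written as a two-hypothesis Bayes update. The toy below uses Gaussian likelihoods and a constant prior peak probability standing in for the Davis-Giddings overlap-theory prior the paper actually uses; all parameter values are assumptions.

```python
from statistics import NormalDist

def peak_probability(y, prior_peak=0.1, noise_mu=0.0, peak_mu=5.0, sigma=1.0):
    """Posterior probability that intensity y comes from a chromatographic peak
    rather than noise, for a two-hypothesis Gaussian model."""
    like_noise = NormalDist(noise_mu, sigma).pdf(y)
    like_peak = NormalDist(peak_mu, sigma).pdf(y)
    num = like_peak * prior_peak
    return num / (num + like_noise * (1.0 - prior_peak))
```

Returning this probability, rather than thresholding it, is what lets downstream preprocessing steps weight chromatogram regions as the abstract describes.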
Exoplanet Biosignatures: Future Directions
Bains, William; Cronin, Leroy; DasSarma, Shiladitya; Danielache, Sebastian; Domagal-Goldman, Shawn; Kacar, Betul; Kiang, Nancy Y.; Lenardic, Adrian; Reinhard, Christopher T.; Moore, William; Schwieterman, Edward W.; Shkolnik, Evgenya L.; Smith, Harrison B.
2018-01-01
We introduce a Bayesian method for guiding future directions for detection of life on exoplanets. We describe empirical and theoretical work necessary to place constraints on the relevant likelihoods, including those emerging from better understanding stellar environment, planetary climate and geophysics, geochemical cycling, the universalities of physics and chemistry, the contingencies of evolutionary history, the properties of life as an emergent complex system, and the mechanisms driving the emergence of life. We provide examples for how the Bayesian formalism could guide future search strategies, including determining observations to prioritize or deciding between targeted searches or larger lower resolution surveys to generate ensemble statistics and address how a Bayesian methodology could constrain the prior probability of life with or without a positive detection. Key Words: Exoplanets—Biosignatures—Life detection—Bayesian analysis. Astrobiology 18, 779–824. PMID:29938538
Multiuser signal detection using sequential decoding
NASA Astrophysics Data System (ADS)
Xie, Zhenhua; Rushforth, Craig K.; Short, Robert T.
1990-05-01
The application of sequential decoding to the detection of data transmitted over the additive white Gaussian noise channel by K asynchronous transmitters using direct-sequence spread-spectrum multiple access is considered. A modification of Fano's (1963) sequential-decoding metric, allowing the messages from a given user to be safely decoded if its Eb/N0 exceeds -1.6 dB, is presented. Computer simulation is used to evaluate the performance of a sequential decoder that uses this metric in conjunction with the stack algorithm. In many circumstances, the sequential decoder achieves results comparable to those obtained using the much more complicated optimal receiver.
Liu, Rong
2017-01-01
Obtaining a fast and reliable decision is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this study, the EEG signals were first analyzed with a power projective base method. Then we applied a decision-making model, the sequential probability ratio test (SPRT), for single-trial classification of motor imagery movement events. The unique strength of this proposed classification method lies in its accumulative process, which increases the discriminative power as more and more evidence is observed over time. The properties of the method were illustrated on thirteen subjects' recordings from three datasets. Results showed that our proposed power projective method outperformed two benchmark methods for every subject. Moreover, with the sequential classifier, the accuracies across subjects were significantly higher than with nonsequential ones. The average maximum accuracy of the SPRT method was 84.1%, as compared with 82.3% accuracy for the sequential Bayesian (SB) method. The proposed SPRT method provides an explicit relationship between stopping time, thresholds, and error, which is important for balancing the time-accuracy trade-off. These results suggest SPRT would be useful in speeding up decision-making while trading off errors in BCI. PMID:29348781
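The SPRT decision rule itself is compact: accumulate the log-likelihood ratio and stop as soon as it crosses one of Wald's two thresholds. The sketch below uses simple Gaussian hypotheses as a stand-in for the EEG feature densities the study derives from the power projective base.

```python
import math

def sprt(samples, logdensity0, logdensity1, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test. Returns the decision and the
    stopping time; 'undecided' if neither threshold is crossed."""
    upper = math.log((1.0 - beta) / alpha)   # cross -> accept H1
    lower = math.log(beta / (1.0 - alpha))   # cross -> accept H0
    llr = 0.0
    for t, x in enumerate(samples, start=1):
        llr += logdensity1(x) - logdensity0(x)
        if llr >= upper:
            return "H1", t
        if llr <= lower:
            return "H0", t
    return "undecided", len(samples)

# Gaussian means 0 (H0) vs 1 (H1), unit variance; normalizing constants cancel.
log0 = lambda x: -0.5 * x ** 2
log1 = lambda x: -0.5 * (x - 1.0) ** 2
decision, t_stop = sprt([1.2, 0.9, 1.4, 1.1, 1.3], log0, log1)
```

The thresholds make the stopping-time/error relationship explicit: tightening alpha and beta pushes the boundaries apart and lengthens the average decision time, which is exactly the trade-off the abstract highlights.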
On the uncertainty in single molecule fluorescent lifetime and energy emission measurements
NASA Technical Reports Server (NTRS)
Brown, Emery N.; Zhang, Zhenhua; Mccollom, Alex D.
1995-01-01
Time-correlated single photon counting has recently been combined with mode-locked picosecond pulsed excitation to measure the fluorescent lifetimes and energy emissions of single molecules in a flow stream. Maximum likelihood (ML) and least squares methods agree and are optimal when the number of detected photons is large; however, in single molecule fluorescence experiments the number of detected photons can be less than 20, 67% of those can be noise, and the detection time is restricted to 10 nanoseconds. Under the assumption that the photon signal and background noise are two independent inhomogeneous Poisson processes, we derive the exact joint arrival time probability density of the photons collected in a single counting experiment performed in the presence of background noise. The model obviates the need to bin experimental data for analysis, and makes it possible to analyze formally the effect of background noise on the photon detection experiment using both ML or Bayesian methods. For both methods we derive the joint and marginal probability densities of the fluorescent lifetime and fluorescent emission. The ML and Bayesian methods are compared in an analysis of simulated single molecule fluorescence experiments of Rhodamine 110 using different combinations of expected background noise and expected fluorescence emission. While both the ML or Bayesian procedures perform well for analyzing fluorescence emissions, the Bayesian methods provide more realistic measures of uncertainty in the fluorescent lifetimes. The Bayesian methods would be especially useful for measuring uncertainty in fluorescent lifetime estimates in current single molecule flow stream experiments where the expected fluorescence emission is low. Both the ML and Bayesian algorithms can be automated for applications in molecular biology.
On the Uncertainty in Single Molecule Fluorescent Lifetime and Energy Emission Measurements
NASA Technical Reports Server (NTRS)
Brown, Emery N.; Zhang, Zhenhua; McCollom, Alex D.
1996-01-01
Time-correlated single photon counting has recently been combined with mode-locked picosecond pulsed excitation to measure the fluorescent lifetimes and energy emissions of single molecules in a flow stream. Maximum likelihood (ML) and least squares methods agree and are optimal when the number of detected photons is large, however, in single molecule fluorescence experiments the number of detected photons can be less than 20, 67 percent of those can be noise, and the detection time is restricted to 10 nanoseconds. Under the assumption that the photon signal and background noise are two independent inhomogeneous Poisson processes, we derive the exact joint arrival time probability density of the photons collected in a single counting experiment performed in the presence of background noise. The model obviates the need to bin experimental data for analysis, and makes it possible to analyze formally the effect of background noise on the photon detection experiment using both ML or Bayesian methods. For both methods we derive the joint and marginal probability densities of the fluorescent lifetime and fluorescent emission. The ML and Bayesian methods are compared in an analysis of simulated single molecule fluorescence experiments of Rhodamine 110 using different combinations of expected background noise and expected fluorescence emission. While both the ML or Bayesian procedures perform well for analyzing fluorescence emissions, the Bayesian methods provide more realistic measures of uncertainty in the fluorescent lifetimes. The Bayesian methods would be especially useful for measuring uncertainty in fluorescent lifetime estimates in current single molecule flow stream experiments where the expected fluorescence emission is low. Both the ML and Bayesian algorithms can be automated for applications in molecular biology.
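The ML-versus-Bayes comparison can be illustrated on a much-simplified version of the problem: exponential photon arrival delays with a conjugate Gamma prior on the decay rate. This sketch ignores the background-noise process and the 10 ns detection window that the paper models; the data and prior below are hypothetical.

```python
def lifetime_estimates(delays, a=2.0, b=4.0):
    """ML and conjugate-Bayes estimates for an exponential fluorescence decay.
    delays: observed photon arrival delays (ns). The Gamma(a, b) prior on the
    decay rate gives a Gamma(a + n, b + sum(delays)) posterior."""
    n, s = len(delays), sum(delays)
    rate_ml = n / s                      # maximum likelihood decay rate
    a_post, b_post = a + n, b + s
    rate_bayes = a_post / b_post         # posterior mean decay rate
    rate_sd = (a_post ** 0.5) / b_post   # posterior standard deviation
    return rate_ml, rate_bayes, rate_sd

delays = [3.1, 5.2, 2.4, 7.8, 1.1, 4.6, 3.9, 6.0, 2.2, 4.7]  # hypothetical, ns
rate_ml, rate_bayes, rate_sd = lifetime_estimates(delays)
```

The posterior standard deviation gives the honest uncertainty measure the abstract credits to the Bayesian route, and the posterior mean is shrunk from the MLE toward the prior, which matters precisely in the low-photon-count regime.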
Source Detection with Bayesian Inference on ROSAT All-Sky Survey Data Sample
NASA Astrophysics Data System (ADS)
Guglielmetti, F.; Voges, W.; Fischer, R.; Boese, G.; Dose, V.
2004-07-01
We employ Bayesian inference for the joint estimation of sources and background on ROSAT All-Sky Survey (RASS) data. The probabilistic method allows for detection improvement of faint extended celestial sources compared to the Standard Analysis Software System (SASS). Background maps were estimated in a single step together with the detection of sources without pixel censoring. Consistent uncertainties of background and sources are provided. The source probability is evaluated for single pixels as well as for pixel domains to enhance source detection of weak and extended sources.
Overlapping community detection in weighted networks via a Bayesian approach
NASA Astrophysics Data System (ADS)
Chen, Yi; Wang, Xiaolong; Xiang, Xin; Tang, Buzhou; Chen, Qingcai; Fan, Shixi; Bu, Junzhao
2017-02-01
Complex networks as a powerful way to represent complex systems have been widely studied during the past several years. One of the most important tasks of complex network analysis is to detect communities embedded in networks. In the real world, weighted networks are very common and may contain overlapping communities where a node is allowed to belong to multiple communities. In this paper, we propose a novel Bayesian approach, called the Bayesian mixture network (BMN) model, to detect overlapping communities in weighted networks. The advantages of our method are (i) providing soft-partition solutions in weighted networks; (ii) providing soft memberships, which quantify 'how strongly' a node belongs to a community. Experiments on a large number of real and synthetic networks show that our model has the ability in detecting overlapping communities in weighted networks and is competitive with other state-of-the-art models at shedding light on community partition.
Diagnosis and Management of Deployed Adults with Chest Pain.
1983-01-31
sequential Bayesian model using the likelihood ratios that he supplied to the United States Navy. Because the results of his study were not provided...

PERFORMANCE OF THE MODEL WITH THE EKG

A. All BWH patients (n = 899)
                  Model: MI   Model: No MI   Total
  Truth: MI            179             20     199
  Truth: No MI         248            452     700
  Total                427            472     899
  Sensitivity = 179/199 = .90   Specificity = 452/700 = .65

C. BWH males, < 60 years old (n = 250)
                  Model: MI   Model: No MI   Total
  Truth: MI             49              4      53
  Truth: No MI          59            138     197
  Total                108            142     250
  Sensitivity = 49/53 = .92   Specificity = 138/197 = .70
Software Health Management with Bayesian Networks
NASA Technical Reports Server (NTRS)
Mengshoel, Ole; Schumann, Johann
2011-01-01
Most modern aircraft, as well as other complex machinery, are equipped with diagnostic systems for their major subsystems. During operation, sensors provide important information about a subsystem (e.g., the engine), and that information is used to detect and diagnose faults. Most of these systems focus on monitoring a mechanical, hydraulic, or electromechanical subsystem of the vehicle or machinery. Only recently have health management systems that monitor software been developed. In this paper, we discuss our approach of using Bayesian networks for Software Health Management (SWHM). We discuss SWHM requirements, which make advanced reasoning capabilities important for detection and diagnosis. We then present our approach to using Bayesian networks for the construction of health models that dynamically monitor a software system and are capable of detecting and diagnosing faults.
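The core inference in such a health model, the posterior over a health variable given monitor readings, reduces to Bayes' rule when the monitors are conditionally independent given health. The sensor names and CPT numbers below are hypothetical, not taken from the paper's models.

```python
def fault_posterior(p_fault, sensor_cpts, evidence):
    """Posterior P(fault | evidence) by enumeration over a binary health node.
    sensor_cpts: {name: (P(alarm | fault), P(alarm | ok))}
    evidence:    {name: True/False} observed alarm states."""
    like_fault, like_ok = 1.0, 1.0
    for name, alarmed in evidence.items():
        p_af, p_ao = sensor_cpts[name]
        like_fault *= p_af if alarmed else 1.0 - p_af
        like_ok *= p_ao if alarmed else 1.0 - p_ao
    num = p_fault * like_fault
    return num / (num + (1.0 - p_fault) * like_ok)

cpts = {"heartbeat": (0.95, 0.05), "range_check": (0.95, 0.05)}  # hypothetical CPTs
post = fault_posterior(0.01, cpts, {"heartbeat": True, "range_check": True})
```

Even with a 1% prior fault rate, two concurring alarms drive the fault posterior near 0.78, which is why combining monitors in a network beats thresholding each one separately.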
A novel approach for pilot error detection using Dynamic Bayesian Networks.
Saada, Mohamad; Meng, Qinggang; Huang, Tingwen
2014-06-01
In the last decade, Dynamic Bayesian Networks (DBNs) have become one of the most attractive probabilistic extensions of Bayesian Networks (BNs) for modelling under uncertainty from a temporal perspective. Despite this popularity, few researchers have attempted to study the use of these networks in anomaly detection or the implications of data anomalies for the outcome of such models. An abnormal change in the modelled environment's data at a given time will cause a trailing chain effect on the data of all related environment variables in the current and consecutive time slices. Although this effect fades with time, it can still have an ill effect on the outcome of such models. In this paper, we propose an algorithm for pilot error detection, using DBNs as the modelling framework for learning and detecting anomalous data. We base our experiments on the actions of an aircraft pilot, and a flight simulator is created for running the experiments. The proposed anomaly detection algorithm has achieved good results in detecting pilot errors and their effects on the whole system.
NASA Astrophysics Data System (ADS)
Jennings, E.; Madigan, M.
2017-04-01
Given the complexity of modern cosmological parameter inference, where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated datasets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the Likelihood is intractable or unknown. The ABC method is called "Likelihood free" as it avoids explicit evaluation of the Likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind astroABC allows for massive parallelization using MPI, a framework that handles spawning of processes across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features of this new sampler include: a Sequential Monte Carlo sampler; a method for iteratively adapting tolerance levels; local covariance estimates using scikit-learn's KDTree; modules for specifying an optimal covariance matrix for a component-wise or multivariate normal perturbation kernel and a weighted covariance metric; restart files output frequently so an interrupted sampling run can be resumed at any iteration; output and restart files backed up at every iteration; user-defined distance metrics and simulation methods; a module for specifying heterogeneous parameter priors, including non-standard prior PDFs; a module for specifying a constant, linear, log or exponential tolerance level; and well-documented examples and sample scripts.
This code is hosted online at https://github.com/EliseJ/astroABC.
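astroABC implements a full SMC version of the idea; the likelihood-free principle itself is easiest to see in plain rejection form. The sketch below is not astroABC's API: it is a toy one-parameter problem (a Gaussian mean with a uniform prior, summarized by the sample mean), with every model choice an assumption of the sketch.

```python
import random
import statistics

def abc_rejection(observed, prior_sample, simulate, distance, tol, n_draws, rng):
    """Likelihood-free rejection sampler: keep parameter draws whose simulated
    summary statistic lands within tol of the observed one."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        if distance(simulate(theta, rng), observed) < tol:
            accepted.append(theta)
    return accepted

rng = random.Random(1)
observed_mean = 3.0  # observed summary statistic
posterior = abc_rejection(
    observed_mean,
    prior_sample=lambda r: r.uniform(0.0, 6.0),
    simulate=lambda th, r: statistics.fmean(r.gauss(th, 1.0) for _ in range(50)),
    distance=lambda s, o: abs(s - o),
    tol=0.3,
    n_draws=2000,
    rng=rng,
)
```

SMC samplers such as astroABC improve on this by shrinking `tol` over a sequence of particle populations instead of paying the rejection cost at a single tight tolerance.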
Bayesian least squares deconvolution
NASA Astrophysics Data System (ADS)
Asensio Ramos, A.; Petit, P.
2015-11-01
Aims: We develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods: We consider LSD under the Bayesian framework and we introduce a flexible Gaussian process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results: We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.
Multilevel Sequential² Monte Carlo for Bayesian inverse problems
NASA Astrophysics Data System (ADS)
Latz, Jonas; Papaioannou, Iason; Ullmann, Elisabeth
2018-09-01
The identification of parameters in mathematical models using noisy observations is a common task in uncertainty quantification. We employ the framework of Bayesian inversion: we combine monitoring and observational data with prior information to estimate the posterior distribution of a parameter. Specifically, we are interested in the distribution of a diffusion coefficient of an elliptic PDE. In this setting, the sample space is high-dimensional, and each sample of the PDE solution is expensive. To address these issues we propose and analyse a novel Sequential Monte Carlo (SMC) sampler for the approximation of the posterior distribution. Classical, single-level SMC constructs a sequence of measures, starting with the prior distribution, and finishing with the posterior distribution. The intermediate measures arise from a tempering of the likelihood, or, equivalently, a rescaling of the noise. The resolution of the PDE discretisation is fixed. In contrast, our estimator employs a hierarchy of PDE discretisations to decrease the computational cost. We construct a sequence of intermediate measures by decreasing the temperature or by increasing the discretisation level at the same time. This idea builds on and generalises the multi-resolution sampler proposed in P.S. Koutsourelakis (2009) [33] where a bridging scheme is used to transfer samples from coarse to fine discretisation levels. Importantly, our choice between tempering and bridging is fully adaptive. We present numerical experiments in 2D space, comparing our estimator to single-level SMC and the multi-resolution sampler.
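The tempering half of the scheme (without the multilevel bridging across PDE discretisation levels) can be sketched on a scalar conjugate toy problem: bridge from the prior to the posterior through likelihood powers, reweighting, resampling, and rejuvenating with a Metropolis move. The Gaussian model and all tuning constants below are assumptions of the sketch, not the paper's PDE setting.

```python
import math
import random

def log_likelihood(theta, data):
    """Gaussian log-likelihood with unit noise variance, additive constants dropped."""
    return sum(-0.5 * (y - theta) ** 2 for y in data)

def tempered_smc(data, n_particles=500, n_temps=6, prior_sd=3.0, seed=0):
    """Single-level tempered SMC: move from the N(0, prior_sd^2) prior to the
    posterior through likelihood powers 0 = t_0 < ... < t_K = 1, with multinomial
    resampling and one random-walk Metropolis move per particle per temperature."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, prior_sd) for _ in range(n_particles)]
    temps = [k / (n_temps - 1) for k in range(n_temps)]
    for prev, curr in zip(temps, temps[1:]):
        # reweight by the incremental likelihood power
        logw = [(curr - prev) * log_likelihood(p, data) for p in particles]
        m = max(logw)
        w = [math.exp(lw - m) for lw in logw]
        total = sum(w)
        particles = rng.choices(particles, weights=[x / total for x in w],
                                k=n_particles)

        def logpost(t):
            return -0.5 * (t / prior_sd) ** 2 + curr * log_likelihood(t, data)

        # rejuvenate: random-walk Metropolis targeting the tempered posterior
        for i, p in enumerate(particles):
            prop = p + rng.gauss(0.0, 0.5)
            if math.log(rng.uniform(1e-12, 1.0)) < logpost(prop) - logpost(p):
                particles[i] = prop
    return particles

data = [1.8, 2.3, 2.1, 1.9, 2.4, 2.0, 1.7, 2.2, 2.5, 1.9]  # hypothetical observations
est = sum(tempered_smc(data)) / 500
```

The adaptive choice the paper makes, between decreasing the temperature and increasing the discretisation level, replaces the fixed temperature ladder used here.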
SSME propellant path leak detection real-time
NASA Technical Reports Server (NTRS)
Crawford, R. A.; Smith, L. M.
1994-01-01
Included are four documents that outline the technical aspects of the research performed on NASA Grant NAG8-140: 'A System for Sequential Step Detection with Application to Video Image Processing'; 'Leak Detection from the SSME Using Sequential Image Processing'; 'Digital Image Processor Specifications for Real-Time SSME Leak Detection'; and 'A Color Change Detection System for Video Signals with Applications to Spectral Analysis of Rocket Engine Plumes'.
Human Inferences about Sequences: A Minimal Transition Probability Model
2016-01-01
The brain constantly infers the causes of the inputs it receives and uses these inferences to generate statistical expectations about future observations. Experimental evidence for these expectations and their violations includes explicit reports, sequential effects on reaction times, and mismatch or surprise signals recorded in electrophysiology and functional MRI. Here, we explore the hypothesis that the brain acts as a near-optimal inference device that constantly attempts to infer the time-varying matrix of transition probabilities between the stimuli it receives, even when those stimuli are in fact fully unpredictable. This parsimonious Bayesian model, with a single free parameter, accounts for a broad range of findings on surprise signals, sequential effects and the perception of randomness. Notably, it explains the pervasive asymmetry between repetitions and alternations encountered in those studies. Our analysis suggests that a neural machinery for inferring transition probabilities lies at the core of human sequence knowledge. PMID:28030543
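The paper's model tracks a time-varying transition-probability matrix with a single forgetting parameter. A minimal scalar analogue, an assumption of this sketch rather than the paper's full model, tracks only the probability that the next stimulus repeats the previous one, using Beta pseudo-counts with exponential forgetting.

```python
def leaky_transition_estimate(seq, decay=0.9, a0=1.0, b0=1.0):
    """Online estimate of P(repeat) from a stimulus sequence, using Beta
    pseudo-counts decayed exponentially (the single free parameter, decay).
    Returns the predictive estimate held just before each transition."""
    a, b = a0, b0
    estimates = []
    for prev, curr in zip(seq, seq[1:]):
        estimates.append(a / (a + b))
        if curr == prev:
            a = decay * a + 1.0
            b = decay * b
        else:
            a = decay * a
            b = decay * b + 1.0
    return estimates

reps = leaky_transition_estimate("AAAAAAAAAA")
alts = leaky_transition_estimate("ABABABABAB")
```

Surprise at each step can then be read off as the negative log of the predictive probability assigned to the observed transition, which is how such models generate mismatch-signal predictions.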
Randomized path optimization for the mitigated counter detection of UAVs
2017-06-01
A recursive Bayesian filtering scheme is used to assimilate noisy measurements of the UAV's position and predict its terminal location. The KL divergence is used to compare the probability density of aircraft termination to a normal distribution around the true terminal location, providing a measure of the algorithm's success.
Bayesian Inference for Functional Dynamics Exploring in fMRI Data.
Guo, Xuan; Liu, Bing; Chen, Le; Chen, Guantao; Pan, Yi; Zhang, Jing
2016-01-01
This paper aims to review state-of-the-art Bayesian-inference-based methods applied to functional magnetic resonance imaging (fMRI) data. Particularly, we focus on one specific long-standing challenge in the computational modeling of fMRI datasets: how to effectively explore typical functional interactions from fMRI time series and the corresponding boundaries of temporal segments. Bayesian inference is a method of statistical inference which has been shown to be a powerful tool to encode dependence relationships among the variables with uncertainty. Here we provide an introduction to a group of Bayesian-inference-based methods for fMRI data analysis, which were designed to detect magnitude or functional connectivity change points and to infer their functional interaction patterns based on corresponding temporal boundaries. We also provide a comparison of three popular Bayesian models, that is, the Bayesian Magnitude Change Point Model (BMCPM), the Bayesian Connectivity Change Point Model (BCCPM), and the Dynamic Bayesian Variable Partition Model (DBVPM), and give a summary of their applications. We envision that more refined Bayesian inference models will emerge and play increasingly important roles in modeling brain functions in the years to come.
Bhoomiboonchoo, Piraya; Nisalak, Ananda; Chansatiporn, Natkamol; Yoon, In-Kyu; Kalayanarooj, Siripen; Thipayamongkolgul, Mathuros; Endy, Timothy; Rothman, Alan L; Green, Sharone; Srikiatkhachorn, Anon; Buddhari, Darunee; Mammen, Mammen P; Gibbons, Robert V
2015-03-14
The effect of prior dengue virus (DENV) exposure on subsequent heterologous infection can be beneficial or detrimental depending on many factors including timing of infection. We sought to evaluate this effect by examining a large database of DENV infections captured by both active and passive surveillance encompassing a wide clinical spectrum of disease. We evaluated datasets from 17 years of hospital-based passive surveillance and nine years of cohort studies, including clinical and subclinical DENV infections, to assess the outcomes of sequential heterologous infections. Chi square or Fisher's exact test was used to compare proportions of infection outcomes such as disease severity; ANOVA was used for continuous variables. Multivariate logistic regression was used to assess risk factors for infection outcomes. Of 38,740 DENV infections, two or more infections were detected in 502 individuals; 14 had three infections. The mean ages at the time of the first and second detected infections were 7.6 ± 3.0 and 11.2 ± 3.0 years. The shortest time between sequential infections was 66 days. A longer time interval between sequential infections was associated with dengue hemorrhagic fever (DHF) in the second detected infection (OR 1.3, 95% CI 1.2-1.4). All possible sequential serotype pairs were observed among 201 subjects with DHF at the second detected infection, except DENV-4 followed by DENV-3. Among DENV infections detected in cohort subjects by active study surveillance and subsequent non-study hospital-based passive surveillance, hospitalization at the first detected infection increased the likelihood of hospitalization at the second detected infection. Increasing time between sequential DENV infections was associated with greater severity of the second detected infection, supporting the role of heterotypic immunity in both protection and enhancement. 
Hospitalization was positively associated between the first and second detected infections, suggesting a possible predisposition in some individuals to more severe dengue disease.
Meissner, Christian A; Tredoux, Colin G; Parker, Janat F; MacLin, Otto H
2005-07-01
Many eyewitness researchers have argued for the application of a sequential alternative to the traditional simultaneous lineup, given its role in decreasing false identifications of innocent suspects (sequential superiority effect). However, Ebbesen and Flowe (2002) have recently noted that sequential lineups may merely bring about a shift in response criterion, having no effect on discrimination accuracy. We explored this claim, using a method that allows signal detection theory measures to be collected from eyewitnesses. In three experiments, lineup type was factorially combined with conditions expected to influence response criterion and/or discrimination accuracy. Results were consistent with signal detection theory predictions, including that of a conservative criterion shift with the sequential presentation of lineups. In a fourth experiment, we explored the phenomenological basis for the criterion shift, using the remember-know-guess procedure. In accord with previous research, the criterion shift in sequential lineups was associated with a reduction in familiarity-based responding. It is proposed that the relative similarity between lineup members may create a context in which fluency-based processing is facilitated to a greater extent when lineup members are presented simultaneously.
Joint passive radar tracking and target classification using radar cross section
NASA Astrophysics Data System (ADS)
Herman, Shawn M.
2004-01-01
We present a recursive Bayesian solution for the problem of joint tracking and classification of airborne targets. In our system, we allow for complications due to multiple targets, false alarms, and missed detections. More importantly, though, we utilize the full benefit of a joint approach by implementing our tracker using an aerodynamically valid flight model that requires aircraft-specific coefficients such as wing area and vehicle mass, which are provided by our classifier. A key feature that bridges the gap between tracking and classification is radar cross section (RCS). By modeling the true deterministic relationship that exists between RCS and target aspect, we are able to gain both valuable class information and an estimate of target orientation. However, the lack of a closed-form relationship between RCS and target aspect prevents us from using the Kalman filter or its variants. Instead, we rely upon a sequential Monte Carlo-based approach known as particle filtering. In addition to allowing us to include RCS as a measurement, the particle filter also simplifies the implementation of our nonlinear non-Gaussian flight model.
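A bootstrap particle filter handles exactly this kind of nonlinear, non-Gaussian measurement. The sketch below uses a toy quadratic measurement model standing in for the RCS-aspect relationship; all model constants are assumptions for illustration, not the paper's flight model:

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_filter(ys, n=5000, q=0.5, r=0.5):
    """Bootstrap particle filter for a toy nonlinear state-space model:
        x_t = 0.9 * x_{t-1} + q * w_t,   y_t = x_t**2 / 2 + r * v_t.
    The quadratic measurement (like RCS vs. aspect, with no closed form
    usable by a Kalman filter) is handled by weighting particles by the
    likelihood and resampling. Returns the filtered posterior means.
    """
    x = rng.standard_normal(n)
    means = []
    for y in ys:
        x = 0.9 * x + q * rng.standard_normal(n)       # propagate
        logw = -0.5 * ((y - x**2 / 2) / r) ** 2        # weight by likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * x))
        x = x[rng.choice(n, size=n, p=w)]              # resample
    return means

# simulate a short trajectory and run the filter over its measurements
T, q, r = 20, 0.5, 0.5
x_true, ys = 0.0, []
for _ in range(T):
    x_true = 0.9 * x_true + q * rng.standard_normal()
    ys.append(x_true**2 / 2 + r * rng.standard_normal())
est = bootstrap_filter(ys)
print(len(est))
```

Note that this toy measurement is sign-symmetric, so the posterior over the state is bimodal; the point is the mechanics of weighting and resampling, not tracking accuracy.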
System and method for detecting components of a mixture including tooth elements for alignment
Sommer, Gregory Jon; Schaff, Ulrich Y.
2016-11-22
Examples are described including assay platforms having tooth elements. An impinging element may sequentially engage tooth elements on the assay platform to sequentially align corresponding detection regions with a detection unit. In this manner, multiple measurements may be made of detection regions on the assay platform without necessarily requiring the starting and stopping of a motor.
Bayesian image reconstruction for improving detection performance of muon tomography.
Wang, Guobao; Schultz, Larry J; Qi, Jinyi
2009-05-01
Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.
Gönner, Lorenz; Vitay, Julien; Hamker, Fred H.
2017-01-01
Hippocampal place-cell sequences observed during awake immobility often represent previous experience, suggesting a role in memory processes. However, recent reports of goals being overrepresented in sequential activity suggest a role in short-term planning, although a detailed understanding of the origins of hippocampal sequential activity and of its functional role is still lacking. In particular, it is unknown which mechanism could support efficient planning by generating place-cell sequences biased toward known goal locations, in an adaptive and constructive fashion. To address these questions, we propose a model of spatial learning and sequence generation as interdependent processes, integrating cortical contextual coding, synaptic plasticity and neuromodulatory mechanisms into a map-based approach. Following goal learning, sequential activity emerges from continuous attractor network dynamics biased by goal memory inputs. We apply Bayesian decoding on the resulting spike trains, allowing a direct comparison with experimental data. Simulations show that this model (1) explains the generation of never-experienced sequence trajectories in familiar environments, without requiring virtual self-motion signals, (2) accounts for the bias in place-cell sequences toward goal locations, (3) highlights their utility in flexible route planning, and (4) provides specific testable predictions. PMID:29075187
EEG Classification with a Sequential Decision-Making Method in Motor Imagery BCI.
Liu, Rong; Wang, Yongxuan; Newman, Geoffrey I; Thakor, Nitish V; Ying, Sarah
2017-12-01
To develop subject-specific classifiers that recognize mental states quickly and reliably is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this paper, a sequential decision-making strategy is explored in conjunction with an optimal wavelet analysis for EEG classification. Subject-specific wavelet parameters based on a grid-search method were first developed to determine the evidence accumulation curve for the sequential classifier. We then proposed a new method to set the two constrained thresholds in the sequential probability ratio test (SPRT) based on the cumulative curve and a desired expected stopping time. As a result, it balances the decision time of each class; we term it balanced-threshold SPRT (BTSPRT). The properties of the method were illustrated on 14 subjects' recordings from offline and online tests. Results showed an average maximum accuracy for the proposed method of 83.4% and an average decision time of 2.77 s, compared with 79.2% accuracy and a decision time of 3.01 s for the sequential Bayesian (SB) method. The BTSPRT method not only improves classification accuracy and decision speed compared with other nonsequential or SB methods, but also provides an explicit relationship between stopping time, thresholds and error, which is important for balancing the speed-accuracy tradeoff. These results suggest that BTSPRT would be useful in explicitly adjusting the tradeoff between rapid decision-making and error-free device control.
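For reference, the classical SPRT with Wald's threshold approximations can be sketched as follows; the Bernoulli rates and error levels are illustrative assumptions (the BTSPRT described above sets its thresholds differently, from the cumulative evidence curve and a desired stopping time):

```python
from math import log

def sprt(samples, p0=0.4, p1=0.6, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for a Bernoulli rate.

    Accumulates the log-likelihood ratio sample by sample and stops as
    soon as it crosses one of the two thresholds derived from the
    desired error rates (the classical Wald approximations).
    """
    upper = log((1 - beta) / alpha)      # cross it -> accept H1
    lower = log(beta / (1 - alpha))      # cross it -> accept H0
    llr = 0.0
    for t, x in enumerate(samples, start=1):
        llr += log(p1 / p0) if x else log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "H1", t
        if llr <= lower:
            return "H0", t
    return "undecided", len(samples)

# a stream of mostly 1s should be classified as the high-rate hypothesis
decision, stop_time = sprt([1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1])
print(decision, stop_time)
```

The appeal for BCI use is visible here: the decision is produced as soon as the accumulated evidence justifies it, rather than after a fixed-length window.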
Chadha, Neil K; Papsin, Blake C; Jiwani, Salima; Gordon, Karen A
2011-09-01
To measure speech detection in noise performance for children with bilateral cochlear implants (BiCI), to compare performance in children with simultaneous implant versus those with sequential implant, and to compare performance to normal-hearing children. Prospective cohort study. Tertiary academic pediatric center. Children with early-onset bilateral deafness and 2-year BiCI experience, comprising the "sequential" group (>2 yr interimplantation delay, n = 12) and "simultaneous group" (no interimplantation delay, n = 10) and normal-hearing controls (n = 8). Thresholds to speech detection (at 0-degree azimuth) were measured with noise at 0-degree azimuth or ± 90-degree azimuth. Spatial unmasking (SU) as the noise condition changed from 0-degree azimuth to ± 90-degree azimuth and binaural summation advantage (BSA) of 2 over 1 CI. Speech detection in noise was significantly poorer than controls for both BiCI groups (p < 0.0001). However, the SU in the simultaneous group approached levels found in normal controls (7.2 ± 0.6 versus 8.6 ± 0.6 dB, p > 0.05) and was significantly better than that in the sequential group (3.9 ± 0.4 dB, p < 0.05). Spatial unmasking was unaffected by the side of noise presentation in the simultaneous group but, in the sequential group, was significantly better when noise was moved to the second rather than the first implanted ear (4.8 ± 0.5 versus 3.0 ± 0.4 dB, p < 0.05). This was consistent with a larger BSA from the sequential group's second rather than first CI. Children with simultaneously implanted BiCI demonstrated an advantage over children with sequential implant by using spatial cues to improve speech detection in noise.
Tenan, Matthew S; Tweedell, Andrew J; Haynes, Courtney A
2017-12-01
The onset of muscle activity, as measured by electromyography (EMG), is a commonly applied metric in biomechanics. Intramuscular EMG is often used to examine deep musculature, and there are currently no studies examining the effectiveness of algorithms for intramuscular EMG onset. The present study examines standard surface EMG onset algorithms (linear envelope, Teager-Kaiser Energy Operator, and sample entropy) and novel algorithms (time series mean-variance analysis, sequential/batch processing with parametric and nonparametric methods, and Bayesian changepoint analysis). Thirteen male and five female subjects had intramuscular EMG collected during isolated biceps brachii and vastus lateralis contractions, resulting in 103 trials. EMG onset was visually determined twice by 3 blinded reviewers. Since the reliability of visual onset was high (ICC(1,1): 0.92), the mean of the 6 visual assessments was contrasted with the algorithmic approaches. Poorly performing algorithms were stepwise eliminated via (1) root mean square error analysis, (2) algorithm failure to identify onset/premature onset, (3) linear regression analysis, and (4) Bland-Altman plots. The top performing algorithms were all based on Bayesian changepoint analysis of rectified EMG and were statistically indistinguishable from visual analysis. Bayesian changepoint analysis has the potential to produce more reliable, accurate, and objective intramuscular EMG onset results than standard methodologies.
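A minimal offline Bayesian changepoint computation for a single mean shift, of the general kind such onset detectors build on, might look like the sketch below; the Gaussian noise, known sigma, single changepoint, and profile-likelihood treatment of the segment means are all simplifying assumptions, not the study's algorithms:

```python
import numpy as np

def changepoint_posterior(y, sigma=1.0):
    """Posterior over a single changepoint location in a noisy signal.

    For each candidate changepoint k, the signal is modeled as one mean
    before k and another after k (means set to the segment averages, a
    simple profile likelihood). With a uniform prior over k, the
    posterior is proportional to the likelihood.
    """
    n = len(y)
    logp = np.full(n, -np.inf)
    for k in range(1, n - 1):
        a, b = y[:k], y[k:]
        resid = np.concatenate([a - a.mean(), b - b.mean()])
        logp[k] = -0.5 * np.sum(resid**2) / sigma**2
    logp -= logp.max()
    p = np.exp(logp)
    return p / p.sum()

rng = np.random.default_rng(3)
# quiescent baseline, then simulated activity onset at sample 60
y = np.concatenate([rng.normal(0, 1, 60), rng.normal(3, 1, 40)])
post = changepoint_posterior(y)
print(int(post.argmax()))
```

Unlike a thresholded envelope, this returns a full posterior over onset location, so uncertainty in the onset estimate comes for free.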
Pan, Haitao; Yuan, Ying; Xia, Jielai
2017-11-01
A biosimilar refers to a follow-on biologic intended to be approved for marketing based on biosimilarity to an existing patented biological product (i.e., the reference product). To develop a biosimilar product, it is essential to demonstrate biosimilarity between the follow-on biologic and the reference product, typically through two-arm randomization trials. We propose a Bayesian adaptive design for trials to evaluate biosimilar products. To take advantage of the abundant historical data on the efficacy of the reference product that is typically available at the time a biosimilar product is developed, we propose the calibrated power prior, which allows our design to adaptively borrow information from the historical data according to the congruence between the historical data and the new data collected from the current trial. We propose a new measure, the Bayesian biosimilarity index, to measure the similarity between the biosimilar and the reference product. During the trial, we evaluate the Bayesian biosimilarity index in a group sequential fashion based on the accumulating interim data, and stop the trial early once there is enough information to conclude or reject the similarity. Extensive simulation studies show that the proposed design has higher power than traditional designs. We applied the proposed design to a biosimilar trial for treating rheumatoid arthritis.
A Bayesian Framework for Reliability Analysis of Spacecraft Deployments
NASA Technical Reports Server (NTRS)
Evans, John W.; Gallo, Luis; Kaminsky, Mark
2012-01-01
Deployable subsystems are essential to mission success of most spacecraft. These subsystems enable critical functions including power, communications and thermal control. The loss of any of these functions will generally result in loss of the mission. These subsystems and their components often consist of unique designs and applications for which various standardized data sources are not applicable for estimating reliability and for assessing risks. In this study, a two-stage sequential Bayesian framework for reliability estimation of spacecraft deployments was developed for this purpose. This process was then applied to the James Webb Space Telescope (JWST) Sunshield subsystem, a unique design intended for thermal control of the Optical Telescope Element. Initially, detailed studies of NASA deployment history ("heritage information") were conducted, extending over 45 years of spacecraft launches. This information was then coupled to a non-informative prior and a binomial likelihood function to create a posterior distribution for deployments of various subsystems using Markov chain Monte Carlo sampling. Select distributions were then coupled to a subsequent analysis, using test data and anomaly occurrences on successive ground-test deployments of scale-model test articles of JWST hardware, to update the NASA heritage data. This allowed for a realistic prediction of the reliability of the complex Sunshield deployment, with credibility limits, within this two-stage Bayesian framework.
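Setting the MCMC machinery aside, the core two-stage update with a binomial likelihood is conjugate when the prior is a Beta distribution, so the sequential idea can be illustrated with a closed-form shortcut; the success/failure counts below are hypothetical, not the study's heritage or test data:

```python
def beta_binomial_update(a, b, successes, failures):
    """Conjugate Beta-Binomial update: a Beta(a, b) prior observed
    against binomial data yields a Beta(a+successes, b+failures)
    posterior."""
    return a + successes, b + failures

# Stage 1: heritage deployment data (hypothetical counts) update a
# flat Beta(1, 1) prior
a, b = beta_binomial_update(1, 1, successes=45, failures=2)

# Stage 2: ground-test deployments of a scale-model article
# (hypothetical counts) update the stage-1 posterior
a, b = beta_binomial_update(a, b, successes=9, failures=1)

mean = a / (a + b)          # posterior mean deployment reliability
print(round(mean, 3))
```

The sequential structure is the point: the stage-1 posterior becomes the stage-2 prior, exactly as the heritage-informed distributions feed the test-data analysis in the framework described above.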
Mixed Signal Learning by Spike Correlation Propagation in Feedback Inhibitory Circuits
Hiratani, Naoki; Fukai, Tomoki
2015-01-01
The brain can learn and detect mixed input signals masked by various types of noise, and spike-timing-dependent plasticity (STDP) is the candidate synaptic level mechanism. Because sensory inputs typically have spike correlation, and local circuits have dense feedback connections, input spikes cause the propagation of spike correlation in lateral circuits; however, it is largely unknown how this secondary correlation generated by lateral circuits influences learning processes through STDP, or whether it is beneficial to achieve efficient spike-based learning from uncertain stimuli. To explore the answers to these questions, we construct models of feedforward networks with lateral inhibitory circuits and study how propagated correlation influences STDP learning, and what kind of learning algorithm such circuits achieve. We derive analytical conditions at which neurons detect minor signals with STDP, and show that depending on the origin of the noise, different correlation timescales are useful for learning. In particular, we show that non-precise spike correlation is beneficial for learning in the presence of cross-talk noise. We also show that by considering excitatory and inhibitory STDP at lateral connections, the circuit can acquire a lateral structure optimal for signal detection. In addition, we demonstrate that the model performs blind source separation in a manner similar to the sequential sampling approximation of the Bayesian independent component analysis algorithm. Our results provide a basic understanding of STDP learning in feedback circuits by integrating analyses from both dynamical systems and information theory. PMID:25910189
Posterior Predictive Model Checking in Bayesian Networks
ERIC Educational Resources Information Center
Crawford, Aaron
2014-01-01
This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…
Bayesian analysis of caustic-crossing microlensing events
NASA Astrophysics Data System (ADS)
Cassan, A.; Horne, K.; Kains, N.; Tsapras, Y.; Browne, P.
2010-06-01
Aims: Caustic-crossing binary-lens microlensing events are important anomalous events because they are capable of detecting an extrasolar planet companion orbiting the lens star. Fast and robust modelling methods are thus of prime interest in helping to decide whether a planet is detected by an event. Cassan introduced a new set of parameters to model binary-lens events, which are closely related to properties of the light curve. In this work, we explain how Bayesian priors can be added to this framework, and investigate on interesting options. Methods: We develop a mathematical formulation that allows us to compute analytically the priors on the new parameters, given some previous knowledge about other physical quantities. We explicitly compute the priors for a number of interesting cases, and show how this can be implemented in a fully Bayesian, Markov chain Monte Carlo algorithm. Results: Using Bayesian priors can accelerate microlens fitting codes by reducing the time spent considering physically implausible models, and helps us to discriminate between alternative models based on the physical plausibility of their parameters.
Is probabilistic bias analysis approximately Bayesian?
MacLehose, Richard F.; Gustafson, Paul
2011-01-01
Case-control studies are particularly susceptible to differential exposure misclassification when exposure status is determined following incident case status. Probabilistic bias analysis methods have been developed as ways to adjust standard effect estimates based on the sensitivity and specificity of exposure misclassification. The iterative sampling method advocated in probabilistic bias analysis bears a distinct resemblance to a Bayesian adjustment; however, it is not identical. Furthermore, without a formal theoretical framework (Bayesian or frequentist), the results of a probabilistic bias analysis remain somewhat difficult to interpret. We describe, both theoretically and empirically, the extent to which probabilistic bias analysis can be viewed as approximately Bayesian. While the differences between probabilistic bias analysis and Bayesian approaches to misclassification can be substantial, these situations often involve unrealistic prior specifications and are relatively easy to detect. Outside of these special cases, probabilistic bias analysis and Bayesian approaches to exposure misclassification in case-control studies appear to perform equally well. PMID:22157311
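The iterative sampling at the heart of probabilistic bias analysis can be sketched for nondifferential exposure misclassification in a 2x2 table as follows; the cell counts and the uniform bias distributions for sensitivity and specificity are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

def adjusted_or(a, b, c, d, n=10000):
    """Probabilistic bias analysis for nondifferential exposure
    misclassification in a 2x2 table (a, b = exposed/unexposed cases;
    c, d = exposed/unexposed controls). Sensitivity and specificity are
    drawn from bias distributions, the expected true counts are
    back-calculated from the standard misclassification identity, and
    the corrected odds-ratio distribution is summarized by its median.
    """
    se = rng.uniform(0.75, 0.95, n)
    sp = rng.uniform(0.85, 0.99, n)

    def correct(x_exp, x_unexp):
        # observed_exposed = true_exposed*se + true_unexposed*(1-sp)
        total = x_exp + x_unexp
        true_exp = (x_exp - total * (1 - sp)) / (se + sp - 1)
        return true_exp, total - true_exp

    A, B = correct(a, b)
    C, D = correct(c, d)
    ok = (A > 0) & (B > 0) & (C > 0) & (D > 0)   # discard impossible draws
    or_ = (A * D) / (B * C)
    return np.median(or_[ok])

result = adjusted_or(200, 100, 100, 200)   # crude OR here is 4.0
print(result)
```

Consistent with nondifferential misclassification biasing toward the null, the corrected odds ratio is farther from 1 than the crude estimate; a fully Bayesian treatment would instead place a prior on the same bias parameters and condition on the observed counts.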
A Survey of Methods for Computing Best Estimates of Endoatmospheric and Exoatmospheric Trajectories
NASA Technical Reports Server (NTRS)
Bernard, William P.
2018-01-01
Beginning with the mathematical prediction of planetary orbits in the early seventeenth century up through the most recent developments in sensor fusion methods, many techniques have emerged that can be employed on the problem of endo- and exoatmospheric trajectory estimation. Although early methods were ad hoc, the twentieth century saw the emergence of many systematic approaches to estimation theory that produced a wealth of useful techniques. The broad genesis of estimation theory has resulted in an equally broad array of mathematical principles, methods and vocabulary. Among the fundamental ideas and methods that are briefly touched on are batch and sequential processing; smoothing, estimation, and prediction; sensor fusion and sensor fusion architectures; data association; Bayesian and non-Bayesian filtering; the family of Kalman filters; models of the dynamics of the phases of a rocket's flight; and asynchronous, delayed, and asequent data. Along the way, a few trajectory estimation issues are addressed and much of the vocabulary is defined.
Bayesian inversion analysis of nonlinear dynamics in surface heterogeneous reactions.
Omori, Toshiaki; Kuwatani, Tatsu; Okamoto, Atsushi; Hukushima, Koji
2016-09-01
It is essential to extract nonlinear dynamics from time-series data as an inverse problem in the natural sciences. We propose a Bayesian statistical framework for extracting the nonlinear dynamics of surface heterogeneous reactions from sparse and noisy observable data. Surface heterogeneous reactions are chemical reactions with conjugation of multiple phases, and they have an intrinsic nonlinearity in their dynamics caused by the effect of the surface area between different phases. We adapt a belief propagation method and an expectation-maximization (EM) algorithm to the partial observation problem in order to simultaneously estimate the time course of hidden variables and the kinetic parameters underlying the dynamics. The belief propagation method is performed using a sequential Monte Carlo algorithm in order to estimate the nonlinear dynamical system. Using the proposed method, we show that the rate constants of dissolution and precipitation reactions, which are typical examples of surface heterogeneous reactions, as well as the temporal changes of solid reactants and products, were successfully estimated from only the observable temporal changes in the concentration of the dissolved intermediate product.
Computational Prediction and Validation of an Expert's Evaluation of Chemical Probes
Litterman, Nadia K.; Lipinski, Christopher A.; Bunin, Barry A.; Ekins, Sean
2016-01-01
In a decade with over half a billion dollars of investment, more than 300 chemical probes have been identified to have biological activity through NIH funded screening efforts. We have collected the evaluations of an experienced medicinal chemist on the likely chemistry quality of these probes based on a number of criteria including literature related to the probe and potential chemical reactivity. Over 20% of these probes were found to be undesirable. Analysis of the molecular properties of these compounds scored as desirable suggested higher pKa, molecular weight, heavy atom count and rotatable bond number. We were particularly interested whether the human evaluation aspect of medicinal chemistry due diligence could be computationally predicted. We used a process of sequential Bayesian model building and iterative testing as we included additional probes. Following external validation of these methods and comparing different machine learning methods we identified Bayesian models with accuracy comparable to other measures of drug-likeness and filtering rules created to date. PMID:25244007
Bayesian hierarchical modeling for detecting safety signals in clinical trials.
Xia, H Amy; Ma, Haijun; Carlin, Bradley P
2011-09-01
Detection of safety signals from clinical trial adverse event data is critical in drug development, but carries a challenging statistical multiplicity problem. Bayesian hierarchical mixture modeling is appealing for its ability to borrow strength across subgroups in the data, as well as moderate extreme findings most likely due merely to chance. We implement such a model for subject incidence (Berry and Berry, 2004 ) using a binomial likelihood, and extend it to subject-year adjusted incidence rate estimation under a Poisson likelihood. We use simulation to choose a signal detection threshold, and illustrate some effective graphics for displaying the flagged signals.
NASA Astrophysics Data System (ADS)
Han, Feng; Zheng, Yi
2018-06-01
Significant input uncertainty is a major source of error in watershed water quality (WWQ) modeling. It remains challenging to address input uncertainty in a rigorous Bayesian framework. This study develops the Bayesian Analysis of Input and Parametric Uncertainties (BAIPU), an approach for the joint analysis of input and parametric uncertainties through a tight coupling of Markov Chain Monte Carlo (MCMC) analysis and Bayesian Model Averaging (BMA). The formal likelihood function for this approach is derived considering a lag-1 autocorrelated, heteroscedastic, and Skew Exponential Power (SEP) distributed error model. A series of numerical experiments were performed based on a synthetic nitrate pollution case and on a real study case in the Newport Bay Watershed, California. The Soil and Water Assessment Tool (SWAT) and Differential Evolution Adaptive Metropolis (DREAM(ZS)) were used as the representative WWQ model and MCMC algorithm, respectively. The major findings include the following: (1) the BAIPU can be implemented and used to appropriately identify the uncertain parameters and characterize the predictive uncertainty; (2) the compensation effect between the input and parametric uncertainties can seriously mislead modeling-based management decisions if the input uncertainty is not explicitly accounted for; (3) the BAIPU accounts for the interaction between the input and parametric uncertainties and therefore provides more accurate calibration and uncertainty results than a sequential analysis of the uncertainties; and (4) the BAIPU quantifies the credibility of different input assumptions on a statistical basis and can be implemented as an effective inverse modeling approach to the joint inference of parameters and inputs.
How Much Can We Learn from a Single Chromatographic Experiment? A Bayesian Perspective.
Wiczling, Paweł; Kaliszan, Roman
2016-01-05
In this work, we proposed and investigated a Bayesian inference procedure to find desired chromatographic conditions based on known analyte properties (lipophilicity, pKa, and polar surface area) using one preliminary experiment. A previously developed nonlinear mixed-effect model was used to specify the prior information about a new analyte with known physicochemical properties. Further, the prior (no preliminary data) and posterior predictive distribution (prior + one experiment) were determined sequentially to search towards the desired separation. The following isocratic high-performance reversed-phase liquid chromatographic conditions were sought: (1) retention time of a single analyte within the range of 4-6 min and (2) baseline separation of two analytes with retention times within the range of 4-10 min. The empirical posterior Bayesian distribution of parameters was estimated using the "slice sampling" Markov Chain Monte Carlo (MCMC) algorithm implemented in Matlab. Simulations with artificial analytes and experimental data for ketoprofen and papaverine were used to test the proposed methodology. The simulation experiment showed that, for a single analyte and for two randomly selected analytes, there is a 97% and 74% probability, respectively, of obtaining a successful chromatogram using zero or one preliminary experiment. The desired separation for ketoprofen and papaverine was established based on a single experiment. It was confirmed that the search for a desired separation rarely requires a large number of chromatographic analyses, at least for a simple optimization problem. The proposed Bayesian-based optimization scheme is a powerful method of finding a desired chromatographic separation based on a small number of preliminary experiments.
A Bayesian CUSUM plot: Diagnosing quality of treatment.
Rosthøj, Steen; Jacobsen, Rikke-Line
2017-12-01
To present a CUSUM plot based on Bayesian diagnostic reasoning displaying evidence in favour of "healthy" rather than "sick" quality of treatment (QOT), and to demonstrate a technique using Kaplan-Meier survival curves permitting application to case series with ongoing follow-up. For a case series with known final outcomes: Consider each case a diagnostic test of good versus poor QOT (expected vs. increased failure rates), determine the likelihood ratio (LR) of the observed outcome, convert LR to weight taking log to base 2, and add up weights sequentially in a plot showing how many times odds in favour of good QOT have been doubled. For a series with observed survival times and an expected survival curve: Divide the curve into time intervals, determine "healthy" and specify "sick" risks of failure in each interval, construct a "sick" survival curve, determine the LR of survival or failure at the given observation times, convert to weights, and add up. The Bayesian plot was applied retrospectively to 39 children with acute lymphoblastic leukaemia with completed follow-up, using Nordic collaborative results as reference, showing equal odds between good and poor QOT. In the ongoing treatment trial, with 22 of 37 children still at risk for event, QOT has been monitored with average survival curves as reference, odds so far favoring good QOT 2:1. QOT in small patient series can be assessed with a Bayesian CUSUM plot, retrospectively when all treatment outcomes are known, but also in ongoing series with unfinished follow-up.
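The weight-accumulation step for the known-outcome case can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: it assumes each case is a Bernoulli outcome with known failure probabilities under good and poor QOT, converts each observed outcome to a log-base-2 likelihood-ratio weight, and accumulates the weights so that each unit on the plot is one doubling of the odds in favour of good QOT.

```python
import math

def cusum_weights(outcomes, p_good, p_poor):
    """Accumulate log2 likelihood-ratio weights over a case series.

    outcomes : iterable of 0/1, where 1 marks a treatment failure
    p_good   : failure probability expected under good QOT
    p_poor   : failure probability under poor ("sick") QOT
    Returns the running total after each case; each unit is one
    doubling of the odds in favour of good QOT.
    """
    total, path = 0.0, []
    for failed in outcomes:
        if failed:
            lr = p_good / p_poor      # a failure is evidence against good QOT
        else:
            lr = (1.0 - p_good) / (1.0 - p_poor)
        total += math.log2(lr)
        path.append(total)
    return path

# Ten failure-free cases followed by one failure, with a 10% expected
# and a 30% "sick" failure rate (illustrative numbers):
path = cusum_weights([0] * 10 + [1], p_good=0.10, p_poor=0.30)
```

Each success adds a small positive weight while a failure subtracts a larger one, so the plot drifts upward under good QOT and downward under poor QOT.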
NASA Astrophysics Data System (ADS)
Ruggeri, Paolo; Irving, James; Holliger, Klaus
2015-08-01
We critically examine the performance of sequential geostatistical resampling (SGR) as a model proposal mechanism for Bayesian Markov-chain-Monte-Carlo (MCMC) solutions to near-surface geophysical inverse problems. Focusing on a series of simple yet realistic synthetic crosshole georadar tomographic examples characterized by different numbers of data, levels of data error and degrees of model parameter spatial correlation, we investigate the efficiency of three different resampling strategies with regard to their ability to generate statistically independent realizations from the Bayesian posterior distribution. Quite importantly, our results show that, no matter what resampling strategy is employed, many of the examined test cases require an unreasonably high number of forward model runs to produce independent posterior samples, meaning that the SGR approach as currently implemented will not be computationally feasible for a wide range of problems. Although use of a novel gradual-deformation-based proposal method can help to alleviate these issues, it does not offer a full solution. Further, the nature of the SGR is found to strongly influence MCMC performance; however, no clear rule exists as to what set of inversion parameters and/or overall proposal acceptance rate will allow for the most efficient implementation. We conclude that although the SGR methodology is highly attractive as it allows for the consideration of complex geostatistical priors as well as conditioning to hard and soft data, further developments are necessary in the context of novel or hybrid MCMC approaches for it to be considered generally suitable for near-surface geophysical inversions.
A dynamic model of reasoning and memory.
Hawkins, Guy E; Hayes, Brett K; Heit, Evan
2016-02-01
Previous models of category-based induction have neglected how the process of induction unfolds over time. We conceive of induction as a dynamic process and provide the first fine-grained examination of the distribution of response times observed in inductive reasoning. We used these data to develop and empirically test the first major quantitative modeling scheme that simultaneously accounts for inductive decisions and their time course. The model assumes that knowledge of similarity relations among novel test probes and items stored in memory drive an accumulation-to-bound sequential sampling process: Test probes with high similarity to studied exemplars are more likely to trigger a generalization response, and more rapidly, than items with low exemplar similarity. We contrast data and model predictions for inductive decisions with a recognition memory task using a common stimulus set. Hierarchical Bayesian analyses across 2 experiments demonstrated that inductive reasoning and recognition memory primarily differ in the threshold to trigger a decision: Observers required less evidence to make a property generalization judgment (induction) than an identity statement about a previously studied item (recognition). Experiment 1 and a condition emphasizing decision speed in Experiment 2 also found evidence that inductive decisions use lower quality similarity-based information than recognition. The findings suggest that induction might represent a less cautious form of recognition. We conclude that sequential sampling models grounded in exemplar-based similarity, combined with hierarchical Bayesian analysis, provide a more fine-grained and informative analysis of the processes involved in inductive reasoning than is possible solely through examination of choice data.
Struchen, R; Vial, F; Andersson, M G
2017-04-26
Delayed reporting of health data may hamper the early detection of infectious diseases in surveillance systems. Furthermore, combining multiple data streams, e.g. aiming at improving a system's sensitivity, can be challenging. In this study, we used a Bayesian framework where the result is presented as the value of evidence, i.e. the likelihood ratio for the evidence under outbreak versus baseline conditions. Based on a historical data set of routinely collected cattle mortality events, we evaluated outbreak detection performance (sensitivity, time to detection, in-control run length) under the Bayesian approach among three scenarios: presence of delayed data reporting, but not accounting for it; presence of delayed data reporting accounted for; and absence of delayed data reporting (i.e. an ideal system). Performance on larger and smaller outbreaks was compared with a classical approach, considering syndromes separately or combined. We found that the Bayesian approach performed better than the classical approach, especially for the smaller outbreaks. Furthermore, the Bayesian approach performed similarly well in the scenario where delayed reporting was accounted for as in the scenario where it was absent. We argue that the value of evidence framework may be suitable for surveillance systems with multiple syndromes and delayed reporting of data.
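The value-of-evidence idea, and how independent data streams combine, can be illustrated with a toy likelihood-ratio calculation. This is not the authors' model: it assumes each stream contributes a single Poisson count with known baseline and outbreak rates, and the helper names are hypothetical.

```python
import math

def poisson_pmf(k, lam):
    """Poisson probability mass function for small integer counts."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def value_of_evidence(counts, baseline_rates, outbreak_rates):
    """Likelihood ratio of the observed counts under outbreak vs baseline.

    Each data stream contributes one Poisson count; assuming the
    streams are independent, their likelihood ratios multiply.
    """
    v = 1.0
    for k, lam0, lam1 in zip(counts, baseline_rates, outbreak_rates):
        v *= poisson_pmf(k, lam1) / poisson_pmf(k, lam0)
    return v

# Two syndromic streams, both somewhat elevated above baseline:
v = value_of_evidence([8, 5], baseline_rates=[4.0, 2.0],
                      outbreak_rates=[8.0, 4.0])
```

A value of evidence above 1 favours the outbreak hypothesis; combining streams this way is what lets a system accumulate weak signals across syndromes.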
A Bayesian Hybrid Adaptive Randomisation Design for Clinical Trials with Survival Outcomes.
Moatti, M; Chevret, S; Zohar, S; Rosenberger, W F
2016-01-01
Response-adaptive randomisation designs have been proposed to improve the efficiency of phase III randomised clinical trials and improve the outcomes of the clinical trial population. In the setting of failure time outcomes, Zhang and Rosenberger (2007) developed a response-adaptive randomisation approach that targets an optimal allocation, based on a fixed sample size. The aim of this research is to propose a response-adaptive randomisation procedure for survival trials with an interim monitoring plan, based on the following optimal criterion: for fixed variance of the estimated log hazard ratio, what allocation minimizes the expected hazard of failure? We demonstrate the utility of the design by redesigning a clinical trial on multiple myeloma. To handle continuous monitoring of data, we propose a Bayesian response-adaptive randomisation procedure, where the log hazard ratio is the effect measure of interest. Combining the prior with the normal likelihood, the mean posterior estimate of the log hazard ratio allows derivation of the optimal target allocation. We perform a simulation study to assess and compare the performance of this proposed Bayesian hybrid adaptive design to those of fixed, sequential or adaptive - either frequentist or fully Bayesian - designs. Non-informative normal priors of the log hazard ratio were used, as well as a mixture of enthusiastic and skeptical priors. Stopping rules based on the posterior distribution of the log hazard ratio were computed. The method is then illustrated by redesigning a phase III randomised clinical trial of chemotherapy in patients with multiple myeloma, with a mixture of normal priors elicited from experts. As expected, there was a reduction in the proportion of observed deaths in the adaptive vs. non-adaptive designs; this reduction was maximized using a Bayes mixture prior, with no clear-cut improvement by using a fully Bayesian procedure.
The use of stopping rules allows a slight decrease in the observed proportion of deaths under the alternative hypothesis compared with the adaptive designs with no stopping rules. Such Bayesian hybrid adaptive survival trials may be promising alternatives to traditional designs, reducing the duration of survival trials, as well as addressing ethical concerns for patients enrolled in the trial.
A Method of Face Detection with Bayesian Probability
NASA Astrophysics Data System (ADS)
Sarker, Goutam
2010-10-01
The objective of face detection is to identify all images which contain a face, irrespective of its orientation, illumination conditions etc. This is a hard problem, because faces are highly variable in size, shape, lighting conditions, etc. Many methods have been designed and developed to detect faces in a single image. The present paper is based on one `Appearance Based Method' which relies on learning the facial and non facial features from image examples. This, in turn, is based on statistical analysis of examples and counter examples of facial images and employs the Bayesian Conditional Classification Rule to estimate the probability that a face (or non-face) is present within an image frame. The detection rate of the present system is very high, and thereby the numbers of false positive and false negative detections are substantially low.
Axial superresolution via multiangle TIRF microscopy with sequential imaging and photobleaching
Fu, Yan; Winter, Peter W.; Rojas, Raul; Wang, Victor; McAuliffe, Matthew; Patterson, George H.
2016-01-01
We report superresolution optical sectioning using a multiangle total internal reflection fluorescence (TIRF) microscope. TIRF images were constructed from several layers within a normal TIRF excitation zone by sequentially imaging and photobleaching the fluorescent molecules. The depth of the evanescent wave at different layers was altered by tuning the excitation light incident angle. The angle was tuned from the highest (the smallest TIRF depth) toward the critical angle (the largest TIRF depth) to preferentially photobleach fluorescence from the lower layers and allow straightforward observation of deeper structures without masking by the brighter signals closer to the coverglass. Reconstruction of the TIRF images enabled 3D imaging of biological samples with 20-nm axial resolution. Two-color imaging of epidermal growth factor (EGF) ligand and clathrin revealed the dynamics of EGF-activated clathrin-mediated endocytosis during internalization. Furthermore, Bayesian analysis of images collected during the photobleaching step of each plane enabled lateral superresolution (<100 nm) within each of the sections.
Hip fracture in the elderly: a re-analysis of the EPIDOS study with causal Bayesian networks.
Caillet, Pascal; Klemm, Sarah; Ducher, Michel; Aussem, Alexandre; Schott, Anne-Marie
2015-01-01
Hip fractures commonly result in permanent disability, institutionalization or death in the elderly. Existing hip-fracture predicting tools are underused in clinical practice, partly due to their lack of intuitive interpretation. By use of a graphical layer, Bayesian network models could increase the attractiveness of fracture prediction tools. Our aim was to study the potential contribution of a causal Bayesian network in this clinical setting. A logistic regression was performed as a standard control approach to check the robustness of the causal Bayesian network approach. EPIDOS is a multicenter study, conducted in an ambulatory care setting in five French cities between 1992 and 1996 and updated in 2010. The study included 7598 women aged 75 years or older, in which fractures were assessed quarterly during 4 years. A causal Bayesian network and a logistic regression were performed on EPIDOS data to describe major variables involved in hip fracture occurrence. Both models had similar association estimations and predictive performances. They detected gait speed and bone mineral density as the variables most involved in the fracture process. The causal Bayesian network showed that gait speed and bone mineral density were directly connected to fracture and seem to mediate the influence of all the other variables included in our model. The logistic regression approach detected multiple interactions involving psychotropic drug use, age and bone mineral density. Both approaches retrieved similar variables as predictors of hip fractures. However, the Bayesian network highlighted the whole web of relations among the variables involved in the analysis, suggesting a possible mechanism leading to hip fracture. According to the latter results, intervention focusing concomitantly on gait speed and bone mineral density may be necessary for an optimal prevention of hip fracture occurrence in elderly people.
Kan, Shun-Li; Yuan, Zhi-Fang; Ning, Guang-Zhi; Liu, Fei-Fei; Sun, Jing-Cheng; Feng, Shi-Qing
2016-11-01
Cervical disc arthroplasty (CDA) has been designed as a substitute for anterior cervical discectomy and fusion (ACDF) in the treatment of symptomatic cervical disc disease (CDD). Several researchers have compared CDA with ACDF for the treatment of symptomatic CDD; however, the findings of these studies are inconclusive. Using recently published evidence, this meta-analysis was conducted to further verify the benefits and harms of using CDA for treatment of symptomatic CDD. Relevant trials were identified by searching the PubMed, EMBASE, and Cochrane Library databases. Outcomes were reported as odds ratio or standardized mean difference. Both traditional frequentist and Bayesian approaches were used to synthesize evidence within random-effects models. Trial sequential analysis (TSA) was applied to test the robustness of our findings and obtain more conservative estimates. Nineteen trials were included. The findings of this meta-analysis demonstrated better overall, neck disability index (NDI), and neurological success; lower NDI and neck and arm pain scores; higher 36-Item Short Form Health Survey (SF-36) Physical Component Summary (PCS) and Mental Component Summary (MCS) scores; more patient satisfaction; greater range of motion at the operative level; and fewer secondary surgical procedures (all P < 0.05) in the CDA group compared with the ACDF group. CDA was not significantly different from ACDF in the rate of adverse events (P > 0.05). TSA of overall success suggested that the cumulative z-curve crossed both the conventional boundary and the trial sequential monitoring boundary for benefit, indicating sufficient and conclusive evidence had been ascertained. For treating symptomatic CDD, CDA was superior to ACDF in terms of overall, NDI, and neurological success; NDI and neck and arm pain scores; SF-36 PCS and MCS scores; patient satisfaction; ROM at the operative level; and secondary surgical procedures rate. 
Additionally, there was no significant difference between CDA and ACDF in the rate of adverse events. However, as the CDA procedure is a relatively newer operative technique, long-term results and evaluation are necessary before CDA is routinely used in clinical practice.
Statistical characteristics of the sequential detection of signals in correlated noise
NASA Astrophysics Data System (ADS)
Averochkin, V. A.; Baranov, P. E.
1985-10-01
A solution is given to the problem of determining the distribution of the duration of the sequential two-threshold Wald rule for the time-discrete detection of determinate and Gaussian correlated signals on a background of Gaussian correlated noise. Expressions are obtained for the joint probability densities of the likelihood ratio logarithms, and an analysis is made of the effect of correlation and SNR on the duration distribution and the detection efficiency. Comparison is made with Neyman-Pearson detection.
Learning Negotiation Policies Using IB3 and Bayesian Networks
NASA Astrophysics Data System (ADS)
Nalepa, Gislaine M.; Ávila, Bráulio C.; Enembreck, Fabrício; Scalabrin, Edson E.
This paper presents an intelligent offer policy in a negotiation environment, in which each agent involved learns the preferences of its opponent in order to improve its own performance. Each agent must also be able to detect drifts in the opponent's preferences so as to quickly adjust itself to their new offer policy. For this purpose, two simple learning techniques were first evaluated: (i) based on instances (IB3) and (ii) based on Bayesian Networks. Additionally, as it is known that, in theory, group learning produces better results than individual learning, the efficiency of IB3 and Bayesian classifier groups was also analyzed. Finally, each decision model was evaluated at moments of concept drift, whether gradual, moderate or abrupt. Results showed that both groups of classifiers were able to effectively detect drifts in the opponent's preferences.
Estimating the Effective Sample Size of Tree Topologies from Bayesian Phylogenetic Analyses
Lanfear, Robert; Hua, Xia; Warren, Dan L.
2016-01-01
Bayesian phylogenetic analyses estimate posterior distributions of phylogenetic tree topologies and other parameters using Markov chain Monte Carlo (MCMC) methods. Before making inferences from these distributions, it is important to assess their adequacy. To this end, the effective sample size (ESS) estimates how many truly independent samples of a given parameter the output of the MCMC represents. The ESS of a parameter is frequently much lower than the number of samples taken from the MCMC because sequential samples from the chain can be non-independent due to autocorrelation. Typically, phylogeneticists use a rule of thumb that the ESS of all parameters should be greater than 200. However, we have no method to calculate an ESS of tree topology samples, despite the fact that the tree topology is often the parameter of primary interest and is almost always central to the estimation of other parameters. That is, we lack a method to determine whether we have adequately sampled one of the most important parameters in our analyses. In this study, we address this problem by developing methods to estimate the ESS for tree topologies. We combine these methods with two new diagnostic plots for assessing posterior samples of tree topologies, and compare their performance on simulated and empirical data sets. Combined, the methods we present provide new ways to assess the mixing and convergence of phylogenetic tree topologies in Bayesian MCMC analyses.
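For a scalar parameter, the conventional autocorrelation-based ESS the abstract refers to can be sketched as below. This is the standard estimator for continuous MCMC output, not the authors' tree-topology method; the truncation rule (stop at the first non-positive sample autocorrelation) is one common convention among several.

```python
import random

def effective_sample_size(x, max_lag=None):
    """ESS = N / (1 + 2 * sum of sample autocorrelations), truncating
    the sum at the first non-positive autocorrelation."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    if var == 0.0:
        return float(n)
    tau = 1.0  # integrated autocorrelation time
    for lag in range(1, max_lag or n // 2):
        rho = sum((x[i] - mean) * (x[i + lag] - mean)
                  for i in range(n - lag)) / (n * var)
        if rho <= 0.0:
            break
        tau += 2.0 * rho
    return n / tau

# Nearly independent draws should give an ESS close to the chain length:
random.seed(3)
chain = [random.gauss(0.0, 1.0) for _ in range(2000)]
ess = effective_sample_size(chain)
```

A strongly autocorrelated chain, such as a slowly mixing MCMC run, yields an ESS far below the number of stored samples, which is exactly why the 200-ESS rule of thumb is checked before drawing inferences.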
NASA Astrophysics Data System (ADS)
Meillier, Céline; Chatelain, Florent; Michel, Olivier; Bacon, Roland; Piqueras, Laure; Bacher, Raphael; Ayasso, Hacheme
2016-04-01
We present SELFI, the Source Emission Line FInder, a new Bayesian method optimized for detection of faint galaxies in Multi Unit Spectroscopic Explorer (MUSE) deep fields. MUSE is the new panoramic integral field spectrograph at the Very Large Telescope (VLT) that has unique capabilities for spectroscopic investigation of the deep sky. It has provided data cubes with 324 million voxels over a single 1 arcmin² field of view. To address the challenge of faint-galaxy detection in these large data cubes, we developed a new method that processes 3D data either for modeling or for estimation and extraction of source configurations. This object-based approach yields a natural sparse representation of the sources in massive data fields, such as MUSE data cubes. In the Bayesian framework, the parameters that describe the observed sources are considered random variables. The Bayesian model leads to a general and robust algorithm where the parameters are estimated in a fully data-driven way. This detection algorithm was applied to the MUSE observation of Hubble Deep Field-South. With 27 h total integration time, these observations provide a catalog of 189 sources of various categories with secure redshifts. The algorithm retrieved 91% of the galaxies with only 9% false detection. This method also allowed the discovery of three new Lyα emitters and one [OII] emitter, all without any Hubble Space Telescope counterpart. We analyzed the reasons for failure for some targets, and found that the most important limitation of the method is when faint sources are located in the vicinity of bright spatially resolved galaxies that cannot be approximated by the Sérsic elliptical profile. The software and its documentation are available on the MUSE science web service (muse-vlt.eu/science).
NASA Astrophysics Data System (ADS)
Shyu, Mei-Ling; Huang, Zifang; Luo, Hongli
In recent years, pervasive computing infrastructures have greatly improved the interaction between humans and systems. As we put more reliance on these computing infrastructures, we also face threats of network intrusion and/or any new forms of undesirable IT-based activities. Hence, network security has become an extremely important issue, which is closely connected with homeland security, business transactions, and people's daily lives. Accurate and efficient intrusion detection technologies are required to safeguard the network systems and the critical information transmitted in the network systems. In this chapter, a novel network intrusion detection framework for mining and detecting sequential intrusion patterns is proposed. The proposed framework consists of a Collateral Representative Subspace Projection Modeling (C-RSPM) component for supervised classification, and an inter-transactional association rule mining method based on Layer Divided Modeling (LDM) for temporal pattern analysis. Experiments on the KDD99 data set and the traffic data set generated by a private LAN testbed show promising results with high detection rates, low processing time, and low false alarm rates in mining and detecting sequential intrusion patterns.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Xingyuan; Murakami, Haruko; Hahn, Melanie S.
2012-06-01
Tracer testing under natural or forced gradient flow holds the potential to provide useful information for characterizing subsurface properties, through monitoring, modeling and interpretation of the tracer plume migration in an aquifer. Non-reactive tracer experiments were conducted at the Hanford 300 Area, along with constant-rate injection tests and electromagnetic borehole flowmeter (EBF) profiling. A Bayesian data assimilation technique, the method of anchored distributions (MAD) [Rubin et al., 2010], was applied to assimilate the experimental tracer test data with the other types of data and to infer the three-dimensional heterogeneous structure of the hydraulic conductivity in the saturated zone of the Hanford formation. In this study, the Bayesian prior information on the underlying random hydraulic conductivity field was obtained from previous field characterization efforts using the constant-rate injection tests and the EBF data. The posterior distribution of the conductivity field was obtained by further conditioning the field on the temporal moments of tracer breakthrough curves at various observation wells. MAD was implemented with the massively-parallel three-dimensional flow and transport code PFLOTRAN to cope with the highly transient flow boundary conditions at the site and to meet the computational demands of MAD. A synthetic study proved that the proposed method could effectively invert tracer test data to capture the essential spatial heterogeneity of the three-dimensional hydraulic conductivity field. Application of MAD to actual field data shows that the hydrogeological model, when conditioned on the tracer test data, can reproduce the tracer transport behavior better than the field characterized without the tracer test data. This study successfully demonstrates that MAD can sequentially assimilate multi-scale multi-type field data through a consistent Bayesian framework.
Bayesian approach for peak detection in two-dimensional chromatography.
Vivó-Truyols, Gabriel
2012-03-20
A new method for peak detection in two-dimensional chromatography is presented. In a first step, the method starts with a conventional one-dimensional peak detection algorithm to detect modulated peaks. In a second step, a sophisticated algorithm is constructed to decide which of the individual one-dimensional peaks have been originated from the same compound and should then be arranged in a two-dimensional peak. The merging algorithm is based on Bayesian inference. The user sets prior information about certain parameters (e.g., second-dimension retention time variability, first-dimension band broadening, chromatographic noise). On the basis of these priors, the algorithm calculates the probability of myriads of peak arrangements (i.e., ways of merging one-dimensional peaks), finding which of them holds the highest value. Uncertainty in each parameter can be accounted for by suitably adapting its probability distribution function, which in turn may change the final decision of the most probable peak arrangement. It has been demonstrated that the Bayesian approach presented in this paper follows the chromatographers' intuition. The algorithm has been applied and tested with LC × LC and GC × GC data and takes around 1 min to process chromatograms with several thousands of peaks.
SIG-VISA: Signal-based Vertically Integrated Seismic Monitoring
NASA Astrophysics Data System (ADS)
Moore, D.; Mayeda, K. M.; Myers, S. C.; Russell, S.
2013-12-01
Traditional seismic monitoring systems rely on discrete detections produced by station processing software; however, while such detections may constitute a useful summary of station activity, they discard large amounts of information present in the original recorded signal. We present SIG-VISA (Signal-based Vertically Integrated Seismic Analysis), a system for seismic monitoring through Bayesian inference on seismic signals. By directly modeling the recorded signal, our approach incorporates additional information unavailable to detection-based methods, enabling higher sensitivity and more accurate localization using techniques such as waveform matching. SIG-VISA's Bayesian forward model of seismic signal envelopes includes physically-derived models of travel times and source characteristics as well as Gaussian process (kriging) statistical models of signal properties that combine interpolation of historical data with extrapolation of learned physical trends. Applying Bayesian inference, we evaluate the model on earthquakes as well as the 2009 DPRK test event, demonstrating a waveform matching effect as part of the probabilistic inference, along with results on event localization and sensitivity. In particular, we demonstrate increased sensitivity from signal-based modeling, in which the SIG-VISA signal model finds statistical evidence for arrivals even at stations for which the IMS station processing failed to register any detection.
A Comparison of the β-Substitution Method and a Bayesian Method for Analyzing Left-Censored Data
Huynh, Tran; Quick, Harrison; Ramachandran, Gurumurthy; Banerjee, Sudipto; Stenzel, Mark; Sandler, Dale P.; Engel, Lawrence S.; Kwok, Richard K.; Blair, Aaron; Stewart, Patricia A.
2016-01-01
Classical statistical methods for analyzing exposure data with values below the detection limits are well described in the occupational hygiene literature, but an evaluation of a Bayesian approach for handling such data is currently lacking. Here, we first describe a Bayesian framework for analyzing censored data. We then present the results of a simulation study conducted to compare the β-substitution method with a Bayesian method for exposure datasets drawn from lognormal distributions and mixed lognormal distributions with varying sample sizes, geometric standard deviations (GSDs), and censoring for single and multiple limits of detection. For each set of factors, estimates for the arithmetic mean (AM), geometric mean, GSD, and the 95th percentile (X0.95) of the exposure distribution were obtained. We evaluated the performance of each method using relative bias, the root mean squared error (rMSE), and coverage (the proportion of the computed 95% uncertainty intervals containing the true value). The Bayesian method using non-informative priors and the β-substitution method were generally comparable in bias and rMSE when estimating the AM and GM. For the GSD and the 95th percentile, the Bayesian method with non-informative priors was more biased and had a higher rMSE than the β-substitution method, but use of more informative priors generally improved the Bayesian method’s performance, making both the bias and the rMSE more comparable to the β-substitution method. An advantage of the Bayesian method is that it provided estimates of uncertainty for these parameters of interest and good coverage, whereas the β-substitution method only provided estimates of uncertainty for the AM, and coverage was not as consistent. Selection of one or the other method depends on the needs of the practitioner, the availability of prior information, and the distribution characteristics of the measurement data. 
We suggest the use of Bayesian methods if the practitioner has the computational resources and prior information, as the method would generally provide accurate estimates and also provides the distributions of all of the parameters, which could be useful for making decisions in some applications.
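The Bayesian treatment of left-censored data described above can be illustrated with a minimal sketch, assuming a lognormal exposure model (so the log-transformed data are normal), flat priors on the mean and log standard deviation, and a simple random-walk Metropolis sampler. This is not the paper's implementation, and all names are hypothetical; the key point is how the likelihood uses the normal CDF mass below each detection limit for the censored observations.

```python
import math
import random

def norm_logpdf(z):
    return -0.5 * z * z - 0.5 * math.log(2.0 * math.pi)

def norm_logcdf(z):
    c = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return math.log(c) if c > 0.0 else -math.inf

def censored_loglik(mu, sigma, detects, log_dls):
    """Log-likelihood on the log scale: exact densities for detected
    values, CDF mass below the limit for each censored observation."""
    ll = 0.0
    for y in detects:
        ll += norm_logpdf((y - mu) / sigma) - math.log(sigma)
    for d in log_dls:
        ll += norm_logcdf((d - mu) / sigma)
    return ll

def metropolis(detects, log_dls, n=8000, step=0.15):
    """Random-walk Metropolis over (mu, log sigma) with flat priors."""
    mu, logsig = 0.0, 0.0
    cur = censored_loglik(mu, math.exp(logsig), detects, log_dls)
    chain = []
    for _ in range(n):
        mu_p = mu + random.gauss(0.0, step)
        ls_p = logsig + random.gauss(0.0, step)
        prop = censored_loglik(mu_p, math.exp(ls_p), detects, log_dls)
        if math.log(1.0 - random.random()) < prop - cur:
            mu, logsig, cur = mu_p, ls_p, prop
        chain.append((mu, math.exp(logsig)))
    return chain

# Simulated log-exposures from N(1.0, 0.5) with a detection limit of 0.7
# on the log scale (roughly a quarter of values censored):
random.seed(4)
data = [random.gauss(1.0, 0.5) for _ in range(80)]
detects = [y for y in data if y > 0.7]
log_dls = [0.7] * (len(data) - len(detects))
chain = metropolis(detects, log_dls)
```

Unlike substitution methods, the posterior chain directly yields uncertainty intervals for every parameter of interest (GM, GSD, percentiles), which is the advantage the comparison above highlights.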
Modeling haul-out behavior of walruses in Bering Sea ice
Udevitz, M.S.; Jay, C.V.; Fischbach, Anthony S.; Garlich-Miller, J. L.
2009-01-01
Understanding haul-out behavior of ice-associated pinnipeds is essential for designing and interpreting population surveys and for assessing effects of potential changes in their ice environments. We used satellite-linked transmitters to obtain sequential information about location and haul-out state for Pacific walruses, Odobenus rosmarus divergens (Illiger, 1815), in the Bering Sea during April of 2004, 2005, and 2006. We used these data in a generalized mixed model of haul-out bout durations and a hierarchical Bayesian model of haul-out probabilities to assess factors related to walrus haul-out behavior, and provide the first predictive model of walrus haul-out behavior in sea ice habitat. Average haul-out bout duration was 9 h, but durations of haul-out bouts tended to increase with durations of preceding in-water bouts. On average, tagged walruses spent only about 17% of their time hauled out on sea ice. Probability of being hauled out decreased with wind speed, increased with temperature, and followed a diurnal cycle with the highest values in the evening. Our haul-out probability model can be used to estimate the proportion of the population that is unavailable for detection in spring surveys of Pacific walruses on sea ice.
NASA Astrophysics Data System (ADS)
Olsen, S.; Zaliapin, I.
2008-12-01
We establish positive correlation between the local spatio-temporal fluctuations of the earthquake magnitude distribution and the occurrence of regional earthquakes. In order to accomplish this goal, we develop a sequential Bayesian statistical estimation framework for the b-value (slope of the Gutenberg-Richter's exponential approximation to the observed magnitude distribution) and for the ratio a(t) between the earthquake intensities in two non-overlapping magnitude intervals. The time-dependent dynamics of these parameters is analyzed using Markov Chain Models (MCM). The main advantage of this approach over the traditional window-based estimation is its "soft" parameterization, which allows one to obtain stable results with realistically small samples. We furthermore discuss a statistical methodology for establishing lagged correlations between continuous and point processes. The developed methods are applied to the observed seismicity of California, Nevada, and Japan on different temporal and spatial scales. We report an oscillatory dynamics of the estimated parameters, and find that the detected oscillations are positively correlated with the occurrence of large regional earthquakes, as well as with small events with magnitudes as low as 2.5. The reported results have important implications for further development of earthquake prediction and seismic hazard assessment methods.
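The sequential Bayesian b-value estimation described above can be illustrated with a conjugate sketch. This is not the authors' Markov Chain Model implementation: it assumes magnitudes above the completeness level follow an exponential distribution with rate beta = b·ln(10) (the continuous form of the Gutenberg-Richter law) and updates a Gamma prior on beta one event at a time, which is what makes stable estimates possible from realistically small samples.

```python
import math
import random

def sequential_b_value(mags, m_c, alpha0=1.0, lambda0=1.0):
    """Sequentially update a conjugate Gamma posterior for the G-R slope.

    Magnitude excesses (m - m_c) above the completeness level m_c are
    modelled as exponential with rate beta = b * ln(10); a
    Gamma(alpha0, lambda0) prior on beta is updated one event at a
    time.  Returns the posterior-mean b-value after each event.
    """
    alpha, rate = alpha0, lambda0
    b_path = []
    for m in mags:
        alpha += 1.0          # one more observed event
        rate += m - m_c       # accumulate magnitude excess
        b_path.append((alpha / rate) / math.log(10.0))
    return b_path

# Synthetic catalog with a true b-value of 1.0 above completeness 3.0:
random.seed(2)
mags = [3.0 + random.expovariate(math.log(10.0)) for _ in range(5000)]
b_path = sequential_b_value(mags, m_c=3.0)
```

Because the posterior is updated event by event, the same machinery produces the time-dependent b-value dynamics whose fluctuations the study correlates with regional earthquake occurrence.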
Entanglement-enhanced Neyman-Pearson target detection using quantum illumination
NASA Astrophysics Data System (ADS)
Zhuang, Quntao; Zhang, Zheshen; Shapiro, Jeffrey H.
2017-08-01
Quantum illumination (QI) provides entanglement-based target detection---in an entanglement-breaking environment---whose performance is significantly better than that of optimum classical-illumination target detection. QI's performance advantage was established in a Bayesian setting with the target presumed equally likely to be absent or present and error probability employed as the performance metric. Radar theory, however, eschews that Bayesian approach, preferring the Neyman-Pearson performance criterion to avoid the difficulties of accurately assigning prior probabilities to target absence and presence and appropriate costs to false-alarm and miss errors. We have recently reported an architecture---based on sum-frequency generation (SFG) and feedforward (FF) processing---for minimum error-probability QI target detection with arbitrary prior probabilities for target absence and presence. In this paper, we use our results for FF-SFG reception to determine the receiver operating characteristic---detection probability versus false-alarm probability---for optimum QI target detection under the Neyman-Pearson criterion.
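For orientation, the receiver operating characteristic the abstract refers to has a simple closed form in the textbook Gaussian mean-shift detection problem: Pd = Q(Q⁻¹(Pfa) − d). The sketch below illustrates only that classical ROC relation; it does not reproduce the QI receiver statistics derived in the paper:

```python
from statistics import NormalDist

def roc_point(pfa, d):
    """Detection probability at false-alarm probability pfa for a
    threshold test on a unit-variance Gaussian statistic whose mean
    shifts by d when the target is present: Pd = Q(Q^-1(Pfa) - d).
    Textbook Neyman-Pearson ROC, not the QI receiver's."""
    nd = NormalDist()
    threshold = nd.inv_cdf(1.0 - pfa)    # Q^-1(pfa)
    return 1.0 - nd.cdf(threshold - d)   # Q(threshold - d)

pd = roc_point(pfa=0.01, d=3.0)
```

Sweeping pfa from 0 to 1 traces the full ROC curve for a given effective signal-to-noise ratio d.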
A detailed description of the sequential probability ratio test for 2-IMU FDI
NASA Technical Reports Server (NTRS)
Rich, T. M.
1976-01-01
The sequential probability ratio test (SPRT) for 2-IMU FDI (inertial measuring unit failure detection/isolation) is described. The SPRT is a statistical technique for detecting and isolating soft IMU failures originally developed for the strapdown inertial reference unit. The flowchart of a subroutine incorporating the 2-IMU SPRT is included.
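The core of any SPRT, including the 2-IMU version described above, is a running log-likelihood ratio compared against two Wald thresholds. A minimal sketch for a Gaussian residual model follows; the means, noise level, and error rates are illustrative stand-ins, not the flight values from the report:

```python
import math

def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Wald sequential probability ratio test on Gaussian samples.
    Returns ('H1', n) when the cumulative log-likelihood ratio crosses
    the upper threshold (failure declared), ('H0', n) on the lower
    threshold, or ('continue', n) if the data run out undecided."""
    upper = math.log((1.0 - beta) / alpha)
    lower = math.log(beta / (1.0 - alpha))
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # log-likelihood ratio of one Gaussian sample under H1 vs H0
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "continue", len(samples)

decision, n_used = sprt([1.2, 0.9, 1.4, 1.1, 0.8, 1.3, 1.0, 1.2])
```

The thresholds depend only on the chosen false-alarm rate alpha and missed-detection rate beta, which is what makes the SPRT attractive for soft-failure detection: the sample size is not fixed in advance.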
Sadeque, Farig; Xu, Dongfang; Bethard, Steven
2017-01-01
The 2017 CLEF eRisk pilot task focuses on automatically detecting depression as early as possible from a user's posts to Reddit. In this paper we present the techniques employed for the University of Arizona team's participation in this early risk detection shared task. We leveraged external information beyond the small training set, including a preexisting depression lexicon and concepts from the Unified Medical Language System as features. For prediction, we used both sequential (recurrent neural network) and non-sequential (support vector machine) models. Our models perform decently on the test data, and the recurrent neural models perform better than the non-sequential support vector machines when using the same feature sets. PMID:29075167
Bayesian Community Detection in the Space of Group-Level Functional Differences
Venkataraman, Archana; Yang, Daniel Y.-J.; Pelphrey, Kevin A.; Duncan, James S.
2017-01-01
We propose a unified Bayesian framework to detect both hyper- and hypo-active communities within whole-brain fMRI data. Specifically, our model identifies dense subgraphs that exhibit population-level differences in functional synchrony between a control and clinical group. We derive a variational EM algorithm to solve for the latent posterior distributions and parameter estimates, which subsequently inform us about the afflicted network topology. We demonstrate that our method provides valuable insights into the neural mechanisms underlying social dysfunction in autism, as verified by the Neurosynth meta-analytic database. In contrast, both univariate testing and community detection via recursive edge elimination fail to identify stable functional communities associated with the disorder. PMID:26955022
Liu, Xiaoxia; Tian, Miaomiao; Camara, Mohamed Amara; Guo, Liping; Yang, Li
2015-10-01
We present sequential CE analysis of amino acids and an L-asparaginase-catalyzed enzyme reaction, combining on-line derivatization, optically gated (OG) injection, and commercially available UV/Vis detection. Various experimental conditions for sequential OG-UV/Vis CE analysis were investigated and optimized by analyzing a standard mixture of amino acids. High reproducibility of the sequential CE analysis was demonstrated, with RSD values (n = 20) of 2.23, 2.57, and 0.70% for peak heights, peak areas, and migration times, respectively, and LODs of 5.0 μM (asparagine) and 2.0 μM (aspartic acid) were obtained. With the application of OG-UV/Vis CE analysis, a sequential online CE enzyme assay of the L-asparaginase-catalyzed reaction was carried out by automatically and continuously monitoring the substrate consumption and the product formation every 12 s from the beginning to the end of the reaction. The Michaelis constants for the reaction were obtained and found to be in good agreement with the results of traditional off-line enzyme assays. The study demonstrates the feasibility and reliability of integrating OG injection with UV/Vis detection for sequential online CE analysis, which could be of potential value for online monitoring of various chemical reactions and bioprocesses. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Detection of multiple damages employing best achievable eigenvectors under Bayesian inference
NASA Astrophysics Data System (ADS)
Prajapat, Kanta; Ray-Chaudhuri, Samit
2018-05-01
A novel approach is presented in this work to simultaneously localize multiple damaged elements in a structure and estimate the damage severity of each. For the detection of damaged elements, a formulation based on best achievable eigenvectors has been derived. To deal with noisy data, Bayesian inference is employed in the formulation, wherein the likelihood is formed from the errors between the best achievable eigenvectors and the measured modes. In this approach, the most probable damage locations are evaluated under Bayesian inference by generating combinations of the various possible damaged elements. Once damage locations are identified, damage severities are estimated using Bayesian inference with Markov chain Monte Carlo simulation. The efficiency of the proposed approach is demonstrated through a numerical study involving a 12-story shear building. It was found that damage scenarios involving as little as 10% loss of stiffness in multiple elements are accurately determined (localized and severities quantified) even when modal data contaminated with 2% noise are utilized. Further, this study introduces a term, parameter impact (evaluated from the sensitivity of modal parameters to structural parameters), to decide the suitability of selecting a particular mode if some idea about the damaged elements is available. It is demonstrated that the accuracy and efficiency of the Bayesian quantification algorithm increase if damage localization is carried out a priori. An experimental study involving a laboratory-scale shear building and different stiffness modification scenarios shows that the proposed approach is efficient enough to localize the stories with stiffness modification.
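The MCMC quantification step can be illustrated on a toy one-parameter version of the problem: a random-walk Metropolis sampler for a single stiffness-loss fraction, with one measured natural frequency as data. The frequency model f = f0·sqrt(1 − theta), the noise level, and all constants below are hypothetical stand-ins for the paper's multi-element formulation:

```python
import math
import random

def mh_damage_posterior(f_meas, f0=10.0, sigma=0.05,
                        n_iter=20000, step=0.05, seed=0):
    """Random-walk Metropolis sampling of a single stiffness-loss
    fraction theta in (0, 1).  The measured natural frequency is
    modelled as f0 * sqrt(1 - theta) plus Gaussian noise, with a flat
    prior on theta.  Returns the posterior-mean theta after burn-in."""
    rng = random.Random(seed)

    def log_post(theta):
        if not 0.0 < theta < 1.0:
            return -math.inf          # flat prior support
        predicted = f0 * math.sqrt(1.0 - theta)
        return -0.5 * ((f_meas - predicted) / sigma) ** 2

    theta, lp = 0.5, log_post(0.5)
    samples = []
    for _ in range(n_iter):
        proposal = theta + rng.gauss(0.0, step)
        lp_new = log_post(proposal)
        # accept with probability min(1, exp(lp_new - lp))
        if rng.random() < math.exp(min(0.0, lp_new - lp)):
            theta, lp = proposal, lp_new
        samples.append(theta)
    kept = samples[n_iter // 4:]      # discard burn-in
    return sum(kept) / len(kept)

# frequency consistent with a 10% stiffness loss
theta_hat = mh_damage_posterior(f_meas=10.0 * math.sqrt(0.9))
```

The real problem replaces the scalar frequency with multiple measured modes and a vector of element-wise stiffness parameters, but the accept/reject mechanics are the same.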
The image recognition based on neural network and Bayesian decision
NASA Astrophysics Data System (ADS)
Wang, Chugege
2018-04-01
The artificial neural network originated in the 1940s and is an important part of artificial intelligence. At present, it has become a hot topic in the fields of neuroscience, computer science, brain science, mathematics, and psychology. Thomas Bayes first reported Bayesian theory in 1763. After development through the twentieth century, it has become widespread in all areas of statistics. In recent years, owing to the solution of the problem of high-dimensional integral calculation, Bayesian statistics has been improved theoretically, solving many problems that classical statistics cannot, and it has been applied in interdisciplinary fields. In this paper, the related concepts and principles of the artificial neural network are introduced. The paper also summarizes the basic content and principles of Bayesian statistics, combines artificial neural network technology with Bayesian decision theory, and implements them in several aspects of image recognition, such as an enhanced face detection method based on a neural network and Bayesian decision, as well as image classification based on Bayesian decision. It can be seen that the combination of artificial intelligence and statistical algorithms has long been a hot research topic.
Nakashima, Ryoichi; Komori, Yuya; Maeda, Eriko; Yoshikawa, Takeharu; Yokosawa, Kazuhiko
2016-01-01
Although viewing multiple stacks of medical images presented on a display is a relatively new but useful medical task, little is known about it. In particular, it is unclear how radiologists search for lesions in this type of image reading. When viewing cluttered and dynamic displays, continuous motion itself does not capture attention. Target detection is therefore aided when observers' attention is captured by the onset signal of a suddenly appearing target among continuously moving distractors (i.e., a passive viewing strategy). This can be applied to stack viewing tasks, because lesions often show up as transient signals in medical images that are presented sequentially, simulating a dynamic and smoothly transforming image progression of organs. However, it is unclear whether observers can detect a target when it appears at the beginning of a sequential presentation, where the global apparent-motion onset signal (i.e., the signal of the initiation of apparent motion by sequential presentation) occurs. We investigated the ability of radiologists to detect lesions during such tasks by comparing the performance of radiologists and novices. Results show that the overall performance of radiologists is better than that of novices. Furthermore, the temporal location of lesions in CT image sequences, i.e., when a lesion appears in an image sequence, does not affect the performance of radiologists, whereas it does affect the performance of novices. Novices have greater difficulty detecting a lesion that appears early rather than late in the image sequence. We suggest that radiologists have mechanisms, which novices lack, for detecting lesions in medical images with little attention. This ability is critically important when viewing rapid sequential presentations of multiple CT images, such as stack viewing tasks.
Three-body dissociation of OCS3+: Separating sequential and concerted pathways
NASA Astrophysics Data System (ADS)
Kumar, Herendra; Bhatt, Pragya; Safvan, C. P.; Rajput, Jyoti
2018-02-01
Events from the sequential and concerted modes of the fragmentation of OCS3+ that result in coincident detection of fragments C+, O+, and S+ have been separated using a newly proposed representation. An ion beam of 1.8 MeV Xe9+ is used to make the triply charged molecular ion, with the fragments being detected by a recoil ion momentum spectrometer. By separating events belonging exclusively to the sequential mode of breakup, the electronic states of the intermediate molecular ion (CO2+ or CS2+) involved are determined, and from the kinetic energy release spectra, it is shown that the low lying excited states of the parent OCS3+ are responsible for this mechanism. An estimate of branching ratios of events coming from sequential versus concerted mode is presented.
Tom, Jennifer A; Sinsheimer, Janet S; Suchard, Marc A
Massive datasets in the gigabyte and terabyte range combined with the availability of increasingly sophisticated statistical tools yield analyses at the boundary of what is computationally feasible. Compromising in the face of this computational burden by partitioning the dataset into more tractable sizes results in stratified analyses, removed from the context that justified the initial data collection. In a Bayesian framework, these stratified analyses generate intermediate realizations, often compared using point estimates that fail to account for the variability within and correlation between the distributions these realizations approximate. However, although the initial concession to stratify generally precludes the more sensible analysis using a single joint hierarchical model, we can circumvent this outcome and capitalize on the intermediate realizations by extending the dynamic iterative reweighting MCMC algorithm. In doing so, we reuse the available realizations by reweighting them with importance weights, recycling them into a now tractable joint hierarchical model. We apply this technique to intermediate realizations generated from stratified analyses of 687 influenza A genomes spanning 13 years allowing us to revisit hypotheses regarding the evolutionary history of influenza within a hierarchical statistical framework.
NASA Astrophysics Data System (ADS)
Chen, Xinjia; Lacy, Fred; Carriere, Patrick
2015-05-01
Sequential test algorithms are playing increasingly important roles in quickly detecting network intrusions such as port scanners. Because such algorithms are usually analyzed through intuitive approximation or asymptotic analysis, we develop an exact computational method for their performance analysis. Our method can be used to calculate the probability of false alarm and the average detection time to arbitrary pre-specified accuracy.
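The exact-computation idea can be illustrated for a Bernoulli SPRT: because the test statistic can take only finitely many values at each step, the false-alarm probability of a truncated test can be obtained by dynamic programming over those values rather than by approximation. The parameters below are illustrative; the paper's algorithms are more general:

```python
import math

def exact_false_alarm(p0, p1, alpha, beta, max_n):
    """Exact false-alarm probability (declaring H1 while H0 is true) of
    a Bernoulli SPRT truncated at max_n observations, computed by
    dynamic programming over the reachable log-likelihood-ratio
    values and their probabilities under H0."""
    upper = math.log((1.0 - beta) / alpha)
    lower = math.log(beta / (1.0 - alpha))
    inc_success = math.log(p1 / p0)
    inc_failure = math.log((1.0 - p1) / (1.0 - p0))
    states = {0.0: 1.0}              # LLR value -> probability under H0
    p_false_alarm = 0.0
    for _ in range(max_n):
        nxt = {}
        for llr, prob in states.items():
            for inc, p in ((inc_success, p0), (inc_failure, 1.0 - p0)):
                v, q = llr + inc, prob * p
                if v >= upper:
                    p_false_alarm += q       # absorbed at the H1 boundary
                elif v > lower:
                    key = round(v, 12)       # merge numerically equal states
                    nxt[key] = nxt.get(key, 0.0) + q
        states = nxt
    return p_false_alarm

fa = exact_false_alarm(p0=0.1, p1=0.3, alpha=0.05, beta=0.05, max_n=50)
```

At step n there are at most n + 1 distinct LLR values (one per success count), so the recursion is cheap and, unlike Wald's approximations, accounts exactly for threshold overshoot and truncation.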
NASA Astrophysics Data System (ADS)
Gao, J.; Lythe, M. B.
1996-06-01
This paper presents the principle of the Maximum Cross-Correlation (MCC) approach in detecting translational motions within dynamic fields from time-sequential remotely sensed images. A C program implementing the approach is presented and illustrated in a flowchart. The program is tested with a pair of sea-surface temperature images derived from Advanced Very High Resolution Radiometer (AVHRR) images near East Cape, New Zealand. Results show that the mean currents in the region have been detected satisfactorily with the approach.
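A minimal sketch of the MCC idea, in Python rather than the paper's C, is shown below: normalise a template window from the first image and find the displacement that maximises its correlation with windows of the second image. The window and search sizes are illustrative choices, not the paper's:

```python
import numpy as np

def mcc_displacement(img1, img2, y, x, tpl=8, search=4):
    """Maximum Cross-Correlation: displacement of the tpl x tpl window
    at (y, x) in img1, found by maximising the normalised
    cross-correlation over a +/- search pixel neighbourhood in img2."""
    t = img1[y:y + tpl, x:x + tpl].astype(float)
    t = (t - t.mean()) / (t.std() + 1e-12)       # zero mean, unit std
    best_r, best = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            w = img2[y + dy:y + dy + tpl, x + dx:x + dx + tpl].astype(float)
            w = (w - w.mean()) / (w.std() + 1e-12)
            r = (t * w).mean()                   # correlation coefficient
            if r > best_r:
                best_r, best = r, (dy, dx)
    return best, best_r

# synthetic check: shift a random field by (2, -1) and recover the shift
rng = np.random.default_rng(1)
field = rng.random((32, 32))
shifted = np.roll(np.roll(field, 2, axis=0), -1, axis=1)
(dy, dx), r = mcc_displacement(field, shifted, y=12, x=12)
```

Applied to a grid of windows on two sequential SST images, the recovered displacements divided by the time separation give the surface current field.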
NASA Astrophysics Data System (ADS)
Chaudhary, A.; Payne, T.; Kinateder, K.; Dao, P.; Beecher, E.; Boone, D.; Elliott, B.
The objective of on-line flagging in this paper is to perform interactive assessment of geosynchronous satellite anomalies, such as cross-tagging of satellites in a cluster, solar panel offset changes, etc. This assessment will utilize a Bayesian belief propagation procedure and will include automated updating of baseline signature data for the satellite, while accounting for seasonal changes. Its purpose is to enable an ongoing, automated assessment of satellite behavior through the satellite's life cycle, using the photometry data collected during the synoptic search performed by a ground- or space-based sensor as part of its metrics mission. Changes in satellite features will be reported along with the probabilities of Type I and Type II errors. The objective of adaptive sequential hypothesis testing in this paper is to define future sensor tasking for the purpose of characterizing fine features of the satellite. The tasking will be designed to maximize new information with the fewest photometry data points collected during the synoptic search by a ground- or space-based sensor. Its calculation is based on information entropy techniques. The tasking is defined by considering a sequence of hypotheses regarding the fine features of the satellite. The optimal observation conditions are then ordered so as to maximize new information about a chosen fine feature. The combined objective of on-line flagging and adaptive sequential hypothesis testing is to progressively discover new information about the features of a geosynchronous satellite by leveraging the regular but sparse cadence of data collection during the synoptic search performed by a ground- or space-based sensor.
Automated Algorithm to Detect Changes in Geostationary Satellite's Configuration and Cross-Tagging
Phan Dao, Air Force Research Laboratory/RVB
By characterizing geostationary satellites based on photometry and color photometry, analysts can evaluate satellite operational status and affirm a satellite's true identity. The process of ingesting photometry data and deriving satellite physical characteristics can be directed by analysts in a batch mode, meaning using a batch of recent data, or by automated algorithms in an on-line mode in which the assessment is updated with each new data point. Tools used for detecting changes to a satellite's status or identity, whether operated with a human in the loop or as automated algorithms, are generally not built to detect with minimum latency and traceable confidence intervals. To alleviate those deficiencies, we investigate the use of Hidden Markov Models (HMM), in a Bayesian network framework, to infer the hidden state (changed or unchanged) of a three-axis stabilized geostationary satellite using broadband and color photometry. Unlike frequentist statistics, which exploit only the stationary statistics of the observables in the database, HMM exploit the temporal pattern of the observables as well. The algorithm also operates in a "learning" mode to gradually evolve the HMM and accommodate natural changes, such as the seasonal dependence of a GEO satellite's light curve. Our technique is designed to operate with missing color data. The version that ingests both panchromatic and color data can accommodate gaps in color photometry data. That attribute is important because, while color indices, e.g., Johnson R and B, enhance the belief (probability) of a hidden state, in real-world situations flux data are collected sporadically in an untasked collect, and color data are limited and sometimes absent. Fluxes are measured with experimental error, whose effect on the algorithm will be studied.
Photometry data in the AFRL Geo Color Photometry Catalog and the Geo Observations with Latitudinal Diversity Simultaneously (GOLDS) data sets are used to simulate a wide variety of operational changes and identity cross-tags. The algorithm is tested against simulated sequences of observed magnitudes, mimicking the cadence of untasked SSN and other ground sensors, occasional operational changes, and the possible occurrence of cross-tags of in-cluster satellites. We would like to show that the on-line algorithm can detect change, sometimes immediately after the first post-change data point is analyzed, i.e., with zero latency. We also want to show the unsupervised "learning" capability that allows the HMM to evolve with time without user assistance. For example, users are not required to "label" the true state of the data points.
NASA Astrophysics Data System (ADS)
Giannini, Valentina; Vignati, Anna; Mazzetti, Simone; De Luca, Massimo; Bracco, Christian; Stasi, Michele; Russo, Filippo; Armando, Enrico; Regge, Daniele
2013-02-01
Prostate specific antigen (PSA)-based screening reduces the rate of death from prostate cancer (PCa) by 31%, but this benefit is associated with a high risk of overdiagnosis and overtreatment. As prostate transrectal ultrasound-guided biopsy, the standard procedure for prostate histological sampling, has a sensitivity of 77% and a considerable false-negative rate, more accurate methods need to be found to detect or rule out significant disease. Prostate magnetic resonance imaging has the potential to improve the specificity of PSA-based screening scenarios as a non-invasive detection tool, in particular by exploiting the combination of anatomical and functional information in a multiparametric framework. The purpose of this study was to describe a computer aided diagnosis (CAD) method that automatically produces a malignancy likelihood map by combining information from dynamic contrast enhanced MR images and diffusion weighted images. The CAD system consists of multiple sequential stages, from a preliminary registration of images of different sequences, in order to correct for susceptibility deformation and/or movement artifacts, to a Bayesian classifier, which fuses all the extracted features into a probability map. The promising results (AUROC=0.87) should be validated on a larger dataset, but they suggest that discrimination on a voxel basis between benign and malignant tissues is feasible with good performance. This method can be of benefit in improving the diagnostic accuracy of the radiologist, reducing reader variability, and speeding up the reading time, automatically highlighting probable cancer-suspicious regions.
Bayesian inference of a historical bottleneck in a heavily exploited marine mammal.
Hoffman, J I; Grant, S M; Forcada, J; Phillips, C D
2011-10-01
Emerging Bayesian analytical approaches offer increasingly sophisticated means of reconstructing historical population dynamics from genetic data, but have been little applied to scenarios involving demographic bottlenecks. Consequently, we analysed a large mitochondrial and microsatellite dataset from the Antarctic fur seal Arctocephalus gazella, a species subjected to one of the most extreme examples of uncontrolled exploitation in history when it was reduced to the brink of extinction by the sealing industry during the late eighteenth and nineteenth centuries. Classical bottleneck tests, which exploit the fact that rare alleles are rapidly lost during demographic reduction, yielded ambiguous results. In contrast, a strong signal of recent demographic decline was detected using both Bayesian skyline plots and Approximate Bayesian Computation, the latter also allowing derivation of posterior parameter estimates that were remarkably consistent with historical observations. This was achieved using only contemporary samples, further emphasizing the potential of Bayesian approaches to address important problems in conservation and evolutionary biology. © 2011 Blackwell Publishing Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bojechko, Casey; Phillps, Mark; Kalet, Alan
Purpose: Complex treatments in radiation therapy require robust verification in order to prevent errors that can adversely affect the patient. For this purpose, the authors estimate the effectiveness of detecting errors with a "defense in depth" system composed of electronic portal imaging device (EPID) based dosimetry and a software-based system composed of rules-based and Bayesian network verifications. Methods: The authors analyzed incidents with a high potential severity score, scored as a 3 or 4 on a 4-point scale, recorded in an in-house voluntary incident reporting system, collected from February 2012 to August 2014. The incidents were categorized into different failure modes. The detectability, defined as the number of detectable incidents divided by the total number of incidents, was calculated for each failure mode. Results: In total, 343 incidents were used in this study. Of the incidents, 67% were related to photon external beam therapy (EBRT). The majority of the EBRT incidents were related to patient positioning, and only a small number of these could be detected by EPID dosimetry when performed prior to treatment (6%). A large fraction could be detected by in vivo dosimetry performed during the first fraction (74%). Rules-based and Bayesian network verifications were found to be complementary to EPID dosimetry, able to detect errors related to patient prescriptions and documentation, and errors unrelated to photon EBRT. Combining all of the verification steps together, 91% of all EBRT incidents could be detected. Conclusions: This study shows that the defense in depth system is potentially able to detect a large majority of incidents. The most effective EPID-based dosimetry verification is in vivo measurement during the first fraction, complemented by rules-based and Bayesian network plan checking.
Sequential updating of a new dynamic pharmacokinetic model for caffeine in premature neonates.
Micallef, Sandrine; Amzal, Billy; Bach, Véronique; Chardon, Karen; Tourneux, Pierre; Bois, Frédéric Y
2007-01-01
Caffeine treatment is widely used in nursing care to reduce the risk of apnoea in premature neonates. To check the therapeutic efficacy of the treatment against apnoea, caffeine concentration in blood is an important indicator. The present study was aimed at building a pharmacokinetic model as a basis for a medical decision support tool. In the proposed model, time dependence of physiological parameters is introduced to describe the rapid growth of neonates. To take into account the large variability in the population, the pharmacokinetic model is embedded in a population structure. The whole model is inferred within a Bayesian framework. To update caffeine concentration predictions as data from an incoming patient are collected, we propose a fast method that can be used in a medical context. This involves sequential updating of model parameters (at the individual and population levels) via a stochastic particle algorithm. Our model provides better predictions than those obtained with previously published models. We show, through an example, that sequential updating improves predictions of caffeine concentration in blood (reducing bias and the length of credibility intervals). The updating of the pharmacokinetic model using body mass and caffeine concentration data is studied; it shows how informative caffeine concentration data are in contrast to body mass data. This study provides the methodological basis to predict caffeine concentration in blood after a given treatment if data are collected on the treated neonate.
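The sequential particle updating can be sketched on a deliberately simplified one-compartment model: particles drawn from the prior over the elimination rate are re-weighted by the Gaussian likelihood of each new concentration measurement as it arrives. The kinetic model, constants, and noise level below are illustrative stand-ins for the paper's full population pharmacokinetic model:

```python
import math
import random

def sequential_particle_update(times, concs, dose=10.0, volume=0.8,
                               n_particles=5000, sigma=0.5, seed=0):
    """Sequential importance re-weighting of particles for the
    elimination rate k of a one-compartment model
    C(t) = (dose / volume) * exp(-k * t).  Each new concentration
    measurement multiplies the particle weights by its Gaussian
    likelihood.  Returns the posterior-mean k."""
    rng = random.Random(seed)
    ks = [rng.uniform(0.01, 1.0) for _ in range(n_particles)]  # prior draws
    ws = [1.0] * n_particles
    for t, c in zip(times, concs):
        for i, k in enumerate(ks):
            predicted = (dose / volume) * math.exp(-k * t)
            ws[i] *= math.exp(-0.5 * ((c - predicted) / sigma) ** 2)
        total = sum(ws)
        ws = [w / total for w in ws]     # renormalise after each datum
    return sum(k * w for k, w in zip(ks, ws))

# data simulated (noise-free) from k = 0.2
true_k = 0.2
ts = [1.0, 2.0, 4.0, 8.0, 12.0]
cs = [(10.0 / 0.8) * math.exp(-true_k * t) for t in ts]
k_hat = sequential_particle_update(ts, cs)
```

A production particle filter would also resample when the effective sample size degenerates and jitter the particles; those steps are omitted here for clarity.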
Approximate Bayesian Computation in the estimation of the parameters of the Forbush decrease model
NASA Astrophysics Data System (ADS)
Wawrzynczak, A.; Kopka, P.
2017-12-01
Realistic modeling of a complicated phenomenon such as the Forbush decrease of the galactic cosmic ray intensity is quite a challenging task. One aspect is the numerical solution of the Fokker-Planck equation in five-dimensional space (three spatial variables, time, and particle energy). The second difficulty arises from a lack of detailed knowledge about the spatial and time profiles of the parameters responsible for the creation of the Forbush decrease. Among these parameters, the diffusion coefficient plays the central role. Assessment of the correctness of the proposed model can be done only by comparison of the model output with experimental observations of the galactic cosmic ray intensity. We apply the Approximate Bayesian Computation (ABC) methodology to match the Forbush decrease model to experimental data. The ABC method is becoming increasingly exploited for dynamic complex problems in which the likelihood function is costly to compute. The main idea of all ABC methods is to accept a sample as an approximate posterior draw if its associated modeled data are close enough to the observed data. In this paper, we present an application of the Sequential Monte Carlo Approximate Bayesian Computation algorithm scanning the space of the diffusion coefficient parameters. The proposed algorithm is applied to create a model of the Forbush decrease observed by neutron monitors at the Earth in March 2002. The model of the Forbush decrease is based on the stochastic approach to the solution of the Fokker-Planck equation.
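The core accept/reject idea common to all ABC methods, which the Sequential Monte Carlo variant organizes into populations with a shrinking tolerance, can be sketched as plain ABC rejection on a toy problem (the Gaussian example, tolerance, and prior below are illustrative, not the Forbush-decrease model):

```python
import random

def abc_rejection(observed, simulate, prior_sampler, distance,
                  n_samples=200, epsilon=0.2, max_tries=200000, seed=0):
    """Plain ABC rejection: draw a parameter from the prior, simulate
    data, and accept the draw if the distance between simulated and
    observed data is below epsilon.  Returns the accepted draws,
    which approximate the posterior."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(max_tries):
        theta = prior_sampler(rng)
        if distance(simulate(theta, rng), observed) < epsilon:
            accepted.append(theta)
            if len(accepted) == n_samples:
                break
    return accepted

# toy problem: infer the mean of a Gaussian with known sd = 1,
# summarising the data by the sample mean of 20 points
obs_mean = 3.0
post = abc_rejection(
    observed=obs_mean,
    simulate=lambda th, rng: sum(rng.gauss(th, 1.0) for _ in range(20)) / 20,
    prior_sampler=lambda rng: rng.uniform(-10.0, 10.0),
    distance=lambda a, b: abs(a - b),
)
mean_post = sum(post) / len(post)
```

SMC-ABC replaces the single tolerance with a decreasing sequence, propagating and re-weighting the accepted population at each stage, which is far more efficient when, as for the Forbush model, each simulation is an expensive Fokker-Planck solve.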
Chapinal, Núria; Schumaker, Brant A; Joly, Damien O; Elkin, Brett T; Stephen, Craig
2015-07-01
We estimated the sensitivity and specificity of the caudal-fold skin test (CFT), the fluorescent polarization assay (FPA), and the rapid lateral-flow test (RT) for the detection of Mycobacterium bovis in free-ranging wild wood bison (Bison bison athabascae), in the absence of a gold standard, by using Bayesian analysis, and then used those estimates to forecast the performance of a pairwise combination of tests in parallel. In 1998-99, 212 wood bison from Wood Buffalo National Park (Canada) were tested for M. bovis infection using CFT and two serologic tests (FPA and RT). The sensitivity and specificity of each test were estimated using a three-test, one-population, Bayesian model allowing for conditional dependence between FPA and RT. The sensitivity and specificity of the combination of CFT and each serologic test in parallel were calculated assuming conditional independence. The test performance estimates were influenced by the prior values chosen. However, the rank of tests and combinations of tests based on those estimates remained constant. The CFT was the most sensitive test and the FPA was the least sensitive, whereas RT was the most specific test and CFT was the least specific. In conclusion, given the fact that gold standards for the detection of M. bovis are imperfect and difficult to obtain in the field, Bayesian analysis holds promise as a tool to rank tests and combinations of tests based on their performance. Combining a skin test with an animal-side serologic test, such as RT, increases sensitivity in the detection of M. bovis and is a good approach to enhance disease eradication or control in wild bison.
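The parallel-combination calculation described above follows directly from the conditional-independence assumption: the pair misses an infected animal only if both tests miss it, and classifies a healthy animal as negative only if both tests are negative. The numerical values below are illustrative, not the paper's posterior estimates:

```python
def parallel_combination(se1, sp1, se2, sp2):
    """Sensitivity and specificity of two diagnostic tests interpreted
    in parallel (positive if either test is positive), assuming
    conditional independence given true infection status."""
    se = 1.0 - (1.0 - se1) * (1.0 - se2)   # misses only if both miss
    sp = sp1 * sp2                          # negative only if both negative
    return se, sp

# illustrative values for a skin test paired with a serologic test
se, sp = parallel_combination(se1=0.80, sp1=0.90, se2=0.70, sp2=0.95)
```

As the abstract notes, the parallel rule always raises sensitivity at the cost of specificity, which is the trade-off favoured in eradication or control programmes.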
A Comparison of the β-Substitution Method and a Bayesian Method for Analyzing Left-Censored Data.
Huynh, Tran; Quick, Harrison; Ramachandran, Gurumurthy; Banerjee, Sudipto; Stenzel, Mark; Sandler, Dale P; Engel, Lawrence S; Kwok, Richard K; Blair, Aaron; Stewart, Patricia A
2016-01-01
Classical statistical methods for analyzing exposure data with values below the detection limits are well described in the occupational hygiene literature, but an evaluation of a Bayesian approach for handling such data is currently lacking. Here, we first describe a Bayesian framework for analyzing censored data. We then present the results of a simulation study conducted to compare the β-substitution method with a Bayesian method for exposure datasets drawn from lognormal distributions and mixed lognormal distributions with varying sample sizes, geometric standard deviations (GSDs), and censoring for single and multiple limits of detection. For each set of factors, estimates for the arithmetic mean (AM), geometric mean (GM), GSD, and the 95th percentile (X0.95) of the exposure distribution were obtained. We evaluated the performance of each method using relative bias, the root mean squared error (rMSE), and coverage (the proportion of the computed 95% uncertainty intervals containing the true value). The Bayesian method using non-informative priors and the β-substitution method were generally comparable in bias and rMSE when estimating the AM and GM. For the GSD and the 95th percentile, the Bayesian method with non-informative priors was more biased and had a higher rMSE than the β-substitution method, but use of more informative priors generally improved the Bayesian method's performance, making both the bias and the rMSE more comparable to the β-substitution method. An advantage of the Bayesian method is that it provided estimates of uncertainty for these parameters of interest and good coverage, whereas the β-substitution method only provided estimates of uncertainty for the AM, and coverage was not as consistent. Selection of one or the other method depends on the needs of the practitioner, the availability of prior information, and the distribution characteristics of the measurement data.
We suggest the use of Bayesian methods if the practitioner has the computational resources and prior information, as the method would generally provide accurate estimates and also provides the distributions of all of the parameters, which could be useful for making decisions in some applications. © The Author 2015. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
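The Bayesian treatment of left-censored lognormal data described above can be sketched as follows: on the log scale, each detected value contributes a normal density term and each censored value a cumulative-probability term (the probability of lying below the detection limit). This is a deliberately minimal random-walk Metropolis sampler with flat priors on synthetic data, not the paper's model or datasets.

```python
import math
import random

def log_likelihood(mu, sigma, obs_logs, cens_limits):
    """Lognormal model on the log scale: density for detected values,
    CDF (probability below the detection limit) for censored ones."""
    if sigma <= 0.0:
        return float("-inf")
    ll = 0.0
    for y in obs_logs:
        z = (y - mu) / sigma
        ll += -math.log(sigma) - 0.5 * z * z
    for dl in cens_limits:                 # value known only to lie below dl
        z = (dl - mu) / sigma
        ll += math.log(0.5 * (1.0 + math.erf(z / math.sqrt(2.0))) + 1e-300)
    return ll

def metropolis(obs_logs, cens_limits, n_iter=4000, seed=0):
    # Random-walk Metropolis with flat (non-informative) priors on
    # (mu, sigma); a minimal stand-in for a full MCMC implementation.
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0
    ll = log_likelihood(mu, sigma, obs_logs, cens_limits)
    samples = []
    for _ in range(n_iter):
        mu_p = mu + rng.gauss(0.0, 0.1)
        sigma_p = sigma + rng.gauss(0.0, 0.05)
        ll_p = log_likelihood(mu_p, sigma_p, obs_logs, cens_limits)
        if ll_p - ll > math.log(rng.random() + 1e-300):
            mu, sigma, ll = mu_p, sigma_p, ll_p
        samples.append((mu, sigma))
    return samples[n_iter // 2:]           # discard burn-in

# Synthetic exposure data on the log scale, one detection limit at 0.
data_rng = random.Random(1)
data = [data_rng.gauss(0.5, 0.8) for _ in range(100)]
dl = 0.0
obs = [y for y in data if y >= dl]
cens = [dl for y in data if y < dl]
draws = metropolis(obs, cens)
post_mu = sum(m for m, _ in draws) / len(draws)
post_sigma = sum(s for _, s in draws) / len(draws)
```

The retained draws give full posterior distributions for the parameters, which is the uncertainty-quantification advantage over substitution methods noted in the abstract.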
Capturing and Displaying Uncertainty in the Common Tactical/Environmental Picture
2003-09-30
multistatic active detection, and incorporated this characterization into a Bayesian track-before-detect system called the Likelihood Ratio Tracker (LRT...prediction uncertainty in a track-before-detect system for multistatic active sonar. The approach has worked well on limited simulation data.
Social Influences in Sequential Decision Making
Schöbel, Markus; Rieskamp, Jörg; Huber, Rafael
2016-01-01
People often make decisions in a social environment. The present work examines social influence on people’s decisions in a sequential decision-making situation. In the first experimental study, we implemented an information cascade paradigm, illustrating that people infer information from decisions of others and use this information to make their own decisions. We followed a cognitive modeling approach to elicit the weight people give to social as compared to private individual information. The proposed social influence model shows that participants overweight their own private information relative to social information, contrary to the normative Bayesian account. In our second study, we embedded the abstract decision problem of Study 1 in a medical decision-making problem. We examined whether in a medical situation people also take others’ authority into account in addition to the information that their decisions convey. The social influence model illustrates that people weight social information differentially according to the authority of other decision makers. The influence of authority was strongest when an authority's decision contrasted with private information. Both studies illustrate how the social environment provides sources of information that people integrate differently for their decisions. PMID:26784448
Optimal decision-making in mammals: insights from a robot study of rodent texture discrimination
Lepora, Nathan F.; Fox, Charles W.; Evans, Mathew H.; Diamond, Mathew E.; Gurney, Kevin; Prescott, Tony J.
2012-01-01
Texture perception is studied here in a physical model of the rat whisker system consisting of a robot equipped with a biomimetic vibrissal sensor. Investigations of whisker motion in rodents have led to several explanations for texture discrimination, such as resonance or stick-slips. Meanwhile, electrophysiological studies of decision-making in monkeys have suggested a neural mechanism of evidence accumulation to threshold for competing percepts, described by a probabilistic model of Bayesian sequential analysis. For our robot whisker data, we find that variable reaction-time decision-making with sequential analysis performs better than the fixed response-time maximum-likelihood estimation. These probabilistic classifiers also use whatever available features of the whisker signals aid the discrimination, giving improved performance over a single-feature strategy, such as matching the peak power spectra of whisker vibrations. These results cast new light on how the various proposals for texture discrimination in rodents depend on the whisker contact mechanics and suggest the possibility of a common account of decision-making across mammalian species. PMID:22279155
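The evidence-accumulation-to-threshold mechanism compared above can be illustrated with Wald's sequential probability ratio test for two Gaussian "percepts". This is a toy sketch: the constant-valued stream and the parameters below are illustrative stand-ins for the whisker-signal features used in the study.

```python
def sprt(stream, mu0, mu1, sigma, threshold=5.0):
    """Bayesian sequential analysis for two competing percepts:
    accumulate the log-likelihood ratio observation by observation and
    stop as soon as either threshold is crossed (variable reaction time)."""
    llr, n = 0.0, 0
    for x in stream:
        n += 1
        # Gaussian log-likelihood ratio: log p(x|mu1) - log p(x|mu0).
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
        if llr > threshold:
            return 1, n                  # decide percept 1
        if llr < -threshold:
            return 0, n                  # decide percept 0
    return (1 if llr > 0.0 else 0), n    # forced choice if data run out

# Noiseless illustration: observations sit exactly at percept 1's mean,
# so the decision is reached after a predictable number of samples.
decision, n_obs = sprt([1.0] * 100, mu0=0.0, mu1=1.0, sigma=1.0)
```

Raising the threshold trades reaction time for accuracy, which is the contrast with fixed response-time maximum-likelihood estimation drawn in the abstract.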
Bayesian performance metrics of binary sensors in homeland security applications
NASA Astrophysics Data System (ADS)
Jannson, Tomasz P.; Forrester, Thomas C.
2008-04-01
Bayesian performance metrics, based on such parameters as prior probability, probability of detection (or accuracy), false alarm rate, and positive predictive value, characterize the performance of binary sensors, i.e., sensors that have only a binary response: true target/false target. Such binary sensors, very common in Homeland Security, produce an alarm that can be true or false. They include X-ray airport inspection, IED inspections, product quality control, cancer medical diagnosis, part of ATR, and many others. In this paper, we analyze direct and inverse conditional probabilities in the context of Bayesian inference and binary sensors, using X-ray luggage inspection statistical results as a guideline.
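The direct-versus-inverse probability relationship at issue is Bayes' rule for a binary sensor: the positive predictive value (the inverse probability that an alarm is a true target) follows from the prior, the detection probability, and the false alarm rate. The numbers below are illustrative assumptions, not results from the paper.

```python
def positive_predictive_value(prior, p_detection, false_alarm_rate):
    """Posterior probability that an alarm is a true target, computed
    from the direct conditional probabilities via Bayes' rule."""
    p_alarm = p_detection * prior + false_alarm_rate * (1.0 - prior)
    return p_detection * prior / p_alarm

# Illustrative numbers: for rare threats, even a sensitive screen with
# a modest false alarm rate yields a low PPV.
ppv = positive_predictive_value(prior=0.001, p_detection=0.95,
                                false_alarm_rate=0.05)
```

This base-rate effect is why prior probability belongs among the performance metrics: the same sensor applied to a 50% prior would give a PPV of 0.95.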
A Bayesian state-space approach for damage detection and classification
NASA Astrophysics Data System (ADS)
Dzunic, Zoran; Chen, Justin G.; Mobahi, Hossein; Büyüköztürk, Oral; Fisher, John W.
2017-11-01
The problem of automatic damage detection in civil structures is complex and requires a system that can interpret collected sensor data into meaningful information. We apply our recently developed switching Bayesian model for dependency analysis to the problems of damage detection and classification. The model relies on a state-space approach that accounts for noisy measurement processes and missing data, which also infers the statistical temporal dependency between measurement locations signifying the potential flow of information within the structure. A Gibbs sampling algorithm is used to simultaneously infer the latent states, parameters of the state dynamics, the dependence graph, and any changes in behavior. By employing a fully Bayesian approach, we are able to characterize uncertainty in these variables via their posterior distribution and provide probabilistic estimates of the occurrence of damage or a specific damage scenario. We also implement a single class classification method which is more realistic for most real world situations where training data for a damaged structure is not available. We demonstrate the methodology with experimental test data from a laboratory model structure and accelerometer data from a real world structure during different environmental and excitation conditions.
Initial Evaluation of Signal-Based Bayesian Monitoring
NASA Astrophysics Data System (ADS)
Moore, D.; Russell, S.
2016-12-01
We present SIGVISA (Signal-based Vertically Integrated Seismic Analysis), a next-generation system for global seismic monitoring through Bayesian inference on seismic signals. Traditional seismic monitoring systems rely on discrete detections produced by station processing software, discarding significant information present in the original recorded signal. By modeling signals directly, our forward model is able to incorporate a rich representation of the physics underlying the signal generation process, including source mechanisms, wave propagation, and station response. This allows inference in the model to recover the qualitative behavior of geophysical methods including waveform matching and double-differencing, all as part of a unified Bayesian monitoring system that simultaneously detects and locates events from a network of stations. We report results from an evaluation of SIGVISA monitoring the western United States for a two-week period following the magnitude 6.0 event in Wells, NV in February 2008. During this period, SIGVISA detects more than twice as many events as NETVISA, and three times as many as SEL3, while operating at the same precision; at lower precisions it detects up to five times as many events as SEL3. At the same time, signal-based monitoring reduces mean location errors by a factor of four relative to detection-based systems. We provide evidence that, given only IMS data, SIGVISA detects events that are missed by regional monitoring networks, indicating that our evaluations may even underestimate its performance. Finally, SIGVISA matches or exceeds the detection rates of existing systems for de novo events - events with no nearby historical seismicity - and detects through automated processing a number of such events missed even by the human analysts generating the LEB.
Effects of musical training on sound pattern processing in high-school students.
Wang, Wenjung; Staffaroni, Laura; Reid, Errold; Steinschneider, Mitchell; Sussman, Elyse
2009-05-01
Recognizing melody in music involves detection of both the pitch intervals and the silence between sequentially presented sounds. This study tested the hypothesis that active musical training in adolescents facilitates the ability to passively detect sequential sound patterns compared to musically non-trained age-matched peers. Twenty adolescents, aged 15-18 years, were divided into groups according to their musical training and current experience. A fixed-order tone pattern was presented at various stimulus rates while the electroencephalogram was recorded. The influence of musical training on passive auditory processing of the sound patterns was assessed using components of event-related brain potentials (ERPs). The mismatch negativity (MMN) ERP component was elicited under different stimulus onset asynchrony (SOA) conditions in musicians than in non-musicians, indicating that musically active adolescents were able to detect sound patterns across longer time intervals than their age-matched peers. Musical training facilitates detection of auditory patterns, conferring the ability to automatically recognize sequential sound patterns over longer time periods than non-musical counterparts.
NASA Technical Reports Server (NTRS)
He, Yuning
2015-01-01
The behavior of complex aerospace systems is governed by numerous parameters. For safety analysis it is important to understand how the system behaves with respect to these parameter values. In particular, understanding the boundaries between safe and unsafe regions is of major importance. In this paper, we describe a hierarchical Bayesian statistical modeling approach for the online detection and characterization of such boundaries. Our method for classification with active learning uses a particle filter-based model and a boundary-aware metric for best performance. From a library of candidate shapes incorporated with domain expert knowledge, the location and parameters of the boundaries are estimated using advanced Bayesian modeling techniques. The results of our boundary analysis are then provided in a form understandable by the domain expert. We illustrate our approach using a simulation model of a NASA neuro-adaptive flight control system, as well as a system for the detection of separation violations in the terminal airspace.
A Bayesian approach to traffic light detection and mapping
NASA Astrophysics Data System (ADS)
Hosseinyalamdary, Siavash; Yilmaz, Alper
2017-03-01
Automatic traffic light detection and mapping is an open research problem. Traffic lights vary in color, shape, geolocation, activation pattern, and installation, which complicates their automated detection. In addition, the image of a traffic light may be noisy, overexposed, underexposed, or occluded. In order to address this problem, we propose a Bayesian inference framework to detect and map traffic lights. In addition to the spatio-temporal consistency constraint, traffic light characteristics such as color, shape, and height are shown to further improve the accuracy of the proposed approach. The proposed approach has been evaluated on two benchmark datasets and has been shown to outperform earlier studies. The results show that the precision and recall rates for the KITTI benchmark are 95.78% and 92.95%, respectively, and the precision and recall rates for the LARA benchmark are 98.66% and 94.65%.
NASA Astrophysics Data System (ADS)
Tsai, F.; Lai, J. S.; Chiang, S. H.
2015-12-01
Landslides are frequently triggered by typhoons and earthquakes in Taiwan, causing serious economic losses and human casualties. Remotely sensed images and geo-spatial data consisting of land-cover and environmental information have been widely used for producing landslide inventories and causative factors for slope stability analysis. Landslide susceptibility, on the other hand, can represent the spatial likelihood of landslide occurrence and is an important basis for landslide risk assessment. As multi-temporal satellite images become popular and affordable, they are commonly used to generate landslide inventories for subsequent analysis. However, it is usually difficult to distinguish different landslide sub-regions (scarp, debris flow, deposition, etc.) directly from remote sensing imagery. Consequently, landslide extents extracted using image-based visual interpretation and automatic detection may contain many depositions that reduce the fidelity of the landslide susceptibility model. This study developed an empirical thresholding scheme based on terrain characteristics for eliminating depositions from detected landslide areas to improve landslide susceptibility modeling. In this study, a Bayesian network classifier is utilized to build a landslide susceptibility model and to predict subsequent rainfall-induced shallow landslides in the Shimen reservoir watershed located in northern Taiwan. Eleven causative factors are considered, including terrain slope, aspect, curvature, elevation, geology, land-use, NDVI, soil, and distance to fault, river, and road. Landslide areas detected using satellite images acquired before and after eight typhoons between 2004 and 2008 are collected as the main inventory for training and verification. In the analysis, previous landslide events are used as training data to predict the samples of the next event. The results are then compared with recorded landslide areas in the inventory to evaluate the accuracy.
Experimental results demonstrate that the accuracies of landslide susceptibility analysis in all sequential predictions have been improved significantly after eliminating landslide depositions.
Shankle, William R; Pooley, James P; Steyvers, Mark; Hara, Junko; Mangrola, Tushar; Reisberg, Barry; Lee, Michael D
2013-01-01
Determining how cognition affects functional abilities is important in Alzheimer disease and related disorders. A total of 280 patients (normal or Alzheimer disease and related disorders) received a total of 1514 assessments using the functional assessment staging test (FAST) procedure and the MCI Screen. A hierarchical Bayesian cognitive processing model was created by embedding a signal detection theory model of the MCI Screen-delayed recognition memory task into a hierarchical Bayesian framework. The signal detection theory model used latent parameters of discriminability (memory process) and response bias (executive function) to predict, simultaneously, recognition memory performance for each patient and each FAST severity group. The observed recognition memory data did not distinguish the 6 FAST severity stages, but the latent parameters completely separated them. The latent parameters were also used successfully to transform the ordinal FAST measure into a continuous measure reflecting the underlying continuum of functional severity. Hierarchical Bayesian cognitive processing models applied to recognition memory data from clinical practice settings accurately translated a latent measure of cognition into a continuous measure of functional severity for both individuals and FAST groups. Such a translation links 2 levels of brain information processing and may enable more accurate correlations with other levels, such as those characterized by biomarkers.
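The signal detection theory layer of the model estimates discriminability (the memory process) and response bias (executive function). A minimal non-hierarchical version of that computation, recovering equal-variance SDT parameters from hit and false-alarm rates, looks like this; the rates below are illustrative, not the study's data.

```python
from statistics import NormalDist

def sdt_parameters(hit_rate, fa_rate):
    """Equal-variance signal detection theory: discriminability d' and
    response criterion c from hit and false-alarm rates."""
    z = NormalDist().inv_cdf                    # probit transform
    d_prime = z(hit_rate) - z(fa_rate)          # memory discriminability
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias
    return d_prime, criterion

# Illustrative recognition-memory performance.
d, c = sdt_parameters(hit_rate=0.85, fa_rate=0.20)
```

The hierarchical Bayesian framework in the study estimates these latent parameters jointly across patients and FAST severity groups rather than point-by-point, which is what allows the severity stages to separate even when raw performance does not.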
Stefan-van Staden, Raluca-Ioana; Bokretsion, Rahel Girmai; van Staden, Jacobus F; Aboul-Enein, Hassan Y
2006-01-01
Carbon paste based biosensors for the determination of creatine and creatinine have been integrated into a sequential injection system. Applying the multi-enzyme sequence of creatininase (CA), and/or creatinase (CI) and sarcosine oxidase (SO), hydrogen peroxide has been detected amperometrically. The linear concentration ranges are of pmol/L to nmol/L magnitude, with very low limits of detection. The proposed SIA system can be utilized reliably for the on-line simultaneous detection of creatine and creatinine in pharmaceutical products, as well as in serum samples, with a rate of 34 samples per hour and RSD values better than 0.16% (n=10).
A Bayesian Method for Managing Uncertainties Relating to Distributed Multistatic Sensor Search
2006-07-01
before-detect process. There will also be an increased probability of high signal-to-noise ratio (SNR) detections associated with specular and near...and high target strength and high Doppler opportunities give rise to the expectation of an increased number of detections that could feed a track
Stochastic Control of Multi-Scale Networks: Modeling, Analysis and Algorithms
2014-10-20
Reported publications include: B. T. Swapna, Atilla Eryilmaz, Ness B. Shroff, "Throughput-Delay Analysis of Random Linear Network Coding for Wireless..."; John S. Baras, Shanshan Zheng, "Sequential Anomaly Detection in Wireless Sensor Networks and Effects of Long-Range Dependent Data," Sequential Analysis (10 2012), doi:10.1080/07474946.2012.719435.
Using Bayesian Networks for Candidate Generation in Consistency-based Diagnosis
NASA Technical Reports Server (NTRS)
Narasimhan, Sriram; Mengshoel, Ole
2008-01-01
Consistency-based diagnosis relies heavily on the assumption that discrepancies between model predictions and sensor observations can be detected accurately. When sources of uncertainty such as sensor noise and model abstraction exist, robust schemes have to be designed to make a binary decision on whether predictions are consistent with observations. This risks the occurrence of false alarms and missed alarms when an erroneous decision is made. Moreover, when multiple sensors (with differing sensing properties) are available, the degree of match between predictions and observations can be used to guide the search for fault candidates. In this paper we propose a novel approach to handle this problem using Bayesian networks. In the consistency-based diagnosis formulation, automatically generated Bayesian networks are used to encode a probabilistic measure of fit between predictions and observations. A Bayesian network inference algorithm is then used to compute the most probable fault candidates.
Thermal bioaerosol cloud tracking with Bayesian classification
NASA Astrophysics Data System (ADS)
Smith, Christian W.; Dupuis, Julia R.; Schundler, Elizabeth C.; Marinelli, William J.
2017-05-01
The development of a wide area, bioaerosol early warning capability employing existing uncooled thermal imaging systems used for persistent perimeter surveillance is discussed. The capability exploits thermal imagers with other available data streams including meteorological data and employs a recursive Bayesian classifier to detect, track, and classify observed thermal objects with attributes consistent with a bioaerosol plume. Target detection is achieved based on similarity to a phenomenological model which predicts the scene-dependent thermal signature of bioaerosol plumes. Change detection in thermal sensor data is combined with local meteorological data to locate targets with the appropriate thermal characteristics. Target motion is tracked utilizing a Kalman filter and nearly constant velocity motion model for cloud state estimation. Track management is performed using a logic-based upkeep system, and data association is accomplished using a combinatorial optimization technique. Bioaerosol threat classification is determined using a recursive Bayesian classifier to quantify the threat probability of each tracked object. The classifier can accept additional inputs from visible imagers, acoustic sensors, and point biological sensors to improve classification confidence. This capability was successfully demonstrated for bioaerosol simulant releases during field testing at Dugway Proving Grounds. Standoff detection at a range of 700m was achieved for as little as 500g of anthrax simulant. Developmental test results will be reviewed for a range of simulant releases, and future development and transition plans for the bioaerosol early warning platform will be discussed.
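The tracking stage described above (a Kalman filter with a nearly constant velocity motion model) can be sketched in one spatial dimension. The process and measurement noise values below are illustrative assumptions, not the system's tuned parameters, and the simplified diagonal process noise stands in for the usual continuous white-noise acceleration form.

```python
def kalman_cv_step(x, P, z, dt=1.0, q=0.01, r=0.25):
    """One predict/update cycle of a 1-D nearly-constant-velocity Kalman
    filter: state x = [position, velocity], 2x2 covariance P (list of
    lists), scalar position measurement z."""
    # Predict: x <- F x, P <- F P F^T + Q, with F = [[1, dt], [0, 1]].
    xp = [x[0] + dt * x[1], x[1]]
    Pp = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
           P[0][1] + dt * P[1][1]],
          [P[1][0] + dt * P[1][1],
           P[1][1] + q]]
    # Update with H = [1, 0]: the innovation acts on position only.
    S = Pp[0][0] + r                       # innovation variance
    K = [Pp[0][0] / S, Pp[1][0] / S]       # Kalman gain
    y = z - xp[0]                          # innovation
    xn = [xp[0] + K[0] * y, xp[1] + K[1] * y]
    Pn = [[(1 - K[0]) * Pp[0][0], (1 - K[0]) * Pp[0][1]],
          [Pp[1][0] - K[1] * Pp[0][0], Pp[1][1] - K[1] * Pp[0][1]]]
    return xn, Pn

# Track a plume drifting at 1 unit/step from noiseless position fixes.
x, P = [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]
for step in range(1, 21):
    x, P = kalman_cv_step(x, P, z=float(step))
```

After a few steps the state estimate locks onto both the plume position and its drift velocity, which is the cloud state estimate the track management and Bayesian classification stages consume.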
Sparse source configurations in radio tomography of asteroids
NASA Astrophysics Data System (ADS)
Pursiainen, S.; Kaasalainen, M.
2014-07-01
Our research aims at progress in the non-invasive imaging of asteroids to support future planetary research and extra-terrestrial mining activities. This presentation principally concerns radio tomography, in which the permittivity distribution inside an asteroid is to be recovered from the radio frequency signal transmitted from the asteroid's surface and gathered by an orbiter. The focus will be on a sparse distribution (Pursiainen and Kaasalainen, 2013) of signal sources, which can be necessary in the challenging in situ environment and within tight payload limits. The general goal of our recent research has been to approximate the minimal number of source positions needed for robust localization of anomalies caused, for example, by an internal void. Characteristic of the localization problem are the large relative changes in signal speed caused by the high permittivity of typical asteroid minerals (e.g. basalt), meaning that a signal path can include strong refractions and reflections. This presentation introduces results of a laboratory experiment in which real travel time data were inverted using a hierarchical Bayesian approach combined with the iterative alternating sequential (IAS) posterior exploration algorithm. Special attention was paid to the robustness of the inverse results with respect to changes in the prior model and source positioning. According to our results, strongly refractive anomalies can be detected with three or four sources independently of their positioning.
Statistics provide guidance for indigenous organic carbon detection on Mars missions.
Sephton, Mark A; Carter, Jonathan N
2014-08-01
Data from the Viking and Mars Science Laboratory missions indicate the presence of organic compounds that are not definitively martian in origin. Both contamination and confounding mineralogies have been suggested as alternatives to indigenous organic carbon. Intuitive thought suggests that we are repeatedly obtaining data that confirms the same level of uncertainty. Bayesian statistics may suggest otherwise. If an organic detection method has a true positive to false positive ratio greater than one, then repeated organic matter detection progressively increases the probability of indigeneity. Bayesian statistics also reveal that methods with higher ratios of true positives to false positives give higher overall probabilities and that detection of organic matter in a sample with a higher prior probability of indigenous organic carbon produces greater confidence. Bayesian statistics, therefore, provide guidance for the planning and operation of organic carbon detection activities on Mars. Suggestions for future organic carbon detection missions and instruments are as follows: (i) On Earth, instruments should be tested with analog samples of known organic content to determine their true positive to false positive ratios. (ii) On the mission, for an instrument with a true positive to false positive ratio above one, it should be recognized that each positive detection of organic carbon will result in a progressive increase in the probability of indigenous organic carbon being present; repeated measurements, therefore, can overcome some of the deficiencies of a less-than-definitive test. (iii) For a fixed number of analyses, the highest true positive to false positive ratio method or instrument will provide the greatest probability that indigenous organic carbon is present. 
(iv) On Mars, analyses should concentrate on samples with highest prior probability of indigenous organic carbon; intuitive desires to contrast samples of high prior probability and low prior probability of indigenous organic carbon should be resisted.
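The repeated-detection argument above is iterated Bayesian updating: each positive result multiplies the prior odds of indigenous organic carbon by the true-positive to false-positive (likelihood) ratio. A sketch with illustrative numbers, not values from the paper:

```python
def update_odds(prior_prob, lr, n_detections):
    """Posterior probability of indigenous organic carbon after
    n repeated positive detections, each contributing one factor of
    the true-positive to false-positive (likelihood) ratio."""
    odds = prior_prob / (1.0 - prior_prob)   # prior odds
    odds *= lr ** n_detections               # one LR factor per detection
    return odds / (1.0 + odds)               # back to probability

# A mediocre test (ratio 2:1) applied to a sceptical prior of 10%:
p1 = update_odds(0.10, lr=2.0, n_detections=1)  # one detection
p5 = update_odds(0.10, lr=2.0, n_detections=5)  # five detections
```

A single detection barely moves the needle (2/11), but five repeated detections push the probability to 32/41, illustrating how a less-than-definitive test overcomes its deficiencies through repetition, provided its ratio exceeds one.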
Utility-based designs for randomized comparative trials with categorical outcomes
Murray, Thomas A.; Thall, Peter F.; Yuan, Ying
2016-01-01
A general utility-based testing methodology for design and conduct of randomized comparative clinical trials with categorical outcomes is presented. Numerical utilities of all elementary events are elicited to quantify their desirabilities. These numerical values are used to map the categorical outcome probability vector of each treatment to a mean utility, which is used as a one-dimensional criterion for constructing comparative tests. Bayesian tests are presented, including fixed sample and group sequential procedures, assuming Dirichlet-multinomial models for the priors and likelihoods. Guidelines are provided for establishing priors, eliciting utilities, and specifying hypotheses. Efficient posterior computation is discussed, and algorithms are provided for jointly calibrating test cutoffs and sample size to control overall type I error and achieve specified power. Asymptotic approximations for the power curve are used to initialize the algorithms. The methodology is applied to re-design a completed trial that compared two chemotherapy regimens for chronic lymphocytic leukemia, in which an ordinal efficacy outcome was dichotomized and toxicity was ignored to construct the trial’s design. The Bayesian tests also are illustrated by several types of categorical outcomes arising in common clinical settings. Freely available computer software for implementation is provided. PMID:27189672
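The core mapping described above, from a categorical outcome probability vector to a one-dimensional mean utility, can be sketched directly. The outcome categories, elicited utilities, and probability vectors below are hypothetical, not the trial's values.

```python
def mean_utility(utilities, probs):
    """Map a treatment's categorical outcome probability vector to the
    one-dimensional mean-utility criterion used for comparative tests."""
    assert abs(sum(probs) - 1.0) < 1e-9      # must be a probability vector
    return sum(u * p for u, p in zip(utilities, probs))

# Hypothetical elicited utilities for four elementary events:
# (response w/o toxicity, response w/ toxicity,
#  no response w/o toxicity, no response w/ toxicity).
U = [100.0, 60.0, 35.0, 0.0]
treat_a = mean_utility(U, [0.30, 0.20, 0.35, 0.15])
treat_b = mean_utility(U, [0.25, 0.30, 0.25, 0.20])
```

Comparing `treat_a` and `treat_b` on this scalar scale is what lets the design weigh an ordinal efficacy outcome and toxicity jointly, instead of dichotomizing efficacy and ignoring toxicity.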
On uncertainty quantification in hydrogeology and hydrogeophysics
NASA Astrophysics Data System (ADS)
Linde, Niklas; Ginsbourger, David; Irving, James; Nobile, Fabio; Doucet, Arnaud
2017-12-01
Recent advances in sensor technologies, field methodologies, numerical modeling, and inversion approaches have contributed to unprecedented imaging of hydrogeological properties and detailed predictions at multiple temporal and spatial scales. Nevertheless, imaging results and predictions will always remain imprecise, which calls for appropriate uncertainty quantification (UQ). In this paper, we outline selected methodological developments together with pioneering UQ applications in hydrogeology and hydrogeophysics. The applied mathematics and statistics literature is not easy to penetrate, and this review aims at helping hydrogeologists and hydrogeophysicists to identify suitable approaches for UQ that can be applied and further developed to their specific needs. To bypass the tremendous computational costs associated with forward UQ based on full-physics simulations, we discuss proxy-modeling strategies and multi-resolution (Multi-level Monte Carlo) methods. We consider Bayesian inversion for non-linear and non-Gaussian state-space problems and discuss how Sequential Monte Carlo may become a practical alternative. We also describe strategies to account for forward modeling errors in Bayesian inversion. Finally, we consider hydrogeophysical inversion, where petrophysical uncertainty is often ignored, leading to overconfident parameter estimation. The high parameter and data dimensions encountered in hydrogeological and geophysical problems make UQ a complicated and important challenge that has only been partially addressed to date.
Bayesian data assimilation provides rapid decision support for vector-borne diseases
Jewell, Chris P.; Brown, Richard G.
2015-01-01
Predicting the spread of vector-borne diseases in response to incursions requires knowledge of both host and vector demographics in advance of an outbreak. Although host population data are typically available, for novel disease introductions there is a high chance of the pathogen using a vector for which data are unavailable. This presents a barrier to estimating the parameters of dynamical models representing host–vector–pathogen interaction, and hence limits their ability to provide quantitative risk forecasts. The Theileria orientalis (Ikeda) outbreak in New Zealand cattle demonstrates this problem: even though the vector has received extensive laboratory study, a high degree of uncertainty persists over its national demographic distribution. Addressing this, we develop a Bayesian data assimilation approach whereby indirect observations of vector activity inform a seasonal spatio-temporal risk surface within a stochastic epidemic model. We provide quantitative predictions for the future spread of the epidemic, quantifying uncertainty in the model parameters, case infection times and the disease status of undetected infections. Importantly, we demonstrate how our model learns sequentially as the epidemic unfolds and provide evidence for changing epidemic dynamics through time. Our approach therefore provides a significant advance in rapid decision support for novel vector-borne disease outbreaks. PMID:26136225
Dynamic Bayesian wavelet transform: New methodology for extraction of repetitive transients
NASA Astrophysics Data System (ADS)
Wang, Dong; Tsui, Kwok-Leung
2017-05-01
Dynamic Bayesian wavelet transform, a new methodology for the extraction of repetitive transients, is proposed in this short communication to reveal fault signatures hidden in rotating machines. The main idea of the dynamic Bayesian wavelet transform is to iteratively estimate the posterior parameters of the wavelet transform via artificial observations and dynamic Bayesian inference. First, a prior wavelet parameter distribution can be established by one of many fast detection algorithms, such as the fast kurtogram, the improved kurtogram, the enhanced kurtogram, the sparsogram, the infogram, the continuous wavelet transform, the discrete wavelet transform, wavelet packets, multiwavelets, the empirical wavelet transform, empirical mode decomposition, local mean decomposition, etc. Second, artificial observations can be constructed based on one of many metrics, such as kurtosis, the sparsity measurement, entropy, approximate entropy, the smoothness index, a synthesized criterion, etc., which are able to quantify repetitive transients. Finally, given the artificial observations, the prior wavelet parameter distribution can be posteriorly updated over iterations using dynamic Bayesian inference. More importantly, the proposed methodology can be extended to establish the optimal parameters required by many other signal processing methods for the extraction of repetitive transients.
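Among the transient-quantifying metrics listed (kurtosis, sparsity, entropy, ...), kurtosis is the easiest to sketch; a minimal Python version follows. This illustrates the metric only, not the paper's full iterative Bayesian update:

```python
def kurtosis(signal):
    """Sample kurtosis m4 / m2**2: a standard score for how impulsive
    (transient-rich) a signal is. Repetitive fault transients drive it
    well above the Gaussian reference value of 3."""
    n = len(signal)
    mean = sum(signal) / n
    m2 = sum((v - mean) ** 2 for v in signal) / n
    m4 = sum((v - mean) ** 4 for v in signal) / n
    return m4 / m2 ** 2

smooth = [1.0, -1.0] * 50         # flat-envelope square wave
impulsive = [0.0] * 99 + [10.0]   # one sharp transient
```

A frequency band whose filtered output maximizes such a score is a candidate carrier of the repetitive transients; the dynamic Bayesian wavelet transform then treats these scores as "artificial observations" for updating the wavelet parameter distribution.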
Ortega, Alonso; Labrenz, Stephan; Markowitsch, Hans J; Piefke, Martina
2013-01-01
In the last decade, different statistical techniques have been introduced to improve the assessment of malingering-related poor effort. In this context, we have recently shown preliminary evidence that a Bayesian latent group model may help to optimize classification accuracy using a simulation research design. In the present study, we conducted two analyses. First, we evaluated how accurately this Bayesian approach can distinguish between participants answering in an honest way (honest response group) and participants feigning cognitive impairment (experimental malingering group). Second, we tested the accuracy of our model in differentiating between patients who had real cognitive deficits (cognitively impaired group) and participants who belonged to the experimental malingering group. All Bayesian analyses were conducted using the raw scores of a visual recognition forced-choice task (2AFC), the Test of Memory Malingering (TOMM, Trial 2), and the Word Memory Test (WMT, primary effort subtests). The first analysis showed 100% accuracy for the Bayesian model in distinguishing participants of both groups with all effort measures. The second analysis showed outstanding overall accuracy of the Bayesian model when estimates were obtained from the 2AFC and the TOMM raw scores. Diagnostic accuracy of the Bayesian model diminished when using the WMT total raw scores. Despite this decrement, overall diagnostic accuracy can still be considered excellent. The most plausible explanation for the decrement is the low performance in verbal recognition and fluency tasks of some patients in the cognitively impaired group. Additionally, the Bayesian model provides individual estimates, p(zi|D), of examinees' effort levels. In conclusion, both the high classification accuracy levels and the Bayesian individual estimates of effort may be very useful for clinicians when assessing effort in medico-legal settings.
Simultaneous capture and sequential detection of two malarial biomarkers on magnetic microparticles.
Markwalter, Christine F; Ricks, Keersten M; Bitting, Anna L; Mudenda, Lwiindi; Wright, David W
2016-12-01
We have developed a rapid magnetic microparticle-based detection strategy for the malarial biomarkers Plasmodium lactate dehydrogenase (pLDH) and Plasmodium falciparum histidine-rich protein II (PfHRPII). In this assay, magnetic particles functionalized with antibodies specific for pLDH and PfHRPII, as well as detection antibodies with distinct enzymes for each biomarker, are added to parasitized lysed blood samples. Sandwich complexes for pLDH and PfHRPII form on the surface of the magnetic beads, which are washed and sequentially re-suspended in detection enzyme substrate for each antigen. The developed simultaneous capture and sequential detection (SCSD) assay detects both biomarkers in samples at levels as low as 2.0 parasites/µl, an order of magnitude below commercially available ELISA kits, has a total incubation time of 35 min, and was found to be reproducible between users over time. This assay provides a simple and efficient alternative to traditional 96-well plate ELISAs, which take 5-8 h to complete and are limited to one analyte. Further, the modularity of the magnetic bead-based SCSD ELISA format could serve as a platform for application to other diseases for which multi-biomarker detection is advantageous. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Developing a new Bayesian Risk Index for risk evaluation of soil contamination.
Albuquerque, M T D; Gerassis, S; Sierra, C; Taboada, J; Martín, J E; Antunes, I M H R; Gallego, J R
2017-12-15
Industrial and agricultural activities heavily constrain soil quality. Potentially Toxic Elements (PTEs) are a threat to public health and the environment alike. In this regard, the identification of areas that require remediation is crucial. In this research, a geochemical dataset (230 samples) comprising 14 elements (Cu, Pb, Zn, Ag, Ni, Mn, Fe, As, Cd, V, Cr, Ti, Al and S) was gathered throughout eight different zones distinguished by their main activity, namely recreational, agriculture/livestock and heavy industry, in the Avilés Estuary (North of Spain). A stratified systematic sampling method was then used at short, medium, and long distances from each zone to obtain a representative picture of the total variability of the selected attributes. The information was then combined into four risk classes (Low, Moderate, High, Remediation) following reference values from several sediment quality guidelines (SQGs). A Bayesian analysis, inferred for each zone, allowed the characterization of PTE correlations, with the unsupervised learning network technique proving to be the best fit. Based on the Bayesian network structure obtained, Pb, As and Mn were selected as key contamination parameters. For these three elements, the conditional probability obtained was allocated to each observed point, and a simple, direct index (Bayesian Risk Index-BRI) was constructed as a linear rating of the pre-defined risk classes weighted by the previously obtained probability. Finally, the BRI underwent geostatistical modeling. One hundred Sequential Gaussian Simulations (SGS) were computed. The Mean Image and the Standard Deviation maps were obtained, allowing the definition of High/Low risk clusters (Local G clustering) and the computation of spatial uncertainty. High-risk clusters are mainly distributed within the area with the highest altitude (agriculture/livestock), showing an associated low spatial uncertainty and clearly indicating the need for remediation.
Atmospheric emissions, mainly derived from the metallurgical industry, contribute to soil contamination by PTEs. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Shaw, Darren; Stone, Kevin; Ho, K. C.; Keller, James M.; Luke, Robert H.; Burns, Brian P.
2016-05-01
Forward looking ground penetrating radar (FLGPR) has the benefit of detecting objects at a significant standoff distance. The FLGPR signal is radiated over a large surface area and the radar signal return is often weak. Improving detection, especially for targets buried in roads, while maintaining an acceptable false alarm rate remains a challenging task. Various kinds of features have been developed over the years to increase FLGPR detection performance. This paper focuses on investigating the use of as many features as possible for detecting buried targets and uses the sequential feature selection technique to automatically choose the features that contribute most to improving performance. Experimental results using data collected at a government test site are presented.
A-Track: Detecting Moving Objects in FITS images
NASA Astrophysics Data System (ADS)
Atay, T.; Kaplan, M.; Kilic, Y.; Karapinar, N.
2017-04-01
A-Track is a fast, open-source, cross-platform pipeline for detecting moving objects (asteroids and comets) in sequential telescope images in FITS format. The moving objects are detected using a modified line detection algorithm.
The double slit experiment and the time reversed fire alarm
NASA Astrophysics Data System (ADS)
Halabi, Tarek
2011-03-01
When both slits of the double slit experiment are open, closing one paradoxically increases the detection rate at some points on the detection screen. Feynman famously warned that the temptation to "understand" such a puzzling feature only draws us into blind alleys. Nevertheless, we gain insight into this feature by drawing an analogy between the double slit experiment and a time-reversed fire alarm. Much as closing a slit increases the probability of a future detection, ruling out fire-drill scenarios after having heard the fire alarm increases the probability of a past fire (by Bayesian inference). Classically, Bayesian inference is associated with computing probabilities of past events. We therefore identify this feature of the double slit experiment with a time-reversed thermodynamic arrow. We believe that much of the enigma of quantum mechanics is simply due to some variation of time's arrow.
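The time-reversed inference in the fire-alarm analogy is ordinary Bayes' rule. A worked numeric sketch (all probabilities invented for illustration):

```python
# P(past fire | alarm heard), by Bayes' rule. Hearing the alarm rules out
# "quiet" scenarios, raising the probability of a past fire -- much as
# closing a slit raises the detection rate at some screen points.
p_fire = 0.01            # prior probability of a real fire
p_alarm_if_fire = 0.95   # alarms are reliable when there is a fire
p_alarm_if_none = 0.10   # drills and false alarms

p_alarm = p_alarm_if_fire * p_fire + p_alarm_if_none * (1 - p_fire)
p_fire_given_alarm = p_alarm_if_fire * p_fire / p_alarm
```

With these numbers the alarm raises the probability of a past fire from 1% to roughly 9%: the evidence updates a probability about the past, which is the time-reversed direction the abstract emphasizes.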
A comment on priors for Bayesian occupancy models.
Northrup, Joseph M; Gerber, Brian D
2018-01-01
Understanding patterns of species occurrence and the processes underlying these patterns is fundamental to the study of ecology. One of the more commonly used approaches to investigate species occurrence patterns is occupancy modeling, which can account for imperfect detection of a species during surveys. In recent years, there has been a proliferation of Bayesian modeling in ecology, which includes fitting Bayesian occupancy models. The Bayesian framework is appealing to ecologists for many reasons, including the ability to incorporate prior information through the specification of prior distributions on parameters. While ecologists almost exclusively intend to choose priors so that they are "uninformative" or "vague", such priors can easily be unintentionally highly informative. Here we report on how the specification of a "vague" normally distributed (i.e., Gaussian) prior on coefficients in Bayesian occupancy models can unintentionally influence parameter estimation. Using both simulated data and empirical examples, we illustrate how this issue likely compromises inference about species-habitat relationships. While the extent to which these informative priors influence inference depends on the data set, researchers fitting Bayesian occupancy models should conduct sensitivity analyses to ensure intended inference, or employ less commonly used priors that are less informative (e.g., logistic or t prior distributions). We provide suggestions for addressing this issue in occupancy studies, and an online tool for exploring this issue under different contexts.
Arbitrary norm support vector machines.
Huang, Kaizhu; Zheng, Danian; King, Irwin; Lyu, Michael R
2009-02-01
Support vector machines (SVMs) are state-of-the-art classifiers. Typically the L2-norm or the L1-norm is adopted as a regularization term in SVMs, while other norm-based SVMs, for example the L0-norm SVM or even the L(infinity)-norm SVM, are rarely seen in the literature. The major reason is that the L0-norm describes a discontinuous and nonconvex term, leading to a combinatorially NP-hard optimization problem. In this letter, motivated by Bayesian learning, we propose a novel framework that can implement arbitrary norm-based SVMs in polynomial time. One significant feature of this framework is that only a sequence of sequential minimal optimization problems needs to be solved, thus making it practical in many real applications. The proposed framework is important in the sense that Bayesian priors can be efficiently plugged into most learning methods without knowing their explicit form. Hence, this builds a connection between Bayesian learning and the kernel machines. We derive the theoretical framework, demonstrate how our approach works on the L0-norm SVM as a typical example, and perform a series of experiments to validate its advantages. Experimental results on nine benchmark data sets are very encouraging. The implemented L0-norm is competitive with or even better than the standard L2-norm SVM in terms of accuracy, but with a reduced number of support vectors (9.46% of the number on average). When compared with another sparse model, the relevance vector machine, our proposed algorithm also demonstrates better sparse properties with a training speed over seven times faster.
Mathew, Boby; Léon, Jens; Sannemann, Wiebke; Sillanpää, Mikko J.
2018-01-01
Gene-by-gene interactions, also known as epistasis, regulate many complex traits in different species. With the availability of low-cost genotyping it is now possible to study epistasis on a genome-wide scale. However, identifying genome-wide epistasis is a high-dimensional multiple regression problem and needs the application of dimensionality reduction techniques. Flowering Time (FT) in crops is a complex trait that is known to be influenced by many interacting genes and pathways in various crops. In this study, we successfully apply Sure Independence Screening (SIS) for dimensionality reduction to identify two-way and three-way epistasis for the FT trait in a Multiparent Advanced Generation Inter-Cross (MAGIC) barley population using the Bayesian multilocus model. The MAGIC barley population was generated from intercrossing among eight parental lines and thus, offered greater genetic diversity to detect higher-order epistatic interactions. Our results suggest that SIS is an efficient dimensionality reduction approach to detect high-order interactions in a Bayesian multilocus model. We also observe that many of our findings (genomic regions with main or higher-order epistatic effects) overlap with known candidate genes that have been already reported in barley and closely related species for the FT trait. PMID:29254994
Le Bras, Ronan J; Kuzma, Heidi; Sucic, Victor; Bokelmann, Götz
2016-05-01
A notable sequence of calls, spanning several days in January 2003, was encountered in the central part of the Indian Ocean on a hydrophone triplet recording acoustic data at a 250 Hz sampling rate. This paper presents signal processing methods applied to the waveform data to detect and group the recorded signals and to extract amplitude and bearing estimates. An approximate location for the source of the sequence of calls is inferred from the features extracted from the waveform. As the source approaches the hydrophone triplet, the source level (SL) of the calls is estimated at 187 ± 6 dB re 1 μPa at 1 m in the 15-60 Hz frequency range. The calls are attributed to a subgroup of blue whales, Balaenoptera musculus, with a characteristic acoustic signature. A Bayesian location method using probabilistic models for bearing and amplitude is demonstrated on the call sequence. The method is applied to the case of detection at a single triad of hydrophones and results in a probability distribution map for the origin of the calls. It can be extended to detections at multiple triads and, because of the Bayesian formulation, additional modeling complexity can be built in as needed.
Bayesian analysis of energy and count rate data for detection of low count rate radioactive sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klumpp, John
We propose a radiation detection system which generates its own discrete sampling distribution based on past measurements of background. The advantage to this approach is that it can take into account variations in background with respect to time, location, energy spectra, detector-specific characteristics (i.e. different efficiencies at different count rates and energies), etc. This would therefore be a 'machine learning' approach, in which the algorithm updates and improves its characterization of background over time. The system would have a 'learning mode,' in which it measures and analyzes background count rates, and a 'detection mode,' in which it compares measurements from an unknown source against its unique background distribution. By characterizing and accounting for variations in the background, general purpose radiation detectors can be improved with little or no increase in cost. The statistical and computational techniques to perform this kind of analysis have already been developed. The necessary signal analysis can be accomplished using existing Bayesian algorithms which account for multiple channels, multiple detectors, and multiple time intervals. Furthermore, Bayesian machine-learning techniques have already been developed which, with trivial modifications, can generate appropriate decision thresholds based on the comparison of new measurements against a nonparametric sampling distribution. (authors)
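A conjugate Gamma-Poisson model captures the "learning mode" / "detection mode" split in miniature. This sketch is an assumption-laden illustration (single channel, plug-in posterior-mean rate rather than the full predictive distribution), not the system described in the report:

```python
import math

class BackgroundModel:
    """Gamma(alpha, beta) posterior over the background count rate
    (counts/s), updated conjugately from observed counting intervals."""

    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha = alpha  # prior shape: pseudo-counts
        self.beta = beta    # prior rate: pseudo-seconds of observation

    def learn(self, counts, seconds):
        """Learning mode: fold one background interval into the posterior."""
        self.alpha += counts
        self.beta += seconds

    def mean_rate(self):
        return self.alpha / self.beta

    def p_at_least(self, k, seconds):
        """Detection mode: P(>= k counts in `seconds`) under a Poisson with
        the posterior-mean rate (plug-in approximation)."""
        lam = self.mean_rate() * seconds
        term, p_less = math.exp(-lam), 0.0
        for i in range(k):
            p_less += term
            term *= lam / (i + 1)
        return 1.0 - p_less

bg = BackgroundModel()
bg.learn(counts=480, seconds=60.0)  # one minute of background, ~8 counts/s
suspicious = bg.p_at_least(600, 60.0) < 1e-3  # 600 counts in 60 s? alarm
```

Each background interval sharpens the rate posterior, so the same measurement becomes easier or harder to flag as the detector learns its local background, which is the core of the machine-learning framing above.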
Parallel heuristics for scalable community detection
Lu, Hao; Halappanavar, Mahantesh; Kalyanaraman, Ananth
2015-08-14
Community detection has become a fundamental operation in numerous graph-theoretic applications. Despite its potential for application, there is only limited support for community detection on large-scale parallel computers, largely owing to the irregular and inherently sequential nature of the underlying heuristics. In this paper, we present parallelization heuristics for fast community detection using the Louvain method as the serial template. The Louvain method is an iterative heuristic for modularity optimization. Originally developed in 2008, the method has become increasingly popular owing to its ability to detect high modularity community partitions in a fast and memory-efficient manner. However, the method is also inherently sequential, thereby limiting its scalability. Here, we observe certain key properties of this method that present challenges for its parallelization, and consequently propose heuristics that are designed to break the sequential barrier. For evaluation purposes, we implemented our heuristics using OpenMP multithreading, and tested them over real world graphs derived from multiple application domains. Compared to the serial Louvain implementation, our parallel implementation is able to produce community outputs with a higher modularity for most of the inputs tested, in a comparable number of iterations or fewer, while providing real speedups of up to 16x using 32 threads.
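The quantity the Louvain heuristic greedily improves is Newman's modularity Q. A direct O(n²) Python computation, useful only for checking small examples (real implementations use per-community degree sums instead):

```python
def modularity(adj, community):
    """Newman modularity Q = (1/2m) * sum_ij [A_ij - k_i * k_j / 2m] over
    pairs in the same community, for an undirected graph given as an
    adjacency dict {node: set_of_neighbors}."""
    two_m = sum(len(nbrs) for nbrs in adj.values())  # each edge counted twice
    q = 0.0
    for i in adj:
        for j in adj:
            if community[i] != community[j]:
                continue
            a_ij = 1.0 if j in adj[i] else 0.0
            q += a_ij - len(adj[i]) * len(adj[j]) / two_m
    return q / two_m

# Two triangles joined by a single bridge edge (2-3).
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
triangles = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
```

Louvain repeatedly moves single vertices to whichever neighboring community yields the largest Q gain, then contracts communities into super-vertices and repeats; the paper's contribution is breaking the sequential dependence between those moves.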
Smitherman, Todd A; Kuka, Alexander J; Calhoun, Anne H; Walters, A Brooke Pellegrino; Davis-Martin, Rachel E; Ambrose, Carrie E; Rains, Jeanetta C; Houle, Timothy T
2018-05-06
Insomnia is frequently comorbid with chronic migraine, and small trials suggest that cognitive-behavioral treatment of insomnia (CBTi) may reduce migraine frequency. This study endeavored to provide a quantitative synthesis of existing CBTi trials for adults with chronic migraine using Bayesian statistical methods, given their utility in combining prior knowledge with sequentially gathered data. Completer analyses of 2 randomized trials comparing CBTi to a sham control intervention (Calhoun and Ford, 2007; Smitherman et al., 2016) were used to quantify the effects of a brief course of treatment on headache frequency. Change in headache frequency from baseline to the primary endpoint (6-8 weeks posttreatment) was regressed on group status using a Gaussian linear model with each study specified in the order of completion. To estimate the combined effect, posterior distributions from the Calhoun and Ford study were used as informative priors for conditioning on the Smitherman et al. data. In a combined analysis of these prior studies, monthly headache frequency of the treatment group decreased by 6.2 days (95% CrI: -9.7 to -2.7) more than that of the control group, supporting an interpretation that there is a 97.5% chance that the treatment intervention is at least 2.7 days better than the control intervention. The analysis supports the hypothesis that, at least for those who complete treatment, there is a high probability that individuals who receive CBTi experience greater headache reduction than those who receive a control intervention equated for therapist time and out-of-session skills practice. Cognitive-behavioral interventions for comorbid insomnia hold promise for reducing headache frequency among those with chronic migraine. These findings add to a small but growing body of literature that migraineurs with comorbid conditions often respond well to behavioral interventions, and that targeting comorbidities may improve migraine itself. © 2018 American Headache Society.
Orphan therapies: making best use of postmarket data.
Maro, Judith C; Brown, Jeffrey S; Dal Pan, Gerald J; Li, Lingling
2014-08-01
Postmarket surveillance of the comparative safety and efficacy of orphan therapeutics is challenging, particularly when multiple therapeutics are licensed for the same orphan indication. To make best use of product-specific registry data collected to fulfill regulatory requirements, we propose the creation of a distributed electronic health data network among registries. Such a network could support sequential statistical analyses designed to detect early warnings of excess risks. We use a simulated example to explore the circumstances under which a distributed network may prove advantageous. We perform sample size calculations for sequential and non-sequential statistical studies aimed at comparing the incidence of hepatotoxicity following initiation of two newly licensed therapies for homozygous familial hypercholesterolemia. We calculate the sample size savings ratio, or the proportion of sample size saved if one conducted a sequential study as compared to a non-sequential study. Then, using models to describe the adoption and utilization of these therapies, we simulate when these sample sizes are attainable in calendar years. We then calculate the analytic calendar time savings ratio, analogous to the sample size savings ratio. We repeat these analyses for numerous scenarios. Sequential analyses detect effect sizes earlier or at the same time as non-sequential analyses. The most substantial potential savings occur when the market share is more imbalanced (i.e., 90% for therapy A) and the effect size is closest to the null hypothesis. However, due to low exposure prevalence, these savings are difficult to realize within the 30-year time frame of this simulation for scenarios in which the outcome of interest occurs at or more frequently than one event/100 person-years. We illustrate a process to assess whether sequential statistical analyses of registry data performed via distributed networks may prove a worthwhile infrastructure investment for pharmacovigilance.
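The sequential analyses described here are in the spirit of Wald's sequential probability ratio test, which stops as soon as the evidence crosses a decision boundary rather than waiting for a fixed sample size. A generic Bernoulli-outcome sketch (not the authors' exact surveillance statistic; pharmacovigilance work typically uses a maximized variant such as maxSPRT, which this does not implement):

```python
import math

def sprt(outcomes, p0, p1, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for a stream of 0/1
    adverse-event indicators: H0: rate = p0 vs H1: rate = p1. Stops (and
    reports the sample size used) as soon as the log-likelihood ratio
    crosses a decision boundary."""
    lower = math.log(beta / (1.0 - alpha))
    upper = math.log((1.0 - beta) / alpha)
    llr = 0.0
    for n, event in enumerate(outcomes, start=1):
        llr += math.log(p1 / p0) if event else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "continue", len(outcomes)
```

With p0 = 0.1 vs p1 = 0.5, a run of events triggers "accept H1" after only two observations, which is the early-stopping advantage (the "sample size savings") that the simulation study quantifies.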
Delay test generation for synchronous sequential circuits
NASA Astrophysics Data System (ADS)
Devadas, Srinivas
1989-05-01
We address the problem of generating tests for delay faults in non-scan synchronous sequential circuits. Delay test generation for sequential circuits is a considerably more difficult problem than delay testing of combinational circuits and has received much less attention. In this paper, we present a method for generating test sequences to detect delay faults in sequential circuits using the stuck-at fault sequential test generator STALLION. The method is complete in that it will generate a delay test sequence for a targeted fault given sufficient CPU time, if such a sequence exists. We term faults for which no delay test sequence exists, under our test methodology, sequentially delay redundant. We describe means of eliminating sequential delay redundancies in logic circuits. We present a partial-scan methodology for enhancing the testability of difficult-to-test or untestable sequential circuits, wherein a small number of flip-flops are selected and made controllable/observable. The selection process guarantees the elimination of all sequential delay redundancies. We show that an intimate relationship exists between state assignment and delay testability of a sequential machine. We describe a state assignment algorithm for the synthesis of sequential machines with maximal delay fault testability. Preliminary experimental results using the test generation, partial-scan, and synthesis algorithms are presented.
Vasconcelos, Karla Anacleto de; Frota, Silvana Maria Monte Coelho; Ruffino-Netto, Antonio; Kritski, Afrânio Lineu
2018-04-01
To investigate early detection of amikacin-induced ototoxicity in a population treated for multidrug-resistant tuberculosis (MDR-TB), by means of three different tests: pure-tone audiometry (PTA); high-frequency audiometry (HFA); and distortion-product otoacoustic emission (DPOAE) testing. This was a longitudinal prospective cohort study involving patients aged 18-69 years with a diagnosis of MDR-TB who had to receive amikacin for six months as part of their antituberculosis drug regimen for the first time. Hearing was assessed before treatment initiation and at two and six months after treatment initiation. Sequential statistics were used to analyze the results. We included 61 patients, but the final population consisted of 10 patients (7 men and 3 women) because of sequential analysis. Comparison of the test results obtained at two and six months after treatment initiation with those obtained at baseline revealed that HFA at two months and PTA at six months detected hearing threshold shifts consistent with ototoxicity. However, DPOAE testing did not detect such shifts. The statistical method used in this study makes it possible to conclude that, over the six-month period, amikacin-associated hearing threshold shifts were detected by HFA and PTA, and that DPOAE testing was not efficient in detecting such shifts.
Henares, Terence G; Uenoyama, Yuta; Nogawa, Yuto; Ikegami, Ken; Citterio, Daniel; Suzuki, Koji; Funano, Shun-ichi; Sueyoshi, Kenji; Endo, Tatsuro; Hisamoto, Hideaki
2013-06-07
This paper presents a novel rhodamine diphosphate molecule that allows highly sensitive detection of proteins by employing a sequential enzyme-linked immunosorbent assay and capillary isoelectric focusing (ELISA-cIEF). A seven-fold improvement in immunoassay sensitivity and a 1-2 order of magnitude lower detection limit have been demonstrated by taking advantage of the combination of the enzyme-based signal amplification of ELISA and the concentration of enzyme reaction products by cIEF.
Bayesian Inference for Signal-Based Seismic Monitoring
NASA Astrophysics Data System (ADS)
Moore, D.
2015-12-01
Traditional seismic monitoring systems rely on discrete detections produced by station processing software, discarding significant information present in the original recorded signal. SIG-VISA (Signal-based Vertically Integrated Seismic Analysis) is a system for global seismic monitoring through Bayesian inference on seismic signals. By modeling signals directly, our forward model is able to incorporate a rich representation of the physics underlying the signal generation process, including source mechanisms, wave propagation, and station response. This allows inference in the model to recover the qualitative behavior of recent geophysical methods including waveform matching and double-differencing, all as part of a unified Bayesian monitoring system that simultaneously detects and locates events from a global network of stations. We demonstrate recent progress in scaling up SIG-VISA to efficiently process the data stream of global signals recorded by the International Monitoring System (IMS), including comparisons against existing processing methods that show increased sensitivity from our signal-based model and in particular the ability to locate events (including aftershock sequences that can tax analyst processing) precisely from waveform correlation effects. We also provide a Bayesian analysis of an alleged low-magnitude event near the DPRK test site in May 2010 [1] [2], investigating whether such an event could plausibly be detected through automated processing in a signal-based monitoring system. [1] Zhang, Miao and Wen, Lianxing. "Seismological Evidence for a Low-Yield Nuclear Test on 12 May 2010 in North Korea". Seismological Research Letters, January/February 2015. [2] Richards, Paul. "A Seismic Event in North Korea on 12 May 2010". CTBTO SnT 2015 oral presentation, video at https://video-archive.ctbto.org/index.php/kmc/preview/partner_id/103/uiconf_id/4421629/entry_id/0_ymmtpps0/delivery/http
Detecting ‘Wrong Blood in Tube’ Errors: Evaluation of a Bayesian Network Approach
Doctor, Jason N.; Strylewicz, Greg
2010-01-01
Objective In an effort to address the problem of laboratory errors, we develop and evaluate a method to detect mismatched specimens from nationally collected blood laboratory data in two experiments. Methods In Experiments 1 and 2, using blood labs from the National Health and Nutrition Examination Survey (NHANES) and values derived from the Diabetes Prevention Program (DPP), respectively, a proportion of glucose and HbA1c specimens were randomly mismatched. A Bayesian network that encoded probabilistic relationships among analytes was used to predict mismatches. In Experiment 1 the performance of the network was compared against existing error detection software. In Experiment 2 the network was compared against 11 human experts recruited from the American Academy of Clinical Chemists. Results were compared via areas under the receiver-operating characteristic curves (AUCs) and with agreement statistics. Results In Experiment 1 the network was most predictive of mismatches that produced clinically significant discrepancies between true and mismatched scores (AUC of 0.87 (±0.04) for HbA1c and 0.83 (±0.02) for glucose), performed well in identifying errors among those self-reporting diabetes (N = 329) (AUC = 0.79 (±0.02)), and performed significantly better than the established approach it was tested against (in all cases p < 0.05). In Experiment 2 it performed better than (and in no case worse than) 7 of the 11 human experts. Average percent agreement between experts and the Bayesian network was 0.79, and Kappa (κ) was 0.59. Conclusions A Bayesian network can accurately identify mismatched specimens. The algorithm is best at identifying mismatches that result in a clinically significant magnitude of error. PMID:20566275
A Hierarchical Bayesian Model for Crowd Emotions
Urizar, Oscar J.; Baig, Mirza S.; Barakova, Emilia I.; Regazzoni, Carlo S.; Marcenaro, Lucio; Rauterberg, Matthias
2016-01-01
Estimation of emotions is an essential aspect in developing intelligent systems intended for crowded environments. However, emotion estimation in crowds remains a challenging problem due to the complexity in which human emotions are manifested and the capability of a system to perceive them in such conditions. This paper proposes a hierarchical Bayesian model to learn, in an unsupervised manner, the behavior of individuals and of the crowd as a single entity, and to explore the relation between behavior and emotions in order to infer emotional states. Information about the motion patterns of individuals is described using a self-organizing map, and a hierarchical Bayesian network builds probabilistic models to identify behaviors and infer the emotional state of individuals and the crowd. The model is trained and tested using data produced from simulated scenarios that resemble real-life environments. The conducted experiments tested the efficiency of our method to learn, detect and associate behaviors with emotional states, yielding accuracy levels of 74% for individuals and 81% for the crowd, similar in performance to existing methods for pedestrian behavior detection but with novel concepts regarding the analysis of crowds. PMID:27458366
Wu, Jianyong; Gronewold, Andrew D; Rodriguez, Roberto A; Stewart, Jill R; Sobsey, Mark D
2014-02-01
Rapid quantification of viral pathogens in drinking and recreational water can help reduce waterborne disease risks. For this purpose, samples of small volume (e.g., 1 L) are favored because of the convenience of collection, transportation, and processing. However, the results of viral analysis are often subject to uncertainty. To overcome this limitation, we propose an approach that integrates Bayesian statistics, efficient concentration methods, and quantitative PCR (qPCR) to quantify viral pathogens in water. Using this approach, we quantified human adenoviruses (HAdVs) in eighteen samples of source water collected from six drinking water treatment plants. HAdVs were found in seven samples. In the other eleven samples, HAdVs were not detected by qPCR, but might have existed based on Bayesian inference. Our integrated approach that quantifies uncertainty provides a better understanding than conventional assessments of potential risks to public health, particularly in cases when pathogens may present a threat but cannot be detected by traditional methods. © 2013 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Denning, Peter J.
1989-01-01
In 1983 and 1984, the Infrared Astronomical Satellite (IRAS) detected 5,425 stellar objects and measured their infrared spectra. In 1987 a program called AUTOCLASS used Bayesian inference methods to discover the classes present in these data and determine the most probable class of each object, revealing unknown phenomena in astronomy. AUTOCLASS has rekindled the old debate on the suitability of Bayesian methods, which are computationally intensive, interpret probabilities as plausibility measures rather than frequencies, and appear to depend on a subjective assessment of the probability of a hypothesis before the data were collected. Modern statistical methods have, however, recently been shown to also depend on subjective elements. These debates bring into question the whole tradition of scientific objectivity and offer scientists a new way to take responsibility for their findings and conclusions.
Alós, Josep; Palmer, Miquel; Balle, Salvador; Arlinghaus, Robert
2016-01-01
State-space models (SSM) are increasingly applied in studies involving biotelemetry-generated positional data because they are able to estimate movement parameters from positions that are unobserved or have been observed with non-negligible observational error. Popular telemetry systems in marine coastal fish consist of arrays of omnidirectional acoustic receivers, which generate a multivariate time-series of detection events across the tracking period. Here we report a novel Bayesian fitting of an SSM application that couples mechanistic movement properties within a home range (a specific case of random walk weighted by an Ornstein-Uhlenbeck process) with a model of observational error typical for data obtained from acoustic receiver arrays. We explored the performance and accuracy of the approach through simulation modelling and extensive sensitivity analyses of the effects of various configurations of movement properties and time-steps among positions. Model results show an accurate and unbiased estimation of the movement parameters, and in most cases the simulated movement parameters were properly retrieved. Only in extreme situations (when fast swimming speeds are combined with pooling the number of detections over long time-steps) did the model produce some bias that needs to be accounted for in field applications. Our method was subsequently applied to real acoustic tracking data collected from a small marine coastal fish species, the pearly razorfish, Xyrichtys novacula. The Bayesian SSM presented here offers an option for those accustomed to the Bayesian way of reasoning. Our Bayesian SSM can be easily adapted and generalized to any species, thereby allowing studies in freely roaming animals on the ecological and evolutionary consequences of home ranges and territory establishment, both in fishes and in other taxa. PMID:27119718
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ciuca, Razvan; Hernández, Oscar F., E-mail: razvan.ciuca@mail.mcgill.ca, E-mail: oscarh@physics.mcgill.ca
There exist various proposals to detect cosmic strings from Cosmic Microwave Background (CMB) or 21 cm temperature maps. Current proposals do not aim to find the locations of strings on sky maps; all of these approaches can be thought of as a statistic on a sky map. We propose a Bayesian interpretation of cosmic string detection and, within that framework, derive a connection between estimates of cosmic string locations and the cosmic string tension Gμ. We use this Bayesian framework to develop a machine learning framework for detecting strings from sky maps and outline how to implement this framework with neural networks. The neural network we trained was able to detect and locate cosmic strings on noiseless CMB temperature maps down to a string tension of Gμ = 5×10^-9, and when analyzing a CMB temperature map that does not contain strings, the neural network gives a 0.95 probability that Gμ ≤ 2.3×10^-9.
Bayesian analysis of multiple direct detection experiments
NASA Astrophysics Data System (ADS)
Arina, Chiara
2014-12-01
Bayesian methods offer a coherent and efficient framework for implementing uncertainties into induction problems. In this article, we review how this approach applies to the analysis of dark matter direct detection experiments. In particular we discuss the exclusion limit of XENON100 and the debated hints of detection under the hypothesis of a WIMP signal. Within parameter inference, marginalizing consistently over uncertainties to extract robust posterior probability distributions, we find that the claimed tension between XENON100 and the other experiments can be partially alleviated in an isospin-violating scenario, while the elastic scattering model appears to be compatible with the frequentist statistical approach. We then move to model comparison, for which Bayesian methods are particularly well suited. Firstly, we investigate the annual modulation seen in the CoGeNT data, finding weak evidence for a modulation. Modulation models due to other physics compare unfavorably with the WIMP models, paying the price for their excessive complexity. Secondly, we confront several coherent scattering models to determine the current best physical scenario compatible with the experimental hints. We find that exothermic and inelastic dark matter are moderately disfavored against the elastic scenario, while the isospin-violating model has similar evidence. Lastly, the Bayes factor gives inconclusive evidence for an incompatibility between the data sets of XENON100 and the hints of detection. The same question assessed with a goodness-of-fit test would indicate a 2σ discrepancy; more data are therefore needed to settle this question.
Matched Filter Stochastic Background Characterization for Hyperspectral Target Detection
2005-09-30
[Front-matter fragments: table-of-contents entries (pre-clustering MVN test, pre-clustering detection results, pre-clustering target influence, statistical distance exclusion and low contrast) and figure-list entries, including an ROC-curve comparison of RX, K-Means, and Bayesian pre-clustering applied to anomaly detection (Ashton, 1998).]
Hydrologic and geochemical data assimilation at the Hanford 300 Area
NASA Astrophysics Data System (ADS)
Chen, X.; Hammond, G. E.; Murray, C. J.; Zachara, J. M.
2012-12-01
In modeling the uranium migration within the Integrated Field Research Challenge (IFRC) site at the Hanford 300 Area, uncertainties arise from both hydrologic and geochemical sources. The hydrologic uncertainty includes the transient flow boundary conditions induced by dynamic variations in Columbia River stage and the underlying heterogeneous hydraulic conductivity field, while the geochemical uncertainty is a result of limited knowledge of the geochemical reaction processes and parameters, as well as heterogeneity in uranium source terms. In this work, multiple types of data, including the results from constant-injection tests, borehole flowmeter profiling, and conservative tracer tests, are sequentially assimilated across scales within a Bayesian framework to reduce the hydrologic uncertainty. The hydrologic data assimilation is then followed by geochemical data assimilation, where the goal is to infer the heterogeneous distribution of uranium sources using uranium breakthrough curves from a desorption test that took place at a high spring water table. We demonstrate in our study that ensemble-based data assimilation techniques (ensemble Kalman filter and smoother) are efficient in integrating multiple types of data sequentially for uncertainty reduction. The computational demand is managed by using the multi-realization capability within the parallel PFLOTRAN simulator.
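The ensemble Kalman filter update at the heart of such assimilation can be illustrated with a minimal scalar sketch. This is a generic stochastic-perturbation EnKF, not the PFLOTRAN workflow; the ensemble values and observation below are made up for illustration. Each member is corrected toward a perturbed observation by a gain estimated from ensemble covariances.

```python
import random

def enkf_update(ensemble, y_obs, obs_var, h=lambda x: x, seed=1):
    """One EnKF analysis step for a scalar state.

    ensemble: list of prior state samples
    y_obs:    observed value, with noise variance obs_var
    h:        (possibly nonlinear) observation operator
    Each member is nudged toward a perturbed observation by the
    ensemble-estimated Kalman gain.
    """
    rng = random.Random(seed)
    hx = [h(x) for x in ensemble]
    xbar = sum(ensemble) / len(ensemble)
    hbar = sum(hx) / len(hx)
    n1 = len(ensemble) - 1
    c_xy = sum((x - xbar) * (v - hbar) for x, v in zip(ensemble, hx)) / n1
    c_yy = sum((v - hbar) ** 2 for v in hx) / n1
    k = c_xy / (c_yy + obs_var)                       # Kalman gain
    return [x + k * (y_obs + rng.gauss(0, obs_var ** 0.5) - v)
            for x, v in zip(ensemble, hx)]

posterior = enkf_update([-3.0, -1.0, 0.0, 1.0, 3.0], y_obs=5.0, obs_var=1.0)
```

With a tight observation the posterior ensemble mean moves most of the way from the prior mean (0) toward the observation (5), and the ensemble spread contracts.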
Multi-Sensor Information Integration and Automatic Understanding
2008-11-01
[Report fragments: the effort also produced a real-time implementation of the tracking and anomalous-behavior detection system that runs on real-world data; applications include surveillance and airborne IED detection; a key feature of the real-time system is support for analyst decision making with large data sets. Subject terms: multi-hypothesis tracking, particle filters, anomalous behavior detection, Bayesian analysis.]
A Bayesian system to detect and characterize overlapping outbreaks.
Aronis, John M; Millett, Nicholas E; Wagner, Michael M; Tsui, Fuchiang; Ye, Ye; Ferraro, Jeffrey P; Haug, Peter J; Gesteland, Per H; Cooper, Gregory F
2017-09-01
Outbreaks of infectious diseases such as influenza are a significant threat to human health. Because there are different strains of influenza which can cause independent outbreaks, and influenza can affect demographic groups at different rates and times, there is a need to recognize and characterize multiple outbreaks of influenza. This paper describes a Bayesian system that uses data from emergency department patient care reports to create epidemiological models of overlapping outbreaks of influenza. Clinical findings are extracted from patient care reports using natural language processing. These findings are analyzed by a case detection system to create disease likelihoods that are passed to a multiple outbreak detection system. We evaluated the system using real and simulated outbreaks. The results show that this approach can recognize and characterize overlapping outbreaks of influenza. We describe several extensions that appear promising. Copyright © 2017 Elsevier Inc. All rights reserved.
Real-time Bayesian anomaly detection in streaming environmental data
NASA Astrophysics Data System (ADS)
Hill, David J.; Minsker, Barbara S.; Amir, Eyal
2009-04-01
With large volumes of data arriving in near real time from environmental sensors, there is a need for automated detection of anomalous data caused by sensor or transmission errors or by infrequent system behaviors. This study develops and evaluates three automated anomaly detection methods using dynamic Bayesian networks (DBNs), which perform fast, incremental evaluation of data as they become available, scale to large quantities of data, and require no a priori information regarding process variables or types of anomalies that may be encountered. This study investigates these methods' abilities to identify anomalies in eight meteorological data streams from Corpus Christi, Texas. The results indicate that DBN-based detectors, using either robust Kalman filtering or Rao-Blackwellized particle filtering, outperform a DBN-based detector using Kalman filtering, with the former having false positive/negative rates of less than 2%. These methods were successful at identifying data anomalies caused by two real events: a sensor failure and a large storm.
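The robust-Kalman-filter variant of such a detector can be sketched as follows. This is a hypothetical minimal detector, not the authors' implementation: a random-walk state model predicts each incoming value, points with improbably large standardized innovations are flagged, and (the "robust" step) flagged points are excluded from the state update so an outlier does not corrupt the filter. The noise variances q and r are illustrative values.

```python
def kalman_anomaly_detector(stream, q=0.01, r=0.5, threshold=3.0):
    """Flag points whose standardized innovation exceeds `threshold` sigma.

    q: process-noise variance (random-walk state model, assumed value)
    r: measurement-noise variance (assumed value)
    """
    x, p = stream[0], 1.0              # initial state estimate and variance
    flags = [False]
    for z in stream[1:]:
        p_pred = p + q                 # predict (random-walk dynamics)
        s = p_pred + r                 # innovation variance
        innovation = z - x
        anomalous = abs(innovation) > threshold * s ** 0.5
        flags.append(anomalous)
        if not anomalous:              # robust step: skip update on outliers
            k = p_pred / s             # Kalman gain
            x = x + k * innovation
            p = (1 - k) * p_pred
    return flags

data = [20.1, 20.3, 20.2, 20.4, 35.0, 20.5, 20.4]  # spike at index 4
print(kalman_anomaly_detector(data))
```

Because the flagged spike never enters the update, the filter's prediction for the following points is unaffected and only the true outlier is marked.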
NASA Technical Reports Server (NTRS)
Gai, E. G.; Curry, R. E.
1978-01-01
An investigation of the behavior of the human decision maker is described for a task related to the problem of a pilot using a traffic situation display to avoid collisions. This sequential signal detection task is characterized by highly correlated signals with time-varying strength. Experimental results are presented, and the behavior of the observers is analyzed using the theory of Markov processes and classical signal detection theory. Mathematical models are developed which describe the main result of the experiment: that correlation in sequential signals induced perseveration in the observers' responses, a strong tendency to repeat their previous decision even when it was wrong.
Radiation detection method and system using the sequential probability ratio test
Nelson, Karl E [Livermore, CA; Valentine, John D [Redwood City, CA; Beauchamp, Brock R [San Ramon, CA
2007-07-17
A method and system using the Sequential Probability Ratio Test to enhance the detection of an elevated level of radiation, by determining whether a set of observations is consistent with a specified model within given bounds of statistical significance. In particular, the SPRT is used in the present invention to maximize the range of detection by providing processing mechanisms for estimating the dynamic background radiation, adjusting the models to reflect the amount of background knowledge at the current point in time, analyzing the current sample using the models to determine statistical significance, and determining when the sample has returned to the expected background conditions.
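The decision logic described above, accumulating evidence sample by sample and stopping as soon as a threshold is crossed, can be sketched for Poisson counting data. This is a generic Wald SPRT, not the patented processing chain; the background rate b and source strength s are assumed known rather than estimated dynamically.

```python
from math import log

def sprt_radiation(counts, b, s, alpha=0.01, beta=0.01):
    """Sequential probability ratio test on a stream of Poisson counts.

    H0: mean count per interval = b (background only)
    H1: mean count per interval = b + s (source present)
    alpha/beta: target false-alarm and miss probabilities.
    Returns ('source' | 'background' | 'undecided', samples used).
    """
    upper = log((1 - beta) / alpha)       # cross above: accept H1
    lower = log(beta / (1 - alpha))       # cross below: accept H0
    llr = 0.0
    for i, n in enumerate(counts, 1):
        # Poisson log-likelihood ratio for one counting interval
        llr += n * log((b + s) / b) - s
        if llr >= upper:
            return "source", i
        if llr <= lower:
            return "background", i
    return "undecided", len(counts)
```

A decision is produced as soon as the accumulated log-likelihood ratio leaves the (lower, upper) band, so strong sources are declared after very few intervals.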
Vuckovic, Anita; Kwantes, Peter J; Humphreys, Michael; Neal, Andrew
2014-03-01
Signal Detection Theory (SDT; Green & Swets, 1966) is a popular tool for understanding decision making. However, it does not account for the time taken to make a decision, nor why response bias might change over time. Sequential sampling models provide a way of accounting for speed-accuracy trade-offs and response bias shifts. In this study, we test the validity of a sequential sampling model of conflict detection in a simulated air traffic control task by assessing whether two of its key parameters respond to experimental manipulations in a theoretically consistent way. Through experimental instructions, we manipulated participants' response bias and the relative speed or accuracy of their responses. The sequential sampling model was able to replicate the trends in the conflict responses as well as response time across all conditions. Consistent with our predictions, manipulating response bias was associated primarily with changes in the model's Criterion parameter, whereas manipulating speed-accuracy instructions was associated with changes in the Threshold parameter. The success of the model in replicating the human data suggests we can use the parameters of the model to gain an insight into the underlying response bias and speed-accuracy preferences common to dynamic decision-making tasks. © 2013 American Psychological Association
Fosgate, G T; Motimele, B; Ganswindt, A; Irons, P C
2017-09-15
Accurate diagnosis of pregnancy is an essential component of an effective reproductive management plan for dairy cattle. Indirect methods of pregnancy detection can be performed soon after breeding and offer an advantage over traditional direct methods in not requiring an experienced veterinarian and having potential for automation. The objective of this study was to estimate the sensitivity and specificity of pregnancy-associated glycoprotein (PAG) detection ELISA and transrectal ultrasound (TRUS) in dairy cows of South Africa using a Bayesian latent class approach. Commercial dairy cattle from the five important dairy regions in South Africa were enrolled in a short-term prospective cohort study. Cattle were examined at 28-35 days after artificial insemination (AI) and then followed up 14 days later. At both sampling times, TRUS was performed to detect pregnancy and commercially available PAG detection ELISAs were performed on collected serum and milk. A total of 1236 cows were sampled and 1006 had complete test information for use in the Bayesian latent class model. The estimated sensitivity (95% probability interval) and specificity for PAG detection serum ELISA were 99.4% (98.5, 99.9) and 97.4% (94.7, 99.2), respectively. The estimated sensitivity and specificity for PAG detection milk ELISA were 99.2% (98.2, 99.8) and 93.4% (89.7, 96.1), respectively. Sensitivity of veterinarian-performed TRUS at 28-35 days post-AI varied between 77.8% and 90.5% and specificity varied between 94.7% and 99.8%. In summary, indirect detection of pregnancy using PAG ELISA is an accurate method for use in dairy cattle. The method is descriptively more sensitive than veterinarian-performed TRUS and therefore could be an economically viable addition to a reproductive management plan. Copyright © 2017 Elsevier B.V. All rights reserved.
Informative priors on fetal fraction increase power of the noninvasive prenatal screen.
Xu, Hanli; Wang, Shaowei; Ma, Lin-Lin; Huang, Shuai; Liang, Lin; Liu, Qian; Liu, Yang-Yang; Liu, Ke-Di; Tan, Ze-Min; Ban, Hao; Guan, Yongtao; Lu, Zuhong
2017-11-09
Purpose: Noninvasive prenatal screening (NIPS) sequences a mixture of the maternal and fetal cell-free DNA. Fetal trisomy can be detected by examining chromosomal dosages estimated from sequencing reads. The traditional method uses the Z-test, which compares a subject against a set of euploid controls, where the information of the fetal fraction is not fully utilized. Here we present a Bayesian method that leverages informative priors on the fetal fraction. Method: Our Bayesian method combines the Z-test likelihood and informative priors on the fetal fraction, which are learned from the sex chromosomes, to compute Bayes factors. The Bayesian framework can account for nongenetic risk factors through the prior odds, and our method can report individual positive/negative predictive values. Results: Our Bayesian method has more power than the Z-test method. We analyzed 3,405 NIPS samples and spotted at least 9 (of 51) possible Z-test false positives. Conclusion: Bayesian NIPS is more powerful than the Z-test method, is able to account for nongenetic risk factors through prior odds, and can report individual positive/negative predictive values. Genetics in Medicine advance online publication, 9 November 2017; doi:10.1038/gim.2017.186.
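The contrast between the plain Z-test and a fetal-fraction-aware Bayes factor can be sketched in a toy model, not the paper's method: the prior on fetal fraction f is represented by Monte Carlo samples (in the paper, learned from the sex chromosomes), and shift_per_unit_f, which converts f into a Z-score shift under trisomy, is a hypothetical scaling that in practice depends on read depth and chromosome length.

```python
from math import exp, pi, sqrt

def npdf(x, mu=0.0, sd=1.0):
    """Normal probability density."""
    return exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * sqrt(2 * pi))

def nips_bayes_factor(z, fetal_fractions, shift_per_unit_f):
    """Bayes factor for trisomy vs. euploid given a chromosomal-dosage Z-score.

    Under H0 (euploid) the Z-score is N(0, 1); under H1 (trisomy) its mean
    shifts by shift_per_unit_f * f. The marginal likelihood under H1
    averages the shifted density over prior samples of the fetal fraction.
    """
    like_h1 = sum(npdf(z, mu=shift_per_unit_f * f) for f in fetal_fractions)
    like_h1 /= len(fetal_fractions)
    return like_h1 / npdf(z)
```

A borderline Z-score yields a large Bayes factor only when it matches the shift expected from that pregnancy's fetal fraction, which is how the informative prior adds power over a one-size-fits-all Z cutoff.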
Application of a data-mining method based on Bayesian networks to lesion-deficit analysis
NASA Technical Reports Server (NTRS)
Herskovits, Edward H.; Gerring, Joan P.
2003-01-01
Although lesion-deficit analysis (LDA) has provided extensive information about structure-function associations in the human brain, LDA has suffered from the difficulties inherent to the analysis of spatial data, i.e., there are many more variables than subjects, and data may be difficult to model using standard distributions, such as the normal distribution. We herein describe a Bayesian method for LDA; this method is based on data-mining techniques that employ Bayesian networks to represent structure-function associations. These methods are computationally tractable, and can represent complex, nonlinear structure-function associations. When applied to the evaluation of data obtained from a study of the psychiatric sequelae of traumatic brain injury in children, this method generates a Bayesian network that demonstrates complex, nonlinear associations among lesions in the left caudate, right globus pallidus, right side of the corpus callosum, right caudate, and left thalamus, and subsequent development of attention-deficit hyperactivity disorder, confirming and extending our previous statistical analysis of these data. Furthermore, analysis of simulated data indicates that methods based on Bayesian networks may be more sensitive and specific for detecting associations among categorical variables than methods based on chi-square and Fisher exact statistics.
Vilar, M J; Ranta, J; Virtanen, S; Korkeala, H
2015-01-01
Bayesian analysis was used to estimate the pig-level and herd-level true prevalence of enteropathogenic Yersinia from serum samples collected on Finnish pig farms. The sensitivity and specificity of the diagnostic test were also estimated for the commercially available ELISA used for antibody detection against enteropathogenic Yersinia. The Bayesian analysis was performed in two steps: the first step estimated the prior true prevalence of enteropathogenic Yersinia with data obtained from a systematic review of the literature. In the second step, data on the apparent prevalence (cross-sectional study data), the prior true prevalence (from the first step), and the estimated sensitivity and specificity of the diagnostic methods were used to build the Bayesian model. The true prevalence of Yersinia in slaughter-age pigs was 67.5% (95% PI 63.2-70.9). The true prevalence of Yersinia in sows was 74.0% (95% PI 57.3-82.4). The estimated sensitivity and specificity of the ELISA were 79.5% and 96.9%, respectively.
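The core of such a latent-class correction, inferring true prevalence from apparent test counts given sensitivity and specificity, can be sketched with a simple grid posterior. This is an illustration only; the study placed priors on sensitivity and specificity as well and fitted a full two-step model, whereas here se and sp are treated as known and the prior on prevalence is uniform.

```python
def true_prevalence_posterior_mean(y, n, se, sp, grid_size=1001):
    """Posterior mean of true prevalence from y test-positives out of n.

    Apparent prevalence relates to true prevalence tp via
        P(test+) = tp*se + (1 - tp)*(1 - sp),
    so the binomial likelihood is evaluated on a grid of tp values
    under a uniform prior.
    """
    grid = [i / (grid_size - 1) for i in range(grid_size)]
    post = []
    for tp in grid:
        p_pos = tp * se + (1 - tp) * (1 - sp)    # apparent prevalence
        post.append((p_pos ** y) * ((1 - p_pos) ** (n - y)))
    z = sum(post)
    return sum(t * w for t, w in zip(grid, post)) / z
```

With an imperfect test the posterior mean lands near the Rogan-Gladen correction (apparent prevalence minus the false-positive rate, divided by se + sp - 1) rather than at the raw positive fraction.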
Palmer, Matthew A; Brewer, Neil
2012-06-01
When compared with simultaneous lineup presentation, sequential presentation has been shown to reduce false identifications to a greater extent than it reduces correct identifications. However, there has been much debate about whether this difference in identification performance represents improved discriminability or more conservative responding. In this research, data from 22 experiments that compared sequential and simultaneous lineups were analyzed using a compound signal-detection model, which is specifically designed to describe decision-making performance on tasks such as eyewitness identification tests. Sequential (cf. simultaneous) presentation did not influence discriminability, but produced a conservative shift in response bias that resulted in less-biased choosing for sequential than simultaneous lineups. These results inform understanding of the effects of lineup presentation mode on eyewitness identification decisions.
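In the equal-variance signal-detection framework underlying such analyses, discriminability and response bias are computed from hit and false-alarm rates; the conservative shift reported above appears as an increase in the criterion c with d' unchanged. A minimal sketch using the standard SDT formulas, not the compound model fitted in the paper:

```python
from statistics import NormalDist

def dprime_criterion(hit_rate, fa_rate):
    """Equal-variance signal detection indices from hit and false-alarm rates.

    d' = z(H) - z(FA)       (discriminability)
    c  = -(z(H) + z(FA))/2  (criterion; positive = conservative responding)
    """
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate), -(z(hit_rate) + z(fa_rate)) / 2
```

For example, moving from (hits 0.69, false alarms 0.31) to (0.55, 0.15) leaves d' roughly constant while c rises, the signature of more conservative choosing without a change in discriminability.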
A comment on priors for Bayesian occupancy models
Gerber, Brian D.
2018-01-01
Understanding patterns of species occurrence and the processes underlying these patterns is fundamental to the study of ecology. One of the more commonly used approaches to investigate species occurrence patterns is occupancy modeling, which can account for imperfect detection of a species during surveys. In recent years, there has been a proliferation of Bayesian modeling in ecology, which includes fitting Bayesian occupancy models. The Bayesian framework is appealing to ecologists for many reasons, including the ability to incorporate prior information through the specification of prior distributions on parameters. While ecologists almost exclusively intend to choose priors so that they are “uninformative” or “vague”, such priors can easily be unintentionally highly informative. Here we report on how the specification of a “vague” normally distributed (i.e., Gaussian) prior on coefficients in Bayesian occupancy models can unintentionally influence parameter estimation. Using both simulated data and empirical examples, we illustrate how this issue likely compromises inference about species-habitat relationships. While the extent to which these informative priors influence inference depends on the data set, researchers fitting Bayesian occupancy models should conduct sensitivity analyses to ensure intended inference, or employ less commonly used priors that are less informative (e.g., logistic or t prior distributions). We provide suggestions for addressing this issue in occupancy studies, and an online tool for exploring this issue under different contexts. PMID:29481554
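The pathology described above is easy to demonstrate: a normal prior that looks vague on the logit scale piles most of its mass near occupancy probabilities of 0 and 1 after the inverse-logit transform. A small simulation sketch with illustrative values, not taken from the paper:

```python
import random
from math import exp

def inv_logit(x):
    return 1.0 / (1.0 + exp(-x))

def extreme_mass(sd, n=100_000, seed=0):
    """Fraction of prior mass at occupancy probability < 0.01 or > 0.99
    when the logit-scale coefficient prior is Normal(0, sd^2)."""
    rng = random.Random(seed)
    count = 0
    for _ in range(n):
        p = inv_logit(rng.gauss(0.0, sd))
        if p < 0.01 or p > 0.99:
            count += 1
    return count / n

print(extreme_mass(10.0))   # "vague" N(0, 100) prior: mostly extreme
print(extreme_mass(1.5))    # modest-sd prior: nearly flat on (0, 1)
```

Roughly two-thirds of the supposedly uninformative N(0, 10^2) prior's mass ends up within 0.01 of the boundaries, whereas a smaller standard deviation keeps the implied occupancy prior close to uniform, which is the sensitivity-analysis point the abstract makes.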
2D Bayesian automated tilted-ring fitting of disc galaxies in large H I galaxy surveys: 2DBAT
NASA Astrophysics Data System (ADS)
Oh, Se-Heon; Staveley-Smith, Lister; Spekkens, Kristine; Kamphuis, Peter; Koribalski, Bärbel S.
2018-01-01
We present a novel algorithm based on a Bayesian method for 2D tilted-ring analysis of disc galaxy velocity fields. Compared to the conventional algorithms based on a chi-squared minimization procedure, this new Bayesian-based algorithm suffers less from local minima of the model parameters even with highly multimodal posterior distributions. Moreover, the Bayesian analysis, implemented via Markov Chain Monte Carlo sampling, only requires broad ranges of posterior distributions of the parameters, which makes the fitting procedure fully automated. This feature will be essential when performing kinematic analysis on the large number of resolved galaxies expected to be detected in neutral hydrogen (H I) surveys with the Square Kilometre Array and its pathfinders. The so-called 2D Bayesian Automated Tilted-ring fitter (2DBAT) implements Bayesian fits of 2D tilted-ring models in order to derive rotation curves of galaxies. We explore 2DBAT performance on (a) artificial H I data cubes built based on representative rotation curves of intermediate-mass and massive spiral galaxies, and (b) Australia Telescope Compact Array H I data from the Local Volume H I Survey. We find that 2DBAT works best for well-resolved galaxies with intermediate inclinations (20° < i < 70°), complementing 3D techniques better suited to modelling inclined galaxies.
Sequential detection of influenza epidemics by the Kolmogorov-Smirnov test
2012-01-01
Background Influenza is a well-known and common human respiratory infection, causing significant morbidity and mortality every year. Despite influenza's variability, fast and reliable outbreak detection is required for health resource planning. Clinical health records, as published by the Diagnosticat database in Catalonia, host useful data for probabilistic detection of influenza outbreaks. Methods This paper proposes a statistical method to detect influenza epidemic activity. Non-epidemic incidence rates are modeled against the exponential distribution, and the maximum likelihood estimate of the decay factor λ is calculated. The sequential detection algorithm updates the parameter as new data become available. Binary epidemic detection of weekly incidence rates is assessed by a Kolmogorov-Smirnov test on the absolute difference between the empirical and the cumulative density function of the estimated exponential distribution, with significance level 0 ≤ α ≤ 1. Results The main advantage with respect to other approaches is the adoption of a statistically meaningful test, which provides an indicator of epidemic activity with an associated probability. The detection algorithm was initiated with parameter λ0 = 3.8617, estimated from the training sequence (corresponding to non-epidemic incidence rates of the 2008-2009 influenza season), and sequentially updated. The Kolmogorov-Smirnov test detected the following weeks as epidemic: weeks 50-10 (2008-2009 season), weeks 38-50 (2009-2010 season), weeks 50-9 (2010-2011 season), and weeks 3-12 of the then-current 2011-2012 season. Conclusions Real medical data were used to assess the validity of the approach, as well as to construct a realistic statistical model of weekly influenza incidence rates in non-epidemic periods. For the tested data, the results confirmed the ability of the algorithm to detect the start and the end of epidemic periods.
In general, the proposed test could be applied to other data sets to quickly detect influenza outbreaks. The sequential structure of the test makes it suitable for implementation on many platforms at low computational cost, without requiring the storage of large data sets. PMID:23031321
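The detection scheme can be sketched as follows, a simplified illustration with made-up incidence rates rather than the paper's exact update rule: fit the exponential decay factor by maximum likelihood on non-epidemic data, then flag a window of weekly rates as epidemic when the KS statistic exceeds the asymptotic critical value.

```python
import math

def ks_statistic(data, cdf):
    """Two-sided Kolmogorov-Smirnov statistic against a continuous CDF."""
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        fx = cdf(x)
        d = max(d, (i + 1) / n - fx, fx - i / n)
    return d

def epidemic_window(non_epidemic_rates, window, c_alpha=1.358):
    """Flag a window of weekly incidence rates as epidemic when it fails a
    KS test against Exponential(lambda) fitted by MLE to non-epidemic data.
    c_alpha = 1.358 is the asymptotic 5% Kolmogorov critical constant."""
    lam = len(non_epidemic_rates) / sum(non_epidemic_rates)  # MLE decay factor
    cdf = lambda x: 1.0 - math.exp(-lam * x)
    return ks_statistic(window, cdf) > c_alpha / math.sqrt(len(window))
```

Calm weeks drawn from rates comparable to the training data pass the test, while a run of high incidence rates is immediately incompatible with the fitted exponential and gets flagged.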
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Z; Terry, N; Hubbard, S S
2013-02-12
In this study, we evaluate the possibility of monitoring soil moisture variation using tomographic ground penetrating radar travel time data through Bayesian inversion, which is integrated with entropy memory function and pilot point concepts, as well as efficient sampling approaches. It is critical to accurately estimate soil moisture content and variations in vadose zone studies. Many studies have illustrated the promise and value of GPR tomographic data for estimating soil moisture and associated changes; however, challenges still exist in the inversion of GPR tomographic data in a manner that quantifies input and predictive uncertainty, incorporates multiple data types, handles non-uniqueness and nonlinearity, and honors time-lapse tomograms collected in a series. To address these challenges, we develop a minimum relative entropy (MRE)-Bayesian based inverse modeling framework that non-subjectively defines prior probabilities, incorporates information from multiple sources, and quantifies uncertainty. The framework enables us to estimate dielectric permittivity at pilot point locations distributed within the tomogram, as well as the spatial correlation range. In the inversion framework, MRE is first used to derive prior probability density functions (pdfs) of dielectric permittivity based on prior information obtained from a straight-ray GPR inversion. The probability distributions are then sampled using a Quasi-Monte Carlo (QMC) approach, and the sample sets provide inputs to a sequential Gaussian simulation (SGSim) algorithm that constructs a highly resolved permittivity/velocity field for evaluation with a curved-ray GPR forward model. The likelihood functions are computed as a function of misfits, and posterior pdfs are constructed using a Gaussian kernel. Inversion of subsequent time-lapse datasets combines the Bayesian estimates from the previous inversion (as a memory function) with new data. 
The memory function and pilot point design takes advantage of the spatial-temporal correlation of the state variables. We first apply the inversion framework to a static synthetic example and then to a time-lapse GPR tomographic dataset collected during a dynamic experiment conducted at the Hanford Site in Richland, WA. We demonstrate that the MRE-Bayesian inversion enables us to merge various data types, quantify uncertainty, evaluate nonlinear models, and produce more detailed and better resolved estimates than straight-ray based inversion; therefore, it has the potential to improve estimates of inter-wellbore dielectric permittivity and soil moisture content and to monitor their temporal dynamics more accurately.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Zhangshuan; Terry, Neil C.; Hubbard, Susan S.
2013-02-22
In this study, we evaluate the possibility of monitoring soil moisture variation using tomographic ground penetrating radar travel time data through Bayesian inversion, which is integrated with entropy memory function and pilot point concepts, as well as efficient sampling approaches. It is critical to accurately estimate soil moisture content and variations in vadose zone studies. Many studies have illustrated the promise and value of GPR tomographic data for estimating soil moisture and associated changes, however, challenges still exist in the inversion of GPR tomographic data in a manner that quantifies input and predictive uncertainty, incorporates multiple data types, handles non-uniquenessmore » and nonlinearity, and honors time-lapse tomograms collected in a series. To address these challenges, we develop a minimum relative entropy (MRE)-Bayesian based inverse modeling framework that non-subjectively defines prior probabilities, incorporates information from multiple sources, and quantifies uncertainty. The framework enables us to estimate dielectric permittivity at pilot point locations distributed within the tomogram, as well as the spatial correlation range. In the inversion framework, MRE is first used to derive prior probability density functions (pdfs) of dielectric permittivity based on prior information obtained from a straight-ray GPR inversion. The probability distributions are then sampled using a Quasi-Monte Carlo (QMC) approach, and the sample sets provide inputs to a sequential Gaussian simulation (SGSIM) algorithm that constructs a highly resolved permittivity/velocity field for evaluation with a curved-ray GPR forward model. The likelihood functions are computed as a function of misfits, and posterior pdfs are constructed using a Gaussian kernel. Inversion of subsequent time-lapse datasets combines the Bayesian estimates from the previous inversion (as a memory function) with new data. 
The memory-function and pilot-point design takes advantage of the spatial-temporal correlation of the state variables. We first apply the inversion framework to a static synthetic example and then to a time-lapse GPR tomographic dataset collected during a dynamic experiment conducted at the Hanford Site in Richland, WA. We demonstrate that the MRE-Bayesian inversion enables us to merge various data types, quantify uncertainty, evaluate nonlinear models, and produce more detailed and better resolved estimates than straight-ray based inversion; therefore, it has the potential to improve estimates of inter-wellbore dielectric permittivity and soil moisture content and to monitor their temporal dynamics more accurately.
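The core loop of a sampling-based Bayesian inversion like the one described above can be sketched in a few lines: draw permittivity samples from a prior, score each with a Gaussian likelihood of the travel-time misfit under a straight-ray forward model, and form a weighted posterior estimate. This is a deliberately minimal 1-D toy, not the authors' MRE/QMC/SGSIM framework; the prior, noise level, and path length are illustrative assumptions.

```python
import math
import random

C = 0.3  # electromagnetic wave speed in vacuum, m/ns

def forward_travel_time(permittivity, path_length_m=10.0):
    """Straight-ray travel time: t = L * sqrt(eps_r) / c."""
    return path_length_m * math.sqrt(permittivity) / C

def posterior_mean(observed_t, prior_mu=6.0, prior_sigma=1.5,
                   noise_sigma=1.0, n_samples=5000, seed=0):
    """Importance-weighted posterior mean of permittivity."""
    rng = random.Random(seed)
    weighted_sum = total_weight = 0.0
    for _ in range(n_samples):
        eps = rng.gauss(prior_mu, prior_sigma)   # sample the prior
        if eps <= 1.0:                           # permittivity must exceed vacuum
            continue
        misfit = forward_travel_time(eps) - observed_t
        w = math.exp(-0.5 * (misfit / noise_sigma) ** 2)  # Gaussian likelihood
        weighted_sum += w * eps
        total_weight += w
    return weighted_sum / total_weight

true_eps = 7.0
t_obs = forward_travel_time(true_eps)   # synthetic "observed" travel time
est = posterior_mean(t_obs)
print(round(est, 2))  # posterior mean close to the true permittivity of 7
```

The same weighting-and-averaging structure carries over when the forward model is a curved-ray solver and the samples come from a QMC sequence instead of pseudo-random draws.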
2002-08-01
the measurement noise, as well as the physical model of the forward-scattered electric field. The Bayesian algorithms for the Uncertain Permittivity... received at multiple sensors. In this research project a tissue-model-based signal-detection theory approach for the detection of mammary tumors in the... oriented information processors.
Progressive data transmission for anatomical landmark detection in a cloud.
Sofka, M; Ralovich, K; Zhang, J; Zhou, S K; Comaniciu, D
2012-01-01
In the concept of cloud-computing-based systems, various authorized users have secure access to patient records from a number of care delivery organizations from any location. This creates a growing need for remote visualization, advanced image processing, state-of-the-art image analysis, and computer-aided diagnosis. This paper proposes a system of algorithms for automatic detection of anatomical landmarks in 3D volumes in the cloud computing environment. The system addresses the inherent problem of limited bandwidth between a (thin) client, data center, and data analysis server. The problem of limited bandwidth is solved by a hierarchical sequential detection algorithm that obtains data by progressively transmitting only image regions required for processing. The client sends a request to detect a set of landmarks for region visualization or further analysis. The algorithm running on the data analysis server obtains a coarse level image from the data center and generates landmark location candidates. The candidates are then used to obtain image neighborhood regions at a finer resolution level for further detection. This way, the landmark locations are hierarchically and sequentially detected and refined. Only image regions surrounding landmark location candidates need to be transmitted during detection. Furthermore, the image regions are lossy compressed with JPEG 2000. Together, these properties amount to at least a 30-fold reduction in bandwidth while achieving similar accuracy when compared to an algorithm using the original data. The hierarchical sequential algorithm with progressive data transmission considerably reduces bandwidth requirements in cloud-based detection systems.
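The bandwidth-saving idea of the hierarchical sequential algorithm can be illustrated with a 1-D toy: detect on a downsampled signal first, then request only the fine-resolution neighborhood around the coarse candidate. The argmax "detector" and the signal below are stand-ins for the paper's 3-D volumes and learned detectors; the transmitted-sample accounting is the point.

```python
def progressive_detect(fine_signal, factor=8, radius=8):
    """Coarse-to-fine landmark detection with progressive transmission."""
    # Coarse pass: transmit only every `factor`-th sample.
    coarse = fine_signal[::factor]
    cand = max(range(len(coarse)), key=coarse.__getitem__) * factor
    transmitted = len(coarse)
    # Fine pass: transmit just the neighborhood around the candidate.
    lo, hi = max(0, cand - radius), min(len(fine_signal), cand + radius + 1)
    window = fine_signal[lo:hi]
    transmitted += len(window)
    refined = lo + max(range(len(window)), key=window.__getitem__)
    return refined, transmitted

# Toy "image": a smooth bump centered on the landmark plus a sharp peak,
# so the coarse grid sees the region even though it misses the exact sample.
signal = [0.0] * 1024
signal[517] = 1.0                       # true landmark location
for i in range(1024):
    signal[i] += max(0.0, 1.0 - abs(i - 517) / 32)
pos, sent = progressive_detect(signal)
print(pos, sent, len(signal))  # exact landmark found; far fewer samples sent
```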
Conroy, M.J.; Runge, J.P.; Barker, R.J.; Schofield, M.R.; Fonnesbeck, C.J.
2008-01-01
Many organisms are patchily distributed, with some patches occupied at high density, others at lower densities, and others not occupied. Estimation of overall abundance can be difficult and is inefficient via intensive approaches such as capture-mark-recapture (CMR) or distance sampling. We propose a two-phase sampling scheme and model in a Bayesian framework to estimate abundance for patchily distributed populations. In the first phase, occupancy is estimated by binomial detection samples taken on all selected sites, where selection may be of all sites available, or a random sample of sites. Detection can be by visual surveys, detection of sign, physical captures, or another approach. In the second phase, if a detection threshold is achieved, CMR or other intensive sampling is conducted via standard procedures (grids or webs) to estimate abundance. Detection and CMR data are then used in a joint likelihood to model probability of detection in the occupancy sample via an abundance-detection model. CMR modeling is used to estimate abundance for the abundance-detection relationship, which in turn is used to predict abundance at the remaining sites, where only detection data are collected. We present a full Bayesian modeling treatment of this problem, in which posterior inference on abundance and other parameters (detection, capture probability) is obtained under a variety of assumptions about spatial and individual sources of heterogeneity. We apply the approach to abundance estimation for two species of voles (Microtus spp.) in Montana, USA. We also use a simulation study to evaluate the frequentist properties of our procedure given known patterns in abundance and detection among sites as well as design criteria. For most population characteristics and designs considered, bias and mean-square error (MSE) were low, and coverage of true parameter values by Bayesian credibility intervals was near nominal.
Our two-phase, adaptive approach allows efficient estimation of abundance of rare and patchily distributed species and is particularly appropriate when sampling in all patches is impossible, but a global estimate of abundance is required.
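The abundance-detection link at the heart of the two-phase design is easy to state: if each of N animals at a site is detected independently with probability p, the site-level detection probability is 1 - (1 - p)^N. The sketch below (illustrative numbers, not the voles dataset) inverts this relationship to predict abundance from detection-only data once p is known from the CMR phase.

```python
import math
import random

def site_detection_prob(N, p):
    """Probability of detecting at least one of N animals."""
    return 1.0 - (1.0 - p) ** N

def predict_abundance(detect_freq, p):
    """Invert 1 - (1 - p)^N = detect_freq for N."""
    return math.log(1.0 - detect_freq) / math.log(1.0 - p)

rng = random.Random(1)
p = 0.15        # per-individual detection probability (supplied by CMR phase)
true_N = 12     # true site abundance
# Repeated binomial detection surveys at a site holding true_N animals:
surveys = [1 if rng.random() < site_detection_prob(true_N, p) else 0
           for _ in range(2000)]
freq = sum(surveys) / len(surveys)
print(round(predict_abundance(freq, p), 1))  # close to the true abundance of 12
```

In the full model this inversion is replaced by joint posterior inference, but the same monotone link is what lets detection-only sites borrow strength from the CMR sites.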
Anticipating conflict facilitates controlled stimulus-response selection
Correa, Ángel; Rao, Anling; Nobre, Anna C.
2014-01-01
Cognitive control can be triggered in reaction to previous conflict, as suggested by the finding of sequential effects in conflict tasks. Can control also be triggered proactively by presenting cues predicting conflict (‘proactive control’)? We exploited the high temporal resolution of event-related potentials (ERPs) and controlled for sequential effects to ask whether proactive control based on anticipating conflict modulates neural activity related to cognitive control, as may be predicted from the conflict-monitoring model. ERPs associated with conflict detection (N2) were measured during a cued flanker task. Symbolic cues were either informative or neutral with respect to whether the target involved conflicting or congruent responses. Sequential effects were controlled by analysing the congruency of the previous trial. The results showed that cuing conflict facilitated conflict resolution and reduced the N2 latency. Other potentials (frontal N1 and P3) were also modulated by cuing conflict. Cuing effects were more evident after congruent than after incongruent trials. This interaction between cuing and sequential effects suggests neural overlap between the control networks triggered by proactive and reactive signals. This finding clarifies why previous neuroimaging studies, in which reactive sequential effects were not controlled, have rarely found anticipatory effects upon conflict-related activity. Finally, the high temporal resolution of ERPs was critical to reveal a temporal modulation of conflict detection by proactive control. This novel finding suggests that anticipating conflict speeds up conflict detection and resolution. Recent research suggests that this anticipatory mechanism may be mediated by pre-activation of the ACC during the preparatory interval. PMID:18823248
2013-01-01
Background There is a rising public and political demand for prospective cancer cluster monitoring. But there is little empirical evidence on the performance of established cluster detection tests under conditions of small and heterogeneous sample sizes and varying spatial scales, such as are the case for most existing population-based cancer registries. Therefore this simulation study aims to evaluate different cluster detection methods, implemented in the open source environment R, in their ability to identify clusters of lung cancer using real-life data from an epidemiological cancer registry in Germany. Methods Risk surfaces were constructed with two different spatial cluster types, representing a relative risk of RR = 2.0 or of RR = 4.0, in relation to the overall background incidence of lung cancer, separately for men and women. Lung cancer cases were sampled from this risk surface as geocodes using an inhomogeneous Poisson process. The realisations of the cancer cases were analysed within small spatial (census tracts, N = 1983) and within aggregated large spatial scales (communities, N = 78). Subsequently, they were submitted to the cluster detection methods. The test accuracy for cluster location was determined in terms of detection rates (DR), false-positive (FP) rates and positive predictive values. The Bayesian smoothing models were evaluated using ROC curves. Results With moderate risk increase (RR = 2.0), local cluster tests showed better DR (for both spatial aggregation scales > 0.90) and lower FP rates (both < 0.05) than the Bayesian smoothing methods. When the cluster RR was raised four-fold, the local cluster tests showed better DR with lower FPs only for the small spatial scale. At a large spatial scale, the Bayesian smoothing methods, especially those implementing a spatial neighbourhood, showed a substantially lower FP rate than the cluster tests. However, the risk increases at this scale were mostly diluted by data aggregation.
Conclusion High resolution spatial scales seem more appropriate as a data basis for cancer cluster testing and monitoring than the commonly used aggregated scales. We suggest the development of a two-stage approach that combines methods with high detection rates as a first-line screening with methods of higher predictive ability at the second stage. PMID:24314148
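The simulation-evaluation loop described above can be sketched with a naive per-cell Poisson test standing in for the study's cluster detection methods: sample case counts with an elevated relative risk (here RR = 4.0) in designated cluster cells, flag cells whose tail probability under the background rate is small, and score DR and FP rates against the known cluster locations. All rates and cell counts are illustrative assumptions.

```python
import math
import random

def poisson_sf(k, mu):
    """P(X >= k) for X ~ Poisson(mu)."""
    term = math.exp(-mu)
    cdf = 0.0
    for i in range(k):
        cdf += term
        term *= mu / (i + 1)
    return 1.0 - cdf

def poisson_draw(mu, rng):
    """Crude Poisson sampling by inversion of the CDF."""
    u, k, p = rng.random(), 0, math.exp(-mu)
    cum = p
    while cum < u and p > 0.0:
        k += 1
        p *= mu / k
        cum += p
    return k

def evaluate(n_cells=500, cluster_cells=(3, 7), mu0=20.0, rr=4.0,
             alpha=0.001, seed=2):
    rng = random.Random(seed)
    flagged = [poisson_sf(poisson_draw(mu0 * (rr if c in cluster_cells
                                              else 1.0), rng), mu0) < alpha
               for c in range(n_cells)]
    dr = sum(flagged[c] for c in cluster_cells) / len(cluster_cells)
    fp = sum(flagged[c] for c in range(n_cells)
             if c not in cluster_cells) / (n_cells - len(cluster_cells))
    return dr, fp

dr, fp = evaluate()
print(dr, fp)  # high detection rate with a low false-positive rate
```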
Vehicle detection in aerial surveillance using dynamic Bayesian networks.
Cheng, Hsu-Yung; Weng, Chih-Chia; Chen, Yi-Ying
2012-04-01
We present an automatic vehicle detection system for aerial surveillance in this paper. In this system, we escape from the stereotype and existing frameworks of vehicle detection in aerial surveillance, which are either region based or sliding window based. We design a pixelwise classification method for vehicle detection. The novelty lies in the fact that, in spite of performing pixelwise classification, relations among neighboring pixels in a region are preserved in the feature extraction process. We consider features including vehicle colors and local features. For vehicle color extraction, we utilize a color transform to separate vehicle colors and nonvehicle colors effectively. For edge detection, we apply moment preserving to adjust the thresholds of the Canny edge detector automatically, which increases the adaptability and the accuracy for detection in various aerial images. Afterward, a dynamic Bayesian network (DBN) is constructed for the classification purpose. We convert regional local features into quantitative observations that can be referenced when applying pixelwise classification via DBN. Experiments were conducted on a wide variety of aerial videos. The results demonstrate flexibility and good generalization abilities of the proposed method on a challenging data set with aerial surveillance images taken at different heights and under different camera angles.
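The pixelwise probabilistic classification can be illustrated with a naive-Bayes stand-in for the paper's dynamic Bayesian network: two feature likelihoods (color and edge evidence) and a class prior combine via Bayes' rule into a per-pixel posterior. The likelihood values below are illustrative assumptions, not the paper's learned distributions.

```python
def pixel_posterior(color_like_v, edge_like_v,
                    color_like_bg, edge_like_bg, prior_v=0.1):
    """P(vehicle | features) via Bayes' rule with independent features."""
    num = color_like_v * edge_like_v * prior_v
    den = num + color_like_bg * edge_like_bg * (1.0 - prior_v)
    return num / den

# A pixel whose color and edge responses both favor the vehicle class:
print(round(pixel_posterior(0.8, 0.7, 0.1, 0.3), 2))  # well above the 0.1 prior
```

The DBN in the paper plays the same role but lets the regional observations share structure across neighboring pixels instead of assuming independence.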
The performance of matched-field track-before-detect methods using shallow-water Pacific data.
Tantum, Stacy L; Nolte, Loren W; Krolik, Jeffrey L; Harmanci, Kerem
2002-07-01
Matched-field track-before-detect processing, which extends the concept of matched-field processing to include modeling of the source dynamics, has recently emerged as a promising approach for maintaining the track of a moving source. In this paper, optimal Bayesian and minimum variance beamforming track-before-detect algorithms which incorporate a priori knowledge of the source dynamics in addition to the underlying uncertainties in the ocean environment are presented. A Markov model is utilized for the source motion as a means of capturing the stochastic nature of the source dynamics without assuming uniform motion. In addition, the relationship between optimal Bayesian track-before-detect processing and minimum variance track-before-detect beamforming is examined, revealing how an optimal tracking philosophy may be used to guide the modification of existing beamforming techniques to incorporate track-before-detect capabilities. Further, the benefits of implementing an optimal approach over conventional methods are illustrated through application of these methods to shallow-water Pacific data collected as part of the SWellEX-1 experiment. The results show that incorporating Markovian dynamics for the source motion provides marked improvement in the ability to maintain target track without the use of a uniform velocity hypothesis.
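A minimal grid-based Bayesian track-before-detect recursion in this spirit: the source position lives on a 1-D grid, evolves by a Markov random walk (no uniform-velocity hypothesis), and each noisy measurement updates the posterior before any detection threshold is applied. The Gaussian sensor model and transition kernel are illustrative stand-ins for the matched-field physics.

```python
import math
import random

N = 50                                   # number of grid cells
MOVE = {-1: 0.25, 0: 0.5, 1: 0.25}       # Markov transition probabilities

def predict(post):
    """Propagate the posterior one step through the motion model."""
    new = [0.0] * N
    for i, p in enumerate(post):
        for d, w in MOVE.items():
            j = min(max(i + d, 0), N - 1)
            new[j] += w * p
    return new

def update(prior, z, sigma=2.0):
    """Bayes update with a Gaussian position measurement z."""
    post = [p * math.exp(-0.5 * ((z - i) / sigma) ** 2)
            for i, p in enumerate(prior)]
    s = sum(post)
    return [p / s for p in post]

rng = random.Random(3)
truth = 10
post = [1.0 / N] * N                     # uniform prior over the grid
for _ in range(30):
    truth = min(max(truth + rng.choice([-1, 0, 0, 1]), 0), N - 1)
    z = truth + rng.gauss(0.0, 2.0)
    post = update(predict(post), z)      # predict, then correct
estimate = max(range(N), key=post.__getitem__)
print(estimate, truth)  # MAP estimate stays close to the moving source
```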
Prediction of Sybil attack on WSN using Bayesian network and swarm intelligence
NASA Astrophysics Data System (ADS)
Muraleedharan, Rajani; Ye, Xiang; Osadciw, Lisa Ann
2008-04-01
Security in wireless sensor networks is typically sacrificed or kept minimal due to limited resources such as memory and battery power. Hence, the sensor nodes are prone to denial-of-service attacks, and detecting the threats is crucial in any application. In this paper, the Sybil attack is analyzed and a novel prediction method, combining a Bayesian algorithm and Swarm Intelligence (SI), is proposed. Bayesian Networks (BN) are used in representing and reasoning about problems by modeling the elements of uncertainty. The decision from the BN is applied to SI, forming a Hybrid Intelligence Scheme (HIS) to re-route the information and disconnect malicious nodes from future routes. A performance comparison based on the prediction using HIS vs. the Ant System (AS) helps in prioritizing applications where decisions are time-critical.
Wang, Xiao; Gu, Jinghua; Hilakivi-Clarke, Leena; Clarke, Robert; Xuan, Jianhua
2017-01-15
The advent of high-throughput DNA methylation profiling techniques has enabled the possibility of accurate identification of differentially methylated genes for cancer research. The large number of measured loci facilitates whole genome methylation study, yet poses great challenges for differential methylation detection due to the high variability in tumor samples. We have developed a novel probabilistic approach, Differential Methylation detection using a hierarchical Bayesian model exploiting Local Dependency (DM-BLD), to detect differentially methylated genes based on a Bayesian framework. The DM-BLD approach features a joint model to capture both the local dependency of measured loci and the dependency of methylation change in samples. Specifically, the local dependency is modeled by a Leroux conditional autoregressive structure; the dependency of methylation changes is modeled by a discrete Markov random field. A hierarchical Bayesian model is developed to fully take into account the local dependency for differential analysis, in which differential states are embedded as hidden variables. Simulation studies demonstrate that DM-BLD outperforms existing methods for differential methylation detection, particularly when the methylation change is moderate and the variability of methylation in samples is high. DM-BLD has been applied to breast cancer data to identify important methylated genes (such as polycomb target genes and genes involved in transcription factor activity) associated with breast cancer recurrence. A Matlab package of DM-BLD is available at http://www.cbil.ece.vt.edu/software.htm. Contact: Xuan@vt.edu. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Host, Ole; Lahav, Ofer; Abdalla, Filipe B.
We present a showcase for deriving bounds on the neutrino masses from laboratory experiments and cosmological observations. We compare the frequentist and Bayesian bounds on the effective electron neutrino mass mβ which the KATRIN neutrino mass experiment is expected to obtain, using both an analytical likelihood function and Monte Carlo simulations of KATRIN. Assuming a uniform prior in mβ, we find that a null result yields an upper bound of about 0.17 eV at 90% confidence in the Bayesian analysis, to be compared with the frequentist KATRIN reference value of 0.20 eV. This is a significant difference when judged relative to the systematic and statistical uncertainties of the experiment. On the other hand, an input mβ = 0.35 eV, which is the KATRIN 5σ detection threshold, would be detected at virtually the same level. Finally, we combine the simulated KATRIN results with cosmological data in the form of present (post-WMAP) and future (simulated Planck) observations. If an input of mβ = 0.2 eV is assumed in our simulations, KATRIN alone excludes a zero neutrino mass at 2.2σ. Adding Planck data increases the probability of detection to a median 2.7σ. The analysis highlights the importance of combining cosmological and laboratory data on an equal footing.
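How a uniform prior in the mass turns a null result into a credible upper bound can be sketched numerically: KATRIN-type data constrain the observable m² with roughly Gaussian errors, and the bound is the 90% quantile of the posterior restricted to m >= 0. The error sigma and grid below are illustrative assumptions, not the experiment's actual likelihood.

```python
import math

def upper_bound(observed_m2=0.0, sigma=0.025, credibility=0.90,
                m_max=1.0, steps=20000):
    """90% credible upper bound on m from a Gaussian likelihood in m^2."""
    dm = m_max / steps
    # Unnormalized posterior on a grid: uniform prior in m, m >= 0.
    post = [math.exp(-0.5 * (((i * dm) ** 2 - observed_m2) / sigma) ** 2)
            for i in range(steps)]
    total = sum(post)
    cum = 0.0
    for i, p in enumerate(post):
        cum += p
        if cum >= credibility * total:   # first grid point past the quantile
            return i * dm
    return m_max

bound = upper_bound()
print(round(bound, 3))  # 90% credible upper bound on the mass, in eV
```

Because the likelihood is Gaussian in m² rather than m, the posterior is strongly non-Gaussian near zero, which is exactly why Bayesian and frequentist bounds differ for a null result.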
Simple summation rule for optimal fixation selection in visual search.
Najemnik, Jiri; Geisler, Wilson S
2009-06-01
When searching for a known target in a natural texture, practiced humans achieve near-optimal performance compared to a Bayesian ideal searcher constrained with the human map of target detectability across the visual field [Najemnik, J., & Geisler, W. S. (2005). Optimal eye movement strategies in visual search. Nature, 434, 387-391]. To do so, humans must be good at choosing where to fixate during the search [Najemnik, J., & Geisler, W.S. (2008). Eye movement statistics in humans are consistent with an optimal strategy. Journal of Vision, 8(3), 1-14. 4]; however, it seems unlikely that a biological nervous system would implement the computations for the Bayesian ideal fixation selection because of their complexity. Here we derive and test a simple heuristic for optimal fixation selection that appears to be a much better candidate for implementation within a biological nervous system. Specifically, we show that the near-optimal fixation location is the maximum of the current posterior probability distribution for target location after the distribution is filtered by (convolved with) the square of the retinotopic target detectability map. We term the model that uses this strategy the entropy limit minimization (ELM) searcher. We show that when constrained with a human-like retinotopic map of target detectability and human search error rates, the ELM searcher performs as well as the Bayesian ideal searcher, and produces fixation statistics similar to those of humans.
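The ELM rule is simple enough to state in a few lines: convolve the current posterior over target location with the square of the retinotopic detectability map, then fixate the maximum. The 1-D posterior and detectability profile below are toy stand-ins for the 2-D visual-field maps.

```python
def elm_fixation(posterior, detectability):
    """Next fixation = argmax of posterior convolved with detectability^2."""
    d2 = [d * d for d in detectability]      # squared detectability kernel
    half = len(d2) // 2
    n = len(posterior)
    score = []
    for i in range(n):
        s = 0.0
        for k, w in enumerate(d2):
            j = i + k - half
            if 0 <= j < n:
                s += w * posterior[j]
        score.append(s)
    return max(range(n), key=score.__getitem__)

# Bimodal posterior: two candidate target locations of unequal mass.
posterior = [0.0] * 21
posterior[5], posterior[15] = 0.4, 0.6
# Detectability falls off with eccentricity from the fixation point.
detectability = [0.2, 0.6, 1.0, 0.6, 0.2]
print(elm_fixation(posterior, detectability))  # fixates the likelier location, index 15
```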
NASA Astrophysics Data System (ADS)
Thamvichai, Ratchaneekorn; Huang, Liang-Chih; Ashok, Amit; Gong, Qian; Coccarelli, David; Greenberg, Joel A.; Gehm, Michael E.; Neifeld, Mark A.
2017-05-01
We employ an adaptive measurement system, based on a sequential hypothesis testing (SHT) framework, for detecting material-based threats using experimental data acquired on an X-ray experimental testbed system. This testbed employs 45-degree fan-beam geometry and 15 views over a 180-degree span to generate energy sensitive X-ray projection data. Using this testbed system, we acquire multiple view projection data for 200 bags. We consider an adaptive measurement design where the X-ray projection measurements are acquired in a sequential manner and the adaptation occurs through the choice of the optimal "next" source/view system parameter. Our analysis of such an adaptive measurement design using the experimental data demonstrates a 3x-7x reduction in the probability of error relative to a static measurement design. Here the static measurement design refers to the operational system baseline that corresponds to a sequential measurement using all the available sources/views. We also show that by using adaptive measurements it is possible to reduce the number of sources/views by nearly 50% compared to a system that relies on static measurements.
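The adaptive design builds on the sequential hypothesis testing framework; a minimal (non-adaptive) Wald SPRT illustrates the core mechanic of stopping as soon as the accumulated evidence crosses a threshold, rather than after a fixed number of views. The Gaussian measurement model and error rates below are illustrative assumptions, not the testbed's X-ray physics.

```python
import math
import random

def sprt(measurements, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Wald SPRT between H0: mean mu0 and H1: mean mu1 (Gaussian noise)."""
    a = math.log(beta / (1 - alpha))        # accept-H0 (benign) threshold
    b = math.log((1 - beta) / alpha)        # accept-H1 (threat) threshold
    llr = 0.0
    for n, z in enumerate(measurements, 1):
        # Log-likelihood ratio of one Gaussian sample under H1 vs H0.
        llr += ((z - mu0) ** 2 - (z - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= b:
            return "threat", n
        if llr <= a:
            return "benign", n
    return "undecided", len(measurements)

rng = random.Random(4)
threat_data = [1.0 + rng.gauss(0, 1) for _ in range(100)]
decision, n_used = sprt(threat_data)
print(decision, n_used)  # typically decides well before all 100 views are used
```

The adaptive system in the paper additionally chooses which view to acquire next; the stopping rule above is the part shared with any SHT-based design.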
The parallel-sequential field subtraction techniques for nonlinear ultrasonic imaging
NASA Astrophysics Data System (ADS)
Cheng, Jingwei; Potter, Jack N.; Drinkwater, Bruce W.
2018-04-01
Nonlinear imaging techniques have recently emerged which have the potential to detect cracks at a much earlier stage and are particularly sensitive to closed defects. This study utilizes two modes of focusing: parallel, in which the elements are fired together with a delay law, and sequential, in which elements are fired independently. In parallel focusing, a high-intensity ultrasonic beam is formed in the specimen at the focal point; in sequential focusing, only low-intensity signals from individual elements enter the sample, and the full matrix of transmit-receive signals is recorded. Under elastic assumptions, the parallel and sequential images are expected to be identical. Here we measure the difference between these images formed from the coherent component of the field and use this to characterize the nonlinearity of closed fatigue cracks. In particular, we monitor the reduction in amplitude at the fundamental frequency at each focal point and use this metric to form images of the spatial distribution of nonlinearity. The results suggest the subtracted image can suppress linear features (e.g., back wall or large scatterers) and allow damage to be detected at an early stage.
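The subtraction step itself is simple: with purely linear (elastic) propagation the parallel and sequential images coincide, so their difference is nonzero only where the response is nonlinear. In the toy sketch below the two "images" are 1-D amplitude profiles, and the closed crack appears as a small amplitude drop in the parallel (high-intensity) image only; both profiles are illustrative stand-ins for the coherent-field images.

```python
def subtraction_image(parallel, sequential):
    """Pointwise difference image; nonzero only at nonlinear responders."""
    return [abs(p - s) for p, s in zip(parallel, sequential)]

sequential = [1.0] * 10                 # low-amplitude probing: linear response
parallel = list(sequential)
parallel[6] -= 0.15                     # nonlinear amplitude loss at a closed crack
diff = subtraction_image(parallel, sequential)
print(diff.index(max(diff)))  # the difference image peaks at the defect, index 6
```

Linear scatterers (a back wall, for instance) would perturb both profiles equally and cancel in the difference, which is the suppression effect reported above.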
Fosgate, G T; Petzer, I M; Karzis, J
2013-04-01
Screening tests for mastitis can play an important role in proactive mastitis control programs. The primary objective of this study was to compare the sensitivity and specificity of milk electrical conductivity (EC) to the California mastitis test (CMT) in commercial dairy cattle in South Africa using Bayesian methods without a perfect reference test. A total of 1848 quarter milk specimens were collected from 173 cows sampled during six sequential farm visits. Of these samples, 25.8% yielded pathogenic bacterial isolates. The most frequently isolated species were coagulase negative Staphylococci (n=346), Streptococcus agalactiae (n=54), and Staphylococcus aureus (n=42). The overall cow-level prevalence of mastitis was 54% based on the Bayesian latent class (BLC) analysis. The CMT was more accurate than EC for classification of cows having somatic cell counts >200,000/mL and for isolation of a bacterial pathogen. BLC analysis also suggested an overall benefit of CMT over EC but the statistical evidence was not strong (P=0.257). The Bayesian model estimated the sensitivity and specificity of EC (measured via resistance) at a cut-point of >25 mΩ/cm to be 89.9% and 86.8%, respectively. The CMT had a sensitivity and specificity of 94.5% and 77.7%, respectively, when evaluated at the weak positive cut-point. EC was useful for identifying milk specimens harbouring pathogens but was not able to differentiate among evaluated bacterial isolates. Screening tests can be used to improve udder health as part of a proactive management plan. Copyright © 2012 Elsevier Ltd. All rights reserved.
al3c: high-performance software for parameter inference using Approximate Bayesian Computation.
Stram, Alexander H; Marjoram, Paul; Chen, Gary K
2015-11-01
The development of Approximate Bayesian Computation (ABC) algorithms for parameter inference which are both computationally efficient and scalable in parallel computing environments is an important area of research. Monte Carlo rejection sampling, a fundamental component of ABC algorithms, is trivial to distribute over multiple processors but is inherently inefficient. While the development of algorithms such as ABC Sequential Monte Carlo (ABC-SMC) helps address the inherent inefficiencies of rejection sampling, such approaches are not as easily scaled on multiple processors. As a result, current Bayesian inference software offerings that use ABC-SMC lack the ability to scale in parallel computing environments. We present al3c, a C++ framework for implementing ABC-SMC in parallel. By requiring only that users define essential functions such as the simulation model and prior distribution function, al3c abstracts the user from both the complexities of parallel programming and the details of the ABC-SMC algorithm. By using the al3c framework, the user is able to scale the ABC-SMC algorithm in parallel computing environments for their specific application, with minimal programming overhead. al3c is offered as a static binary for Linux and OS-X computing environments. The user completes an XML configuration file and C++ plug-in template for the specific application, which are used by al3c to obtain the desired results. Users can download the static binaries, source code, reference documentation and examples (including those in this article) by visiting https://github.com/ahstram/al3c. Contact: astram@usc.edu. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
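ABC rejection sampling, the building block that al3c's ABC-SMC refines and parallelizes, fits in a dozen lines: draw a parameter from the prior, simulate data, and keep the draw only if the simulation is close enough to the observation. The coin-flip simulator below is a toy stand-in for a user-supplied model.

```python
import random

def abc_rejection(observed_heads, n_flips=100, n_draws=20000,
                  tolerance=2, seed=5):
    """ABC rejection sampling for the success probability of a binomial."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        p = rng.random()                                       # prior: Uniform(0, 1)
        heads = sum(rng.random() < p for _ in range(n_flips))  # simulate data
        if abs(heads - observed_heads) <= tolerance:           # compare to observation
            accepted.append(p)
    return accepted

posterior = abc_rejection(observed_heads=70)
estimate = sum(posterior) / len(posterior)
print(round(estimate, 2))  # posterior mean close to 0.7
```

The loop body is embarrassingly parallel, which is why plain rejection distributes trivially; ABC-SMC trades that simplicity for far fewer wasted simulations by reusing accepted particles across shrinking tolerances.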
NASA Astrophysics Data System (ADS)
Wang, C.; Rubin, Y.
2014-12-01
The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to the understanding of the underlying geological processes and to the adequate assessment of its mechanical effects on the differential settlement of large continuous structure foundations. These analyses should be derived using an assimilation approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve such a task, the Es distribution of a silty clay stratum in region A of the China Expo Center (Shanghai) is studied using the Bayesian-maximum entropy method. This method rigorously and efficiently integrates geotechnical investigations of multiple precisions and sources of uncertainty. Single CPT samplings were modeled as a rational probability density curve by maximum entropy theory. The spatial prior multivariate probability density function (PDF) and the likelihood PDF of the CPT positions were built from the borehole experiments and the potential value of the prediction point; then, by numerical integration over the CPT probability density curves, the posterior probability density curve of the prediction point was calculated within the Bayesian reverse interpolation framework. The results were compared between Gaussian sequential stochastic simulation and Bayesian methods. The differences were also discussed between single CPT samplings of normal distribution and simulated probability density curves based on maximum entropy theory. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, whereas more informative estimates are generated by considering CPT uncertainty at the estimation points. The calculation illustrates the significance of stochastic Es characterization in a stratum and identifies limitations associated with inadequate geostatistical interpolation techniques.
These characterization results will provide a multi-precision information assimilation method for other geotechnical parameters.
Adaptive designs in clinical trials.
Bowalekar, Suresh
2011-01-01
In addition to the expensive and lengthy process of developing a new medicine, the attrition rate in clinical research was on the rise, resulting in stagnation in the development of new compounds. As a consequence, the US Food and Drug Administration released a critical path initiative document in 2004, highlighting the need for developing innovative trial designs. One of the innovations suggested was the use of adaptive designs for clinical trials. Thus, post critical path initiative, there is a growing interest in using adaptive designs for the development of pharmaceutical products. Adaptive designs are expected to have great potential to reduce the number of patients and the duration of the trial, and to involve less exposure to the new drug. Adaptive designs are not new in the sense that the task of interim analysis (IA)/review of the accumulated data used in adaptive designs existed in the past too. However, such reviews/analyses of accumulated data were not necessarily planned at the stage of planning the clinical trial, and the methods used were not necessarily compliant with the clinical trial process. The Bayesian approach commonly used in adaptive designs was developed by Thomas Bayes in the 18th century, about a hundred years prior to the development of modern statistical methods by the father of modern statistics, Sir Ronald A. Fisher, but the complexity involved in the Bayesian approach prevented its use in real-life practice. The advances in the field of computer and information technology over the last three to four decades have changed the scenario, and Bayesian techniques are now being used in adaptive designs in addition to the other sequential methods used in IA. This paper attempts to describe the various adaptive designs in clinical trials and stakeholders' views on the feasibility of using them, without going into mathematical complexities.
The drift diffusion model as the choice rule in reinforcement learning.
Pedersen, Mads Lund; Frank, Michael J; Biele, Guido
2017-08-01
Current reinforcement-learning models often assume simplified decision processes that do not fully reflect the dynamic complexities of choice processes. Conversely, sequential-sampling models of decision making account for both choice accuracy and response time, but assume that decisions are based on static decision values. To combine these two computational models of decision making and learning, we implemented reinforcement-learning models in which the drift diffusion model describes the choice process, thereby capturing both within- and across-trial dynamics. To exemplify the utility of this approach, we quantitatively fit data from a common reinforcement-learning paradigm using hierarchical Bayesian parameter estimation, and compared model variants to determine whether they could capture the effects of stimulant medication in adult patients with attention-deficit hyperactivity disorder (ADHD). The model with the best relative fit provided a good description of the learning process, choices, and response times. A parameter recovery experiment showed that the hierarchical Bayesian modeling approach enabled accurate estimation of the model parameters. The model approach described here, using simultaneous estimation of reinforcement-learning and drift diffusion model parameters, shows promise for revealing new insights into the cognitive and neural mechanisms of learning and decision making, as well as the alteration of such processes in clinical groups.
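The combined model can be sketched as follows: a delta-rule Q-learner supplies trial-by-trial decision values, their difference sets the drift rate of a diffusion process, and the diffusion simultaneously produces the choice and the response time. All parameter values below are illustrative, not fitted to the clinical data.

```python
import random

def ddm_choice(drift, threshold=1.0, dt=0.001, noise=1.0, rng=random):
    """Euler simulation of drift diffusion; returns (choice, rt in seconds)."""
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0, 1)
        t += dt
    return (0 if x > 0 else 1), t        # upper boundary = option 0

def rl_ddm(reward_probs=(0.8, 0.2), n_trials=200, alpha=0.1,
           drift_scale=3.0, seed=6):
    """Q-learning with a drift-diffusion choice rule."""
    rng = random.Random(seed)
    q = [0.5, 0.5]
    choices, rts = [], []
    for _ in range(n_trials):
        # Drift rate is proportional to the current value difference.
        choice, rt = ddm_choice(drift_scale * (q[0] - q[1]), rng=rng)
        reward = 1.0 if rng.random() < reward_probs[choice] else 0.0
        q[choice] += alpha * (reward - q[choice])   # delta-rule update
        choices.append(choice)
        rts.append(rt)
    return choices, rts

choices, rts = rl_ddm()
late_pref = choices[-50:].count(0) / 50
print(round(late_pref, 2))  # learns to prefer the richer option 0
```

Because the drift grows with the learned value difference, the model predicts that choices become both more accurate and faster as learning proceeds, which is the coupling the hierarchical fits exploit.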
Formalizing Neurath's ship: Approximate algorithms for online causal learning.
Bramley, Neil R; Dayan, Peter; Griffiths, Thomas L; Lagnado, David A
2017-04-01
Higher-level cognition depends on the ability to learn models of the world. We can characterize this at the computational level as a structure-learning problem with the goal of best identifying the prevailing causal relationships among a set of relata. However, the computational cost of performing exact Bayesian inference over causal models grows rapidly as the number of relata increases. This implies that the cognitive processes underlying causal learning must be substantially approximate. A powerful class of approximations that focuses on the sequential absorption of successive inputs is captured by the Neurath's ship metaphor in philosophy of science, where theory change is cast as a stochastic and gradual process shaped as much by people's limited willingness to abandon their current theory when considering alternatives as by the ground truth they hope to approach. Inspired by this metaphor and by algorithms for approximating Bayesian inference in machine learning, we propose an algorithmic-level model of causal structure learning under which learners represent only a single global hypothesis that they update locally as they gather evidence. We propose a related scheme for understanding how, under these limitations, learners choose informative interventions that manipulate the causal system to help elucidate its workings. We find support for our approach in the analysis of 3 experiments. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Spike-Based Bayesian-Hebbian Learning of Temporal Sequences
Lindén, Henrik; Lansner, Anders
2016-01-01
Many cognitive and motor functions are enabled by the temporal representation and processing of stimuli, but it remains an open issue how neocortical microcircuits can reliably encode and replay such sequences of information. To better understand this, a modular attractor memory network is proposed in which meta-stable sequential attractor transitions are learned through changes to synaptic weights and intrinsic excitabilities via the spike-based Bayesian Confidence Propagation Neural Network (BCPNN) learning rule. We find that the formation of distributed memories, embodied by increased periods of firing in pools of excitatory neurons, together with asymmetrical associations between these distinct network states, can be acquired through plasticity. The model’s feasibility is demonstrated using simulations of adaptive exponential integrate-and-fire model neurons (AdEx). We show that the learning and speed of sequence replay depend on a confluence of biophysically relevant parameters including stimulus duration, level of background noise, ratio of synaptic currents, and strengths of short-term depression and adaptation. Moreover, sequence elements are shown to flexibly participate multiple times in the sequence, suggesting that spiking attractor networks of this type can support an efficient combinatorial code. The model provides a principled approach towards understanding how multiple interacting plasticity mechanisms can coordinate hetero-associative learning in unison. PMID:27213810
The drift diffusion model as the choice rule in reinforcement learning
Frank, Michael J.
2017-01-01
Current reinforcement-learning models often assume simplified decision processes that do not fully reflect the dynamic complexities of choice processes. Conversely, sequential-sampling models of decision making account for both choice accuracy and response time, but assume that decisions are based on static decision values. To combine these two computational models of decision making and learning, we implemented reinforcement-learning models in which the drift diffusion model describes the choice process, thereby capturing both within- and across-trial dynamics. To exemplify the utility of this approach, we quantitatively fit data from a common reinforcement-learning paradigm using hierarchical Bayesian parameter estimation, and compared model variants to determine whether they could capture the effects of stimulant medication in adult patients with attention-deficit hyperactivity disorder (ADHD). The model with the best relative fit provided a good description of the learning process, choices, and response times. A parameter recovery experiment showed that the hierarchical Bayesian modeling approach enabled accurate estimation of the model parameters. The model approach described here, using simultaneous estimation of reinforcement-learning and drift diffusion model parameters, shows promise for revealing new insights into the cognitive and neural mechanisms of learning and decision making, as well as the alteration of such processes in clinical groups. PMID:27966103
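The core coupling, a delta-rule value update whose learned value difference sets the drift rate of a diffusion process that then produces both the choice and its response time, can be sketched as follows. The two-armed task, parameter values, and function names are illustrative assumptions, not the authors' implementation or fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def ddm_choice(value_diff, scale=2.0, threshold=1.0, dt=1e-3, noise=1.0):
    """Simulate one drift-diffusion decision; the drift rate is the
    (scaled) difference of learned values, linking learning to choice."""
    x, t = 0.0, 0.0
    drift = scale * value_diff
    while abs(x) < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (0 if x >= threshold else 1), t  # (chosen option, response time)

def run_task(n_trials=100, alpha=0.1, p_reward=(0.8, 0.2)):
    """Two-armed bandit: the DDM picks an arm from the current value
    difference, then a delta-rule update adjusts that arm's value."""
    q = np.zeros(2)
    rts = []
    for _ in range(n_trials):
        choice, rt = ddm_choice(q[0] - q[1])
        reward = float(rng.random() < p_reward[choice])
        q[choice] += alpha * (reward - q[choice])  # prediction-error update
        rts.append(rt)
    return q, rts

q, rts = run_task()
```

Fitting such a model to data, as in the paper, would replace the forward simulation with a hierarchical Bayesian likelihood over observed choices and response times.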
Bayesian data assimilation provides rapid decision support for vector-borne diseases.
Jewell, Chris P; Brown, Richard G
2015-07-06
Predicting the spread of vector-borne diseases in response to incursions requires knowledge of both host and vector demographics in advance of an outbreak. Although host population data are typically available, for novel disease introductions there is a high chance of the pathogen using a vector for which data are unavailable. This presents a barrier to estimating the parameters of dynamical models representing host-vector-pathogen interaction, and hence limits their ability to provide quantitative risk forecasts. The Theileria orientalis (Ikeda) outbreak in New Zealand cattle demonstrates this problem: even though the vector has received extensive laboratory study, a high degree of uncertainty persists over its national demographic distribution. Addressing this, we develop a Bayesian data assimilation approach whereby indirect observations of vector activity inform a seasonal spatio-temporal risk surface within a stochastic epidemic model. We provide quantitative predictions for the future spread of the epidemic, quantifying uncertainty in the model parameters, case infection times and the disease status of undetected infections. Importantly, we demonstrate how our model learns sequentially as the epidemic unfolds and provide evidence for changing epidemic dynamics through time. Our approach therefore provides a significant advance in rapid decision support for novel vector-borne disease outbreaks. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
NASA Astrophysics Data System (ADS)
Noh, S. J.; Tachikawa, Y.; Shiiba, M.; Yorozu, K.; Kim, S.
2012-04-01
Data assimilation methods have received increased attention for uncertainty assessment and for enhancing forecasting capability in various areas. Despite their potential, software frameworks applicable to probabilistic approaches and data assimilation are still limited because most hydrologic modeling software is based on a deterministic approach. In this study, we developed a hydrological modeling framework for sequential data assimilation, called MPI-OHyMoS. MPI-OHyMoS allows users to develop their own element models and to easily build a total simulation system model for hydrological simulations. Unlike a process-based modeling framework, this software framework benefits from its object-oriented design, flexibly representing hydrological processes without any change to the main library. Sequential data assimilation based on particle filters is available for any hydrologic model built on MPI-OHyMoS, considering various sources of uncertainty originating from input forcing, parameters and observations. The particle filters implement a Bayesian learning process in which the propagation of all uncertainties is carried out by a suitable selection of randomly generated particles, without any assumptions about the nature of the distributions. In MPI-OHyMoS, ensemble simulations are parallelized, which can take advantage of high-performance computing (HPC) systems. We applied this software framework to short-term streamflow forecasting for several catchments in Japan using a distributed hydrologic model. Uncertainty in model parameters and in remotely sensed rainfall data such as X-band or C-band radar is estimated and mitigated in the sequential data assimilation.
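The particle-filter cycle the framework relies on, propagating an ensemble through a (possibly nonlinear) process model, weighting by the observation likelihood, and resampling, can be illustrated with a toy one-dimensional model. This is a generic bootstrap filter sketch, not MPI-OHyMoS code; the random-walk state and Gaussian observation model are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

def particle_filter(observations, n_particles=500, proc_sd=0.5, obs_sd=1.0):
    """Bootstrap particle filter for a toy random-walk state observed
    with Gaussian noise: propagate, weight, resample at each step."""
    particles = rng.normal(0.0, 1.0, n_particles)  # prior ensemble
    estimates = []
    for y in observations:
        # propagate each particle through the process model (with noise)
        particles = particles + rng.normal(0.0, proc_sd, n_particles)
        # weight particles by the likelihood of the new observation
        weights = np.exp(-0.5 * ((y - particles) / obs_sd) ** 2)
        weights /= weights.sum()
        # resample: the Bayesian update, which also avoids weight degeneracy
        particles = particles[rng.choice(n_particles, n_particles, p=weights)]
        estimates.append(particles.mean())
    return np.array(estimates)

est = particle_filter([0.2, 0.5, 1.0, 1.4])
```

Because no distributional form is assumed for the particles, the same loop applies unchanged to nonlinear, non-Gaussian hydrologic models, which is the point made in the abstract.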
Diagnostic causal reasoning with verbal information.
Meder, Björn; Mayrhofer, Ralf
2017-08-01
In diagnostic causal reasoning, the goal is to infer the probability of causes from one or multiple observed effects. Typically, studies investigating such tasks provide subjects with precise quantitative information regarding the strength of the relations between causes and effects or sample data from which the relevant quantities can be learned. By contrast, we sought to examine people's inferences when causal information is communicated through qualitative, rather vague verbal expressions (e.g., "X occasionally causes A"). We conducted three experiments using a sequential diagnostic inference task, where multiple pieces of evidence were obtained one after the other. Quantitative predictions of different probabilistic models were derived using the numerical equivalents of the verbal terms, taken from an unrelated study with different subjects. We present a novel Bayesian model that allows for incorporating the temporal weighting of information in sequential diagnostic reasoning, which can be used to model both primacy and recency effects. On the basis of 19,848 judgments from 292 subjects, we found a remarkably close correspondence between the diagnostic inferences made by subjects who received only verbal information and those of a matched control group to whom information was presented numerically. Whether information was conveyed through verbal terms or numerical estimates, diagnostic judgments closely resembled the posterior probabilities entailed by the causes' prior probabilities and the effects' likelihoods. We observed interindividual differences regarding the temporal weighting of evidence in sequential diagnostic reasoning. Our work provides pathways for investigating judgment and decision making with verbal information within a computational modeling framework. Copyright © 2017 Elsevier Inc. All rights reserved.
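The sequential updating at the heart of such tasks, multiplying a prior over causes by the likelihood of each incoming effect with an optional temporal weight, can be sketched as follows. The diseases, symptoms, and probabilities are invented for illustration, and the `decay` exponent is one simple way to implement temporal weighting, not necessarily the paper's model.

```python
def sequential_diagnosis(prior, likelihoods, evidence, decay=1.0):
    """Sequentially update P(cause) as effects arrive one by one.

    prior:       dict cause -> prior probability
    likelihoods: dict cause -> dict effect -> P(effect | cause)
    evidence:    ordered list of observed effects
    decay:       temporal weighting; 1.0 is standard Bayes, values < 1
                 down-weight older evidence (recency), > 1 gives primacy
    """
    posterior = dict(prior)
    n = len(evidence)
    for i, effect in enumerate(evidence):
        # older items get a larger exponent, hence less weight when decay < 1
        weight = decay ** (n - 1 - i)
        for cause in posterior:
            posterior[cause] *= likelihoods[cause][effect] ** weight
        z = sum(posterior.values())
        posterior = {c: p / z for c, p in posterior.items()}
    return posterior

posterior = sequential_diagnosis(
    {"disease_a": 0.5, "disease_b": 0.5},
    {"disease_a": {"fever": 0.8, "rash": 0.3},
     "disease_b": {"fever": 0.4, "rash": 0.6}},
    ["fever", "fever"],
)
```

With `decay=1.0` this reduces to the posterior entailed by the priors and likelihoods, the benchmark against which the paper compares subjects' judgments.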
Tracking Object Existence From an Autonomous Patrol Vehicle
NASA Technical Reports Server (NTRS)
Wolf, Michael; Scharenbroich, Lucas
2011-01-01
An autonomous vehicle patrols a large region, during which an algorithm receives measurements of detected potential objects within its sensor range. The goal of the algorithm is to track all objects in the region over time. This problem differs from traditional multi-target tracking scenarios because the region of interest is much larger than the sensor range and relies on the movement of the sensor through this region for coverage. The goal is to know whether anything has changed between visits to the same location. In particular, two kinds of alert conditions must be detected: (1) a previously detected object has disappeared and (2) a new object has appeared in a location already checked. For the time an object is within sensor range, the object can be assumed to remain stationary, changing position only between visits. The problem is difficult because the upstream object detection processing is likely to make many errors, resulting in heavy clutter (false positives) and missed detections (false negatives), and because only noisy, bearings-only measurements are available. This work has three main goals: (1) Associate incoming measurements with known objects or mark them as new objects or false positives, as appropriate. For this, a multiple hypothesis tracker was adapted to this scenario. (2) Localize the objects using multiple bearings-only measurements to provide estimates of global position (e.g., latitude and longitude). A nonlinear Kalman filter extension provides these 2D position estimates using the 1D measurements. (3) Calculate the probability that a suspected object truly exists (in the estimated position), and determine whether alert conditions have been triggered (for new objects or disappeared objects). The concept of a probability of existence was created, and a new Bayesian method for updating this probability at each time step was developed. 
A probabilistic multiple hypothesis approach is chosen because of its superiority in handling the uncertainty arising from errors in sensors and upstream processes. However, traditional target tracking methods typically assume a stationary detection volume of interest, whereas in this case, one must make adjustments for being able to see only a small portion of the region of interest and understand when an alert situation has occurred. To track object existence inside and outside the vehicle's sensor range, a probability of existence was defined for each hypothesized object, and this value was updated at every time step in a Bayesian manner based on expected characteristics of the sensor and object and whether that object has been detected in the most recent time step. Then, this value feeds into a sequential probability ratio test (SPRT) to determine the status of the object (suspected, confirmed, or deleted). Alerts are sent upon selected status transitions. Additionally, in order to track objects that move in and out of sensor range and update the probability of existence appropriately, a variable probability of detection has been defined and the hypothesis probability equations have been re-derived to accommodate this change. Unsupervised object tracking is a pervasive issue in automated perception systems. This work could apply to any mobile platform (ground vehicle, sea vessel, air vehicle, or orbiter) that intermittently revisits regions of interest and needs to determine whether anything interesting has changed.
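The per-timestep existence update and thresholding can be sketched generically: a Bayes update of P(exists) from a hit or a miss, followed by an SPRT-style status decision. The detection probability, false-alarm probability, and thresholds below are illustrative assumptions, not values from this work; setting `p_d` low or to zero models an object outside sensor range, the "variable probability of detection" idea.

```python
def update_existence(p, detected, p_d=0.9, p_fa=0.05):
    """One Bayesian update of the probability that an object exists.

    p_d:  detection probability given the object exists (reduce toward 0
          when the object is outside sensor range).
    p_fa: probability of a false detection when no object exists.
    """
    if detected:
        num = p * p_d
        den = p * p_d + (1 - p) * p_fa
    else:  # a miss lowers the existence probability
        num = p * (1 - p_d)
        den = p * (1 - p_d) + (1 - p) * (1 - p_fa)
    return num / den

def sprt_status(p, confirm=0.95, delete=0.05):
    """SPRT-style decision: threshold the existence probability."""
    if p >= confirm:
        return "confirmed"
    if p <= delete:
        return "deleted"
    return "suspected"

p = 0.5  # initial existence probability for a new hypothesized object
for hit in [True, True, True]:
    p = update_existence(p, hit)
status = sprt_status(p)
```

Three consecutive detections drive the existence probability above the confirmation threshold, triggering the status transition on which alerts would be issued.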
A-Track: A New Approach for Detection of Moving Objects in FITS Images
NASA Astrophysics Data System (ADS)
Kılıç, Yücel; Karapınar, Nurdan; Atay, Tolga; Kaplan, Murat
2016-07-01
Small planet and asteroid observations are important for understanding the origin and evolution of the Solar System. In this work, we have developed a fast and robust pipeline, called A-Track, for detecting asteroids and comets in sequential telescope images. The moving objects are detected using a modified line detection algorithm, called ILDA. We have coded the pipeline in Python 3, where we have made use of various scientific modules in Python to process the FITS images. We tested the code on photometric data taken by an SI-1100 CCD with a 1-meter telescope at TUBITAK National Observatory, Antalya. The pipeline can be used to analyze large data archives or daily sequential data. The code is hosted on GitHub under the GNU GPL v3 license.
Bayes and empirical Bayes estimators of abundance and density from spatial capture-recapture data
Dorazio, Robert M.
2013-01-01
In capture-recapture and mark-resight surveys, movements of individuals both within and between sampling periods can alter the susceptibility of individuals to detection over the region of sampling. In these circumstances spatially explicit capture-recapture (SECR) models, which incorporate the observed locations of individuals, allow population density and abundance to be estimated while accounting for differences in detectability of individuals. In this paper I propose two Bayesian SECR models, one for the analysis of recaptures observed in trapping arrays and another for the analysis of recaptures observed in area searches. In formulating these models I used distinct submodels to specify the distribution of individual home-range centers and the observable recaptures associated with these individuals. This separation of ecological and observational processes allowed me to derive a formal connection between Bayes and empirical Bayes estimators of population abundance that has not been established previously. I showed that this connection applies to every Poisson point-process model of SECR data and provides theoretical support for a previously proposed estimator of abundance based on recaptures in trapping arrays. To illustrate results of both classical and Bayesian methods of analysis, I compared Bayes and empirical Bayes estimates of abundance and density using recaptures from simulated and real populations of animals. Real populations included two iconic datasets: recaptures of tigers detected in camera-trap surveys and recaptures of lizards detected in area-search surveys. In the datasets I analyzed, classical and Bayesian methods provided similar – and often identical – inferences, which is not surprising given the sample sizes and the noninformative priors used in the analyses.
Test pattern generation for ILA sequential circuits
NASA Technical Reports Server (NTRS)
Feng, YU; Frenzel, James F.; Maki, Gary K.
1993-01-01
An efficient method of generating test patterns for sequential machines implemented using one-dimensional, unilateral, iterative logic arrays (ILA's) of BTS pass transistor networks is presented. Based on a transistor level fault model, the method affords a unique opportunity for real-time fault detection with improved fault coverage. The resulting test sets are shown to be equivalent to those obtained using conventional gate level models, thus eliminating the need for additional test patterns. The proposed method advances the simplicity and ease of the test pattern generation for a special class of sequential circuitry.
Automated Bayesian model development for frequency detection in biological time series.
Granqvist, Emma; Oldroyd, Giles E D; Morris, Richard J
2011-06-24
A first step in building a mathematical model of a biological system is often the analysis of the temporal behaviour of key quantities. Mathematical relationships between the time and frequency domain, such as Fourier Transforms and wavelets, are commonly used to extract information about the underlying signal from a given time series. This one-to-one mapping from time points to frequencies inherently assumes that both domains contain the complete knowledge of the system. However, for truncated, noisy time series with background trends this unique mapping breaks down and the question reduces to an inference problem of identifying the most probable frequencies. In this paper we build on the method of Bayesian Spectrum Analysis and demonstrate its advantages over conventional methods by applying it to a number of test cases, including two types of biological time series. Firstly, oscillations of calcium in plant root cells in response to microbial symbionts are non-stationary and noisy, posing challenges to data analysis. Secondly, circadian rhythms in gene expression measured over only two cycles highlight the problem of time series with limited length. The results show that the Bayesian frequency detection approach can provide useful results in specific areas where Fourier analysis can be uninformative or misleading. We demonstrate further benefits of the Bayesian approach for time series analysis, such as direct comparison of different hypotheses, inherent estimation of noise levels and parameter precision, and a flexible framework for modelling the data without pre-processing. Modelling in systems biology often builds on the study of time-dependent phenomena. Fourier Transforms are a convenient tool for analysing the frequency domain of time series. However, there are well-known limitations of this method, such as the introduction of spurious frequencies when handling short and noisy time series, and the requirement for uniformly sampled data.
Biological time series often deviate significantly from the requirements of optimality for Fourier transformation. In this paper we present an alternative approach based on Bayesian inference. We show the value of placing spectral analysis in the framework of Bayesian inference and demonstrate how model comparison can automate this procedure.
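The Bayesian alternative to a raw Fourier transform can be illustrated by placing a grid prior over candidate frequencies and scoring each by how well a fitted sinusoid explains the data; unlike an FFT, the same code runs unchanged on non-uniformly sampled series. This is a simplified single-frequency sketch with an assumed known noise level, not the full Bayesian Spectrum Analysis machinery of the paper.

```python
import numpy as np

def frequency_posterior(t, y, freqs, noise_sd=1.0):
    """Posterior over candidate frequencies for a single sinusoid in noise.

    For each frequency, fit sine and cosine amplitudes by least squares
    and score the residual under a Gaussian noise model; a flat prior
    over the frequency grid is assumed."""
    log_post = []
    for f in freqs:
        X = np.column_stack([np.sin(2 * np.pi * f * t),
                             np.cos(2 * np.pi * f * t)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        log_post.append(-0.5 * np.sum(resid ** 2) / noise_sd ** 2)
    log_post = np.array(log_post)
    post = np.exp(log_post - log_post.max())  # stabilized exponentiation
    return post / post.sum()

# Toy series: a 0.5-frequency sinusoid plus noise.
t = np.linspace(0.0, 10.0, 200)
y = np.sin(2 * np.pi * 0.5 * t) + 0.1 * np.random.default_rng(1).standard_normal(200)
freqs = np.linspace(0.1, 1.0, 91)
post = frequency_posterior(t, y, freqs)
best = freqs[post.argmax()]
```

The width of the resulting posterior directly quantifies the precision of the frequency estimate, one of the benefits over a point-estimate periodogram noted in the abstract.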
R, Elakkiya; K, Selvamani
2017-09-22
Subunit segmentation and modelling in medical sign language is one of the important problems in linguistic-oriented and vision-based Sign Language Recognition (SLR). Many past efforts focused on functional subunits from the viewpoint of linguistic syllables, but implementing such syllable-based subunit extraction is not feasible with real-world computer vision techniques. Moreover, present recognition systems are designed such that they detect only signer-dependent actions under restricted laboratory conditions. This paper aims at solving these two important issues: (1) subunit extraction and (2) signer-independent action in visual sign language recognition. Subunit extraction involves the sequential and parallel breakdown of sign gestures without any prior knowledge of syllables or of the number of subunits. A novel Bayesian Parallel Hidden Markov Model (BPaHMM) is introduced for subunit extraction, combining the features of manual and non-manual parameters to yield better results in classification and recognition of signs. Signer-independent action aims at using a single web camera for different signer behaviour patterns and for cross-signer validation. Experimental results show that the proposed signer-independent, subunit-level modelling for sign language classification and recognition improves upon existing works.
On-line Bayesian model updating for structural health monitoring
NASA Astrophysics Data System (ADS)
Rocchetta, Roberto; Broggi, Matteo; Huchet, Quentin; Patelli, Edoardo
2018-03-01
Fatigue-induced cracking is a dangerous failure mechanism affecting mechanical components subject to alternating load cycles. System health monitoring should be adopted to identify cracks that can jeopardise the structure. Real-time damage detection may fail to identify cracks due to different sources of uncertainty that have been poorly assessed or fully neglected. In this paper, a novel efficient and robust procedure is used for the detection of crack locations and lengths in mechanical components. A Bayesian model updating framework is employed, which allows relevant sources of uncertainty to be accounted for. The idea underpinning the approach is to identify the most probable crack consistent with the experimental measurements. To tackle the computational cost of the Bayesian approach, an emulator replaces the computationally costly finite element model. To improve the overall robustness of the procedure, different numerical likelihoods, measurement noises and imprecision in the values of model parameters are analysed and their effects quantified. The accuracy of the stochastic updating and the efficiency of the numerical procedure are discussed. An experimental aluminium frame and a numerical model of a typical car suspension arm are used to demonstrate the applicability of the approach.
A Bayesian analysis for identifying DNA copy number variations using a compound Poisson process.
Chen, Jie; Yiğiter, Ayten; Wang, Yu-Ping; Deng, Hong-Wen
2010-01-01
To study chromosomal aberrations that may lead to cancer formation or genetic diseases, the array-based Comparative Genomic Hybridization (aCGH) technique is often used for detecting DNA copy number variants (CNVs). Various methods have been developed for gaining CNVs information based on aCGH data. However, most of these methods make use of the log-intensity ratios in aCGH data without taking advantage of other information such as the DNA probe (e.g., biomarker) positions/distances contained in the data. Motivated by the specific features of aCGH data, we developed a novel method that takes into account the estimation of a change point or locus of the CNV in aCGH data with its associated biomarker position on the chromosome using a compound Poisson process. We used a Bayesian approach to derive the posterior probability for the estimation of the CNV locus. To detect loci of multiple CNVs in the data, a sliding window process combined with our derived Bayesian posterior probability was proposed. To evaluate the performance of the method in the estimation of the CNV locus, we first performed simulation studies. Finally, we applied our approach to real data from aCGH experiments, demonstrating its applicability.
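The change-point estimation step can be illustrated in a simplified form: a posterior over the location of a single mean shift in a sequence of measurements under a flat prior. The real method models log-intensity ratios jointly with probe positions via a compound Poisson process; this sketch keeps only the Bayesian change-point idea, with a Gaussian likelihood and invented numbers.

```python
import numpy as np

def changepoint_posterior(x, sd=1.0):
    """Posterior over the index k of a single mean shift in x.

    For each k, split x into x[:k] and x[k:] and score both segments
    under a Gaussian with the segment's own mean; flat prior over k."""
    n = len(x)
    log_post = np.full(n, -np.inf)  # k = 0 (no left segment) excluded
    for k in range(1, n):
        left, right = x[:k], x[k:]
        ss = (np.sum((left - left.mean()) ** 2)
              + np.sum((right - right.mean()) ** 2))
        log_post[k] = -0.5 * ss / sd ** 2
    post = np.exp(log_post - log_post.max())  # stabilized exponentiation
    return post / post.sum()

# Toy data: mean shifts from 0 to 1.5 at index 30.
rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0.0, 0.3, 30), rng.normal(1.5, 0.3, 30)])
post = changepoint_posterior(x)
k_hat = int(post.argmax())
```

Sliding such a posterior along the chromosome, as the paper proposes, turns this single-change estimator into a detector for multiple CNV loci.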
NASA Astrophysics Data System (ADS)
Taylor, Stephen; Ellis, Justin; Gair, Jonathan
2014-11-01
We describe several new techniques which accelerate Bayesian searches for continuous gravitational-wave emission from supermassive black-hole binaries using pulsar-timing arrays. These techniques mitigate the problematic increase of search dimensionality with the size of the pulsar array which arises from having to include an extra parameter per pulsar as the array is expanded. This extra parameter corresponds to searching over the phase of the gravitational wave as it propagates past each pulsar so that we can coherently include the pulsar term in our search strategies. Our techniques make the analysis tractable with powerful evidence-evaluation packages like MultiNest. We find good agreement of our techniques with the parameter-estimation and Bayes factor evaluation performed with full signal templates and conclude that these techniques make excellent first-cut tools for detection and characterization of continuous gravitational-wave signals with pulsar-timing arrays. Crucially, at low to moderate signal-to-noise ratios the factor by which the analysis is sped up can be ≳100, permitting rigorous programs of systematic injection and recovery of signals to establish robust detection criteria within a Bayesian formalism.
Schmidt, Paul; Schmid, Volker J; Gaser, Christian; Buck, Dorothea; Bührlen, Susanne; Förschler, Annette; Mühlau, Mark
2013-01-01
Aiming at iron-related T2-hypointensity, which is related to normal aging and neurodegenerative processes, we here present two practicable approaches, based on Bayesian inference, for preprocessing and statistical analysis of a complex set of structural MRI data. In particular, Markov Chain Monte Carlo methods were used to simulate posterior distributions. First, we rendered a segmentation algorithm that uses outlier detection based on model checking techniques within a Bayesian mixture model. Second, we rendered an analytical tool comprising a Bayesian regression model with smoothness priors (in the form of Gaussian Markov random fields) mitigating the necessity to smooth data prior to statistical analysis. For validation, we used simulated data and MRI data of 27 healthy controls (age: [Formula: see text]; range, [Formula: see text]). We first observed robust segmentation of both simulated T2-hypointensities and gray-matter regions known to be T2-hypointense. Second, simulated data and images of segmented T2-hypointensity were analyzed. We found not only robust identification of simulated effects but also a biologically plausible age-related increase of T2-hypointensity primarily within the dentate nucleus but also within the globus pallidus, substantia nigra, and red nucleus. Our results indicate that fully Bayesian inference can successfully be applied for preprocessing and statistical analysis of structural MRI data.
A Bayesian model for visual space perception
NASA Technical Reports Server (NTRS)
Curry, R. E.
1972-01-01
A model for visual space perception is proposed that contains desirable features in the theories of Gibson and Brunswik. This model is a Bayesian processor of proximal stimuli which contains three important elements: an internal model of the Markov process describing the knowledge of the distal world, the a priori distribution of the state of the Markov process, and an internal model relating state to proximal stimuli. The universality of the model is discussed and it is compared with signal detection theory models. Experimental results of Kinchla are used as a special case.
Evaluating Bayesian Networks' Precision for Detecting Students' Learning Styles
ERIC Educational Resources Information Center
Garcia, Patricio; Amandi, Analia; Schiaffino, Silvia; Campo, Marcelo
2007-01-01
Students are characterized by different learning styles, focusing on different types of information and processing this information in different ways. One of the desirable characteristics of a Web-based education system is that all the students can learn despite their different learning styles. To achieve this goal we have to detect how students…
Experimental Verification of Bayesian Planet Detection Algorithms with a Shaped Pupil Coronagraph
NASA Astrophysics Data System (ADS)
Savransky, D.; Groff, T. D.; Kasdin, N. J.
2010-10-01
We evaluate the feasibility of applying Bayesian detection techniques to discovering exoplanets using high contrast laboratory data with simulated planetary signals. Background images are generated at the Princeton High Contrast Imaging Lab (HCIL), with a coronagraphic system utilizing a shaped pupil and two deformable mirrors (DMs) in series. Estimates of the electric field at the science camera are used to correct for quasi-static speckle and produce symmetric high contrast dark regions in the image plane. Planetary signals are added in software, or via a physical star-planet simulator which adds a second off-axis point source before the coronagraph with a beam recombiner, calibrated to a fixed contrast level relative to the source. We produce a variety of images, with varying integration times and simulated planetary brightness. We then apply automated detection algorithms such as matched filtering to attempt to extract the planetary signals. This allows us to evaluate the efficiency of these techniques in detecting planets in a high noise regime and eliminating false positives, as well as to test existing algorithms for calculating the required integration times for these techniques to be applicable.
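A matched filter of the sort described can be sketched as correlating the image with a zero-mean, unit-norm template of the expected planetary point-spread function; peaks in the resulting score map flag candidate detections. The Gaussian blob template, injected contrast, and image size below are invented for illustration and are unrelated to the HCIL optics.

```python
import numpy as np

def matched_filter(image, template):
    """Score every offset by correlating the local image patch with a
    zero-mean, unit-norm copy of the expected source template."""
    t = template - template.mean()
    t = t / np.sqrt((t ** 2).sum())
    h, w = template.shape
    rows = image.shape[0] - h + 1
    cols = image.shape[1] - w + 1
    score = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            patch = image[i:i + h, j:j + w]
            score[i, j] = np.sum((patch - patch.mean()) * t)
    return score

rng = np.random.default_rng(7)
image = rng.normal(0.0, 1.0, (32, 32))                 # noise background
yy, xx = np.mgrid[:5, :5]
psf = np.exp(-((yy - 2) ** 2 + (xx - 2) ** 2) / 2.0)   # toy planet PSF
image[10:15, 20:25] += 10.0 * psf                      # inject a fake planet
score = matched_filter(image, psf)
peak = np.unravel_index(score.argmax(), score.shape)
```

Thresholding the score map then trades detection efficiency against false positives, the balance the laboratory study set out to quantify as a function of integration time and planet brightness.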
Hierarchical models of animal abundance and occurrence
Royle, J. Andrew; Dorazio, R.M.
2006-01-01
Much of animal ecology is devoted to studies of abundance and occurrence of species, based on surveys of spatially referenced sample units. These surveys frequently yield sparse counts that are contaminated by imperfect detection, making direct inference about abundance or occurrence based on observational data infeasible. This article describes a flexible hierarchical modeling framework for estimation and inference about animal abundance and occurrence from survey data that are subject to imperfect detection. Within this framework, we specify models of abundance and detectability of animals at the level of the local populations defined by the sample units. Information at the level of the local population is aggregated by specifying models that describe variation in abundance and detection among sites. We describe likelihood-based and Bayesian methods for estimation and inference under the resulting hierarchical model. We provide two examples of the application of hierarchical models to animal survey data, the first based on removal counts of stream fish and the second based on avian quadrat counts. For both examples, we provide a Bayesian analysis of the models using the software WinBUGS.
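The hierarchical abundance model described above can be illustrated with a minimal binomial N-mixture sketch: site abundances are Poisson, repeat counts are binomial thinnings under imperfect detection, and the latent abundances are summed out of the likelihood. A crude grid search stands in for the likelihood-based or WinBUGS machinery of the paper; all sample sizes and parameter values below are invented.

```python
import numpy as np
from scipy.stats import poisson, binom

rng = np.random.default_rng(1)
lam_true, p_true, R, T = 5.0, 0.4, 150, 3     # R sites, T repeat visits (invented)
N = rng.poisson(lam_true, R)                  # latent local abundance per site
y = rng.binomial(N[:, None], p_true, (R, T))  # counts under imperfect detection

def nll(lam, p, Nmax=60):
    # Marginal likelihood of the N-mixture model: sum the latent abundance out,
    # L_i = sum_N Poisson(N | lam) * prod_t Binomial(y_it | N, p).
    Ns = np.arange(Nmax + 1)
    pN = poisson.pmf(Ns, lam)                       # (Nmax+1,)
    like = binom.pmf(y[:, :, None], Ns, p).prod(1)  # (R, Nmax+1)
    return -np.log(like @ pN + 1e-300).sum()

# Crude grid search in place of a proper optimiser or MCMC.
grid = [(l, p) for l in np.linspace(3, 8, 11) for p in np.linspace(0.2, 0.6, 11)]
lam_hat, p_hat = min(grid, key=lambda g: nll(*g))
print(lam_hat, p_hat)  # should land near the true (5.0, 0.4)
```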
Tales from Two Cores: Bayesian Re-Analyses of the Summit Lake and Blue Lakes Pollen Cores
NASA Astrophysics Data System (ADS)
Hall, M.
2016-12-01
Pollen cores from Summit Lake and Blue Lakes in Humboldt Co., Nevada provide palaeoclimatic information for the last 2000 years in the NW Great Basin. Summit Lake lies in the northern Black Rock Range (41.5 N, -119.1 W) at an elevation of 1780 m; the Blue Lakes sit at an elevation of 2434 m in the southern Pine Forest Range (41.6 N, -118.6 W). The distance between the two lakes is 33.5 km. The cores were originally taken to reconstruct the fire history of the NW Great Basin. In this study, stochastic climate histories are created using a Bayesian methodology as implemented in the Bclim program. This Bayesian approach 1) takes a multivariate approach based on modern pollen analogs, 2) accounts for the non-linear and non-Gaussian relationship between the climate and the pollen proxy, and 3) accounts for the uncertainties in the radiocarbon record and climate histories. For both cores, the following climatic variables are reported for the last 2 kya: Mean Temperature of the Coldest month (MTCO), Growing Degree Days above 5 Centigrade (GDD5), and the ratio of Actual to Potential Evapotranspiration (AET/PET). Because the Summit Lake core was sequentially sampled, the Artemisia/Chenopodiaceae (A/C) ratio, an indicator of wetness, and the Grasses/Shrubs (G/S) ratio, an indicator of the vegetation communities, are calculated for each section of that core. Bayesian changepoint analyses of the Summit Lake core indicate no significant difference in the mean or variance of the A/C ratio over the last 2 kya cal BP, but a significant decrease in the G/S ratio dating to circa 700 ya cal BP. At Summit Lake, a statistically significant decrease in GDD5 occurs at 1.4-1.5 kya cal BP, and a significant increase in GDD5 occurs over the last 200 ya cal BP. The GDD5 and MTCO at Blue Lakes show a significant increase at 600 ya cal BP, followed by a decrease over the next century. The regional archaeological record will be discussed in light of these changes.
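A Bayesian changepoint analysis of the kind applied to the Summit Lake ratios can be sketched for the simplest case, a single shift in the mean of a Gaussian series (Bclim and the paper's changepoint machinery are far richer); the series, noise level, and shift location below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic proxy series with a mean shift at index 120 (cf. the G/S drop).
x = np.concatenate([rng.normal(1.0, 0.3, 120), rng.normal(0.6, 0.3, 80)])

def log_marg(seg, sigma=0.3):
    # Marginal log-likelihood of one segment: Gaussian data with known sigma
    # and a flat prior on the (integrated-out) segment mean.
    n = len(seg)
    ss = ((seg - seg.mean()) ** 2).sum()
    return -ss / (2 * sigma**2) + 0.5 * np.log(2 * np.pi * sigma**2 / n)

n = len(x)
taus = np.arange(5, n - 5)  # candidate changepoints, uniform prior
logp = np.array([log_marg(x[:t]) + log_marg(x[t:]) for t in taus])
post = np.exp(logp - logp.max())
post /= post.sum()          # posterior over the changepoint location
tau_map = taus[np.argmax(post)]
print(tau_map)              # should fall close to the true index 120
```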
Persichetti, Maria Flaminia; Solano-Gallego, Laia; Vullo, Angela; Masucci, Marisa; Marty, Pierre; Delaunay, Pascal; Vitale, Fabrizio; Pennisi, Maria Grazia
2017-03-13
Anti-Leishmania antibodies are increasingly investigated in cats for epidemiological studies or for the diagnosis of clinical feline leishmaniosis. The immunofluorescent antibody test (IFAT), the enzyme-linked immunosorbent assay (ELISA) and western blot (WB) are the most frequently used serological tests. The aim of the present study was to assess the diagnostic performance of IFAT, ELISA and WB for detecting anti-L. infantum antibodies in feline serum samples obtained from endemic (n = 76) and non-endemic (n = 64) areas and from cats affected by feline leishmaniosis (n = 21), using a Bayesian approach without a gold standard. Cut-offs were set at a titre of 80 for IFAT and 40 ELISA units for ELISA. WB was considered positive in the presence of at least an 18 kDa band. Statistical analysis was performed with a routine written in MATLAB within the Bayesian framework. Latent data and observations were simulated from the joint posterior by an iterative Markov chain Monte Carlo technique, using the Gibbs sampler, to estimate the sensitivity and specificity of the three tests. The median seroprevalence in the sample used for evaluating the performance of the tests was estimated at 0.27 [credible interval (CI): 0.20-0.34]. The median sensitivities of the three methods were 0.97 (CI: 0.86-1.00), 0.75 (CI: 0.61-0.87) and 0.70 (CI: 0.56-0.83) for WB, IFAT and ELISA, respectively. Median specificity reached 0.99 (CI: 0.96-1.00) with WB, 0.97 (CI: 0.93-0.99) with IFAT and 0.98 (CI: 0.94-1.00) with ELISA. IFAT was more sensitive than ELISA (75 vs 70%) for the detection of subclinical infection, while ELISA performed better for diagnosing clinical leishmaniosis when compared with IFAT (98 vs 97%). The overall performance of all serological techniques was good, and the most accurate test for anti-Leishmania antibody detection in feline serum samples was WB.
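The Gibbs-sampling scheme for estimating sensitivity and specificity without a gold standard can be sketched for a single test (the study models three tests jointly, which is what makes the parameters identifiable; a one-test model leans heavily on informative priors). All hyperparameters and the simulated data below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated serology for ONE test; the latent true status of each cat is unknown.
n, prev, se, sp = 500, 0.27, 0.75, 0.97
d = rng.random(n) < prev                       # latent true infection status
y = np.where(d, rng.random(n) < se, rng.random(n) > sp).astype(int)

# Beta priors (invented hyperparameters, centred near plausible expert values).
a_p, b_p = 2, 2; a_se, b_se = 15, 5; a_sp, b_sp = 30, 2

draws = []
p, s, t = 0.5, 0.8, 0.9                        # initial values for prev, se, sp
for it in range(4000):
    # 1) latent status z_i given current parameters (Bayes' rule per subject)
    num = p * np.where(y == 1, s, 1 - s)
    den = num + (1 - p) * np.where(y == 1, 1 - t, t)
    z = rng.random(n) < num / den
    # 2) conjugate Beta updates for prevalence, sensitivity, specificity
    p = rng.beta(a_p + z.sum(), b_p + n - z.sum())
    s = rng.beta(a_se + (z & (y == 1)).sum(), b_se + (z & (y == 0)).sum())
    t = rng.beta(a_sp + (~z & (y == 0)).sum(), b_sp + (~z & (y == 1)).sum())
    if it >= 1000:                             # discard burn-in
        draws.append((p, s, t))

prev_med, se_med, sp_med = np.median(draws, axis=0)
print(round(prev_med, 2), round(se_med, 2), round(sp_med, 2))
```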
NASA Astrophysics Data System (ADS)
Koch, Wolfgang
1996-05-01
Sensor data processing in a dense-target/dense-clutter environment is inevitably confronted with data-association conflicts, which correspond to the multiple-hypothesis character of many modern approaches (MHT: multiple hypothesis tracking). In this paper we analyze the efficiency of retrodictive techniques that generalize standard fixed-interval smoothing to MHT applications. 'Delayed estimation' based on retrodiction provides uniquely interpretable and accurate trajectories from ambiguous MHT output if a certain time delay is tolerated. In a Bayesian framework, the theoretical background of retrodiction and its intimate relation to Bayesian MHT is sketched. By a simulated example with two closely spaced targets, relatively low detection probabilities, and rather high false-return densities, we demonstrate the benefits of retrodiction and quantitatively discuss the achievable track accuracies and the time delays involved for typical radar parameters.
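Retrodiction generalizes fixed-interval smoothing; for a single target with no association ambiguity it reduces to the Rauch-Tung-Striebel smoother, sketched below on a toy constant-velocity track (the noise levels and track length are invented, and none of the MHT machinery is included).

```python
import numpy as np

rng = np.random.default_rng(4)
dt, q, r, T = 1.0, 0.05, 1.0, 60
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
R = np.array([[r]])

# Simulate a constant-velocity track with position-only measurements.
x = np.array([0.0, 1.0]); xs, zs = [], []
for _ in range(T):
    x = F @ x + rng.multivariate_normal([0, 0], Q)
    xs.append(x)
    zs.append(H @ x + rng.normal(0, np.sqrt(r), 1))

# Forward Kalman filter, storing predicted and filtered moments.
m, P = np.zeros(2), np.eye(2) * 10.0
ms, Ps, mp, Pp = [], [], [], []
for z in zs:
    m_pred, P_pred = F @ m, F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    m = m_pred + K @ (z - H @ m_pred)
    P = (np.eye(2) - K @ H) @ P_pred
    mp.append(m_pred); Pp.append(P_pred); ms.append(m); Ps.append(P)

# Backward Rauch-Tung-Striebel pass: "retrodiction" for a single target.
sm = [ms[-1]]
for k in range(T - 2, -1, -1):
    G = Ps[k] @ F.T @ np.linalg.inv(Pp[k + 1])
    sm.append(ms[k] + G @ (sm[-1] - mp[k + 1]))
sm = sm[::-1]

mse_filt = np.mean([(ms[k][0] - xs[k][0]) ** 2 for k in range(T)])
mse_smooth = np.mean([(sm[k][0] - xs[k][0]) ** 2 for k in range(T)])
print(mse_smooth < mse_filt)  # retrodiction should not degrade accuracy
```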
Duggento, Andrea; Stankovski, Tomislav; McClintock, Peter V E; Stefanovska, Aneta
2012-12-01
Living systems have time-evolving interactions that, until recently, could not be identified accurately from recorded time series in the presence of noise. Stankovski et al. [Phys. Rev. Lett. 109, 024101 (2012)] introduced a method based on dynamical Bayesian inference that facilitates the simultaneous detection of time-varying synchronization, directionality of influence, and coupling functions. It can distinguish unsynchronized dynamics from noise-induced phase slips. The method is based on phase dynamics, with Bayesian inference of the time-evolving parameters being achieved by shaping the prior densities to incorporate knowledge of previous samples. We now present the method in detail using numerically generated data, data from an analog electronic circuit, and cardiorespiratory data. We also generalize the method to encompass networks of interacting oscillators and thus demonstrate its applicability to small-scale networks.
Improved head direction command classification using an optimised Bayesian neural network.
Nguyen, Son T; Nguyen, Hung T; Taylor, Philip B; Middleton, James
2006-01-01
Assistive technologies have recently emerged to improve the quality of life of severely disabled people by enhancing their independence in daily activities. Since many of those individuals have limited or non-existent control from the neck downward, alternative hands-free input modalities have become very important for accessing assistive devices. In hands-free control, head movement has proved to be a very effective user interface, as it provides a comfortable, reliable and natural way to access the device. Recently, neural networks have been shown to be useful not only for real-time pattern recognition but also for creating user-adaptive models. Since multi-layer perceptron neural networks trained using standard back-propagation may generalise poorly, the Bayesian technique has been proposed to improve the generalisation and robustness of these networks. This paper describes the use of Bayesian neural networks in developing a hands-free wheelchair control system. The experimental results show that, with the optimised architecture, Bayesian neural network classifiers can detect the head commands of wheelchair users accurately irrespective of their level of injury.
Kwak, Sehyun; Svensson, J; Brix, M; Ghim, Y-C
2016-02-01
A Bayesian model of the emission spectrum of the JET lithium beam has been developed to infer the intensity of the Li I (2p-2s) line radiation and associated uncertainties. The detected spectrum for each channel of the lithium beam emission spectroscopy system is here modelled by a single Li line modified by an instrumental function, Bremsstrahlung background, instrumental offset, and interference filter curve. Both the instrumental function and the interference filter curve are modelled with non-parametric Gaussian processes. All free parameters of the model, the intensities of the Li line, Bremsstrahlung background, and instrumental offset, are inferred using Bayesian probability theory with a Gaussian likelihood for photon statistics and electronic background noise. The prior distributions of the free parameters are chosen as Gaussians. Given these assumptions, the intensity of the Li line and corresponding uncertainties are analytically available using a Bayesian linear inversion technique. The proposed approach makes it possible to extract the intensity of Li line without doing a separate background subtraction through modulation of the Li beam.
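The analytic Bayesian linear inversion used here (Gaussian prior, Gaussian likelihood, linear forward model) has a closed-form posterior. The sketch below applies it to a toy three-component spectrum; the line shape, sloped background, noise level, and prior widths are invented stand-ins for the modelled instrumental function and filter curve.

```python
import numpy as np

rng = np.random.default_rng(5)

# Forward model: detected spectrum = A @ [line intensity, background, offset].
n_pix = 100
grid = np.linspace(-5, 5, n_pix)
line = np.exp(-grid**2 / 0.5)                   # stand-in instrumental line shape
A = np.column_stack([line, 0.5 + 0.05 * grid, np.ones(n_pix)])
x_true = np.array([50.0, 20.0, 3.0])            # invented true intensities
sigma = 1.0
y = A @ x_true + rng.normal(0, sigma, n_pix)

# Gaussian prior + Gaussian likelihood + linear model => Gaussian posterior,
# available analytically (no sampling, no separate background subtraction).
mu0, S0inv = np.zeros(3), np.eye(3) / 1e4       # broad Gaussian prior
S_post = np.linalg.inv(A.T @ A / sigma**2 + S0inv)
mu_post = S_post @ (A.T @ y / sigma**2 + S0inv @ mu0)
err = np.sqrt(np.diag(S_post))                  # analytic 1-sigma uncertainties
print(np.round(mu_post, 1), np.round(err, 2))
```

The line intensity and its uncertainty fall directly out of the posterior mean and covariance, which is the point of the analytic approach described in the abstract.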
Non-proliferative diabetic retinopathy symptoms detection and classification using neural network.
Al-Jarrah, Mohammad A; Shatnawi, Hadeel
2017-08-01
Diabetic retinopathy (DR) causes blindness in working-age people with diabetes in most countries. The increasing number of people with diabetes worldwide suggests that DR will continue to be a major contributor to vision loss. Early detection of retinopathy progression in individuals with diabetes is critical for preventing visual loss. Non-proliferative DR (NPDR) is an early stage of DR and can be classified as mild, moderate or severe. This paper proposes a novel morphology-based algorithm for detecting retinal lesions and classifying each case. First, the proposed algorithm detects the three DR lesions, namely haemorrhages, microaneurysms and exudates. Second, we define and extract a set of features from the detected lesions; the selected features emulate what physicians look for when classifying NPDR cases. Finally, we design an artificial neural network (ANN) classifier with three layers to classify NPDR as normal, mild, moderate or severe. Bayesian regularisation and resilient backpropagation algorithms are used to train the ANN. The accuracies of the proposed classifiers based on Bayesian regularisation and resilient backpropagation are 96.6% and 89.9%, respectively. The obtained results are compared with those of recently published classifiers; our proposed classifier outperforms the best of these in terms of sensitivity and specificity.
Ale, Cesar E; Farías, Marta E; Strasser de Saad, Ana M; Pasteris, Sergio E
2014-07-01
Growth and fermentation patterns of Saccharomyces cerevisiae, Kloeckera apiculata, and Oenococcus oeni strains cultured in grape juice medium were studied. In pure, sequential and simultaneous cultures, the strains reached the stationary growth phase between 2 and 3 days. Pure and mixed K. apiculata and S. cerevisiae cultures used mainly glucose, producing ethanol, organic acids, and 4.0 and 0.1 mM glycerol, respectively. In sequential cultures, O. oeni grew by about 1 log unit at 3 days, using mainly fructose and L-malic acid. The highest sugar consumption was detected in K. apiculata supernatants, lactic acid being the major end-product; 8.0 mM glycerol was found in 6-day culture supernatants. In simultaneous cultures, total sugars and L-malic acid were used within 3 days and 98% of the ethanol and glycerol was detected. This study represents the first report of the population dynamics and metabolic behavior of yeasts and O. oeni in sequential and simultaneous cultures, and it contributes to the selection of indigenous strains to design starter cultures for winemaking, also considering the inclusion of K. apiculata. The sequential inoculation of yeasts and O. oeni would enhance glycerol production, which confers desirable organoleptic characteristics to wines, while the organic acid levels would not affect their sensory profile. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Spatial cluster detection using dynamic programming.
Sverchkov, Yuriy; Jiang, Xia; Cooper, Gregory F
2012-03-25
The task of spatial cluster detection involves finding spatial regions where some property deviates from the norm or the expected value. In a probabilistic setting this task can be expressed as finding a region where some event is significantly more likely than usual. Spatial cluster detection is of interest in fields such as biosurveillance, mining of astronomical data, military surveillance, and analysis of fMRI images. In almost all such applications we are interested both in whether a cluster exists in the data and, if it exists, in finding the most accurate characterization of the cluster. We present a general dynamic programming algorithm for grid-based spatial cluster detection. The algorithm can be used both for Bayesian maximum a posteriori (MAP) estimation of the most likely spatial distribution of clusters and for Bayesian model averaging over a large space of spatial cluster distributions to compute the posterior probability of an unusual spatial clustering. The algorithm is explained and evaluated in the context of a biosurveillance application, specifically the detection and identification of influenza outbreaks based on emergency department visits. A relatively simple underlying model is constructed for the purpose of evaluating the algorithm, and the algorithm is evaluated using the model and semi-synthetic test data. When compared to baseline methods, tests indicate that the new algorithm can improve MAP estimates under certain conditions: the greedy algorithm we compared our method to was found to be more sensitive to smaller outbreaks, while as the size of the outbreaks increases, in terms of area affected and proportion of individuals affected, our method overtakes the greedy algorithm in spatial precision and recall. The new algorithm performs on par with baseline methods in the task of Bayesian model averaging.
We conclude that the dynamic programming algorithm performs on par with other available methods for spatial cluster detection and point to its low computational cost and extendability as advantages in favor of further research and use of the algorithm.
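The scoring idea behind the grid-based method can be illustrated with a brute-force Bayesian scan over axis-aligned rectangles; the paper's dynamic-programming algorithm computes such posteriors far more efficiently and averages over cluster configurations, which this toy does not attempt. The grid size, baseline rate, and outbreak risk are invented.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(6)

# Toy grid of ED-visit counts: Poisson baseline b everywhere, with a planted
# outbreak (rate q*b) in one rectangle.
G, b, q = 12, 5.0, 2.0
counts = rng.poisson(b, (G, G))
counts[4:8, 6:10] = rng.poisson(q * b, (4, 4))

def log_bf(region):
    # log Bayes factor of "rate q*b inside region" vs "rate b everywhere";
    # for Poisson counts this reduces to c*ln(q) - n*b*(q - 1).
    r0, r1, c0, c1 = region
    c = counts[r0:r1, c0:c1].sum()
    n = (r1 - r0) * (c1 - c0)
    return c * np.log(q) - n * b * (q - 1)

# Brute-force enumeration of all axis-aligned rectangles (the DP algorithm
# avoids this); with a uniform prior the MAP region is the best-scoring one.
rects = [(r0, r1, c0, c1)
         for r0, r1 in product(range(G), range(1, G + 1)) if r0 < r1
         for c0, c1 in product(range(G), range(1, G + 1)) if c0 < c1]
map_rect = max(rects, key=log_bf)
print(map_rect)  # should largely overlap the planted (4, 8, 6, 10)
```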
Multiple model cardinalized probability hypothesis density filter
NASA Astrophysics Data System (ADS)
Georgescu, Ramona; Willett, Peter
2011-09-01
The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.
Analysis and Synthesis of Load Forecasting Data for Renewable Integration Studies: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steckler, N.; Florita, A.; Zhang, J.
2013-11-01
As renewable energy constitutes greater portions of the generation fleet, the importance of modeling uncertainty as part of integration studies also increases. In pursuit of optimal system operations, it is important to capture not only the definitive behavior of power plants, but also the risks associated with systemwide interactions. This research examines the dependence of load forecast errors on external predictor variables such as temperature, day type, and time of day. The analysis was utilized to create statistically relevant instances of sequential load forecasts with only a time series of historic, measured load available. The creation of such load forecasts relies on Bayesian techniques for informing and updating the model, thus providing a basis for networked and adaptive load forecast models in future operational applications.
Facile Determination of Sodium Ion and Osmolarity in Artificial Tears by Sequential DNAzymes.
Kim, Eun Hye; Lee, Eun-Song; Lee, Dong Yun; Kim, Young-Pil
2017-12-07
Despite the high relevance of tear osmolarity to eye abnormality, numerous methods for detecting tear osmolarity rely upon expensive osmometers. We report a reliable method for simply determining sodium ion-based osmolarity in artificial tears using sequential DNAzymes. When a sodium ion-specific DNAzyme and a peroxidase-like DNAzyme were used as the sensing and detecting probes, respectively, the concentration of Na⁺ in artificial tears could be measured by absorbance or fluorescence intensity, which was highly correlated with osmolarity over the diagnostic range (R² > 0.98). Our approach is useful for studying eye diseases in relation to osmolarity.
Unsupervised Sequential Outlier Detection With Deep Architectures.
Lu, Weining; Cheng, Yu; Xiao, Cao; Chang, Shiyu; Huang, Shuai; Liang, Bin; Huang, Thomas
2017-09-01
Unsupervised outlier detection is a vital task with high impact on a wide variety of application domains, such as image analysis and video surveillance. It has gained long-standing attention and has been extensively studied in multiple research areas. Detecting and taking action on outliers as quickly as possible is imperative in order to protect networks and related stakeholders and to maintain the reliability of critical systems. However, outlier detection is difficult due to its one-class nature and challenges in feature construction. Sequential anomaly detection is even harder, with additional challenges from temporal correlation in the data, as well as the presence of noise and high dimensionality. In this paper, we introduce a novel deep structured framework to solve the challenging sequential outlier detection problem. We use autoencoder models to capture the intrinsic difference between outliers and normal instances and integrate the models into recurrent neural networks, which allow the learning to make use of previous context and make the learners more robust to warping along the time axis. Furthermore, we propose a layerwise training procedure, which significantly simplifies training and hence helps achieve efficient and scalable training. In addition, we investigate a fine-tuning step to update all parameters by incorporating the temporal correlation in the sequence. We further apply our proposed models to conduct systematic experiments on five real-world benchmark data sets. Experimental results demonstrate the effectiveness of our model compared with other state-of-the-art approaches.
Robust multiperson detection and tracking for mobile service and social robots.
Li, Liyuan; Yan, Shuicheng; Yu, Xinguo; Tan, Yeow Kee; Li, Haizhou
2012-10-01
This paper proposes an efficient system that integrates multiple vision models for robust multiperson detection and tracking for mobile service and social robots in public environments. The core technique is a novel maximum-likelihood (ML)-based algorithm that combines multimodel detections in mean-shift tracking. First, a likelihood probability integrating detections and similarity to local appearance is defined. Then, an expectation-maximization (EM)-like mean-shift algorithm is derived under the ML framework. In each iteration, the E-step estimates the associations to the detections, and the M-step locates the new position according to the ML criterion. To be robust in the complex crowded scenarios of multiperson tracking, an improved sequential strategy for mean-shift tracking is proposed. Under this strategy, human objects are tracked sequentially according to their priority order. To balance efficiency and robustness for real-time performance, at each stage the first two objects in the priority order are tested, and the one with the higher score is selected. The proposed method has been successfully implemented on real-world service and social robots. The vision system integrates stereo-based and histograms-of-oriented-gradients-based human detection, occlusion reasoning, and sequential mean-shift tracking. Various examples are presented to show the advantages and robustness of the proposed system for multiperson tracking from mobile robots, and quantitative evaluations of multiperson tracking performance are also reported. Experimental results indicate that significant improvements have been achieved by using the proposed method.
Gupta, Vinod Kumar; Mergu, Naveen; Kumawat, Lokesh Kumar; Singh, Ashok Kumar
2015-11-01
A new rhodamine-functionalized fluorogenic Schiff base, CS, was synthesized and its colorimetric and fluorescence responses toward various metal ions were explored. The sensor exhibited a highly selective and sensitive colorimetric and "off-on" fluorescence response towards Al(3+) in the presence of other competing metal ions. These spectral changes are large enough in the visible region of the spectrum to enable naked-eye detection. Studies showed that the formation of the CS-Al(3+) complex is fully reversible and that the complex responds to AcO(-)/F(-) via dissociation. The results revealed that the sensor provides a fluorescence "off-on-off" strategy for the sequential detection of Al(3+) and AcO(-)/F(-). Copyright © 2015 Elsevier B.V. All rights reserved.
A fast Bayesian approach to discrete object detection in astronomical data sets - PowellSnakes I
NASA Astrophysics Data System (ADS)
Carvalho, Pedro; Rocha, Graça; Hobson, M. P.
2009-03-01
A new fast Bayesian approach is introduced for the detection of discrete objects immersed in a diffuse background. This new method, called PowellSnakes, speeds up traditional Bayesian techniques by (i) replacing the standard form of the likelihood for the parameters characterizing the discrete objects by an alternative exact form that is much quicker to evaluate; (ii) using a simultaneous multiple minimization code based on Powell's direction set algorithm to locate rapidly the local maxima in the posterior; and (iii) deciding whether each located posterior peak corresponds to a real object by performing a Bayesian model selection using an approximate evidence value based on a local Gaussian approximation to the peak. The construction of this Gaussian approximation also provides the covariance matrix of the uncertainties in the derived parameter values for the object in question. This new approach provides a speed-up in performance by a factor of roughly 100 compared to existing Bayesian source extraction methods that use Markov chain Monte Carlo to explore the parameter space, such as that presented by Hobson & McLachlan. The method can be implemented in either real or Fourier space. In the case of objects embedded in a homogeneous random field, working in Fourier space provides a further speed-up that takes advantage of the fact that the correlation matrix of the background is circulant. We illustrate the capabilities of the method by applying it to some simplified toy models. Furthermore, PowellSnakes has the advantage of consistently defining the threshold for acceptance/rejection based on priors, which cannot be said of the frequentist methods. We present here the first implementation of this technique (version I). Further improvements to this implementation are currently under investigation and will be published shortly. The application of the method to realistic simulated Planck observations will be presented in a forthcoming publication.
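Steps (ii) and (iii), Powell minimization followed by a local Gaussian approximation, can be sketched on a 1-D toy source: minimize the negative log-posterior with Powell's method, then use a finite-difference Hessian at the peak for the parameter covariance and the Laplace-approximate evidence. This is not the PowellSnakes code; the data, flat priors, known source width, and starting point are all invented.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# 1-D toy data: one Gaussian-shaped object of known width in white noise.
x = np.linspace(0, 10, 200)
data = 3.0 * np.exp(-(x - 6.2)**2 / (2 * 0.4**2)) + rng.normal(0, 0.5, x.size)

def neg_log_post(theta):
    # Negative log-posterior for (position, amplitude); flat priors assumed.
    pos, amp = theta
    model = amp * np.exp(-(x - pos)**2 / (2 * 0.4**2))
    return 0.5 * ((data - model)**2).sum() / 0.5**2

# Step (ii): locate the posterior peak with Powell's direction-set method.
res = minimize(neg_log_post, x0=[6.0, 1.0], method="Powell")
pos_hat, amp_hat = res.x

# Step (iii): local Gaussian approximation; a finite-difference Hessian gives
# the parameter covariance and the Laplace-approximate log-evidence.
h, d = 1e-3, 2
H = np.zeros((d, d))
for i in range(d):
    for j in range(d):
        ei, ej = np.eye(d)[i] * h, np.eye(d)[j] * h
        H[i, j] = (neg_log_post(res.x + ei + ej) - neg_log_post(res.x + ei - ej)
                   - neg_log_post(res.x - ei + ej) + neg_log_post(res.x - ei - ej)) / (4 * h**2)
cov = np.linalg.inv(H)  # uncertainties of (position, amplitude)
log_evidence = -res.fun + 0.5 * d * np.log(2 * np.pi) - 0.5 * np.log(np.linalg.det(H))
print(np.round([pos_hat, amp_hat], 2))
```

Comparing such evidence values between the "object present" and "no object" models is what turns each located peak into an accept/reject decision.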
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kercel, S.W.
1999-11-07
For several reasons, Bayesian parameter estimation is superior to other methods for inductively learning a model for an anticipatory system. Since it exploits prior knowledge, the analysis begins from a more advantageous starting point than other methods. Also, since "nuisance parameters" can be removed from the Bayesian analysis, the description of the model need not be as complete as is necessary for such methods as matched filtering. In the limit of perfectly random noise and a perfect description of the model, the signal-to-noise ratio improves as the square root of the number of samples in the data. Even with the imperfections of real-world data, Bayesian methods approach this ideal limit of performance more closely than other methods. These capabilities provide a strategy for addressing a major unsolved problem in pump operation: the identification of precursors of cavitation. Cavitation causes immediate degradation of pump performance and ultimate destruction of the pump. However, the most efficient point to operate a pump is just below the threshold of cavitation. It might be hoped that a straightforward method to minimize pump cavitation damage would be to simply adjust the operating point until the inception of cavitation is detected and then to slightly readjust the operating point to let the cavitation vanish. However, due to the continuously evolving state of the fluid moving through the pump, the threshold of cavitation tends to wander. What is needed is to anticipate cavitation, and this requires the detection and identification of precursor features that occur just before cavitation starts.
Gonzalez, Aroa Garcia; Taraba, Lukáš; Hraníček, Jakub; Kozlík, Petr; Coufal, Pavel
2017-01-01
Dasatinib is a novel oral prescription drug for treating adult patients with chronic myeloid leukemia. Three analytical methods, namely ultra-high-performance liquid chromatography, capillary zone electrophoresis, and sequential injection analysis, were developed, validated, and compared for determination of the drug in the tablet dosage form. The total analysis times of the optimized ultra-high-performance liquid chromatography and capillary zone electrophoresis methods were 2.0 and 2.2 min, respectively. Direct ultraviolet detection at a wavelength of 322 nm was employed in both cases. The optimized sequential injection analysis method was based on spectrophotometric detection of dasatinib after a simple colorimetric reaction with Folin-Ciocalteu reagent, forming a blue-colored complex with an absorbance maximum at 745 nm; the total analysis time was 2.5 min. The ultra-high-performance liquid chromatography method provided the lowest detection and quantitation limits and the most precise and accurate results. All three newly developed methods were demonstrated to be specific, linear, sensitive, precise, and accurate, providing results that satisfactorily meet the requirements of the pharmaceutical industry, and can be employed for the routine determination of the active pharmaceutical ingredient in the tablet dosage form. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Herman, Krisztian; Szabó, László; Leopold, Loredana F; Chiş, Vasile; Leopold, Nicolae
2011-05-01
A new, simple, and effective approach for multianalyte sequential surface-enhanced Raman scattering (SERS) detection in a flow cell is reported. The silver substrate was prepared in situ by laser-induced photochemical synthesis. By focusing the laser on the 320 μm inner diameter glass capillary at 0.5 ml/min continuous flow of 1 mM silver nitrate and 10 mM sodium citrate mixture, a SERS active silver spot on the inner wall of the glass capillary was prepared in a few seconds. The test analytes, dacarbazine, 4-(2-pyridylazo)resorcinol (PAR) complex with Cu(II), and amoxicillin, were sequentially injected into the flow cell. Each analyte was adsorbed to the silver surface, enabling the recording of high intensity SERS spectra even at 2 s integration times, followed by desorption from the silver surface and being washed away from the capillary. Before and after each analyte passed the detection window, citrate background spectra were recorded, and thus, no "memory effects" perturbed the SERS detection. A good reproducibility of the SERS spectra obtained under flow conditions was observed. The laser-induced photochemically synthesized silver substrate enables high Raman enhancement, is characterized by fast preparation with a high success rate, and represents a valuable alternative for silver colloids as SERS substrate in flow approaches.
Dobolyi, David G; Dodson, Chad S
2013-12-01
Confidence judgments for eyewitness identifications play an integral role in determining guilt during legal proceedings. Past research has shown that confidence in positive identifications is strongly associated with accuracy. Using a standard lineup recognition paradigm, we investigated accuracy using signal detection and ROC analyses, along with the tendency to choose a face with both simultaneous and sequential lineups. We replicated past findings of reduced rates of choosing with sequential as compared to simultaneous lineups, but notably found an accuracy advantage in favor of simultaneous lineups. Moreover, our analysis of the confidence-accuracy relationship revealed two key findings. First, we observed a sequential mistaken identification overconfidence effect: despite an overall reduction in false alarms, confidence for false alarms that did occur was higher with sequential lineups than with simultaneous lineups, with no differences in confidence for correct identifications. This sequential mistaken identification overconfidence effect is an expected byproduct of the use of a more conservative identification criterion with sequential than with simultaneous lineups. Second, we found a steady drop in confidence for mistaken identifications (i.e., foil identifications and false alarms) from the first to the last face in sequential lineups, whereas confidence in and accuracy of correct identifications remained relatively stable. Overall, we observed that sequential lineups are both less accurate and produce higher confidence false identifications than do simultaneous lineups. Given the increasing prominence of sequential lineups in our legal system, our data argue for increased scrutiny and possibly a wholesale reevaluation of this lineup format. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Multilevel sequential Monte Carlo samplers
Beskos, Alexandros; Jasra, Ajay; Law, Kody; ...
2016-08-24
Here, we study the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, leading to a discretisation bias with step-size h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In this context, it is known that the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort needed to estimate expectations for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h_0 > h_1 > ... > h_L. In many practical problems of interest, one cannot achieve i.i.d. sampling from the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that, under appropriate assumptions, the attractive property of a reduction in the computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.
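The telescoping identity at the heart of MLMC can be illustrated on a standard toy problem (an Euler-discretised geometric Brownian motion, not the paper's PDE/SMC setting): the coarsest level is sampled cheaply many times, while each correction term couples a fine and a coarse path through shared noise, so its variance, and hence the number of samples it needs, shrinks with the level.

```python
import random, math

# Minimal MLMC sketch (toy assumption, not the paper's SMC variant):
# estimate E[S_T] for dS = r*S dt + sigma*S dW via Euler discretisation
# with step h_l = T * 2^{-l}, coupling fine/coarse paths by shared noise.
r, sigma, T, S0 = 0.05, 0.2, 1.0, 1.0

def level_sample(l, rng):
    """One sample of P_l - P_{l-1} (or of P_0 when l == 0)."""
    nf = 2 ** l
    hf = T / nf
    sf = sc = S0
    dw_coarse = 0.0
    for step in range(nf):
        dw = rng.gauss(0.0, math.sqrt(hf))
        sf += r * sf * hf + sigma * sf * dw      # fine-level Euler step
        if l > 0:
            dw_coarse += dw
            if step % 2 == 1:                    # two fine steps = one coarse step
                hc = 2 * hf
                sc += r * sc * hc + sigma * sc * dw_coarse
                dw_coarse = 0.0
    return sf - (sc if l > 0 else 0.0)

def mlmc_estimate(L, n_per_level, seed=0):
    """Telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]."""
    rng = random.Random(seed)
    est = 0.0
    for l in range(L + 1):
        n = n_per_level[l]
        est += sum(level_sample(l, rng) for _ in range(n)) / n
    return est

# Many cheap coarse samples, few expensive fine ones.
print(mlmc_estimate(L=4, n_per_level=[20000, 5000, 1200, 300, 80]))
# The exact value here is E[S_T] = S0 * exp(r*T) ≈ 1.0513.
```

The SMC version in the paper replaces the i.i.d. sampling at each level, which is unavailable for posterior distributions, with sequentially reweighted particle approximations.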
Visual perception as retrospective Bayesian decoding from high- to low-level features
Ding, Stephanie; Cueva, Christopher J.; Tsodyks, Misha; Qian, Ning
2017-01-01
When a stimulus is presented, its encoding is known to progress from low- to high-level features. How these features are decoded to produce perception is less clear, and most models assume that decoding follows the same low- to high-level hierarchy of encoding. There are also theories arguing for global precedence, reversed hierarchy, or bidirectional processing, but they are descriptive without quantitative comparison with human perception. Moreover, observers often inspect different parts of a scene sequentially to form overall perception, suggesting that perceptual decoding requires working memory, yet few models consider how working-memory properties may affect decoding hierarchy. We probed decoding hierarchy by comparing absolute judgments of single orientations and relative/ordinal judgments between two sequentially presented orientations. We found that lower-level, absolute judgments failed to account for higher-level, relative/ordinal judgments. However, when ordinal judgment was used to retrospectively decode memory representations of absolute orientations, striking aspects of absolute judgments, including the correlation and forward/backward aftereffects between two reported orientations in a trial, were explained. We propose that the brain prioritizes decoding of higher-level features because they are more behaviorally relevant, and more invariant and categorical, and thus easier to specify and maintain in noisy working memory, and that more reliable higher-level decoding constrains less reliable lower-level decoding. PMID:29073108
Satínský, Dalibor; Huclová, Jitka; Ferreira, Raquel L C; Montenegro, Maria Conceição B S M; Solich, Petr
2006-02-13
Porous monolithic columns show high performance at relatively low pressure. Coupling short monoliths with the sequential injection technique (SIA) offers a new way to introduce a separation step into an otherwise non-separating low-pressure method. In this contribution, a new separation method for the simultaneous determination of ambroxol, methylparaben and benzoic acid was developed, based on a novel reversed-phase sequential injection chromatography (SIC) technique with UV detection. A Chromolith SpeedROD RP-18e 50 × 4.6 mm column with a 10 mm precolumn and a FIAlab 3000 system with a six-port selection valve and a 5 ml syringe were used for the sequential injection chromatographic separations in our study. The mobile phase was acetonitrile-tetrahydrofuran-0.05 M acetic acid (10:10:90, v/v/v), pH 3.75 adjusted with triethylamine, at a flow rate of 0.48 ml min(-1); UV detection was at 245 nm. The analysis time was <11 min. The new SIC method was validated and compared with HPLC. The method was found to be useful for the routine analysis of the active compound ambroxol and the preservatives (methylparaben or benzoic acid) in various pharmaceutical syrups and drops.
Why Are People Bad at Detecting Randomness? A Statistical Argument
ERIC Educational Resources Information Center
Williams, Joseph J.; Griffiths, Thomas L.
2013-01-01
Errors in detecting randomness are often explained in terms of biases and misconceptions. We propose and provide evidence for an account that characterizes the contribution of the inherent statistical difficulty of the task. Our account is based on a Bayesian statistical analysis, focusing on the fact that a random process is a special case of…
USDA-ARS?s Scientific Manuscript database
An algorithm using a Bayesian classifier was developed to automatically detect olive fruit fly infestations in x-ray images of olives. The data set consisted of 249 olives with various degrees of infestation and 161 non-infested olives. Each olive was x-rayed on film and digital images were acquired...
Bayesian network models for error detection in radiotherapy plans
NASA Astrophysics Data System (ADS)
Kalet, Alan M.; Gennari, John H.; Ford, Eric C.; Phillips, Mark H.
2015-04-01
The purpose of this study is to design and develop a probabilistic network for detecting errors in radiotherapy plans for use at the time of initial plan verification. Our group has initiated a multi-pronged approach to reduce these errors. We report on our development of Bayesian models of radiotherapy plans. Bayesian networks consist of joint probability distributions that define the probability of one event, given some set of other known information. Using the networks, we find the probability of obtaining certain radiotherapy parameters, given a set of initial clinical information. A low probability in a propagated network then corresponds to potential errors to be flagged for investigation. To build our networks we first interviewed medical physicists and other domain experts to identify the relevant radiotherapy concepts and their associated interdependencies and to construct a network topology. Next, to populate the network’s conditional probability tables, we used the Hugin Expert software to learn parameter distributions from a subset of de-identified data derived from a radiation oncology based clinical information database system. These data represent 4990 unique prescription cases over a 5 year period. Under test case scenarios with approximately 1.5% introduced error rates, network performance produced areas under the ROC curve of 0.88, 0.98, and 0.89 for the lung, brain and female breast cancer error detection networks, respectively. Comparison of the brain network to human experts performance (AUC of 0.90 ± 0.01) shows the Bayes network model performs better than domain experts under the same test conditions. Our results demonstrate the feasibility and effectiveness of comprehensive probabilistic models as part of decision support systems for improved detection of errors in initial radiotherapy plan verification procedures.
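The flagging logic described above, i.e., propagate known clinical information through learned conditional probability tables and flag low-probability parameter combinations, can be sketched with a toy network. The sites, techniques, probabilities, and threshold below are invented for illustration, not values from the paper's Hugin-learned networks:

```python
# Toy Bayesian-network-style plan check (illustrative numbers, not the
# paper's learned tables): score P(technique, dose_band | site) and flag
# plans whose probability falls below a threshold.
P_technique = {  # P(technique | site)
    "lung":  {"VMAT": 0.6, "3DCRT": 0.35, "SRS": 0.05},
    "brain": {"VMAT": 0.5, "3DCRT": 0.2,  "SRS": 0.3},
}
P_dose = {       # P(dose_band | site)
    "lung":  {"low": 0.2, "standard": 0.75, "high": 0.05},
    "brain": {"low": 0.1, "standard": 0.6,  "high": 0.3},
}

def plan_probability(site, technique, dose_band):
    # Conditional independence of technique and dose given site is the
    # (simplifying) network structure assumed in this sketch.
    return P_technique[site].get(technique, 0.0) * P_dose[site].get(dose_band, 0.0)

def flag(site, technique, dose_band, threshold=0.05):
    """Return (flagged?, probability) for a proposed plan."""
    p = plan_probability(site, technique, dose_band)
    return p < threshold, p

print(flag("lung", "VMAT", "standard"))   # common combination -> not flagged
print(flag("lung", "SRS", "high"))        # rare combination -> flagged
```

A real network would have many more nodes and dependencies, with the tables learned from a clinical database as the paper describes; the design point is the same, though: low propagated probability marks a plan for human review.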
Acoustic emission based damage localization in composites structures using Bayesian identification
NASA Astrophysics Data System (ADS)
Kundu, A.; Eaton, M. J.; Al-Jumali, S.; Sikdar, S.; Pullin, R.
2017-05-01
Acoustic emission based damage detection in composite structures relies on detecting ultra-high-frequency packets of acoustic waves emitted from damage sources (such as fibre breakage and fatigue fracture) with a network of distributed sensors. This non-destructive monitoring scheme requires solving an inverse problem in which the measured signals are linked back to the location of the source, which in turn enables rapid deployment of mitigative measures. The significant uncertainty associated with the operating conditions and measurements makes the problem of damage identification quite challenging. The uncertainties stem from the fact that the measured signals are affected by irregular geometries, manufacturing imprecision, imperfect boundary conditions, and existing damage or structural degradation. This work aims to tackle these uncertainties within a framework of automated probabilistic damage detection. The method trains a probabilistic model of the parametrized input and output of the acoustic emission system with experimental data to give probabilistic descriptors of damage locations. A response surface modelling the acoustic emission as a function of parametrized damage signals collected from sensors is calibrated with a training dataset using Bayesian inference and is then used to deduce damage locations in the online monitoring phase. During online monitoring, the spatially correlated time data are used in conjunction with the calibrated acoustic emission model to infer a probabilistic description of the acoustic emission source within a hierarchical Bayesian inference framework. The methodology is tested on a composite structure consisting of a carbon fibre panel with stiffeners, with damage source behaviour simulated experimentally using standard Hsu-Nielsen (H-N) sources.
The methodology presented in this study would be applicable in its current form to structural damage detection under varying operational loads, and this will be investigated in future studies.
Estimating the hatchery fraction of a natural population: a Bayesian approach
Barber, Jarrett J.; Gerow, Kenneth G.; Connolly, Patrick J.; Singh, Sarabdeep
2011-01-01
There is strong and growing interest in estimating the proportion of hatchery fish in a natural population (the hatchery fraction). In a sample of fish from the relevant population, some are observed to be marked, indicating their origin as hatchery fish. The observed proportion of marked fish is usually less than the actual hatchery fraction, since the observed proportion is determined by the proportion originally marked, the differential survival (usually lower) of marked fish relative to unmarked hatchery fish, and the rates of mark retention and detection. Bayesian methods can work well in a setting such as this, in which empirical data are limited but considerable expert judgment regarding these values may be available. We explored Bayesian estimation of the hatchery fraction using Markov chain Monte Carlo methods. Based on our findings, we created an interactive Excel tool to implement the algorithm, which we have made available for free.
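The structure of the problem can be sketched with a crude Monte Carlo posterior. The model below is an assumption for illustration, not the paper's: the probability a sampled fish shows a mark is p = H·q, where H is the hatchery fraction and q collapses marking rate, mark retention/detection, and a relative-survival adjustment, each given a Beta prior standing in for expert judgment.

```python
import random

# Minimal importance-sampling sketch (assumed model, invented priors):
#   P(marked) = H * q,  q = mark_rate * retention_detection * survival_adj
def posterior_hatchery_fraction(n_sampled, n_marked, draws=100_000, seed=1):
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(draws):
        H = rng.random()                     # flat prior on the hatchery fraction
        q = (rng.betavariate(80, 20)         # ~80% of hatchery fish originally marked
             * rng.betavariate(90, 10)       # ~90% retain a detectable mark
             * rng.betavariate(85, 15))      # marked fish survive somewhat worse
        p = H * q
        # Binomial likelihood of the observed marks, used as importance weight.
        w = (p ** n_marked) * ((1 - p) ** (n_sampled - n_marked))
        num += w * H
        den += w
    return num / den  # posterior mean of H

# 300 fish sampled, 60 marked: the naive proportion is 0.20, but the
# posterior hatchery fraction is higher once imperfect marking, retention
# and survival are accounted for.
print(posterior_hatchery_fraction(300, 60))
```

The paper's MCMC approach plays the same role with a fuller model; the key behaviour, i.e., the estimated hatchery fraction exceeding the raw marked proportion, is visible even in this sketch.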
NASA Astrophysics Data System (ADS)
Duggento, Andrea; Stankovski, Tomislav; McClintock, Peter V. E.; Stefanovska, Aneta
2012-12-01
Living systems have time-evolving interactions that, until recently, could not be identified accurately from recorded time series in the presence of noise. Stankovski et al. [Phys. Rev. Lett. 109, 024101 (2012)] introduced a method based on dynamical Bayesian inference that facilitates the simultaneous detection of time-varying synchronization, directionality of influence, and coupling functions. It can distinguish unsynchronized dynamics from noise-induced phase slips. The method is based on phase dynamics, with Bayesian inference of the time-evolving parameters being achieved by shaping the prior densities to incorporate knowledge of previous samples. We now present the method in detail using numerically generated data, data from an analog electronic circuit, and cardiorespiratory data. We also generalize the method to encompass networks of interacting oscillators and thus demonstrate its applicability to small-scale networks.
Bayesian median regression for temporal gene expression data
NASA Astrophysics Data System (ADS)
Yu, Keming; Vinciotti, Veronica; Liu, Xiaohui; 't Hoen, Peter A. C.
2007-09-01
Most of the existing methods for the identification of biologically interesting genes in a temporal expression profiling dataset do not fully exploit the temporal ordering in the dataset and are based on normality assumptions for the gene expression. In this paper, we introduce a Bayesian median regression model to detect genes whose temporal profile is significantly different across a number of biological conditions. The regression model is defined by a polynomial function where both time and condition effects as well as interactions between the two are included. MCMC-based inference returns the posterior distribution of the polynomial coefficients. From this a simple Bayes factor test is proposed to test for significance. The estimation of the median rather than the mean, and within a Bayesian framework, increases the robustness of the method compared to a Hotelling T2-test previously suggested. This is shown on simulated data and on muscular dystrophy gene expression data.
Using statistical anomaly detection models to find clinical decision support malfunctions.
Ray, Soumi; McEvoy, Dustin S; Aaron, Skye; Hickman, Thu-Trang; Wright, Adam
2018-05-11
Malfunctions in Clinical Decision Support (CDS) systems occur due to a multitude of reasons, and often go unnoticed, leading to potentially poor outcomes. Our goal was to identify malfunctions within CDS systems. We evaluated 6 anomaly detection models: (1) Poisson Changepoint Model, (2) Autoregressive Integrated Moving Average (ARIMA) Model, (3) Hierarchical Divisive Changepoint (HDC) Model, (4) Bayesian Changepoint Model, (5) Seasonal Hybrid Extreme Studentized Deviate (SHESD) Model, and (6) E-Divisive with Median (EDM) Model and characterized their ability to find known anomalies. We analyzed 4 CDS alerts with known malfunctions from the Longitudinal Medical Record (LMR) and Epic® (Epic Systems Corporation, Madison, WI, USA) at Brigham and Women's Hospital, Boston, MA. The 4 rules recommend lead testing in children, aspirin therapy in patients with coronary artery disease, pneumococcal vaccination in immunocompromised adults and thyroid testing in patients taking amiodarone. Poisson changepoint, ARIMA, HDC, Bayesian changepoint and the SHESD model were able to detect anomalies in an alert for lead screening in children and in an alert for pneumococcal conjugate vaccine in immunocompromised adults. EDM was able to detect anomalies in an alert for monitoring thyroid function in patients on amiodarone. Malfunctions/anomalies occur frequently in CDS alert systems. It is important to be able to detect such anomalies promptly. Anomaly detection models are useful tools to aid such detections.
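A single-changepoint Poisson model, in the spirit of model (1) above, can be sketched directly: scan each candidate split of a daily alert-count series and keep the one that maximises the two-segment Poisson likelihood. This is an illustrative sketch, not the implementation used in the study, and the count series is invented.

```python
import math

def poisson_loglik(counts):
    """Poisson log-likelihood of a segment under its own MLE rate."""
    lam = sum(counts) / len(counts)
    if lam == 0:
        return 0.0  # all-zero segment: log-likelihood of Poisson(0) data
    return sum(c * math.log(lam) - lam - math.lgamma(c + 1) for c in counts)

def best_changepoint(counts):
    """Index of the split maximising the two-segment likelihood, or None."""
    best_t, best_ll = None, poisson_loglik(counts)  # no-change baseline
    for t in range(1, len(counts)):
        ll = poisson_loglik(counts[:t]) + poisson_loglik(counts[t:])
        if ll > best_ll:
            best_t, best_ll = t, ll
    return best_t

# An alert firing ~20/day that silently drops to ~2/day (e.g. a broken rule):
series = [22, 19, 21, 20, 18, 23, 20, 2, 1, 3, 2, 0, 2, 1]
print(best_changepoint(series))  # detects the drop at index 7
```

Production changepoint methods add penalties or priors to avoid overfitting and handle multiple changepoints, which is what the hierarchical divisive and Bayesian changepoint models evaluated in the paper provide.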
NASA Astrophysics Data System (ADS)
Montzka, Carsten; Hendricks Franssen, Harrie-Jan; Moradkhani, Hamid; Pütz, Thomas; Han, Xujun; Vereecken, Harry
2013-04-01
An adequate description of soil hydraulic properties is essential for good performance of hydrological forecasts. So far, several studies have shown that data assimilation can reduce parameter uncertainty by considering soil moisture observations. However, these observations and also the model forcings were recorded with a specific measurement error. It seems a logical step to base state updating and parameter estimation on observations made at multiple time steps, in order to reduce the influence of outliers at single time steps given measurement errors and unknown model forcings. Such outliers could result in erroneous state estimation as well as inadequate parameters. This has been one of the reasons to use a smoothing technique, as implemented for Bayesian data assimilation methods such as the Ensemble Kalman Filter (i.e. the Ensemble Kalman Smoother). Recently, an ensemble-based smoother has been developed for state updating with a SIR particle filter. However, this method has not been used for dual state-parameter estimation. In this contribution we present a particle smoother that sequentially smooths particle weights for state and parameter resampling within a time window, as opposed to the single time step data assimilation used in filtering techniques. This can be seen as an intermediate variant between a parameter estimation technique using global optimization, with estimation of single parameter sets valid for the whole period, and sequential Monte Carlo techniques, with estimation of parameter sets evolving from one time step to another. The aims are i) to improve the forecast of evaporation and groundwater recharge by estimating hydraulic parameters, and ii) to reduce the impact of single erroneous model inputs/observations through smoothing. In order to validate the performance of the proposed method in a real world application, the experiment is conducted in a lysimeter environment.
Bluemel, Christina; Krebs, Markus; Polat, Bülent; Linke, Fränze; Eiber, Matthias; Samnick, Samuel; Lapa, Constantin; Lassmann, Michael; Riedmiller, Hubertus; Czernin, Johannes; Rubello, Domenico; Bley, Thorsten; Kropf, Saskia; Wester, Hans-Juergen; Buck, Andreas K; Herrmann, Ken
2016-07-01
Investigating the value of 68Ga-PSMA-PET/CT in biochemically recurring prostate cancer patients with negative 18F-choline-PET/CT. One hundred thirty-nine consecutive patients with biochemical recurrence after curative (surgery and/or radiotherapy) therapy were offered participation in this sequential clinical imaging approach. Patients first underwent an 18F-choline-PET/CT. If negative, an additional 68Ga-PSMA-PET/CT was offered. One hundred twenty-five of 139 eligible patients were included in the study; 32 patients underwent additional 68Ga-PSMA-PET/CT. Patients with equivocal findings (n = 5) on 18F-choline-PET/CT and those who declined the additional 68Ga-PSMA-PET/CT (n = 9) were excluded. Images were analyzed visually for the presence of suspicious lesions. Findings on PET/CT were correlated with PSA level, PSA doubling time (dt), and PSA velocity (vel). The overall detection rates were 85.6% (107/125) for the sequential imaging approach and 74.4% (93/125) for 18F-choline-PET/CT alone. 68Ga-PSMA-PET/CT detected sites of recurrence in 43.8% (14/32) of the choline-negative patients. Detection rates of the sequential imaging approach and 18F-choline-PET/CT alone increased with higher serum PSA levels and PSA vel. Subgroup analysis of 68Ga-PSMA-PET/CT in 18F-choline negative patients revealed detection rates of 28.6%, 45.5%, and 71.4% for PSA levels of 0.2 or greater to less than 1 ng/mL, 1 to 2 ng/mL, and greater than 2 ng/mL, respectively. The sequential imaging approach designed to limit 68Ga-PSMA imaging to patients with negative choline scans resulted in high detection rates. 68Ga-PSMA-PET/CT identified sites of recurrent disease in 43.8% of the patients with negative 18F-choline PET/CT scans.
Bluemel, Christina; Krebs, Markus; Polat, Bülent; Linke, Fränze; Eiber, Matthias; Samnick, Samuel; Lapa, Constantin; Lassmann, Michael; Riedmiller, Hubertus; Czernin, Johannes; Rubello, Domenico; Bley, Thorsten; Kropf, Saskia; Wester, Hans-Juergen; Buck, Andreas K.; Herrmann, Ken
2016-01-01
Purpose Investigating the value of 68Ga-PSMA-PET/CT in biochemically recurring prostate cancer patients with negative 18F-choline-PET/CT. Patients and Methods One hundred thirty-nine consecutive patients with biochemical recurrence after curative (surgery and/or radiotherapy) therapy were offered participation in this sequential clinical imaging approach. Patients first underwent an 18F-choline-PET/CT. If negative, an additional 68Ga-PSMA-PET/CT was offered. One hundred twenty-five of 139 eligible patients were included in the study; 32 patients underwent additional 68Ga-PSMA-PET/CT. Patients with equivocal findings (n = 5) on 18F-choline-PET/CT and those who declined the additional 68Ga-PSMA-PET/CT (n = 9) were excluded. Images were analyzed visually for the presence of suspicious lesions. Findings on PET/CT were correlated with PSA level, PSA doubling time (dt), and PSA velocity (vel). Results The overall detection rates were 85.6% (107/125) for the sequential imaging approach and 74.4% (93/125) for 18F-choline-PET/CT alone. 68Ga-PSMA-PET/CT detected sites of recurrence in 43.8% (14/32) of the choline-negative patients. Detection rates of the sequential imaging approach and 18F-choline-PET/CT alone increased with higher serum PSA levels and PSA vel. Subgroup analysis of 68Ga-PSMA-PET/CT in 18F-choline negative patients revealed detection rates of 28.6%, 45.5%, and 71.4% for PSA levels of 0.2 or greater to less than 1 ng/mL, 1 to 2 ng/mL, and greater than 2 ng/mL, respectively. Conclusions The sequential imaging approach designed to limit 68Ga-PSMA imaging to patients with negative choline scans resulted in high detection rates. 68Ga-PSMA-PET/CT identified sites of recurrent disease in 43.8% of the patients with negative 18F-choline PET/CT scans. PMID:26975008
Prion Amplification and Hierarchical Bayesian Modeling Refine Detection of Prion Infection
NASA Astrophysics Data System (ADS)
Wyckoff, A. Christy; Galloway, Nathan; Meyerett-Reid, Crystal; Powers, Jenny; Spraker, Terry; Monello, Ryan J.; Pulford, Bruce; Wild, Margaret; Antolin, Michael; Vercauteren, Kurt; Zabel, Mark
2015-02-01
Prions are unique infectious agents that replicate without a genome and cause neurodegenerative diseases that include chronic wasting disease (CWD) of cervids. Immunohistochemistry (IHC) is currently considered the gold standard for diagnosis of a prion infection but may be insensitive to early or sub-clinical CWD that are important to understanding CWD transmission and ecology. We assessed the potential of serial protein misfolding cyclic amplification (sPMCA) to improve detection of CWD prior to the onset of clinical signs. We analyzed tissue samples from free-ranging Rocky Mountain elk (Cervus elaphus nelsoni) and used hierarchical Bayesian analysis to estimate the specificity and sensitivity of IHC and sPMCA conditional on simultaneously estimated disease states. Sensitivity estimates were higher for sPMCA (99.51%, credible interval (CI) 97.15-100%) than IHC of obex (brain stem, 76.56%, CI 57.00-91.46%) or retropharyngeal lymph node (90.06%, CI 74.13-98.70%) tissues, or both (98.99%, CI 90.01-100%). Our hierarchical Bayesian model predicts the prevalence of prion infection in this elk population to be 18.90% (CI 15.50-32.72%), compared to previous estimates of 12.90%. Our data reveal a previously unidentified sub-clinical prion-positive portion of the elk population that could represent silent carriers capable of significantly impacting CWD ecology.
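The core inferential move, i.e., correcting apparent prevalence for an imperfect test, can be sketched in a drastically simplified form. The version below assumes a single test with fixed sensitivity and specificity and a flat prior on prevalence, evaluated on a grid; the paper's hierarchical model instead estimates sensitivity, specificity, and latent disease states jointly. Sample sizes and test characteristics below are illustrative.

```python
# Grid-posterior sketch (assumed single-test model, not the paper's
# hierarchical one): P(test+) = pi*sens + (1 - pi)*(1 - spec).
def prevalence_posterior_mean(n, n_positive, sens, spec, grid=10_000):
    num = den = 0.0
    for i in range(1, grid):
        pi = i / grid                               # flat prior on prevalence
        p_pos = pi * sens + (1 - pi) * (1 - spec)   # P(test+) given prevalence
        like = (p_pos ** n_positive) * ((1 - p_pos) ** (n - n_positive))
        num += like * pi
        den += like
    return num / den

# 200 animals, 30 test positive (invented numbers). A less sensitive assay
# (IHC-like) implies a higher true prevalence than the raw 15% apparent
# prevalence; a near-perfectly sensitive assay (sPMCA-like) implies less
# of a correction.
print(prevalence_posterior_mean(200, 30, sens=0.77, spec=0.99))
print(prevalence_posterior_mean(200, 30, sens=0.995, spec=0.99))
```

This is the same direction of effect the paper reports: once IHC's imperfect sensitivity is modeled, the estimated prevalence of infection rises above earlier estimates.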
A Bayesian blind survey for cold molecular gas in the Universe
NASA Astrophysics Data System (ADS)
Lentati, L.; Carilli, C.; Alexander, P.; Walter, F.; Decarli, R.
2014-10-01
A new Bayesian method for performing an image domain search for line-emitting galaxies is presented. The method uses both spatial and spectral information to robustly determine the source properties, employing either simple Gaussian, or other physically motivated models whilst using the evidence to determine the probability that the source is real. In this paper, we describe the method, and its application to both a simulated data set, and a blind survey for cold molecular gas using observations of the Hubble Deep Field-North taken with the Plateau de Bure Interferometer. We make a total of six robust detections in the survey, five of which have counterparts in other observing bands. We identify the most secure detections found in a previous investigation, while finding one new probable line source with an optical ID not seen in the previous analysis. This study acts as a pilot application of Bayesian statistics to future searches to be carried out both for low-J CO transitions of high-redshift galaxies using the Jansky Very Large Array (JVLA), and at millimetre wavelengths with Atacama Large Millimeter/submillimeter Array (ALMA), enabling the inference of robust scientific conclusions about the history of the molecular gas properties of star-forming galaxies in the Universe through cosmic time.
Production of single heavy charged leptons at a linear collider
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Pree, Erin; Sher, Marc; Turan, Ismail
2008-05-01
A sequential fourth generation of quarks and leptons is allowed by precision electroweak constraints if the mass splitting between the heavy quarks is between 50 and 80 GeV. Although heavy quarks can be easily detected at the LHC, it is very difficult to detect a sequential heavy charged lepton, L, due to large backgrounds. Should the L mass be above 250 GeV, it cannot be pair-produced at a 500 GeV ILC. We calculate the cross section for the one-loop process e+e- → Lτ. Although the cross section is small, it may be detectable. We also consider contributions from the two-Higgs-doublet model and the Randall-Sundrum model, in which case the cross section can be substantially higher.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, X; Liu, S; Kalet, A
Purpose: The purpose of this work was to investigate the ability of a machine-learning based probabilistic approach to detect radiotherapy treatment plan anomalies given initial disease class information. Methods: In total we obtained 1112 unique treatment plans, with five plan parameters and disease information, from a Mosaiq treatment management system database for use in the study. The plan parameters include prescription dose, fractions, fields, modality and technique. The disease information includes disease site, and T, M and N disease stages. A Bayesian network method was employed to model the probabilistic relationships between tumor disease information, plan parameters and an anomaly flag. A Bayesian learning method with a Dirichlet prior was used to learn the joint probabilities between dependent variables in error-free plan data and in data with artificially induced anomalies. In the study, we randomly sampled data with anomalies in a specified anomaly space. We tested the approach with three groups of plan anomalies: improper concurrence of the values of all five plan parameters, improper concurrence of the values of any two of the five parameters, and all single plan parameter value anomalies. In total, 16 types of plan anomalies were covered by the study. For each type, we trained an individual Bayesian network. Results: We found that the true positive rate (recall) and positive predictive value (precision) for detecting concurrence anomalies of five plan parameters in new patient cases were 94.45±0.26% and 93.76±0.39%, respectively. For the other 15 types of plan anomalies, the average recall and precision were 93.61±2.57% and 93.78±3.54%, respectively. The computation time to detect a plan anomaly of each type in a new plan is ∼0.08 seconds. Conclusion: The proposed method for treatment plan anomaly detection was found effective in the initial tests.
The results suggest that this type of model could be applied to develop plan anomaly detection tools to assist manual and automated plan checks. The senior author received research grants from ViewRay Inc. and Varian Medical Systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, J.A.; Christie, M.J.; Sandler, M.P.
1988-08-01
Preoperative exclusion or confirmation of periprosthetic infection is essential for correct surgical management of patients with suspected infected joint prostheses. The sensitivity and specificity of 111In-WBC imaging in the diagnosis of infected total joint prostheses was examined in 28 patients and compared with sequential 99mTc-HDP/111In-WBC scintigraphy and aspiration arthrography. The sensitivity of preoperative aspiration cultures was 12%, with a specificity of 81% and an accuracy of 58%. The sensitivity of 111In-WBC imaging alone was 100%, with a specificity of 50% and an accuracy of 65%. When correlated with the bone scintigraphy and read as sequential 99mTc-HDP/111In-WBC imaging, the sensitivity was 88%, specificity 95%, and accuracy 93%. This study demonstrates that 111In-WBC imaging is an extremely sensitive imaging modality for the detection of occult infection of joint prostheses. It also demonstrates the necessity of correlating 111In-WBC images with 99mTc-HDP skeletal scintigraphy in the detection of occult periprosthetic infection.
The impact of eyewitness identifications from simultaneous and sequential lineups.
Wright, Daniel B
2007-10-01
Recent guidelines in the US allow either simultaneous or sequential lineups to be used for eyewitness identification. This paper investigates how potential jurors weight the probative value of the different outcomes from both of these types of lineups. Participants (n=340) were given a description of a case that included some exonerating and some incriminating evidence. There was either a simultaneous or a sequential lineup. Depending on the condition, an eyewitness chose the suspect, chose a filler, or made no identification. The participant had to judge the guilt of the suspect and decide whether to render a guilty verdict. For both simultaneous and sequential lineups an identification had a large effect, increasing the probability of a guilty verdict. There were no reliable effects detected between making no identification and identifying a filler. The effect sizes were similar for simultaneous and sequential lineups. These findings are important for judges and other legal professionals to know for trials involving lineup identifications.
Assessment of Data Fusion Algorithms for Earth Observation Change Detection Processes.
Molina, Iñigo; Martinez, Estibaliz; Morillo, Carmen; Velasco, Jesus; Jara, Alvaro
2016-09-30
In this work a parametric multi-sensor Bayesian data fusion approach and a Support Vector Machine (SVM) are used for a change detection problem. For this purpose two sets of SPOT5-PAN images have been used, from which Change Detection Indices (CDIs) are calculated. To minimize radiometric differences, a methodology based on zonal "invariant features" is suggested. The choice of one CDI over another for a change detection process is a subjective task, as each CDI is probably more or less sensitive to certain types of changes. This idea might likewise be employed to create and improve a "change map", which can be accomplished by means of the CDI's informational content. For this purpose, information metrics such as the Shannon entropy and "specific information" have been used to weight the change and no-change categories contained in a given CDI, and these weights are then introduced into the Bayesian information fusion algorithm. Furthermore, the parameters of the probability density functions (pdfs) that best fit the involved categories have also been estimated. By contrast, these considerations are not necessary for mapping procedures based on the discriminant functions of an SVM. This work has confirmed the capabilities of the probabilistic information fusion procedure under these circumstances.
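The entropy-based weighting idea can be sketched on a binary change/no-change source. The class probabilities and the normalised-weight scheme below are invented for illustration; they are not values or the exact weighting rule from the study:

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Two hypothetical change-detection indices with different class balance:
cdi_a = [0.5, 0.5]   # maximally uncertain: entropy = 1 bit
cdi_b = [0.9, 0.1]   # mostly no-change: lower entropy

w_a, w_b = entropy(cdi_a), entropy(cdi_b)
# One possible fusion scheme: normalise the entropies into weights that
# express how much information each CDI carries about the two classes.
weights = [w / (w_a + w_b) for w in (w_a, w_b)]
print(weights)
```

In the Bayesian fusion algorithm of the paper, such information measures modulate the contribution of each CDI's change and no-change evidence, rather than being applied as a single global weight per index as in this sketch.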
None of the above: A Bayesian account of the detection of novel categories.
Navarro, Daniel J; Kemp, Charles
2017-10-01
Every time we encounter a new object, action, or event, there is some chance that we will need to assign it to a novel category. We describe and evaluate a class of probabilistic models that detect when an object belongs to a category that has not previously been encountered. The models incorporate a prior distribution that is influenced by the distribution of previous objects among categories, and we present 2 experiments that demonstrate that people are also sensitive to this distributional information. Two additional experiments confirm that distributional information is combined with similarity when both sources of information are available. We compare our approach to previous models of unsupervised categorization and to several heuristic-based models, and find that a hierarchical Bayesian approach provides the best account of our data. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
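A minimal sketch of the distributional-prior idea, assuming a Chinese-restaurant-process-style prior over categories; the function names and the specific prior/similarity combination rule are illustrative, not the paper's exact model:

```python
def p_novel_category(category_counts, alpha=1.0):
    """Prior probability that the next object belongs to an unseen category
    under a Chinese-restaurant-process-style prior: alpha / (n + alpha),
    where n is the number of objects seen so far."""
    n = sum(category_counts)
    return alpha / (n + alpha)

def posterior_novel(category_counts, like_novel, like_known, alpha=1.0):
    """Combine the distributional prior with similarity information:
    like_known is the object's likelihood under the best existing category,
    like_novel its likelihood under a broad 'novel category' distribution."""
    p_new = p_novel_category(category_counts, alpha)
    num = p_new * like_novel
    return num / (num + (1.0 - p_new) * like_known)
```

Note how the posterior depends on both sources: the more objects already observed (larger n), the less likely a novel category a priori, while a poor fit to every known category (small like_known) pushes the decision toward "none of the above".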
Intuitive Logic Revisited: New Data and a Bayesian Mixed Model Meta-Analysis
Singmann, Henrik; Klauer, Karl Christoph; Kellen, David
2014-01-01
Recent research on syllogistic reasoning suggests that the logical status (valid vs. invalid) of even difficult syllogisms can be intuitively detected via differences in conceptual fluency between logically valid and invalid syllogisms when participants are asked to rate how much they like a conclusion following from a syllogism (Morsanyi & Handley, 2012). These claims of an intuitive logic are at odds with most theories of syllogistic reasoning, which posit that detecting the logical status of difficult syllogisms requires effortful and deliberate cognitive processes. We present new data replicating the effects reported by Morsanyi and Handley, but show that this effect is eliminated when controlling for a possible confound in terms of conclusion content. Additionally, we reanalyze three studies without this confound with a Bayesian mixed model meta-analysis (i.e., controlling for participant and item effects), which provides evidence for the null hypothesis and against Morsanyi and Handley's claim. PMID:24755777
Gaussian process surrogates for failure detection: A Bayesian experimental design approach
NASA Astrophysics Data System (ADS)
Wang, Hongqiao; Lin, Guang; Li, Jinglai
2016-05-01
An important task of uncertainty quantification is to identify the probability of undesired events, in particular, system failures, caused by various sources of uncertainties. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation that the underlying computer models are extremely expensive, and in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inferences of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method is demonstrated by both academic and practical examples.
Optical+Near-IR Bayesian Classification of Quasars
NASA Astrophysics Data System (ADS)
Mehta, Sajjan S.; Richards, G. T.; Myers, A. D.
2011-05-01
We describe the details of an optimal Bayesian classification of quasars with combined optical+near-IR photometry from the SDSS and UKIDSS LAS surveys. Using only deep co-added SDSS photometry from the "Stripe 82" region and requiring full four-band UKIDSS detections, we reliably identify 2665 quasar candidates with a computed efficiency in excess of 99%. Relaxing the data constraints to combinations of two-band detections yields up to 6424 candidates with minimal trade-off in completeness and efficiency. The completeness and efficiency of the sample are investigated with existing spectra from the SDSS, 2SLAQ, and AUS surveys in addition to recent single-slit observations from Palomar Observatory, which revealed 22 quasars from a subsample of 29 high-z candidates. SDSS-III/BOSS observations will allow further exploration of the completeness/efficiency of the sample over 2.2
2010-01-01
Background: Methods for the calculation and application of quantitative electromyographic (EMG) statistics for the characterization of EMG data detected from forearm muscles of individuals with and without pain associated with repetitive strain injury are presented. Methods: A classification procedure using a multi-stage application of Bayesian inference is presented that characterizes a set of motor unit potentials acquired using needle electromyography. The utility of this technique in characterizing EMG data obtained from both normal individuals and those presenting with symptoms of "non-specific arm pain" is explored and validated. The efficacy of the Bayesian technique is compared with simple voting methods. Results: The aggregate Bayesian classifier presented is found to perform with accuracy equivalent to that of majority voting on the test data, with an overall accuracy greater than 0.85. Theoretical foundations of the technique are discussed and related to the observations found. Conclusions: Aggregation of motor unit potential conditional probability distributions, estimated using quantitative electromyographic analysis, may be successfully used to perform electrodiagnostic characterization of "non-specific arm pain." It is expected that these techniques will also be able to be applied to other types of electrodiagnostic data. PMID:20156353
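A hedged sketch of how per-motor-unit-potential posteriors might be aggregated and compared with a majority-voting baseline, assuming two classes and conditional independence across motor unit potentials (naive-Bayes pooling; not necessarily the exact multi-stage aggregation used in the study):

```python
import math

def aggregate_bayes(per_mup_posteriors, prior=(0.5, 0.5)):
    """Naive-Bayes aggregation of per-motor-unit-potential (MUP) posteriors
    over two classes, assuming conditional independence across MUPs; each
    posterior is converted back to a likelihood contribution in log space."""
    log_score = [math.log(p) for p in prior]
    for post in per_mup_posteriors:
        for c, p in enumerate(post):
            log_score[c] += math.log(max(p, 1e-12)) - math.log(prior[c])
    m = max(log_score)  # subtract the max for numerical stability
    exp_scores = [math.exp(s - m) for s in log_score]
    z = sum(exp_scores)
    return [s / z for s in exp_scores]

def majority_vote(per_mup_posteriors):
    """Simple voting baseline: each MUP votes for its most probable class."""
    votes = [max(range(len(p)), key=lambda c: p[c]) for p in per_mup_posteriors]
    return max(set(votes), key=votes.count)
```

On a characterization like [(0.6, 0.4), (0.7, 0.3), (0.4, 0.6)], both approaches favor class 0, but the Bayesian aggregate additionally reports how confident that call is.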
Bayesian reconstruction and use of anatomical a priori information for emission tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowsher, J.E.; Johnson, V.E.; Turkington, T.G.
1996-10-01
A Bayesian method is presented for simultaneously segmenting and reconstructing emission computed tomography (ECT) images and for incorporating high-resolution, anatomical information into those reconstructions. The anatomical information is often available from other imaging modalities such as computed tomography (CT) or magnetic resonance imaging (MRI). The Bayesian procedure models the ECT radiopharmaceutical distribution as consisting of regions, such that radiopharmaceutical activity is similar throughout each region. It estimates the number of regions, the mean activity of each region, and the region classification and mean activity of each voxel. Anatomical information is incorporated by assigning higher prior probabilities to ECT segmentations in which each ECT region stays within a single anatomical region. This approach is effective because anatomical tissue type often strongly influences radiopharmaceutical uptake. The Bayesian procedure is evaluated using physically acquired single-photon emission computed tomography (SPECT) projection data and MRI for the three-dimensional (3-D) Hoffman brain phantom. A clinically realistic count level is used. A cold lesion within the brain phantom is created during the SPECT scan but not during the MRI to demonstrate that the estimation procedure can detect ECT structure that is not present anatomically.
Somnam, Sarawut; Jakmunee, Jaroon; Grudpan, Kate; Lenghor, Narong; Motomizu, Shoji
2008-12-01
An automated hydrodynamic sequential injection (HSI) system with spectrophotometric detection was developed. Thanks to the hydrodynamic injection principle, simple devices can be used for introducing reproducible microliter volumes of both sample and reagent into the flow channel to form stacked zones in a similar fashion to those in a sequential injection system. The zones were then pushed to the detector and a peak profile was recorded. The determination of nitrite and nitrate in water samples by employing the Griess reaction was chosen as a model. Calibration graphs with linearity in the range of 0.7-40 μM were obtained for both nitrite and nitrate. Detection limits were found to be 0.3 μM NO2- and 0.4 μM NO3-, respectively, with a sample throughput of 20 h-1 for consecutive determination of both species. The developed system was successfully applied to the analysis of water samples, employing simple and cost-effective instrumentation and offering a higher degree of automation and low chemical consumption.
NASA Technical Reports Server (NTRS)
Wightman, J. M.
1973-01-01
Sequential band-6 imagery of the Zambesi Basin of southern Africa recorded substantial changes in burn patterns resulting from late dry season grass fires. One example from northern Botswana, indicates that a fire consumed approximately 70 square miles of grassland over a 24-hour period. Another example from western Zambia indicates increased fire activity over a 19-day period. Other examples clearly define the area of widespread grass fires in Angola, Botswana, Rhodesia and Zambia. From the fire patterns visible on the sequential portions of the imagery, and the time intervals involved, the rates of spread of the fires are estimated and compared with estimates derived from experimental burning plots in Zambia and Canada. It is concluded that sequential ERTS-1 imagery, of the quality studied, clearly provides the information needed to detect and map grass fires and to monitor their rates of spread in this region during the late dry season.
Perceptual Grouping Affects Pitch Judgments Across Time and Frequency
Borchert, Elizabeth M. O.; Micheyl, Christophe; Oxenham, Andrew J.
2010-01-01
Pitch, the perceptual correlate of fundamental frequency (F0), plays an important role in speech, music and animal vocalizations. Changes in F0 over time help define musical melodies and speech prosody, while comparisons of simultaneous F0 are important for musical harmony, and for segregating competing sound sources. This study compared listeners’ ability to detect differences in F0 between pairs of sequential or simultaneous tones that were filtered into separate, non-overlapping spectral regions. The timbre differences induced by filtering led to poor F0 discrimination in the sequential, but not the simultaneous, conditions. Temporal overlap of the two tones was not sufficient to produce good performance; instead performance appeared to depend on the two tones being integrated into the same perceptual object. The results confirm the difficulty of comparing the pitches of sequential sounds with different timbres and suggest that, for simultaneous sounds, pitch differences may be detected through a decrease in perceptual fusion rather than an explicit coding and comparison of the underlying F0s. PMID:21077719
Diard, Julien; Rynik, Vincent; Lorenceau, Jean
2013-01-01
This research involves a novel apparatus, in which the user is presented with an illusion inducing visual stimulus. The user perceives illusory movement that can be followed by the eye, so that smooth pursuit eye movements can be sustained in arbitrary directions. Thus, free-flow trajectories of any shape can be traced. In other words, coupled with an eye-tracking device, this apparatus enables “eye writing,” which appears to be an original object of study. We adapt a previous model of reading and writing to this context. We describe a probabilistic model called the Bayesian Action-Perception for Eye On-Line model (BAP-EOL). It encodes probabilistic knowledge about isolated letter trajectories, their size, high-frequency components of the produced trajectory, and pupil diameter. We show how Bayesian inference, in this single model, can be used to solve several tasks, like letter recognition and novelty detection (i.e., recognizing when a presented character is not part of the learned database). We are interested in the potential use of the eye writing apparatus by motor impaired patients: the final task we solve by Bayesian inference is disability assessment (i.e., measuring and tracking the evolution of motor characteristics of produced trajectories). Preliminary experimental results are presented, which illustrate the method, showing the feasibility of character recognition in the context of eye writing. We then show experimentally how a model of the unknown character can be used to detect trajectories that are likely to be new symbols, and how disability assessment can be performed by opportunistically observing characteristics of fine motor control, as letter are being traced. Experimental analyses also help identify specificities of eye writing, as compared to handwriting, and the resulting technical challenges. PMID:24273525
NASA Astrophysics Data System (ADS)
Nussbaumer, Raphaël; Gloaguen, Erwan; Mariéthoz, Grégoire; Holliger, Klaus
2016-04-01
Bayesian sequential simulation (BSS) is a powerful geostatistical technique, which notably has shown significant potential for the assimilation of datasets that are diverse with regard to the spatial resolution and their relationship. However, these types of applications of BSS require a large number of realizations to adequately explore the solution space and to assess the corresponding uncertainties. Moreover, such simulations generally need to be performed on very fine grids in order to adequately exploit the technique's potential for characterizing heterogeneous environments. Correspondingly, the computational cost of BSS algorithms in their classical form is very high, which so far has limited an effective application of this method to large models and/or vast datasets. In this context, it is also important to note that the inherent assumption regarding the independence of the considered datasets is generally regarded as being too strong in the context of sequential simulation. To alleviate these problems, we have revisited the classical implementation of BSS and incorporated two key features to increase the computational efficiency. The first feature is a combined quadrant spiral - superblock search, which targets run-time savings on large grids and adds flexibility with regard to the selection of neighboring points using equal directional sampling and treating hard data and previously simulated points separately. The second feature is a constant path of simulation, which enhances the efficiency for multiple realizations. We have also modified the aggregation operator to be more flexible with regard to the assumption of independence of the considered datasets. This is achieved through log-linear pooling, which essentially allows for attributing weights to the various data components. 
Finally, a multi-grid simulation path was created to enforce large-scale variance and to allow for adapting parameters, such as the log-linear weights or the type of simulation path, at various scales. The newly implemented search method for kriging reduces the computational cost from an exponential dependence on the grid size in the original algorithm to a linear relationship, as each neighborhood search becomes independent of the grid size. For the considered examples, our results show a sevenfold reduction in run time for each additional realization when a constant simulation path is used. The traditional criticism that constant-path techniques introduce a bias into the simulations was explored, and our findings do indeed reveal a minor reduction in the diversity of the simulations. This bias can, however, be largely eliminated by changing the path type at different scales through the use of the multi-grid approach. Finally, we show that adapting the aggregation weight at each scale considered in our multi-grid approach allows for reproducing the variogram, the histogram, and the spatial trend of the underlying data.
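The log-linear pooling operator mentioned above can be sketched as follows for discrete distributions. This is an illustrative implementation; the variable names and the normalization convention are our assumptions, not the authors' code:

```python
import math

def log_linear_pool(dists, weights):
    """Log-linear (weighted geometric) pooling of discrete distributions
    defined on the same support; the weights set each data source's
    influence, relaxing the strict independence assumption of the
    classical aggregation."""
    k = len(dists[0])
    pooled = []
    for i in range(k):
        # weighted sum of log-probabilities = log of weighted geometric mean
        s = sum(w * math.log(max(d[i], 1e-300)) for d, w in zip(dists, weights))
        pooled.append(math.exp(s))
    z = sum(pooled)  # renormalize to a proper distribution
    return [p / z for p in pooled]
```

Setting a source's weight to zero removes its influence entirely, while equal weights summing to one recover a geometric-mean compromise between the sources.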
Modeling stream fish distributions using interval-censored detection times.
Ferreira, Mário; Filipe, Ana Filipa; Bardos, David C; Magalhães, Maria Filomena; Beja, Pedro
2016-08-01
Controlling for imperfect detection is important for developing species distribution models (SDMs). Occupancy-detection models based on the time needed to detect a species can be used to address this problem, but this is hindered when times to detection are not known precisely. Here, we extend the time-to-detection model to deal with detections recorded in time intervals and illustrate the method using a case study on stream fish distribution modeling. We collected electrofishing samples of six fish species across a Mediterranean watershed in Northeast Portugal. Based on a Bayesian hierarchical framework, we modeled the probability of water presence in stream channels, and the probability of species occupancy conditional on water presence, in relation to environmental and spatial variables. We also modeled time-to-first detection conditional on occupancy in relation to local factors, using modified interval-censored exponential survival models. Posterior distributions of occupancy probabilities derived from the models were used to produce species distribution maps. Simulations indicated that the modified time-to-detection model provided unbiased parameter estimates despite interval-censoring. There was a tendency for spatial variation in detection rates to be primarily influenced by depth and, to a lesser extent, stream width. Species occupancies were consistently affected by stream order, elevation, and annual precipitation. Bayesian P-values and AUCs indicated that all models had adequate fit and high discrimination ability, respectively. Mapping of predicted occupancy probabilities showed widespread distribution by most species, but uncertainty was generally higher in tributaries and upper reaches. The interval-censored time-to-detection model provides a practical solution to model occupancy-detection when detections are recorded in time intervals. 
This modeling framework is useful for developing SDMs while controlling for variation in detection rates, as it uses simple data that can be readily collected by field ecologists.
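The core of the interval-censored time-to-detection likelihood can be sketched as follows, assuming a constant exponential detection rate per site. This is illustrative only; the paper embeds these terms in a Bayesian hierarchical model with environmental covariates:

```python
import math

def interval_likelihood(rate, t_lo, t_hi):
    """Likelihood that the first detection of a present species falls in the
    recorded interval (t_lo, t_hi], under an exponential time-to-detection
    model: S(t_lo) - S(t_hi) with survival S(t) = exp(-rate * t)."""
    return math.exp(-rate * t_lo) - math.exp(-rate * t_hi)

def no_detection_likelihood(rate, t_max):
    """Probability of no detection by the end of the survey, given presence."""
    return math.exp(-rate * t_max)
```

The interval terms and the no-detection term partition the outcome space, so for any rate they sum to one over a full survey, which is what makes the interval-censored model a proper likelihood.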
NASA Astrophysics Data System (ADS)
Liang, Yu-Li
Multimedia data is increasingly important in scientific discovery and people's daily lives. The content of massive multimedia collections is often diverse and noisy, and motion between frames is sometimes crucial in analyzing those data. Among all formats, still images and videos are the most commonly used. Images are compact in size but do not contain motion information. Videos record motion but are sometimes too big to be analyzed. Sequential images, which are a set of continuous images with a low frame rate, stand out because they are smaller than videos and still maintain motion information. This thesis investigates features in different types of noisy sequential images and proposes solutions that intelligently combine multiple features to successfully retrieve visual information from on-line videos and cloudy satellite images. The first task is detecting supraglacial lakes above the ice sheet in sequential satellite images. The dynamics of supraglacial lakes on the Greenland ice sheet deeply affect glacier movement, which is directly related to sea level rise and global environmental change. Detecting lakes above ice is hampered by diverse image quality and unexpected clouds. A new method is proposed to efficiently extract prominent lake candidates with irregular shapes, against heterogeneous backgrounds, and in cloudy images. The proposed system fully automates the procedure, tracking lakes with high accuracy. We further cooperated with geoscientists to examine the tracked lakes and arrived at new scientific findings. The second task is detecting obscene content in on-line video chat services, such as Chatroulette, that randomly match pairs of users in video chat sessions. A big problem encountered in such systems is the presence of flashers and obscene content. Because of the variety of obscene content and the unstable quality of videos captured by home web-cameras, detecting misbehaving users is a highly challenging task.
We propose SafeVchat, the first solution that achieves a satisfactory detection rate by using facial features and a skin color model. To harness all the features in the scene, we further developed another system using multiple types of local descriptors along with the Bag-of-Visual-Words framework. In addition, an investigation of a new contour feature for detecting obscene content is presented.
Tractable Experiment Design via Mathematical Surrogates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Brian J.
This presentation summarizes the development and implementation of quantitative design criteria motivated by targeted inference objectives for identifying new, potentially expensive computational or physical experiments. The first application is concerned with estimating features of quantities of interest arising from complex computational models, such as quantiles or failure probabilities. A sequential strategy is proposed for iterative refinement of the importance distributions used to efficiently sample the uncertain inputs to the computational model. In the second application, effective use of mathematical surrogates is investigated to help alleviate the analytical and numerical intractability often associated with Bayesian experiment design. This approach allows for the incorporation of prior information into the design process without the need for gross simplification of the design criterion. Illustrative examples of both design problems will be presented as an argument for the relevance of these research problems.
Oh, Sunghee; Song, Seongho
2017-01-01
In gene expression profiling, the data analysis pipeline is organized into four levels; the major downstream tasks, i.e., (1) identification of differential expression, (2) clustering of co-expression patterns, (3) classification of sample subtypes, and (4) detection of genetic regulatory networks, are performed after preprocessing procedures such as normalization. More specifically, temporal dynamic gene expression data has an inherent feature: two neighboring time points (previous and current state) are highly correlated with each other, whereas static expression data assumes samples to be independent individuals. In this chapter, we demonstrate how HMMs and hierarchical Bayesian modeling methods capture the horizontal time-dependency structure in time-series expression profiles, focusing on the identification of differential expression. In addition, the differentially expressed genes and transcript variant isoforms over time detected in these core prerequisite steps can be further applied to the detection of genetic regulatory networks, to comprehensively uncover dynamic repertoires from a systems biology perspective.
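As an illustrative sketch of how an HMM captures the dependence between neighboring time points, here is a scaled forward-algorithm likelihood for a two-state chain (e.g. state 0: not differentially expressed, state 1: differentially expressed). The interface is our assumption, not the chapter's code:

```python
import math

def forward_log_likelihood(obs_loglik, trans, init):
    """Scaled forward algorithm for a two-state HMM. obs_loglik[t][s] is the
    log-likelihood of the expression value at time t given hidden state s;
    trans[s0][s1] is the transition probability and init the initial
    state distribution. Returns the total log-likelihood of the series."""
    alpha = [init[s] * math.exp(obs_loglik[0][s]) for s in range(2)]
    log_z = 0.0
    for t in range(1, len(obs_loglik)):
        scale = sum(alpha)        # rescale to avoid underflow on long series
        log_z += math.log(scale)
        alpha = [a / scale for a in alpha]
        alpha = [sum(alpha[s0] * trans[s0][s1] for s0 in range(2))
                 * math.exp(obs_loglik[t][s1]) for s1 in range(2)]
    return log_z + math.log(sum(alpha))
```

Comparing this likelihood against that of a single-state (no differential expression) model gives one simple way to score a gene, with the transition matrix encoding the correlation between consecutive time points that static models ignore.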
A spatial scan statistic for multiple clusters.
Li, Xiao-Zhou; Wang, Jin-Feng; Yang, Wei-Zhong; Li, Zhong-Jie; Lai, Sheng-Jie
2011-10-01
Spatial scan statistics are commonly used for geographical disease surveillance and cluster detection. When multiple clusters coexist in the study area, they become difficult to detect because of the clusters' shadowing effect on each other. The recently proposed sequential method showed better power for detecting the second, weaker cluster, but did not improve the ability to detect the first, stronger cluster, which is more important than the second one. We propose a new extension of the spatial scan statistic that can be used to detect multiple clusters. By constructing two or more clusters in the alternative hypothesis, our proposed method accounts for other coexisting clusters in the detection and evaluation process. The performance of the proposed method is compared to the sequential method through an intensive simulation study, in which our proposed method shows better power in terms of both rejecting the null hypothesis and accurately detecting the coexisting clusters. In a real study of hand-foot-and-mouth disease data in Pingdu City, a true cluster town is successfully detected by our proposed method; it could not be evaluated as statistically significant by the standard method due to another cluster's shadowing effect. Copyright © 2011 Elsevier Inc. All rights reserved.
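For orientation, a Poisson-based scan statistic scores a candidate zone with a log-likelihood ratio, and a multiple-cluster alternative can then score several disjoint zones jointly. The following is a simplified sketch; summing single-zone ratios ignores the exact form of the authors' joint alternative hypothesis:

```python
import math

def poisson_llr(c, e_c, total_c, total_e):
    """Kulldorff-style log-likelihood ratio for one candidate zone with c
    observed and e_c expected cases (total_c, total_e are study-area
    totals). Zones with no excess risk score 0."""
    if c == 0 or c <= e_c:
        return 0.0
    inside = c * math.log(c / e_c)
    outside = (total_c - c) * math.log((total_c - c) / (total_e - e_c))
    return inside + outside

def multi_cluster_llr(zones, total_c, total_e):
    """Score a set of disjoint candidate zones (c, e_c pairs) jointly;
    summing single-zone ratios is a simplification of the joint
    alternative hypothesis."""
    return sum(poisson_llr(c, e, total_c, total_e) for c, e in zones)
```

Evaluating sets of zones rather than one zone at a time is what lets a secondary excess contribute to, rather than shadow, the detection of the primary cluster.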
A Bayesian Approach for Summarizing and Modeling Time-Series Exposure Data with Left Censoring.
Houseman, E Andres; Virji, M Abbas
2017-08-01
Direct reading instruments are valuable tools for measuring exposure as they provide real-time measurements for rapid decision making. However, their use is limited to general survey applications in part due to issues related to their performance. Moreover, statistical analysis of real-time data is complicated by autocorrelation among successive measurements, non-stationary time series, and the presence of left-censoring due to limit-of-detection (LOD). A Bayesian framework is proposed that accounts for non-stationary autocorrelation and LOD issues in exposure time-series data in order to model workplace factors that affect exposure and estimate summary statistics for tasks or other covariates of interest. A spline-based approach is used to model non-stationary autocorrelation with relatively few assumptions about the autocorrelation structure. Left-censoring is addressed by integrating over the left tail of the distribution. The model is fit using Markov chain Monte Carlo within a Bayesian paradigm. The method can flexibly account for hierarchical relationships, random effects, and fixed effects of covariates. The method is implemented using the rjags package in R, and is illustrated by applying it to real-time exposure data. Estimates for task means and covariates from the Bayesian model are compared to those from conventional frequentist models including linear regression, mixed-effects, and time-series models with different autocorrelation structures. Simulation studies are also conducted to evaluate method performance. Simulation studies with the percentage of measurements below the LOD ranging from 0 to 50% showed the lowest root mean squared errors for task means and the least biased standard deviations from the Bayesian model compared to the frequentist models across all levels of LOD. In the application, task means from the Bayesian model were similar to means from the frequentist models, while the standard deviations were different.
Parameter estimates for covariates were significant in some frequentist models, but in the Bayesian model their credible intervals contained zero; such discrepancies were observed in multiple datasets. Variance components from the Bayesian model reflected substantial autocorrelation, consistent with the frequentist models, except for the auto-regressive moving average model. Plots of means from the Bayesian model showed good fit to the observed data. The proposed Bayesian model provides an approach for modeling non-stationary autocorrelation in a hierarchical modeling framework to estimate task means, standard deviations, quantiles, and parameter estimates for covariates that are less biased and have better performance characteristics than some of the contemporary methods. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2017.
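The left-censoring device, integrating over the tail of the distribution below the LOD, can be sketched for an i.i.d. normal model as follows. The paper's model additionally handles non-stationary autocorrelation and hierarchical structure, which this toy version omits:

```python
import math

def normal_cdf(x, mu, sigma):
    """Standard normal CDF evaluated via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def left_censored_loglik(values, lod, mu, sigma):
    """Log-likelihood of exposure data where readings below the limit of
    detection are recorded as None: censored points contribute the
    probability mass below the LOD instead of a density value."""
    ll = 0.0
    for v in values:
        if v is None:  # reported as "<LOD"
            ll += math.log(normal_cdf(lod, mu, sigma))
        else:
            z = (v - mu) / sigma
            ll += -0.5 * z * z - math.log(sigma * math.sqrt(2.0 * math.pi))
    return ll
```

Substituting the LOD (or LOD/2) for censored values instead of integrating, as simpler approaches do, is what biases the standard deviation estimates that the abstract compares against.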
Partitioning Ocean Wave Spectra Obtained from Radar Observations
NASA Astrophysics Data System (ADS)
Delaye, Lauriane; Vergely, Jean-Luc; Hauser, Daniele; Guitton, Gilles; Mouche, Alexis; Tison, Celine
2016-08-01
2D wave spectra of ocean waves can be partitioned into several wave components to better characterize the scene. We present here two methods of component detection: one based on a watershed algorithm and the other based on a Bayesian approach. We tested both methods on a set of simulated SWIM data; SWIM is the Ku-band real aperture radar carried on the CFOSAT (China-France Oceanography Satellite) mission, whose launch is planned for mid-2018. We present the results and the limits of both approaches and show that the Bayesian method can also be applied to other kinds of wave spectra observations, such as those obtained with the radar KuROS, an airborne radar wave spectrometer.
Automated detection of changes in sequential color ocular fundus images
NASA Astrophysics Data System (ADS)
Sakuma, Satoshi; Nakanishi, Tadashi; Takahashi, Yasuko; Fujino, Yuichi; Tsubouchi, Tetsuro; Nakanishi, Norimasa
1998-06-01
A recent trend is the automatic screening of color ocular fundus images. The examination of such images is used in the early detection of several adult diseases such as hypertension and diabetes. Since this type of examination is easier than CT, costs less, and has no harmful side effects, it will become a routine medical examination. Normal ocular fundus images are found in more than 90% of all people. To deal with the increasing number of such images, this paper proposes a new approach to process them automatically and accurately. Our approach, based on individual comparison, identifies changes in sequential images: a previously diagnosed normal reference image is compared to a non-diagnosed image.
Sequential detection of web defects
Eichel, Paul H.; Sleefe, Gerard E.; Stalker, K. Terry; Yee, Amy A.
2001-01-01
A system for detecting defects on a moving web having a sequential series of identical frames uses an imaging device to form a real-time camera image of a frame and a comparator to compare elements of the camera image with corresponding elements of an image of an exemplar frame. The comparator provides an acceptable indication if the pair of elements is determined to be statistically identical, and a defective indication if the pair of elements is determined to be statistically not identical. If the pair of elements is neither acceptable nor defective, the comparator recursively compares the element of said exemplar frame with corresponding elements of other frames on said web until one of the acceptable or defective indications occurs.
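A toy version of the three-way element decision, assuming a z-test on element means with a known noise level; the thresholds and the fallback behavior are illustrative assumptions, not the patented logic:

```python
import math

def classify_element(cam_mean, ref_mean, sigma, n, z_accept=2.0, z_reject=4.0):
    """Three-way decision for one image element versus the exemplar:
    'acceptable' if statistically identical, 'defective' if clearly not,
    'undecided' otherwise (thresholds are illustrative assumptions)."""
    z = abs(cam_mean - ref_mean) / (sigma / math.sqrt(n))
    if z <= z_accept:
        return "acceptable"
    if z >= z_reject:
        return "defective"
    return "undecided"

def sequential_decision(cam_mean, refs, sigma, n):
    """Recursively compare an undecided element against further frames
    until an acceptable/defective verdict is reached; stays 'undecided'
    if every reference leaves it unresolved."""
    for ref in refs:
        verdict = classify_element(cam_mean, ref, sigma, n)
        if verdict != "undecided":
            return verdict
    return "undecided"
```

The three-way split is the point of the design: a borderline element is not forced into a verdict from one frame but deferred to additional frames of the same repeating pattern.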
NDMA TREATMENT BY SEQUENTIAL GAC ADSORPTION AND FENTON DRIVEN DESTRUCTION
N-nitrosodimethylamine (NDMA) is a highly toxic microcontaminant that was first detected in groundwater tainted by rocket fuel manufacturing wastes. More recently NDMA has been detected as a by-product of other industrial processes including the chlorination of treated wastewater...
NASA Astrophysics Data System (ADS)
Ait-El-Fquih, Boujemaa; El Gharamti, Mohamad; Hoteit, Ibrahim
2016-08-01
Ensemble Kalman filtering (EnKF) is an efficient approach to addressing uncertainties in subsurface groundwater models. The EnKF sequentially integrates field data into simulation models to obtain a better characterization of the model's state and parameters. These are generally estimated following joint and dual filtering strategies, in which, at each assimilation cycle, a forecast step by the model is followed by an update step with incoming observations. The joint EnKF directly updates the augmented state-parameter vector, whereas the dual EnKF empirically employs two separate filters, first estimating the parameters and then estimating the state based on the updated parameters. To develop a Bayesian consistent dual approach and improve the state-parameter estimates and their consistency, we propose in this paper a one-step-ahead (OSA) smoothing formulation of the state-parameter Bayesian filtering problem from which we derive a new dual-type EnKF, the dual EnKF-OSA. Compared with the standard dual EnKF, it imposes a new update step to the state, which is shown to enhance the performance of the dual approach with almost no increase in the computational cost. Numerical experiments are conducted with a two-dimensional (2-D) synthetic groundwater aquifer model to investigate the performance and robustness of the proposed dual EnKF-OSA, and to evaluate its results against those of the joint and dual EnKFs. The proposed scheme is able to successfully recover both the hydraulic head and the aquifer conductivity, providing further reliable estimates of their uncertainties. Furthermore, it is found to be more robust to different assimilation settings, such as the spatial and temporal distribution of the observations, and the level of noise in the data. Based on our experimental setups, it yields up to 25% more accurate state and parameter estimations than the joint and dual approaches.
Expert system for online surveillance of nuclear reactor coolant pumps
Gross, Kenny C.; Singer, Ralph M.; Humenik, Keith E.
1993-01-01
An expert system for the online surveillance of nuclear reactor coolant pumps is described. The system provides a means for early detection of pump or sensor degradation. Degradation is determined through the use of a statistical analysis technique, the sequential probability ratio test, applied to information from several sensors that are responsive to differing physical parameters. The results of sequentially testing the data provide the operator with an early warning of possible sensor or pump failure.
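The sequential probability ratio test behind this surveillance scheme can be sketched in a few lines. The Bernoulli degradation model and the error rates `alpha` and `beta` below are illustrative assumptions; the actual system tests continuous sensor signals:

```python
import math

def sprt(samples, p0, p1, alpha=0.01, beta=0.01):
    """Wald's SPRT for Bernoulli data (e.g. per-interval anomaly flags).

    Accumulates the log-likelihood ratio after each observation and stops
    as soon as it crosses the acceptance or rejection boundary.
    Returns (decision, number of samples used).
    """
    upper = math.log((1 - beta) / alpha)   # decide H1 (degraded)
    lower = math.log(beta / (1 - alpha))   # decide H0 (normal)
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "continue", len(samples)
```

The appeal, as in the portal-monitoring work above, is that a decision is issued as soon as the data statistically justify it rather than after a fixed counting interval.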
Rise and fall of political complexity in island South-East Asia and the Pacific.
Currie, Thomas E; Greenhill, Simon J; Gray, Russell D; Hasegawa, Toshikazu; Mace, Ruth
2010-10-14
There is disagreement about whether human political evolution has proceeded through a sequence of incremental increases in complexity, or whether larger, non-sequential increases have occurred. The extent to which societies have decreased in complexity is also unclear. These debates have continued largely in the absence of rigorous, quantitative tests. We evaluated six competing models of political evolution in Austronesian-speaking societies using phylogenetic methods. Here we show that in the best-fitting model political complexity rises and falls in a sequence of small steps. This is closely followed by another model in which increases are sequential but decreases can be either sequential or in bigger drops. The results indicate that large, non-sequential jumps in political complexity have not occurred during the evolutionary history of these societies. This suggests that, despite the numerous contingent pathways of human history, there are regularities in cultural evolution that can be detected using computational phylogenetic methods.
Visual perception as retrospective Bayesian decoding from high- to low-level features.
Ding, Stephanie; Cueva, Christopher J; Tsodyks, Misha; Qian, Ning
2017-10-24
When a stimulus is presented, its encoding is known to progress from low- to high-level features. How these features are decoded to produce perception is less clear, and most models assume that decoding follows the same low- to high-level hierarchy of encoding. There are also theories arguing for global precedence, reversed hierarchy, or bidirectional processing, but they are descriptive without quantitative comparison with human perception. Moreover, observers often inspect different parts of a scene sequentially to form overall perception, suggesting that perceptual decoding requires working memory, yet few models consider how working-memory properties may affect decoding hierarchy. We probed decoding hierarchy by comparing absolute judgments of single orientations and relative/ordinal judgments between two sequentially presented orientations. We found that lower-level, absolute judgments failed to account for higher-level, relative/ordinal judgments. However, when ordinal judgment was used to retrospectively decode memory representations of absolute orientations, striking aspects of absolute judgments, including the correlation and forward/backward aftereffects between two reported orientations in a trial, were explained. We propose that the brain prioritizes decoding of higher-level features because they are more behaviorally relevant, and more invariant and categorical, and thus easier to specify and maintain in noisy working memory, and that more reliable higher-level decoding constrains less reliable lower-level decoding. Published under the PNAS license.
Conesa, D; Martínez-Beneito, M A; Amorós, R; López-Quílez, A
2015-04-01
Considerable effort has been devoted to the development of statistical algorithms for the automated monitoring of influenza surveillance data. In this article, we introduce a framework of models for the early detection of the onset of an influenza epidemic which is applicable to different kinds of surveillance data. In particular, the process of the observed cases is modelled via a Bayesian hierarchical Poisson model in which the intensity parameter is a function of the incidence rate. The key point is to model this incidence rate as normally distributed, with both parameters (mean and variance) modelled differently depending on whether the system is in an epidemic or non-epidemic phase. To do so, we propose a hidden Markov model in which the transition between the two phases is modelled as a function of the epidemic state of the previous week. Different options for modelling the rates are described, including modelling the mean in each phase as an autoregressive process of order 0, 1 or 2. Bayesian inference is carried out to provide the probability of being in an epidemic state at any given moment. The methodology is applied to various influenza data sets. The results indicate that our methods outperform previous approaches in terms of sensitivity, specificity and timeliness. © The Author(s) 2011.
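A minimal forward filter for a two-state hidden Markov model of this kind can be sketched as follows. The Poisson rates, the transition probability `p_stay`, and the initial prior are hypothetical values for illustration, not the fitted parameters of the article's full hierarchical model:

```python
import math

def poisson_logpmf(k, lam):
    """Log of the Poisson probability mass function."""
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def epidemic_probability(counts, lam_nonepi, lam_epi, p_stay=0.9):
    """Forward-filtered P(epidemic | counts so far) for each week.

    Two hidden states (non-epidemic / epidemic), Poisson emissions, and a
    transition that depends only on the previous week's state.
    """
    probs = []
    belief = [0.99, 0.01]               # assumed initial state prior
    trans = [[p_stay, 1 - p_stay],      # from non-epidemic
             [1 - p_stay, p_stay]]      # from epidemic
    lams = [lam_nonepi, lam_epi]
    for k in counts:
        # predict: propagate the belief through the transition matrix
        pred = [sum(belief[i] * trans[i][j] for i in range(2)) for j in range(2)]
        # update: weight by the Poisson likelihood of this week's count
        post = [pred[j] * math.exp(poisson_logpmf(k, lams[j])) for j in range(2)]
        z = sum(post)
        belief = [p / z for p in post]
        probs.append(belief[1])
    return probs
```

Feeding in weekly case counts yields, week by week, the posterior probability of being in the epidemic phase, which is the quantity the article's inference targets.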
Quantile regression and Bayesian cluster detection to identify radon prone areas.
Sarra, Annalina; Fontanella, Lara; Valentini, Pasquale; Palermi, Sergio
2016-11-01
Although the dominant source of radon in indoor environments is the geology of the territory, many studies have demonstrated that indoor radon concentrations also depend on dwelling-specific characteristics. Following a stepwise analysis, in this study we propose a combined approach to delineate radon prone areas. We first investigate the impact of various building covariates on indoor radon concentrations. To achieve a more complete picture of this association, we exploit the flexible formulation of a Bayesian spatial quantile regression, which is also equipped with parameters that control the spatial dependence across data. The quantitative knowledge of the influence of each significant building-specific factor on the measured radon levels is employed to predict the radon concentrations that would have been found if the sampled buildings had possessed standard characteristics. Those normalised radon measures should reflect the geogenic radon potential of the underlying ground, which is a quantity directly related to the geological environment. The second stage of the analysis is aimed at identifying radon prone areas; to this end, we adopt a Bayesian model for spatial cluster detection, using as reference unit the building with standard characteristics. The case study is based on a data set of more than 2000 indoor radon measures, available for the Abruzzo region (Central Italy) and collected by the Agency of Environmental Protection of Abruzzo during several indoor radon monitoring surveys. Copyright © 2016 Elsevier Ltd. All rights reserved.
Takayanagi, Toshio; Inaba, Yuya; Kanzaki, Hiroyuki; Jyoichi, Yasutaka; Motomizu, Shoji
2009-09-15
The catalytic effect of metal ions on luminol chemiluminescence (CL) was investigated by sequential injection analysis (SIA). The SIA system was set up with two solenoid micropumps, an eight-port selection valve, and a photosensor module with a fountain-type chemiluminescence cell. The SIA system was controlled and the CL signals were collected by a LabVIEW program. Aqueous solutions of luminol, H₂O₂, and a sample solution containing the metal ion were sequentially aspirated into the holding coil, and the zones were immediately propelled to the detection cell. After optimizing the parameters using a 1 × 10⁻⁵ M Fe³⁺ solution, the catalytic effects of several metal species were compared. Among the 16 metal species examined, relatively strong CL responses were obtained with Fe³⁺, Fe²⁺, VO²⁺, VO₃⁻, MnO₄⁻, Co²⁺, and Cu²⁺. The limits of detection of the present SIA system were comparable to those of FIA systems. Permanganate ion showed the highest CL sensitivity among the metal species examined; the calibration graph for MnO₄⁻ was linear at the 10⁻⁸ M concentration level, and the limit of detection for MnO₄⁻ was 4.0 × 10⁻¹⁰ M (S/N = 3).
Cherry, Kevin M; Peplinski, Brandon; Kim, Lauren; Wang, Shijun; Lu, Le; Zhang, Weidong; Liu, Jianfei; Wei, Zhuoshi; Summers, Ronald M
2015-01-01
Given the potential importance of marginal artery localization in automated registration in computed tomography colonography (CTC), we have devised a semi-automated method of marginal vessel detection employing sequential Monte Carlo tracking (also known as particle filtering tracking) by multiple cue fusion based on intensity, vesselness, organ detection, and minimum spanning tree information for poorly enhanced vessel segments. We then employed a random forest algorithm for intelligent cue fusion and decision making which achieved high sensitivity and robustness. After applying a vessel pruning procedure to the tracking results, we achieved statistically significantly improved precision compared to a baseline Hessian detection method (2.7% versus 75.2%, p<0.001). This method also showed statistically significantly improved recall rate compared to a 2-cue baseline method using fewer vessel cues (30.7% versus 67.7%, p<0.001). These results demonstrate that marginal artery localization on CTC is feasible by combining a discriminative classifier (i.e., random forest) with a sequential Monte Carlo tracking mechanism. In so doing, we present the effective application of an anatomical probability map to vessel pruning as well as a supplementary spatial coordinate system for colonic segmentation and registration when this task has been confounded by colon lumen collapse. Published by Elsevier B.V.
A reverse order interview does not aid deception detection regarding intentions
Fenn, Elise; McGuire, Mollie; Langben, Sara; Blandón-Gitlin, Iris
2015-01-01
Promising recent research suggests that more cognitively demanding interviews improve deception detection accuracy. Would these cognitively demanding techniques work in the same way when discriminating between true and false future intentions? In Experiment 1, participants planned to complete a task but instead were intercepted and interviewed about their intentions. Participants lied or told the truth, and were subjected to high (reverse order) or low (sequential order) cognitive load interviews. Third-party observers watched these interviews and indicated whether they thought the person was lying or telling the truth. Subjecting participants to a reverse rather than a sequential interview increased the misidentification rate and the appearance of cognitive load in truth tellers. People lying about false intentions were not better identified. In Experiment 2, a second set of third-party observers rated behavioral cues. Consistent with Experiment 1, truth tellers, but not liars, exhibited more behaviors associated with lying and fewer behaviors associated with truth telling in the reverse than the sequential interview. Together these results suggest that certain cognitively demanding interviews may be less useful when interviewing to detect false intentions. Explaining a true intention while under higher cognitive demand places truth tellers at risk of being misclassified. There may be such a thing as too much cognitive load induced by certain techniques. PMID:26379610
Ponce de León, Claudia A; DeNicola, Katie; Montes Bayón, Maria; Caruso, Joseph A
2003-06-01
Different techniques have been employed in order to evaluate the most efficient procedure for the extraction of selenium from soil as required for speciation. Selenium contaminated sediments from Stewart Lake Wetland, California were used. A strong acid mineralization of the samples gives quantitative total selenium, which is then used to estimate recoveries for the milder extraction methods. The different extraction methodologies involve the sequential use of water, buffer (phosphate, pH 7) and either acid solution (e.g. HNO3 or HCl) or basic solutions (e.g. ammonium acetate, NaOH or TMAH). Pyrophosphate extraction was also evaluated and showed that selenium was not associated with humic acids. The extractants were subsequently analyzed by size exclusion chromatography (SEC) with UV (254 and 400 nm) and on-line ICP-MS detection; anion exchange chromatography, and ion-pair reversed phase chromatography with ICP-MS detection. For sequential extractions the extraction efficiencies showed that the basic extractions were more efficient than the acidic. The difference between the acidic and the basic extraction efficiency is carried to the sulfite extraction, suggesting that whatever is not extracted by the acid is subsequently extracted by the sulfite. The species identified with the different chromatographies were selenate, selenite, elemental selenium and some organic selenium.
Ultrafast current imaging by Bayesian inversion
Somnath, Suhas; Law, Kody J. H.; Morozovska, Anna; Maksymovych, Petro; Kim, Yunseok; Lu, Xiaoli; Alexe, Marin; Archibald, Richard K; Kalinin, Sergei V; Jesse, Stephen; Vasudevan, Rama K
2016-01-01
Spectroscopic measurements of current-voltage curves in scanning probe microscopy are among the earliest and most common methods for characterizing local energy-dependent electronic properties, providing insight into superconductive, semiconductor, and memristive behaviors. However, the quasistatic nature of these measurements renders them extremely slow. Here, we demonstrate a fundamentally new approach for dynamic spectroscopic current imaging via full information capture and Bayesian inference analysis. This "general-mode I-V" method allows acquisition rates three orders of magnitude faster than presently possible. The technique is demonstrated by acquiring I-V curves in ferroelectric nanocapacitors, yielding >100,000 I-V curves in <20 minutes. This allows detection of switching currents in the nanoscale capacitors, as well as determination of the dielectric constant. These experiments show the potential of full information capture and Bayesian inference for extracting physics from rapid I-V measurements, and the approach can be used for transport measurements in both atomic force and scanning tunneling microscopy. The data were analyzed using pycroscopy, an open-source Python package available at https://github.com/pycroscopy/pycroscopy
Real-time prediction of acute cardiovascular events using hardware-implemented Bayesian networks.
Tylman, Wojciech; Waszyrowski, Tomasz; Napieralski, Andrzej; Kamiński, Marek; Trafidło, Tamara; Kulesza, Zbigniew; Kotas, Rafał; Marciniak, Paweł; Tomala, Radosław; Wenerski, Maciej
2016-02-01
This paper presents a decision support system that aims to estimate a patient's general condition and detect situations which pose an immediate danger to the patient's health or life. The use of this system might be especially important in places such as accident and emergency departments or admission wards, where a small medical team has to take care of many patients in various general conditions. Particular stress is laid on cardiovascular and pulmonary conditions, including those leading to sudden cardiac arrest. The proposed system is a stand-alone microprocessor-based device that works in conjunction with a standard vital signs monitor, which provides input signals such as temperature, blood pressure, pulse oximetry, ECG, and ICG. The signals are preprocessed and analysed by a set of artificial intelligence algorithms, the core of which is based on Bayesian networks. The paper focuses on the construction and evaluation of the Bayesian network, both its structure and its numerical specification. Copyright © 2015 Elsevier Ltd. All rights reserved.
Network structure exploration in networks with node attributes
NASA Astrophysics Data System (ADS)
Chen, Yi; Wang, Xiaolong; Bu, Junzhao; Tang, Buzhou; Xiang, Xin
2016-05-01
Complex networks provide a powerful way to represent complex systems and have been widely studied during the past several years. One of the most important tasks of network analysis is to detect structures (also called structural regularities) embedded in networks by determining group number and group partition. Most network structure exploration models only consider network links. However, in real-world networks, nodes may have attributes that are useful for network structure exploration. In this paper, we propose a novel Bayesian nonparametric (BNP) model to explore structural regularities in networks with node attributes, called the Bayesian nonparametric attribute (BNPA) model. This model not only takes full advantage of both the links between nodes and the node attributes for group partition via shared hidden variables, but also determines the group number automatically via Bayesian nonparametric theory. Experiments conducted on a number of real and synthetic networks show that our BNPA model is able to automatically explore structural regularities in networks with node attributes and is competitive with other state-of-the-art models.
Towards a Bayesian evaluation of features in questioned handwritten signatures.
Gaborini, Lorenzo; Biedermann, Alex; Taroni, Franco
2017-05-01
In this work, we propose the construction of an evaluative framework for supporting experts in questioned signature examinations. Through the use of Bayesian networks, we aim to quantify the probative value of well-defined measurements performed on questioned signatures, in a way that is both formalised and part of a coherent approach to evaluation. At the current stage, our project is explorative, focusing on the broad range of aspects that relate to comparative signature examinations. The goal is to identify writing features which are both highly discriminant and easy for forensic examiners to detect. We also seek a balance between case-specific features and characteristics which can be measured in the vast majority of signatures. Care is also taken to preserve interpretability at every step of the reasoning process. This paves the way for future work, which will aim at merging the different contributions into a single probabilistic measure of strength of evidence using Bayesian networks. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
BAYESIAN BICLUSTERING FOR PATIENT STRATIFICATION.
Khakabimamaghani, Sahand; Ester, Martin
2016-01-01
The move from Empirical Medicine towards Personalized Medicine has attracted attention to Stratified Medicine (SM). Some methods for patient stratification, the central task of SM, are available in the literature; however, significant open issues remain. First, it is still unclear whether integrating different datatypes will help in detecting disease subtypes more accurately and, if not, which datatype(s) are most useful for this task. Second, it is not clear how different methods of patient stratification can be compared. Third, as most of the proposed stratification methods are deterministic, there is a need to investigate the potential benefits of applying probabilistic methods. To address these issues, we introduce a novel integrative Bayesian biclustering method, called B2PS, for patient stratification and propose methods for evaluating the results. Our experimental results demonstrate the superiority of B2PS over a popular state-of-the-art method and the benefits of Bayesian approaches. Our results agree with the intuition that transcriptomic data form a better basis for patient stratification than genomic data.
On resilience studies of system detection and recovery techniques against stealthy insider attacks
NASA Astrophysics Data System (ADS)
Wei, Sixiao; Zhang, Hanlin; Chen, Genshe; Shen, Dan; Yu, Wei; Pham, Khanh D.; Blasch, Erik P.; Cruz, Jose B.
2016-05-01
With the explosive growth of network technologies, insider attacks have become a major concern to business operations that largely rely on computer networks. To better detect insider attacks that marginally manipulate network traffic over time, and to recover the system from attacks, in this paper we implement a temporal-based detection scheme using the sequential hypothesis testing technique. Two hypotheses are considered: the null hypothesis that the collected information is from benign historical traffic, and the alternative hypothesis that the network is under attack. The objective of the detection scheme is to recognize the change within the shortest time by comparing the two defined hypotheses. In addition, once an attack is detected, a server migration-based system recovery scheme can be triggered to restore the system to its state prior to the attack. To understand the mitigation of insider attacks, a multi-functional web display of the detection analysis was developed for real-time analytics. Experiments using real-world traffic traces evaluate the effectiveness of the Detection System and Recovery (DeSyAR) scheme. The evaluation data validate that the detection scheme based on sequential hypothesis testing and the server migration-based system recovery scheme perform well in effectively detecting insider attacks and recovering the system under attack.
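A minimal change-detection sketch in the spirit of the sequential test above, here written as Page's CUSUM over Gaussian log-likelihood ratios; the means, noise level, and threshold are assumed values, not those of the DeSyAR system:

```python
def cusum_detect(stream, mu0, mu1, sigma=1.0, threshold=5.0):
    """Page's CUSUM: flag the earliest sample at which the cumulative
    log-likelihood ratio favouring the attack hypothesis (mean mu1)
    over the benign hypothesis (mean mu0) exceeds `threshold`.
    """
    s = 0.0
    for n, x in enumerate(stream, start=1):
        # Gaussian log-likelihood ratio of H1 (mean mu1) vs H0 (mean mu0)
        llr = ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        s = max(0.0, s + llr)       # reset to zero while H0 looks plausible
        if s >= threshold:
            return n                # change (attack) detected at sample n
    return None                     # no change detected
```

The statistic accumulates evidence only while the data favour the attack hypothesis, so a marginal but persistent traffic manipulation is eventually flagged with a short detection delay.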
Wavelet-Bayesian inference of cosmic strings embedded in the cosmic microwave background
NASA Astrophysics Data System (ADS)
McEwen, J. D.; Feeney, S. M.; Peiris, H. V.; Wiaux, Y.; Ringeval, C.; Bouchet, F. R.
2017-12-01
Cosmic strings are a well-motivated extension to the standard cosmological model and could induce a subdominant component in the anisotropies of the cosmic microwave background (CMB), in addition to the standard inflationary component. The detection of strings, while observationally challenging, would provide a direct probe of physics at very high-energy scales. We develop a framework for cosmic string inference from observations of the CMB made over the celestial sphere, performing a Bayesian analysis in wavelet space where the string-induced CMB component has distinct statistical properties to the standard inflationary component. Our wavelet-Bayesian framework provides a principled approach to compute the posterior distribution of the string tension Gμ and the Bayesian evidence ratio comparing the string model to the standard inflationary model. Furthermore, we present a technique to recover an estimate of any string-induced CMB map embedded in observational data. Using Planck-like simulations, we demonstrate the application of our framework and evaluate its performance. The method is sensitive to Gμ ∼ 5 × 10⁻⁷ for Nambu-Goto string simulations that include an integrated Sachs-Wolfe contribution only and do not include any recombination effects, before any parameters of the analysis are optimized. The sensitivity of the method compares favourably with other techniques applied to the same simulations.
Textual and visual content-based anti-phishing: a Bayesian approach.
Zhang, Haijun; Liu, Gang; Chow, Tommy W S; Liu, Wenyin
2011-10-01
A novel framework using a Bayesian approach for content-based phishing web page detection is presented. Our model takes into account textual and visual contents to measure the similarity between the protected web page and suspicious web pages. A text classifier, an image classifier, and an algorithm fusing the results from the classifiers are introduced. An outstanding feature of this paper is the exploration of a Bayesian model to estimate the matching threshold. This threshold is required in the classifier for determining the class of the web page, that is, whether the web page is phishing or not. In the text classifier, the naive Bayes rule is used to calculate the probability that a web page is phishing. In the image classifier, the earth mover's distance is employed to measure the visual similarity, and our Bayesian model is designed to determine the threshold. In the data fusion algorithm, Bayes' theorem is used to synthesize the classification results from textual and visual content. The effectiveness of our proposed approach was examined on a large-scale dataset collected from real phishing cases. Experimental results demonstrated that the text classifier and the image classifier we designed deliver promising results, the fusion algorithm outperforms either of the individual classifiers, and our model can be adapted to different phishing cases. © 2011 IEEE
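The naive Bayes step of such a text classifier can be sketched as follows; the token frequency tables, the prior, and the smoothing constant `eps` are hypothetical inputs for illustration, not values from the paper:

```python
import math

def naive_bayes_phishing_prob(tokens, phish_freqs, legit_freqs,
                              prior_phish=0.5, eps=1e-6):
    """Naive Bayes posterior P(phishing | tokens).

    Combines per-token likelihoods in log space; unseen tokens fall back
    to the smoothing constant `eps`.
    """
    log_phish = math.log(prior_phish)
    log_legit = math.log(1 - prior_phish)
    for t in tokens:
        log_phish += math.log(phish_freqs.get(t, eps))
        log_legit += math.log(legit_freqs.get(t, eps))
    # normalize with the log-sum-exp trick for numerical stability
    m = max(log_phish, log_legit)
    z = math.exp(log_phish - m) + math.exp(log_legit - m)
    return math.exp(log_phish - m) / z
```

The resulting probability would then be compared against the matching threshold that the paper's Bayesian model estimates.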
Validation of antibiotic residue tests for dairy goats.
Zeng, S S; Hart, S; Escobar, E N; Tesfai, K
1998-03-01
The SNAP test, LacTek test (B-L and CEF), Charm Bacillus stearothermophilus var. calidolactis disk assay (BsDA), and Charm II Tablet Beta-lactam sequential test were validated using antibiotic-fortified and -incurred goat milk following the protocol for test kit validations of the U.S. Food and Drug Administration Center for Veterinary Medicine. The SNAP, Charm BsDA, and Charm II Tablet Sequential tests were sensitive and reliable in detecting antibiotic residues in goat milk. All three assays showed greater than 90% sensitivity and specificity at tolerance and detection levels. However, caution should be taken in interpreting test results at detection levels. Because of the high sensitivity of these three tests, false-violative results could be obtained in goat milk containing antibiotic residues below the tolerance level. Goat milk testing positive by these tests must be confirmed using a more sophisticated methodology, such as high-performance liquid chromatography, before the milk is condemned. The LacTek B-L test did not detect several antibiotics, including penicillin G, in goat milk at tolerance levels. However, LacTek CEF was excellent in detecting ceftiofur residue in goat milk.
Sparse representation and Bayesian detection of genome copy number alterations from microarray data.
Pique-Regi, Roger; Monso-Varona, Jordi; Ortega, Antonio; Seeger, Robert C; Triche, Timothy J; Asgharzadeh, Shahab
2008-02-01
Genomic instability in cancer leads to abnormal genome copy number alterations (CNA) that are associated with the development and behavior of tumors. Advances in microarray technology have allowed for greater resolution in detection of DNA copy number changes (amplifications or deletions) across the genome. However, the increase in the number of measured signals and the accompanying noise from the array probes present a challenge in accurate and fast identification of breakpoints that define CNA. This article proposes a novel detection technique that exploits the use of piecewise constant (PWC) vectors to represent genome copy number and sparse Bayesian learning (SBL) to detect CNA breakpoints. First, a compact linear algebra representation for the genome copy number is developed from normalized probe intensities. Second, SBL is applied and optimized to infer locations where copy number changes occur. Third, a backward elimination (BE) procedure is used to rank the inferred breakpoints; a cut-off point can be efficiently adjusted in this procedure to control the false discovery rate (FDR). The performance of our algorithm is evaluated using simulated and real genome datasets and compared to other existing techniques. Our approach achieves the highest accuracy and lowest FDR while improving computational speed by several orders of magnitude. The proposed algorithm has been developed into a free-standing software application (GADA, Genome Alteration Detection Algorithm). http://biron.usc.edu/~piquereg/GADA
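A toy stand-in for the backward-elimination step can be sketched as follows. The scoring rule (jump between adjacent segment means) and the `min_jump` cut-off are simplifications for illustration; the actual GADA algorithm ranks breakpoints by a statistical score derived from the SBL fit:

```python
def detect_breakpoints(signal, min_jump=0.5):
    """Greedy backward elimination over piecewise-constant segment means.

    Start with every position as a candidate breakpoint, then repeatedly
    eliminate the weakest one (smallest jump between the means of the two
    adjacent segments) until all remaining jumps exceed `min_jump`.
    """
    bps = list(range(1, len(signal)))          # candidate breakpoints

    def seg_means(bps):
        edges = [0] + bps + [len(signal)]
        return [sum(signal[a:b]) / (b - a) for a, b in zip(edges, edges[1:])]

    while bps:
        means = seg_means(bps)
        jumps = [abs(means[i + 1] - means[i]) for i in range(len(bps))]
        i = min(range(len(bps)), key=jumps.__getitem__)
        if jumps[i] >= min_jump:
            break                              # all remaining jumps are significant
        bps.pop(i)                             # eliminate the weakest breakpoint
    return bps
```

On a noiseless step signal, the procedure collapses all spurious candidates and keeps only the true change point; in the real method, the elimination order also determines the breakpoint ranking used for FDR control.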
Supervised Time Series Event Detector for Building Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
2016-04-13
A machine learning based approach is developed to detect events that have rarely been seen in the historical data. The data can include building energy consumption, sensor data, environmental data, and any data that may affect the building's energy consumption. The algorithm is a modified nonlinear Bayesian support vector machine, which examines daily energy consumption profiles, detects days with abnormal events, and diagnoses the cause of the events.
ERIC Educational Resources Information Center
McLeod, Lori D.; Lewis, Charles; Thissen, David.
With the increased use of computerized adaptive testing, which allows for continuous testing, new concerns about test security have evolved, one being the assurance that items in an item pool are safeguarded from theft. In this paper, the risk of score inflation and procedures to detect test takers using item preknowledge are explored. When test…
A dynamical approach in exploring the unknown mass in the Solar system using pulsar timing arrays
NASA Astrophysics Data System (ADS)
Guo, Y. J.; Lee, K. J.; Caballero, R. N.
2018-04-01
Errors in the Solar system ephemeris will lead to dipolar correlations in the residuals of a pulsar timing array for widely separated pulsars. In this paper, we utilize such correlated signals and construct a Bayesian data-analysis framework to detect unknown masses in the Solar system and to measure their orbital parameters. The algorithm is designed to calculate the waveform of the induced pulsar-timing residuals due to unmodelled objects following Keplerian orbits in the Solar system. The algorithm incorporates a Bayesian-analysis suite used to simultaneously analyse the pulsar-timing data of multiple pulsars to search for coherent waveforms, evaluate the detection significance of unknown objects, and measure their parameters. When an object is not detectable, our algorithm can be used to place upper limits on its mass. The algorithm is verified using simulated data sets and cross-checked with analytical calculations. We also investigate the capability of future pulsar-timing-array experiments to detect unknown objects. We expect that future pulsar-timing data can limit unknown massive objects in the Solar system to be lighter than 10^-11 to 10^-12 M⊙, or measure the mass of the Jovian system to a fractional precision of 10^-8 to 10^-9.
Assessment of Data Fusion Algorithms for Earth Observation Change Detection Processes
Molina, Iñigo; Martinez, Estibaliz; Morillo, Carmen; Velasco, Jesus; Jara, Alvaro
2016-01-01
In this work a parametric multi-sensor Bayesian data fusion approach and a Support Vector Machine (SVM) are used for a Change Detection problem. For this purpose, two sets of SPOT5-PAN images have been used, which are in turn used for Change Detection Indices (CDIs) calculation. For minimizing radiometric differences, a methodology based on zonal “invariant features” is suggested. The choice of one or the other CDI for a change detection process is a subjective task, as each CDI is probably more or less sensitive to certain types of changes. Likewise, this idea might be employed to create and improve a “change map”, which can be accomplished by means of the CDI’s informational content. For this purpose, information metrics such as the Shannon Entropy and “Specific Information” have been used to weight the changes and no-changes categories contained in a certain CDI and thus introduced in the Bayesian information fusion algorithm. Furthermore, the parameters of the probability density functions (pdf’s) that best fit the involved categories have also been estimated. Conversely, these considerations are not necessary for mapping procedures based on the discriminant functions of a SVM. This work has confirmed the capabilities of the probabilistic information fusion procedure under these circumstances. PMID:27706048
Dynamical inference: where phase synchronization and generalized synchronization meet.
Stankovski, Tomislav; McClintock, Peter V E; Stefanovska, Aneta
2014-06-01
Synchronization is a widespread phenomenon that occurs among interacting oscillatory systems. It facilitates their temporal coordination and can lead to the emergence of spontaneous order. The detection of synchronization from the time series of such systems is of great importance for the understanding and prediction of their dynamics, and several methods for doing so have been introduced. However, the common case where the interacting systems have time-variable characteristic frequencies and coupling parameters, and may also be subject to continuous external perturbation and noise, still presents a major challenge. Here we apply recent developments in dynamical Bayesian inference to tackle these problems. In particular, we discuss how to detect phase slips and the existence of deterministic coupling from measured data, and we unify the concepts of phase synchronization and generalized synchronization. Starting from phase or state observables, we present methods for the detection of both phase and generalized synchronization. The consistency and equivalence of phase and generalized synchronization are further demonstrated by the analysis of time series from analog electronic simulations of coupled nonautonomous van der Pol oscillators. We demonstrate that the detection methods work equally well on numerically simulated chaotic systems. In all the cases considered, we show that dynamical Bayesian inference can clearly identify noise-induced phase slips and distinguish coherence from intrinsic coupling-induced synchronization.
NASA Astrophysics Data System (ADS)
Li, Shizhe; Zhang, Yan; Ferraris Araneta, Maria; Xiang, Yun; Johnson, Christopher; Innis, Robert B.; Shen, Jun
2012-05-01
This study demonstrates the feasibility of simultaneously detecting human brain metabolites labeled by two substrates infused in a sequential order. In vivo 13C spectra of carboxylic/amide carbons were acquired only during the infusion of the second substrate. This approach allowed dynamic detection of 13C labeling from two substrates with considerably different labeling patterns. [2-13C]glucose and [U-13C6]glucose were used to generate singlet and doublet signals of the same carboxylic/amide carbon atom, respectively. Because of the large one-bond 13C-13C homonuclear J coupling between a carboxylic/amide carbon and an aliphatic carbon (˜50 Hz), the singlet and doublet signals of the same carboxylic/amide carbon were well distinguished. The results demonstrated that different 13C isotopomer patterns could be simultaneously and distinctly measured in vivo in a clinical setting at 3 T.
NASA Astrophysics Data System (ADS)
Wilkins, M.; Moyer, E. J.; Hussein, Islam I.; Schumacher, P. W., Jr.
Correlating new detections back to a large catalog of resident space objects (RSOs) requires solving one of three types of data association problems: observation-to-track, track-to-track, or observation-to-observation. The authors' previous work has explored the use of various information divergence metrics for solving these problems: Kullback-Leibler (KL) divergence, mutual information, and Bhattacharyya distance. In addition to approaching the data association problem strictly from the metric tracking aspect, we have explored fusing metric and photometric data using Bayesian probabilistic reasoning for RSO identification to aid in our ability to correlate data to specific RSOs. In this work, we focus our attention on the KL divergence, which is a measure of the information gained when new evidence causes the observer to revise their beliefs. We can apply the Principle of Minimum Discrimination Information such that new data produces as small an information gain as possible, with this information change bounded by ɛ. Choosing an appropriate value of ɛ for both convergence and change detection is a function of one's risk tolerance. A small ɛ for change detection increases alarm rates, while a larger ɛ for convergence means that new evidence need not be identical in information content. We need to understand what this change detection metric implies for Type I (α) and Type II (β) errors when we are forced to decide whether new evidence represents a true change in the characterization of an object or is merely within the bounds of our measurement uncertainty. This is unclear for the case of fusing multiple kinds and qualities of characterization evidence that may exist in different metric spaces or are even semantic statements. To this end, we explore the use of Sequential Probability Ratio Testing, where we suppose that we may need to collect additional evidence before accepting or rejecting the null hypothesis that a change has occurred.
In this work, we will explore the effects of choosing ɛ as a function of α and β. Our intent is that this work will help bridge understanding between the well-trodden grounds of Type I and Type II errors and changes in information theoretic content.
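For the one-dimensional Gaussian case, the information-gain test sketched above can be written compactly. This is a hypothetical illustration (the paper's object states live in higher-dimensional metric spaces, and the ɛ value here is arbitrary): a change is flagged when the KL divergence between the posterior and prior beliefs exceeds ɛ.

```python
import math

def kl_gauss(mu1, s1, mu2, s2):
    # KL(p || q) for 1-D Gaussians p = N(mu1, s1^2), q = N(mu2, s2^2)
    return math.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

def changed(prior, posterior, eps):
    """Flag a characterization change when the information gained by moving
    from the prior belief to the posterior belief exceeds the bound eps.
    prior/posterior are (mean, std) tuples."""
    mu_q, s_q = prior
    mu_p, s_p = posterior
    return kl_gauss(mu_p, s_p, mu_q, s_q) > eps

# Identical beliefs: zero information gain, no alarm.
print(changed((0.0, 1.0), (0.0, 1.0), eps=0.1))   # False
# A two-sigma mean shift: KL = 2.0 nats, alarm raised.
print(changed((0.0, 1.0), (2.0, 1.0), eps=0.1))   # True
```

Shrinking ɛ makes the detector more trigger-happy (more alarms), mirroring the trade-off against Type I error discussed in the abstract.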
Calcagni, Maria Lucia; Taralli, Silvia; Cardillo, Giuseppe; Graziano, Paolo; Ialongo, Pasquale; Mattoli, Maria Vittoria; Di Franco, Davide; Caldarella, Carmelo; Carleo, Francesco; Indovina, Luca; Giordano, Alessandro
2016-04-01
Solitary pulmonary nodule (SPN) still represents a diagnostic challenge. The aim of our study was to evaluate the diagnostic performance of (18)F-fluorodeoxyglucose positron emission tomography-computed tomography in one of the largest samples of small SPNs, incidentally detected in subjects without a history of malignancy (nonscreening population) and undetermined at computed tomography. One hundred and sixty-two small (>0.8 to 1.5 cm) and, for comparison, 206 large nodules (>1.5 to 3 cm) were retrospectively evaluated. Diagnostic performance of (18)F-fluorodeoxyglucose visual analysis, receiver-operating characteristic (ROC) analysis for maximum standardized uptake value (SUVmax), and Bayesian analysis were assessed using histology or radiological follow-up as a gold standard. In 162 small nodules, (18)F-fluorodeoxyglucose visual and ROC analyses (SUVmax = 1.3) provided 72.6% and 77.4% sensitivity and 88.0% and 82.0% specificity, respectively. The prevalence of malignancy was 38%; Bayesian analysis provided 78.8% positive and 16.0% negative posttest probabilities of malignancy. In 206 large nodules, (18)F-fluorodeoxyglucose visual and ROC analyses (SUVmax = 1.9) provided 89.5% and 85.1% sensitivity and 70.8% and 79.2% specificity, respectively. The prevalence of malignancy was 65%; Bayesian analysis provided 85.0% positive and 21.6% negative posttest probabilities of malignancy. In both groups, malignant nodules had a significantly higher SUVmax (p < 0.0001) than benign nodules. Only in the small-nodule group were malignant nodules significantly larger (p = 0.0054) than benign ones. (18)F-fluorodeoxyglucose can be clinically relevant to rule in and rule out malignancy in undetermined small SPNs, incidentally detected in a nonscreening population with intermediate pretest probability of malignancy, as well as in larger ones.
Visual analysis can be considered an optimal diagnostic criterion, adequately detecting a wide range of malignant nodules with different metabolic activity. Copyright © 2016 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
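The posttest probabilities reported above follow directly from Bayes' rule applied to sensitivity, specificity, and prevalence. A short sketch (the function name is ours) reproduces the small-nodule figures:

```python
def posttest_probabilities(sens, spec, prev):
    """Bayes' rule: probability of disease after a positive / negative test,
    given test sensitivity, specificity, and disease prevalence."""
    p_pos = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    p_neg = (1 - sens) * prev / ((1 - sens) * prev + spec * (1 - prev))
    return p_pos, p_neg

# Small nodules: ROC-analysis sensitivity 72.6%, specificity 88.0%, prevalence 38%.
pos, neg = posttest_probabilities(0.726, 0.88, 0.38)
print(round(pos * 100, 1), round(neg * 100, 1))   # 78.8 16.0, as reported

# Large nodules: sensitivity 89.5%, specificity 70.8%, prevalence 65%.
pos, neg = posttest_probabilities(0.895, 0.708, 0.65)
# closely reproduces the reported 85.0% / 21.6% (small rounding differences
# arise because the published sensitivity and specificity are themselves rounded)
```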
A probabilistic method to diagnose faults of air handling units
NASA Astrophysics Data System (ADS)
Dey, Debashis
The air handling unit (AHU) is one of the most extensively used pieces of equipment in large commercial buildings. These devices are typically customized and lack quality system integration, which can result in hardware failures and controller errors. Air handling unit Performance Assessment Rules (APAR) is a fault detection tool that uses a set of expert rules derived from mass and energy balances to detect faults in air handling units. APAR is computationally simple enough that it can be embedded in commercial building automation and control systems, and it relies only upon sensor data and control signals that are commonly available in these systems. Although APAR has advantages over other methods (it requires no training data and is easy to implement commercially), most of the time it is unable to provide a diagnosis of the faults. For instance, a fault in a temperature sensor could be a fixed bias, a drifting bias, an inappropriate location, or a complete failure; a fault in the mixing box could be a leaking or stuck return or outdoor damper. In addition, when multiple rules are satisfied, the list of candidate faults grows. A rule-based fault detection system by itself offers no principled way to arrive at the correct diagnosis. To overcome this limitation, we propose a Bayesian Belief Network (BBN) as the diagnostic tool. A BBN can emulate the diagnostic reasoning of FDD experts in a probabilistic way. In this study we developed a new way to detect and diagnose faults in AHUs by combining the APAR rules with a Bayesian Belief Network, with the BBN serving as a decision support tool for the rule-based expert system. The BBN is well suited to prioritizing faults when multiple rules are satisfied simultaneously, and it can incorporate information from previous AHU operating conditions and maintenance records to refine the diagnosis.
The proposed model is validated with real measured data from a campus building at the University of Texas at San Antonio (UTSA). The results show that the BBN correctly prioritizes faults, as verified by manual investigation.
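How a BBN prioritizes faults when several rules fire can be sketched with a toy network. All priors, conditional probabilities, fault names, and rule names below are invented for illustration; they are not values from the paper:

```python
# Prior fault probabilities (hypothetical)
priors = {'damper_stuck': 0.2, 'sensor_bias': 0.3, 'no_fault': 0.5}

# P(rule fires | fault) for two APAR-style rules (hypothetical CPT values)
cpt = {
    'rule_mix_temp':    {'damper_stuck': 0.9, 'sensor_bias': 0.6, 'no_fault': 0.05},
    'rule_supply_temp': {'damper_stuck': 0.3, 'sensor_bias': 0.8, 'no_fault': 0.05},
}

def diagnose(fired):
    """Posterior over faults given which rules fired, assuming rules are
    conditionally independent given the fault (naive-Bayes structure)."""
    post = {}
    for fault, prior in priors.items():
        lik = 1.0
        for rule, did_fire in fired.items():
            p = cpt[rule][fault]
            lik *= p if did_fire else (1 - p)
        post[fault] = prior * lik
    z = sum(post.values())
    return {f: v / z for f, v in post.items()}

# Both rules fire: the network ranks the candidate faults by posterior probability.
posterior = diagnose({'rule_mix_temp': True, 'rule_supply_temp': True})
print(max(posterior, key=posterior.get))
```

This ranking is exactly what a flat rule list cannot provide: when several rules are satisfied at once, the posterior orders the competing explanations instead of merely enumerating them.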
NASA Astrophysics Data System (ADS)
Rope, R. C.; Ames, D. P.; Jerry, T. D.; Cherry, S. J.
2005-12-01
Invasive plant species, such as Bromus tectorum (cheatgrass), cost the United States over $36 billion per year and have encroached upon over 100 million acres, reducing range site productivity, disturbing wildlife habitat, altering wildland fire regimes and frequencies, and reducing biodiversity. Because of these adverse impacts, federal, tribal, state, and county land managers are faced with the challenge of prevention, early detection, management, and monitoring of invasive plants. Often these managers rely on the analysis of remotely sensed imagery as part of their management plan. However, it is difficult to predict the specific phenological events that allow for spectral discrimination of invasive species using remotely sensed imagery alone. To address this issue, tools are being developed to model and view the optimal periods to collect high spatial and/or spectral resolution remotely sensed data for refined detection and mapping of invasive species, and for use as a decision support tool for land managers. These tools integrate historic and current climate data (cumulative growing days and precipitation), satellite imagery (MODIS), Bayesian Belief Networks, and a web ArcIMS application to distribute the information. The general approach is to issue an initial forecast early in the year based on the previous years' data. As the year progresses, air temperature, precipitation, and newly acquired low resolution MODIS satellite imagery are used to update the prediction. Updating is accomplished using a Bayesian Belief Network model that encodes the probabilistic relationships between prior years' conditions and those of the current year. These tools have specific application in providing a means for land managers to efficiently and effectively detect, map, and monitor invasive plant species, specifically cheatgrass, in western rangelands.
This information can then be integrated into management studies and plans to help land managers more accurately and completely determine areas infested with cheatgrass to aid in their eradication practices and future management plans.
Three treatment media, used for the removal of arsenic from drinking water, were sequentially extracted using 10 mM MgCl2 (pH 8), 10 mM NaH2PO4 (pH 7), followed by 10 mM (NH4)2C2O4 (pH 3). The media were extracted using an on-line automated continuous extraction system which allowed...
Two-IMU FDI performance of the sequential probability ratio test during shuttle entry
NASA Technical Reports Server (NTRS)
Rich, T. M.
1976-01-01
Performance data for the sequential probability ratio test (SPRT) during shuttle entry are presented. Current modeling constants and failure thresholds are included for the full mission 3B from entry through landing trajectory. Minimum 100 percent detection/isolation failure levels and a discussion of the effects of failure direction are presented. Finally, a limited comparison of failures introduced at trajectory initiation shows that the SPRT algorithm performs slightly worse than the data tracking test.
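Wald's SPRT, the test evaluated above, admits a compact textbook sketch for a Gaussian mean-shift hypothesis test. This is a generic illustration, not the shuttle IMU implementation, which uses mission-specific modeling constants and failure thresholds:

```python
import math

def sprt(xs, mu0, mu1, sigma=1.0, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for H0: mean=mu0 vs H1: mean=mu1
    on Gaussian samples with known sigma. Returns (decision, samples used)."""
    upper = math.log((1 - beta) / alpha)   # cross it -> accept H1 (failure declared)
    lower = math.log(beta / (1 - alpha))   # cross it -> accept H0 (no failure)
    llr = 0.0
    for n, x in enumerate(xs, start=1):
        # log-likelihood-ratio increment for one Gaussian observation
        llr += (mu1 - mu0) / sigma**2 * (x - (mu0 + mu1) / 2)
        if llr >= upper:
            return 'H1', n
        if llr <= lower:
            return 'H0', n
    return 'undecided', len(xs)

# Observations consistent with the failure hypothesis H1 (mean 1):
print(sprt([1.2, 0.9, 1.1, 1.0, 1.3, 0.8], mu0=0.0, mu1=1.0))
```

The appeal over a fixed-interval test, as in the shuttle study, is that the decision comes as soon as the accumulated evidence crosses a boundary rather than after a preset number of samples.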
The parallel-sequential field subtraction technique for coherent nonlinear ultrasonic imaging
NASA Astrophysics Data System (ADS)
Cheng, Jingwei; Potter, Jack N.; Drinkwater, Bruce W.
2018-06-01
Nonlinear imaging techniques have recently emerged which have the potential to detect cracks at a much earlier stage than was previously possible and have sensitivity to partially closed defects. This study explores a coherent imaging technique based on the subtraction of two modes of focusing: parallel, in which the elements are fired together with a delay law, and sequential, in which elements are fired independently. In parallel focusing a high intensity ultrasonic beam is formed in the specimen at the focal point. In sequential focusing, however, only low intensity signals from individual elements enter the sample, and the full matrix of transmit-receive signals is recorded and post-processed to form an image. Under linear elastic assumptions, both parallel and sequential images are expected to be identical. Here we measure the difference between these images and use this to characterise the nonlinearity of small closed fatigue cracks. In particular we monitor the change in relative phase and amplitude at the fundamental frequencies for each focal point and use this nonlinear coherent imaging metric to form images of the spatial distribution of nonlinearity. The results suggest the subtracted image can suppress linear features (e.g. back wall or large scatterers) effectively when instrumentation noise compensation is applied, thereby allowing damage to be detected at an early stage (c. 15% of fatigue life) and reliably quantified in later fatigue life.
Localization of Mobile Robots Using Odometry and an External Vision Sensor
Pizarro, Daniel; Mazo, Manuel; Santiso, Enrique; Marron, Marta; Jimenez, David; Cobreces, Santiago; Losada, Cristina
2010-01-01
This paper presents a sensor system for robot localization based on the information obtained from a single camera attached in a fixed place external to the robot. Our approach firstly obtains the 3D geometrical model of the robot based on the projection of its natural appearance in the camera while the robot performs an initialization trajectory. This paper proposes a structure-from-motion solution that uses the odometry sensors inside the robot as a metric reference. Secondly, an online localization method based on a sequential Bayesian inference is proposed, which uses the geometrical model of the robot as a link between image measurements and pose estimation. The online approach is resistant to hard occlusions and the experimental setup proposed in this paper shows its effectiveness in real situations. The proposed approach has many applications in both the industrial and service robot fields. PMID:22319318
Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam's Window.
Onorante, Luca; Raftery, Adrian E
2016-01-01
Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam's window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods.
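The dynamic Occam's window idea can be sketched for a static model set. This is a deliberately simplified illustration with Gaussian one-step predictive likelihoods: real DMA uses dynamic (state-space) models and allows discarded models to re-enter the window, which this toy version does not.

```python
import math

def gauss_lik(x, mu, sigma=1.0):
    # Gaussian predictive density of observation x under a model predicting mu
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def dma_occam(data, models, alpha=0.99, c=0.01):
    """Sequentially update model probabilities with a forgetting factor alpha,
    keeping only models within a factor c of the best one (Occam's window).
    models: dict name -> predicted mean (fixed here for simplicity)."""
    active = {m: 1.0 / len(models) for m in models}
    for x in data:
        # forgetting step (weights flattened toward uniform), then Bayes update
        w = {m: active[m] ** alpha * gauss_lik(x, models[m]) for m in active}
        z = sum(w.values())
        active = {m: wi / z for m, wi in w.items()}
        # dynamic Occam's window: drop models far below the current best
        wmax = max(active.values())
        active = {m: wi for m, wi in active.items() if wi >= c * wmax}
        z = sum(active.values())
        active = {m: wi / z for m, wi in active.items()}
    return active

models = {'A': 0.0, 'B': 5.0}            # two candidate predictors
print(dma_occam([0.1, -0.2, 0.05, 0.1, 0.0], models))
```

The computational point of the window is visible even here: model B is pruned after the first observation, so later updates touch only the surviving subset rather than the whole model space.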
Remembrance of inferences past: Amortization in human hypothesis generation.
Dasgupta, Ishita; Schulz, Eric; Goodman, Noah D; Gershman, Samuel J
2018-05-21
Bayesian models of cognition assume that people compute probability distributions over hypotheses. However, the required computations are frequently intractable or prohibitively expensive. Since people often encounter many closely related distributions, selective reuse of computations (amortized inference) is a computationally efficient use of the brain's limited resources. We present three experiments that provide evidence for amortization in human probabilistic reasoning. When sequentially answering two related queries about natural scenes, participants' responses to the second query systematically depend on the structure of the first query. This influence is sensitive to the content of the queries, only appearing when the queries are related. Using a cognitive load manipulation, we find evidence that people amortize summary statistics of previous inferences, rather than storing the entire distribution. These findings support the view that the brain trades off accuracy and computational cost, to make efficient use of its limited cognitive resources to approximate probabilistic inference. Copyright © 2018 Elsevier B.V. All rights reserved.
Killiches, Matthias; Czado, Claudia
2018-03-22
We propose a model for unbalanced longitudinal data, where the univariate margins can be selected arbitrarily and the dependence structure is described with the help of a D-vine copula. We show that our approach is an extremely flexible extension of the widely used linear mixed model if the correlation is homogeneous over the considered individuals. As an alternative to joint maximum-likelihood estimation, a sequential estimation approach for the D-vine copula is provided and validated in a simulation study. The model can handle missing values without being forced to discard data. Since conditional distributions are known analytically, we easily make predictions for future events. For model selection, we adjust the Bayesian information criterion to our situation. In an application to heart surgery data our model performs clearly better than competing linear mixed models. © 2018, The International Biometric Society.
Bayesian microsaccade detection
Mihali, Andra; van Opheusden, Bas; Ma, Wei Ji
2017-01-01
Microsaccades are high-velocity fixational eye movements, with special roles in perception and cognition. The default microsaccade detection method is to determine when the smoothed eye velocity exceeds a threshold. We have developed a new method, Bayesian microsaccade detection (BMD), which performs inference based on a simple statistical model of eye positions. In this model, a hidden state variable changes between drift and microsaccade states at random times. The eye position is a biased random walk with different velocity distributions for each state. BMD generates samples from the posterior probability distribution over the eye state time series given the eye position time series. Applied to simulated data, BMD recovers the “true” microsaccades with fewer errors than alternative algorithms, especially at high noise. Applied to EyeLink eye tracker data, BMD detects almost all the microsaccades detected by the default method, but also apparent microsaccades embedded in high noise—although these can also be interpreted as false positives. Next we apply the algorithms to data collected with a Dual Purkinje Image eye tracker, whose higher precision justifies defining the inferred microsaccades as ground truth. When we add artificial measurement noise, the inferences of all algorithms degrade; however, at noise levels comparable to EyeLink data, BMD recovers the “true” microsaccades with 54% fewer errors than the default algorithm. Though unsuitable for online detection, BMD has other advantages: It returns probabilities rather than binary judgments, and it can be straightforwardly adapted as the generative model is refined. We make our algorithm available as a software package. PMID:28114483
Dark sequential Z ' portal: Collider and direct detection experiments
NASA Astrophysics Data System (ADS)
Arcadi, Giorgio; Campos, Miguel D.; Lindner, Manfred; Masiero, Antonio; Queiroz, Farinaldo S.
2018-02-01
We revisit the status of a Majorana fermion as a dark matter candidate when a sequential Z' gauge boson dictates the dark matter phenomenology. Direct dark matter detection signatures rise from dark matter-nucleus scatterings at bubble chamber and liquid xenon detectors, and from the flux of neutrinos from the Sun measured by the IceCube experiment, which is governed by the spin-dependent dark matter-nucleus scattering. On the collider side, LHC searches for dilepton and monojet + missing energy signals play an important role. The relic density and perturbativity requirements are also addressed. By exploiting the dark matter complementarity we outline the region of parameter space where one can successfully have a Majorana dark matter particle in light of current and planned experimental sensitivities.
NASA Astrophysics Data System (ADS)
Hargrave, C.; Moores, M.; Deegan, T.; Gibbs, A.; Poulsen, M.; Harden, F.; Mengersen, K.
2014-03-01
A decision-making framework for image-guided radiotherapy (IGRT) is being developed using a Bayesian Network (BN) to graphically describe, and probabilistically quantify, the many interacting factors that are involved in this complex clinical process. Outputs of the BN will provide decision-support for radiation therapists to assist them to make correct inferences relating to the likelihood of treatment delivery accuracy for a given image-guided set-up correction. The framework is being developed as a dynamic object-oriented BN, allowing for complex modelling with specific subregions, as well as representation of the sequential decision-making and belief updating associated with IGRT. A prototype graphic structure for the BN was developed by analysing IGRT practices at a local radiotherapy department and incorporating results obtained from a literature review. Clinical stakeholders reviewed the BN to validate its structure. The BN consists of a sub-network for evaluating the accuracy of IGRT practices and technology. The directed acyclic graph (DAG) contains nodes and directional arcs representing the causal relationship between the many interacting factors such as tumour site and its associated critical organs, technology and technique, and inter-user variability. The BN was extended to support on-line and off-line decision-making with respect to treatment plan compliance. Following conceptualisation of the framework, the BN will be quantified. It is anticipated that the finalised decision-making framework will provide a foundation to develop better decision-support strategies and automated correction algorithms for IGRT.
NASA Astrophysics Data System (ADS)
Feyen, Luc; Gorelick, Steven M.
2005-03-01
We propose a framework that combines simulation optimization with Bayesian decision analysis to evaluate the worth of hydraulic conductivity data for optimal groundwater resources management in ecologically sensitive areas. A stochastic simulation optimization management model is employed to plan regionally distributed groundwater pumping while preserving the hydroecological balance in wetland areas. Because predictions made by an aquifer model are uncertain, groundwater supply systems operate below maximum yield. Collecting data from the groundwater system can potentially reduce predictive uncertainty and increase safe water production. The price paid for improvement in water management is the cost of collecting the additional data. Efficient data collection using Bayesian decision analysis proceeds in three stages: (1) The prior analysis determines the optimal pumping scheme and profit from water sales on the basis of known information. (2) The preposterior analysis estimates the optimal measurement locations and evaluates whether each sequential measurement will be cost-effective before it is taken. (3) The posterior analysis then revises the prior optimal pumping scheme and consequent profit, given the new information. Stochastic simulation optimization employing a multiple-realization approach is used to determine the optimal pumping scheme in each of the three stages. The cost of new data must not exceed the expected increase in benefit obtained in optimal groundwater exploitation. An example based on groundwater management practices in Florida aimed at wetland protection showed that the cost of data collection more than paid for itself by enabling a safe and reliable increase in production.
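The three-stage logic (prior, preposterior, posterior) can be illustrated with a toy two-state value-of-information calculation. All states, actions, profits, and the measurement accuracy below are invented for illustration and are far simpler than the stochastic multiple-realization optimization used in the paper:

```python
def expected_profit(p_high, profits):
    # best achievable expected profit under the current belief;
    # profits[action] = (profit if conductivity high, profit if low)
    return max(p_high * h + (1 - p_high) * l for h, l in profits.values())

# Hypothetical payoff table: aggressive pumping pays off only if conductivity
# is high; otherwise it damages the wetland. Safe pumping is risk-free.
profits = {'strong_pumping': (100.0, -50.0), 'safe_pumping': (40.0, 40.0)}
p_high = 0.5       # prior P(conductivity = high)
acc = 0.8          # probability the new measurement reports the true state

# (1) Prior analysis: optimal scheme and profit with known information only.
v_prior = expected_profit(p_high, profits)                    # 40.0 (safe pumping)

# (2) Preposterior analysis: expected profit if the measurement is taken first.
p_obs_high = acc * p_high + (1 - acc) * (1 - p_high)          # 0.5 by symmetry
post_if_high = acc * p_high / p_obs_high                      # 0.8
post_if_low = (1 - acc) * p_high / (1 - p_obs_high)           # 0.2
v_prepost = (p_obs_high * expected_profit(post_if_high, profits)
             + (1 - p_obs_high) * expected_profit(post_if_low, profits))

# Expected value of the sample information: the measurement is cost-effective
# only if it costs less than this amount.
evsi = v_prepost - v_prior
print(v_prior, v_prepost, evsi)   # 40.0 55.0 15.0
```

Stage (3), the posterior analysis, is simply the first line re-run with the updated belief once the actual measurement outcome is in hand.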
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spallicci, Alessandro D. A. M., E-mail: spallicci@cnrs-orleans.fr
2013-02-20
Gravitational waves coming from supermassive black hole binaries (SMBHBs) are targeted by both the Pulsar Timing Array (PTA) and Space Laser Interferometry (SLI). The possibility of a single SMBHB being tracked first by PTA, through inspiral, and later by SLI, up to merger and ring-down, has been previously suggested. Although the bounding parameters are drawn by the current PTA or the upcoming Square Kilometer Array (SKA), and by the New Gravitational Observatory (NGO), derived from the Laser Interferometer Space Antenna (LISA), this paper also addresses sequential detection beyond specific project constraints. We consider PTA-SKA, which is sensitive from 10^-9 to p × 10^-7 Hz (p = 4, 8), and SLI, which operates from s × 10^-5 up to 1 Hz (s = 1, 3). An SMBHB in the range of 2 × 10^8 to 2 × 10^9 M⊙ (the masses are normalized to a (1 + z) factor, the redshift lying between z = 0.2 and z = 1.5) moves from the PTA-SKA to the SLI band over a period ranging from two months to fifty years. By combining three supermassive black hole (SMBH)-host relations with three accretion prescriptions, nine astrophysical scenarios are formed. They are then related to three levels of pulsar timing residuals (50, 5, 1 ns), generating 27 cases. For residuals of 1 ns, the sequential detection probability will never be better than 4.7 × 10^-4 yr^-2 or 3.3 × 10^-6 yr^-2 (per year to merger and per year of survey), according to the best and worst astrophysical scenarios, respectively; put differently, this means one sequential detection every 46 or 550 years for an equivalent maximum time to merger and duration of the survey. The chances of sequential detection are further reduced by increasing values of the s parameter (they vanish for s = 10) and of the SLI noise, and by decreasing values of the remnant spin.
The spread in the predictions diminishes when timing precision is improved or the SLI low-frequency cutoff is lowered. So while transit times and the SLI signal-to-noise ratio (S/N) may be adequate, the likelihood of sequential detection is severely hampered by the current estimates on the number-just a handful-of individual inspirals observable by PTA-SKA, and to a lesser extent by the wide gap between the pulsar timing and space interferometry bands, and by the severe requirements on pulsar timing residuals. Optimization of future operational scenarios for SKA and SLI is briefly dealt with, since a detection of even a single event would be of paramount importance for the understanding of SMBHBs and of the astrophysical processes connected to their formation and evolution.« less
Analysis of multinomial models with unknown index using data augmentation
Royle, J. Andrew; Dorazio, R.M.; Link, W.A.
2007-01-01
Multinomial models with unknown index ('sample size') arise in many practical settings. In practice, Bayesian analysis of such models has proved difficult because the dimension of the parameter space is not fixed, being in some cases a function of the unknown index. We describe a data augmentation approach to the analysis of this class of models that provides for a generic and efficient Bayesian implementation. Under this approach, the data are augmented with all-zero detection histories. The resulting augmented dataset is modeled as a zero-inflated version of the complete-data model where an estimable zero-inflation parameter takes the place of the unknown multinomial index. Interestingly, data augmentation can be justified as being equivalent to imposing a discrete uniform prior on the multinomial index. We provide three examples involving estimating the size of an animal population, estimating the number of diabetes cases in a population using the Rasch model, and the motivating example of estimating the number of species in an animal community with latent probabilities of species occurrence and detection.
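The index-as-parameter idea can be sketched directly: with a discrete uniform prior on the unknown index N, the posterior is just the normalized likelihood on a grid, which the paper's all-zero data augmentation reproduces via a zero-inflation parameter. Below is a minimal pure-Python sketch for a capture-recapture (model M0) likelihood with a known detection probability; all numbers are illustrative assumptions, not values from the study.

```python
import math

def log_profile_lik(N, n_detected, T, p):
    """Profile log-likelihood for the unknown index N in capture-recapture
    model M0 with known per-occasion detection probability p. Terms for the
    detected individuals' exact histories are constant in N, leaving
    C(N, n) * ((1 - p)^T)^(N - n)."""
    if N < n_detected:
        return float("-inf")
    return (math.log(math.comb(N, n_detected))
            + T * (N - n_detected) * math.log(1.0 - p))

# Illustrative data: 42 individuals detected at least once over T = 5
# occasions with p = 0.3 (all values invented for the sketch).
n, T, p = 42, 5, 0.3

# Under a discrete uniform prior on N, the posterior is the normalized
# likelihood on a grid; the augmentation trick reaches the same posterior
# by padding the data with all-zero histories and letting an estimable
# zero-inflation parameter stand in for N.
grid = range(n, 301)
post_mode = max(grid, key=lambda N: log_profile_lik(N, n, T, p))
print(post_mode)  # → 50, close to n / (1 - (1 - p)^T) ≈ 50.5
```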
A novel approach to segmentation and measurement of medical image using level set methods.
Chen, Yao-Tien
2017-06-01
The study proposes a novel approach to segmentation and visualization, with value-added surface area and volume measurements, for brain medical image analysis. The proposed method comprises edge detection and Bayesian-based level set segmentation, surface and volume rendering, and surface area and volume measurements for 3D objects of interest (i.e., brain tumor, brain tissue, or whole brain). Two extensions based on edge detection and the Bayesian level set are first used to segment 3D objects. Ray casting and a modified marching cubes algorithm are then adopted to facilitate volume and surface visualization of the medical-image dataset. To provide physicians with more useful information for diagnosis, the surface area and volume of an examined 3D object are calculated by the techniques of linear algebra and surface integration. Experimental results are finally reported in terms of 3D object extraction, surface and volume rendering, and surface area and volume measurements for medical image analysis. Copyright © 2017 Elsevier Inc. All rights reserved.
An, Lihua; Fung, Karen Y; Krewski, Daniel
2010-09-01
Spontaneous adverse event reporting systems are widely used to identify adverse reactions to drugs following their introduction into the marketplace. In this article, a James-Stein type shrinkage estimation strategy was developed in a Bayesian logistic regression model to analyze pharmacovigilance data. This method is effective in detecting signals as it combines information and borrows strength across medically related adverse events. Computer simulation demonstrated that the shrinkage estimator is uniformly better than the maximum likelihood estimator in terms of mean squared error. This method was used to investigate the possible association of a series of diabetic drugs and the risk of cardiovascular events using data from the Canada Vigilance Online Database.
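The borrowing-strength mechanism can be illustrated with the classic James-Stein positive-part estimator, which shrinks a set of related adverse-event log-odds estimates toward their common mean; the paper's full Bayesian logistic-regression embedding is not reproduced here, and all numbers are invented for illustration.

```python
def james_stein_toward_mean(estimates, se):
    """Positive-part James-Stein shrinkage of k >= 4 estimates with a
    common standard error toward their grand mean. Related adverse-event
    log-odds are pulled together, trading a little bias for lower MSE."""
    k = len(estimates)
    ybar = sum(estimates) / k
    s = sum((y - ybar) ** 2 for y in estimates)
    shrink = max(0.0, 1.0 - (k - 3) * se ** 2 / s)  # positive-part factor
    return [ybar + shrink * (y - ybar) for y in estimates]

# Illustrative log-odds estimates for medically related adverse events.
raw = [0.8, 1.1, 0.2, 0.5, 0.9, 0.4]
shrunk = james_stein_toward_mean(raw, se=0.2)
```

Each shrunken estimate lies between the raw estimate and the grand mean, and the grand mean itself is preserved.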
Phylogenetic Analyses: A Toolbox Expanding towards Bayesian Methods
Aris-Brosou, Stéphane; Xia, Xuhua
2008-01-01
The reconstruction of phylogenies is becoming an increasingly simple activity. This is mainly due to two reasons: the democratization of computing power and the increased availability of sophisticated yet user-friendly software. This review describes some of the latest additions to the phylogenetic toolbox, along with some of their theoretical and practical limitations. It is shown that Bayesian methods are under heavy development, as they offer the possibility to solve a number of long-standing issues and to integrate several steps of the phylogenetic analyses into a single framework. Specific topics include not only phylogenetic reconstruction, but also the comparison of phylogenies, the detection of adaptive evolution, and the estimation of divergence times between species. PMID:18483574
Nichols, J.M.; Link, W.A.; Murphy, K.D.; Olson, C.C.
2010-01-01
This work discusses a Bayesian approach to approximating the distribution of parameters governing nonlinear structural systems. Specifically, we use a Markov Chain Monte Carlo method for sampling the posterior parameter distributions thus producing both point and interval estimates for parameters. The method is first used to identify both linear and nonlinear parameters in a multiple degree-of-freedom structural systems using free-decay vibrations. The approach is then applied to the problem of identifying the location, size, and depth of delamination in a model composite beam. The influence of additive Gaussian noise on the response data is explored with respect to the quality of the resulting parameter estimates.
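The sampling step can be sketched with a random-walk Metropolis chain on a toy one-parameter "structural" model with Gaussian measurement noise; the model form, noise level, and tuning below are assumptions for illustration, not the paper's beam or delamination models.

```python
import math
import random

random.seed(1)

# Toy data: response y = theta_true * x plus Gaussian noise, standing in
# for a structural model response (theta_true and sigma are assumptions).
theta_true, sigma = 2.0, 0.3
xs = [0.1 * i for i in range(1, 21)]
ys = [theta_true * x + random.gauss(0.0, sigma) for x in xs]

def log_post(theta):
    # Flat prior; Gaussian likelihood with known noise level.
    return -sum((y - theta * x) ** 2 for x, y in zip(xs, ys)) / (2 * sigma ** 2)

# Random-walk Metropolis: the chain yields posterior samples, giving both
# point (mean) and interval (percentile) estimates for the parameter.
theta, samples = 1.0, []
for _ in range(20000):
    prop = theta + random.gauss(0.0, 0.1)
    if math.log(random.random()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)
post_mean = sum(samples[5000:]) / len(samples[5000:])
```

With the burn-in discarded, the posterior mean lands near the true parameter value.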
Reliability of a Bayesian network to predict an elevated aldosterone-to-renin ratio.
Ducher, Michel; Mounier-Véhier, Claire; Lantelme, Pierre; Vaisse, Bernard; Baguet, Jean-Philippe; Fauvel, Jean-Pierre
2015-05-01
Resistant hypertension is common, mainly idiopathic, but sometimes related to primary aldosteronism. Thus, most hypertension specialists recommend screening for primary aldosteronism. To optimize the selection of patients whose aldosterone-to-renin ratio (ARR) is elevated from simple clinical and biological characteristics. Data from consecutive patients referred between 1 June 2008 and 30 May 2009 were collected retrospectively from five French 'European excellence hypertension centres' institutional registers. Patients were included if they had at least one of: onset of hypertension before age 40 years, resistant hypertension, history of hypokalaemia, efficient treatment by spironolactone, and potassium supplementation. An ARR>32 ng/L and aldosterone>160 ng/L in patients treated without agents altering the renin-angiotensin system was considered as elevated. Bayesian network and stepwise logistic regression were used to predict an elevated ARR. Of 334 patients, 89 were excluded (31 for incomplete data, 32 for taking agents that alter the renin-angiotensin system and 26 for other reasons). Among 245 included patients, 110 had an elevated ARR. Sensitivity reached 100% or 63.3% using Bayesian network or logistic regression, respectively, and specificity reached 89.6% or 67.2%, respectively. The area under the receiver-operating-characteristic curve obtained with the Bayesian network was significantly higher than that obtained by stepwise regression (0.93±0.02 vs. 0.70±0.03; P<0.001). In hypertension centres, Bayesian network efficiently detected patients with an elevated ARR. An external validation study is required before use in primary clinical settings. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
Avery, Taliser R; Kulldorff, Martin; Vilk, Yury; Li, Lingling; Cheetham, T Craig; Dublin, Sascha; Davis, Robert L; Liu, Liyan; Herrinton, Lisa; Brown, Jeffrey S
2013-05-01
This study describes practical considerations for implementation of near real-time medical product safety surveillance in a distributed health data network. We conducted pilot active safety surveillance comparing generic divalproex sodium to historical branded product at four health plans from April to October 2009. Outcomes reported are all-cause emergency room visits and fractures. One retrospective data extract was completed (January 2002-June 2008), followed by seven prospective monthly extracts (January 2008-November 2009). To evaluate delays in claims processing, we used three analytic approaches: near real-time sequential analysis, sequential analysis with 1.5 month delay, and nonsequential (using final retrospective data). Sequential analyses used the maximized sequential probability ratio test. Procedural and logistical barriers to active surveillance were documented. We identified 6586 new users of generic divalproex sodium and 43,960 new users of the branded product. Quality control methods identified 16 extract errors, which were corrected. Near real-time extracts captured 87.5% of emergency room visits and 50.0% of fractures, which improved to 98.3% and 68.7% respectively with 1.5 month delay. We did not identify signals for either outcome regardless of extract timeframe, and slight differences in the test statistic and relative risk estimates were found. Near real-time sequential safety surveillance is feasible, but several barriers warrant attention. Data quality review of each data extract was necessary. Although signal detection was not affected by delay in analysis, when using a historical control group differential accrual between exposure and outcomes may theoretically bias near real-time risk estimates towards the null, causing failure to detect a signal. Copyright © 2013 John Wiley & Sons, Ltd.
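The maximized sequential probability ratio test used in these analyses has a closed-form statistic in the Poisson case. A sketch with assumed numbers follows; in practice the critical value comes from exact calculations for the chosen surveillance plan, not the placeholder used here.

```python
import math

def poisson_maxsprt_llr(observed, expected):
    """Poisson maxSPRT statistic: the log-likelihood ratio maximized over
    relative risks RR >= 1, which is nonzero only when the observed count
    exceeds the count expected under the historical comparator."""
    if observed <= expected:
        return 0.0
    return observed * math.log(observed / expected) - (observed - expected)

# Monitor cumulative counts after each monthly data extract; signal when
# the statistic crosses a critical value (an assumed placeholder here).
CRITICAL = 3.0
cumulative = [(4, 2.1), (9, 5.0), (15, 6.2)]  # (observed, expected) pairs
signals = [poisson_maxsprt_llr(o, e) > CRITICAL for o, e in cumulative]
```

In this invented sequence the statistic stays below the threshold for the first two looks and signals on the third.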
Multi-Detection Events, Probability Density Functions, and Reduced Location Area
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eslinger, Paul W.; Schrom, Brian T.
2016-03-01
Several efforts have been made in the Comprehensive Nuclear-Test-Ban Treaty (CTBT) community to assess the benefits of combining detections of radionuclides to improve the location estimates available from atmospheric transport modeling (ATM) backtrack calculations. We present a Bayesian estimation approach rather than a simple dilution field of regard approach to allow xenon detections and non-detections to be combined mathematically. This system represents one possible probabilistic approach to radionuclide event formation. Application of this method to a recent interesting radionuclide event shows a substantial reduction in the location uncertainty of that event.
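The combination rule is ordinary Bayes over a grid of candidate source locations: a detection multiplies in the modeled detection probability, a non-detection multiplies in its complement. A toy sketch with invented numbers (real per-cell detection probabilities would come from the ATM backtrack fields):

```python
# Grid Bayes update combining a xenon detection at one station with a
# non-detection at another. The probabilities below are purely
# illustrative stand-ins for ATM-derived values.
cells = ["A", "B", "C"]
prior = {c: 1.0 / 3 for c in cells}
p_det_station1 = {"A": 0.70, "B": 0.30, "C": 0.05}  # detection observed
p_det_station2 = {"A": 0.60, "B": 0.10, "C": 0.40}  # non-detection observed

unnorm = {c: prior[c] * p_det_station1[c] * (1.0 - p_det_station2[c])
          for c in cells}
z = sum(unnorm.values())
posterior = {c: v / z for c, v in unnorm.items()}
```

Cells that explain both observations retain most of the posterior mass, shrinking the credible source region relative to the uniform prior.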
Too Cool for Stellar Rules: A Bayesian Exploration of Trends in Ultracool Magnetism
NASA Astrophysics Data System (ADS)
Cruz, Kelle L.; Schwab, Ellianna; Williams, Peter K. G.; Hogg, David W.; Rodriguez, David R.; BDNYC
2017-01-01
Ultracool dwarfs, the lowest mass red dwarfs and brown dwarfs (spectral types M7-Y9), are fully convective objects with electrically neutral atmospheres due to their extremely cool temperatures (500-3000 K). Radio observations of ultracool dwarfs indicate the presence of magnetic field strengths on the order of ~kG; however, the dynamo driving these fields is not fully understood. To better understand ultracool dwarf magnetic behavior, we analyze photometric radio detections of 196 dwarfs (spectral types M7-T8), observed in the 4.5-8.5 GHz range on the Karl G. Jansky Very Large Array (VLA) and the Australia Telescope Compact Array (ATCA). The measurements in our sample are mostly upper limits, along with a small percentage of confirmed detections. The detections have both large uncertainties and high intrinsic scatter. Using Bayesian analysis to take full advantage of the information available in these inherently uncertain measurements, we search for trends in radio luminosity as a function of several fundamental parameters: spectral type, effective temperature, and rotation rate. In this poster, we present the preliminary results of our efforts to investigate the possibility of subpopulations with different magnetic characteristics using Gaussian mixture models.
NASA Astrophysics Data System (ADS)
Santra, Tapesh; Delatola, Eleni Ioanna
2016-07-01
Presence of considerable noise and missing data points make analysis of mass-spectrometry (MS) based proteomic data a challenging task. The missing values in MS data are caused by the inability of MS machines to reliably detect proteins whose abundances fall below the detection limit. We developed a Bayesian algorithm that exploits this knowledge and uses missing data points as a complementary source of information to the observed protein intensities in order to find differentially expressed proteins by analysing MS based proteomic data. We compared its accuracy with many other methods using several simulated datasets. It consistently outperformed other methods. We then used it to analyse proteomic screens of a breast cancer (BC) patient cohort. It revealed large differences between the proteomic landscapes of triple negative and Luminal A, which are the most and least aggressive types of BC. Unexpectedly, majority of these differences could be attributed to the direct transcriptional activity of only seven transcription factors some of which are known to be inactive in triple negative BC. We also identified two new proteins which significantly correlated with the survival of BC patients, and therefore may have potential diagnostic/prognostic values.
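The key modelling move, treating a missing intensity as a value censored below the detection limit rather than discarding it, can be sketched with a censored Gaussian likelihood on log-intensities. All values, grids, and the noise level below are illustrative assumptions, not the paper's model.

```python
import math

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def censored_loglik(mu, sigma, observed, n_missing, limit):
    """Log-likelihood of observed log-intensities plus missing values
    treated as censored below the detection limit: each missing value
    contributes P(X < limit | mu, sigma) instead of being discarded."""
    ll = sum(-0.5 * ((y - mu) / sigma) ** 2
             - math.log(sigma * math.sqrt(2.0 * math.pi))
             for y in observed)
    ll += n_missing * math.log(normal_cdf((limit - mu) / sigma))
    return ll

# Missing points carry information: with the same observed values, a
# condition with more missing measurements gets a lower MAP abundance.
obs = [5.2, 5.8, 6.1]
grid = [i * 0.01 for i in range(300, 801)]
mu_few = max(grid, key=lambda m: censored_loglik(m, 0.8, obs, 1, limit=4.0))
mu_many = max(grid, key=lambda m: censored_loglik(m, 0.8, obs, 6, limit=4.0))
```

The extra censored terms pull the estimate for the second condition down, which is exactly the complementary information a discard-and-average analysis throws away.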
NASA Astrophysics Data System (ADS)
Anderson, Christian Carl
This dissertation explores the physics underlying the propagation of ultrasonic waves in bone and in heart tissue through the use of Bayesian probability theory. Quantitative ultrasound is a noninvasive modality used for clinical detection, characterization, and evaluation of bone quality and cardiovascular disease. Approaches that extend the state of knowledge of the physics underpinning the interaction of ultrasound with inherently inhomogeneous and anisotropic tissue have the potential to enhance its clinical utility. Simulations of fast and slow compressional wave propagation in cancellous bone were carried out to demonstrate the plausibility of a proposed explanation for the widely reported anomalous negative dispersion in cancellous bone. The results showed that negative dispersion could arise from analysis that proceeded under the assumption that the data consist of only a single ultrasonic wave, when in fact two overlapping and interfering waves are present. The confounding effect of overlapping fast and slow waves was addressed by applying Bayesian parameter estimation to simulated data, to experimental data acquired on bone-mimicking phantoms, and to data acquired in vitro on cancellous bone. The Bayesian approach successfully estimated the properties of the individual fast and slow waves even when they strongly overlapped in the acquired data. The Bayesian parameter estimation technique was further applied to an investigation of the anisotropy of ultrasonic properties in cancellous bone. The degree to which fast and slow waves overlap is partially determined by the angle of insonation of ultrasound relative to the predominant direction of trabecular orientation. In the past, studies of anisotropy have been limited by interference between fast and slow waves over a portion of the range of insonation angles.
Bayesian analysis estimated attenuation, velocity, and amplitude parameters over the entire range of insonation angles, allowing a more complete characterization of anisotropy. A novel piecewise linear model for the cyclic variation of ultrasonic backscatter from myocardium was proposed. Models of cyclic variation for 100 type 2 diabetes patients and 43 normal control subjects were constructed using Bayesian parameter estimation. Parameters determined from the model, specifically rise time and slew rate, were found to be more reliable in differentiating between subject groups than the previously employed magnitude parameter.
Dopamine reward prediction-error signalling: a two-component response
Schultz, Wolfram
2017-01-01
Environmental stimuli and objects, including rewards, are often processed sequentially in the brain. Recent work suggests that the phasic dopamine reward prediction-error response follows a similar sequential pattern. An initial brief, unselective and highly sensitive increase in activity unspecifically detects a wide range of environmental stimuli, then quickly evolves into the main response component, which reflects subjective reward value and utility. This temporal evolution allows the dopamine reward prediction-error signal to optimally combine speed and accuracy. PMID:26865020
Howard Stauffer; Nadav Nur
2005-01-01
The papers included in the Advances in Statistics section of the Partners in Flight (PIF) 2002 Proceedings represent a small sample of statistical topics of current importance to Partners In Flight research scientists: hierarchical modeling, estimation of detection probabilities, and Bayesian applications. Sauer et al. (this volume) examines a hierarchical model...
Veturi, Yogasudha; Ritchie, Marylyn D
2018-01-01
Transcriptome-wide association studies (TWAS) have recently been employed as an approach that can draw upon the advantages of genome-wide association studies (GWAS) and gene expression studies to identify genes associated with complex traits. Unlike standard GWAS, summary-level data suffices for TWAS and offers improved statistical power. Two popular TWAS methods include either (a) imputing the cis genetic component of gene expression from smaller-sized studies (using multi-SNP prediction, or MP) into the much larger effective sample sizes afforded by GWAS (TWAS-MP), or (b) using summary-based Mendelian randomization (TWAS-SMR). Although these methods have been effective at detecting functional variants, it remains unclear how extensive variability in the genetic architecture of complex traits and diseases impacts TWAS results. Our goal was to investigate the different scenarios under which these methods yielded enough power to detect significant expression-trait associations. In this study, we conducted extensive simulations based on 6000 randomly chosen, unrelated Caucasian males from Geisinger's MyCode population to compare the power to detect cis expression-trait associations (within 500 kb of a gene) using the above-described approaches. To test TWAS across varying genetic backgrounds, we simulated gene expression and phenotype using different quantitative trait loci per gene and cis-expression/trait heritability under genetic models that differentiate the effect of causality from that of pleiotropy. For each gene, on a training set ranging from 100 to 1000 individuals, we either (a) estimated regression coefficients with gene expression as the response using five different methods: LASSO, elastic net, Bayesian LASSO, Bayesian spike-slab, and Bayesian ridge regression, or (b) performed eQTL analysis.
We then sampled with replacement 50,000, 150,000, and 300,000 individuals respectively from the testing set of the remaining 5000 individuals and conducted GWAS on each set. Subsequently, we integrated the GWAS summary statistics derived from the testing set with the weights (or eQTLs) derived from the training set to identify expression-trait associations using (a) TWAS-MP, (b) TWAS-SMR, (c) eQTL-based GWAS, or (d) standalone GWAS. Finally, we examined the power to detect functionally relevant genes using the different approaches under the considered simulation scenarios. In general, we observed great similarities among TWAS-MP methods, although the Bayesian methods resulted in improved power in comparison to LASSO and elastic net as the trait architecture grew more complex while training sample sizes and expression heritability remained small. Finally, we observed high power under causality but very low to moderate power under pleiotropy.
NASA Astrophysics Data System (ADS)
Varouchakis, Emmanouil; Hristopulos, Dionissios
2015-04-01
Space-time geostatistical approaches can improve the reliability of dynamic groundwater level models in areas with limited spatial and temporal data. Space-time residual Kriging (STRK) is a reliable method for spatiotemporal interpolation that can incorporate auxiliary information. The method usually leads to an underestimation of the prediction uncertainty. The uncertainty of spatiotemporal models is usually estimated by determining the space-time Kriging variance or by means of cross validation analysis. For de-trended data the former is not usually applied when complex spatiotemporal trend functions are assigned. A Bayesian approach based on the bootstrap idea and sequential Gaussian simulation are employed to determine the uncertainty of the spatiotemporal model (trend and covariance) parameters. These stochastic modelling approaches produce multiple realizations, rank the prediction results on the basis of specified criteria and capture the range of the uncertainty. The correlation of the spatiotemporal residuals is modeled using a non-separable space-time variogram based on the Spartan covariance family (Hristopulos and Elogne 2007, Varouchakis and Hristopulos 2013). We apply these simulation methods to investigate the uncertainty of groundwater level variations. The available dataset consists of bi-annual (dry and wet hydrological period) groundwater level measurements in 15 monitoring locations for the time period 1981 to 2010. The space-time trend function is approximated using a physical law that governs the groundwater flow in the aquifer in the presence of pumping. The main objective of this research is to compare the performance of two simulation methods for prediction uncertainty estimation. In addition, we investigate the performance of the Spartan spatiotemporal covariance function for spatiotemporal geostatistical analysis. Hristopulos, D.T. and Elogne, S.N. 2007. 
Analytic properties and covariance functions for a new class of generalized Gibbs random fields. IEEE Transactions on Information Theory, 53:4667-4679. Varouchakis, E.A. and Hristopulos, D.T. 2013. Improvement of groundwater level prediction in sparsely gauged basins using physical laws and local geographic features as auxiliary variables. Advances in Water Resources, 52:34-49. Research supported by the project SPARTA 1591: "Development of Space-Time Random Fields based on Local Interaction Models and Applications in the Processing of Spatiotemporal Datasets". "SPARTA" is implemented under the "ARISTEIA" Action of the operational programme Education and Lifelong Learning and is co-funded by the European Social Fund (ESF) and National Resources.
Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H
2017-03-01
To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples (thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities) to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity.
Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.
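The reported pull toward 50% under the normal approximation is easy to see in a single small study: the logit-scale (normal-likelihood) route needs a 0.5 continuity correction when a cell count is small, while the exact binomial MLE does not. A sketch with invented counts:

```python
import math

# One small study: 19 true positives, 1 false negative (sensitivity 0.95).
tp, fn = 19, 1
n = tp + fn

# Normal-likelihood route: work on the logit scale, which requires a 0.5
# continuity correction whenever a cell is small or zero; the corrected
# point estimate is pulled toward 50%.
p_corrected = (tp + 0.5) / (n + 1.0)
se_logit = math.sqrt(1.0 / (tp + 0.5) + 1.0 / (fn + 0.5))

# Binomial-likelihood route: the exact MLE needs no correction.
p_binomial = tp / n
```

Here the corrected estimate drops from 0.95 to roughly 0.93, a per-study bias that accumulates across a meta-analysis dominated by small studies.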
OGLE-2008-BLG-355Lb: A massive planet around a late-type star
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koshimoto, N.; Sumi, T.; Fukagawa, M.
2014-06-20
We report the discovery of a massive planet, OGLE-2008-BLG-355Lb. The light curve analysis indicates a planet:host mass ratio of q = 0.0118 ± 0.0006 at a separation of 0.877 ± 0.010 Einstein radii. We do not measure a significant microlensing parallax signal and do not have high angular resolution images that could detect the planetary host star. Therefore, we do not have a direct measurement of the host star mass. A Bayesian analysis, assuming that all host stars have equal probability to host a planet with the measured mass ratio, implies a host star mass of M_h = 0.37^{+0.30}_{-0.17} M_☉ and a companion of mass M_P = 4.6^{+3.7}_{-2.2} M_J, at a projected separation of r_⊥ = 1.70^{+0.29}_{-0.30} AU. The implied distance to the planetary system is D_L = 6.8 ± 1.1 kpc. A planetary system with the properties preferred by the Bayesian analysis may be a challenge to the core accretion model of planet formation, as the core accretion model predicts that massive planets are far more likely to form around more massive host stars. This core accretion model prediction is not consistent with our Bayesian prior of an equal probability of host stars of all masses to host a planet with the measured mass ratio. Thus, if the core accretion model prediction is right, we should expect that follow-up high angular resolution observations will detect a host star with a mass in the upper part of the range allowed by the Bayesian analysis. That is, the host would probably be a K or G dwarf.
NASA Astrophysics Data System (ADS)
Rumetshofer, M.; Heim, P.; Thaler, B.; Ernst, W. E.; Koch, M.; von der Linden, W.
2018-06-01
Ultrafast dynamical processes in photoexcited molecules can be observed with pump-probe measurements, in which information about the dynamics is obtained from the transient signal associated with the excited state. Background signals provoked by pump and/or probe pulses alone often obscure these excited-state signals. Simple subtraction of pump-only and/or probe-only measurements from the pump-probe measurement, as commonly applied, results in a degradation of the signal-to-noise ratio and, in the case of coincidence detection, the danger of overrated background subtraction. Coincidence measurements additionally suffer from false coincidences, requiring long data-acquisition times to keep erroneous signals at an acceptable level. Here we present a probabilistic approach based on Bayesian probability theory that overcomes these problems. For a pump-probe experiment with photoelectron-photoion coincidence detection, we reconstruct the interesting excited-state spectrum from pump-probe and pump-only measurements. This approach allows us to treat background and false coincidences consistently and on the same footing. We demonstrate that the Bayesian formalism has the following advantages over simple signal subtraction: (i) the signal-to-noise ratio is significantly increased, (ii) the pump-only contribution is not overestimated, (iii) false coincidences are excluded, (iv) prior knowledge, such as positivity, is consistently incorporated, (v) confidence intervals are provided for the reconstructed spectrum, and (vi) it is applicable to any experimental situation and noise statistics. Most importantly, by accounting for false coincidences, the Bayesian approach allows us to run experiments at higher ionization rates, resulting in a significant reduction of data acquisition times. The probabilistic approach is thoroughly scrutinized by challenging mock data. 
The application to pump-probe coincidence measurements on acetone molecules enables quantitative interpretations about the molecular decay dynamics and fragmentation behavior. All results underline the superiority of a consistent probabilistic approach over ad hoc estimations.
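The advantage over simple subtraction can be sketched for a single energy bin with Poisson counts: a grid posterior over (signal, background) with a positivity constraint yields a signal estimate that can never go negative, unlike the raw difference of pump-probe and pump-only counts. All counts and grids below are illustrative; the paper's full treatment also models false coincidences.

```python
import math

def log_poisson(n, lam):
    return n * math.log(lam) - lam - math.lgamma(n + 1)

# Counts in one photoelectron-energy bin (invented numbers): the
# pump-probe measurement sees signal + background, while the pump-only
# measurement sees background alone (equal acquisition times assumed).
n_pp, n_po = 12, 9

# Joint grid posterior over signal s >= 0 and background b > 0 with flat
# priors; marginalizing over b gives the excited-state signal posterior.
s_grid = [0.25 * i for i in range(0, 121)]
b_grid = [0.25 * j for j in range(1, 121)]
post_s = []
for s in s_grid:
    w = sum(math.exp(log_poisson(n_pp, s + b) + log_poisson(n_po, b))
            for b in b_grid)
    post_s.append(w)
z = sum(post_s)
mean_s = sum(s * w for s, w in zip(s_grid, post_s)) / z
```

Because positivity is built into the prior support, the posterior mean stays physical even when statistical fluctuations would drive the subtracted estimate below zero.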
Mezlini, Aziz M; Goldenberg, Anna
2017-10-01
Discovering genetic mechanisms driving complex diseases is a hard problem. Existing methods often lack power to identify the set of responsible genes. Protein-protein interaction networks have been shown to boost power when detecting gene-disease associations. We introduce a Bayesian framework, Conflux, to find disease associated genes from exome sequencing data using networks as a prior. There are two main advantages to using networks within a probabilistic graphical model. First, networks are noisy and incomplete, a substantial impediment to gene discovery. Incorporating networks into the structure of a probabilistic models for gene inference has less impact on the solution than relying on the noisy network structure directly. Second, using a Bayesian framework we can keep track of the uncertainty of each gene being associated with the phenotype rather than returning a fixed list of genes. We first show that using networks clearly improves gene detection compared to individual gene testing. We then show consistently improved performance of Conflux compared to the state-of-the-art diffusion network-based method Hotnet2 and a variety of other network and variant aggregation methods, using randomly generated and literature-reported gene sets. We test Hotnet2 and Conflux on several network configurations to reveal biases and patterns of false positives and false negatives in each case. Our experiments show that our novel Bayesian framework Conflux incorporates many of the advantages of the current state-of-the-art methods, while offering more flexibility and improved power in many gene-disease association scenarios.
NASA Astrophysics Data System (ADS)
Kushida, N.; Kebede, F.; Feitio, P.; Le Bras, R.
2016-12-01
The Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) has been developing and testing NET-VISA (Arora et al., 2013), a Bayesian automatic event detection and localization program, and evaluating its performance in a realistic operational mode. In our preliminary testing at the CTBTO, NET-VISA shows better performance than its currently operating automatic localization program. However, given CTBTO's role and its international context, a new technology should be introduced cautiously when it replaces a key piece of the automatic processing. We integrated the results of NET-VISA into the Analyst Review Station, extensively used by the analysts, so that they can check the accuracy and robustness of the Bayesian approach. We expect the workload of the analysts to be reduced because of the better performance of NET-VISA in finding missed events and getting a more complete set of stations than the current system, which has been operating for nearly twenty years. The results of a series of tests indicate that the expectations arising from the automatic tests, which show an overall overlap improvement of 11%, meaning that the missed-events rate is cut by 42%, hold for the integrated interactive module as well. New events are found by analysts, which qualify for the CTBTO Reviewed Event Bulletin, beyond the ones analyzed through the standard procedures. Arora, N., Russell, S., and Sudderth, E., NET-VISA: Network Processing Vertically Integrated Seismic Analysis, 2013, Bull. Seismol. Soc. Am., 103, 709-729.
Tsai, Tsung-Heng; Tadesse, Mahlet G.; Di Poto, Cristina; Pannell, Lewis K.; Mechref, Yehia; Wang, Yue; Ressom, Habtom W.
2013-01-01
Motivation: Liquid chromatography-mass spectrometry (LC-MS) has been widely used for profiling expression levels of biomolecules in various ‘-omic’ studies including proteomics, metabolomics and glycomics. Appropriate LC-MS data preprocessing steps are needed to detect true differences between biological groups. Retention time (RT) alignment, which is required to ensure that ion intensity measurements among multiple LC-MS runs are comparable, is one of the most important yet challenging preprocessing steps. Current alignment approaches estimate RT variability using either single chromatograms or detected peaks, but do not simultaneously take into account the complementary information embedded in the entire LC-MS data. Results: We propose a Bayesian alignment model for LC-MS data analysis. The alignment model provides estimates of the RT variability along with uncertainty measures. The model enables integration of multiple sources of information including internal standards and clustered chromatograms in a mathematically rigorous framework. We apply the model to LC-MS metabolomic, proteomic and glycomic data. The performance of the model is evaluated based on ground-truth data, by measuring the coefficient of variation, RT difference across runs and peak-matching performance. We demonstrate that the Bayesian alignment model significantly improves RT alignment performance through appropriate integration of relevant information. Availability and implementation: MATLAB code, raw and preprocessed LC-MS data are available at http://omics.georgetown.edu/alignLCMS.html Contact: hwr@georgetown.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24013927
A Novel Ship-Tracking Method for GF-4 Satellite Sequential Images.
Yao, Libo; Liu, Yong; He, You
2018-06-22
The geostationary remote sensing satellite has the capability of wide scanning, persistent observation and operational response, and has tremendous potential for maritime target surveillance. The GF-4 satellite is the first geostationary orbit (GEO) optical remote sensing satellite with medium resolution in China. In this paper, a novel ship-tracking method for GF-4 satellite sequential imagery is proposed. The algorithm has three stages. First, a local visual saliency map based on the local peak signal-to-noise ratio (PSNR) is used to detect ships in a single frame of GF-4 satellite sequential images. Second, accurate positioning of each potential target is achieved through a dynamic correction using the rational polynomial coefficients (RPCs) and automatic identification system (AIS) data of ships. Finally, an improved multiple hypothesis tracking (MHT) algorithm with amplitude information is used to track ships by further removing false targets, and to estimate the ships’ motion parameters. The algorithm has been tested using GF-4 sequential images and AIS data. The results of the experiment demonstrate that the algorithm achieves good tracking performance on GF-4 satellite sequential images and estimates the motion information of ships accurately.
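The single-frame saliency stage can be illustrated with a toy sketch. The score below is a simplified local signal-to-background ratio in decibels, a stand-in for the paper's local-PSNR measure (the authors' exact definition differs); the image, window size and background estimator are illustrative assumptions.

```python
import math
from statistics import median

def saliency_map(img, win=1):
    """Toy saliency map in the spirit of a local PSNR measure: score each
    pixel by the squared deviation of its local window mean from the global
    background level, relative to the background variance, in decibels.
    (A stand-in for the paper's local-PSNR definition, which differs.)"""
    h, w = len(img), len(img[0])
    flat = [v for row in img for v in row]
    bg = median(flat)                                # robust background level
    var = sum((v - bg) ** 2 for v in flat) / len(flat)
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[a][b]
                    for a in range(max(0, i - win), min(h, i + win + 1))
                    for b in range(max(0, j - win), min(w, j + win + 1))]
            m = sum(vals) / len(vals)
            out[i][j] = 10 * math.log10((m - bg) ** 2 / var + 1)
    return out

# A flat 7x7 "sea" with one bright "ship" pixel: the ship region scores highest.
sea = [[1.0] * 7 for _ in range(7)]
sea[3][3] = 10.0
sal = saliency_map(sea)
```

Thresholding such a map gives candidate detections for the later positioning and tracking stages.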
Astrometry and exoplanets in the Gaia era: a Bayesian approach to detection and parameter recovery
NASA Astrophysics Data System (ADS)
Ranalli, P.; Hobbs, D.; Lindegren, L.
2018-06-01
The Gaia mission is expected to make a significant contribution to the knowledge of exoplanet systems, both in terms of their number and of their physical properties. We develop Bayesian methods and detection criteria for orbital fitting, and revise the detectability of exoplanets in light of the in-flight properties of Gaia. Limiting ourselves to one-planet systems as a first step of the development, we simulate Gaia data for exoplanet systems over a grid of S/N, orbital period, and eccentricity. The simulations are then fit using Markov chain Monte Carlo methods. We investigate the detection rate according to three information criteria and the Δχ2. For the Δχ2, the effective number of degrees of freedom depends on the mission length. We find that the choice of the Markov chain starting point can affect the quality of the results; we therefore consider two limiting cases: an ideal case, and a very simple method that finds the starting point assuming circular orbits. We use 6644 and 4402 simulations to assess the fraction of false positive detections in a 5 yr and in a 10 yr mission, respectively; and 4968 and 4706 simulations to assess the detection rate and how the parameters are recovered. Using Jeffreys' scale of evidence, the fraction of false positives passing a strong evidence criterion is ≲0.2% (0.6%) when considering a 5 yr (10 yr) mission and using the Akaike information criterion or the Watanabe-Akaike information criterion, and <0.02% (<0.06%) when using the Bayesian information criterion. We find that there is a 50% chance of detecting a planet with a minimum S/N = 2.3 (1.7). This sets the maximum distance to which a planet is detectable to 70 pc and 3.5 pc for Jupiter-mass and Neptune-mass planets, respectively, assuming a 10 yr mission, a 4 au semi-major axis, and a 1 M⊙ star. We show the distribution of the accuracy and precision with which orbital parameters are recovered.
The period is the orbital parameter that can be determined with the best accuracy, with a median relative difference between input and output periods of 4.2% (2.9%) assuming a 5 yr (10 yr) mission. The semi-major axis of the orbit can be recovered with a median relative error of 7% (6%). The eccentricity can also be recovered, with a median absolute error of 0.07 (0.06).
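The information-criterion comparison described above can be sketched on synthetic data. Assuming Gaussian errors of known sigma, AIC and BIC reduce to chi-squared plus a parameter penalty (up to a model-independent constant); the sinusoidal "orbit" signal, period, amplitude and noise level below are illustrative assumptions, not the paper's astrometric model.

```python
import math, random

def chi2(y, model, sigma):
    """Chi-squared of data y against a model, Gaussian errors of known sigma."""
    return sum((yi - mi) ** 2 / sigma ** 2 for yi, mi in zip(y, model))

# Synthetic signal: a sinusoid standing in for an orbital signature.
random.seed(1)
n, sigma, period = 200, 1.0, 3.0
t = [i * 0.05 for i in range(n)]
y = [2.0 * math.sin(2 * math.pi * ti / period) + random.gauss(0, sigma)
     for ti in t]

# Model 0: no planet (constant level, 1 parameter).
mean = sum(y) / n
c0 = chi2(y, [mean] * n, sigma)

# Model 1: known-period sinusoid with fitted amplitude (1 parameter).
basis = [math.sin(2 * math.pi * ti / period) for ti in t]
amp = sum(b * yi for b, yi in zip(basis, y)) / sum(b * b for b in basis)
c1 = chi2(y, [amp * b for b in basis], sigma)

# With Gaussian errors of known sigma, AIC = chi2 + 2k and
# BIC = chi2 + k*ln(n), up to a shared constant; lower is better.
k0 = k1 = 1
aic0, aic1 = c0 + 2 * k0, c1 + 2 * k1
bic0, bic1 = c0 + k0 * math.log(n), c1 + k1 * math.log(n)
delta_chi2 = c0 - c1   # large values favour the planet model
```

The same bookkeeping, applied to MCMC orbital fits instead of a linear amplitude fit, underlies the detection thresholds quoted in the abstract.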
Estimating Pressure Reactivity Using Noninvasive Doppler-Based Systolic Flow Index.
Zeiler, Frederick A; Smielewski, Peter; Donnelly, Joseph; Czosnyka, Marek; Menon, David K; Ercole, Ari
2018-04-05
The study objective was to derive models that estimate the pressure reactivity index (PRx) using the noninvasive transcranial Doppler (TCD) based systolic flow index (Sx_a) and mean flow index (Mx_a), both based on mean arterial pressure, in traumatic brain injury (TBI). Using a retrospective database of 347 patients with TBI with intracranial pressure and TCD time series recordings, we derived PRx, Sx_a, and Mx_a. We first derived the autocorrelative structure of PRx based on: (A) autoregressive integrated moving average (ARIMA) modeling in representative patients, and (B) sequential linear mixed effects (LME) models with various embedded ARIMA error structures for PRx for the entire population. Finally, we performed sequential LME models with embedded PRx ARIMA modeling to find the best model for estimating PRx using Sx_a and Mx_a. Model adequacy was assessed via normally distributed residual density. Model superiority was assessed via Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), log likelihood (LL), and analysis of variance testing between models. The most appropriate ARIMA structure for PRx in this population was (2,0,2). This was applied in sequential LME modeling. Two models were superior (employing random effects in the independent variables and intercept): (A) PRx ∼ Sx_a, and (B) PRx ∼ Sx_a + Mx_a. Correlation between observed and estimated PRx with these two models was: (A) 0.794 (p < 0.0001, 95% confidence interval (CI) = 0.788-0.799), and (B) 0.814 (p < 0.0001, 95% CI = 0.809-0.819), with acceptable agreement on Bland-Altman analysis. Through using linear mixed effects modeling and accounting for the ARIMA structure of PRx, one can estimate PRx using noninvasive TCD-based indices. We have described our first attempts at such modeling and PRx estimation, establishing the strong link between two aspects of cerebral autoregulation: measures of cerebral blood flow and those of pulsatile cerebral blood volume.
Further work is required to validate these findings.
Salient object detection method based on multiple semantic features
NASA Astrophysics Data System (ADS)
Wang, Chunyang; Yu, Chunyan; Song, Meiping; Wang, Yulei
2018-04-01
Existing salient object detection models can only detect the approximate location of a salient object, or mistakenly highlight the background. To resolve this problem, a salient object detection method based on image semantic features was proposed. First, three novel salient features were presented in this paper: an object edge density feature (EF), an object semantic feature based on the convex hull (CF), and an object lightness contrast feature (LF). Second, the multiple salient features were trained with random detection windows. Third, a Naive Bayesian model was used to combine these features for saliency detection. Results on public datasets showed that our method performed well: the location of the salient object can be determined, and the salient object can be accurately detected and marked by the specific window.
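The feature-combination step can be sketched with a minimal Gaussian naive Bayes classifier. The three scalar features stand in for EF, CF and LF; the training windows and labels are invented for illustration, not taken from the paper.

```python
import math

def gauss_logpdf(x, mu, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

class GaussianNB:
    """Minimal Gaussian naive Bayes: each feature is modelled with an
    independent per-class Gaussian, and class scores are log-prior plus
    summed feature log-likelihoods."""
    def fit(self, X, y):
        self.prior, self.stats = {}, {}
        for c in set(y):
            rows = [x for x, yc in zip(X, y) if yc == c]
            self.prior[c] = len(rows) / len(X)
            self.stats[c] = []
            for j in range(len(X[0])):
                col = [r[j] for r in rows]
                mu = sum(col) / len(col)
                var = sum((v - mu) ** 2 for v in col) / len(col) + 1e-9
                self.stats[c].append((mu, var))
        return self

    def predict(self, x):
        def score(c):
            return math.log(self.prior[c]) + sum(
                gauss_logpdf(x[j], *self.stats[c][j]) for j in range(len(x)))
        return max(self.prior, key=score)

# Invented training windows: rows are (EF, CF, LF)-like scores; label 1
# marks windows containing a salient object, 0 marks background windows.
X = [[0.9, 0.8, 0.7], [0.8, 0.9, 0.6], [0.7, 0.85, 0.75],
     [0.1, 0.2, 0.1], [0.2, 0.1, 0.15], [0.15, 0.25, 0.2]]
y = [1, 1, 1, 0, 0, 0]
nb = GaussianNB().fit(X, y)
```

A new detection window is then scored by its three features and assigned to the class with the higher posterior.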
Detection of Low-order Curves in Images using Biologically-plausible Hardware
2012-09-29
Space Object Detection and Tracking Within a Finite Set Statistics Framework
2017-04-13
Two-Stage Bayesian Model Averaging in Endogenous Variable Models*
Lenkoski, Alex; Eicher, Theo S.; Raftery, Adrian E.
2013-01-01
Economic modeling in the presence of endogeneity is subject to model uncertainty at both the instrument and covariate level. We propose a Two-Stage Bayesian Model Averaging (2SBMA) methodology that extends the Two-Stage Least Squares (2SLS) estimator. By constructing a Two-Stage Unit Information Prior in the endogenous variable model, we are able to efficiently combine established methods for addressing model uncertainty in regression models with the classic technique of 2SLS. To assess the validity of instruments in the 2SBMA context, we develop Bayesian tests of the identification restriction that are based on model averaged posterior predictive p-values. A simulation study showed that 2SBMA has the ability to recover structure in both the instrument and covariate set, and substantially improves the sharpness of resulting coefficient estimates in comparison to 2SLS using the full specification in an automatic fashion. Due to the increased parsimony of the 2SBMA estimate, the Bayesian Sargan test had a power of 50 percent in detecting a violation of the exogeneity assumption, while the method based on 2SLS using the full specification had negligible power. We apply our approach to the problem of development accounting, and find support not only for institutions, but also for geography and integration as development determinants, once both model uncertainty and endogeneity have been jointly addressed. PMID:24223471
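The classical 2SLS estimator that 2SBMA extends can be sketched for a single instrument. The data-generating process below (a confounder u driving both x and y, with z a clean instrument) is an illustrative assumption; it shows the naive OLS bias that the two-stage estimator removes.

```python
import random

def ols_slope(x, y):
    """Slope of y ~ x (with intercept), via centred covariances."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def two_sls(z, x, y):
    """Two-stage least squares with one instrument z for endogenous x:
    first regress x on z, then regress y on the fitted (exogenous) part."""
    g = ols_slope(z, x)                         # first stage: x ~ z
    mz = sum(z) / len(z)
    xhat = [g * (zi - mz) for zi in z]          # fitted exogenous part of x
    return ols_slope(xhat, y)                   # second stage: y ~ xhat

# Toy data with endogeneity: u drives both x and y, z is a clean instrument.
random.seed(0)
n, beta = 5000, 2.0
z = [random.gauss(0, 1) for _ in range(n)]
u = [random.gauss(0, 1) for _ in range(n)]
x = [zi + ui for zi, ui in zip(z, u)]
y = [beta * xi + ui + random.gauss(0, 0.1) for xi, ui in zip(x, u)]

naive = ols_slope(x, y)      # biased upward by the shared error u
iv = two_sls(z, x, y)        # close to the true beta = 2
```

2SBMA replaces the fixed instrument and covariate sets in each stage with Bayesian model averages over candidate specifications.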
Two New Real-Time PCR-based Surveillance Systems for “Candidatus Liberibacter” Species Detection
USDA-ARS?s Scientific Manuscript database
We developed two novel surveillance systems for “Candidatus Liberibacter” (CL) species detection and identification. The first system is called “single tube dual primer Taq-Man PCR” (STDP). The procedure involves two sequential rounds of PCR using the CL asiaticus species-specific outer and inner pr...
Vichapong, Jitlada; Burakham, Rodjana; Srijaranai, Supalax; Grudpan, Kate
2011-07-01
A sequential injection-bead injection-lab-on-valve system was hyphenated to HPLC for online renewable micro-solid-phase extraction of carbamate insecticides. The carbamates studied were isoprocarb, methomyl, carbaryl, carbofuran, methiocarb, promecarb, and propoxur. LiChroprep(®) RP-18 beads (25-40 μm) were employed as renewable sorbent packing in a microcolumn situated inside the LOV platform mounted above the multiposition valve of the sequential injection system. The analytes sorbed by the microcolumn were eluted using 80% acetonitrile in 0.1% acetic acid before online introduction to the HPLC system. Separation was performed on an Atlantis C-18 column (4.6 × 150 mm, 5 μm) utilizing gradient elution with a flow rate of 1.0 mL/min and a detection wavelength at 270 nm. The sequential injection system offers the means of performing automated handling of sample preconcentration and matrix removal. The enrichment factors ranged between 20 and 125, leading to limits of detection (LODs) in the range of 1-20 μg/L. Good reproducibility was obtained with relative standard deviations of <0.7 and 5.4% for retention time and peak area, respectively. The developed method has been successfully applied to the determination of carbamate residues in fruit, vegetable, and water samples. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Ichikawa, Shota; Kamishima, Tamotsu; Sutherland, Kenneth; Fukae, Jun; Katayama, Kou; Aoki, Yuko; Okubo, Takanobu; Okino, Taichi; Kaneda, Takahiko; Takagi, Satoshi; Tanimura, Kazuhide
2017-10-01
We have developed a refined computer-based method to detect joint space narrowing (JSN) progression with the joint space narrowing progression index (JSNPI) by superimposing sequential hand radiographs. The purpose of this study is to assess the validity of a computer-based method using images obtained from multiple institutions in rheumatoid arthritis (RA) patients. Sequential hand radiographs of 42 patients (37 females and 5 males) with RA from two institutions were analyzed by a computer-based method and visual scoring systems as a standard of reference. The JSNPI above the smallest detectable difference (SDD) defined JSN progression on the joint level. The sensitivity and specificity of the computer-based method for JSN progression was calculated using the SDD and a receiver operating characteristic (ROC) curve. Out of 314 metacarpophalangeal joints, 34 joints progressed based on the SDD, while 11 joints widened. Twenty-one joints progressed in the computer-based method, 11 joints in the scoring systems, and 13 joints in both methods. Based on the SDD, we found lower sensitivity and higher specificity with 54.2 and 92.8%, respectively. At the most discriminant cutoff point according to the ROC curve, the sensitivity and specificity was 70.8 and 81.7%, respectively. The proposed computer-based method provides quantitative measurement of JSN progression using sequential hand radiographs and may be a useful tool in follow-up assessment of joint damage in RA patients.
Gaudry, Adam J; Nai, Yi Heng; Guijt, Rosanne M; Breadmore, Michael C
2014-04-01
A dual-channel sequential injection microchip capillary electrophoresis system with pressure-driven injection is demonstrated for simultaneous separations of anions and cations from a single sample. The poly(methyl methacrylate) (PMMA) microchips feature integral in-plane contactless conductivity detection electrodes. A novel, hydrodynamic "split-injection" method utilizes background electrolyte (BGE) sheathing to gate the sample flows, while control over the injection volume is achieved by balancing hydrodynamic resistances using external hydrodynamic resistors. Injection is realized by a unique flow-through interface, allowing for automated, continuous sampling for sequential injection analysis by microchip electrophoresis. The developed system was very robust, with individual microchips used for up to 2000 analyses with lifetimes limited by irreversible blockages of the microchannels. The unique dual-channel geometry was demonstrated by the simultaneous separation of three cations and three anions in individual microchannels in under 40 s with limits of detection (LODs) ranging from 1.5 to 24 μM. From a series of 100 sequential injections the %RSDs were determined for every fifth run, resulting in %RSDs for migration times that ranged from 0.3 to 0.7 (n = 20) and 2.3 to 4.5 for peak area (n = 20). This system offers low LODs and a high degree of reproducibility and robustness while the hydrodynamic injection eliminates electrokinetic bias during injection, making it attractive for a wide range of rapid, sensitive, and quantitative online analytical applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candy, J V; Chambers, D H; Breitfeller, E F
2010-03-02
The detection of radioactive contraband is a critical problem in maintaining national security for any country. Photon emissions from threat materials challenge both detection and measurement technologies, especially when concealed by various types of shielding, which complicates the transport physics significantly. This problem becomes especially important when ships are intercepted by U.S. Coast Guard harbor patrols searching for contraband. The development of a sequential model-based processor that captures both the underlying transport physics of gamma-ray emissions, including Compton scattering, and the measurement of photon energies offers a physics-based approach to attack this challenging problem. The inclusion of a basic radionuclide representation of absorbed/scattered photons at a given energy, along with interarrival times, is used to extract the physics information available from the noisy measurements of portable radiation detection systems used to interdict contraband. It is shown that this physics representation can incorporate scattering physics, leading to an 'extended' model-based structure that can be used to develop an effective sequential detection technique. The resulting model-based processor is shown to perform quite well based on data obtained from a controlled experiment.
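A minimal sketch of such a sequential detection loop, assuming a toy source model (exponential interarrivals plus a Gaussian-smeared line mixture) rather than the full transport-physics processor; the rates, line energies and decision threshold are illustrative assumptions.

```python
import math, random

def loglike(event, rate, lines, sigma=0.05):
    """Log-likelihood of one event (deposited energy in MeV, interarrival
    time in s) under a toy source model: Poisson arrivals at `rate` and a
    Gaussian-smeared mixture of emission `lines` (energy, weight pairs)."""
    e, dt = event
    arrival = math.log(rate) - rate * dt          # exponential interarrivals
    mix = sum(w * math.exp(-0.5 * ((e - mu) / sigma) ** 2)
              / (sigma * math.sqrt(2 * math.pi)) for mu, w in lines)
    return arrival + math.log(max(mix, 1e-300))

def sequential_detect(events, threat, background, thresh=math.log(100)):
    """Accumulate the log odds (threat vs background) event by event and
    decide as soon as the evidence crosses the threshold, instead of
    waiting for a fixed counting interval."""
    log_odds = 0.0
    for k, ev in enumerate(events, 1):
        log_odds += loglike(ev, *threat) - loglike(ev, *background)
        if log_odds > thresh:
            return 'threat', k
        if log_odds < -thresh:
            return 'clear', k
    return 'undecided', len(events)

# Illustrative models (rates in counts/s, line energies in MeV; all values
# assumed): a Cs-137-like threat line on top of a low-energy background.
threat = (5.0, [(0.662, 0.8), (0.100, 0.2)])
background = (2.0, [(0.100, 1.0)])

# A short hand-made event stream containing the 0.662 MeV line.
threat_events = [(0.10, 0.31), (0.66, 0.12), (0.67, 0.25), (0.65, 0.18)]
decision, n_used = sequential_detect(threat_events, threat, background)

# A simulated background-only stream accumulates negative evidence.
random.seed(7)
bg_events = [(random.gauss(0.100, 0.05), random.expovariate(2.0))
             for _ in range(200)]
decision2, n2 = sequential_detect(bg_events, threat, background)
```

The decision fires as soon as a few line-energy photons arrive, which is the sequential advantage over fixed-interval counting described in the abstract.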
Genetic basis of climatic adaptation in scots pine by bayesian quantitative trait locus analysis.
Hurme, P; Sillanpää, M J; Arjas, E; Repo, T; Savolainen, O
2000-01-01
We examined the genetic basis of large adaptive differences in timing of bud set and frost hardiness between natural populations of Scots pine. As a mapping population, we considered an "open-pollinated backcross" progeny by collecting seeds of a single F(1) tree (cross between trees from southern and northern Finland) growing in southern Finland. Due to the special features of the design (no marker information available on grandparents or the father), we applied a Bayesian quantitative trait locus (QTL) mapping method developed previously for outcrossed offspring. We found four potential QTL for timing of bud set and seven for frost hardiness. Bayesian analyses detected more QTL than ANOVA for frost hardiness, but the opposite was true for bud set. These QTL included alleles with rather large effects, and additionally smaller QTL were supported. The largest QTL for bud set date accounted for about a fourth of the mean difference between populations. Thus, natural selection during adaptation has resulted in selection of at least some alleles of rather large effect. PMID:11063704
Path integration mediated systematic search: a Bayesian model.
Vickerstaff, Robert J; Merkle, Tobias
2012-08-21
The systematic search behaviour is a backup system that increases the chances of desert ants finding their nest entrance after foraging when the path integrator has failed to guide them home accurately enough. Here we present a mathematical model of the systematic search that is based on extensive behavioural studies in North African desert ants Cataglyphis fortis. First, a simple search heuristic utilising Bayesian inference and a probability density function is developed. This model, which optimises the short-term nest detection probability, is then compared to three simpler search heuristics and to recorded search patterns of Cataglyphis ants. To compare the different searches a method to quantify search efficiency is established as well as an estimate of the error rate in the ants' path integrator. We demonstrate that the Bayesian search heuristic is able to automatically adapt to increasing levels of positional uncertainty to produce broader search patterns, just as desert ants do, and that it outperforms the three other search heuristics tested. The searches produced by it are also arguably the most similar in appearance to the ant's searches. Copyright © 2012 Elsevier Ltd. All rights reserved.
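The core Bayesian update behind such a search heuristic can be sketched on a 1-D grid. Under an assumed Gaussian belief from the path integrator and a fixed per-visit detection probability, each unsuccessful visit down-weights that cell and the search greedily moves to the posterior maximum; higher positional uncertainty automatically broadens the resulting pattern, as the abstract describes.

```python
import math

def bayes_search(cells, mu, sigma, p_detect, steps):
    """Greedy Bayesian systematic search on a 1-D grid of cells. The belief
    over the nest position starts as a Gaussian centred on the path
    integrator's estimate mu with uncertainty sigma; each unsuccessful
    visit multiplies that cell's belief by the miss probability, and the
    next visit goes to the current posterior maximum."""
    belief = [math.exp(-0.5 * ((x - mu) / sigma) ** 2) for x in cells]
    visited = []
    for _ in range(steps):
        i = max(range(len(cells)), key=lambda k: belief[k])
        visited.append(cells[i])
        belief[i] *= 1 - p_detect        # searched here, nest not found
    return visited

cells = [x * 0.5 for x in range(-20, 21)]        # 1-D search grid
narrow = bayes_search(cells, 0.0, 1.0, 0.8, 15)  # confident path integrator
broad = bayes_search(cells, 0.0, 3.0, 0.8, 15)   # uncertain path integrator
```

The grid, detection probability and greedy policy are illustrative simplifications of the paper's probability-density model; the qualitative behaviour (broader search under larger sigma) is the point.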
Estimating Bayesian Phylogenetic Information Content
Lewis, Paul O.; Chen, Ming-Hui; Kuo, Lynn; Lewis, Louise A.; Fučíková, Karolina; Neupane, Suman; Wang, Yu-Bo; Shi, Daoyuan
2016-01-01
Measuring the phylogenetic information content of data has a long history in systematics. Here we explore a Bayesian approach to information content estimation. The entropy of the posterior distribution compared with the entropy of the prior distribution provides a natural way to measure information content. If the data have no information relevant to ranking tree topologies beyond the information supplied by the prior, the posterior and prior will be identical. Information in data discourages consideration of some hypotheses allowed by the prior, resulting in a posterior distribution that is more concentrated (has lower entropy) than the prior. We focus on measuring information about tree topology using marginal posterior distributions of tree topologies. We show that both the accuracy and the computational efficiency of topological information content estimation improve with use of the conditional clade distribution, which also allows topological information content to be partitioned by clade. We explore two important applications of our method: providing a compelling definition of saturation and detecting conflict among data partitions that can negatively affect analyses of concatenated data. [Bayesian; concatenation; conditional clade distribution; entropy; information; phylogenetics; saturation.] PMID:27155008
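The entropy-based information measure can be sketched for a discrete topology space. The 15-topology prior and the concentrated posterior below are illustrative; the paper's estimator works from marginal posterior samples (via the conditional clade distribution) rather than an explicit distribution.

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def information_content(prior, posterior):
    """Bayesian information content as the entropy reduction from the
    prior to the posterior over a discrete hypothesis space."""
    return entropy(prior) - entropy(posterior)

# Uniform prior over the 15 unrooted 5-taxon tree topologies; a posterior
# concentrated on three trees (values invented for illustration).
n_trees = 15
prior = [1 / n_trees] * n_trees
posterior = [0.7, 0.2, 0.1] + [0.0] * (n_trees - 3)
info = information_content(prior, posterior)
```

An uninformative data set leaves the posterior equal to the prior and gives zero information; full saturation in this sense would pin the posterior on one tree, recovering all log2(15) bits.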
Using robust Bayesian network to estimate the residuals of fluoroquinolone antibiotic in soil.
Li, Xuewen; Xie, Yunfeng; Li, Lianfa; Yang, Xunfeng; Wang, Ning; Wang, Jinfeng
2015-11-01
Prediction of antibiotic pollution and its consequences is difficult, due to the uncertainties and complexities associated with multiple related factors. This article employed domain knowledge and spatial data to construct a Bayesian network (BN) model to assess fluoroquinolone antibiotic (FQs) pollution in the soil of an intensive vegetable cultivation area. The results show: (1) The relationships between FQs pollution and contributory factors: Three factors (cultivation methods, crop rotations, and chicken manure types) were consistently identified as predictors in the topological structures of three FQs, indicating their importance in FQs pollution; deduced with domain knowledge, the cultivation methods are determined by the crop rotations, which require different nutrients (derived from the manure) according to different plant biomass. (2) The performance of BN model: The integrative robust Bayesian network model achieved the highest detection probability (pd) of high-risk and receiver operating characteristic (ROC) area, since it incorporates domain knowledge and model uncertainty. Our encouraging findings have implications for the use of BN as a robust approach to assessment of FQs pollution and for informing decisions on appropriate remedial measures.
Bayesian linearized amplitude-versus-frequency inversion for quality factor and its application
NASA Astrophysics Data System (ADS)
Yang, Xinchao; Teng, Long; Li, Jingnan; Cheng, Jiubing
2018-06-01
We propose a straightforward attenuation inversion method that utilizes the amplitude-versus-frequency (AVF) characteristics of seismic data. A new linearized approximation of the angle- and frequency-dependent reflectivity in viscoelastic media is derived, and we use this equation to implement Bayesian linear AVF inversion. The inversion result includes not only P-wave and S-wave velocities and densities, but also P-wave and S-wave quality factors. Synthetic tests show that AVF inversion surpasses AVA inversion for quality factor estimation. However, a higher signal-to-noise ratio (SNR) is necessary for AVF inversion. To show its feasibility, we apply both the new Bayesian AVF inversion and conventional AVA inversion to tight gas reservoir data from the Sichuan Basin in China. Considering the SNR of the field data, a combination of AVF inversion for attenuation parameters and AVA inversion for elastic parameters is recommended. The result reveals that attenuation estimates can serve as a useful complement to the AVA inversion results for the detection of tight gas reservoirs.
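A Bayesian linearized inversion reduces, in its simplest scalar form, to a conjugate Gaussian update. The sensitivity kernel, noise level and prior below are illustrative assumptions; the actual AVF inversion solves a multi-parameter version of this update.

```python
import random

def bayes_linear_scalar(g, y, sigma, m0, tau):
    """Conjugate Gaussian update for a scalar parameter m in y_i = g_i * m
    + noise, with prior m ~ N(m0, tau^2) and noise N(0, sigma^2). Returns
    the posterior mean and standard deviation."""
    prec = sum(gi * gi for gi in g) / sigma ** 2 + 1 / tau ** 2
    mean = (sum(gi * yi for gi, yi in zip(g, y)) / sigma ** 2
            + m0 / tau ** 2) / prec
    return mean, prec ** -0.5

# Toy linear forward model: g_i plays the role of a frequency-dependent
# sensitivity kernel; m_true is the attenuation-related parameter.
random.seed(2)
m_true, sigma = 0.8, 0.1
g = [0.2 * k for k in range(1, 21)]
y = [gi * m_true + random.gauss(0, sigma) for gi in g]
mean, sd = bayes_linear_scalar(g, y, sigma, m0=0.0, tau=1.0)
```

The posterior standard deviation shrinks as more frequencies constrain the parameter, which is why data SNR governs how well the quality factors can be recovered.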
Sequential Multiplex Analyte Capturing for Phosphoprotein Profiling*
Poetz, Oliver; Henzler, Tanja; Hartmann, Michael; Kazmaier, Cornelia; Templin, Markus F.; Herget, Thomas; Joos, Thomas O.
2010-01-01
Microarray-based sandwich immunoassays can simultaneously detect dozens of proteins. However, their use in quantifying large numbers of proteins is hampered by cross-reactivity and incompatibilities caused by the immunoassays themselves. Sequential multiplex analyte capturing addresses these problems by repeatedly probing the same sample with different sets of antibody-coated, magnetic suspension bead arrays. As a miniaturized immunoassay format, suspension bead array-based assays fulfill the criteria of the ambient analyte theory, and our experiments reveal that the analyte concentrations are not significantly changed. The value of sequential multiplex analyte capturing was demonstrated by probing tumor cell line lysates for the abundance of seven different receptor tyrosine kinases and their degree of phosphorylation and by measuring the complex phosphorylation pattern of the epidermal growth factor receptor in the same sample from the same cavity. PMID:20682761
VizieR Online Data Catalog: Bayesian method for detecting stellar flares (Pitkin+, 2014)
NASA Astrophysics Data System (ADS)
Pitkin, M.; Williams, D.; Fletcher, L.; Grant, S. D. T.
2015-05-01
We present a Bayesian-odds-ratio-based algorithm for detecting stellar flares in light-curve data. We assume flares are described by a model in which there is a rapid rise with a half-Gaussian profile, followed by an exponential decay. Our signal model also contains a polynomial background model required to fit underlying light-curve variations in the data, which could otherwise partially mimic a flare. We characterize the false alarm probability and efficiency of this method under the assumption that any unmodelled noise in the data is Gaussian, and compare it with a simpler thresholding method based on that used in Walkowicz et al. We find our method has a significant increase in detection efficiency for low signal-to-noise ratio (S/N) flares. For a conservative false alarm probability our method can detect 95 per cent of flares with S/N less than 20, as compared to S/N of 25 for the simpler method. We also test how well the assumption of Gaussian noise holds by applying the method to a selection of 'quiet' Kepler stars. As an example we have applied our method to a selection of stars in Kepler Quarter 1 data. The method finds 687 flaring stars with a total of 1873 flares after vetoes have been applied. For these flares we have made preliminary characterizations of their durations and S/N. (1 data file).
A Bayesian method for detecting stellar flares
NASA Astrophysics Data System (ADS)
Pitkin, M.; Williams, D.; Fletcher, L.; Grant, S. D. T.
2014-12-01
We present a Bayesian-odds-ratio-based algorithm for detecting stellar flares in light-curve data. We assume flares are described by a model in which there is a rapid rise with a half-Gaussian profile, followed by an exponential decay. Our signal model also contains a polynomial background model required to fit underlying light-curve variations in the data, which could otherwise partially mimic a flare. We characterize the false alarm probability and efficiency of this method under the assumption that any unmodelled noise in the data is Gaussian, and compare it with a simpler thresholding method based on that used in Walkowicz et al. We find our method has a significant increase in detection efficiency for low signal-to-noise ratio (S/N) flares. For a conservative false alarm probability our method can detect 95 per cent of flares with S/N less than 20, as compared to S/N of 25 for the simpler method. We also test how well the assumption of Gaussian noise holds by applying the method to a selection of 'quiet' Kepler stars. As an example we have applied our method to a selection of stars in Kepler Quarter 1 data. The method finds 687 flaring stars with a total of 1873 flares after vetoes have been applied. For these flares we have made preliminary characterizations of their durations and S/N.
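The odds-ratio computation can be sketched numerically. The snippet marginalises the flare amplitude over a coarse uniform grid against a flat zero background, assuming Gaussian noise of known sigma; the paper additionally marginalises a polynomial background and the flare timing parameters, which are held fixed here for brevity.

```python
import math, random

def flare_template(t, t0, tau_g, tau_e):
    """The assumed flare shape: half-Gaussian rise up to t0, exponential
    decay afterwards (unit peak amplitude)."""
    return [math.exp(-0.5 * ((ti - t0) / tau_g) ** 2) if ti < t0
            else math.exp(-(ti - t0) / tau_e) for ti in t]

def log_odds_flare(y, f, sigma, amps):
    """Crude numerical Bayes factor: flare model (amplitude marginalised
    over a uniform grid `amps`) versus a flat zero background, assuming
    Gaussian noise of known sigma. Positive values favour a flare."""
    base = -sum(yi ** 2 for yi in y) / (2 * sigma ** 2)   # log L(background)
    logls = [-sum((yi - a * fi) ** 2 for yi, fi in zip(y, f))
             / (2 * sigma ** 2) for a in amps]
    m = max(logls)
    marg = m + math.log(sum(math.exp(l - m) for l in logls) / len(amps))
    return marg - base

random.seed(3)
t = [i * 0.1 for i in range(100)]
sigma = 0.2
f = flare_template(t, 3.0, 0.2, 1.0)
quiet = [random.gauss(0, sigma) for _ in t]                # no flare
flaring = [5 * fi + random.gauss(0, sigma) for fi in f]    # injected flare
amps = [0.5 * k for k in range(1, 21)]                     # amplitude grid
lo_quiet = log_odds_flare(quiet, f, sigma, amps)
lo_flare = log_odds_flare(flaring, f, sigma, amps)
```

Thresholding the log odds sets the trade-off between false alarm probability and detection efficiency quoted in the abstract.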
NASA Technical Reports Server (NTRS)
Mitz, M. A.
1972-01-01
Some promising newer approaches for detecting microorganisms are discussed, giving particular attention to the integration of different methods into a single instrument. Life detection methods may be divided into biological, chemical, and cytological methods. Biological methods are based on the biological properties of assimilation, metabolism, and growth. Devices for the detection of organic materials are considered, taking into account an instrument which volatilizes, separates, and analyzes a sample sequentially. Other instrumental systems described make use of a microscope and the cytochemical staining principle.
Chen, Bo; Chen, Minhua; Paisley, John; Zaas, Aimee; Woods, Christopher; Ginsburg, Geoffrey S; Hero, Alfred; Lucas, Joseph; Dunson, David; Carin, Lawrence
2010-11-09
Nonparametric Bayesian techniques have been developed recently to extend the sophistication of factor models, allowing one to infer the number of appropriate factors from the observed data. We consider such techniques for sparse factor analysis, with application to gene-expression data from three virus challenge studies. Particular attention is placed on employing the Beta Process (BP), the Indian Buffet Process (IBP), and related sparseness-promoting techniques to infer a proper number of factors. The posterior density function on the model parameters is computed using Gibbs sampling and variational Bayesian (VB) analysis. Time-evolving gene-expression data are considered for respiratory syncytial virus (RSV), rhinovirus, and influenza, using blood samples from healthy human subjects. These data were acquired in three challenge studies, each executed after receiving institutional review board (IRB) approval from Duke University. Comparisons are made between several alternative means of performing nonparametric factor analysis on these data, with comparisons as well to sparse-PCA and Penalized Matrix Decomposition (PMD), closely related non-Bayesian approaches. Applying the Beta Process to the factor scores, or to the singular values of a pseudo-SVD construction, the proposed algorithms infer the number of factors in gene-expression data. For real data the "true" number of factors is unknown; in our simulations we consider a range of noise variances, and the proposed Bayesian models inferred the number of factors accurately relative to other methods in the literature, such as sparse-PCA and PMD. We have also identified a "pan-viral" factor of importance for each of the three viruses considered in this study. We have identified a set of genes associated with this pan-viral factor, of interest for early detection of such viruses based upon the host response, as quantified via gene-expression data.
Bayesian LASSO, scale space and decision making in association genetics.
Pasanen, Leena; Holmström, Lasse; Sillanpää, Mikko J
2015-01-01
LASSO is a penalized regression method that facilitates model fitting in situations where there are as many explanatory variables as observations, or even more, and only a few variables are relevant in explaining the data. We focus on the Bayesian version of LASSO and consider four problems that need special attention: (i) controlling false positives, (ii) multiple comparisons, (iii) collinearity among explanatory variables, and (iv) the choice of the tuning parameter that controls the amount of shrinkage and the sparsity of the estimates. The particular application considered is association genetics, where LASSO regression can be used to find links between chromosome locations and phenotypic traits in a biological organism. However, the proposed techniques are relevant also in other contexts where LASSO is used for variable selection. We separate the true associations from false positives using the posterior distribution of the effects (regression coefficients) provided by Bayesian LASSO. We propose to solve the multiple comparisons problem by using simultaneous inference based on the joint posterior distribution of the effects. Bayesian LASSO also tends to distribute an effect among collinear variables, making detection of an association difficult. We propose to solve this problem by considering not only individual effects but also their functionals (i.e. sums and differences). Finally, whereas in Bayesian LASSO the tuning parameter is often regarded as a random variable, we adopt a scale space view and consider a whole range of fixed tuning parameters instead. The effect estimates and the associated inference are considered for all tuning parameters in the selected range and the results are visualized with color maps that provide useful insights into the data and the association problem considered. The methods are illustrated using two sets of artificial data and one real data set, all representing typical settings in association genetics.
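The scale-space idea in point (iv), examining the estimates over a whole range of fixed tuning parameters rather than committing to one, can be sketched with ordinary (non-Bayesian) LASSO solved by coordinate descent. The data, the lambda grid, and the two planted effects below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy association-genetics setup: 100 individuals, 20 markers, 2 true effects.
n, p = 100, 20
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[[3, 7]] = [2.0, -1.5]
y = X @ beta_true + rng.normal(size=n)

def lasso_cd(X, y, lam, n_iter=200):
    """Plain coordinate-descent LASSO with soft-thresholding."""
    n, p = X.shape
    beta = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]      # partial residual
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_ss[j]
    return beta

# Scale-space view: record which effects survive at each fixed lambda.
support = {lam: set(np.flatnonzero(np.abs(lasso_cd(X, y, lam)) > 1e-6))
           for lam in (1.0, 20.0, 400.0)}
print(support)
```

The true markers persist across a wide band of tuning parameters while spurious ones flicker in and out, which is exactly the signal the paper's color maps are designed to make visible (there with full posterior inference at each lambda instead of point estimates).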
Fukaya, Keiichi; Kawamori, Ai; Osada, Yutaka; Kitazawa, Masumi; Ishiguro, Makio
2017-09-20
Women's basal body temperature (BBT) shows a periodic pattern associated with the menstrual cycle. Although this fact suggests that daily BBT time series could be useful for estimating the underlying phase state as well as for predicting the length of the current menstrual cycle, little attention has been paid to modeling BBT time series. In this study, we propose a state-space model that involves the menstrual phase as a latent state variable to explain the daily fluctuation of BBT and the menstruation cycle length. Conditional distributions of the phase are obtained by using sequential Bayesian filtering techniques. A predictive distribution of the next menstruation day can be derived based on this conditional distribution and the model, leading to a novel statistical framework that provides a sequentially updated prediction of the upcoming menstruation day. We applied this framework to a real data set of women's BBT and menstruation days and compared the prediction accuracy of the proposed method with that of previous methods, showing that the proposed method generally provides a better prediction. Because BBT can be obtained with relatively small cost and effort, the proposed method can be useful for women's health management. Potential extensions of this framework as the basis of modeling and predicting events that are associated with the menstrual cycles are discussed. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
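A minimal grid-based version of the sequential Bayesian filtering step might look as follows. The biphasic temperature profile, the noise levels, and the 28-day mean cycle are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Latent phase lives on the circle [0, 1): it advances by ~1/28 per day and
# BBT is biphasic (higher in the luteal half). All constants are illustrative.
G = 200                                   # grid points on the circle
grid = np.arange(G) / G

def bbt_mean(phase):                      # biphasic temperature profile (deg C)
    return 36.4 + 0.4 * (phase >= 0.5)

# Simulate 20 days of noisy BBT readings.
true_phase, obs = 0.1, []
for _ in range(20):
    true_phase = (true_phase + 1 / 28 + 0.005 * rng.normal()) % 1.0
    obs.append(bbt_mean(true_phase) + 0.1 * rng.normal())

# Sequential Bayesian filtering on the grid: predict by circular convolution
# with the daily phase-advance kernel, then update with the BBT likelihood.
step = np.exp(-0.5 * (((grid - 1 / 28 + 0.5) % 1.0 - 0.5) / 0.01) ** 2)
step /= step.sum()
belief = np.full(G, 1.0 / G)
for y in obs:
    belief = np.real(np.fft.ifft(np.fft.fft(belief) * np.fft.fft(step)))
    belief = np.maximum(belief, 0.0)      # clip FFT round-off
    belief *= np.exp(-0.5 * ((y - bbt_mean(grid)) / 0.1) ** 2)
    belief /= belief.sum()

est = grid[np.argmax(belief)]
print(round(est, 3), round(true_phase, 3))
```

The filtered belief over the phase is exactly the object from which a predictive distribution of the next menstruation day would be derived: push the belief forward day by day and accumulate the probability of crossing the cycle boundary.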
Blunt pancreatic trauma: A persistent diagnostic conundrum?
Kumar, Atin; Panda, Ananya; Gamanagatti, Shivanand
2016-01-01
Blunt pancreatic trauma is an uncommon injury but has high morbidity and mortality. In the modern era of trauma care, pancreatic trauma remains a persistent challenge to radiologists and surgeons alike. Early detection of pancreatic trauma is essential to prevent subsequent complications. However, early pancreatic injury is often subtle on computed tomography (CT) and can be missed unless specifically looked for. Signs of pancreatic injury on CT include laceration, transection, bulky pancreas, heterogeneous enhancement, peripancreatic fluid and signs of pancreatitis. Pancreatic ductal injury is a vital decision-making parameter, as ductal injury is an indication for laparotomy. While lacerations involving more than half of the pancreatic parenchyma are suggestive of ductal injury on CT, ductal injuries can be directly assessed on magnetic resonance imaging (MRI) or endoscopic retrograde cholangio-pancreatography. Pancreatic trauma also shows temporal evolution, with an increase in the extent of injury with time. Hence, early CT scans may underestimate the extent of injuries, and sequential imaging with CT or MRI is important in pancreatic trauma. Sequential imaging is also needed for successful non-operative management of pancreatic injury. Accurate early detection on initial CT and adopting a multimodality and sequential imaging strategy can improve outcome in pancreatic trauma. PMID:26981225
Visual detection and sequential injection determination of aluminium using a cinnamoyl derivative.
Elečková, Lenka; Alexovič, Michal; Kuchár, Juraj; Balogh, Ioseph S; Andruch, Vasil
2015-02-01
A cinnamoyl derivative, 3-[4-(dimethylamino)cinnamoyl]-4-hydroxy-6-methyl-3,4-2H-pyran-2-one, was used as a ligand for the determination of aluminium. Upon the addition of an acetonitrile solution of the ligand to an aqueous solution containing Al(III) and a buffer solution at pH 8, a marked change in colour from yellow to orange is observed. The colour intensity is proportional to the concentration of Al(III); thus, the 'naked-eye' detection of aluminium is possible. The reaction is also applied for sequential injection determination of aluminium. Beer's law is obeyed in the range from 0.055 to 0.66 mg L(-1) of Al(III). The limit of detection, calculated as three times the standard deviation of the blank test (n=10), was found to be 4 μg L(-1) for Al(III). The method was applied for the determination of aluminium in spiked water samples and pharmaceutical preparations. Copyright © 2014 Elsevier B.V. All rights reserved.
Ramírez-Segovia, A S; Banda-Alemán, J A; Gutiérrez-Granados, S; Rodríguez, A; Rodríguez, F J; Godínez, Luis A; Bustos, E; Manríquez, J
2014-02-17
Glassy carbon electrodes (GCE) were sequentially modified by cysteamine-capped gold nanoparticles (AuNp@cysteamine) and PAMAM dendrimers generation 4.5 bearing 128-COOH peripheral groups (GCE/AuNp@cysteamine/PAMAM), in order to explore their capabilities as electrochemical detectors of uric acid (UA) in human serum samples at pH 2. The results showed that concentrations of UA detected by cyclic voltammetry with GCE/AuNp@cysteamine/PAMAM were comparable (deviation <±10%; limits of detection (LOD) and quantification (LOQ) were 1.7×10⁻⁴ and 5.8×10⁻⁴ mg dL⁻¹, respectively) to those concentrations obtained using the uricase-based enzymatic-colorimetric method. It was also observed that the presence of dendrimers in the GCE/AuNp@cysteamine/PAMAM system minimizes ascorbic acid (AA) interference during UA oxidation, thus improving the electrocatalytic activity of the gold nanoparticles. Copyright © 2013 Elsevier B.V. All rights reserved.
Schmitz, Audrey; Le Bras, Marie-Odile; Guillemoto, Carole; Pierre, Isabelle; Rose, Nicolas; Bougeard, Stéphanie; Jestin, Véronique
2013-10-01
Following the emergence of highly pathogenic avian influenza (AI), active surveillance of infections due to the H5 and H7 subtypes in poultry has increased and been made compulsory in Europe since 2002, by means of annual serological surveys using the haemagglutination inhibition (HI) test. Domestic anseriforms, particularly ducks and geese, are more frequently infected by H5 low pathogenic AI virus, often subclinically, and represent a threat to other terrestrial poultry. A total of 1783 sera, mainly from ducks, were used to evaluate and compare a commercial ELISA kit detecting H5 antibodies with the currently recommended HI test. Different approaches to calculating specificity and sensitivity were used, including an original Bayesian method. Results were similar when data were analyzed at the individual and batch levels, and when using different methods of calculation. However, results showed that H5 ELISA had both a higher sensitivity and a lower specificity than the HI test. Given that sensitivity is the most important factor for a screening test, H5 ELISA could therefore be recommended for AI surveillance, followed in cases of positivity by molecular tests aimed at detecting the virus gene. Copyright © 2013 Elsevier B.V. All rights reserved.
Bayesian mixture analysis for metagenomic community profiling.
Morfopoulou, Sofia; Plagnol, Vincent
2015-09-15
Deep sequencing of clinical samples is now an established tool for the detection of infectious pathogens, with direct medical applications. The large amount of data generated produces an opportunity to detect species even at very low levels, provided that computational tools can effectively profile the relevant metagenomic communities. Data interpretation is complicated by the fact that short sequencing reads can match multiple organisms and by the lack of completeness of existing databases, in particular for viral pathogens. Here we present metaMix, a Bayesian mixture model framework for resolving complex metagenomic mixtures. We show that the use of parallel Markov chain Monte Carlo chains for the exploration of the species space enables the identification of the set of species most likely to contribute to the mixture. We demonstrate the greater accuracy of metaMix compared with relevant methods, particularly for profiling complex communities consisting of several related species. We designed metaMix specifically for the analysis of deep transcriptome sequencing datasets, with a focus on viral pathogen detection; however, the principles are generally applicable to all types of metagenomic mixtures. metaMix is implemented as a user-friendly R package, freely available on CRAN: http://cran.r-project.org/web/packages/metaMix sofia.morfopoulou.10@ucl.ac.uk Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
Jaramillo, Diana; Dürr, Salome; Hick, Paul; Whittington, Richard
2016-01-01
Diagnosis of nervous necrosis virus (NNV) infection in susceptible fish species is mostly performed post-mortem due to the neurotropism of the causative agent, and the only validated diagnostic assays require samples from brain and retinal tissue. However, a non-lethal alternative to test for exposure of fish to NNV is needed. An indirect ELISA for the detection of anti-NNV antibodies was recently developed and evaluated to detect responses in sera from immunized fish. For this study, we assessed the accuracy of the assay at detecting specific antibodies from naturally exposed fish using field samples from populations with differing infection status. We applied a Bayesian model, using RT-qPCR as a second test. Median estimates of the diagnostic sensitivity and specificity of the VNN ELISA were 81.8% and 86.7%, respectively. We concluded that the assay was fit for the purpose of identifying animals in naturally exposed populations. With further evaluation in larger populations the test might be used to inform implementation of control measures, and for estimating infection prevalence to facilitate risk analysis. To our knowledge this is the first report on the diagnostic accuracy of an antibody ELISA for an infectious disease in finfish. Copyright © 2015 Elsevier B.V. All rights reserved.
Stewart, David R.; Long, James M.
2015-01-01
Species distribution models are useful tools to evaluate habitat relationships of fishes. We used hierarchical Bayesian multispecies mixture models to evaluate the relationships of both detection and abundance with habitat of reservoir fishes caught using tandem hoop nets. A total of 7,212 fish from 12 species were captured, and the majority of the catch was composed of Channel Catfish Ictalurus punctatus (46%), Bluegill Lepomis macrochirus (25%), and White Crappie Pomoxis annularis (14%). Detection estimates ranged from 8% to 69%, and modeling results suggested that fishes were primarily influenced by reservoir size and context, water clarity and temperature, and land-use types. Species were differentially abundant within and among habitat types, and some fishes were found to be more abundant in turbid, less impacted (e.g., by urbanization and agriculture) reservoirs with longer shoreline lengths, whereas other species were found more often in clear, nutrient-rich impoundments that had generally shorter shoreline length and were surrounded by a higher percentage of agricultural land. Our results demonstrated that habitat and reservoir characteristics may differentially benefit species and assemblage structure. This study provides a useful framework for evaluating capture efficiency for not only hoop nets but other gear types used to sample fishes in reservoirs.
The need to assess large numbers of chemicals for their potential toxicities has resulted in increased emphasis on medium- and high-throughput in vitro screening approaches. For such approaches to be useful, efficient and reliable data analysis and hit detection methods are also ...
Multi-Step Attack Detection via Bayesian Modeling under Model Parameter Uncertainty
ERIC Educational Resources Information Center
Cole, Robert
2013-01-01
Organizations in all sectors of business have become highly dependent upon information systems for the conduct of business operations. Of necessity, these information systems are designed with many points of ingress, points of exposure that can be leveraged by a motivated attacker seeking to compromise the confidentiality, integrity or…
Buried landmine detection using multivariate normal clustering
NASA Astrophysics Data System (ADS)
Duston, Brian M.
2001-10-01
A Bayesian classification algorithm is presented for discriminating buried land mines from buried and surface clutter in Ground Penetrating Radar (GPR) signals. This algorithm is based on multivariate normal (MVN) clustering, where feature vectors are used to identify populations (clusters) of mines and clutter objects. The features are extracted from two-dimensional images created from ground penetrating radar scans. MVN clustering is used to determine the number of clusters in the data and to create probability density models for target and clutter populations, producing the MVN clustering classifier (MVNCC). The Bayesian Information Criterion (BIC) is used to evaluate each model to determine the number of clusters in the data. An extension of the MVNCC allows the model to adapt to local clutter distributions by treating each of the MVN cluster components as a Poisson process and adaptively estimating the intensity parameters. The algorithm is developed using data collected by the Mine Hunter/Killer Close-In Detector (MH/K CID) at prepared mine lanes. The Mine Hunter/Killer is a prototype mine-detecting and neutralizing vehicle developed for the U.S. Army to clear roads of anti-tank mines.
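The clustering-plus-BIC model selection step can be sketched in one dimension: fit Gaussian mixtures with an increasing number of components by EM and keep the number that minimizes BIC. The feature data and EM details below are toy assumptions, not the MH/K CID pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two well-separated feature populations ("mine" vs "clutter" stand-ins).
data = np.concatenate([rng.normal(0.0, 1.0, 150), rng.normal(8.0, 1.0, 100)])
n = data.size

def fit_gmm(x, k, n_iter=100):
    """1-D Gaussian mixture via EM; returns the final log-likelihood."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)        # spread-out init
    var, w = np.full(k, x.var()), np.full(k, 1.0 / k)
    for _ in range(n_iter):
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)    # E-step
        nk = resp.sum(axis=0)                            # M-step
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
        w = nk / x.size
    return np.log(dens.sum(axis=1)).sum()                # ~converged log-lik

# BIC = -2 log L + (#free params) log n; a 1-D k-component GMM has 3k - 1.
bic = {k: -2 * fit_gmm(data, k) + (3 * k - 1) * np.log(n) for k in (1, 2, 3, 4)}
best_k = min(bic, key=bic.get)
print(best_k)
```

The BIC penalty is what stops EM's monotone likelihood gains from inflating the cluster count, which is the same role it plays for the full multivariate models in the paper.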
Zhao, Yang; Zheng, Wei; Zhuo, Daisy Y; Lu, Yuefeng; Ma, Xiwen; Liu, Hengchang; Zeng, Zhen; Laird, Glen
2017-10-11
Personalized medicine, or tailored therapy, has been an active and important topic in recent medical research. Many methods have been proposed in the literature for predictive biomarker detection and subgroup identification. In this article, we propose a novel decision tree-based approach applicable in randomized clinical trials. We model the prognostic effects of the biomarkers using additive regression trees and the biomarker-by-treatment effect using a single regression tree. A Bayesian approach is used to periodically revise the split variables and the split rules of the decision trees, which provides a better overall fit. A Gibbs sampler is implemented in the MCMC procedure, which updates the prognostic trees and the interaction tree separately. We use the posterior distribution of the interaction tree to construct the predictive scores of the biomarkers and to identify the subgroup where the treatment is superior to the control. Numerical simulations show that our proposed method performs well under various settings compared with existing methods. We also demonstrate an application of our method in a real clinical trial.
Sidibé, Cheick Abou Kounta; Grosbois, Vladimir; Thiaucourt, François; Niang, Mamadou; Lesnoff, Matthieu; Roger, François
2012-08-01
A Bayesian approach allowing for conditional dependence between two tests was used to estimate, without a gold standard, the sensitivities of the complement fixation test (CFT) and the competitive enzyme-linked immunosorbent assay (cELISA), as well as the serological prevalence of CBPP, in a cattle population of the Central Delta of the Niger River in Mali, where CBPP is enzootic and the true prevalence and the animals' serological state were unknown. A significant difference (P = 0.99) was observed between the sensitivities of the two tests, estimated at 73.7% (95% probability interval [PI], 63.4-82.7) for cELISA and 42.3% (95% PI, 33.3-53.7) for CFT. Individual-level serological prevalence in the study population was estimated at 14.1% (95% PI, 10.8-16.9). Our results indicate that in enzootic areas, cELISA performs better in terms of sensitivity than CFT. However, negative conditional sensitivity dependence between the two tests was detected, implying that to achieve maximum sensitivity, the two tests should be applied in parallel.
NASA Astrophysics Data System (ADS)
Morse, Llewellyn; Sharif Khodaei, Zahra; Aliabadi, M. H.
2018-01-01
In this work, a reliability-based impact detection strategy for a sensorized composite structure is proposed. Impacts are localized using Artificial Neural Networks (ANNs), with recorded guided waves due to impacts used as inputs. To account for variability in the recorded data under operational conditions, Bayesian updating and Kalman filter techniques are applied to improve the reliability of the detection algorithm. The possibility of having one or more faulty sensors is considered, and a decision fusion algorithm based on sub-networks of sensors is proposed to improve the application of the methodology to real structures. A strategy for reliably categorizing impacts into high-energy impacts, which are likely to cause damage in the structure (true impacts), and low-energy non-damaging impacts (false impacts), has also been proposed to reduce the false alarm rate. The proposed strategy involves employing classification ANNs with different features extracted from captured signals used as inputs. The proposed methodologies are validated by experimental results on a quasi-isotropic composite coupon impacted with a range of impact energies.
Detection of urinary extravasation by delayed technetium-99m DTPA renal imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taki, J.; Tonami, N.; Aburano, T.
Delayed imaging with Tc-99m DTPA renal scintigraphy demonstrated urinary extravasation in a patient with acute anuria in whom early sequential imaging showed no abnormal extrarenal radionuclide accumulation.
High statistics study of in-medium S- and P-wave quarkonium states in lattice Non-relativistic QCD
NASA Astrophysics Data System (ADS)
Kim, S.; Petreczky, P.; Rothkopf, A.
2017-11-01
Many measurements of quarkonium suppression at the LHC, e.g. the nuclear modification factor R_AA of J/ψ, are well described by a multitude of different models. Thus pinpointing the underlying physics aspects is difficult and guidance based on first principles is needed. Here we present the current status of our ongoing high precision study of in-medium spectral properties of both bottomonium and charmonium based on NRQCD on the lattice. This effective field theory allows us to capture the physics of quarkonium without modeling assumptions in a thermal QCD medium. In our study a first principles and realistic description of the QCD medium is provided by state-of-the-art lattices of the HotQCD collaboration at almost physical pion mass. Our updated results corroborate a picture of sequential modification of states with respect to their vacuum binding energy. Using a novel low-gain variant of the Bayesian BR method for reconstructing spectral functions we find that remnant features of the Upsilon may survive up to T ∼ 400 MeV, while the χb signal disappears around T ∼ 270 MeV. The cc̄ analysis hints at melting of χc below T ∼ 190 MeV, while some J/ψ remnant feature might survive up to T ∼ 245 MeV. An improved understanding of the numerical artifacts in the Bayesian approach and the availability of increased statistics have made possible a first quantitative study of the in-medium ground state masses, which tend to lower values as T increases, consistent with lattice potential based studies.
Bayesian parameter estimation for the Wnt pathway: an infinite mixture models approach.
Koutroumpas, Konstantinos; Ballarini, Paolo; Votsi, Irene; Cournède, Paul-Henry
2016-09-01
Likelihood-free methods, like Approximate Bayesian Computation (ABC), have been extensively used in model-based statistical inference with intractable likelihood functions. When combined with Sequential Monte Carlo (SMC) algorithms they constitute a powerful approach for parameter estimation and model selection of mathematical models of complex biological systems. A crucial step in the ABC-SMC algorithms, significantly affecting their performance, is the propagation of a set of parameter vectors through a sequence of intermediate distributions using Markov kernels. In this article, we employ Dirichlet process mixtures (DPMs) to design optimal transition kernels and we present an ABC-SMC algorithm with DPM kernels. We illustrate the use of the proposed methodology using real data for the canonical Wnt signaling pathway. A multi-compartment model of the pathway is developed and it is compared to an existing model. The results indicate that DPMs are more efficient in the exploration of the parameter space and can significantly improve ABC-SMC performance. In comparison to alternative sampling schemes that are commonly used, the proposed approach can bring potential benefits in the estimation of complex multimodal distributions. The method is used to estimate the parameters and the initial state of two models of the Wnt pathway and it is shown that the multi-compartment model fits the experimental data better. Python scripts for the Dirichlet Process Gaussian Mixture model and the Gibbs sampler are available at https://sites.google.com/site/kkoutroumpas/software konstantinos.koutroumpas@ecp.fr. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
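The ABC-SMC backbone that the article builds on can be sketched with the simplest possible ingredients: a Gaussian toy model, a Uniform(-10, 10) prior, and a plain Gaussian perturbation kernel standing in for the proposed DPM kernels. The tolerance schedule and the flat reweighting (exact for a uniform prior only up to the kernel correction) are deliberate simplifications.

```python
import numpy as np

rng = np.random.default_rng(4)

# "Observed" data from a model with unknown theta (here: mean of a Gaussian).
theta_true = 2.5
x_obs = rng.normal(theta_true, 1.0, 100)
s_obs = x_obs.mean()                      # summary statistic

def simulate(theta):
    return rng.normal(theta, 1.0, 100).mean()

# ABC-SMC: push a particle population through a shrinking tolerance schedule.
n_part = 500
particles = rng.uniform(-10, 10, n_part)  # draws from the prior
weights = np.full(n_part, 1.0 / n_part)
for eps in (2.0, 0.5, 0.1):
    sigma = 2.0 * np.sqrt(np.cov(particles, aweights=weights))
    accepted = []
    while len(accepted) < n_part:
        # Gaussian transition kernel; the paper replaces this with DPM kernels.
        theta = rng.choice(particles, p=weights) + sigma * rng.normal()
        if -10 <= theta <= 10 and abs(simulate(theta) - s_obs) < eps:
            accepted.append(theta)
    particles = np.array(accepted)
    weights = np.full(n_part, 1.0 / n_part)   # simplified flat reweighting

print(round(float(particles.mean()), 2))
```

The kernel choice controls how quickly the population concentrates without dying out at small tolerances, which is precisely where the adaptive DPM kernels are claimed to help on multimodal posteriors.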
Of bits and wows: A Bayesian theory of surprise with applications to attention.
Baldi, Pierre; Itti, Laurent
2010-06-01
The amount of information contained in a piece of data can be measured by the effect this data has on its observer. Fundamentally, this effect is to transform the observer's prior beliefs into posterior beliefs, according to Bayes' theorem. Thus the amount of information can be measured in a natural way by the distance (relative entropy) between the prior and posterior distributions of the observer over the available space of hypotheses. This facet of information, termed "surprise", is important in dynamic situations where beliefs change, in particular during learning and adaptation. Surprise can often be computed analytically, for instance in the case of distributions from the exponential family, or it can be numerically approximated. During sequential Bayesian learning, surprise decreases as the inverse of the number of training examples. Theoretical properties of surprise are discussed, in particular how it differs from and complements Shannon's definition of information. A computer vision neural network architecture is then presented capable of computing surprise over images and video stimuli. Hypothesizing that surprising data ought to attract natural or artificial attention systems, the output of this architecture is used in a psychophysical experiment to analyze human eye movements in the presence of natural video stimuli. Surprise is found to yield robust performance at predicting human gaze (ROC-like ordinal dominance score approximately 0.7 compared to approximately 0.8 for human inter-observer repeatability, approximately 0.6 for simpler intensity contrast-based predictor, and 0.5 for chance). The resulting theory of surprise is applicable across different spatio-temporal scales, modalities, and levels of abstraction. Copyright 2010 Elsevier Ltd. All rights reserved.
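The prior-to-posterior distance that defines surprise can be computed directly for a Beta-Bernoulli observer. The sketch below evaluates the KL integral numerically with the standard library only; the KL direction (posterior relative to prior) and the all-heads data stream are choices made for illustration.

```python
import math

# Surprise = KL(posterior || prior) for a Beta-Bernoulli observer watching a
# coin that keeps coming up heads. The integral is evaluated on a theta grid.
def beta_logpdf(t, a, b):
    log_norm = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return (a - 1) * math.log(t) + (b - 1) * math.log(1 - t) - log_norm

def kl_beta(a1, b1, a0, b0, n_grid=20_000):
    total = 0.0
    for i in range(1, n_grid):
        t = i / n_grid
        lp1 = beta_logpdf(t, a1, b1)
        total += math.exp(lp1) * (lp1 - beta_logpdf(t, a0, b0))
    return total / n_grid

# Flat Beta(1, 1) prior; after each head the posterior becomes the new prior.
a, b, surprises = 1.0, 1.0, []
for _ in range(20):
    surprises.append(kl_beta(a + 1, b, a, b))   # surprise carried by this head
    a += 1.0

print([round(s, 4) for s in surprises[:5]])
```

The first head is maximally surprising (analytically, KL(Beta(2,1) || Beta(1,1)) = ln 2 - 1/2) and subsequent identical observations carry less and less surprise, matching the abstract's point that surprise decays as learning proceeds.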
Individual differences in attention influence perceptual decision making.
Nunez, Michael D; Srinivasan, Ramesh; Vandekerckhove, Joachim
2015-01-01
Sequential sampling decision-making models have been successful in accounting for reaction time (RT) and accuracy data in two-alternative forced choice tasks. These models have been used to describe the behavior of populations of participants, and explanatory structures have been proposed to account for between-individual variability in model parameters. In this study we show that individual differences in behavior from a novel perceptual decision making task can be attributed to (1) differences in evidence accumulation rates, (2) differences in variability of evidence accumulation within trials, and (3) differences in non-decision times across individuals. Using electroencephalography (EEG), we demonstrate that these differences in cognitive variables, in turn, can be explained by attentional differences as measured by phase-locking of steady-state visual evoked potential (SSVEP) responses to the signal and noise components of the visual stimulus. Parameters of a cognitive model (a diffusion model) were obtained from accuracy and RT distributions and related to phase-locking indices (PLIs) of SSVEPs in a single step within a hierarchical Bayesian framework. Participants who were able to suppress the SSVEP response to visual noise in high frequency bands were able to accumulate correct evidence faster and had shorter non-decision times (preprocessing or motor response times), leading to more accurate responses and faster response times. We show that the combination of cognitive modeling and neural data in a hierarchical Bayesian framework relates physiological processes to the cognitive processes of participants, and that a model with a new (out-of-sample) participant's neural data can predict that participant's behavior more accurately than models without physiological data.
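The diffusion-model mechanics behind these claims can be sketched by simulating the accumulation process directly: a higher drift rate (faster evidence accumulation) yields both higher accuracy and shorter RTs. The drift rates, bound, noise, and non-decision time below are illustrative values, not parameters fitted to the study's data.

```python
import numpy as np

rng = np.random.default_rng(5)

def ddm_trial(drift, bound=1.0, dt=0.005, noise=1.0, ndt=0.3):
    """One drift-diffusion trial: evidence random-walks to one of two bounds."""
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return x >= bound, t + ndt          # (correct choice?, reaction time)

def summarize(drift, n=1500):
    trials = [ddm_trial(drift) for _ in range(n)]
    acc = float(np.mean([c for c, _ in trials]))
    rt = float(np.mean([t for _, t in trials]))
    return acc, rt

acc_lo, rt_lo = summarize(0.5)   # slow evidence accumulation
acc_hi, rt_hi = summarize(2.0)   # fast evidence accumulation
print(acc_lo, acc_hi, rt_lo, rt_hi)
```

In the paper the mapping runs the other way: accuracy and RT distributions (plus SSVEP phase-locking) constrain the drift, within-trial noise, and non-decision-time parameters per participant in a hierarchical Bayesian fit.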
Lenehan, Claire E.; Lewis, Simon W.
2002-01-01
LabVIEW®-based software for the automation of a sequential injection analysis instrument for the determination of morphine is presented. Detection was based on its chemiluminescence reaction with acidic potassium permanganate in the presence of sodium polyphosphate. The calibration function approximated linearity (range 5 × 10⁻¹⁰ to 5 × 10⁻⁶ M) with a line of best fit of y = 1.05x + 8.9164 (R² = 0.9959), where y is the log₁₀ signal (mV) and x is the log₁₀ morphine concentration (M). Precision, as measured by relative standard deviation, was 0.7% for five replicate analyses of a morphine standard (5 × 10⁻⁸ M). The limit of detection (3σ) was determined as 5 × 10⁻¹¹ M morphine. PMID:18924729
Lenehan, Claire E; Barnett, Neil W; Lewis, Simon W
2002-01-01
LabVIEW-based software for the automation of a sequential injection analysis instrument for the determination of morphine is presented. Detection was based on its chemiluminescence reaction with acidic potassium permanganate in the presence of sodium polyphosphate. The calibration function approximated linearity (range 5 × 10⁻¹⁰ to 5 × 10⁻⁶ M) with a line of best fit of y = 1.05x + 8.9164 (R² = 0.9959), where y is the log₁₀ signal (mV) and x is the log₁₀ morphine concentration (M). Precision, as measured by relative standard deviation, was 0.7% for five replicate analyses of a morphine standard (5 × 10⁻⁸ M). The limit of detection (3σ) was determined as 5 × 10⁻¹¹ M morphine.
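The reported log-log calibration can be used in both directions: predicting the chemiluminescence signal from a standard, and inverting an unknown sample's signal to a concentration. Only the slope and intercept come from the abstract; the function names are ours.

```python
import math

# Log-log calibration reported for the chemiluminescence method:
# y = 1.05 x + 8.9164, with y = log10(signal, mV) and x = log10([morphine], M).
SLOPE, INTERCEPT = 1.05, 8.9164

def signal_from_conc(conc_molar):
    return 10 ** (SLOPE * math.log10(conc_molar) + INTERCEPT)

def conc_from_signal(signal_mv):
    return 10 ** ((math.log10(signal_mv) - INTERCEPT) / SLOPE)

# Round-trip a mid-range standard (5 x 10^-8 M).
s = signal_from_conc(5e-8)
print(s, conc_from_signal(s))
```

The slope of 1.05 (rather than exactly 1) is why the inversion must divide by the slope in log space; treating the response as directly proportional to concentration would bias results at the ends of the five-decade range.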
Kunimoto, Yasuomi; Hasegawa, Kensaku; Arii, Shiro; Kataoka, Hideyuki; Yazama, Hiroaki; Kuya, Junko; Kitano, Hiroya
2014-04-01
Numerous studies have reported sound-induced motion of the tympanic membrane (TM). To demonstrate sequential motion characteristics of the entire TM by noncontact laser Doppler vibrometry (LDV), we have investigated multipoint TM measurement. A laser Doppler vibrometer was mounted on a surgical microscope. The velocity was measured at 33 points on the TM using noncontact LDV without any reflectors. Measurements were performed with tonal stimuli of 1, 3, and 6 kHz. Amplitudes were calculated from these measurements, and time-dependent changes in TM motion were described using a graphics application. TM motions were detected more clearly and stably at 1 and 3 kHz than at other frequencies. This is because the external auditory canal acted as a resonant tube near 3 kHz. TM motion displayed 1 peak at 1 kHz and 2 peaks at 3 kHz. Large amplitudes were detected in the posterosuperior quadrant (PSQ) at 1 kHz and in the PSQ and anteroinferior quadrant (AIQ) at 3 kHz. The entire TM showed synchronized movement centered on the PSQ at 1 kHz, with phase-shifting between PSQ and AIQ movement at 3 kHz. Amplitude was smaller at the umbo than at other parts. In contrast, amplitudes at high frequencies were too small and complicated to detect any obvious peaks. Sequential multipoint motion of the tympanic membrane showed that vibration characteristics of the TM differ according to the part and frequency.
NASA Astrophysics Data System (ADS)
Li, D.
2016-12-01
Sudden water pollution accidents are unavoidable risk events that we must learn to co-exist with. In China's Taihu River Basin, river flow conditions are complicated by frequent artificial interference. Sudden water pollution accidents occur mainly as large abnormal discharges of wastewater and are characterized by sudden occurrence, uncontrollable scope, uncertain pollutant types and a concentrated distribution of many risk sources. Effective prevention of pollution accidents that may occur is of great significance for water quality safety management. Bayesian networks can represent the relationship between pollution sources and river water quality intuitively. Using a time-sequential Monte Carlo algorithm, the pollution-source state-switching model, the water quality model for the river network and Bayesian reasoning are integrated, and a sudden water pollution risk assessment model for the river network is developed to quantify the water quality risk under the collective influence of multiple pollution sources. Based on the isotope water transport mechanism, a dynamic tracing model of multiple pollution sources is established, which can describe the relationship between the excessive risk of the system and the multiple risk sources. Finally, the diagnostic reasoning algorithm based on the Bayesian network is coupled with the multi-source tracing model, which can identify the contribution of each risk source to the system risk under complex flow conditions. Taking the Taihu Lake water system as the research object, the model is applied to obtain reasonable results for three typical years. The study has shown that the water quality risk at critical sections is influenced by the pollution risk sources, the boundary water quality, the hydrological conditions and the self-purification capacity, and that multiple pollution sources have an obvious effect on the water quality risk of the receiving water body.
The water quality risk assessment approach developed in this study offers an effective tool for systematically quantifying the random uncertainty in a plain river network system, and it also provides technical support for decision-making on controlling sudden water pollution through the identification of critical pollution sources.
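Diagnostic reasoning over a Bayesian network, attributing an observed water-quality exceedance to competing risk sources, can be sketched by exact enumeration on a two-source toy network. All probabilities below are invented for illustration and are not from the Taihu study.

```python
from itertools import product

# Toy Bayesian network: two independent pollution sources, each either
# discharging (1) or not (0), feeding a water-quality exceedance event E.
p_discharge = {"source_A": 0.10, "source_B": 0.30}

def p_exceed(a, b):
    # Noisy-OR style water-quality model with a 0.05 baseline exceedance rate.
    return 1 - (1 - 0.8 * a) * (1 - 0.4 * b) * (1 - 0.05)

# Diagnostic reasoning by enumeration: P(source active | exceedance observed).
joint = {}
for a, b in product((0, 1), repeat=2):
    pa = p_discharge["source_A"] if a else 1 - p_discharge["source_A"]
    pb = p_discharge["source_B"] if b else 1 - p_discharge["source_B"]
    joint[(a, b)] = pa * pb * p_exceed(a, b)

p_e = sum(joint.values())
post_a = sum(v for (a, b), v in joint.items() if a) / p_e
post_b = sum(v for (a, b), v in joint.items() if b) / p_e
print(round(post_a, 3), round(post_b, 3))
```

Observing the exceedance raises both sources above their priors, and the ratio of the two posteriors is the kind of "contribution of each risk source" quantity the coupled diagnostic model extracts, there under dynamic flow conditions rather than by static enumeration.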
New Approaches For Asteroid Spin State and Shape Modeling From Delay-Doppler Radar Images
NASA Astrophysics Data System (ADS)
Raissi, Chedy; Lamee, Mehdi; Mosiane, Olorato; Vassallo, Corinne; Busch, Michael W.; Greenberg, Adam; Benner, Lance A. M.; Naidu, Shantanu P.; Duong, Nicholas
2016-10-01
Delay-Doppler radar imaging is a powerful technique for characterizing the trajectories, shapes, and spin states of near-Earth asteroids, and has yielded detailed models of dozens of objects. Reconstructing objects' shapes and spins from delay-Doppler data is a computationally intensive inversion problem. Since the 1990s, delay-Doppler data have been analyzed using the SHAPE software. SHAPE performs sequential single-parameter fitting, and requires considerable computer runtime and human intervention (Hudson 1993, Magri et al. 2007). Recently, multiple-parameter fitting algorithms have been shown to invert delay-Doppler datasets more efficiently (Greenberg & Margot 2015), decreasing runtime while improving accuracy. However, extensive human oversight of the shape modeling process is still required. We have explored two new techniques to better automate delay-Doppler shape modeling: Bayesian optimization and a machine-learning neural network. One of the most time-intensive steps of the shape modeling process is performing a grid search to constrain the target's spin state. We have implemented a Bayesian optimization routine that uses SHAPE to autonomously search the space of spin-state parameters. To test the efficacy of this technique, we compared it to results from human-guided SHAPE runs for asteroids 1992 UY4, 2000 RS11, and 2008 EV5. Bayesian optimization yielded similar spin-state constraints with roughly a factor of 3 less computer runtime. The shape modeling process could be further accelerated by using a deep neural network to replace iterative fitting. We have implemented a neural network with a variational autoencoder (VAE), using a subset of known asteroid shapes and a large set of synthetic radar images as inputs to train the network. Conditioning the VAE in this manner allows the user to give the network a set of radar images and get a 3D shape model as an output.
Additional development will be required to train a network to reliably render shapes from delay-Doppler images. This work was supported by NASA Ames, NVIDIA, Autodesk and the SETI Institute as part of the NASA Frontier Development Lab program.
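The spin-state search can be sketched as a generic Gaussian-process Bayesian optimization loop over a one-dimensional spin-period grid. The toy `objective` below is a stand-in for a SHAPE goodness-of-fit evaluation, and the RBF kernel, lengthscale, and lower-confidence-bound acquisition are illustrative choices, not the authors' implementation.

```python
# Sketch of Bayesian optimization for a 1-D spin-period search.
# objective() stands in for a (slow) SHAPE fit-residual run.
import numpy as np

def objective(p):
    """Hypothetical fit residual as a function of spin period (hours)."""
    return (p - 7.5) ** 2 + 0.1 * np.sin(5 * p)

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """Gaussian-process posterior mean and std at candidate points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = np.clip(1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def propose(X, y, candidates, kappa=2.0):
    """Pick the candidate minimizing the lower confidence bound mu - kappa*sigma."""
    ys = (y - y.mean()) / (y.std() + 1e-9)  # standardize observations
    mu, sigma = gp_posterior(X, ys, candidates)
    return candidates[np.argmin(mu - kappa * sigma)]

grid = np.linspace(2.0, 12.0, 501)  # candidate spin periods (hours)
X = np.array([3.0, 10.0])           # initial evaluations
y = objective(X)
for _ in range(15):                 # sequential design loop
    x_next = propose(X, y, grid)
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

best = X[np.argmin(y)]
print("best period (h):", round(float(best), 2))
```

The point of the surrogate model is that each new evaluation is placed where the expected fit improvement justifies the cost, which is why far fewer objective calls are needed than in an exhaustive grid search.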
Cunanan, Kristen M; Carlin, Bradley P; Peterson, Kevin A
2016-12-01
Many clinical trial designs are impractical for community-based clinical intervention trials. Stepped wedge trial designs provide practical advantages, but few descriptions exist of their clinical implementation features, statistical design efficiencies, and limitations. Our objective was to enhance the efficiency of stepped wedge trial designs by evaluating the impact of design characteristics on statistical power for the British Columbia Telehealth Trial. The British Columbia Telehealth Trial is a community-based, cluster-randomized, controlled clinical trial in rural and urban British Columbia. To determine the effect of an Internet-based telehealth intervention on healthcare utilization, 1000 subjects with an existing diagnosis of congestive heart failure or type 2 diabetes will be enrolled from 50 clinical practices. Hospital utilization is measured using a composite of disease-specific hospital admissions and emergency visits. The intervention comprises online telehealth data collection and counseling provided to support a disease-specific action plan developed by the primary care provider. The planned intervention is sequentially introduced across all participating practices. We adopt a fully Bayesian, Markov chain Monte Carlo-driven statistical approach, wherein we use simulation to determine the effect of cluster size, sample size, and crossover interval choice on type I error and power to evaluate differences in hospital utilization. For our Bayesian stepped wedge trial design, simulations suggest moderate decreases in power when crossover intervals from control to intervention are reduced from every 3 weeks to every 2 weeks, and dramatic decreases in power as the number of clusters decreases. Power and type I error performance were not notably affected by the addition of nonzero cluster effects or a temporal trend in hospitalization intensity.
Stepped wedge trial designs that intervene in small clusters across longer periods can provide enhanced power to evaluate comparative effectiveness, while offering practical implementation advantages in geographic stratification, temporal change, use of existing data, and resource distribution. Current population estimates were used; however, models may not reflect actual event rates during the trial. In addition, temporal or spatial heterogeneity can bias treatment effect estimates. © The Author(s) 2016.
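The simulation-based power evaluation can be sketched with a crude Monte Carlo loop over a stepped wedge layout. All rates, cluster counts, and effect sizes below are invented, and a naive two-proportion z-test on pooled control versus intervention periods stands in for the fully Bayesian mixed-model analysis the study actually employs.

```python
# Crude Monte Carlo power sketch for a stepped wedge design.
# Clusters cross from control to intervention in equal waves; outcomes are
# Bernoulli events; the analysis here is a simple pooled z-test, not the
# study's Bayesian MCMC model. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(42)

def simulate_power(n_clusters=50, cluster_size=20, n_steps=5,
                   base_rate=0.30, rate_ratio=0.70, n_sims=300):
    """Estimate power to detect a rate_ratio reduction in event probability."""
    # wave k clusters cross over to intervention at step k
    crossover = np.repeat(np.arange(1, n_steps + 1), n_clusters // n_steps)
    rejections = 0
    for _ in range(n_sims):
        ctrl = [0, 0]  # [events, n] pooled over control cluster-periods
        trt = [0, 0]   # [events, n] pooled over intervention cluster-periods
        for c in range(n_clusters):
            b = rng.normal(0.0, 0.02)  # small cluster-level rate shift
            for step in range(1, n_steps + 1):
                treated = step >= crossover[c]
                rate = base_rate * (rate_ratio if treated else 1.0) + b
                events = rng.binomial(cluster_size, float(np.clip(rate, 0.0, 1.0)))
                arm = trt if treated else ctrl
                arm[0] += events
                arm[1] += cluster_size
        p1, p2 = ctrl[0] / ctrl[1], trt[0] / trt[1]
        p = (ctrl[0] + trt[0]) / (ctrl[1] + trt[1])
        se = np.sqrt(p * (1 - p) * (1 / ctrl[1] + 1 / trt[1]))
        rejections += abs(p1 - p2) / se > 1.96  # two-sided test at alpha = 0.05
    return rejections / n_sims

print("estimated power:", simulate_power())
```

Rerunning this sketch with fewer clusters or a compressed crossover schedule qualitatively illustrates the power losses the study reports, although the real analysis accounts for clustering and temporal trends properly.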
Andritsos, Nikolaos D; Mataragas, Marios; Paramithiotis, Spiros; Drosinos, Eleftherios H
2013-12-01
Listeria monocytogenes poses a serious threat to public health, and the majority of cases of human listeriosis are associated with contaminated food. Reliable microbiological testing is needed for effective pathogen control by the food industry and competent authorities. The aims of this work were to estimate the prevalence and concentration of L. monocytogenes in minced pork meat by applying a Bayesian modeling approach, and also to determine the performance of three culture media commonly used for detecting L. monocytogenes in foods from a deterministic and stochastic perspective. Samples (n = 100) collected from local markets were tested for L. monocytogenes using in parallel the PALCAM, ALOA and RAPID'L.mono selective media according to ISO 11290-1:1996 and 11290-2:1998 methods. The presence of the pathogen was confirmed by biochemical and molecular tests. Independent experiments (n = 10) were performed for model validation purposes. Performance attributes were calculated from the presence-absence microbiological test results by combining the results obtained from the culture media and confirmative tests. The Dirichlet distribution, the multivariate expression of a Beta distribution, was used to analyze the performance data from a stochastic perspective. No L. monocytogenes was enumerated by direct plating (<10 CFU/g), though the pathogen was detected in 22% of the samples. L. monocytogenes concentration was estimated at 14-17 CFU/kg. Validation showed good agreement between observed and predicted prevalence (error = -2.17%). The results showed that all media were better at ruling in L. monocytogenes presence than at ruling it out. Sensitivity and specificity varied depending on the culture-dependent method. None of the culture media alone was perfect at detecting L. monocytogenes in minced pork meat. The use of at least two culture media in parallel enhanced the efficiency of L. monocytogenes detection.
Bayesian modeling may reduce the time needed to draw conclusions regarding L. monocytogenes presence and the uncertainty of the results obtained. Furthermore, the problem of observing zero counts may be overcome by applying Bayesian analysis, making the determination of test performance feasible. Copyright © 2013 Elsevier Ltd. All rights reserved.
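The zero-count point can be illustrated with a Beta-Binomial sketch, the univariate case of the Dirichlet analysis the study uses. Only the 22 confirmed-positive samples figure comes from the abstract; the per-medium detection counts below are hypothetical.

```python
# Beta-Binomial sketch of culture-medium sensitivity. A uniform Beta(1, 1)
# prior keeps the estimate defined even when a medium misses zero positives,
# where the naive frequency estimate would be exactly 1 with no uncertainty.
def beta_posterior(successes, trials, a=1.0, b=1.0):
    """Posterior mean and (alpha, beta) of sensitivity under a Beta(a, b) prior."""
    a_post, b_post = a + successes, b + (trials - successes)
    return a_post / (a_post + b_post), (a_post, b_post)

# 22 confirmed-positive samples, as in the study; detections per medium are hypothetical
for medium, detected in [("PALCAM", 18), ("ALOA", 20), ("RAPID'L.mono", 22)]:
    mean, _ = beta_posterior(detected, 22)
    print(f"{medium}: posterior mean sensitivity = {mean:.3f}")
```

Note that the medium detecting all 22 positives gets a posterior mean sensitivity of 23/24 ≈ 0.958 rather than exactly 1, so a credible interval for its performance remains computable despite the zero failure count.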