Sample records for Bayesian confidence intervals

  1. Determination of confidence limits for experiments with low numbers of counts. [Poisson-distributed photon counts from astrophysical sources]

    NASA Technical Reports Server (NTRS)

    Kraft, Ralph P.; Burrows, David N.; Nousek, John A.

    1991-01-01

    Two different methods, classical and Bayesian, for determining confidence intervals involving Poisson-distributed data are compared. Particular consideration is given to cases where the number of counts observed is small and is comparable to the mean number of background counts. Reasons for preferring the Bayesian over the classical method are given. Tables of confidence limits calculated by the Bayesian method are provided for quick reference.
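
    A minimal numerical sketch of the Bayesian construction for this setting, assuming a flat prior on the source intensity and a known mean background; the function name, grid size, and example counts are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def kbn_interval(N, B, cl=0.95, smax=60.0, ngrid=20001):
        """Bayesian credible limits for a Poisson source intensity S with known
        mean background B, given N observed counts and a flat prior on S >= 0.
        Unnormalized posterior: p(S | N) ~ exp(-(S + B)) * (S + B)**N.
        Returns the minimal (highest-density) interval holding mass cl."""
        S = np.linspace(0.0, smax, ngrid)
        logp = -(S + B) + N * np.log(S + B)        # unnormalized log posterior
        p = np.exp(logp - logp.max())
        p /= np.trapz(p, S)                        # normalize on the grid
        order = np.argsort(p)[::-1]                # highest-density bins first
        mass = np.cumsum(p[order]) * (S[1] - S[0])
        keep = order[: np.searchsorted(mass, cl) + 1]
        return S[keep].min(), S[keep].max()

    # e.g. 3 observed counts with a mean background of 1 expected count
    print(kbn_interval(N=3, B=1.0))
    ```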

  2. Intervals for posttest probabilities: a comparison of 5 methods.

    PubMed

    Mossman, D; Berger, J O

    2001-01-01

    Several medical articles discuss methods of constructing confidence intervals for single proportions and the likelihood ratio, but scant attention has been given to the systematic study of intervals for the posttest probability, or positive predictive value, of a test. The authors describe 5 methods of constructing confidence intervals for posttest probabilities when estimates of sensitivity, specificity, and the pretest probability of a disorder are derived from empirical data. They then evaluate each method to determine how well the intervals' coverage properties correspond to their nominal value. When the estimates of pretest probabilities, sensitivity, and specificity are derived from more than 80 subjects and are not close to 0 or 1, all methods generate intervals with appropriate coverage properties. When these conditions are not met, however, the best-performing method is an objective Bayesian approach implemented by a simple simulation using a spreadsheet. Physicians and investigators can generate accurate confidence intervals for posttest probabilities in small-sample situations using the objective Bayesian approach.
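
    The recommended simulation approach is easy to reproduce outside a spreadsheet. A minimal sketch, assuming Jeffreys Beta(0.5, 0.5) posteriors for sensitivity, specificity, and pretest probability (the paper's exact prior may differ); all counts below are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def posttest_interval(tp, fn, tn, fp, n_dis, n_tot, level=0.95, draws=100_000):
        """Simulation interval for the positive posttest probability, with
        Jeffreys Beta(0.5, 0.5) posteriors assumed for all three inputs."""
        sens = rng.beta(tp + 0.5, fn + 0.5, draws)
        spec = rng.beta(tn + 0.5, fp + 0.5, draws)
        prev = rng.beta(n_dis + 0.5, n_tot - n_dis + 0.5, draws)
        ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
        lo, hi = np.percentile(ppv, [(1 - level) / 2 * 100, (1 + level) / 2 * 100])
        return lo, hi

    # hypothetical small-sample data: 18/20 sensitivity, 45/50 specificity,
    # pretest probability estimated from 15 diseased out of 70 screened
    print(posttest_interval(tp=18, fn=2, tn=45, fp=5, n_dis=15, n_tot=70))
    ```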

  3. Comparing interval estimates for small sample ordinal CFA models

    PubMed Central

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more often positively biased than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002

  4. Comparing interval estimates for small sample ordinal CFA models.

    PubMed

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more often positively biased than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research.

  5. Credible occurrence probabilities for extreme geophysical events: earthquakes, volcanic eruptions, magnetic storms

    USGS Publications Warehouse

    Love, Jeffrey J.

    2012-01-01

    Statistical analysis is made of rare, extreme geophysical events recorded in historical data -- counting the number of events $k$ with sizes that exceed chosen thresholds during specific durations of time $\tau$. Under transformations that stabilize data and model-parameter variances, the most likely Poisson-event occurrence rate, $k/\tau$, applies for frequentist inference and, also, for Bayesian inference with a Jeffreys prior that ensures posterior invariance under changes of variables. Frequentist confidence intervals and Bayesian (Jeffreys) credibility intervals are approximately the same and easy to calculate: $(1/\tau)[(\sqrt{k} - z/2)^{2}, (\sqrt{k} + z/2)^{2}]$, where $z$ is a parameter that specifies the width, $z=1$ ($z=2$) corresponding to $1\sigma$, $68.3\%$ ($2\sigma$, $95.4\%$). If only a few events have been observed, as is usually the case for extreme events, then these "error-bar" intervals might be considered to be relatively wide. From historical records, we estimate most likely long-term occurrence rates, 10-yr occurrence probabilities, and intervals of frequentist confidence and Bayesian credibility for large earthquakes, explosive volcanic eruptions, and magnetic storms.
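
    The quoted interval is a one-liner to verify; a quick sketch (event counts below are illustrative):

    ```python
    import numpy as np

    def poisson_rate_interval(k, tau, z=1.0):
        """Approximate frequentist/Jeffreys interval for a Poisson rate k/tau:
        (1/tau) * [(sqrt(k) - z/2)**2, (sqrt(k) + z/2)**2], per Love (2012).
        z = 1 gives ~68.3% (1-sigma) coverage, z = 2 gives ~95.4% (2-sigma)."""
        lo = max(np.sqrt(k) - z / 2.0, 0.0) ** 2 / tau
        hi = (np.sqrt(k) + z / 2.0) ** 2 / tau
        return lo, hi

    # e.g. k = 4 extreme events observed in tau = 100 years
    print(poisson_rate_interval(4, 100.0, z=2.0))   # (0.01, 0.09) per year
    ```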

  6. Power in Bayesian Mediation Analysis for Small Sample Research

    PubMed Central

    Miočević, Milica; MacKinnon, David P.; Levy, Roy

    2018-01-01

    It was suggested that Bayesian methods have potential for increasing power in mediation analysis (Koopman, Howe, Hollenbeck, & Sin, 2015; Yuan & MacKinnon, 2009). This paper compares the power of Bayesian credibility intervals for the mediated effect to the power of normal theory, distribution of the product, percentile, and bias-corrected bootstrap confidence intervals at N ≤ 200. Bayesian methods with diffuse priors had power comparable to the distribution of the product and bootstrap methods, and Bayesian methods with informative priors had the most power. Varying degrees of precision of prior distributions were also examined. Increased precision led to greater power only when N ≥ 100 and the effects were small, N < 60 and the effects were large, and N < 200 and the effects were medium. An empirical example from psychology illustrated a Bayesian analysis of the single mediator model from prior selection to interpreting results. PMID:29662296

  7. Power in Bayesian Mediation Analysis for Small Sample Research.

    PubMed

    Miočević, Milica; MacKinnon, David P; Levy, Roy

    2017-01-01

    It was suggested that Bayesian methods have potential for increasing power in mediation analysis (Koopman, Howe, Hollenbeck, & Sin, 2015; Yuan & MacKinnon, 2009). This paper compares the power of Bayesian credibility intervals for the mediated effect to the power of normal theory, distribution of the product, percentile, and bias-corrected bootstrap confidence intervals at N ≤ 200. Bayesian methods with diffuse priors had power comparable to the distribution of the product and bootstrap methods, and Bayesian methods with informative priors had the most power. Varying degrees of precision of prior distributions were also examined. Increased precision led to greater power only when N ≥ 100 and the effects were small, N < 60 and the effects were large, and N < 200 and the effects were medium. An empirical example from psychology illustrated a Bayesian analysis of the single mediator model from prior selection to interpreting results.

  8. Monitoring Human Development Goals: A Straightforward (Bayesian) Methodology for Cross-National Indices

    ERIC Educational Resources Information Center

    Abayomi, Kobi; Pizarro, Gonzalo

    2013-01-01

    We offer a straightforward framework for measurement of progress, across many dimensions, using cross-national social indices, which we classify as linear combinations of multivariate country level data onto a univariate score. We suggest a Bayesian approach which yields probabilistic (confidence type) intervals for the point estimates of country…

  9. Methods for calculating confidence and credible intervals for the residual between-study variance in random effects meta-regression models

    PubMed Central

    2014-01-01

    Background Meta-regression is becoming increasingly used to model study level covariate effects. However this type of statistical analysis presents many difficulties and challenges. Here two methods for calculating confidence intervals for the magnitude of the residual between-study variance in random effects meta-regression models are developed. A further suggestion for calculating credible intervals using informative prior distributions for the residual between-study variance is presented. Methods Two recently proposed and, under the assumptions of the random effects model, exact methods for constructing confidence intervals for the between-study variance in random effects meta-analyses are extended to the meta-regression setting. The use of Generalised Cochran heterogeneity statistics is extended to the meta-regression setting and a Newton-Raphson procedure is developed to implement the Q profile method for meta-analysis and meta-regression. WinBUGS is used to implement informative priors for the residual between-study variance in the context of Bayesian meta-regressions. Results Results are obtained for two contrasting examples, where the first example involves a binary covariate and the second involves a continuous covariate. Intervals for the residual between-study variance are wide for both examples. Conclusions Statistical methods, and R computer software, are available to compute exact confidence intervals for the residual between-study variance under the random effects model for meta-regression. These frequentist methods are almost as easily implemented as their established counterparts for meta-analysis. Bayesian meta-regressions are also easily performed by analysts who are comfortable using WinBUGS. Estimates of the residual between-study variance in random effects meta-regressions should be routinely reported and accompanied by some measure of their uncertainty. Confidence and/or credible intervals are well-suited to this purpose. PMID:25196829
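
    The Q-profile idea can be sketched for the plain meta-analysis case (the paper's meta-regression version generalizes the weighted sums with covariates). A minimal sketch with hypothetical effect estimates and within-study variances:

    ```python
    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import chi2

    def q_profile_ci(y, s2, level=0.95):
        """Q-profile confidence interval for the between-study variance tau^2
        in a random-effects meta-analysis: find tau^2 values where the
        generalized Q statistic hits chi-square quantiles."""
        y, s2 = np.asarray(y, float), np.asarray(s2, float)
        k = len(y)

        def Q(tau2):
            w = 1.0 / (s2 + tau2)
            mu = np.sum(w * y) / np.sum(w)
            return np.sum(w * (y - mu) ** 2)

        alpha = 1.0 - level
        qlo, qhi = chi2.ppf([alpha / 2, 1 - alpha / 2], df=k - 1)
        big = 100.0 * (np.var(y) + s2.max())     # Q is decreasing in tau^2
        lo = 0.0 if Q(0.0) < qhi else brentq(lambda t: Q(t) - qhi, 0.0, big)
        hi = 0.0 if Q(0.0) < qlo else brentq(lambda t: Q(t) - qlo, 0.0, big)
        return lo, hi

    # hypothetical log-odds-ratio estimates and variances from 6 studies
    y = [0.2, 0.5, -0.1, 0.6, 0.3, 0.8]
    s2 = [0.05, 0.10, 0.08, 0.20, 0.06, 0.15]
    print(q_profile_ci(y, s2))
    ```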

  10. Significance testing - are we ready yet to abandon its use?

    PubMed

    The, Bertram

    2011-11-01

    Understanding of the damaging effects of significance testing has steadily grown. Reporting p values without dichotomizing the result as significant or not is not the solution. Confidence intervals are better, but are troubled by a non-intuitive interpretation, and are often misused just to see whether the null value lies within the interval. Bayesian statistics provide an alternative which solves most of these problems. Although criticized for relying on subjective models, the interpretation of a Bayesian posterior probability is more intuitive than the interpretation of a p value, and seems to be closest to intuitive patterns of human decision making. Another alternative could be using confidence interval functions (or p value functions) to display a continuum of intervals at different levels of confidence around a point estimate. Thus, better alternatives to significance testing exist. The reluctance to abandon this practice might reflect both a preference for clinging to old habits and unfamiliarity with better methods. Authors might question whether using less commonly exercised, though superior, techniques will be well received by editors, reviewers, and readers. A joint effort will be needed to abandon significance testing in clinical research in the future.
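
    A confidence interval function of the kind suggested here is straightforward to compute for an approximately normal estimate; a small sketch with hypothetical numbers:

    ```python
    import numpy as np
    from scipy import stats

    # hypothetical point estimate and standard error, e.g. a mean difference
    est, se = 1.8, 0.9

    # the confidence interval function: intervals at a continuum of levels
    for level in (0.50, 0.80, 0.90, 0.95, 0.99):
        z = stats.norm.ppf(0.5 + level / 2)
        print(f"{level:.0%} CI: ({est - z * se:.2f}, {est + z * se:.2f})")

    # the equivalent two-sided p-value function at a hypothesized value h
    p_value = lambda h: 2 * stats.norm.sf(abs(est - h) / se)
    print(p_value(0.0))   # p value for the null value h = 0
    ```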

  11. Bayesian forecasting and uncertainty quantifying of stream flows using Metropolis-Hastings Markov Chain Monte Carlo algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Hongrui; Wang, Cheng; Wang, Ying; Gao, Xiong; Yu, Chen

    2017-06-01

    This paper presents a Bayesian approach using the Metropolis-Hastings Markov Chain Monte Carlo algorithm and applies this method to daily river flow rate forecasting and uncertainty quantification for the Zhujiachuan River, using data collected from Qiaotoubao Gage Station and 13 other gage stations in the Zhujiachuan watershed in China. The proposed method is also compared with conventional maximum likelihood estimation (MLE) for parameter estimation and quantification of associated uncertainties. While the Bayesian method performs similarly in estimating the mean value of daily flow rate, it outperforms the conventional MLE method in uncertainty quantification, providing a narrower reliable interval than the MLE confidence interval and thus a more precise estimation by using the related information from regional gage stations. The Bayesian MCMC method might therefore be more favorable for uncertainty analysis and risk management.
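
    A minimal random-walk Metropolis-Hastings sketch showing how a posterior sample turns into a credible interval (a toy Gaussian model, not the authors' streamflow model; all values are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    data = rng.normal(3.0, 1.0, size=50)           # toy observations

    def log_post(mu):
        # flat prior on mu; Gaussian likelihood with known unit variance
        return -0.5 * np.sum((data - mu) ** 2)

    mu, chain, step = 0.0, [], 0.5
    for _ in range(20_000):
        prop = mu + step * rng.normal()            # random-walk proposal
        if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
            mu = prop                              # accept, else keep current
        chain.append(mu)

    samples = np.array(chain[5_000:])              # drop burn-in
    print(np.percentile(samples, [2.5, 97.5]))     # 95% credible interval
    ```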

  12. Joint distribution approaches to simultaneously quantifying benefit and risk.

    PubMed

    Shaffer, Michele L; Watterberg, Kristi L

    2006-10-12

    The benefit-risk ratio has been proposed to measure the tradeoff between benefits and risks of two therapies for a single binary measure of efficacy and a single adverse event. The ratio is calculated from the difference in risk and difference in benefit between therapies. Small sample sizes or expected differences in benefit or risk can lead to no solution or problematic solutions for confidence intervals. Alternatively, using the joint distribution of benefit and risk, confidence regions for the differences in risk and benefit can be constructed in the benefit-risk plane. The information in the joint distribution can be summarized by choosing regions of interest in this plane. Using Bayesian methodology provides a very flexible framework for summarizing information in the joint distribution. Data from a National Institute of Child Health & Human Development trial of hydrocortisone illustrate the construction of confidence regions and regions of interest in the benefit-risk plane, where benefit is survival without supplemental oxygen at 36 weeks postmenstrual age, and risk is gastrointestinal perforation. For the subgroup of infants exposed to chorioamnionitis the confidence interval based on the benefit-risk ratio is wide (Benefit-risk ratio: 1.52; 90% confidence interval: 0.23 to 5.25). Choosing regions of appreciable risk and acceptable risk in the benefit-risk plane confirms the uncertainty seen in the wide confidence interval for the benefit-risk ratio--there is a greater than 50% chance of falling into the region of acceptable risk--while visually allowing the uncertainty in risk and benefit to be shown separately. Applying Bayesian methodology, an incremental net health benefit analysis shows there is a 72% chance of having a positive incremental net benefit if hydrocortisone is used in place of placebo if one is willing to incur at most one gastrointestinal perforation for each additional infant that survives without supplemental oxygen. If the benefit-risk ratio is presented, the joint distribution of benefit and risk also should be shown. These regions avoid the ambiguity associated with collapsing benefit and risk to a single dimension. Bayesian methods allow even greater flexibility in simultaneously quantifying benefit and risk.
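
    The joint-distribution idea can be sketched by simulating independent Beta posteriors for the benefit and risk proportions in each arm and asking how much posterior mass falls in a chosen region of the benefit-risk plane. The counts below are hypothetical, not the trial's data, and a uniform prior is assumed:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    draws = 100_000

    # hypothetical (events, n) for benefit and risk in each arm
    ben_trt, ben_ctl = (30, 50), (20, 50)    # survival without oxygen
    risk_trt, risk_ctl = (5, 50), (2, 50)    # gastrointestinal perforation

    def beta_post(events, n):                # uniform Beta(1, 1) prior assumed
        return rng.beta(events + 1, n - events + 1, draws)

    d_benefit = beta_post(*ben_trt) - beta_post(*ben_ctl)
    d_risk = beta_post(*risk_trt) - beta_post(*risk_ctl)

    # posterior mass in an "acceptable risk" region of the plane: at most one
    # extra perforation per extra survivor, i.e. incremental net benefit > 0
    print(np.mean(d_benefit > d_risk))
    ```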

  13. The Bayesian New Statistics: Hypothesis testing, estimation, meta-analysis, and power analysis from a Bayesian perspective.

    PubMed

    Kruschke, John K; Liddell, Torrin M

    2018-02-01

    In the practice of data analysis, there is a conceptual distinction between hypothesis testing, on the one hand, and estimation with quantified uncertainty on the other. Among frequentists in psychology, a shift of emphasis from hypothesis testing to estimation has been dubbed "the New Statistics" (Cumming 2014). A second conceptual distinction is between frequentist methods and Bayesian methods. Our main goal in this article is to explain how Bayesian methods achieve the goals of the New Statistics better than frequentist methods. The article reviews frequentist and Bayesian approaches to hypothesis testing and to estimation with confidence or credible intervals. The article also describes Bayesian approaches to meta-analysis, randomized controlled trials, and power analysis.
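
    Estimation with quantified uncertainty in this spirit is usually reported as a highest density interval (HDI). A minimal sketch of the common sorted-window computation on posterior samples (the synthetic gamma samples stand in for any MCMC output):

    ```python
    import numpy as np

    def hdi(samples, mass=0.95):
        """Highest density interval from posterior samples: the shortest
        window containing the requested posterior mass (assumes unimodality)."""
        x = np.sort(np.asarray(samples))
        n = len(x)
        m = int(np.ceil(mass * n))
        widths = x[m - 1:] - x[: n - m + 1]   # widths of all candidate windows
        i = np.argmin(widths)
        return x[i], x[i + m - 1]

    samples = np.random.default_rng(3).gamma(2.0, 1.0, 50_000)  # skewed posterior
    print(hdi(samples))   # shorter than the equal-tailed interval for skewed cases
    ```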

  14. Bayesian forecasting and uncertainty quantifying of stream flows using Metropolis–Hastings Markov Chain Monte Carlo algorithm

    DOE PAGES

    Wang, Hongrui; Wang, Cheng; Wang, Ying; ...

    2017-04-05

    This paper presents a Bayesian approach using Metropolis-Hastings Markov Chain Monte Carlo algorithm and applies this method for daily river flow rate forecast and uncertainty quantification for Zhujiachuan River using data collected from Qiaotoubao Gage Station and other 13 gage stations in Zhujiachuan watershed in China. The proposed method is also compared with the conventional maximum likelihood estimation (MLE) for parameter estimation and quantification of associated uncertainties. While the Bayesian method performs similarly in estimating the mean value of daily flow rate, it performs over the conventional MLE method on uncertainty quantification, providing relatively narrower reliable interval than the MLEmore » confidence interval and thus more precise estimation by using the related information from regional gage stations. As a result, the Bayesian MCMC method might be more favorable in the uncertainty analysis and risk management.« less

  15. Performing Contrast Analysis in Factorial Designs: From NHST to Confidence Intervals and Beyond

    PubMed Central

    Wiens, Stefan; Nilsson, Mats E.

    2016-01-01

    Because of the continuing debates about statistics, many researchers may feel confused about how to analyze and interpret data. Current guidelines in psychology advocate the use of effect sizes and confidence intervals (CIs). However, researchers may be unsure about how to extract effect sizes from factorial designs. Contrast analysis is helpful because it can be used to test specific questions of central interest in studies with factorial designs. It weighs several means and combines them into one or two sets that can be tested with t tests. The effect size produced by a contrast analysis is simply the difference between means. The CI of the effect size informs directly about direction, hypothesis exclusion, and the relevance of the effects of interest. However, any interpretation in terms of precision or likelihood requires the use of likelihood intervals or credible intervals (Bayesian). These various intervals and even a Bayesian t test can be obtained easily with free software. This tutorial reviews these methods to guide researchers in answering the following questions: When I analyze mean differences in factorial designs, where can I find the effects of central interest, and what can I learn about their effect sizes? PMID:29805179
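
    A minimal contrast-analysis sketch with hypothetical cell data from a 2x2 design, assuming equal group variances; the weights define an interaction contrast:

    ```python
    import numpy as np
    from scipy import stats

    # hypothetical 2x2 design flattened into 4 cells
    groups = [np.array(g, float) for g in (
        [5, 6, 7, 5, 6], [7, 8, 9, 8, 7], [4, 5, 4, 6, 5], [6, 7, 6, 8, 7])]
    c = np.array([1, -1, -1, 1])             # contrast weights (sum to zero)

    means = np.array([g.mean() for g in groups])
    ns = np.array([len(g) for g in groups])
    df = int(np.sum(ns - 1))
    mse = sum(((g - g.mean()) ** 2).sum() for g in groups) / df  # pooled variance

    psi = c @ means                           # effect size: difference of means
    se = np.sqrt(mse * np.sum(c ** 2 / ns))
    t = psi / se
    ci = psi + np.array([-1, 1]) * stats.t.ppf(0.975, df) * se
    print(f"psi = {psi:.2f}, t({df}) = {t:.2f}, 95% CI = {ci.round(2)}")
    ```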

  16. Author Correction: Phase-resolved X-ray polarimetry of the Crab pulsar with the AstroSat CZT Imager

    NASA Astrophysics Data System (ADS)

    Vadawale, S. V.; Chattopadhyay, T.; Mithun, N. P. S.; Rao, A. R.; Bhattacharya, D.; Vibhute, A.; Bhalerao, V. B.; Dewangan, G. C.; Misra, R.; Paul, B.; Basu, A.; Joshi, B. C.; Sreekumar, S.; Samuel, E.; Priya, P.; Vinod, P.; Seetha, S.

    2018-05-01

    In the Supplementary Information file originally published for this Letter, in Supplementary Fig. 7 the error bars for the polarization fraction were provided as confidence intervals but instead should have been Bayesian credibility intervals. This has been corrected and does not alter the conclusions of the Letter in any way.

  17. A Two-Step Bayesian Approach for Propensity Score Analysis: Simulations and Case Study.

    PubMed

    Kaplan, David; Chen, Jianshen

    2012-07-01

    A two-step Bayesian propensity score approach is introduced that incorporates prior information in the propensity score equation and outcome equation without the problems associated with simultaneous Bayesian propensity score approaches. The corresponding variance estimators are also provided. The two-step Bayesian propensity score is provided for three methods of implementation: propensity score stratification, weighting, and optimal full matching. Three simulation studies and one case study are presented to elaborate the proposed two-step Bayesian propensity score approach. Results of the simulation studies reveal that greater precision in the propensity score equation yields better recovery of the frequentist-based treatment effect. A slight advantage is shown for the Bayesian approach in small samples. Results also reveal that greater precision around the wrong treatment effect can lead to seriously distorted results. However, greater precision around the correct treatment effect parameter yields quite good results, with slight improvement seen with greater precision in the propensity score equation. A comparison of coverage rates for the conventional frequentist approach and proposed Bayesian approach is also provided. The case study reveals that credible intervals are wider than frequentist confidence intervals when priors are non-informative.

  18. A computer program for uncertainty analysis integrating regression and Bayesian methods

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Hill, Mary C.; Poeter, Eileen P.; Curtis, Gary

    2014-01-01

    This work develops a new functionality in UCODE_2014 to evaluate Bayesian credible intervals using the Markov Chain Monte Carlo (MCMC) method. The MCMC capability in UCODE_2014 is based on the FORTRAN version of the differential evolution adaptive Metropolis (DREAM) algorithm of Vrugt et al. (2009), which estimates the posterior probability density function of model parameters in high-dimensional and multimodal sampling problems. The UCODE MCMC capability provides eleven prior probability distributions and three ways to initialize the sampling process. It evaluates parametric and predictive uncertainties and it has parallel computing capability based on multiple chains to accelerate the sampling process. This paper tests and demonstrates the MCMC capability using a 10-dimensional multimodal mathematical function, a 100-dimensional Gaussian function, and a groundwater reactive transport model. The use of the MCMC capability is made straightforward and flexible by adopting the JUPITER API protocol. With the new MCMC capability, UCODE_2014 can be used to calculate three types of uncertainty intervals, which all can account for prior information: (1) linear confidence intervals which require linearity and Gaussian error assumptions and typically 10s–100s of highly parallelizable model runs after optimization, (2) nonlinear confidence intervals which require a smooth objective function surface and Gaussian observation error assumptions and typically 100s–1,000s of partially parallelizable model runs after optimization, and (3) MCMC Bayesian credible intervals which require few assumptions and commonly 10,000s–100,000s or more partially parallelizable model runs. Ready access allows users to select methods best suited to their work, and to compare methods in many circumstances.

  19. Classical and Bayesian Seismic Yield Estimation: The 1998 Indian and Pakistani Tests

    NASA Astrophysics Data System (ADS)

    Shumway, R. H.

    2001-10-01

    The nuclear tests in May, 1998, in India and Pakistan have stimulated a renewed interest in yield estimation, based on limited data from uncalibrated test sites. We study here the problem of estimating yields using classical and Bayesian methods developed by Shumway (1992), utilizing calibration data from the Semipalatinsk test site and measured magnitudes for the 1998 Indian and Pakistani tests given by Murphy (1998). Calibration is done using multivariate classical or Bayesian linear regression, depending on the availability of measured magnitude-yield data and prior information. Confidence intervals for the classical approach are derived applying an extension of Fieller's method suggested by Brown (1982). In the case where prior information is available, the posterior predictive magnitude densities are inverted to give posterior intervals for yield. Intervals obtained using the joint distribution of magnitudes are comparable to the single-magnitude estimates produced by Murphy (1998) and reinforce the conclusion that the announced yields of the Indian and Pakistani tests were too high.
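
    Fieller's method inverts a test on the linear combination a - rho*b to bound the ratio rho = a/b. A generic z-based sketch with hypothetical estimates (the paper's multivariate calibration version is more involved):

    ```python
    import numpy as np

    def fieller_interval(a, b, var_a, var_b, cov_ab=0.0, z=1.96):
        """Fieller interval for the ratio a/b of two approximately normal
        estimates: solve (a - r*b)**2 <= z**2 * var(a - r*b), a quadratic in r."""
        A = b**2 - z**2 * var_b
        B = -2 * (a * b - z**2 * cov_ab)
        C = a**2 - z**2 * var_a
        disc = B**2 - 4 * A * C
        if A <= 0 or disc < 0:
            return None            # interval is unbounded: denominator too noisy
        r1 = (-B - np.sqrt(disc)) / (2 * A)
        r2 = (-B + np.sqrt(disc)) / (2 * A)
        return min(r1, r2), max(r1, r2)

    # hypothetical numerator/denominator estimates with variances
    print(fieller_interval(a=5.1, b=2.0, var_a=0.25, var_b=0.04))
    ```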

  20. Classical and Bayesian Seismic Yield Estimation: The 1998 Indian and Pakistani Tests

    NASA Astrophysics Data System (ADS)

    Shumway, R. H.

    The nuclear tests in May, 1998, in India and Pakistan have stimulated a renewed interest in yield estimation, based on limited data from uncalibrated test sites. We study here the problem of estimating yields using classical and Bayesian methods developed by Shumway (1992), utilizing calibration data from the Semipalatinsk test site and measured magnitudes for the 1998 Indian and Pakistani tests given by Murphy (1998). Calibration is done using multivariate classical or Bayesian linear regression, depending on the availability of measured magnitude-yield data and prior information. Confidence intervals for the classical approach are derived applying an extension of Fieller's method suggested by Brown (1982). In the case where prior information is available, the posterior predictive magnitude densities are inverted to give posterior intervals for yield. Intervals obtained using the joint distribution of magnitudes are comparable to the single-magnitude estimates produced by Murphy (1998) and reinforce the conclusion that the announced yields of the Indian and Pakistani tests were too high.

  21. A hierarchical Bayesian GEV model for improving local and regional flood quantile estimates

    NASA Astrophysics Data System (ADS)

    Lima, Carlos H. R.; Lall, Upmanu; Troy, Tara; Devineni, Naresh

    2016-10-01

    We estimate local and regional Generalized Extreme Value (GEV) distribution parameters for flood frequency analysis in a multilevel, hierarchical Bayesian framework, to explicitly model and reduce uncertainties. As prior information for the model, we assume that the GEV location and scale parameters for each site come from independent log-normal distributions, whose mean parameter scales with the drainage area. From empirical and theoretical arguments, the shape parameter for each site is shrunk towards a common mean. Non-informative prior distributions are assumed for the hyperparameters and the MCMC method is used to sample from the joint posterior distribution. The model is tested using annual maximum series from 20 streamflow gauges located in an 83,000 km² flood-prone basin in Southeast Brazil. The results show a significant reduction in the uncertainty of flood quantile estimates relative to the traditional GEV model, particularly for sites with shorter records. For return periods within the range of the data (around 50 years), the Bayesian credible intervals for the flood quantiles tend to be narrower than the classical confidence limits based on the delta method. As the return period increases beyond the range of the data, the confidence limits from the delta method become unreliable and the Bayesian credible intervals provide a way to estimate satisfactory confidence bands for the flood quantiles considering parameter uncertainties and regional information. In order to evaluate the applicability of the proposed hierarchical Bayesian model for regional flood frequency analysis, we estimate flood quantiles for three randomly chosen out-of-sample sites and compare with classical estimates using the index flood method. The posterior distributions of the scaling law coefficients are used to define the predictive distributions of the GEV location and scale parameters for the out-of-sample sites given only their drainage areas, and the posterior distribution of the average shape parameter is taken as the regional predictive distribution for this parameter. While the index flood method does not provide a straightforward way to consider the uncertainties in the index flood and in the regional parameters, the results obtained here show that the proposed Bayesian method is able to produce adequate credible intervals for flood quantiles that are in accordance with empirical estimates.

  22. Bayesian characterization of uncertainty in species interaction strengths.

    PubMed

    Wolf, Christopher; Novak, Mark; Gitelman, Alix I

    2017-06-01

    Considerable effort has been devoted to the estimation of species interaction strengths. This effort has focused primarily on statistical significance testing and obtaining point estimates of parameters that contribute to interaction strength magnitudes, leaving the characterization of uncertainty associated with those estimates unconsidered. We consider a means of characterizing the uncertainty of a generalist predator's interaction strengths by formulating an observational method for estimating a predator's prey-specific per capita attack rates as a Bayesian statistical model. This formulation permits the explicit incorporation of multiple sources of uncertainty. A key insight is the informative nature of several so-called non-informative priors that have been used in modeling the sparse data typical of predator feeding surveys. We introduce to ecology a new neutral prior and provide evidence for its superior performance. We use a case study to consider the attack rates in a New Zealand intertidal whelk predator, and we illustrate not only that Bayesian point estimates can be made to correspond with those obtained by frequentist approaches, but also that estimation uncertainty as described by 95% intervals is more useful and biologically realistic using the Bayesian method. In particular, unlike in bootstrap confidence intervals, the lower bounds of the Bayesian posterior intervals for attack rates do not include zero when a predator-prey interaction is in fact observed. We conclude that the Bayesian framework provides a straightforward, probabilistic characterization of interaction strength uncertainty, enabling future considerations of both the deterministic and stochastic drivers of interaction strength and their impact on food webs.

  23. Doubly Bayesian Analysis of Confidence in Perceptual Decision-Making.

    PubMed

    Aitchison, Laurence; Bang, Dan; Bahrami, Bahador; Latham, Peter E

    2015-10-01

    Humans stand out from other animals in that they are able to explicitly report on the reliability of their internal operations. This ability, which is known as metacognition, is typically studied by asking people to report their confidence in the correctness of some decision. However, the computations underlying confidence reports remain unclear. In this paper, we present a fully Bayesian method for directly comparing models of confidence. Using a visual two-interval forced-choice task, we tested whether confidence reports reflect heuristic computations (e.g. the magnitude of sensory data) or Bayes optimal ones (i.e. how likely a decision is to be correct given the sensory data). In a standard design in which subjects were first asked to make a decision, and only then gave their confidence, subjects were mostly Bayes optimal. In contrast, in a less-commonly used design in which subjects indicated their confidence and decision simultaneously, they were roughly equally likely to use the Bayes optimal strategy or to use a heuristic but suboptimal strategy. Our results suggest that, while people's confidence reports can reflect Bayes optimal computations, even a small unusual twist or additional element of complexity can prevent optimality.

  24. Statistical inferences with jointly type-II censored samples from two Pareto distributions

    NASA Astrophysics Data System (ADS)

    Abu-Zinadah, Hanaa H.

    2017-08-01

    In several industries, products come from more than one production line, which calls for comparative life tests. Sampling from the different production lines then leads to a joint censoring scheme. In this article we consider Pareto lifetime distributions under a joint type-II censoring scheme. The maximum likelihood estimators (MLEs) and the corresponding approximate confidence intervals, as well as bootstrap confidence intervals of the model parameters, are obtained. Bayesian point estimates and credible intervals of the model parameters are also presented. A lifetime data set is analyzed for illustrative purposes, and Monte Carlo results from simulation studies are presented to assess the performance of the proposed methods.

  25. Bayesian Methods and Confidence Intervals for Automatic Target Recognition of SAR Canonical Shapes

    DTIC Science & Technology

    2014-03-27

    and DirectX [22]. The CUDA platform was developed by the NVIDIA Corporation to allow programmers access to the computational capabilities of the GPU. … were used for the intense repetitive computations. Developing CUDA software requires writing code for specialized compilers provided by NVIDIA and …

  26. Data free inference with processed data products

    DOE PAGES

    Chowdhary, K.; Najm, H. N.

    2014-07-12

    Here, we consider the context of probabilistic inference of model parameters given error bars or confidence intervals on model output values, when the data is unavailable. We introduce a class of algorithms in a Bayesian framework, relying on maximum entropy arguments and approximate Bayesian computation methods, to generate consistent data with the given summary statistics. Once we obtain consistent data sets, we pool the respective posteriors, to arrive at a single, averaged density on the parameters. This approach allows us to perform accurate forward uncertainty propagation consistent with the reported statistics.

  27. Demography and population status of polar bears in western Hudson Bay

    USGS Publications Warehouse

    Lunn, Nicholas J.; Regehr, Eric V.; Servanty, Sabrina; Converse, Sarah J.; Richardson, Evan S.; Stirling, Ian

    2013-01-01

    The 2011 abundance estimate from this analysis was 806 bears with a 95% Bayesian credible interval of 653-984. This is lower than, but broadly consistent with, the abundance estimate of 1,030 (95% confidence interval = 745-1406) from a 2011 aerial survey (Stapleton et al. 2014). The capture-recapture and aerial survey approaches have different spatial and temporal coverage of the WH subpopulation and, consequently, the effective study population considered by each approach is different.

  28. Assessing Mediational Models: Testing and Interval Estimation for Indirect Effects.

    PubMed

    Biesanz, Jeremy C; Falk, Carl F; Savalei, Victoria

    2010-08-06

    Theoretical models specifying indirect or mediated effects are common in the social sciences. An indirect effect exists when an independent variable's influence on the dependent variable is mediated through an intervening variable. Classic approaches to assessing such mediational hypotheses (Baron & Kenny, 1986; Sobel, 1982) have in recent years been supplemented by computationally intensive methods such as bootstrapping, distribution-of-the-product methods, and hierarchical Bayesian Markov chain Monte Carlo (MCMC) methods. These different approaches for assessing mediation are illustrated using data from Dunn, Biesanz, Human, and Finn (2007). However, little is known about how these methods perform relative to each other, particularly in more challenging situations, such as with data that are incomplete and/or nonnormal. This article presents an extensive Monte Carlo simulation evaluating a host of approaches for assessing mediation. We examine Type I error rates, power, and coverage. We study normal and nonnormal data as well as complete and incomplete data. In addition, we adapt a method, recently proposed in the statistical literature, that does not rely on confidence intervals (CIs) to test the null hypothesis of no indirect effect. The results suggest that the new inferential method, the partial posterior p value, slightly outperforms existing ones in terms of maintaining Type I error rates while maximizing power, especially with incomplete data. Among confidence interval approaches, the bias-corrected accelerated (BCa) bootstrapping approach often has inflated Type I error rates and inconsistent coverage and is not recommended. In contrast, the bootstrapped percentile confidence interval and the hierarchical Bayesian MCMC method perform best overall, maintaining Type I error rates, exhibiting reasonable power, and producing stable and accurate coverage rates.
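
    A percentile-bootstrap sketch for the indirect effect a*b in a single-mediator model, on simulated data; this is one of several interval methods the article compares, with illustrative coefficient values:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 100
    x = rng.normal(size=n)
    m = 0.5 * x + rng.normal(size=n)             # mediator model: a = 0.5
    y = 0.4 * m + 0.1 * x + rng.normal(size=n)   # outcome model: b = 0.4

    def indirect(x, m, y):
        a = np.polyfit(x, m, 1)[0]               # slope of m on x
        b = np.linalg.lstsq(np.column_stack([m, x, np.ones_like(x)]),
                            y, rcond=None)[0][0] # slope of y on m, given x
        return a * b

    boot = []
    for _ in range(5_000):
        i = rng.integers(0, n, n)                # resample cases with replacement
        boot.append(indirect(x[i], m[i], y[i]))
    print(np.percentile(boot, [2.5, 97.5]))      # percentile bootstrap CI
    ```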

  29. A Bayesian Analysis of a Randomized Clinical Trial Comparing Antimetabolite Therapies for Non-Infectious Uveitis.

    PubMed

    Browne, Erica N; Rathinam, Sivakumar R; Kanakath, Anuradha; Thundikandy, Radhika; Babu, Manohar; Lietman, Thomas M; Acharya, Nisha R

    2017-02-01

    To conduct a Bayesian analysis of a randomized clinical trial (RCT) for non-infectious uveitis using expert opinion as a subjective prior belief. An RCT was conducted to determine which antimetabolite, methotrexate or mycophenolate mofetil, is more effective as an initial corticosteroid-sparing agent for the treatment of intermediate, posterior, and pan-uveitis. Before the release of trial results, expert opinion on the relative effectiveness of these two medications was collected via online survey. Members of the American Uveitis Society executive committee were invited to provide an estimate for the relative decrease in efficacy with a 95% credible interval (CrI). A prior probability distribution was created from experts' estimates. A Bayesian analysis was performed using the constructed expert prior probability distribution and the trial's primary outcome. A total of 11 of the 12 invited uveitis specialists provided estimates. Eight of 11 experts (73%) believed mycophenolate mofetil is more effective. The group prior belief was that the odds of treatment success for patients taking mycophenolate mofetil were 1.4-fold the odds of those taking methotrexate (95% CrI 0.03-45.0). The odds ratio of treatment success with mycophenolate mofetil compared to methotrexate was 0.4 from the RCT (95% confidence interval 0.1-1.2) and 0.7 (95% CrI 0.2-1.7) from the Bayesian analysis. A Bayesian analysis combining expert belief with the trial's result did not indicate preference for one drug. However, the wide credible interval leaves open the possibility of a substantial treatment effect. This suggests the clinical equipoise necessary to allow a larger, more definitive RCT.
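
    The prior-and-trial combination can be approximated in a few lines with a normal-normal model on the log odds-ratio scale, with moments read off the reported intervals. This is a back-of-envelope sketch, not the authors' exact analysis (which used the full trial likelihood), so its posterior (about 0.46) differs from the published 0.7:

    ```python
    import numpy as np

    def normal_from_interval(point, lo, hi):
        # approximate a normal on the log scale from a ratio and its 95% interval
        return np.log(point), (np.log(hi) - np.log(lo)) / (2 * 1.96)

    m0, s0 = normal_from_interval(1.4, 0.03, 45.0)   # expert prior on the OR
    m1, s1 = normal_from_interval(0.4, 0.1, 1.2)     # trial result (likelihood)

    w0, w1 = 1 / s0**2, 1 / s1**2                    # precision weighting
    m = (w0 * m0 + w1 * m1) / (w0 + w1)
    s = np.sqrt(1 / (w0 + w1))
    # posterior OR and 95% interval from this rough approximation
    print(np.exp(m), np.exp([m - 1.96 * s, m + 1.96 * s]))
    ```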

  30. Hypothesis Testing, "p" Values, Confidence Intervals, Measures of Effect Size, and Bayesian Methods in Light of Modern Robust Techniques

    ERIC Educational Resources Information Center

    Wilcox, Rand R.; Serang, Sarfaraz

    2017-01-01

    The article provides perspectives on p values, null hypothesis testing, and alternative techniques in light of modern robust statistical methods. Null hypothesis testing and "p" values can provide useful information provided they are interpreted in a sound manner, which includes taking into account insights and advances that have…

  31. A comparison of confidence interval methods for the concordance correlation coefficient and intraclass correlation coefficient with small number of raters.

    PubMed

    Feng, Dai; Svetnik, Vladimir; Coimbra, Alexandre; Baumgartner, Richard

    2014-01-01

    The intraclass correlation coefficient (ICC) with fixed raters or, equivalently, the concordance correlation coefficient (CCC) for continuous outcomes is a widely accepted aggregate index of agreement in settings with a small number of raters. Quantifying the precision of the CCC by constructing its confidence interval (CI) is important in early drug development applications, in particular in qualification of biomarker platforms. In recent years, several new methods have been proposed for construction of CIs for the CCC, but their comprehensive comparison has not been attempted. The methods consisted of the delta method and jackknifing, each with and without Fisher's Z-transformation, and Bayesian methods with vague priors. In this study, we carried out a simulation study, with data simulated from a multivariate normal as well as a heavier-tailed distribution (t-distribution with 5 degrees of freedom), to compare the state-of-the-art methods for assigning a CI to the CCC. When the data are normally distributed, jackknifing with Fisher's Z-transformation (JZ) tended to provide superior coverage, and the difference between it and the closest competitor, the Bayesian method with the Jeffreys prior, was in general minimal. For the nonnormal data, the jackknife methods, especially the JZ method, provided the coverage probabilities closest to nominal, in contrast to the others, which yielded overly liberal coverage. Approaches based upon the delta method and the Bayesian method with conjugate prior generally provided slightly narrower intervals and larger lower bounds than the others, though this was offset by their poor coverage. Finally, we illustrated the utility of the CIs for the CCC in an example of a wake after sleep onset (WASO) biomarker, which is frequently used in clinical sleep studies of drugs for treatment of insomnia.
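
    A sketch of the jackknife-with-Fisher-Z (JZ) interval for the two-rater CCC on synthetic data; the study's settings are more general, and the helper names are illustrative:

    ```python
    import numpy as np
    from scipy import stats

    def ccc(x, y):
        """Concordance correlation coefficient for two raters."""
        s = np.cov(x, y, bias=True)
        return 2 * s[0, 1] / (s[0, 0] + s[1, 1] + (x.mean() - y.mean()) ** 2)

    def ccc_ci_jz(x, y, level=0.95):
        n = len(x)
        z_full = np.arctanh(ccc(x, y))
        # leave-one-out estimates on the Fisher Z scale
        loo = np.array([np.arctanh(ccc(np.delete(x, i), np.delete(y, i)))
                        for i in range(n)])
        pseudo = n * z_full - (n - 1) * loo          # jackknife pseudo-values
        m, se = pseudo.mean(), pseudo.std(ddof=1) / np.sqrt(n)
        t = stats.t.ppf(0.5 + level / 2, df=n - 1)
        return np.tanh(m - t * se), np.tanh(m + t * se)   # back-transform

    rng = np.random.default_rng(5)
    x = rng.normal(size=30)
    y = x + rng.normal(scale=0.5, size=30)
    print(ccc_ci_jz(x, y))
    ```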

  32. Bayesian models for comparative analysis integrating phylogenetic uncertainty.

    PubMed

    de Villemereuil, Pierre; Wells, Jessie A; Edwards, Robert D; Blomberg, Simon P

    2012-06-28

    Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible general purpose tool for phylogenetic comparative analyses, particularly for modelling in the face of phylogenetic uncertainty and accounting for measurement error or individual variation in explanatory variables. Code for all models is provided in the BUGS model description language.

  33. Bayesian models for comparative analysis integrating phylogenetic uncertainty

    PubMed Central

    2012-01-01

    Background Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. Methods We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. Results We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Conclusions Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible general purpose tool for phylogenetic comparative analyses, particularly for modelling in the face of phylogenetic uncertainty and accounting for measurement error or individual variation in explanatory variables. Code for all models is provided in the BUGS model description language. PMID:22741602

  34. Pocket Handbook on Reliability

    DTIC Science & Technology

    1975-09-01

    exponential distributions, Weibull distribution, estimating reliability, confidence intervals, reliability growth, OC curves, Bayesian analysis. … an introduction for those not familiar with reliability and a good refresher for those who are currently working in the area. … includes one or both of the following objectives: a) prediction of the current system reliability, b) projection of the system reliability for some future …

  35. Confidence Intervals for the Between-Study Variance in Random Effects Meta-Analysis Using Generalised Cochran Heterogeneity Statistics

    ERIC Educational Resources Information Center

    Jackson, Dan

    2013-01-01

    Statistical inference is problematic in the common situation in meta-analysis where the random effects model is fitted to just a handful of studies. In particular, the asymptotic theory of maximum likelihood provides a poor approximation, and Bayesian methods are sensitive to the prior specification. Hence, less efficient, but easily computed and…

  16. "Magnitude-based inference": a statistical review.

    PubMed

    Welsh, Alan H; Knight, Emma J

    2015-04-01

    We consider "magnitude-based inference" and its interpretation by examining in detail its use in the problem of comparing two means. We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how "magnitude-based inference" is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. We show that "magnitude-based inference" is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with "magnitude-based inference" and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using "magnitude-based inference," a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis.

  37. A Bayesian Analysis of a Randomized Clinical Trial Comparing Antimetabolite Therapies for Non-Infectious Uveitis

    PubMed Central

    Browne, Erica N; Rathinam, Sivakumar R; Kanakath, Anuradha; Thundikandy, Radhika; Babu, Manohar; Lietman, Thomas M; Acharya, Nisha R

    2017-01-01

    Purpose To conduct a Bayesian analysis of a randomized clinical trial (RCT) for non-infectious uveitis using expert opinion as a subjective prior belief. Methods An RCT was conducted to determine which antimetabolite, methotrexate or mycophenolate mofetil, is more effective as an initial corticosteroid-sparing agent for the treatment of intermediate, posterior, and pan-uveitis. Before the release of trial results, expert opinion on the relative effectiveness of these two medications was collected via online survey. Members of the American Uveitis Society executive committee were invited to provide an estimate for the relative decrease in efficacy with a 95% credible interval (CrI). A prior probability distribution was created from experts’ estimates. A Bayesian analysis was performed using the constructed expert prior probability distribution and the trial’s primary outcome. Results 11 of 12 invited uveitis specialists provided estimates. Eight of 11 experts (73%) believed mycophenolate mofetil is more effective. The group prior belief was that the odds of treatment success for patients taking mycophenolate mofetil were 1.4-fold the odds of those taking methotrexate (95% CrI 0.03–45.0). The odds ratio of treatment success with mycophenolate mofetil compared to methotrexate was 0.4 from the RCT (95% confidence interval 0.1–1.2) and 0.7 (95% CrI 0.2–1.7) from the Bayesian analysis. Conclusions A Bayesian analysis combining expert belief with the trial’s result did not indicate preference for one drug. However, the wide credible interval leaves open the possibility of a substantial treatment effect. This suggests the clinical equipoise necessary to allow a larger, more definitive RCT. PMID:27982726

  38. Hypertensive disorders during pregnancy and risk of type 2 diabetes in later life: a systematic review and meta-analysis.

    PubMed

    Wang, Zengfang; Wang, Zengyan; Wang, Luang; Qiu, Mingyue; Wang, Yangang; Hou, Xu; Guo, Zhong; Wang, Bin

    2017-03-01

    Many studies have assessed the association between hypertensive disorders during pregnancy and risk of type 2 diabetes mellitus in later life, but contradictory findings were reported. A systematic review and meta-analysis was carried out to elucidate type 2 diabetes mellitus risk in women with hypertensive disorders during pregnancy. PubMed, Embase, and Web of Science were searched for cohort or case-control studies on the association between hypertensive disorders during pregnancy and subsequent type 2 diabetes mellitus. A random-effects model was used to pool risk estimates. Bayesian meta-analysis was carried out to further estimate the type 2 diabetes mellitus risk associated with hypertensive disorders during pregnancy. Seventeen cohort or prospective matched case-control studies were finally included. These 17 studies involved 2,984,634 women and 46,732 type 2 diabetes mellitus cases. Overall, hypertensive disorders during pregnancy were significantly correlated with type 2 diabetes mellitus risk (relative risk = 1.56, 95% confidence interval 1.21-2.01, P = 0.001). Preeclampsia was significantly and independently correlated with type 2 diabetes mellitus risk (relative risk = 2.25, 95% confidence interval 1.73-2.90, P < 0.001). In addition, gestational hypertension was also significantly and independently correlated with subsequent type 2 diabetes mellitus risk (relative risk = 2.06, 95% confidence interval 1.57-2.69, P < 0.001). The pooled estimates were not significantly altered in the subgroup analyses of studies on preeclampsia or gestational hypertension. Bayesian meta-analysis showed that the relative risks of type 2 diabetes mellitus for individuals with hypertensive disorders during pregnancy, preeclampsia, and gestational hypertension were 1.59 (95% credibility interval: 1.11-2.32), 2.27 (95% credibility interval: 1.67-2.97), and 2.06 (95% credibility interval: 1.41-2.84), respectively. Publication bias was not evident in the meta-analysis. Preeclampsia and gestational hypertension are independently associated with substantially elevated risk of type 2 diabetes mellitus in later life.
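
    A sketch of the random-effects pooling used in such meta-analyses (DerSimonian-Laird), reconstructing log-scale variances from hypothetical study-level relative risks and CIs; a Bayesian variant would instead put priors on the pooled effect and heterogeneity:

    ```python
    import numpy as np

    # hypothetical per-study relative risks with 95% CIs (not the paper's data)
    rr = np.array([1.4, 1.9, 1.2, 2.3, 1.6])
    lo = np.array([0.9, 1.2, 0.7, 1.3, 1.1])
    hi = np.array([2.2, 3.0, 2.1, 4.1, 2.3])

    y = np.log(rr)
    s2 = ((np.log(hi) - np.log(lo)) / (2 * 1.96)) ** 2   # variance from CI width

    w = 1 / s2                                           # fixed-effect weights
    q = np.sum(w * (y - np.sum(w * y) / w.sum()) ** 2)   # Cochran's Q
    c = w.sum() - np.sum(w**2) / w.sum()
    tau2 = max(0.0, (q - (len(y) - 1)) / c)              # DerSimonian-Laird tau^2

    wr = 1 / (s2 + tau2)                                 # random-effects weights
    mu = np.sum(wr * y) / wr.sum()
    se = np.sqrt(1 / wr.sum())
    print(np.exp(mu), np.exp([mu - 1.96 * se, mu + 1.96 * se]))  # pooled RR, CI
    ```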

  39. A multiscale Bayesian data integration approach for mapping air dose rates around the Fukushima Daiichi Nuclear Power Plant.

    PubMed

    Wainwright, Haruko M; Seki, Akiyuki; Chen, Jinsong; Saito, Kimiaki

    2017-02-01

    This paper presents a multiscale data integration method to estimate the spatial distribution of air dose rates in the regional scale around the Fukushima Daiichi Nuclear Power Plant. We integrate various types of datasets, such as ground-based walk and car surveys, and airborne surveys, all of which have different scales, resolutions, spatial coverage, and accuracy. This method is based on geostatistics to represent spatial heterogeneous structures, and also on Bayesian hierarchical models to integrate multiscale, multi-type datasets in a consistent manner. The Bayesian method allows us to quantify the uncertainty in the estimates, and to provide the confidence intervals that are critical for robust decision-making. Although this approach is primarily data-driven, it has great flexibility to include mechanistic models for representing radiation transport or other complex correlations. We demonstrate our approach using three types of datasets collected at the same time over Fukushima City in Japan: (1) coarse-resolution airborne surveys covering the entire area, (2) car surveys along major roads, and (3) walk surveys in multiple neighborhoods. Results show that the method can successfully integrate three types of datasets and create an integrated map (including the confidence intervals) of air dose rates over the domain in high resolution. Moreover, this study provides us with various insights into the characteristics of each dataset, as well as radiocaesium distribution. In particular, the urban areas show high heterogeneity in the contaminant distribution due to human activities as well as large discrepancy among different surveys due to such heterogeneity.

  20. “Magnitude-based Inference”: A Statistical Review

    PubMed Central

    Welsh, Alan H.; Knight, Emma J.

    2015-01-01

    ABSTRACT Purpose We consider “magnitude-based inference” and its interpretation by examining in detail its use in the problem of comparing two means. Methods We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how “magnitude-based inference” is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. Results and Conclusions We show that “magnitude-based inference” is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with “magnitude-based inference” and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using “magnitude-based inference,” a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis. PMID:25051387
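
    The review's reading of the "beneficial/trivial/harmful" probabilities as approximate Bayesian tail areas under a flat prior can be reproduced in a few lines; the effect size, standard error, and smallest-worthwhile-effect threshold below are illustrative, not taken from the paper.

    ```python
    from scipy.stats import norm

    # Estimated difference between two group means, its standard error,
    # and a smallest-worthwhile-effect threshold (all invented).
    diff, se, threshold = 0.9, 0.5, 0.2

    # Under a flat prior, the approximate posterior for the true difference
    # is Normal(diff, se^2); the probabilities that magnitude-based
    # inference reports are its tail areas beyond the thresholds.
    p_beneficial = 1 - norm.cdf(threshold, loc=diff, scale=se)
    p_harmful    = norm.cdf(-threshold, loc=diff, scale=se)
    p_trivial    = 1 - p_beneficial - p_harmful

    print(f"P(beneficial) = {p_beneficial:.2f}, "
          f"P(trivial) = {p_trivial:.2f}, P(harmful) = {p_harmful:.2f}")

    # Equivalently, each tail probability is one minus the P value of a
    # one-sided test against a shifted null, which is the review's point.
    ```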

  1. Bayesian tomography and integrated data analysis in fusion diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Dong, E-mail: lid@swip.ac.cn; Dong, Y. B.; Deng, Wei

    2016-11-15

    In this article, a Bayesian tomography method using a non-stationary Gaussian process prior is introduced. The Bayesian formalism allows quantities that bear uncertainty to be expressed in probabilistic form, so that the uncertainty of a final solution can be read off from the confidence interval of its posterior probability. Moreover, a consistency check of that solution can be performed by checking whether the misfits between predicted and measured data fall reasonably within the assumed data error. In particular, the accuracy of reconstructions is significantly improved by using the non-stationary Gaussian process, which can adapt to the varying smoothness of the emission distribution. The implementation of this method for a soft X-ray diagnostic on HL-2A has been used to explore relevant physics in equilibrium and MHD instability modes. This project is carried out within a large-scale inference framework, aiming at an integrated analysis of heterogeneous diagnostics.
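
    As a rough illustration of how a Gaussian-process prior yields a posterior with credible intervals, here is a sketch using a plain stationary squared-exponential kernel on synthetic 1-D data; the paper's non-stationary kernel and tomographic forward model are beyond this sketch, and all numbers are invented.

    ```python
    import numpy as np

    def sq_exp_kernel(xa, xb, amp=1.0, length=0.5):
        """Stationary squared-exponential covariance (the paper adapts a
        non-stationary variant; this sketch keeps the simple form)."""
        d = xa[:, None] - xb[None, :]
        return amp**2 * np.exp(-0.5 * (d / length)**2)

    rng = np.random.default_rng(0)
    x_obs = np.sort(rng.uniform(0, 2, 15))    # synthetic measurements
    y_obs = np.sin(3 * x_obs) + 0.1 * rng.standard_normal(15)
    x_new = np.linspace(0, 2, 200)
    noise = 0.1**2

    # GP regression: posterior mean and variance at the new points.
    K = sq_exp_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = sq_exp_kernel(x_new, x_obs)
    Kss = sq_exp_kernel(x_new, x_new)
    alpha = np.linalg.solve(K, y_obs)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    sd = np.sqrt(np.clip(np.diag(cov), 0, None))

    # 95% credible band: the uncertainty statement the abstract refers to.
    lower, upper = mean - 1.96 * sd, mean + 1.96 * sd
    ```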

  2. Planck intermediate results. XVI. Profile likelihoods for cosmological parameters

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Battaner, E.; Benabed, K.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bonaldi, A.; Bond, J. R.; Bouchet, F. R.; Burigana, C.; Cardoso, J.-F.; Catalano, A.; Chamballu, A.; Chiang, H. C.; Christensen, P. R.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Couchot, F.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Dupac, X.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Frailis, M.; Franceschi, E.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giraud-Héraud, Y.; González-Nuevo, J.; Górski, K. M.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Harrison, D. L.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lawrence, C. R.; Leonardi, R.; Liddle, A.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maffei, B.; Maino, D.; Mandolesi, N.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Mazzotta, P.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski∗, S.; Pointecouteau, E.; Polenta, G.; Popa, L.; Pratt, G. W.; Puget, J.-L.; Rachen, J. P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rouillé d'Orfeuil, B.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Savelainen, M.; Savini, G.; Spencer, L. D.; Spinelli, M.; Starck, J.-L.; Sureau, F.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; White, M.; Yvon, D.; Zacchei, A.; Zonca, A.

    2014-06-01

    We explore the 2013 Planck likelihood function with a high-precision multi-dimensional minimizer (Minuit). This allows a refinement of the ΛCDM best-fit solution with respect to previously-released results, and the construction of frequentist confidence intervals using profile likelihoods. The agreement with the cosmological results from the Bayesian framework is excellent, demonstrating the robustness of the Planck results to the statistical methodology. We investigate the inclusion of neutrino masses, where more significant differences may appear due to the non-Gaussian nature of the posterior mass distribution. By applying the Feldman-Cousins prescription, we again obtain results very similar to those of the Bayesian methodology. However, the profile-likelihood analysis of the cosmic microwave background (CMB) combination (Planck+WP+highL) reveals a minimum well within the unphysical negative-mass region. We show that inclusion of the Planck CMB-lensing information regularizes this issue, and provide a robust frequentist upper limit ∑ mν ≤ 0.26 eV (95% confidence) from the CMB+lensing+BAO data combination.
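
    A toy version of the profile-likelihood construction used here: for a normal mean with the variance profiled out analytically, the 68% interval is the set of values where the profile deviance stays below 1 (3.84 would give 95%). The data are synthetic; the Planck likelihood is, of course, vastly more complex.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    rng = np.random.default_rng(1)
    data = rng.normal(2.0, 1.0, 50)   # toy data standing in for a likelihood
    n = len(data)

    def profile_deviance(mu):
        """-2 log profile likelihood ratio for mu, with sigma maximized
        out analytically; equals zero at the MLE."""
        s2_mu = np.mean((data - mu)**2)   # sigma^2 maximizing L at this mu
        s2_hat = np.var(data)             # global MLE of sigma^2
        return n * np.log(s2_mu / s2_hat)

    mu_hat = data.mean()
    # 68% CI: where the profile deviance crosses 1 on each side of the MLE.
    lo = brentq(lambda m: profile_deviance(m) - 1.0, mu_hat - 5, mu_hat)
    hi = brentq(lambda m: profile_deviance(m) - 1.0, mu_hat, mu_hat + 5)
    print(f"mu_hat = {mu_hat:.3f}, 68% profile CI = ({lo:.3f}, {hi:.3f})")
    ```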

  3. New-generation stents compared with coronary bypass surgery for unprotected left main disease: A word of caution.

    PubMed

    Benedetto, Umberto; Taggart, David P; Sousa-Uva, Miguel; Biondi-Zoccai, Giuseppe; Di Franco, Antonino; Ohmes, Lucas B; Rahouma, Mohamed; Kamel, Mohamed; Caputo, Massimo; Girardi, Leonard N; Angelini, Gianni D; Gaudino, Mario

    2018-05-01

    With the advent of bare metal stents and drug-eluting stents, percutaneous coronary intervention has emerged as an alternative to coronary artery bypass grafting surgery for unprotected left main disease. However, whether the evolution of stent technology has translated into better results after percutaneous coronary intervention remains unclear. We aimed to compare coronary artery bypass grafting with stents of different generations for left main disease by performing a Bayesian network meta-analysis of available randomized controlled trials. All randomized controlled trials with at least 1 arm randomized to percutaneous coronary intervention with stents or coronary artery bypass grafting for left main disease were included. Bare metal stents and drug-eluting stents of the first and second generations were compared with coronary artery bypass grafting. Poisson methods and a Bayesian framework were used to compute the head-to-head incidence rate ratios and 95% credible intervals. Primary end points were the composite of death/myocardial infarction/stroke and repeat revascularization. Nine randomized controlled trials were included in the final analysis. Six trials compared percutaneous coronary intervention with coronary artery bypass grafting (n = 4654), and 3 trials compared different types of stents (n = 1360). Follow-up ranged from 6 months to 5 years. Second-generation drug-eluting stents (incidence rate ratio, 1.3; 95% credible interval, 1.1-1.6), but not bare metal stents (incidence rate ratio, 0.63; 95% credible interval, 0.27-1.4) or first-generation drug-eluting stents (incidence rate ratio, 0.85; 95% credible interval, 0.65-1.1), were associated with a significantly increased risk of death/myocardial infarction/stroke when compared with coronary artery bypass grafting. When compared with coronary artery bypass grafting, the highest risk of repeat revascularization was observed for bare metal stents (incidence rate ratio, 5.1; 95% credible interval, 2.1-14), whereas first-generation drug-eluting stents (incidence rate ratio, 1.8; 95% credible interval, 1.4-2.4) and second-generation drug-eluting stents (incidence rate ratio, 1.8; 95% credible interval, 1.4-2.4) were comparable. The introduction of new-generation drug-eluting stents did not translate into better outcomes for percutaneous coronary intervention when compared with coronary artery bypass grafting. Copyright © 2017 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.

  4. Bayesian analyses of time-interval data for environmental radiation monitoring.

    PubMed

    Luo, Peng; Sharp, Julia L; DeVol, Timothy A

    2013-01-01

    Time-interval (time difference between two consecutive pulses) analysis based on the principles of Bayesian inference was investigated for online radiation monitoring. Using experimental and simulated data, Bayesian analysis of time-interval data [Bayesian (ti)] was compared with Bayesian and conventional frequentist analyses of counts in a fixed count time [Bayesian (cnt) and the single interval test (SIT), respectively]. The performances of the three methods were compared in terms of average run length (ARL) and detection probability for several simulated detection scenarios. Experimental data were acquired with a DGF-4C system in list mode. Simulated data were obtained using Monte Carlo techniques to obtain a random sampling of the Poisson distribution. All statistical algorithms were developed using the R Project for Statistical Computing. Bayesian analysis of time-interval information provided a detection probability similar to that of Bayesian analysis of count information, but the authors were able to make a decision with fewer pulses at relatively higher radiation levels. In addition, for cases in which the source is present only briefly (for less than the count time), time-interval information is more sensitive to a change than count information, because the source counts are otherwise averaged with the background counts over the entire count time. The relationships of the source time, change points, and modifications to the Bayesian approach for increasing detection probability are presented.
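
    One reason time-interval data lend themselves to fast sequential Bayesian updating is conjugacy: for exponential inter-arrival times, a Gamma prior on the count rate yields a Gamma posterior in closed form. A sketch with invented numbers (the paper's actual decision procedure is more elaborate):

    ```python
    from scipy import stats

    # n pulses observed with total elapsed inter-arrival time T seconds.
    n, T = 25, 4.2

    # Gamma(a, b) prior on the count rate (shape a, rate b); weak prior.
    a, b = 1.0, 0.1

    # The exponential inter-arrival likelihood is conjugate to the Gamma
    # prior, so the posterior is Gamma(a + n, b + T) in closed form.
    post = stats.gamma(a + n, scale=1.0 / (b + T))
    lo, hi = post.interval(0.95)
    print(f"rate: mean {post.mean():.1f} cps, "
          f"95% credible interval ({lo:.1f}, {hi:.1f})")

    # A monitoring rule can re-evaluate, e.g., P(rate > background) after
    # every pulse, so alarms need not wait for a fixed count time.
    ```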

  5. RadVel: General toolkit for modeling Radial Velocities

    NASA Astrophysics Data System (ADS)

    Fulton, Benjamin J.; Petigura, Erik A.; Blunt, Sarah; Sinukoff, Evan

    2018-01-01

    RadVel models Keplerian orbits in radial velocity (RV) time series. The code is written in Python with a fast Kepler's-equation solver written in C. It provides a framework for fitting RVs using maximum a posteriori optimization and for computing robust confidence intervals by sampling the posterior probability density via Markov chain Monte Carlo (MCMC). RadVel can perform Bayesian model comparison and produces publication-quality plots and LaTeX tables.

  6. Bayesian statistical inference enhances the interpretation of contemporary randomized controlled trials.

    PubMed

    Wijeysundera, Duminda N; Austin, Peter C; Hux, Janet E; Beattie, W Scott; Laupacis, Andreas

    2009-01-01

    Randomized trials generally use "frequentist" statistics based on P-values and 95% confidence intervals. Frequentist methods have limitations that might be overcome, in part, by Bayesian inference. To illustrate these advantages, we re-analyzed randomized trials published in four general medical journals during 2004. We used Medline to identify randomized superiority trials with two parallel arms, individual-level randomization and dichotomous or time-to-event primary outcomes. Studies with P<0.05 in favor of the intervention were deemed "positive"; otherwise, they were "negative." We used several prior distributions and exact conjugate analyses to calculate Bayesian posterior probabilities for clinically relevant effects. Of 88 included studies, 39 were positive using a frequentist analysis. Although the Bayesian posterior probabilities of any benefit (relative risk or hazard ratio<1) were high in positive studies, these probabilities were lower and variable for larger benefits. The positive studies had only moderate probabilities for exceeding the effects that were assumed for calculating the sample size. By comparison, there were moderate probabilities of any benefit in negative studies. Bayesian and frequentist analyses complement each other when interpreting the results of randomized trials. Future reports of randomized trials should include both.
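
    A sketch of the kind of exact conjugate analysis the authors describe: a normal prior on the log relative risk updated by a normal likelihood, giving posterior probabilities of "any benefit" and of a larger, clinically relevant benefit. All numbers are illustrative, not from a specific trial in the review.

    ```python
    import numpy as np
    from scipy.stats import norm

    # Trial summary: estimated log relative risk and its standard error.
    log_rr_hat, se = np.log(0.85), 0.09

    # Skeptical normal prior on the log RR, centered on "no effect".
    prior_mu, prior_sd = 0.0, 0.20

    # Exact conjugate normal update (precision weighting).
    w_prior, w_data = 1 / prior_sd**2, 1 / se**2
    post_var = 1 / (w_prior + w_data)
    post_mu = post_var * (w_prior * prior_mu + w_data * log_rr_hat)
    post_sd = np.sqrt(post_var)

    # Posterior probabilities for clinically framed hypotheses.
    p_any_benefit   = norm.cdf(0.0, post_mu, post_sd)           # RR < 1
    p_large_benefit = norm.cdf(np.log(0.8), post_mu, post_sd)   # RR < 0.8
    print(f"P(RR<1) = {p_any_benefit:.3f}, P(RR<0.8) = {p_large_benefit:.3f}")
    ```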

  7. Proximity to mining industry and respiratory diseases in children in a community in Northern Chile: A cross-sectional study.

    PubMed

    Herrera, Ronald; Radon, Katja; von Ehrenstein, Ondine S; Cifuentes, Stella; Muñoz, Daniel Moraga; Berger, Ursula

    2016-06-07

    In a community in northern Chile, explosive procedures are used by two local industrial mines (gold, copper). We hypothesized that the prevalence of asthma and rhinoconjunctivitis in the community may be associated with air pollution emissions generated by the mines. A cross-sectional study of 288 children (aged 6-15 years) was conducted in the community using a validated questionnaire in 2009. The proximity between each child's place of residence and the mines was assessed as an indicator of exposure to mining-related air pollutants. Logistic regression, semiparametric models, and spatial Bayesian models with a parametric form for distance were used to calculate odds ratios and 95 % confidence intervals. The prevalence of asthma and rhinoconjunctivitis was 24 and 34 %, respectively. For rhinoconjunctivitis, the odds ratio for the average distance between the two mines and the child's residence was 1.72 (95 % confidence interval 1.00, 3.04). The spatial Bayesian models suggested a considerable increase in the risk of respiratory diseases closer to the mines, with the health impact considered negligible only beyond a distance of about 1800 m. The findings indicate that air pollution emissions related to industrial gold or copper mines, mainly occurring in rural Chilean communities, might increase the risk of respiratory diseases in children.

  8. A solution to the static frame validation challenge problem using Bayesian model selection

    DOE PAGES

    Grigoriu, M. D.; Field, R. V.

    2007-12-23

    Within this paper, we provide a solution to the static frame validation challenge problem (see this issue) in a manner that is consistent with the guidelines provided by the Validation Challenge Workshop tasking document. The static frame problem is constructed such that variability in material properties is known to be the only source of uncertainty in the system description, but there is ignorance about the type of model that best describes this variability. Hence both types of uncertainty, aleatoric and epistemic, are present and must be addressed. Our approach is to consider a collection of competing probabilistic models for the material properties and calibrate these models to the information provided; models of different levels of complexity and numerical efficiency are included in the analysis. A Bayesian formulation is used to select the optimal model from the collection, which is then used for the regulatory assessment. Lastly, Bayesian credible intervals are used to provide a measure of confidence in our regulatory assessment.

  9. BELM: Bayesian extreme learning machine.

    PubMed

    Soria-Olivas, Emilio; Gómez-Sanchis, Juan; Martín, José D; Vila-Francés, Joan; Martínez, Marcelino; Magdalena, José R; Serrano, Antonio J

    2011-03-01

    The theory of the extreme learning machine (ELM) has become very popular in the last few years. ELM is a new approach for learning the parameters of the hidden layers of a multilayer neural network (such as the multilayer perceptron or the radial basis function neural network). Its main advantage is the lower computational cost, which is especially relevant when dealing with many patterns defined in a high-dimensional space. This brief proposes a Bayesian approach to ELM, which presents some advantages over other approaches: it allows the introduction of a priori knowledge; it obtains confidence intervals (CIs) without the need for computationally intensive methods such as the bootstrap; and it presents high generalization capabilities. Bayesian ELM is benchmarked against classical ELM on several artificial and real datasets that are widely used for the evaluation of machine learning algorithms. The results show that the proposed approach produces competitive accuracy with some additional advantages, namely, automatic production of CIs, reduced probability of model overfitting, and use of a priori knowledge.
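
    The essence of a Bayesian ELM, under the usual Gaussian assumptions, is Bayesian linear regression on a random, untrained hidden layer; a self-contained sketch on synthetic data, with the precision hyperparameters alpha and beta fixed by hand rather than learned as the paper would:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, (200, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

    # ELM step: a random, untrained hidden layer of tanh units.
    n_hidden = 50
    W = rng.standard_normal((1, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                  # hidden-layer design matrix

    # Bayesian step: Gaussian prior on output weights (precision alpha)
    # and Gaussian noise (precision beta) give a closed-form posterior.
    alpha, beta = 1e-2, 100.0
    A = alpha * np.eye(n_hidden) + beta * H.T @ H   # posterior precision
    S = np.linalg.inv(A)                            # posterior covariance
    m = beta * S @ H.T @ y                          # posterior mean weights

    # Predictive mean and variance at new inputs: CIs come for free,
    # with no bootstrap, which is the advantage the abstract cites.
    X_new = np.linspace(-3, 3, 100)[:, None]
    H_new = np.tanh(X_new @ W + b)
    y_mean = H_new @ m
    y_var = 1.0 / beta + np.sum(H_new @ S * H_new, axis=1)
    ci_low = y_mean - 1.96 * np.sqrt(y_var)
    ci_high = y_mean + 1.96 * np.sqrt(y_var)
    ```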

  10. Bayes in biological anthropology.

    PubMed

    Konigsberg, Lyle W; Frankenberg, Susan R

    2013-12-01

    In this article, we both contend and illustrate that biological anthropologists, particularly in the Americas, often think like Bayesians but act like frequentists when it comes to analyzing a wide variety of data. In other words, while our research goals and perspectives are rooted in probabilistic thinking and rest on prior knowledge, we often proceed to use statistical hypothesis tests and confidence interval methods unrelated (or tenuously related) to the research questions of interest. We advocate for applying Bayesian analyses to a number of different bioanthropological questions, especially since many of the programming and computational challenges to doing so have been overcome in the past two decades. To facilitate such applications, this article explains Bayesian principles and concepts, and provides concrete examples of Bayesian computer simulations and statistics that address questions relevant to biological anthropology, focusing particularly on bioarchaeology and forensic anthropology. It also reviews the use of Bayesian methods and inference within the discipline to date. This article is intended to act as a primer to Bayesian methods and inference in biological anthropology, explaining the relationships of various methods to likelihoods or probabilities and to classical statistical models. Our contention is not that traditional frequentist statistics should be rejected outright, but that there are many situations where biological anthropology is better served by taking a Bayesian approach. To this end, it is hoped that the examples provided in this article will assist researchers in choosing from among the broad array of statistical methods currently available. Copyright © 2013 Wiley Periodicals, Inc.

  11. Data Analysis Techniques for Physical Scientists

    NASA Astrophysics Data System (ADS)

    Pruneau, Claude A.

    2017-10-01

    Preface; How to read this book; 1. The scientific method; Part I. Foundation in Probability and Statistics: 2. Probability; 3. Probability models; 4. Classical inference I: estimators; 5. Classical inference II: optimization; 6. Classical inference III: confidence intervals and statistical tests; 7. Bayesian inference; Part II. Measurement Techniques: 8. Basic measurements; 9. Event reconstruction; 10. Correlation functions; 11. The multiple facets of correlation functions; 12. Data correction methods; Part III. Simulation Techniques: 13. Monte Carlo methods; 14. Collision and detector modeling; List of references; Index.

  12. Measurement of neutrino and antineutrino oscillations by the T2K experiment including a new additional sample of νe interactions at the far detector

    NASA Astrophysics Data System (ADS)

    Abe, K.; Amey, J.; Andreopoulos, C.; Antonova, M.; Aoki, S.; Ariga, A.; Ashida, Y.; Ban, S.; Barbi, M.; Barker, G. J.; Barr, G.; Barry, C.; Batkiewicz, M.; Berardi, V.; Berkman, S.; Bhadra, S.; Bienstock, S.; Blondel, A.; Bolognesi, S.; Bordoni, S.; Boyd, S. B.; Brailsford, D.; Bravar, A.; Bronner, C.; Buizza Avanzini, M.; Calland, R. G.; Campbell, T.; Cao, S.; Cartwright, S. L.; Catanesi, M. G.; Cervera, A.; Chappell, A.; Checchia, C.; Cherdack, D.; Chikuma, N.; Christodoulou, G.; Coleman, J.; Collazuol, G.; Coplowe, D.; Cudd, A.; Dabrowska, A.; De Rosa, G.; Dealtry, T.; Denner, P. F.; Dennis, S. R.; Densham, C.; Di Lodovico, F.; Dolan, S.; Drapier, O.; Duffy, K. E.; Dumarchez, J.; Dunne, P.; Emery-Schrenk, S.; Ereditato, A.; Feusels, T.; Finch, A. J.; Fiorentini, G. A.; Fiorillo, G.; Friend, M.; Fujii, Y.; Fukuda, D.; Fukuda, Y.; Garcia, A.; Giganti, C.; Gizzarelli, F.; Golan, T.; Gonin, M.; Hadley, D. R.; Haegel, L.; Haigh, J. T.; Hansen, D.; Harada, J.; Hartz, M.; Hasegawa, T.; Hastings, N. C.; Hayashino, T.; Hayato, Y.; Hillairet, A.; Hiraki, T.; Hiramoto, A.; Hirota, S.; Hogan, M.; Holeczek, J.; Hosomi, F.; Huang, K.; Ichikawa, A. K.; Ikeda, M.; Imber, J.; Insler, J.; Intonti, R. A.; Ishida, T.; Ishii, T.; Iwai, E.; Iwamoto, K.; Izmaylov, A.; Jamieson, B.; Jiang, M.; Johnson, S.; Jonsson, P.; Jung, C. K.; Kabirnezhad, M.; Kaboth, A. C.; Kajita, T.; Kakuno, H.; Kameda, J.; Karlen, D.; Katori, T.; Kearns, E.; Khabibullin, M.; Khotjantsev, A.; Kim, H.; Kim, J.; King, S.; Kisiel, J.; Knight, A.; Knox, A.; Kobayashi, T.; Koch, L.; Koga, T.; Koller, P. P.; Konaka, A.; Kormos, L. L.; Koshio, Y.; Kowalik, K.; Kudenko, Y.; Kurjata, R.; Kutter, T.; Lagoda, J.; Lamont, I.; Lamoureux, M.; Lasorak, P.; Laveder, M.; Lawe, M.; Licciardi, M.; Lindner, T.; Liptak, Z. J.; Litchfield, R. P.; Li, X.; Longhin, A.; Lopez, J. P.; Lou, T.; Ludovici, L.; Lu, X.; Magaletti, L.; Mahn, K.; Malek, M.; Manly, S.; Maret, L.; Marino, A. D.; Martin, J. F.; Martins, P.; Martynenko, S.; Maruyama, T.; Matveev, V.; Mavrokoridis, K.; Ma, W. Y.; Mazzucato, E.; McCarthy, M.; McCauley, N.; McFarland, K. S.; McGrew, C.; Mefodiev, A.; Metelko, C.; Mezzetto, M.; Minamino, A.; Mineev, O.; Mine, S.; Missert, A.; Miura, M.; Moriyama, S.; Morrison, J.; Mueller, Th. A.; Nakadaira, T.; Nakahata, M.; Nakamura, K. G.; Nakamura, K.; Nakamura, K. D.; Nakanishi, Y.; Nakayama, S.; Nakaya, T.; Nakayoshi, K.; Nantais, C.; Nielsen, C.; Nishikawa, K.; Nishimura, Y.; Novella, P.; Nowak, J.; O'Keeffe, H. M.; Okumura, K.; Okusawa, T.; Oryszczak, W.; Oser, S. M.; Ovsyannikova, T.; Owen, R. A.; Oyama, Y.; Palladino, V.; Palomino, J. L.; Paolone, V.; Patel, N. D.; Paudyal, P.; Pavin, M.; Payne, D.; Petrov, Y.; Pickering, L.; Pinzon Guerra, E. S.; Pistillo, C.; Popov, B.; Posiadala-Zezula, M.; Poutissou, J.-M.; Pritchard, A.; Przewlocki, P.; Quilain, B.; Radermacher, T.; Radicioni, E.; Ratoff, P. N.; Rayner, M. A.; Reinherz-Aronis, E.; Riccio, C.; Rodrigues, P. A.; Rondio, E.; Rossi, B.; Roth, S.; Ruggeri, A. C.; Rychter, A.; Sakashita, K.; Sánchez, F.; Scantamburlo, E.; Scholberg, K.; Schwehr, J.; Scott, M.; Seiya, Y.; Sekiguchi, T.; Sekiya, H.; Sgalaberna, D.; Shah, R.; Shaikhiev, A.; Shaker, F.; Shaw, D.; Shiozawa, M.; Shirahige, T.; Smy, M.; Sobczyk, J. T.; Sobel, H.; Steinmann, J.; Stewart, T.; Stowell, P.; Suda, Y.; Suvorov, S.; Suzuki, A.; Suzuki, S. Y.; Suzuki, Y.; Tacik, R.; Tada, M.; Takeda, A.; Takeuchi, Y.; Tamura, R.; Tanaka, H. K.; Tanaka, H. A.; Thakore, T.; Thompson, L. 
F.; Tobayama, S.; Toki, W.; Tomura, T.; Tsukamoto, T.; Tzanov, M.; Vagins, M.; Vallari, Z.; Vasseur, G.; Vilela, C.; Vladisavljevic, T.; Wachala, T.; Walter, C. W.; Wark, D.; Wascko, M. O.; Weber, A.; Wendell, R.; Wilking, M. J.; Wilkinson, C.; Wilson, J. R.; Wilson, R. J.; Wret, C.; Yamada, Y.; Yamamoto, K.; Yanagisawa, C.; Yano, T.; Yen, S.; Yershov, N.; Yokoyama, M.; Yu, M.; Zalewska, A.; Zalipska, J.; Zambelli, L.; Zaremba, K.; Ziembicki, M.; Zimmerman, E. D.; Zito, M.; T2K Collaboration

    2017-11-01

    The T2K experiment reports an updated analysis of neutrino and antineutrino oscillations in appearance and disappearance channels. A sample of electron neutrino candidates at Super-Kamiokande in which a pion decay has been tagged is added to the four single-ring samples used in previous T2K oscillation analyses. Through combined analyses of these five samples, simultaneous measurements of four oscillation parameters, |Δm²₃₂|, sin²θ₂₃, sin²θ₁₃, and δCP, and of the mass ordering are made. A set of studies of simulated data indicates that the sensitivity to the oscillation parameters is not limited by neutrino interaction model uncertainty. Multiple oscillation analyses are performed, and frequentist and Bayesian intervals are presented for combinations of the oscillation parameters with and without the inclusion of reactor constraints on sin²θ₁₃. When combined with reactor measurements, the hypothesis of CP conservation (δCP = 0 or π) is excluded at the 90% confidence level. The 90% confidence region for δCP is [-2.95, -0.44] ([-1.47, -1.27]) for normal (inverted) ordering. The central values and 68% confidence intervals for the other oscillation parameters for normal (inverted) ordering are Δm²₃₂ = 2.54 ± 0.08 (2.51 ± 0.08) × 10⁻³ eV²/c⁴ and sin²θ₂₃ = 0.55 +0.05/−0.09 (0.55 +0.05/−0.08), compatible with maximal mixing. In the Bayesian analysis, the data weakly prefer normal ordering (Bayes factor 3.7) and the upper octant for sin²θ₂₃ (Bayes factor 2.4).

  13. Robust power spectral estimation for EEG data

    PubMed Central

    Melman, Tamar; Victor, Jonathan D.

    2016-01-01

    Background Typical electroencephalogram (EEG) recordings often contain substantial artifact. These artifacts, often large and intermittent, can interfere with quantification of the EEG via its power spectrum. To reduce the impact of artifact, EEG records are typically cleaned by a preprocessing stage that removes individual segments or components of the recording. However, such preprocessing can introduce bias, discard available signal, and be labor-intensive. With this motivation, we present a method that uses robust statistics to reduce dependence on preprocessing by minimizing the effect of large intermittent outliers on the spectral estimates. New method Using the multitaper method[1] as a starting point, we replaced the final step of the standard power spectrum calculation with a quantile-based estimator, and the Jackknife approach to confidence intervals with a Bayesian approach. The method is implemented in provided MATLAB modules, which extend the widely used Chronux toolbox. Results Using both simulated and human data, we show that in the presence of large intermittent outliers, the robust method produces improved estimates of the power spectrum, and that the Bayesian confidence intervals yield close-to-veridical coverage factors. Comparison to existing method The robust method, as compared to the standard method, is less affected by artifact: inclusion of outliers produces fewer changes in the shape of the power spectrum as well as in the coverage factor. Conclusion In the presence of large intermittent outliers, the robust method can reduce dependence on data preprocessing as compared to standard methods of spectral estimation. PMID:27102041

  14. Robust power spectral estimation for EEG data.

    PubMed

    Melman, Tamar; Victor, Jonathan D

    2016-08-01

    Typical electroencephalogram (EEG) recordings often contain substantial artifact. These artifacts, often large and intermittent, can interfere with quantification of the EEG via its power spectrum. To reduce the impact of artifact, EEG records are typically cleaned by a preprocessing stage that removes individual segments or components of the recording. However, such preprocessing can introduce bias, discard available signal, and be labor-intensive. With this motivation, we present a method that uses robust statistics to reduce dependence on preprocessing by minimizing the effect of large intermittent outliers on the spectral estimates. Using the multitaper method (Thomson, 1982) as a starting point, we replaced the final step of the standard power spectrum calculation with a quantile-based estimator, and the Jackknife approach to confidence intervals with a Bayesian approach. The method is implemented in provided MATLAB modules, which extend the widely used Chronux toolbox. Using both simulated and human data, we show that in the presence of large intermittent outliers, the robust method produces improved estimates of the power spectrum, and that the Bayesian confidence intervals yield close-to-veridical coverage factors. The robust method, as compared to the standard method, is less affected by artifact: inclusion of outliers produces fewer changes in the shape of the power spectrum as well as in the coverage factor. In the presence of large intermittent outliers, the robust method can reduce dependence on data preprocessing as compared to standard methods of spectral estimation. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Estimations of evapotranspiration and water balance with uncertainty over the Yukon River Basin

    USGS Publications Warehouse

    Yuan, Wenping; Liu, Shuguang; Liang, Shunlin; Tan, Zhengxi; Liu, Heping; Young, Claudia

    2012-01-01

    In this study, the revised Remote Sensing-Penman Monteith model (RS-PM) was used to scale up evapotranspiration (ET) over the entire Yukon River Basin (YRB) from three eddy covariance (EC) towers covering major vegetation types. We determined model parameters and uncertainty using a Bayesian-based method at the three EC sites. The 95 % confidence interval for the aggregate ecosystem ET ranged from 233 to 396 mm yr⁻¹ with an average of 319 mm yr⁻¹. The mean difference between precipitation and evapotranspiration (W) was 171 mm yr⁻¹ with a 95 % confidence interval of 94-257 mm yr⁻¹. The YRB region showed a slight increasing trend in annual precipitation for the 1982-2009 time period, while ET showed a significant increasing trend of 6.6 mm decade⁻¹. As a whole, annual W showed a drying trend over the YRB region.

  16. Confidence as Bayesian Probability: From Neural Origins to Behavior.

    PubMed

    Meyniel, Florent; Sigman, Mariano; Mainen, Zachary F

    2015-10-07

    Research on confidence spreads across several sub-fields of psychology and neuroscience. Here, we explore how a definition of confidence as Bayesian probability can unify these viewpoints. This computational view entails that there are distinct forms in which confidence is represented and used in the brain, including distributional confidence, pertaining to neural representations of probability distributions, and summary confidence, pertaining to scalar summaries of those distributions. Summary confidence is, normatively, derived or "read out" from distributional confidence. Neural implementations of readout will trade off optimality versus flexibility of routing across brain systems, allowing confidence to serve diverse cognitive functions. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Bayesian statistics in radionuclide metrology: measurement of a decaying source

    NASA Astrophysics Data System (ADS)

    Bochud, François O.; Bailat, Claude J.; Laedermann, Jean-Pascal

    2007-08-01

    The most intuitive way of defining a probability is perhaps through the frequency at which it appears when a large number of trials are realized in identical conditions. The probability derived from the obtained histogram characterizes the so-called frequentist or conventional statistical approach. In this sense, probability is defined as a physical property of the observed system. By contrast, in Bayesian statistics, a probability is not a physical property or a directly observable quantity, but a degree of belief or an element of inference. The goal of this paper is to show how Bayesian statistics can be used in radionuclide metrology and what its advantages and disadvantages are compared with conventional statistics. This is done through the example of a yttrium-90 source typically encountered in environmental surveillance measurements. Because of the very low activity of this kind of source and the short half-life of the radionuclide, the measurement takes several days, during which the source decays significantly. Several methods are proposed to simultaneously estimate the number of unstable nuclei at a given reference time, the decay constant, and the background. Asymptotically, all approaches give the same result. However, Bayesian statistics produces coherent estimates and confidence intervals from a much smaller number of measurements. Apart from the conceptual understanding of statistics, the main difficulty that could deter radionuclide metrologists from using Bayesian statistics is the complexity of the computation.
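
    A simplified version of the measurement problem: Poisson counts whose mean decays as A·exp(−λt) above a flat background, with the posterior evaluated on a grid. The background is held fixed here for brevity, whereas the paper also infers it; all numbers are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Simulated daily counts from a decaying source on a known background.
    t = np.arange(10.0)                        # days
    A_true, lam_true, bkg = 40.0, 0.26, 5.0    # 90Y half-life ~2.7 d
    counts = rng.poisson(A_true * np.exp(-lam_true * t) + bkg)

    # Grid posterior over (A, lam) with flat priors; background fixed.
    A_grid = np.linspace(1, 100, 300)
    lam_grid = np.linspace(0.01, 1.0, 300)
    AA, LL = np.meshgrid(A_grid, lam_grid, indexing="ij")
    mu = AA[..., None] * np.exp(-LL[..., None] * t) + bkg   # (A, lam, time)
    log_post = np.sum(counts * np.log(mu) - mu, axis=-1)    # Poisson log-lik
    post = np.exp(log_post - log_post.max())
    post /= post.sum()

    # Marginal posterior of the decay constant and its credible interval.
    p_lam = post.sum(axis=0)
    cdf = np.cumsum(p_lam)
    lo = lam_grid[np.searchsorted(cdf, 0.025)]
    hi = lam_grid[np.searchsorted(cdf, 0.975)]
    print(f"lambda: 95% credible interval ({lo:.2f}, {hi:.2f}) per day")
    ```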

  18. A Test by Any Other Name: P Values, Bayes Factors, and Statistical Inference.

    PubMed

    Stern, Hal S

    2016-01-01

    Procedures used for statistical inference are receiving increased scrutiny as the scientific community studies the factors associated with ensuring reproducible research. This note addresses recent negative attention directed at p values, the relationship of confidence intervals and tests, and the role of Bayesian inference and Bayes factors, with an eye toward better understanding these different strategies for statistical inference. We argue that researchers and data analysts too often resort to binary decisions (e.g., whether to reject or accept the null hypothesis) in settings where this may not be required.

  19. Snake River Plain Geothermal Play Fairway Analysis - Phase 1 Raster Files

    DOE Data Explorer

    John Shervais

    2015-10-09

    Snake River Plain Play Fairway Analysis - Phase 1 CRS Raster Files. This dataset contains raster files created in ArcGIS. These raster images depict Common Risk Segment (CRS) maps for HEAT, PERMEABILITY, and SEAL, as well as selected maps of evidence layers. The evidence layers consist of either Bayesian krige functions or kernel density functions, and include: (1) HEAT: heat flow (Bayesian krige map), heat flow standard error on the krige function (data confidence), volcanic vent distribution as a function of age and size, groundwater temperature (equal-interval and natural-breaks bins), and groundwater temperature standard error. (2) PERMEABILITY: fault and lineament maps, both as mapped and as kernel density functions, processed for both dilational tendency (TD) and slip tendency (ST), along with data confidence maps for each data type. Data types include mapped surface faults from USGS and Idaho Geological Survey databases, as well as unpublished mapping; lineations were derived from maximum gradients in magnetic, deep gravity, and intermediate-depth gravity anomalies. (3) SEAL: seal maps based on the presence and thickness of lacustrine sediments and the base of the SRP aquifer. Raster size is 2 km. All files were generated in ArcGIS.

  20. Estimating the extent and distribution of new-onset adult asthma in British Columbia using frequentist and Bayesian approaches.

    PubMed

    Beach, Jeremy; Burstyn, Igor; Cherry, Nicola

    2012-07-01

    We previously described a method to identify the incidence of new-onset adult asthma (NOAA) in Alberta by industry and occupation, utilizing Workers' Compensation Board (WCB) and physician billing data. The aim of this study was to extend this method to data from British Columbia (BC) so as to compare the two provinces and to incorporate Bayesian methodology into estimates of risk. WCB claims for any reason 1995-2004 were linked to physician billing data. NOAA was defined as a billing for asthma (ICD-9 493) in the 12 months before a WCB claim without asthma in the previous 3 years. Incidence was calculated by occupation and industry. In a matched case-referent analysis, associations with exposures were examined using an asthma-specific job exposure matrix (JEM). Posterior distributions from the Alberta analysis and estimated misclassification parameters were used as priors in the Bayesian analysis of the BC data. Among 1 118 239 eligible WCB claims the incidence of NOAA was 1.4%. Sixteen occupations and 44 industries had a significantly increased risk; six industries had a decreased risk. The JEM identified wood dust [odds ratio (OR) 1.55, 95% confidence interval (CI) 1.08-2.24] and animal antigens (OR 1.66, 95% CI 1.17-2.36) as related to an increased risk of NOAA. Exposure to isocyanates was associated with decreased risk (OR 0.57, 95% CI 0.39-0.85). Bayesian analyses taking account of exposure misclassification and informative priors resulted in posterior distributions of ORs with lower boundary of 95% credible intervals >1.00 for almost all exposures. The distribution of NOAA in BC appeared somewhat similar to that in Alberta, except for isocyanates. Bayesian analyses allowed incorporation of prior evidence into risk estimates, permitting reconsideration of the apparently protective effect of isocyanate exposure.

  1. Uncertainty estimation of Intensity-Duration-Frequency relationships: A regional analysis

    NASA Astrophysics Data System (ADS)

    Mélèse, Victor; Blanchet, Juliette; Molinié, Gilles

    2018-03-01

    We propose in this article a regional study of uncertainties in IDF curves derived from point-rainfall maxima. We develop two generalized extreme value models based on the simple scaling assumption, first in the frequentist framework and second in the Bayesian framework. Within the frequentist framework, uncertainties are obtained i) from the Gaussian density stemming from the asymptotic normality of the maximum likelihood estimator and ii) with a bootstrap procedure. Within the Bayesian framework, uncertainties are obtained from the posterior densities. We confront these two frameworks on the same database, covering a large region of 100,000 km² in southern France with contrasted rainfall regimes, in order to draw conclusions that are not specific to the data. The two frameworks are applied to 405 hourly stations with data back to the 1980s, accumulated in the range 3 h-120 h. We show that i) the Bayesian framework is more robust than the frequentist one to the starting point of the estimation procedure, ii) the posterior and bootstrap densities adjust uncertainty estimation to the data better than the Gaussian density does, and iii) the bootstrap density gives unreasonable confidence intervals, in particular for return levels associated with large return periods. Our recommendation therefore goes towards using the Bayesian framework to compute uncertainty.
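
    The bootstrap branch of such an analysis can be sketched on synthetic annual maxima: fit a GEV by maximum likelihood and parametrically bootstrap the 100-year return level, the kind of quantity for which the authors find bootstrap intervals behave worst at large return periods. The simple-scaling IDF structure is omitted, and all values are invented.

    ```python
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(4)
    annual_max = genextreme.rvs(-0.1, loc=20, scale=5, size=40,
                                random_state=rng)

    # MLE fit of the GEV and the 100-year return level.
    c, loc, scale = genextreme.fit(annual_max)
    rl_100 = genextreme.ppf(1 - 1/100, c, loc, scale)

    # Parametric bootstrap: refit on samples drawn from the fitted GEV.
    boot = []
    for _ in range(500):
        resample = genextreme.rvs(c, loc=loc, scale=scale,
                                  size=len(annual_max), random_state=rng)
        cb, lb, sb = genextreme.fit(resample)
        boot.append(genextreme.ppf(1 - 1/100, cb, lb, sb))

    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"100-yr return level {rl_100:.1f}, "
          f"bootstrap 95% CI ({lo:.1f}, {hi:.1f})")
    ```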

  2. Receptive Field Inference with Localized Priors

    PubMed Central

    Park, Mijung; Pillow, Jonathan W.

    2011-01-01

    The linear receptive field describes a mapping from sensory stimuli to a one-dimensional variable governing a neuron's spike response. However, traditional receptive field estimators such as the spike-triggered average converge slowly and often require large amounts of data. Bayesian methods seek to overcome this problem by biasing estimates towards solutions that are more likely a priori, typically those with small, smooth, or sparse coefficients. Here we introduce a novel Bayesian receptive field estimator designed to incorporate locality, a powerful form of prior information about receptive field structure. The key to our approach is a hierarchical receptive field model that flexibly adapts to localized structure in both spacetime and spatiotemporal frequency, using an inference method known as empirical Bayes. We refer to our method as automatic locality determination (ALD), and show that it can accurately recover various types of smooth, sparse, and localized receptive fields. We apply ALD to neural data from retinal ganglion cells and V1 simple cells, and find it achieves error rates several times lower than standard estimators. Thus, estimates of comparable accuracy can be achieved with substantially less data. Finally, we introduce a computationally efficient Markov Chain Monte Carlo (MCMC) algorithm for fully Bayesian inference under the ALD prior, yielding accurate Bayesian confidence intervals for small or noisy datasets. PMID:22046110

  3. Physics-based, Bayesian sequential detection method and system for radioactive contraband

    DOEpatents

    Candy, James V; Axelrod, Michael C; Breitfeller, Eric F; Chambers, David H; Guidry, Brian L; Manatt, Douglas R; Meyer, Alan W; Sale, Kenneth E

    2014-03-18

    A distributed sequential method and system for detecting and identifying radioactive contraband from highly uncertain (noisy), low-count radionuclide measurements, i.e., an event mode sequence (EMS), using a statistical approach based on Bayesian inference and physics-model-based signal processing that represents a radionuclide as a decomposition into monoenergetic sources. For a given photon event of the EMS, the appropriate monoenergy processing channel is determined using a confidence-interval, condition-based discriminator for the energy amplitude and interarrival time, and parameter estimates are used to update a measured probability density function estimate for a target radionuclide. A sequential likelihood ratio test is then used to determine one of two threshold conditions signifying that the EMS is either identified as the target radionuclide or not; if not, the process is repeated for the next sequential photon event of the EMS until one of the two threshold conditions is satisfied.
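
    The sequential likelihood ratio test at the heart of the claim can be sketched for a single monoenergy channel with exponential inter-arrival times; the rates and error levels below are invented, and the patented system's physics-based channel discrimination is omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Two hypotheses about the photon arrival rate (counts/s):
    # background only (H0) vs background plus a target source (H1).
    rate0, rate1 = 10.0, 14.0

    # Wald thresholds for error rates alpha (false alarm) and beta (miss).
    alpha, beta = 0.01, 0.01
    upper = np.log((1 - beta) / alpha)
    lower = np.log(beta / (1 - alpha))

    llr, n = 0.0, 0
    true_rate = 14.0                 # simulate a source actually present
    while lower < llr < upper:
        dt = rng.exponential(1.0 / true_rate)   # next inter-arrival time
        # Log-likelihood ratio of one exponential interval under H1 vs H0.
        llr += np.log(rate1 / rate0) - (rate1 - rate0) * dt
        n += 1

    decision = "target radionuclide" if llr >= upper else "background"
    print(f"decided '{decision}' after {n} photon events")
    ```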

  4. Confidence of compliance: a Bayesian approach for percentile standards.

    PubMed

    McBride, G B; Ellis, J C

    2001-04-01

    Rules for assessing compliance with percentile standards commonly limit the number of exceedances permitted in a batch of samples taken over a defined assessment period. Such rules are typically developed using classical statistical methods. Results from alternative Bayesian methods are presented (using beta-distributed prior information and a binomial likelihood), yielding "confidence of compliance" graphs. These allow simple reading of the consumer's and supplier's risks for any proposed rule. The influence of the prior assumptions required by the Bayesian technique on the confidence results is demonstrated, using two reference priors (uniform and Jeffreys') and also using optimistic and pessimistic user-defined priors. All four give less pessimistic results than does the classical technique, because interpreting classical results as "confidence of compliance" actually invokes a Bayesian approach with an extreme prior distribution. Jeffreys' prior is shown to be the most generally appropriate choice of prior distribution. Cost savings can be expected using rules based on this approach.
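
    The beta-binomial computation behind such graphs is short; a sketch using the Jeffreys prior the paper recommends, for a hypothetical standard requiring the true exceedance probability to be at most 0.05:

    ```python
    from scipy.stats import beta

    # A percentile standard: the true exceedance probability p must not
    # exceed 0.05. Suppose k exceedances are observed in n samples.
    n, k = 50, 2

    # Jeffreys prior Beta(1/2, 1/2); the beta-binomial conjugacy gives
    # posterior Beta(1/2 + k, 1/2 + n - k).
    posterior = beta(0.5 + k, 0.5 + n - k)

    # "Confidence of compliance": posterior probability that p <= 0.05.
    print(f"P(p <= 0.05 | data) = {posterior.cdf(0.05):.3f}")

    # Evaluating this for each possible k gives the consumer's and
    # supplier's risks of any proposed rule (e.g., "pass if k <= 2 of 50").
    ```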

  5. Bayesian latent class estimation of the incidence of chest radiograph-confirmed pneumonia in rural Thailand.

    PubMed

    Lu, Y; Baggett, H C; Rhodes, J; Thamthitiwat, S; Joseph, L; Gregory, C J

    2016-10-01

    Pneumonia is a leading cause of mortality and morbidity worldwide, with radiographically confirmed pneumonia serving as a key disease-burden indicator. Radiographic confirmation is usually determined by a radiology panel, which is assumed to be the best available standard; however, this assumption may introduce bias into pneumonia incidence estimates. To improve estimates of radiographic pneumonia incidence, we applied Bayesian latent class modelling (BLCM) to a large database of hospitalized patients with acute lower respiratory tract illness in Sa Kaeo and Nakhon Phanom provinces, Thailand, from 2005 to 2010, with chest radiographs read by both a radiology panel and a clinician. We compared these estimates to those from a conventional analysis. For children aged <5 years, the estimated radiographically confirmed pneumonia incidence by BLCM was 2394/100 000 person-years (95% credible interval 2185-2574) vs. 1736/100 000 person-years (95% confidence interval 1706-1766) from the conventional analysis. For persons aged ⩾5 years, the estimated radiographically confirmed pneumonia incidence was similar between BLCM and the conventional analysis (235 vs. 215/100 000 person-years). BLCM suggests the incidence of radiographically confirmed pneumonia in young children is substantially larger than estimated from the conventional approach using radiology panels as the reference standard.

  6. Neutron multiplicity counting: Confidence intervals for reconstruction parameters

    DOE PAGES

    Verbeke, Jerome M.

    2016-03-09

    From nuclear materials accountability to homeland security, the need for improved nuclear material detection, assay, and authentication has grown over the past decades. Starting in the 1940s, neutron multiplicity counting techniques have enabled quantitative evaluation of the masses and multiplications of fissile materials. In this paper, we propose a new method to compute uncertainties on these parameters using a model-based sequential Bayesian processor, resulting in credible regions in the fissile-material mass and multiplication space. These uncertainties will enable us to quantitatively evaluate proposed improvements to the theoretical fission chain model. Additionally, because the processor can calculate uncertainties in real time, it is a useful tool in applications such as portal monitoring: monitoring can stop as soon as a preset confidence of non-threat is reached.

  7. Internal Medicine residents use heuristics to estimate disease probability.

    PubMed

    Phang, Sen Han; Ravani, Pietro; Schaefer, Jeffrey; Wright, Bruce; McLaughlin, Kevin

    2015-01-01

    Training in Bayesian reasoning may have limited impact on the accuracy of probability estimates. In this study, our goal was to explore whether residents previously exposed to Bayesian reasoning use heuristics rather than Bayesian reasoning to estimate disease probabilities. We predicted that if residents use heuristics, then post-test probability estimates would be increased by non-discriminating clinical features or a high anchor for a target condition. We randomized 55 Internal Medicine residents to different versions of four clinical vignettes and asked them to estimate the probabilities of the target conditions. We manipulated the clinical data for each vignette to be consistent with either 1) using a representative heuristic, by adding non-discriminating prototypical clinical features of the target condition, or 2) using an anchoring-with-adjustment heuristic, by providing a high or low anchor for the target condition. When presented with additional non-discriminating data, the odds of diagnosing the target condition were increased (odds ratio (OR) 2.83, 95% confidence interval [1.30, 6.15], p = 0.009). Similarly, the odds of diagnosing the target condition were increased when a high anchor preceded the vignette (OR 2.04, [1.09, 3.81], p = 0.025). Our findings suggest that despite previous exposure to Bayesian reasoning, residents use heuristics, such as the representative heuristic and anchoring with adjustment, to estimate probabilities. Potential reasons for attribute substitution include the relative cognitive ease of heuristics compared with Bayesian reasoning, or the possibility that residents in clinical practice use gist traces rather than precise probability estimates when diagnosing.

  8. Bayesian Inference for Generalized Linear Models for Spiking Neurons

    PubMed Central

    Gerwinn, Sebastian; Macke, Jakob H.; Bethge, Matthias

    2010-01-01

    Generalized Linear Models (GLMs) are commonly used statistical methods for modelling the relationship between neural population activity and presented stimuli. When the dimension of the parameter space is large, strong regularization has to be used in order to fit GLMs to datasets of realistic size without overfitting. By imposing properly chosen priors over parameters, Bayesian inference provides an effective and principled approach for achieving regularization. Here we show how the posterior distribution over the model parameters of GLMs can be approximated by a Gaussian using the Expectation Propagation algorithm. In this way, we obtain an estimate of the posterior mean and posterior covariance, allowing us to calculate Bayesian confidence intervals that characterize the uncertainty about the optimal solution. From the posterior we also obtain a different point estimate, namely the posterior mean, as opposed to the commonly used maximum a posteriori estimate. We systematically compare the different inference techniques on simulated as well as on multi-electrode recordings of retinal ganglion cells, and explore the effects of the chosen prior and the performance measure used. We find that good performance can be achieved by choosing a Laplace prior together with the posterior mean estimate.

  9. Inverse and forward modeling under uncertainty using MRE-based Bayesian approach

    NASA Astrophysics Data System (ADS)

    Hou, Z.; Rubin, Y.

    2004-12-01

    A stochastic inverse approach for subsurface characterization is proposed and applied to the shallow vadose zone at a winery field site in northern California and to a gas reservoir at the Ormen Lange field site in the North Sea. The approach is formulated in a Bayesian-stochastic framework, whereby the unknown parameters are identified in terms of their statistical moments or their probabilities. Instead of the traditional single-valued estimation/prediction provided by deterministic methods, the approach gives a probability distribution for each unknown parameter. This allows calculating the mean, the mode, and the confidence interval, which is useful for a rational treatment of uncertainty and its consequences. The approach also allows incorporating data of various types and different error levels, including measurements of state variables as well as information such as bounds on, or statistical moments of, the unknown parameters, which may represent prior information. To obtain the minimally subjective prior probabilities required for the Bayesian approach, the principle of Minimum Relative Entropy (MRE) is employed. The approach is tested at field sites for flow-parameter identification and soil moisture estimation in the vadose zone, and for gas saturation estimation at great depth below the ocean floor. Results indicate the potential of coupling various types of field data within an MRE-based Bayesian formalism for improving the estimation of the parameters of interest.

  10. Simulation-based estimation of mean and standard deviation for meta-analysis via Approximate Bayesian Computation (ABC).

    PubMed

    Kwon, Deukwoo; Reis, Isildinha M

    2015-08-12

    When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating the mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions; the corresponding average relative error (ARE) approaches zero as the sample size increases. For data generated from the normal distribution, our ABC method also performs well, although the Wan et al. method is best for estimating the standard deviation in this case. In the estimation of the mean, our ABC method is best regardless of the assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can also be applied using other reported summary statistics, such as the posterior mean and 95 % credible interval when a Bayesian analysis has been employed.
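
    A minimal ABC rejection sampler for this task, assuming a normal outcome and using the reported median, minimum, and maximum as the matching summaries. The priors, tolerance, and all numbers are illustrative, and the published method's choices will differ.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Summary statistics "reported" by a study (invented): sample size,
    # median, minimum, and maximum of a continuous outcome.
    n, med_obs, min_obs, max_obs = 100, 12.0, 4.0, 28.0
    obs = np.array([med_obs, min_obs, max_obs])

    def summaries(sample):
        return np.array([np.median(sample), sample.min(), sample.max()])

    # ABC rejection: draw (mu, sigma) from broad priors, simulate a normal
    # sample of size n, keep draws whose summaries land near the observed
    # ones. Smaller tolerance -> better approximation, fewer acceptances.
    accepted = []
    for _ in range(100_000):
        mu = rng.uniform(0, 30)
        sigma = rng.uniform(0.1, 15)
        sim = rng.normal(mu, sigma, n)
        if np.linalg.norm(summaries(sim) - obs) < 2.5:
            accepted.append((mu, sigma))

    if accepted:
        accepted = np.array(accepted)
        print(f"kept {len(accepted)} draws; "
              f"mean ~ {accepted[:, 0].mean():.2f}, "
              f"SD ~ {accepted[:, 1].mean():.2f}")
    ```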

  11. Reward maximization justifies the transition from sensory selection at childhood to sensory integration at adulthood.

    PubMed

    Daee, Pedram; Mirian, Maryam S; Ahmadabadi, Majid Nili

    2014-01-01

    In a multisensory task, human adults integrate information from different sensory modalities, behaviorally in an optimal Bayesian fashion, while children mostly rely on a single sensory modality for decision making. The reason behind this change of behavior over age, and the process behind learning the statistics required for optimal integration, are still unclear and have not been explained by conventional Bayesian modeling. We propose an interactive multisensory learning framework that makes no prior assumptions about the sensory models. In this framework, learning in every modality and in their joint space is done in parallel using a single-step reinforcement learning method. A simple statistical test on confidence intervals on the mean of reward distributions is used to select the most informative source of information among the individual modalities and the joint space. Analyses of the method and the simulation results on a multimodal localization task show that the learning system autonomously starts with sensory selection and gradually switches to sensory integration. This is because relying more on individual modalities (i.e., selection) at early learning steps (childhood) is more rewarding than favoring decisions learned in the joint space: the smaller state space of each modality results in faster learning in every individual modality. In contrast, after gaining sufficient experience (adulthood), the quality of learning in the joint space matures, while learning in the individual modalities suffers from insufficient accuracy due to perceptual aliasing. This results in a tighter confidence interval for the joint space and consequently causes a smooth shift from selection to integration. It suggests that sensory selection and integration are emergent behaviors, and both are outputs of a single reward-maximization process; i.e., the transition is not a preprogrammed phenomenon.

  12. Inclusion of historical information in flood frequency analysis using a Bayesian MCMC technique: a case study for the power dam Orlík, Czech Republic

    NASA Astrophysics Data System (ADS)

    Gaál, Ladislav; Szolgay, Ján; Kohnová, Silvia; Hlavčová, Kamila; Viglione, Alberto

    2010-01-01

    The paper deals with at-site flood frequency estimation in the case when information on extraordinarily large hydrological events from the past is also available. For the joint frequency analysis of systematic observations and historical data, the Bayesian framework is chosen, which, through adequately defined likelihood functions, allows for the incorporation of different sources of hydrological information, e.g., maximum annual flood peaks, historical events, and measurement errors. The distribution of the parameters of the fitted distribution function and the confidence intervals of the flood quantiles are derived by means of the Markov chain Monte Carlo (MCMC) simulation technique. The paper presents a sensitivity analysis related to the choice of the most influential parameters of the statistical model, namely the length of the historical period h and the perception threshold X0. These enter the statistical model under the assumption that, apart from the events termed 'historical', none of the (unknown) peak discharges from the historical period h exceeded the threshold X0. Both higher values of h and lower values of X0 lead to narrower confidence intervals of the estimated flood quantiles; however, it is emphasized that one should be prudent in selecting these parameters, in order to avoid making inferences under wrong assumptions about the unknown hydrological events that occurred in the past. The Bayesian MCMC methodology is presented on the example of the maximum discharges observed during the warm half-year at the station Vltava-Kamýk (Czech Republic) in the period 1877-2002. Although the 2002 flood peak, which is related to the vast flooding that affected a large part of Central Europe at that time, occurred in the near past, in the analysis it is treated as a 'historical' event in order to illustrate some crucial aspects of including information on extreme historical floods in at-site flood frequency analyses.

  13. Confidence Intervals for Laboratory Sonic Boom Annoyance Tests

    NASA Technical Reports Server (NTRS)

    Rathsam, Jonathan; Christian, Andrew

    2016-01-01

    Commercial supersonic flight is currently forbidden over land because sonic booms have historically caused unacceptable annoyance levels in overflown communities. NASA is providing data and expertise to noise regulators as they consider relaxing the ban for future quiet supersonic aircraft. One deliverable NASA will provide is a predictive model for indoor annoyance to aid in setting an acceptable quiet sonic boom threshold. A laboratory study was conducted to determine how indoor vibrations caused by sonic booms affect annoyance judgments. The test method required finding the point of subjective equality (PSE) between sonic boom signals that cause vibrations and signals not causing vibrations played at various amplitudes. This presentation focuses on a few statistical techniques for estimating the interval around the PSE. The techniques examined are the Delta Method, Parametric and Nonparametric Bootstrapping, and Bayesian Posterior Estimation.
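
    As an illustration of one of these techniques, the following sketch computes a nonparametric bootstrap interval around the PSE of a logistic psychometric function. The data, amplitudes, and two-parameter model are hypothetical stand-ins for the study's actual design:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit

    rng = np.random.default_rng(42)

    def fit_pse(amp, resp):
        """Fit P(response) = expit((amp - pse) / slope) by maximum likelihood
        and return the point of subjective equality (PSE)."""
        def nll(theta):
            pse_, log_slope = theta
            p = np.clip(expit((amp - pse_) / np.exp(log_slope)), 1e-9, 1 - 1e-9)
            return -np.sum(resp * np.log(p) + (1 - resp) * np.log(1 - p))
        return minimize(nll, x0=[amp.mean(), 0.0], method="Nelder-Mead").x[0]

    # hypothetical trials: booms without vibration at several amplitudes,
    # each judged against a fixed boom-with-vibration reference
    amp = np.repeat([70.0, 74.0, 78.0, 82.0, 86.0], 40)
    resp = rng.binomial(1, expit((amp - 78.0) / 3.0))

    boot = []
    for _ in range(2000):                 # resample trials with replacement
        i = rng.integers(0, amp.size, amp.size)
        boot.append(fit_pse(amp[i], resp[i]))

    print("PSE:", round(fit_pse(amp, resp), 2),
          "95% bootstrap CI:", np.percentile(boot, [2.5, 97.5]).round(2))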

  14. A Bayesian model averaging method for the derivation of reservoir operating rules

    NASA Astrophysics Data System (ADS)

    Zhang, Jingwen; Liu, Pan; Wang, Hao; Lei, Xiaohui; Zhou, Yanlai

    2015-09-01

    Because the intrinsic dynamics among optimal decision making, inflow processes, and reservoir characteristics are complex, the functional forms of reservoir operating rules are usually determined subjectively. As a result, the uncertainty in selecting the form and/or model of reservoir operating rules must be analyzed and evaluated. In this study, we analyze the uncertainty of reservoir operating rules using Bayesian model averaging (BMA). Three popular operating rules, namely piecewise linear regression, surface fitting and a least-squares support vector machine, are established based on the optimal deterministic reservoir operation. These individual models provide three-member decisions for the BMA combination, enabling the 90% release interval to be estimated by Markov chain Monte Carlo simulation. A case study of China's Baise reservoir shows that: (1) the optimal deterministic reservoir operation, which is superior to any operating rule, provides the samples from which the rules are derived; (2) the least-squares support vector machine model is more effective than both piecewise linear regression and surface fitting; (3) BMA outperforms any individual model of operating rules based on the optimal trajectories. It is revealed that the proposed model can reduce the uncertainty of operating rules, which is of great potential benefit in evaluating the confidence interval of decisions.

  15. Internal Medicine residents use heuristics to estimate disease probability

    PubMed Central

    Phang, Sen Han; Ravani, Pietro; Schaefer, Jeffrey; Wright, Bruce; McLaughlin, Kevin

    2015-01-01

    Background Training in Bayesian reasoning may have limited impact on the accuracy of probability estimates. In this study, our goal was to explore whether residents previously exposed to Bayesian reasoning use heuristics rather than Bayesian reasoning to estimate disease probabilities. We predicted that if residents use heuristics then post-test probability estimates would be increased by non-discriminating clinical features or a high anchor for a target condition. Method We randomized 55 Internal Medicine residents to different versions of four clinical vignettes and asked them to estimate probabilities of target conditions. We manipulated the clinical data for each vignette to be consistent with either 1) use of the representativeness heuristic, by adding non-discriminating prototypical clinical features of the target condition, or 2) use of the anchoring-with-adjustment heuristic, by providing a high or low anchor for the target condition. Results When presented with additional non-discriminating data the odds of diagnosing the target condition were increased (odds ratio (OR) 2.83, 95% confidence interval [1.30, 6.15], p = 0.009). Similarly, the odds of diagnosing the target condition were increased when a high anchor preceded the vignette (OR 2.04, [1.09, 3.81], p = 0.025). Conclusions Our findings suggest that despite previous exposure to Bayesian reasoning, residents use heuristics, such as the representativeness heuristic and anchoring with adjustment, to estimate probabilities. Potential reasons for attribute substitution include the relative cognitive ease of heuristics vs. Bayesian reasoning, or perhaps that residents in their clinical practice use gist traces rather than precise probability estimates when diagnosing. PMID:27004080
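
    For reference, the normative Bayesian calculation that such training targets is the odds form of Bayes' rule; the numbers below are hypothetical:

    def posttest_probability(pretest_p, lr):
        """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
        prior_odds = pretest_p / (1 - pretest_p)
        post_odds = prior_odds * lr
        return post_odds / (1 + post_odds)

    # e.g. a 20% pretest probability and a positive test with LR+ = 4
    print(posttest_probability(0.20, 4))  # -> 0.5

    Non-discriminating features have a likelihood ratio of 1 and should leave the post-test probability unchanged; the study's point is that residents' estimates moved anyway.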

  16. Species delimitation of the Hyphydrus ovatus complex in western Palaearctic with an update of species distributions (Coleoptera, Dytiscidae)

    PubMed Central

    Bergsten, Johannes; Weingartner, Elisabeth; Hájek, Jiří

    2017-01-01

    Abstract The species status of Hyphydrus anatolicus Guignot, 1957 and H. sanctus Sharp, 1882, previously often confused with the widespread H. ovatus (Linnaeus, 1760), is tested with molecular and morphological characters. Cytochrome c oxidase subunit 1 (CO1) was sequenced for 32 specimens of all three species. Gene trees were inferred with parsimony, time-free Bayesian and strict-clock Bayesian analyses. The GMYC model was used to estimate species limits. All three species were reciprocally monophyletic with CO1 and highly supported. The GMYC species delimitation analysis unequivocally delimited the three species, with no solution other than the three-species solution included in the confidence interval. A likelihood ratio test rejected the one-species null model. Important morphological characters distinguishing the species are provided and illustrated. New distributional data are given for the following species: Hyphydrus anatolicus from Slovakia and Ukraine, and H. aubei Ganglbauer, 1891, and H. sanctus from Turkey. PMID:28769697

  17. Bayesian Posterior Odds Ratios: Statistical Tools for Collaborative Evaluations

    ERIC Educational Resources Information Center

    Hicks, Tyler; Rodríguez-Campos, Liliana; Choi, Jeong Hoon

    2018-01-01

    To begin statistical analysis, Bayesians quantify their confidence in modeling hypotheses with priors. A prior describes the probability of a certain modeling hypothesis apart from the data. Bayesians should be able to defend their choice of prior to a skeptical audience. Collaboration between evaluators and stakeholders could make their choices…

  18. Neural network based load and price forecasting and confidence interval estimation in deregulated power markets

    NASA Astrophysics Data System (ADS)

    Zhang, Li

    With the deregulation of the electric power market in New England, an independent system operator (ISO) has been separated from the New England Power Pool (NEPOOL). The ISO provides a regional spot market, with bids on various electricity-related products and services submitted by utilities and independent power producers. A utility can bid on the spot market and buy or sell electricity via bilateral transactions. Good estimation of market clearing prices (MCP) will help utilities and independent power producers determine bidding and transaction strategies with low risks, and this is crucial for utilities to compete in the deregulated environment. MCP prediction, however, is difficult since bidding strategies used by participants are complicated and MCP is a non-stationary process. The main objective of this research is to provide efficient short-term load and MCP forecasting and corresponding confidence interval estimation methodologies. In this research, the complexity of load and MCP with other factors is investigated, and neural networks are used to model the complex relationship between input and output. With an improved learning algorithm and on-line update features for load forecasting, a neural network-based load forecaster was developed, and has been in daily industry use since summer 1998 with good performance. MCP is volatile because of the complexity of market behaviors. In practice, neural network-based MCP predictors usually have a cascaded structure, as several key input factors need to be estimated first. In this research, the uncertainties involved in a cascaded neural network structure for MCP prediction are analyzed, and the prediction distribution under the Bayesian framework is developed. A fast algorithm to evaluate the confidence intervals by using the memoryless quasi-Newton method is also developed. The traditional back-propagation algorithm for neural network learning needs to be improved since MCP is a non-stationary process. The extended Kalman filter (EKF) can be used as an integrated adaptive learning and confidence interval estimation algorithm for neural networks, with fast convergence and small confidence intervals. However, EKF learning is computationally expensive because it involves high-dimensional matrix manipulations. A modified U-D factorization within the decoupled EKF (DEKF-UD) framework is developed in this research. The computational efficiency and numerical stability are significantly improved.

  19. Timing and effect of a safe routes to school program on child pedestrian injury risk during school travel hours: Bayesian changepoint and difference-in-differences analysis.

    PubMed

    DiMaggio, Charles; Chen, Qixuan; Muennig, Peter A; Li, Guohua

    2014-12-01

    In 2005, the US Congress allocated $612 million for a national Safe Routes to School (SRTS) program to encourage walking and bicycling to school. We evaluated the effectiveness of SRTS in controlling pedestrian injuries among school-age children. Bayesian changepoint analysis was applied to model the quarterly counts of pedestrian injuries among 5- to 19-year-old children in New York City between 2001 and 2010 during school-travel hours in census tracts with and without SRTS. An overdispersed Poisson model was used to estimate the difference-in-differences in injury risk between census tracts with and without SRTS following the changepoint. In SRTS-intervention census tracts, a changepoint in the quarterly counts of injuries was identified in the second quarter of 2008, which was consistent with the timing of the implementation of SRTS interventions. In census tracts with SRTS interventions, the estimated quarterly rates of pedestrian injury per 10,000 population among school-age children during school-travel hours were 3.47 (95% Credible Interval [CrI] 2.67, 4.39) prior to the changepoint, and 0.74 (95% CrI 0.30, 1.50) after the changepoint. There was no change in the average number of quarterly injuries in non-SRTS census tracts. Overdispersed Poisson modeling revealed that SRTS implementation was associated with a 44% reduction (95% Confidence Interval [CI] 87% decrease to 130% increase) in school-age pedestrian injury risk during school-travel hours. Bayesian changepoint analysis of quarterly counts of school-age pedestrian injuries successfully identified the timing of the SRTS intervention in New York City. Implementation of the SRTS program in New York City appears to be effective in reducing school-age pedestrian injuries during school-travel hours.
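
    The core of a Bayesian changepoint analysis for count data can be sketched in a few lines. The version below is a deliberately simplified stand-in for the paper's model: it assumes a single changepoint, Poisson counts, and conjugate Gamma priors, so the posterior over the changepoint location has a closed form (the actual study also handled overdispersion):

    import numpy as np
    from scipy.special import gammaln

    def log_marginal(y, a=1.0, b=1.0):
        """Log marginal likelihood of Poisson counts y with a Gamma(a, b)
        prior (shape a, rate b) on the rate -- closed form by conjugacy."""
        n, s = len(y), np.sum(y)
        return (a * np.log(b) - gammaln(a)
                + gammaln(a + s) - (a + s) * np.log(b + n)
                - np.sum(gammaln(y + 1)))

    def changepoint_posterior(y):
        """Posterior over the location of a single changepoint,
        with a uniform prior on where it falls."""
        n = len(y)
        logp = np.array([log_marginal(y[:t]) + log_marginal(y[t:])
                         for t in range(1, n)])
        p = np.exp(logp - logp.max())
        return p / p.sum()

    # hypothetical quarterly injury counts: higher early, lower late
    y = np.array([9, 11, 8, 10, 12, 9, 3, 2, 4, 1, 3, 2])
    post = changepoint_posterior(y)
    print("change is most probable after quarter", np.argmax(post) + 1)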

  20. Outdoor fine particles and nonfatal strokes: systematic review and meta-analysis.

    PubMed

    Shin, Hwashin H; Fann, Neal; Burnett, Richard T; Cohen, Aaron; Hubbell, Bryan J

    2014-11-01

    Epidemiologic studies find that long- and short-term exposure to fine particles (PM2.5) is associated with adverse cardiovascular outcomes, including ischemic and hemorrhagic strokes. However, few systematic reviews or meta-analyses have synthesized these results. We reviewed epidemiologic studies that estimated the risks of nonfatal strokes attributable to ambient PM2.5. To pool risks among studies we used a random-effects model and 2 Bayesian approaches. The first Bayesian approach assumes a normal prior that allows risks to be zero, positive, or negative. The second assumes a gamma prior, where risks can only be positive. This second approach is proposed when the number of studies pooled is small and there is toxicological or clinical literature to support a causal relation. We identified 20 studies suitable for quantitative meta-analysis. Evidence for publication bias is limited. The frequentist meta-analysis produced pooled risk ratios of 1.06 (95% confidence interval = 1.00-1.13) and 1.007 (1.003-1.010) for long- and short-term effects, respectively. The Bayesian meta-analysis found a posterior mean risk ratio of 1.08 (95% posterior interval = 0.96-1.26) and 1.008 (1.003-1.013) from a normal prior, and of 1.05 (1.02-1.10) and 1.008 (1.004-1.013) from a gamma prior, for long- and short-term effects, respectively, per 10 μg/m³ PM2.5. Sufficient evidence exists to develop a concentration-response relation for short- and long-term exposures to PM2.5 and stroke incidence. Long-term exposures to PM2.5 result in a higher risk ratio than short-term exposures, regardless of the pooling method. The evidence for short-term PM2.5-related ischemic stroke is especially strong.
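
    The frequentist arm of such an analysis is straightforward to reproduce. The sketch below implements classical DerSimonian-Laird random-effects pooling of log risk ratios on hypothetical study data; the Bayesian variants with normal and gamma priors would replace this closed-form pooling with MCMC:

    import numpy as np

    def dersimonian_laird(log_rr, se):
        """Random-effects pooling of study log risk ratios; returns the
        pooled risk ratio with a 95% confidence interval."""
        w = 1 / se**2
        mu_fe = np.sum(w * log_rr) / np.sum(w)
        q = np.sum(w * (log_rr - mu_fe)**2)               # Cochran's Q
        k = len(log_rr)
        tau2 = max(0.0, (q - (k - 1)) /
                   (np.sum(w) - np.sum(w**2) / np.sum(w)))  # heterogeneity
        w_re = 1 / (se**2 + tau2)
        mu = np.sum(w_re * log_rr) / np.sum(w_re)
        se_mu = np.sqrt(1 / np.sum(w_re))
        return np.exp(mu), np.exp(mu - 1.96 * se_mu), np.exp(mu + 1.96 * se_mu)

    # hypothetical per-study risk ratios and standard errors on the log scale
    log_rr = np.log([1.04, 1.12, 0.98, 1.09, 1.06])
    se = np.array([0.05, 0.08, 0.06, 0.10, 0.04])
    print(dersimonian_laird(log_rr, se))  # pooled RR and 95% CI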

  1. What’s Driving Uncertainty? The Model or the Model Parameters (What’s Driving Uncertainty? The influences of model and model parameters in data analysis)

    DOE PAGES

    Anderson-Cook, Christine Michaela

    2017-03-01

    Here, one of the substantial improvements to the practice of data analysis in recent decades is the change from reporting just a point estimate for a parameter or characteristic, to now including a summary of uncertainty for that estimate. Understanding the precision of the estimate for the quantity of interest provides better understanding of what to expect and how well we are able to predict future behavior from the process. For example, when we report a sample average as an estimate of the population mean, it is good practice to also provide a confidence interval (or credible interval, if you are doing a Bayesian analysis) to accompany that summary. This helps to calibrate what ranges of values are reasonable given the variability observed in the sample and the amount of data that were included in producing the summary.

  2. A measure of uncertainty regarding the interval constraint of normal mean elicited by two stages of a prior hierarchy.

    PubMed

    Kim, Hea-Jung

    2014-01-01

    This paper considers a hierarchical screened Gaussian model (HSGM) for Bayesian inference in normal models when an interval constraint on the mean parameter space needs to be incorporated in the modeling but such a restriction is uncertain. An objective measure of the uncertainty regarding the interval constraint, accounted for by using the HSGM, is proposed for the Bayesian inference. For this purpose, we derive a maximum entropy prior of the normal mean, eliciting the uncertainty regarding the interval constraint, and then obtain the uncertainty measure by considering the relationship between the maximum entropy prior and the marginal prior of the normal mean in the HSGM. A Bayesian estimation procedure for the HSGM is developed, and two numerical illustrations pertaining to the properties of the uncertainty measure are provided.

  3. Bayesian Correction of Misclassification of Pertussis in Vaccine Effectiveness Studies: How Much Does Underreporting Matter?

    PubMed

    Goldstein, Neal D; Burstyn, Igor; Newbern, E Claire; Tabb, Loni P; Gutowski, Jennifer; Welles, Seth L

    2016-06-01

    Diagnosis of pertussis remains a challenge, and consequently research on the risk of disease might be biased because of misclassification. We quantified this misclassification and corrected for it in a case-control study of children in Philadelphia, Pennsylvania, who were 3 months to 6 years of age and diagnosed with pertussis between 2011 and 2013. Vaccine effectiveness (VE; calculated as (1 - odds ratio) × 100) was used to describe the average reduction in reported pertussis incidence resulting from persons being up to date on pertussis-antigen-containing vaccines. Bayesian techniques were used to correct for purported nondifferential misclassification by reclassifying the cases per the 2014 Council of State and Territorial Epidemiologists pertussis case definition. Naïve VE was 50% (95% confidence interval: 16%, 69%). After correcting for misclassification, VE ranged from 57% (95% credible interval: 30, 73) to 82% (95% credible interval: 43, 95), depending on the amount of underreporting of pertussis that was assumed to have occurred in the study period. Meaningful misclassification was observed in terms of false negatives detected after the incorporation of infant apnea into the 2014 case definition. Although specificity was nearly perfect, sensitivity of the case definition varied from 90% to 20%, depending on the assumption about missed cases. Knowing the degree of underreporting is essential to the accurate evaluation of VE.

  4. Meta-analysis of two studies in the presence of heterogeneity with applications in rare diseases.

    PubMed

    Friede, Tim; Röver, Christian; Wandel, Simon; Neuenschwander, Beat

    2017-07-01

    Random-effects meta-analyses are used to combine evidence of treatment effects from multiple studies. Since treatment effects may vary across trials due to differences in study characteristics, heterogeneity in treatment effects between studies must be accounted for to achieve valid inference. The standard model for random-effects meta-analysis assumes approximately normal effect estimates and a normal random-effects model. However, standard methods based on this model ignore the uncertainty in estimating the between-trial heterogeneity. In the special setting of only two studies and in the presence of heterogeneity, we investigate here alternatives such as the Hartung-Knapp-Sidik-Jonkman method (HKSJ), the modified Knapp-Hartung method (mKH, a variation of the HKSJ method) and Bayesian random-effects meta-analyses with priors covering plausible heterogeneity values; R code to reproduce the examples is presented in an appendix. The properties of these methods are assessed by applying them to five examples from various rare diseases and by a simulation study. Whereas the standard method based on normal quantiles has poor coverage, the HKSJ and mKH generally lead to very long, and therefore inconclusive, confidence intervals. The Bayesian intervals on the whole show satisfying properties and offer a reasonable compromise between these two extremes.
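
    The Bayesian alternative favored by the authors can be sketched with a simple grid approximation (the paper supplies R code; the Python version below is an illustrative stand-in). It pools two hypothetical study estimates under a half-normal heterogeneity prior whose scale (0.5 here) is an assumed "plausible heterogeneity" value:

    import numpy as np
    from scipy.stats import norm, halfnorm

    # two hypothetical study estimates (log odds ratios) and standard errors
    y = np.array([-0.5, -0.1])
    se = np.array([0.20, 0.25])

    mu = np.linspace(-2.0, 2.0, 801)    # grid for the pooled effect
    tau = np.linspace(0.0, 1.5, 301)    # grid for the heterogeneity
    M, T = np.meshgrid(mu, tau)

    # likelihood: y_i ~ N(mu, se_i^2 + tau^2); flat prior on mu,
    # half-normal prior on tau
    logpost = halfnorm(scale=0.5).logpdf(T)
    for yi, si in zip(y, se):
        logpost += norm(M, np.sqrt(si**2 + T**2)).logpdf(yi)

    post = np.exp(logpost - logpost.max())
    mu_marginal = post.sum(axis=0)      # integrate out the heterogeneity
    mu_marginal /= mu_marginal.sum()

    cdf = np.cumsum(mu_marginal)
    lo, hi = mu[np.searchsorted(cdf, 0.025)], mu[np.searchsorted(cdf, 0.975)]
    print(f"95% credible interval for the pooled effect: [{lo:.2f}, {hi:.2f}]")

    Because the heterogeneity is integrated out rather than plugged in, the interval widens honestly with two discrepant studies, without exploding the way the HKSJ-type corrections can.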

  5. Confidence intervals in Flow Forecasting by using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Panagoulia, Dionysia; Tsekouras, George

    2014-05-01

    One of the major inadequacies in the implementation of Artificial Neural Networks (ANNs) for flow forecasting is the development of confidence intervals, because the relevant estimation cannot be implemented directly, in contrast to the classical forecasting methods. The variation in the ANN output is a measure of uncertainty in the model predictions based on the training data set. Different methods for uncertainty analysis, such as bootstrap, Bayesian and Monte Carlo, have already been proposed for hydrologic and geophysical models, while methods for confidence intervals, such as error output, re-sampling and multi-linear regression adapted to ANNs, have been used for power load forecasting [1-2]. The aim of this paper is to present the re-sampling method for ANN prediction models and to develop it for next-day flow forecasting. The re-sampling method is based on the ascending sorting of the errors between real and predicted values for all input vectors. The cumulative sample distribution function of the prediction errors is calculated, and the confidence intervals are estimated by keeping the intermediate values, rejecting the extreme values according to the desired confidence levels, and holding the intervals symmetrical in probability. For the application of the confidence intervals, input vectors are used from the Mesochora catchment in western-central Greece. The ANN's training algorithm is stochastic back-propagation with decreasing functions of the learning rate and momentum term, for which an optimization process is conducted over the crucial parameter values, such as the number of neurons, the kind of activation functions, and the initial values and time parameters of the learning rate and momentum term. Input variables are historical data of previous days, such as flows, nonlinearly weather-related temperatures and nonlinearly weather-related rainfalls, selected through correlation analysis between the flow under prediction and each candidate input variable of different ANN structures [3]. The performance of each ANN structure is evaluated by a voting analysis based on eleven criteria: the root mean square error (RMSE), the correlation index (R), the mean absolute percentage error (MAPE), the mean percentage error (MPE), the mean error (ME), the percentage volume in errors (VE), the percentage error in peak (MF), the normalized mean bias error (NMBE), the normalized root mean square error (NRMSE), the Nash-Sutcliffe model efficiency coefficient (E) and the modified Nash-Sutcliffe model efficiency coefficient (E1). The next-day flow for the test set is calculated using the best ANN structure's model. Consequently, the confidence intervals of various confidence levels for the training, evaluation and test sets are compared in order to explore how well confidence intervals derived from the training and evaluation sets generalise. [1] H.S. Hippert, C.E. Pedreira, R.C. Souza, "Neural networks for short-term load forecasting: A review and evaluation," IEEE Trans. on Power Systems, vol. 16, no. 1, 2001, pp. 44-55. [2] G. J. Tsekouras, N.E. Mastorakis, F.D. Kanellos, V.T. Kontargyri, C.D. Tsirekis, I.S. Karanasiou, Ch.N. Elias, A.D. Salis, P.A. Kontaxis, A.A. Gialketsi: "Short term load forecasting in Greek interconnected power system using ANN: Confidence Interval using a novel re-sampling technique with corrective Factor", WSEAS International Conference on Circuits, Systems, Electronics, Control & Signal Processing, (CSECS '10), Vouliagmeni, Athens, Greece, December 29-31, 2010. [3] D. Panagoulia, I. Trichakis, G. J. Tsekouras: "Flow Forecasting via Artificial Neural Networks - A Study for Input Variables conditioned on atmospheric circulation", European Geosciences Union, General Assembly 2012 (NH1.1 / AS1.16 - Extreme meteorological and hydrological events induced by severe weather and climate change), Vienna, Austria, 22-27 April 2012.
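
    The re-sampling procedure described above reduces to a few lines: sort the validation-set errors and cut off equal probability mass in each tail. The flows and predictions below are synthetic placeholders:

    import numpy as np

    def resampling_interval(y_true, y_pred, level=0.95):
        """CI for new predictions from the empirical distribution of past
        errors: sort the errors, keep the intermediate values, and reject
        equal probability mass in both tails."""
        errors = np.sort(y_true - y_pred)
        lo = np.quantile(errors, (1 - level) / 2)
        hi = np.quantile(errors, 1 - (1 - level) / 2)
        return lo, hi

    # hypothetical observed flows and ANN predictions on a validation set
    rng = np.random.default_rng(1)
    y_true = rng.gamma(5.0, 10.0, 500)
    y_pred = y_true + rng.normal(0.0, 4.0, 500)

    lo, hi = resampling_interval(y_true, y_pred)
    print(f"a next-day forecast f lies in [f{lo:+.1f}, f{hi:+.1f}] m^3/s")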

  6. A history estimate and evolutionary analysis of rabies virus variants in China.

    PubMed

    Ming, Pinggang; Yan, Jiaxin; Rayner, Simon; Meng, Shengli; Xu, Gelin; Tang, Qing; Wu, Jie; Luo, Jing; Yang, Xiaoming

    2010-03-01

    To investigate the evolutionary dynamics of rabies virus (RABV) in China, we collected and sequenced 55 isolates sampled from 14 Chinese provinces over the last 40 years and performed a coalescent-based analysis of the G gene. This revealed that the RABV currently circulating in China is composed of three main groups. Bayesian coalescent analysis estimated the date of the most recent common ancestor for the current Chinese RABV strains to be 1412 (with a 95% confidence interval of 1006-1736). The estimated mean substitution rate for the G gene sequences (3.961 × 10⁻⁴ substitutions per site per year) was in accordance with previous reports for RABV.

  7. Analysis of statistical and standard algorithms for detecting muscle onset with surface electromyography.

    PubMed

    Tenan, Matthew S; Tweedell, Andrew J; Haynes, Courtney A

    2017-01-01

    The timing of muscle activity is a commonly applied analytic method to understand how the nervous system controls movement. This study systematically evaluates six classes of standard and statistical algorithms to determine muscle onset in both experimental surface electromyography (EMG) and simulated EMG with a known onset time. Eighteen participants had EMG collected from the biceps brachii and vastus lateralis while performing a biceps curl or knee extension, respectively. Three established methods and three statistical methods for EMG onset were evaluated. Linear envelope, Teager-Kaiser energy operator + linear envelope and sample entropy were the established methods evaluated, while general time series mean/variance, sequential and batch processing of parametric and nonparametric tools, and Bayesian changepoint analysis were the statistical techniques used. Visual EMG onset (experimental data) and objective EMG onset (simulated data) were compared with algorithmic EMG onset via root mean square error and linear regression models for stepwise elimination of inferior algorithms. The top algorithms for both data types were analyzed for their mean agreement with the gold standard onset and evaluation of 95% confidence intervals. The top algorithms were all Bayesian changepoint analysis iterations where the parameter of the prior (p0) was zero. The best-performing Bayesian algorithms used p0 = 0 and a posterior probability for onset determination of 60-90%. While existing algorithms performed reasonably, the Bayesian changepoint analysis methodology provides greater reliability and accuracy when determining the singular onset of EMG activity in a time series. Further research is needed to determine if this class of algorithms performs equally well when the time series has multiple bursts of muscle activity.

  8. Assessing Interval Estimation Methods for Hill Model ...

    EPA Pesticide Factsheets

    The Hill model of concentration-response is ubiquitous in toxicology, perhaps because its parameters directly relate to biologically significant metrics of toxicity such as efficacy and potency. Point estimates of these parameters obtained through least squares regression or maximum likelihood are commonly used in high-throughput risk assessment, but such estimates typically fail to include reliable information concerning confidence in (or precision of) the estimates. To address this issue, we examined methods for assessing uncertainty in Hill model parameter estimates derived from concentration-response data. In particular, using a sample of ToxCast concentration-response data sets, we applied four methods for obtaining interval estimates that are based on asymptotic theory, bootstrapping (two varieties), and Bayesian parameter estimation, and then compared the results. These interval estimation methods generally did not agree, so we devised a simulation study to assess their relative performance. We generated simulated data by constructing four statistical error models capable of producing concentration-response data sets comparable to those observed in ToxCast. We then applied the four interval estimation methods to the simulated data and compared the actual coverage of the interval estimates to the nominal coverage (e.g., 95%) in order to quantify the performance of each of the methods in a variety of cases (i.e., different values of the true Hill model parameters).
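
    One of the bootstrap varieties is easy to sketch. The hypothetical example below fits a three-parameter Hill curve to synthetic concentration-response data and derives a case-resampling bootstrap interval for the AC50 (potency); the actual ToxCast fitting constraints differ:

    import numpy as np
    from scipy.optimize import curve_fit

    def hill(c, top, ac50, n):
        """Three-parameter Hill curve: efficacy (top), potency (ac50), slope (n)."""
        return top * c**n / (ac50**n + c**n)

    rng = np.random.default_rng(7)
    conc = np.repeat(np.logspace(-2, 2, 8), 3)        # hypothetical design
    resp = hill(conc, 100.0, 1.5, 1.2) + rng.normal(0.0, 5.0, conc.size)

    bounds = ([0.0, 1e-3, 0.1], [200.0, 100.0, 10.0]) # keep parameters positive
    p0 = [resp.max(), 1.0, 1.0]

    boot = []
    for _ in range(1000):                             # case-resampling bootstrap
        i = rng.integers(0, conc.size, conc.size)
        try:
            p, _ = curve_fit(hill, conc[i], resp[i], p0=p0, bounds=bounds)
            boot.append(p)
        except RuntimeError:                          # skip non-converging fits
            continue

    ac50 = np.array(boot)[:, 1]
    print("bootstrap 95% interval for AC50:", np.percentile(ac50, [2.5, 97.5]))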

  9. Dealing with uncertainty in the probability of overtopping of a flood mitigation dam

    NASA Astrophysics Data System (ADS)

    Michailidi, Eleni Maria; Bacchi, Baldassare

    2017-05-01

    In recent years, copula multivariate functions have been used to model, probabilistically, the most important variables of flood events: discharge peak, flood volume and duration. However, in most cases the sampling uncertainty, from which small-sized samples suffer, is neglected. In this paper, considering a real reservoir controlled by a dam as a case study, we apply a structure-based approach to estimate the probability of reaching specific reservoir levels, taking into account the key components of an event (flood peak, volume, hydrograph shape) and of the reservoir (rating curve, volume-water depth relation). Additionally, we improve the information about the peaks from historical data and reports through a Bayesian framework, allowing the incorporation of supplementary knowledge from different sources and its associated error. As shown here, the extra information can result in a very different inferred parameter set, and consequently this is reflected as a strong variability of the reservoir level associated with a given return period. Most importantly, the sampling uncertainty is accounted for in both cases (single-site, and multi-site with historical information scenarios), and Monte Carlo confidence intervals for the maximum water level are calculated. It is shown that water levels of specific return periods in many cases overlap, thus making risk assessment deceptive if confidence intervals are not provided.

  10. A simple Bayesian approach to quantifying confidence level of adverse event incidence proportion in small samples.

    PubMed

    Liu, Fang

    2016-01-01

    In both clinical development and post-marketing of a new therapy or treatment, the incidence of an adverse event (AE) is always a concern. When sample sizes are small, large-sample inferential approaches to an AE incidence proportion in a certain time period no longer apply. In this brief discussion, we introduce a simple Bayesian framework to quantify, in small-sample studies and the rare-AE case, (1) the confidence level that the incidence proportion of a particular AE p is over or below a threshold, (2) the lower or upper bounds on p with a certain level of confidence, and (3) the minimum required number of patients with an AE before we can be certain that p surpasses a specific threshold, or the maximum allowable number of patients with an AE after which we can no longer be certain that p is below a certain threshold, given a certain confidence level. The method is easy to understand and implement; the interpretation of the results is intuitive. This article also demonstrates the usefulness of simple Bayesian concepts when it comes to answering practical questions.
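
    With a conjugate Beta prior, all three quantities follow directly from the Beta posterior. A minimal sketch, assuming a flat Beta(1, 1) prior (the paper may recommend other prior choices):

    from scipy.stats import beta

    def ae_summaries(x, n, threshold, conf=0.90, a=1.0, b=1.0):
        """Posterior Beta(a + x, b + n - x) summaries for the AE proportion p,
        given x events among n patients and a Beta(a, b) prior."""
        post = beta(a + x, b + n - x)
        return {"P(p > threshold)": 1.0 - post.cdf(threshold),
                "upper bound on p": post.ppf(conf),
                "lower bound on p": post.ppf(1.0 - conf)}

    def min_events(n, threshold, conf=0.90, a=1.0, b=1.0):
        """Smallest number of AE patients after which we are at least `conf`
        sure that p surpasses `threshold`."""
        for x in range(n + 1):
            if 1.0 - beta(a + x, b + n - x).cdf(threshold) >= conf:
                return x
        return None

    print(ae_summaries(x=2, n=30, threshold=0.10))
    print(min_events(n=30, threshold=0.10))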

  11. Results of Bayesian methods depend on details of implementation: An example of estimating salmon escapement goals

    USGS Publications Warehouse

    Adkison, Milo D.; Peterman, R.M.

    1996-01-01

    Bayesian methods have been proposed to estimate optimal escapement goals, using both knowledge about physical determinants of salmon productivity and stock-recruitment data. The Bayesian approach has several advantages over many traditional methods for estimating stock productivity: it allows integration of information from diverse sources and provides a framework for decision-making that takes into account uncertainty reflected in the data. However, results can be critically dependent on details of implementation of this approach. For instance, unintended and unwarranted confidence about stock-recruitment relationships can arise if the range of relationships examined is too narrow, if too few discrete alternatives are considered, or if data are contradictory. This unfounded confidence can result in a suboptimal choice of a spawning escapement goal.

  12. BATSE gamma-ray burst line search. 2: Bayesian consistency methodology

    NASA Technical Reports Server (NTRS)

    Band, D. L.; Ford, L. A.; Matteson, J. L.; Briggs, M.; Paciesas, W.; Pendleton, G.; Preece, R.; Palmer, D.; Teegarden, B.; Schaefer, B.

    1994-01-01

    We describe a Bayesian methodology to evaluate the consistency between the reported Ginga and Burst and Transient Source Experiment (BATSE) detections of absorption features in gamma-ray burst spectra. Currently no features have been detected by BATSE, but this methodology will still be applicable if and when such features are discovered. The Bayesian methodology permits the comparison of hypotheses regarding the two detectors' observations and makes explicit the subjective aspects of our analysis (e.g., the quantification of our confidence in detector performance). We also present non-Bayesian consistency statistics. Based on preliminary calculations of line detectability, we find that both the Bayesian and non-Bayesian techniques show that the BATSE and Ginga observations are consistent given our understanding of these detectors.

  13. Next Steps in Bayesian Structural Equation Models: Comments on, Variations of, and Extensions to Muthen and Asparouhov (2012)

    ERIC Educational Resources Information Center

    Rindskopf, David

    2012-01-01

    Muthen and Asparouhov (2012) made a strong case for the advantages of Bayesian methodology in factor analysis and structural equation models. I show additional extensions and adaptations of their methods and show how non-Bayesians can take advantage of many (though not all) of these advantages by using interval restrictions on parameters. By…

  14. Comparing energy sources for surgical ablation of atrial fibrillation: a Bayesian network meta-analysis of randomized, controlled trials.

    PubMed

    Phan, Kevin; Xie, Ashleigh; Kumar, Narendra; Wong, Sophia; Medi, Caroline; La Meir, Mark; Yan, Tristan D

    2015-08-01

    Simplified maze procedures involving radiofrequency, cryoenergy and microwave energy sources have been increasingly utilized for the surgical treatment of atrial fibrillation as an alternative to the traditional cut-and-sew approach. In the absence of direct comparisons, a Bayesian network meta-analysis is an alternative way to assess the relative effect of different treatments, using indirect evidence. A Bayesian meta-analysis of indirect evidence was performed using 16 published randomized trials identified from 6 databases. Rank probability analysis was used to rank each intervention in terms of its probability of having the best outcome. Sinus rhythm prevalence beyond the 12-month follow-up was similar between the cut-and-sew, microwave and radiofrequency approaches, which were all ranked better than cryoablation (39, 36, and 25 vs 1%, respectively). The cut-and-sew maze was ranked worst in terms of mortality outcomes compared with microwave, radiofrequency and cryoenergy (2 vs 19, 34, and 24%, respectively). The cut-and-sew maze procedure was associated with significantly lower stroke rates compared with microwave ablation [odds ratio <0.01; 95% confidence interval 0.00, 0.82], and ranked best in terms of pacemaker requirements compared with microwave, radiofrequency and cryoenergy (81 vs 14, 1, and <0.01%, respectively). Bayesian rank probability analysis shows that the cut-and-sew approach is associated with the best outcomes in terms of sinus rhythm prevalence and stroke outcomes, and remains the gold standard approach for AF treatment. Given the limitations of indirect comparison analysis, these results should be viewed with caution and not over-interpreted.
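
    Rank probability analysis itself is simple once posterior draws are available: count how often each treatment's draw is best. The numbers below are hypothetical posterior samples, not the trial data:

    import numpy as np

    rng = np.random.default_rng(5)

    # hypothetical posterior draws of an efficacy outcome for four treatments
    draws = {"cut-and-sew": rng.normal(0.80, 0.05, 10_000),
             "radiofrequency": rng.normal(0.78, 0.06, 10_000),
             "microwave": rng.normal(0.76, 0.08, 10_000),
             "cryoenergy": rng.normal(0.70, 0.07, 10_000)}

    # rank probability: how often each treatment has the best draw
    names = list(draws)
    stacked = np.vstack([draws[n] for n in names])
    best = np.argmax(stacked, axis=0)
    for i, n in enumerate(names):
        print(f"P({n} is best) = {np.mean(best == i):.2f}")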

  15. Multivariable and Bayesian Network Analysis of Outcome Predictors in Acute Aneurysmal Subarachnoid Hemorrhage: Review of a Pure Surgical Series in the Post-International Subarachnoid Aneurysm Trial Era.

    PubMed

    Zador, Zsolt; Huang, Wendy; Sperrin, Matthew; Lawton, Michael T

    2018-06-01

    Following the International Subarachnoid Aneurysm Trial (ISAT), evolving treatment modalities for acute aneurysmal subarachnoid hemorrhage (aSAH) have changed the case mix of patients undergoing urgent surgical clipping. To update our knowledge of outcome predictors, we analyzed admission parameters in a pure surgical series using variable importance ranking and machine learning. We reviewed a single surgeon's case series of 226 patients suffering from aSAH treated with urgent surgical clipping. Predictions were made using logistic regression models, and predictive performance was assessed using areas under the receiver operating characteristic curve (AUC). We established variable importance ranking using partial Nagelkerke R2 scores. Probabilistic associations between variables were depicted using Bayesian networks, a method of machine learning. Importance ranking showed that World Federation of Neurosurgical Societies (WFNS) grade and age were the most influential outcome prognosticators. Inclusion of only these 2 predictors was sufficient to maintain model performance compared to when all variables were considered (AUC = 0.8222, 95% confidence interval (CI): 0.7646-0.88 vs 0.8218, 95% CI: 0.7616-0.8821, respectively, DeLong's P = .992). Bayesian networks showed that age and WFNS grade were associated with several variables, such as laboratory results and cardiorespiratory parameters. Our study is the first to report early outcomes and formal predictor importance ranking following aSAH in a post-ISAT surgical case series. Models showed good predictive power with fewer relevant predictors than in similar-sized series. Bayesian networks proved to be a powerful tool for visualizing the widespread association of the 2 key predictors with admission variables, explaining their importance and demonstrating the potential for hypothesis generation.

  16. Analysis of statistical and standard algorithms for detecting muscle onset with surface electromyography

    PubMed Central

    Tweedell, Andrew J.; Haynes, Courtney A.

    2017-01-01

    The timing of muscle activity is a commonly applied analytic method to understand how the nervous system controls movement. This study systematically evaluates six classes of standard and statistical algorithms to determine muscle onset in both experimental surface electromyography (EMG) and simulated EMG with a known onset time. Eighteen participants had EMG collected from the biceps brachii and vastus lateralis while performing a biceps curl or knee extension, respectively. Three established methods and three statistical methods for EMG onset were evaluated. Linear envelope, Teager-Kaiser energy operator + linear envelope and sample entropy were the established methods evaluated, while general time series mean/variance, sequential and batch processing of parametric and nonparametric tools, and Bayesian changepoint analysis were the statistical techniques used. Visual EMG onset (experimental data) and objective EMG onset (simulated data) were compared with algorithmic EMG onset via root mean square error and linear regression models for stepwise elimination of inferior algorithms. The top algorithms for both data types were analyzed for their mean agreement with the gold standard onset and evaluation of 95% confidence intervals. The top algorithms were all Bayesian changepoint analysis iterations where the parameter of the prior (p0) was zero. The best-performing Bayesian algorithms used p0 = 0 and a posterior probability for onset determination of 60–90%. While existing algorithms performed reasonably, the Bayesian changepoint analysis methodology provides greater reliability and accuracy when determining the singular onset of EMG activity in a time series. Further research is needed to determine if this class of algorithms performs equally well when the time series has multiple bursts of muscle activity. PMID:28489897

  17. Risk analysis of new oral anticoagulants for gastrointestinal bleeding and intracranial hemorrhage in atrial fibrillation patients: a systematic review and network meta-analysis.

    PubMed

    Xu, Wei-Wei; Hu, Shen-Jiang; Wu, Tao

    2017-07-01

    Antithrombotic therapy using new oral anticoagulants (NOACs) in patients with atrial fibrillation (AF) has been generally shown to have a favorable risk-benefit profile. Since there has been dispute about the risks of gastrointestinal bleeding (GIB) and intracranial hemorrhage (ICH), we sought to conduct a systematic review and network meta-analysis using Bayesian inference to analyze the risks of GIB and ICH in AF patients taking NOACs. We analyzed data from 20 randomized controlled trials of 91 671 AF patients receiving anticoagulants, antiplatelet drugs, or placebo. Bayesian network meta-analysis of two different evidence networks was performed using a binomial likelihood model, based on a network in which different agents (and doses) were treated as separate nodes. Odds ratios (ORs) and 95% confidence intervals (CIs) were modeled using Markov chain Monte Carlo methods. Indirect comparisons with the Bayesian model confirmed that aspirin+clopidogrel significantly increased the risk of GIB in AF patients compared to the placebo (OR 0.33, 95% CI 0.01-0.92). Warfarin was identified as greatly increasing the risk of ICH compared to edoxaban 30 mg (OR 3.42, 95% CI 1.22-7.24) and dabigatran 110 mg (OR 3.56, 95% CI 1.10-8.45). We further ranked the NOACs for the lowest risk of GIB (apixaban 5 mg) and ICH (apixaban 5 mg, dabigatran 110 mg, and edoxaban 30 mg). Bayesian network meta-analysis of treatment of non-valvular AF patients with anticoagulants suggested that NOACs do not increase risks of GIB and/or ICH, compared to each other.

  18. A Bayesian bird's eye view of ‘Replications of important results in social psychology’

    PubMed Central

    Schönbrodt, Felix D.; Yao, Yuling; Gelman, Andrew; Wagenmakers, Eric-Jan

    2017-01-01

    We applied three Bayesian methods to reanalyse the preregistered contributions to the Social Psychology special issue ‘Replications of Important Results in Social Psychology’ (Nosek & Lakens. 2014 Registered reports: a method to increase the credibility of published results. Soc. Psychol. 45, 137–141. (doi:10.1027/1864-9335/a000192)). First, individual-experiment Bayesian parameter estimation revealed that for directed effect size measures, only three out of 44 central 95% credible intervals did not overlap with zero and fell in the expected direction. For undirected effect size measures, only four out of 59 credible intervals contained values greater than 0.10 (10% of variance explained) and only 19 intervals contained values larger than 0.05. Second, a Bayesian random-effects meta-analysis for all 38 t-tests showed that only one out of the 38 hierarchically estimated credible intervals did not overlap with zero and fell in the expected direction. Third, a Bayes factor hypothesis test was used to quantify the evidence for the null hypothesis against a default one-sided alternative. Only seven out of 60 Bayes factors indicated non-anecdotal support in favour of the alternative hypothesis (BF10>3), whereas 51 Bayes factors indicated at least some support for the null hypothesis. We hope that future analyses of replication success will embrace a more inclusive statistical approach by adopting a wider range of complementary techniques. PMID:28280547

  19. Bayesian modelling to estimate the test characteristics of coprology, coproantigen ELISA and a novel real-time PCR for the diagnosis of taeniasis.

    PubMed

    Praet, Nicolas; Verweij, Jaco J; Mwape, Kabemba E; Phiri, Isaac K; Muma, John B; Zulu, Gideon; van Lieshout, Lisette; Rodriguez-Hidalgo, Richar; Benitez-Ortiz, Washington; Dorny, Pierre; Gabriël, Sarah

    2013-05-01

    To estimate and compare the performances of coprology, copro-Ag ELISA and real-time polymerase chain reaction assay (copro-PCR) for detection of Taenia solium tapeworm carriers. The three diagnostic tests were applied on 817 stool samples collected in two Zambian communities where taeniasis is endemic. A Bayesian approach was used to allow estimation of the test characteristics. Two (0.2%; 95% Confidence Interval (CI): 0-0.8), 67 (8.2%; 95% CI: 6.4-10.3) and 10 (1.2%; 95% CI: 0.5-2.2) samples were positive using coprology, copro-Ag ELISA and copro-PCR, respectively. Specificities of 99.9%, 92.0% and 99.0% were determined for coprology, copro-Ag ELISA and copro-PCR, respectively. Sensitivities of 52.5%, 84.5% and 82.7% were determined for coprology, copro-Ag ELISA and copro-PCR, respectively. We urge for additional studies exploring possible cross-reactions of the copro-Ag ELISA and for the use of more sensitive tests, such as copro-PCR, for the detection of tapeworm carriers, which is a key factor in controlling the parasite in endemic areas.

  20. Sampling Theory and Confidence Intervals for Effect Sizes: Using ESCI To Illustrate "Bouncing" Confidence Intervals.

    ERIC Educational Resources Information Center

    Du, Yunfei

    This paper discusses the impact of sampling error on the construction of confidence intervals around effect sizes. Sampling error affects the location and precision of confidence intervals. Meta-analytic resampling demonstrates that confidence intervals can haphazardly bounce around the true population parameter. Special software with graphical…

  1. On Some Confidence Intervals for Estimating the Mean of a Skewed Population

    ERIC Educational Resources Information Center

    Shi, W.; Kibria, B. M. Golam

    2007-01-01

    A number of methods are available in the literature to measure confidence intervals. Here, confidence intervals for estimating the population mean of a skewed distribution are considered. This note proposes two alternative confidence intervals, namely, Median t and Mad t, which are simple adjustments to the Student's t confidence interval. In…

  2. Using structural equation modeling for network meta-analysis.

    PubMed

    Tu, Yu-Kang; Wu, Yun-Chun

    2017-07-14

    Network meta-analysis overcomes the limitations of traditional pair-wise meta-analysis by incorporating all available evidence into a general statistical framework for simultaneous comparisons of several treatments. Currently, network meta-analyses are undertaken either within Bayesian hierarchical linear models or frequentist generalized linear mixed models. Structural equation modeling (SEM) is a statistical method originally developed for modeling causal relations among observed and latent variables. As the random effect is explicitly modeled as a latent variable in SEM, it is very flexible for analysts to specify complex random-effect structures and to place linear and nonlinear constraints on parameters. The aim of this article is to show how to undertake a network meta-analysis within the statistical framework of SEM. We used an example dataset to demonstrate that the standard fixed and random effect network meta-analysis models can be easily implemented in SEM. It contains results of 26 studies that directly compared three treatment groups A, B and C for prevention of first bleeding in patients with liver cirrhosis. We also showed that a new approach to network meta-analysis based on the technique of the unrestricted weighted least squares (UWLS) method can be undertaken using SEM. For both the fixed and random effect network meta-analysis, SEM yielded similar coefficients and confidence intervals to those reported in the previous literature. The point estimates of the two UWLS models were identical to those of the fixed effect model, but the confidence intervals were wider. This is consistent with results from the traditional pairwise meta-analyses. Compared to the UWLS model with a common variance adjustment factor, the UWLS model with a unique variance adjustment factor has wider confidence intervals when the heterogeneity is larger in the pairwise comparison. The UWLS model with a unique variance adjustment factor reflects the difference in heterogeneity within each comparison. SEM provides a very flexible framework for univariate and multivariate meta-analysis, and its potential as a powerful tool for advanced meta-analysis remains to be explored.

  3. RadVel: The Radial Velocity Modeling Toolkit

    NASA Astrophysics Data System (ADS)

    Fulton, Benjamin J.; Petigura, Erik A.; Blunt, Sarah; Sinukoff, Evan

    2018-04-01

    RadVel is an open-source Python package for modeling Keplerian orbits in radial velocity (RV) timeseries. RadVel provides a convenient framework to fit RVs using maximum a posteriori optimization and to compute robust confidence intervals by sampling the posterior probability density via Markov Chain Monte Carlo (MCMC). RadVel allows users to float or fix parameters, impose priors, and perform Bayesian model comparison. We have implemented real-time MCMC convergence tests to ensure adequate sampling of the posterior. RadVel can output a number of publication-quality plots and tables. Users may interface with RadVel through a convenient command-line interface or directly from Python. The code is object-oriented and thus naturally extensible. We encourage contributions from the community. Documentation is available at http://radvel.readthedocs.io.

  4. Simulation-based Bayesian inference for latent traits of item response models: Introduction to the ltbayes package for R.

    PubMed

    Johnson, Timothy R; Kuhn, Kristine M

    2015-12-01

    This paper introduces the ltbayes package for R. This package includes a suite of functions for investigating the posterior distribution of latent traits of item response models. These include functions for simulating realizations from the posterior distribution, profiling the posterior density or likelihood function, calculation of posterior modes or means, Fisher information functions and observed information, and profile likelihood confidence intervals. Inferences can be based on individual response patterns or sets of response patterns such as sum scores. Functions are included for several common binary and polytomous item response models, but the package can also be used with user-specified models. This paper introduces some background and motivation for the package, and includes several detailed examples of its use.

  5. Social Information Is Integrated into Value and Confidence Judgments According to Its Reliability.

    PubMed

    De Martino, Benedetto; Bobadilla-Suarez, Sebastian; Nouguchi, Takao; Sharot, Tali; Love, Bradley C

    2017-06-21

    How much we like something, whether it be a bottle of wine or a new film, is affected by the opinions of others. However, the social information that we receive can be contradictory and vary in its reliability. Here, we tested whether the brain incorporates these statistics when judging value and confidence. Participants provided value judgments about consumer goods in the presence of online reviews. We found that participants updated their initial value and confidence judgments in a Bayesian fashion, taking into account both the uncertainty of their initial beliefs and the reliability of the social information. Activity in dorsomedial prefrontal cortex tracked the degree of belief update. Analogous to how lower-level perceptual information is integrated, we found that the human brain integrates social information according to its reliability when judging value and confidence. SIGNIFICANCE STATEMENT The field of perceptual decision making has shown that the sensory system integrates different sources of information according to their respective reliability, as predicted by a Bayesian inference scheme. In this work, we hypothesized that a similar coding scheme is implemented by the human brain to process social signals and guide complex, value-based decisions. We provide experimental evidence that the human prefrontal cortex's activity is consistent with a Bayesian computation that integrates social information that differs in reliability and that this integration affects the neural representation of value and confidence.
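
    The reliability-weighted integration invoked here is the standard precision-weighted Bayesian update for Gaussian beliefs; a minimal sketch with hypothetical numbers:

    def integrate_social(mu_self, var_self, mu_social, var_social):
        """Precision-weighted (Bayesian) combination of one's own value
        estimate with social information; less reliable reviews get
        proportionally less weight."""
        w_self = 1 / var_self
        w_social = 1 / var_social
        mu_post = (w_self * mu_self + w_social * mu_social) / (w_self + w_social)
        var_post = 1 / (w_self + w_social)   # confidence rises after integration
        return mu_post, var_post

    # a wine rated 6/10 with low confidence meets reliable 8/10 reviews
    print(integrate_social(6.0, 2.0, 8.0, 0.5))  # posterior pulled toward 8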

  6. A Bayesian-based two-stage inexact optimization method for supporting stream water quality management in the Three Gorges Reservoir region.

    PubMed

    Hu, X H; Li, Y P; Huang, G H; Zhuang, X W; Ding, X W

    2016-05-01

    In this study, a Bayesian-based two-stage inexact optimization (BTIO) method is developed for supporting water quality management by coupling Bayesian analysis with interval two-stage stochastic programming (ITSP). The BTIO method is capable of addressing uncertainties caused by insufficient inputs in the water quality model as well as uncertainties expressed as probabilistic distributions and interval numbers. The BTIO method is applied to a real case of water quality management in the Xiangxi River basin in the Three Gorges Reservoir region to seek optimal water quality management schemes under various uncertainties. Interval solutions for production patterns under a range of probabilistic water quality constraints have been generated. Results obtained demonstrate compromises between the system benefit and the system failure risk due to inherent uncertainties that exist in various system components. Moreover, information about pollutant emissions is obtained, which would help managers adjust the production patterns of regional industry and local policies considering the interactions of water quality requirements, economic benefit, and industry structure.

  7. Evaluating propagation method performance over time with Bayesian updating: An application to incubator testing

    USGS Publications Warehouse

    Converse, Sarah J.; Chandler, J. N.; Olsen, Glenn H.; Shafer, C. C.; Hartup, Barry K.; Urbanek, Richard P.

    2010-01-01

    In captive-rearing programs, small sample sizes can limit the quality of information on performance of propagation methods. Bayesian updating can be used to increase information on method performance over time. We demonstrate an application to incubator testing at USGS Patuxent Wildlife Research Center. A new type of incubator was purchased for use in the whooping crane (Grus americana) propagation program, which produces birds for release. We tested the new incubator for reliability, using sandhill crane (Grus canadensis) eggs as surrogates. We determined that the new incubator should result in hatching rates no more than 5% lower than the available incubators, with 95% confidence, before it would be used to incubate whooping crane eggs. In 2007, 5 healthy chicks hatched from 12 eggs in the new incubator, and 2 hatched from 5 in an available incubator, for a median posterior difference of <1%, but with a large 95% credible interval (-41%, 43%). In 2008, we implemented a double-blind evaluation method, where a veterinarian determined whether eggs produced chicks that, at hatching, had no apparent health problems that would impede future release. We used the 2007 estimates as priors in the 2008 analysis. In 2008, 7 normal chicks hatched from 15 eggs in the new incubator, and 11 hatched from 15 in an available incubator, for a median posterior difference of 19%, with 95% credible interval (-8%, 44%). The increased sample size has increased our understanding of incubator performance. While additional data will be collected, at this time the new incubator does not appear adequate for use with whooping crane eggs.
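
    The updating scheme is conjugate Beta-Binomial, so it is easy to sketch. Assuming flat Beta(1, 1) starting priors (an assumption; the study's exact priors are not restated here), the 2007 counts form the 2008 priors, and the resulting posterior roughly reproduces the reported 19% median difference and (-8%, 44%) credible interval:

    import numpy as np

    rng = np.random.default_rng(3)

    # 2007, starting from flat Beta(1, 1) priors:
    # 5 of 12 hatched (new incubator), 2 of 5 (available incubator)
    a_new, b_new = 1 + 5, 1 + (12 - 5)
    a_ref, b_ref = 1 + 2, 1 + (5 - 2)

    # 2008, with the 2007 posteriors reused as priors: 7 of 15 and 11 of 15
    a_new, b_new = a_new + 7, b_new + (15 - 7)
    a_ref, b_ref = a_ref + 11, b_ref + (15 - 11)

    # Monte Carlo posterior for the difference in hatch rates
    diff = rng.beta(a_ref, b_ref, 100_000) - rng.beta(a_new, b_new, 100_000)
    print("posterior median difference:", round(float(np.median(diff)), 3))
    print("95% credible interval:", np.percentile(diff, [2.5, 97.5]).round(3))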

  8. Parameter Estimation for Compact Binaries with Ground-Based Gravitational-Wave Observations Using the LALInference

    NASA Technical Reports Server (NTRS)

    Veitch, J.; Raymond, V.; Farr, B.; Farr, W.; Graff, P.; Vitale, S.; Aylott, B.; Blackburn, K.; Christensen, N.; Coughlin, M.

    2015-01-01

    The Advanced LIGO and Advanced Virgo gravitational wave (GW) detectors will begin operation in the coming years, with compact binary coalescence events a likely source for the first detections. The gravitational waveforms emitted directly encode information about the sources, including the masses and spins of the compact objects. Recovering the physical parameters of the sources from the GW observations is a key analysis task. This work describes the LALInference software library for Bayesian parameter estimation of compact binary signals, which builds on several previous methods to provide a well-tested toolkit which has already been used for several studies. We show that our implementation is able to correctly recover the parameters of compact binary signals from simulated data from the advanced GW detectors. We demonstrate this with a detailed comparison on three compact binary systems: a binary neutron star (BNS), a neutron star - black hole binary (NSBH) and a binary black hole (BBH), where we show a cross-comparison of results obtained using three independent sampling algorithms. These systems were analysed with non-spinning, aligned spin and generic spin configurations respectively, showing that consistent results can be obtained even with the full 15-dimensional parameter space of the generic spin configurations. We also demonstrate statistically that the Bayesian credible intervals we recover correspond to frequentist confidence intervals under correct prior assumptions by analysing a set of 100 signals drawn from the prior. We discuss the computational cost of these algorithms, and describe the general and problem-specific sampling techniques we have used to improve the efficiency of sampling the compact binary coalescence (CBC) parameter space.
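
    The calibration check described here (credible intervals behaving as confidence intervals when parameters are drawn from the prior) can be illustrated with a toy conjugate-normal version of the 100-signal experiment; the waveform model is, of course, vastly simplified away:

    import numpy as np

    rng = np.random.default_rng(9)
    hits = 0
    for _ in range(100):                    # "signals" drawn from the prior
        theta = rng.normal(0.0, 1.0)        # prior: N(0, 1)
        data = rng.normal(theta, 0.5, 10)   # simulated observations
        # conjugate posterior for theta (known noise sigma = 0.5)
        prec = 1.0 + data.size / 0.25
        mean = (data.sum() / 0.25) / prec
        sd = prec ** -0.5
        lo, hi = mean - 1.96 * sd, mean + 1.96 * sd
        hits += (lo <= theta <= hi)
    print("coverage of the 95% credible intervals:", hits / 100)

    Under a correct prior, the printed coverage should hover near 0.95, which is exactly the property the LALInference authors verify on their 100 prior-drawn signals.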

  9. A semiparametric Bayesian proportional hazards model for interval censored data with frailty effects.

    PubMed

    Henschel, Volkmar; Engel, Jutta; Hölzel, Dieter; Mansmann, Ulrich

    2009-02-10

    Multivariate analysis of interval-censored event data based on classical likelihood methods is notoriously cumbersome, and likelihood inference for models that additionally include random effects is not available at all. Existing algorithms pose problems for practical users, such as matrix inversion, slow convergence, and no assessment of statistical uncertainty. MCMC procedures combined with imputation are used to implement hierarchical models for interval-censored data within a Bayesian framework. Two examples from clinical practice demonstrate the handling of clustered interval-censored event times as well as multilayer random effects for inter-institutional quality assessment. The software developed is called survBayes and is freely available on CRAN. The proposed software supports the solution of complex analyses in many fields of clinical epidemiology as well as health services research.

  10. Technical Report: Algorithm and Implementation for Quasispecies Abundance Inference with Confidence Intervals from Metagenomic Sequence Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McLoughlin, Kevin

    2016-01-11

    This report describes the design and implementation of an algorithm for estimating relative microbial abundances, together with confidence limits, using data from metagenomic DNA sequencing. For the background behind this project and a detailed discussion of our modeling approach for metagenomic data, we refer the reader to our earlier technical report, dated March 4, 2014. Briefly, we described a fully Bayesian generative model for paired-end sequence read data, incorporating the effects of the relative abundances, the distribution of sequence fragment lengths, fragment position bias, sequencing errors and variations between the sampled genomes and the nearest reference genomes. A distinctive feature of our modeling approach is the use of a Chinese restaurant process (CRP) to describe the selection of genomes to be sampled, and thus the relative abundances. The CRP component is desirable for fitting abundances to reads that may map ambiguously to multiple targets, because it naturally leads to sparse solutions that select the best representative from each set of nearly equivalent genomes.

  11. Confident difference criterion: a new Bayesian differentially expressed gene selection algorithm with applications.

    PubMed

    Yu, Fang; Chen, Ming-Hui; Kuo, Lynn; Talbott, Heather; Davis, John S

    2015-08-07

    Recently, Bayesian methods have become more popular for analyzing high-dimensional gene expression data, as they allow us to borrow information across different genes and provide powerful estimators for evaluating gene expression levels. It is crucial to develop a simple but efficient gene selection algorithm for detecting differentially expressed (DE) genes based on the Bayesian estimators. In this paper, by extending the two-criterion idea of Chen et al. (Chen M-H, Ibrahim JG, Chi Y-Y. A new class of mixture models for differential gene expression in DNA microarray data. J Stat Plan Inference. 2008;138:387-404), we propose two new gene selection algorithms for general Bayesian models and name these new methods the confident difference criterion methods. One is based on the standardized differences between two mean expression values among genes; the other adds the differences between two variances to it. The proposed confident difference criterion methods first evaluate the posterior probability of a gene having different gene expressions between competitive samples and then declare a gene to be DE if the posterior probability is large. The theoretical connection between the first proposed method, based on the means, and the Bayes factor approach proposed by Yu et al. (Yu F, Chen M-H, Kuo L. Detecting differentially expressed genes using calibrated Bayes factors. Statistica Sinica. 2008;18:783-802) is established under the normal-normal model with equal variances between two samples. The empirical performance of the proposed methods is examined and compared to that of several existing methods via several simulations. The results from these simulation studies show that the proposed confident difference criterion methods outperform the existing methods when comparing gene expressions across different conditions for both microarray studies and sequence-based high-throughput studies. A real dataset is used to further demonstrate the proposed methodology. In the real data application, the confident difference criterion methods successfully identified more clinically important DE genes than the other methods. The confident difference criterion method proposed in this paper provides a new efficient approach for both microarray studies and sequence-based high-throughput studies to identify differentially expressed genes.
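
    A schematic of the first criterion, under stated assumptions: given posterior draws of each gene's two group means and a common scale, a gene is declared DE when the posterior probability that the standardized mean difference exceeds a threshold is large. The threshold values and the stand-in posterior draws below are illustrative, not the paper's settings.

        import numpy as np

        rng = np.random.default_rng(2)
        n_genes, n_draws = 1000, 4000
        delta, p_cut = 0.8, 0.95    # effect-size threshold and probability cutoff (illustrative)

        # Stand-in posterior draws; in practice these come from the fitted Bayesian model.
        true_effect = np.where(rng.random(n_genes) < 0.1, 1.5, 0.0)
        mu1 = rng.normal(true_effect, 0.25, (n_draws, n_genes))
        mu2 = rng.normal(0.0, 0.25, (n_draws, n_genes))
        sd = 1.0

        std_diff = np.abs(mu1 - mu2) / sd
        post_prob = (std_diff > delta).mean(axis=0)   # P(|mu1 - mu2| / sd > delta | data)
        de_genes = np.flatnonzero(post_prob > p_cut)
        print(f"{de_genes.size} genes declared differentially expressed")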

  12. Generalizability of Evidence-Based Assessment Recommendations for Pediatric Bipolar Disorder

    PubMed Central

    Jenkins, Melissa M.; Youngstrom, Eric A.; Youngstrom, Jennifer Kogos; Feeny, Norah C.; Findling, Robert L.

    2013-01-01

    Bipolar disorder is frequently diagnosed clinically in youths who do not actually satisfy DSM-IV criteria, yet cases that would satisfy full DSM-IV criteria are often undetected clinically. Evidence-based assessment methods that incorporate Bayesian reasoning have demonstrated improved diagnostic accuracy and consistency; however, their clinical utility is largely unexplored. The present study examines the effectiveness of promising evidence-based decision making compared to the clinical gold standard. Participants were 562 youths, ages 5-17 and predominantly African American, drawn from a community mental health clinic. Research diagnoses combined a semi-structured interview with youths’ psychiatric, developmental, and family mental health histories. Independent Bayesian estimates, which relied on published risk estimates from other samples, discriminated bipolar diagnoses, Area Under Curve = .75, p < .00005. The Bayesian estimates and confidence ratings correlated, r_s = .30. Agreement about an evidence-based assessment intervention “threshold model” (wait/assess/treat) had kappa = .24, p < .05. No potential moderators of agreement between the Bayesian estimates and confidence ratings, including type of bipolar illness, were significant. Bayesian risk estimates were highly correlated with logistic regression estimates using optimal sample weights, r = .81, p < .0005. Clinical and Bayesian approaches agree in terms of overall concordance and deciding the next clinical action, even when Bayesian predictions are based on published estimates from clinically and demographically different samples. Evidence-based assessment methods may be useful in settings that cannot routinely employ gold standard assessments, and they may help decrease rates of overdiagnosis while promoting earlier identification of true cases. PMID:22004538

  13. Efficient Dependency Computation for Dynamic Hybrid Bayesian Network in On-line System Health Management Applications

    DTIC Science & Technology

    2014-10-02

    intervals (Neil, Tailor, Marquez, Fenton, & Hear, 2007). This is cumbersome, error prone, and usually inaccurate. Even though a universal framework... Science. Neil, M., Tailor, M., Marquez, D., Fenton, N., & Hear. (2007). Inference in Bayesian networks using dynamic discretisation. Statistics

  14. A Systematic Review of the Literature on Cystic Echinococcosis Frequency Worldwide and Its Associated Clinical Manifestations

    PubMed Central

    Budke, Christine M.; Carabin, Hélène; Ndimubanzi, Patrick C.; Nguyen, Hai; Rainwater, Elizabeth; Dickey, Mary; Bhattarai, Rachana; Zeziulin, Oleksandr; Qian, Men-Bao

    2013-01-01

    A systematic literature review of cystic echinococcosis (CE) frequency and symptoms was conducted. Studies without denominators or original data, or those using only one serological test, were excluded. Random-effects log-binomial models were run for CE frequency and proportion of reported symptoms where appropriate. A total of 45 and 25 articles on CE frequency and symptoms, respectively, met all inclusion criteria. Prevalence of CE ranged from 1% to 7% in community-based studies and incidence rates ranged from 0 to 32 cases per 100,000 in hospital-based studies. The CE prevalence was higher in females (Prevalence Proportion Ratio: 1.35 [95% Bayesian Credible Interval: 1.16–1.53]) and increased with age. The most common manifestations of hepatic and pulmonary CE were abdominal pain (57.3% [95% confidence interval (CI): 37.3–76.1%]) and cough (51.3% [95% CI: 35.7–66.7%]), respectively. The results are limited by the small number of unbiased studies. Nonetheless, the age/gender prevalence differences could be used to inform future models of CE burden. PMID:23546806

  15. A systematic review of the literature on cystic echinococcosis frequency worldwide and its associated clinical manifestations.

    PubMed

    Budke, Christine M; Carabin, Hélène; Ndimubanzi, Patrick C; Nguyen, Hai; Rainwater, Elizabeth; Dickey, Mary; Bhattarai, Rachana; Zeziulin, Oleksandr; Qian, Men-Bao

    2013-06-01

    A systematic literature review of cystic echinococcosis (CE) frequency and symptoms was conducted. Studies without denominators or original data, or those using only one serological test, were excluded. Random-effects log-binomial models were run for CE frequency and proportion of reported symptoms where appropriate. A total of 45 and 25 articles on CE frequency and symptoms, respectively, met all inclusion criteria. Prevalence of CE ranged from 1% to 7% in community-based studies and incidence rates ranged from 0 to 32 cases per 100,000 in hospital-based studies. The CE prevalence was higher in females (Prevalence Proportion Ratio: 1.35 [95% Bayesian Credible Interval: 1.16-1.53]) and increased with age. The most common manifestations of hepatic and pulmonary CE were abdominal pain (57.3% [95% confidence interval (CI): 37.3-76.1%]) and cough (51.3% [95% CI: 35.7-66.7%]), respectively. The results are limited by the small number of unbiased studies. Nonetheless, the age/gender prevalence differences could be used to inform future models of CE burden.

  16. Confidence set inference with a prior quadratic bound. [in geophysics]

    NASA Technical Reports Server (NTRS)

    Backus, George E.

    1989-01-01

    Neyman's (1937) theory of confidence sets is developed as a replacement for Bayesian inference (BI) and stochastic inversion (SI) when the prior information is a hard quadratic bound. It is recommended that BI and SI be replaced by confidence set inference (CSI) only in certain circumstances. The geomagnetic problem is used to illustrate the general theory of CSI.

  17. Reducing the width of confidence intervals for the difference between two population means by inverting adaptive tests.

    PubMed

    O'Gorman, Thomas W

    2018-05-01

    In the last decade, it has been shown that an adaptive testing method could be used, along with the Robbins-Monro search procedure, to obtain confidence intervals that are often narrower than traditional confidence intervals. However, these confidence interval limits require a great deal of computation and some familiarity with stochastic search methods. We propose a method for estimating the limits of confidence intervals that uses only a few tests of significance. We compare these limits to those obtained by a lengthy Robbins-Monro stochastic search and find that the proposed method is nearly as accurate as the Robbins-Monro search. Adaptive confidence intervals that are produced by the proposed method are often narrower than traditional confidence intervals when the distributions are long-tailed, skewed, or bimodal. Moreover, the proposed method of estimating confidence interval limits is easy to understand, because it is based solely on the p-values from a few tests of significance.
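
    The underlying idea of obtaining interval limits by inverting tests can be sketched generically: a value delta0 belongs to the 95% interval for a mean difference exactly when a level-0.05 test does not reject it, so each limit is a root of p(delta0) - 0.05. The sketch below inverts a plain Welch t test with bisection; it illustrates the inversion principle only, and does not reproduce the authors' adaptive test or their few-evaluation search strategy.

        import numpy as np
        from scipy import stats, optimize

        rng = np.random.default_rng(3)
        x, y = rng.normal(0.0, 1.0, 40), rng.normal(1.0, 1.5, 40)

        def p_value(delta0):
            # Welch t test of H0: mean(y) - mean(x) = delta0.
            return stats.ttest_ind(y - delta0, x, equal_var=False).pvalue

        est = y.mean() - x.mean()
        # Each confidence limit is where the two-sided p-value crosses 0.05.
        lo = optimize.brentq(lambda d: p_value(d) - 0.05, est - 10, est)
        hi = optimize.brentq(lambda d: p_value(d) - 0.05, est, est + 10)
        print(f"95% CI by test inversion: ({lo:.3f}, {hi:.3f})")

    Substituting an adaptive test for the Welch test in p_value() is what lets the resulting interval narrow under long-tailed, skewed, or bimodal data, at the cost of more computation per p-value.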

  18. Measurements of neutrino oscillation in appearance and disappearance channels by the T2K experiment with 6.6 × 10^20 protons on target

    NASA Astrophysics Data System (ADS)

    Abe, K.; Adam, J.; Aihara, H.; Akiri, T.; Andreopoulos, C.; Aoki, S.; Ariga, A.; Assylbekov, S.; Autiero, D.; Barbi, M.; Barker, G. J.; Barr, G.; Bartet-Friburg, P.; Bass, M.; Batkiewicz, M.; Bay, F.; Berardi, V.; Berger, B. E.; Berkman, S.; Bhadra, S.; Blaszczyk, F. d. M.; Blondel, A.; Bolognesi, S.; Bordoni, S.; Boyd, S. B.; Brailsford, D.; Bravar, A.; Bronner, C.; Buchanan, N.; Calland, R. G.; Caravaca Rodríguez, J.; Cartwright, S. L.; Castillo, R.; Catanesi, M. G.; Cervera, A.; Cherdack, D.; Chikuma, N.; Christodoulou, G.; Clifton, A.; Coleman, J.; Coleman, S. J.; Collazuol, G.; Connolly, K.; Cremonesi, L.; Dabrowska, A.; Danko, I.; Das, R.; Davis, S.; de Perio, P.; De Rosa, G.; Dealtry, T.; Dennis, S. R.; Densham, C.; Dewhurst, D.; Di Lodovico, F.; Di Luise, S.; Dolan, S.; Drapier, O.; Duboyski, T.; Duffy, K.; Dumarchez, J.; Dytman, S.; Dziewiecki, M.; Emery-Schrenk, S.; Ereditato, A.; Escudero, L.; Ferchichi, C.; Feusels, T.; Finch, A. J.; Fiorentini, G. A.; Friend, M.; Fujii, Y.; Fukuda, Y.; Furmanski, A. P.; Galymov, V.; Garcia, A.; Giffin, S.; Giganti, C.; Gilje, K.; Goeldi, D.; Golan, T.; Gonin, M.; Grant, N.; Gudin, D.; Hadley, D. R.; Haegel, L.; Haesler, A.; Haigh, M. D.; Hamilton, P.; Hansen, D.; Hara, T.; Hartz, M.; Hasegawa, T.; Hastings, N. C.; Hayashino, T.; Hayato, Y.; Hearty, C.; Helmer, R. L.; Hierholzer, M.; Hignight, J.; Hillairet, A.; Himmel, A.; Hiraki, T.; Hirota, S.; Holeczek, J.; Horikawa, S.; Hosomi, F.; Huang, K.; Ichikawa, A. K.; Ieki, K.; Ieva, M.; Ikeda, M.; Imber, J.; Insler, J.; Irvine, T. J.; Ishida, T.; Ishii, T.; Iwai, E.; Iwamoto, K.; Iyogi, K.; Izmaylov, A.; Jacob, A.; Jamieson, B.; Jiang, M.; Johnson, S.; Jo, J. H.; Jonsson, P.; Jung, C. K.; Kabirnezhad, M.; Kaboth, A. C.; Kajita, T.; Kakuno, H.; Kameda, J.; Kanazawa, Y.; Karlen, D.; Karpikov, I.; Katori, T.; Kearns, E.; Khabibullin, M.; Khotjantsev, A.; Kielczewska, D.; Kikawa, T.; Kilinski, A.; Kim, J.; King, S.; Kisiel, J.; Kitching, P.; Kobayashi, T.; Koch, L.; Koga, T.; Kolaceke, A.; Konaka, A.; Kopylov, A.; Kormos, L. L.; Korzenev, A.; Koshio, Y.; Kropp, W.; Kubo, H.; Kudenko, Y.; Kurjata, R.; Kutter, T.; Lagoda, J.; Lamont, I.; Larkin, E.; Laveder, M.; Lawe, M.; Lazos, M.; Lindner, T.; Lister, C.; Litchfield, R. P.; Longhin, A.; Lopez, J. P.; Ludovici, L.; Magaletti, L.; Mahn, K.; Malek, M.; Manly, S.; Marino, A. D.; Marteau, J.; Martin, J. F.; Martins, P.; Martynenko, S.; Maruyama, T.; Matveev, V.; Mavrokoridis, K.; Mazzucato, E.; McCarthy, M.; McCauley, N.; McFarland, K. S.; McGrew, C.; Mefodiev, A.; Metelko, C.; Mezzetto, M.; Mijakowski, P.; Miller, C. A.; Minamino, A.; Mineev, O.; Missert, A.; Miura, M.; Moriyama, S.; Mueller, Th. A.; Murakami, A.; Murdoch, M.; Murphy, S.; Myslik, J.; Nakadaira, T.; Nakahata, M.; Nakamura, K. G.; Nakamura, K.; Nakayama, S.; Nakaya, T.; Nakayoshi, K.; Nantais, C.; Nielsen, C.; Nirkko, M.; Nishikawa, K.; Nishimura, Y.; Nowak, J.; O'Keeffe, H. M.; Ohta, R.; Okumura, K.; Okusawa, T.; Oryszczak, W.; Oser, S. M.; Ovsyannikova, T.; Owen, R. A.; Oyama, Y.; Palladino, V.; Palomino, J. L.; Paolone, V.; Payne, D.; Perevozchikov, O.; Perkin, J. D.; Petrov, Y.; Pickard, L.; Pinzon Guerra, E. S.; Pistillo, C.; Plonski, P.; Poplawska, E.; Popov, B.; Posiadala-Zezula, M.; Poutissou, J.-M.; Poutissou, R.; Przewlocki, P.; Quilain, B.; Radicioni, E.; Ratoff, P. N.; Ravonel, M.; Rayner, M. A. M.; Redij, A.; Reeves, M.; Reinherz-Aronis, E.; Riccio, C.; Rodrigues, P. 
A.; Rojas, P.; Rondio, E.; Roth, S.; Rubbia, A.; Ruterbories, D.; Rychter, A.; Sacco, R.; Sakashita, K.; Sánchez, F.; Sato, F.; Scantamburlo, E.; Scholberg, K.; Schoppmann, S.; Schwehr, J. D.; Scott, M.; Seiya, Y.; Sekiguchi, T.; Sekiya, H.; Sgalaberna, D.; Shah, R.; Shaker, F.; Shaw, D.; Shiozawa, M.; Short, S.; Shustrov, Y.; Sinclair, P.; Smith, B.; Smy, M.; Sobczyk, J. T.; Sobel, H.; Sorel, M.; Southwell, L.; Stamoulis, P.; Steinmann, J.; Still, B.; Suda, Y.; Suzuki, A.; Suzuki, K.; Suzuki, S. Y.; Suzuki, Y.; Tacik, R.; Tada, M.; Takahashi, S.; Takeda, A.; Takeuchi, Y.; Tanaka, H. K.; Tanaka, H. A.; Tanaka, M. M.; Terhorst, D.; Terri, R.; Thompson, L. F.; Thorley, A.; Tobayama, S.; Toki, W.; Tomura, T.; Touramanis, C.; Tsukamoto, T.; Tzanov, M.; Uchida, Y.; Vacheret, A.; Vagins, M.; Vasseur, G.; Wachala, T.; Wakamatsu, K.; Walter, C. W.; Wark, D.; Warzycha, W.; Wascko, M. O.; Weber, A.; Wendell, R.; Wilkes, R. J.; Wilking, M. J.; Wilkinson, C.; Williamson, Z.; Wilson, J. R.; Wilson, R. J.; Wongjirad, T.; Yamada, Y.; Yamamoto, K.; Yanagisawa, C.; Yano, T.; Yen, S.; Yershov, N.; Yokoyama, M.; Yoo, J.; Yoshida, K.; Yuan, T.; Yu, M.; Zalewska, A.; Zalipska, J.; Zambelli, L.; Zaremba, K.; Ziembicki, M.; Zimmerman, E. D.; Zito, M.; Żmuda, J.; T2K Collaboration

    2015-04-01

    We report on measurements of neutrino oscillation using data from the T2K long-baseline neutrino experiment collected between 2010 and 2013. In an analysis of muon neutrino disappearance alone, we find the following estimates and 68% confidence intervals for the two possible mass hierarchies: normal hierarchy: sin^2 θ_23 = 0.514 (+0.055/-0.056) and Δm^2_32 = (2.51 ± 0.10) × 10^-3 eV^2/c^4; inverted hierarchy: sin^2 θ_23 = 0.511 ± 0.055 and Δm^2_13 = (2.48 ± 0.10) × 10^-3 eV^2/c^4. The analysis accounts for multinucleon mechanisms in neutrino interactions, which were found to introduce negligible bias. We describe our first analyses that combine measurements of muon neutrino disappearance and electron neutrino appearance to estimate four oscillation parameters, |Δm^2|, sin^2 θ_23, sin^2 θ_13, δ_CP, and the mass hierarchy. Frequentist and Bayesian intervals are presented for combinations of these parameters, with and without including recent reactor measurements. At 90% confidence level and including reactor measurements, we exclude the region δ_CP = [0.15, 0.83]π for the normal hierarchy and δ_CP = [-0.08, 1.09]π for the inverted hierarchy. The T2K and reactor data weakly favor the normal hierarchy with a Bayes factor of 2.2. The most probable values and 68% one-dimensional credible intervals for the other oscillation parameters, when reactor data are included, are sin^2 θ_23 = 0.528 (+0.055/-0.038) and |Δm^2_32| = (2.51 ± 0.11) × 10^-3 eV^2/c^4.

  19. An introduction to using Bayesian linear regression with clinical data.

    PubMed

    Baldwin, Scott A; Larson, Michael J

    2017-11-01

    Statistical training in psychology focuses on frequentist methods. Bayesian methods are an alternative to standard frequentist methods. This article provides researchers with an introduction to fundamental ideas in Bayesian modeling. We use data from an electroencephalogram (EEG) and anxiety study to illustrate Bayesian models. Specifically, the models examine the relationship between error-related negativity (ERN), a particular event-related potential, and trait anxiety. Methodological topics covered include: how to set up a regression model in a Bayesian framework, specifying priors, examining convergence of the model, visualizing and interpreting posterior distributions, interval estimates, expected and predicted values, and model comparison tools. We also discuss situations where Bayesian methods can outperform frequentist methods as well as how to specify more complicated regression models. Finally, we conclude with recommendations about reporting guidelines for those using Bayesian methods in their own research. We provide data and R code for replicating our analyses. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Empirical likelihood-based confidence intervals for mean medical cost with censored data.

    PubMed

    Jeyarajah, Jenny; Qin, Gengsheng

    2017-11-10

    In this paper, we propose empirical likelihood methods based on influence function and jackknife techniques for constructing confidence intervals for mean medical cost with censored data. We conduct a simulation study to compare the coverage probabilities and interval lengths of our proposed confidence intervals with that of the existing normal approximation-based confidence intervals and bootstrap confidence intervals. The proposed methods have better finite-sample performances than existing methods. Finally, we illustrate our proposed methods with a relevant example. Copyright © 2017 John Wiley & Sons, Ltd.

  1. Should Perioperative Supplemental Oxygen Be Routinely Recommended for Surgical Patients? A Bayesian Meta-analysis

    PubMed Central

    Kao, Lillian S.; Millas, Stefanos G.; Pedroza, Claudia; Tyson, Jon E.; Lally, Kevin P.

    2012-01-01

    Objective: The purpose of this study is to use updated data and Bayesian methods to evaluate the effectiveness of hyperoxia in reducing surgical site infections (SSIs) and/or mortality in both colorectal and all surgical patients. Because few trials assessed potential harms of hyperoxia, hazards were not included. Background: Use of hyperoxia to reduce SSIs is controversial. Three recent meta-analyses have had conflicting conclusions. Methods: A systematic literature search and review were performed. Traditional fixed-effect and random-effects meta-analyses and Bayesian meta-analysis were performed to evaluate SSIs and mortality. Results: Traditional meta-analysis yielded a relative risk of an SSI with hyperoxia among all surgery patients of 0.84 (95% confidence interval, CI, 0.73–0.97) and 0.84 (95% CI 0.61–1.16) for the fixed-effect and random-effects models, respectively. The probabilities of any risk reduction in SSIs among all surgery patients were 77%, 81%, and 83% for skeptical, neutral, and enthusiastic priors. Subset analysis of colorectal surgery patients increased the probabilities to 86%, 89%, and 92%. The probabilities of at least a 10% reduction were 57%, 62%, and 68% for all surgical patients and 71%, 75%, and 80% among the colorectal surgery subset. Conclusions: There is a moderately high probability of a benefit to hyperoxia in reducing SSIs in colorectal surgery patients; however, the magnitude of benefit is relatively small and might not exceed treatment hazards. Further studies should focus on generalizability to other patient populations or on treatment hazards and other outcomes. PMID:23160100
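
    One common way to compute such posterior probabilities, sketched here under stated assumptions: approximate the pooled log relative risk as Gaussian, place Gaussian skeptical, neutral, and enthusiastic priors on it, and read P(RR < 1) off the conjugate posterior. The prior means and standard deviations below are invented for illustration, not the settings used in the study.

        import numpy as np
        from scipy import stats

        # Pooled estimate on the log-RR scale (numbers in the spirit of
        # RR = 0.84, 95% CI 0.73-0.97 from the fixed-effect analysis).
        log_rr_hat = np.log(0.84)
        se = (np.log(0.97) - np.log(0.73)) / (2 * 1.96)

        priors = {                     # mean, sd on the log-RR scale (assumed)
            "skeptical":    (0.0,          0.10),
            "neutral":      (0.0,          0.50),
            "enthusiastic": (np.log(0.70), 0.30),
        }
        for name, (m0, s0) in priors.items():
            # Conjugate normal-normal update.
            post_var = 1.0 / (1.0 / s0**2 + 1.0 / se**2)
            post_mean = post_var * (m0 / s0**2 + log_rr_hat / se**2)
            p_benefit = stats.norm.cdf(0.0, loc=post_mean, scale=np.sqrt(post_var))
            print(f"{name:>12}: P(RR < 1 | data) = {p_benefit:.2f}")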

  2. Estimating the Uncertain Mathematical Structure of Hydrological Model via Bayesian Data Assimilation

    NASA Astrophysics Data System (ADS)

    Bulygina, N.; Gupta, H.; O'Donell, G.; Wheater, H.

    2008-12-01

    The structure of a hydrological model at the macro scale (e.g., a watershed) is inherently uncertain due to many factors, including the lack of a robust hydrological theory at that scale. In this work, we assume that a suitable conceptual model for the hydrologic system has already been determined - i.e., the system boundaries have been specified, the important state variables and input and output fluxes to be included have been selected, and the major hydrological processes and the geometries of their interconnections have been identified. The structural identification problem then is to specify the mathematical form of the relationships between the inputs, state variables, and outputs, so that a computational model can be constructed for making simulations and/or predictions of system input-state-output behaviour. We show how Bayesian data assimilation can be used to merge prior beliefs, in the form of pre-assumed model equations, with information derived from the data to construct a posterior model. The approach, entitled Bayesian Estimation of Structure (BESt), is used to estimate a hydrological model for a small basin in England, at hourly time scales, conditioned on the assumption of a conceptual model structure with a 3-dimensional state (soil moisture storage and fast and slow flow stores). Inputs to the system are precipitation and potential evapotranspiration, and outputs are actual evapotranspiration and streamflow discharge. Results show the difference between prior and posterior mathematical structures, and provide prediction confidence intervals that reflect three types of uncertainty: due to initial conditions, due to input, and due to mathematical structure.

  3. Confidence intervals for the population mean tailored to small sample sizes, with applications to survey sampling.

    PubMed

    Rosenblum, Michael A; Laan, Mark J van der

    2009-01-07

    The validity of standard confidence intervals constructed in survey sampling is based on the central limit theorem. For small sample sizes, the central limit theorem may give a poor approximation, resulting in confidence intervals that are misleading. We discuss this issue and propose methods for constructing confidence intervals for the population mean tailored to small sample sizes. We present a simple approach for constructing confidence intervals for the population mean based on tail bounds for the sample mean that are correct for all sample sizes. Bernstein's inequality provides one such tail bound. The resulting confidence intervals have guaranteed coverage probability under much weaker assumptions than are required for standard methods. A drawback of this approach, as we show, is that these confidence intervals are often quite wide. In response to this, we present a method for constructing much narrower confidence intervals, which are better suited for practical applications, and that are still more robust than confidence intervals based on standard methods, when dealing with small sample sizes. We show how to extend our approaches to much more general estimation problems than estimating the sample mean. We describe how these methods can be used to obtain more reliable confidence intervals in survey sampling. As a concrete example, we construct confidence intervals using our methods for the number of violent deaths between March 2003 and July 2006 in Iraq, based on data from the study "Mortality after the 2003 invasion of Iraq: A cross sectional cluster sample survey," by Burnham et al. (2006).
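
    A sketch of the Bernstein-type interval for the mean of data known to lie in [0, 1]: Bernstein's inequality bounds P(|x̄ - μ| ≥ t) by 2 exp(-n t² / (2(σ² + b t / 3))), and solving for the t that makes the right-hand side equal α yields a finite-sample interval. The version below plugs in the worst-case variance 1/4 and range bound b = 1, a deliberately conservative choice; the paper's exact construction may differ in detail.

        import numpy as np
        from scipy import optimize

        def bernstein_ci(x, alpha=0.05):
            """Finite-sample CI for the mean of [0, 1]-valued data via Bernstein's
            inequality, using the worst-case variance 1/4 (a conservative choice)."""
            n, xbar = len(x), np.mean(x)
            var_bound, b = 0.25, 1.0
            tail = lambda t: 2 * np.exp(-n * t**2 / (2 * (var_bound + b * t / 3))) - alpha
            t = optimize.brentq(tail, 1e-9, 1.0)
            return max(0.0, xbar - t), min(1.0, xbar + t)

        rng = np.random.default_rng(4)
        x = rng.beta(2, 5, 30)                   # a small sample in [0, 1]
        print("Bernstein CI:", bernstein_ci(x))

    As the abstract notes, guaranteed coverage comes at a price: for small n this interval is noticeably wider than the usual normal-approximation interval, which motivates the narrower robust constructions the authors go on to propose.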

  4. Constructing Confidence Intervals for Reliability Coefficients Using Central and Noncentral Distributions.

    ERIC Educational Resources Information Center

    Weber, Deborah A.

    Greater understanding and use of confidence intervals is central to changes in statistical practice (G. Cumming and S. Finch, 2001). Reliability coefficients and confidence intervals for reliability coefficients can be computed using a variety of methods. Estimating confidence intervals includes both central and noncentral distribution approaches.…

  5. Can 3-dimensional power Doppler indices improve the prenatal diagnosis of a potentially morbidly adherent placenta in patients with placenta previa?

    PubMed

    Haidar, Ziad A; Papanna, Ramesha; Sibai, Baha M; Tatevian, Nina; Viteri, Oscar A; Vowels, Patricia C; Blackwell, Sean C; Moise, Kenneth J

    2017-08-01

    Traditionally, 2-dimensional ultrasound parameters have been used for the diagnosis of a suspected morbidly adherent placenta previa. More objective techniques have not been well studied yet. The objective of the study was to determine the ability of prenatal 3-dimensional power Doppler analysis of flow and vascular indices to predict the morbidly adherent placenta objectively. A prospective cohort study was performed in women between 28 and 32 gestational weeks with known placenta previa. Patients underwent a two-dimensional gray-scale ultrasound that determined management decisions. 3-Dimensional power Doppler volumes were obtained during the same examination, and vascular, flow, and vascular flow indices were calculated after manual tracing of the viewed placenta in the sweep; data were blinded to obstetricians. Morbidly adherent placenta was confirmed by histology. Severe morbidly adherent placenta was defined as increta/percreta on histology, blood loss >2000 mL, and >2 units of PRBC transfused. Sensitivities, specificities, predictive values, and likelihood ratios were calculated. Student t and χ² tests, logistic regression, receiver-operating characteristic curves, and intra- and interrater agreements using Kappa statistics were performed. The following results were found: (1) 50 women were studied: 23 had morbidly adherent placenta, of which 12 (52.2%) were severe morbidly adherent placenta; (2) 2-dimensional parameters diagnosed morbidly adherent placenta with a sensitivity of 82.6% (95% confidence interval, 60.4-94.2), a specificity of 88.9% (95% confidence interval, 69.7-97.1), a positive predictive value of 86.3% (95% confidence interval, 64.0-96.4), a negative predictive value of 85.7% (95% confidence interval, 66.4-95.3), a positive likelihood ratio of 7.4 (95% confidence interval, 2.5-21.9), and a negative likelihood ratio of 0.2 (95% confidence interval, 0.08-0.48); (3) mean values of the vascular index (32.8 ± 7.4) and the vascular flow index (14.2 ± 3.8) were higher in morbidly adherent placenta (P < .001); (4) areas under the receiver-operating characteristic curve for the vascular and vascular flow indices were 0.99 and 0.97, respectively; (5) a vascular index ≥21 predicted morbidly adherent placenta with a sensitivity of 95% (95% confidence interval, 88.2-96.9), a specificity of 91% (95% confidence interval, 87.5-92.4), a positive predictive value of 92% (95% confidence interval, 85.5-94.3), a negative predictive value of 90% (95% confidence interval, 79.9-95.3), a positive likelihood ratio of 10.55 (95% confidence interval, 7.06-12.75), and a negative likelihood ratio of 0.05 (95% confidence interval, 0.03-0.13); and (6) for severe morbidly adherent placenta, 2-dimensional ultrasound had a sensitivity of 33.3% (95% confidence interval, 11.3-64.6), a specificity of 81.8% (95% confidence interval, 47.8-96.8), a positive predictive value of 66.7% (95% confidence interval, 24.1-94.1), a negative predictive value of 52.9% (95% confidence interval, 28.5-76.1), a positive likelihood ratio of 1.83 (95% confidence interval, 0.41-8.11), and a negative likelihood ratio of 0.81 (95% confidence interval, 0.52-1.26).
A vascular index ≥31 predicted the diagnosis of a severe morbidly adherent placenta with a 100% sensitivity (95% confidence interval, 72-100), a 90% specificity (95% confidence interval, 81.7-93.8), an 88% positive predictive value (95% confidence interval, 55.0-91.3), a 100% negative predictive value (95% confidence interval, 90.9-100), a positive likelihood ratio of 10.0 (95% confidence interval, 3.93-16.13), and a negative likelihood ratio of 0 (95% confidence interval, 0-0.34). Intrarater and interrater agreements were 94% (P < .001) and 93% (P < .001), respectively. The vascular index accurately predicts the morbidly adherent placenta in patients with placenta previa. In addition, 3-dimensional power Doppler vascular and vascular flow indices were more predictive of severe cases of morbidly adherent placenta compared with 2-dimensional ultrasound. This objective technique may limit the variations in diagnosing morbidly adherent placenta because of the subjectivity of 2-dimensional ultrasound interpretations. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Bayesian Correlation Analysis for Sequence Count Data

    PubMed Central

    Lau, Nelson; Perkins, Theodore J.

    2016-01-01

    Evaluating the similarity of different measured variables is a fundamental task of statistics, and a key part of many bioinformatics algorithms. Here we propose a Bayesian scheme for estimating the correlation between different entities’ measurements based on high-throughput sequencing data. These entities could be different genes or miRNAs whose expression is measured by RNA-seq, different transcription factors or histone marks whose expression is measured by ChIP-seq, or even combinations of different types of entities. Our Bayesian formulation accounts for both measured signal levels and uncertainty in those levels, due to varying sequencing depth in different experiments and to varying absolute levels of individual entities, both of which affect the precision of the measurements. In comparison with a traditional Pearson correlation analysis, we show that our Bayesian correlation analysis retains high correlations when measurement confidence is high, but suppresses correlations when measurement confidence is low—especially for entities with low signal levels. In addition, we consider the influence of priors on the Bayesian correlation estimate. Perhaps surprisingly, we show that naive, uniform priors on entities’ signal levels can lead to highly biased correlation estimates, particularly when different experiments have widely varying sequencing depths. However, we propose two alternative priors that provably mitigate this problem. We also prove that, like traditional Pearson correlation, our Bayesian correlation calculation constitutes a kernel in the machine learning sense, and thus can be used as a similarity measure in any kernel-based machine learning algorithm. We demonstrate our approach on two RNA-seq datasets and one miRNA-seq dataset. PMID:27701449
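
    One simple way to realize the idea, under stated assumptions and not the paper's exact formulation: model each entity's underlying rate in each experiment with a gamma-Poisson pair that includes a sequencing-depth offset, draw posterior rate samples, and average the Pearson correlation of the log rates over the draws, so that noisy low-count measurements contribute uncertainty rather than spurious correlation. The counts, depths, and Gamma(0.5, 0.5) prior below are all invented for illustration.

        import numpy as np

        rng = np.random.default_rng(5)

        counts_a = np.array([3, 0, 12, 7, 2, 25, 1, 9])     # entity A across experiments
        counts_b = np.array([5, 1, 10, 6, 0, 30, 2, 8])     # entity B across experiments
        depth = np.array([1.0, 0.2, 1.5, 1.0, 0.3, 2.0, 0.4, 1.2])  # sequencing depths

        def rate_draws(k, depth, n_draws=2000, a0=0.5, b0=0.5):
            # Gamma(a0, b0) prior on the per-depth rate; Poisson(depth * rate)
            # likelihood gives a Gamma(a0 + k, b0 + depth) posterior per experiment.
            return rng.gamma(a0 + k, 1.0 / (b0 + depth), size=(n_draws, len(k)))

        la = np.log(rate_draws(counts_a, depth))
        lb = np.log(rate_draws(counts_b, depth))
        # Pearson correlation of log rates, averaged over posterior draws.
        za = (la - la.mean(1, keepdims=True)) / la.std(1, keepdims=True)
        zb = (lb - lb.mean(1, keepdims=True)) / lb.std(1, keepdims=True)
        bayes_corr = (za * zb).mean(1).mean()
        print(f"posterior-mean correlation: {bayes_corr:.2f}")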

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaidos, Eric, E-mail: gaidos@hawaii.edu

    A key goal of the Kepler mission is the discovery of Earth-size transiting planets in "habitable zones" where stellar irradiance maintains a temperate climate on an Earth-like planet. Robust estimates of planet radius and irradiance require accurate stellar parameters, but most Kepler systems are faint, making spectroscopy difficult and prioritization of targets desirable. The parameters of 2035 host stars were estimated by Bayesian analysis and the probabilities p_HZ that 2738 candidate or confirmed planets orbit in the habitable zone were calculated. Dartmouth Stellar Evolution Program models were compared to photometry from the Kepler Input Catalog, priors for stellar mass, age, metallicity and distance, and planet transit duration. The analysis yielded probability density functions for calculating confidence intervals of planet radius and stellar irradiance, as well as p_HZ. Sixty-two planets have p_HZ > 0.5 and a most probable stellar irradiance within habitable zone limits. Fourteen of these have radii less than twice that of Earth; the objects most resembling Earth in terms of radius and irradiance are KOIs 2626.01 and 3010.01, which orbit late K/M-type dwarf stars. The fraction of Kepler dwarf stars with Earth-size planets in the habitable zone (η_⊕) is 0.46, with a 95% confidence interval of 0.31-0.64. Parallaxes from the Gaia mission will reduce uncertainties by more than a factor of five and permit definitive assignments of transiting planets to the habitable zones of Kepler stars.

  8. Pre- versus post-mass extinction divergence of Mesozoic marine reptiles dictated by time-scale dependence of evolutionary rates.

    PubMed

    Motani, Ryosuke; Jiang, Da-Yong; Tintori, Andrea; Ji, Cheng; Huang, Jian-Dong

    2017-05-17

    The fossil record of a major clade often starts after a mass extinction even though evolutionary rates, molecular or morphological, suggest its pre-extinction emergence (e.g. squamates, placentals and teleosts). The discrepancy is larger for older clades, and the presence of a time-scale-dependent methodological bias has been suggested, yet it has been difficult to avoid the bias using Bayesian phylogenetic methods. This paradox raises the question of whether ecological vacancies, such as those after mass extinctions, prompt the radiations. We addressed this problem by using a unique temporal characteristic of the morphological data and a high-resolution stratigraphic record, for the oldest clade of Mesozoic marine reptiles, Ichthyosauromorpha. The evolutionary rate was fastest during the first few million years of ichthyosauromorph evolution and became progressively slower over time, eventually becoming six times slower. Using the later, slower rates, estimates of divergence time become excessively old. The fast initial rate suggests the emergence of ichthyosauromorphs after the end-Permian mass extinction, matching an independent result from high-resolution stratigraphic confidence intervals. These reptiles probably invaded the sea as a new ecosystem was formed after the end-Permian mass extinction. Lack of information on early evolution biased Bayesian clock rates. © 2017 The Author(s).

  9. Combining statistical inference and decisions in ecology

    USGS Publications Warehouse

    Williams, Perry J.; Hooten, Mevin B.

    2016-01-01

    Statistical decision theory (SDT) is a sub-field of decision theory that formally incorporates statistical investigation into a decision-theoretic framework to account for uncertainties in a decision problem. SDT provides a unifying analysis of three types of information: statistical results from a data set, knowledge of the consequences of potential choices (i.e., loss), and prior beliefs about a system. SDT links the theoretical development of a large body of statistical methods including point estimation, hypothesis testing, and confidence interval estimation. The theory and application of SDT have mainly been developed and published in the fields of mathematics, statistics, operations research, and other decision sciences, but have had limited exposure in ecology. Thus, we provide an introduction to SDT for ecologists and describe its utility for linking the conventionally separate tasks of statistical investigation and decision making in a single framework. We describe the basic framework of both Bayesian and frequentist SDT, its traditional use in statistics, and discuss its application to decision problems that occur in ecology. We demonstrate SDT with two types of decisions: Bayesian point estimation, and an applied management problem of selecting a prescribed fire rotation for managing a grassland bird species. Central to SDT, and decision theory in general, are loss functions. Thus, we also provide basic guidance and references for constructing loss functions for an SDT problem.
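
    A toy illustration of the central role the loss function plays in Bayesian point estimation: for the same posterior sample, squared-error loss is minimized by the posterior mean, absolute-error loss by the median, and an asymmetric linear loss by a quantile. The skewed posterior and the 3-to-1 penalty on underestimation below are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(6)
        posterior = rng.lognormal(mean=0.0, sigma=0.75, size=50_000)  # a skewed posterior

        grid = np.linspace(0.1, 5.0, 500)
        def best_action(loss):
            # Bayes action: the point estimate minimizing posterior expected loss.
            return grid[np.argmin([loss(posterior, a).mean() for a in grid])]

        sq = best_action(lambda th, a: (th - a) ** 2)          # -> posterior mean
        ab = best_action(lambda th, a: np.abs(th - a))         # -> posterior median
        asym = best_action(lambda th, a: np.where(th > a,      # underestimation is
                                                  3.0 * (th - a),  # penalized 3x more
                                                  (a - th)))   # -> 75th percentile
        print(f"squared loss: {sq:.2f} (mean {posterior.mean():.2f})")
        print(f"absolute loss: {ab:.2f} (median {np.median(posterior):.2f})")
        print(f"asymmetric loss: {asym:.2f} (q75 {np.quantile(posterior, 0.75):.2f})")

    The asymmetric case is the one most relevant to management problems like the fire-rotation example, where the consequences of erring in one direction are worse than erring in the other.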

  10. A Bayesian network meta-analysis of whole brain radiotherapy and stereotactic radiotherapy for brain metastasis.

    PubMed

    Yuan, Xi; Liu, Wen-Jie; Li, Bing; Shen, Ze-Tian; Shen, Jun-Shu; Zhu, Xi-Xu

    2017-08-01

    This study was conducted to compare the effects of whole brain radiotherapy (WBRT) and stereotactic radiotherapy (SRS) in the treatment of brain metastasis. A systematic retrieval in the PubMed and Embase databases was performed for relevant literature on the effects of WBRT and SRS in the treatment of brain metastasis. A Bayesian network meta-analysis was performed using the ADDIS software. The effect sizes included the odds ratio (OR) and 95% confidence interval (CI). A random-effects model was used for the pooled analysis of all the outcome measures, including 1-year distant control rate, 1-year local control rate, 1-year survival rate, and complications. Consistency was tested using node-splitting analysis and the inconsistency standard deviation. Convergence was estimated according to the Brooks-Gelman-Rubin method. A total of 12 studies were included in this meta-analysis. WBRT + SRS showed a higher 1-year distant control rate than SRS. WBRT + SRS was better for the 1-year local control rate than WBRT. SRS and WBRT + SRS had higher 1-year survival rates than WBRT. In addition, there was no difference in complications among the three therapies. Overall, WBRT + SRS may be the treatment of choice for brain metastasis.

  11. Pre- versus post-mass extinction divergence of Mesozoic marine reptiles dictated by time-scale dependence of evolutionary rates

    PubMed Central

    Ji, Cheng; Huang, Jian-dong

    2017-01-01

    The fossil record of a major clade often starts after a mass extinction even though evolutionary rates, molecular or morphological, suggest its pre-extinction emergence (e.g. squamates, placentals and teleosts). The discrepancy is larger for older clades, and the presence of a time-scale-dependent methodological bias has been suggested, yet it has been difficult to avoid the bias using Bayesian phylogenetic methods. This paradox raises the question of whether ecological vacancies, such as those after mass extinctions, prompt the radiations. We addressed this problem by using a unique temporal characteristic of the morphological data and a high-resolution stratigraphic record, for the oldest clade of Mesozoic marine reptiles, Ichthyosauromorpha. The evolutionary rate was fastest during the first few million years of ichthyosauromorph evolution and became progressively slower over time, eventually becoming six times slower. Using the later, slower rates, estimates of divergence time become excessively old. The fast initial rate suggests the emergence of ichthyosauromorphs after the end-Permian mass extinction, matching an independent result from high-resolution stratigraphic confidence intervals. These reptiles probably invaded the sea as a new ecosystem was formed after the end-Permian mass extinction. Lack of information on early evolution biased Bayesian clock rates. PMID:28515201

  12. Prior robust empirical Bayes inference for large-scale data by conditioning on rank with application to microarray data

    PubMed Central

    Liao, J. G.; Mcmurry, Timothy; Berg, Arthur

    2014-01-01

    Empirical Bayes methods have been extensively used for microarray data analysis by modeling the large number of unknown parameters as random effects. Empirical Bayes allows borrowing information across genes and can automatically adjust for multiple testing and selection bias. However, the standard empirical Bayes model can perform poorly if the assumed working prior deviates from the true prior. This paper proposes a new rank-conditioned inference in which the shrinkage and confidence intervals are based on the distribution of the error conditioned on rank of the data. Our approach is in contrast to a Bayesian posterior, which conditions on the data themselves. The new method is almost as efficient as standard Bayesian methods when the working prior is close to the true prior, and it is much more robust when the working prior is not close. In addition, it allows a more accurate (but also more complex) non-parametric estimate of the prior to be easily incorporated, resulting in improved inference. The new method’s prior robustness is demonstrated via simulation experiments. Application to a breast cancer gene expression microarray dataset is presented. Our R package rank.Shrinkage provides a ready-to-use implementation of the proposed methodology. PMID:23934072

  13. Phylogeography of the Western Lyresnake (Trimorphodon biscutatus): testing aridland biogeographical hypotheses across the Nearctic-Neotropical transition.

    PubMed

    Devitt, Thomas J

    2006-12-01

    The Western Lyresnake (Trimorphodon biscutatus) is a widespread, polytypic taxon inhabiting arid regions from the warm deserts of the southwestern United States southward along the Pacific versant of Mexico to the tropical deciduous forests of Mesoamerica. This broadly distributed species provides a unique opportunity to evaluate a priori biogeographical hypotheses spanning two major distinct biogeographical realms (the Nearctic and Neotropical) that are usually treated separately in phylogeographical analyses. I investigated the phylogeography of T. biscutatus using maximum likelihood and Bayesian phylogenetic analysis of mitochondrial DNA (mtDNA) from across this species' range. Phylogenetic analyses recovered five well-supported clades whose boundaries are concordant with existing geographical barriers, a pattern consistent with a model of vicariant allopatric divergence. Assuming a vicariance model, divergence times between mitochondrial lineages were estimated using Bayesian relaxed molecular clock methods calibrated using geological information from putative vicariant events. Divergence time point estimates were bounded by broad confidence intervals, and thus these highly conservative estimates should be considered tentative hypotheses at best. Comparison of mtDNA lineages and taxa traditionally recognized as subspecies based on morphology suggests this taxon is composed of multiple independent lineages at various stages of divergence, ranging from putative secondary contact and hybridization to sympatry of 'subspecies'.

  14. Confidence intervals for distinguishing ordinal and disordinal interactions in multiple regression.

    PubMed

    Lee, Sunbok; Lei, Man-Kit; Brody, Gene H

    2015-06-01

    Distinguishing between ordinal and disordinal interaction in multiple regression is useful in testing many interesting theoretical hypotheses. Because the distinction is made based on the location of a crossover point of 2 simple regression lines, confidence intervals of the crossover point can be used to distinguish ordinal and disordinal interactions. This study examined 2 factors that need to be considered in constructing confidence intervals of the crossover point: (a) the assumption about the sampling distribution of the crossover point, and (b) the possibility of abnormally wide confidence intervals for the crossover point. A Monte Carlo simulation study was conducted to compare 6 different methods for constructing confidence intervals of the crossover point in terms of the coverage rate, the proportion of true values that fall to the left or right of the confidence intervals, and the average width of the confidence intervals. The methods include the reparameterization, delta, Fieller, basic bootstrap, percentile bootstrap, and bias-corrected accelerated bootstrap methods. The results of our Monte Carlo simulation study suggest that statistical inference using confidence intervals to distinguish ordinal and disordinal interactions requires sample sizes of more than 500 to provide sufficiently narrow confidence intervals to identify the location of the crossover point. (c) 2015 APA, all rights reserved.
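
    A sketch of the percentile-bootstrap variant, one of the six methods compared: with the moderated model y = b0 + b1 x + b2 z + b3 xz, the two simple regression lines for z = 0 and z = 1 cross where b2 + b3 x = 0, i.e. at x = -b2/b3, and resampling cases gives an interval for that ratio. The data below are simulated purely for illustration.

        import numpy as np

        rng = np.random.default_rng(7)
        n = 600
        x = rng.normal(size=n)
        z = rng.integers(0, 2, size=n)            # two groups
        y = 0.5 + 0.4 * x + 1.0 * z - 0.8 * x * z + rng.normal(scale=1.0, size=n)

        def crossover(x, z, y):
            X = np.column_stack([np.ones_like(x), x, z, x * z])
            b = np.linalg.lstsq(X, y, rcond=None)[0]
            return -b[2] / b[3]                   # lines cross where b2 + b3 * x = 0

        boots = np.empty(2000)
        for i in range(2000):
            idx = rng.integers(0, n, n)           # resample cases with replacement
            boots[i] = crossover(x[idx], z[idx], y[idx])
        lo, hi = np.quantile(boots, [0.025, 0.975])
        print(f"crossover estimate {crossover(x, z, y):.2f}, 95% CI ({lo:.2f}, {hi:.2f})")

    If the interval lies inside the observed range of x, the interaction is plausibly disordinal; an interval entirely outside that range is consistent with an ordinal interaction. The abnormally wide intervals the abstract warns about arise when the bootstrap draws make b3 nearly zero.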

  15. Bayesian Threshold Estimation

    ERIC Educational Resources Information Center

    Gustafson, S. C.; Costello, C. S.; Like, E. C.; Pierce, S. J.; Shenoy, K. N.

    2009-01-01

    Bayesian estimation of a threshold time (hereafter simply threshold) for the receipt of impulse signals is accomplished given the following: 1) data, consisting of the number of impulses received in a time interval from zero to one and the time of the largest time impulse; 2) a model, consisting of a uniform probability density of impulse time…

  16. Introduction to Sample Size Choice for Confidence Intervals Based on "t" Statistics

    ERIC Educational Resources Information Center

    Liu, Xiaofeng Steven; Loudermilk, Brandon; Simpson, Thomas

    2014-01-01

    Sample size can be chosen to achieve a specified width in a confidence interval. The probability of obtaining a narrow width given that the confidence interval includes the population parameter is defined as the power of the confidence interval, a concept unfamiliar to many practitioners. This article shows how to utilize the Statistical Analysis…
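
    The calculation the abstract alludes to can be sketched as follows, under assumed inputs: the half-width of a t interval is t(α/2, n-1) s/√n, so the smallest n with anticipated half-width at most w is found by simple iteration. Controlling the probability, rather than just the expectation, of achieving that width (the "power" of the interval mentioned above) requires an extra allowance for the sampling variability of s, which is not shown here.

        import numpy as np
        from scipy import stats

        def n_for_halfwidth(s, w, alpha=0.05):
            """Smallest n whose t-based CI for the mean has half-width <= w,
            treating s as the anticipated standard deviation (an assumption)."""
            n = 2
            while stats.t.ppf(1 - alpha / 2, n - 1) * s / np.sqrt(n) > w:
                n += 1
            return n

        print(n_for_halfwidth(s=10.0, w=2.0))   # e.g. ~99 observations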

  17. A Bayesian inverse modeling approach to estimate soil hydraulic properties of a toposequence in southeastern Amazonia.

    NASA Astrophysics Data System (ADS)

    Stucchi Boschi, Raquel; Qin, Mingming; Gimenez, Daniel; Cooper, Miguel

    2016-04-01

    Modeling is an important tool for better understanding and assessing land use impacts on landscape processes. A key point for environmental modeling is knowledge of soil hydraulic properties. However, direct determination of soil hydraulic properties is difficult and costly, particularly in vast and remote regions such as the one constituting the Amazon Biome. One way to overcome this problem is to extrapolate accurately estimated data to pedologically similar sites. The van Genuchten (VG) parametric equation is the most commonly used for modeling the soil water retention curve (SWRC). The use of a Bayesian approach in combination with Markov chain Monte Carlo to estimate the VG parameters has several advantages compared to the widely used global optimization techniques. The Bayesian approach provides posterior distributions of parameters that are independent of the initial values and allows for uncertainty analyses. The main objectives of this study were: i) to estimate hydraulic parameters from data of pasture and forest sites by the Bayesian inverse modeling approach; and ii) to investigate the extrapolation of the estimated VG parameters to a nearby toposequence with soils pedologically similar to those used for the estimation. The parameters were estimated from volumetric water content and tension observations obtained after rainfall events during a 207-day period from pasture and forest sites located in the southeastern Amazon region. These data were used to run HYDRUS-1D under a Differential Evolution Adaptive Metropolis (DREAM) scheme 10,000 times, and only the last 2,500 runs were used to calculate the posterior distributions of each hydraulic parameter along with 95% confidence intervals (CI) of the volumetric water content and tension time series. The posterior distributions were then used to generate hydraulic parameters for two nearby toposequences composed of six soil profiles, three under forest and three under pasture. The parameters of the nearby site were accepted when the predicted tension time series fell within the 95% CI derived from the calibration site using the DREAM scheme.
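
    For readers unfamiliar with the machinery, the following stripped-down sketch fits two van Genuchten parameters (α and n, with θr and θs fixed) to synthetic retention data by random-walk Metropolis. It is a toy stand-in under stated assumptions: the study itself coupled the more elaborate DREAM sampler to HYDRUS-1D simulations, which are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(8)

        def vg_theta(h, alpha, n, theta_r=0.10, theta_s=0.45):
            # van Genuchten retention curve (theta_r, theta_s fixed for simplicity).
            m = 1.0 - 1.0 / n
            return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

        h = np.logspace(0, 3, 25)                             # suctions (cm)
        obs = vg_theta(h, alpha=0.02, n=1.6) + rng.normal(0, 0.01, h.size)

        def log_post(alpha, n, sigma=0.01):
            if not (1e-4 < alpha < 1.0 and 1.05 < n < 4.0):   # flat priors with bounds
                return -np.inf
            r = obs - vg_theta(h, alpha, n)
            return -0.5 * np.sum(r**2) / sigma**2

        # Random-walk Metropolis over (log alpha, n).
        theta = np.array([np.log(0.05), 1.3])
        lp = log_post(np.exp(theta[0]), theta[1])
        chain = []
        for _ in range(10_000):
            prop = theta + rng.normal(0, [0.05, 0.03])
            lp_prop = log_post(np.exp(prop[0]), prop[1])
            if np.log(rng.random()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            chain.append(theta.copy())
        chain = np.array(chain[2_500:])                       # discard burn-in
        print("posterior medians: alpha =",
              f"{np.exp(np.median(chain[:, 0])):.3f}, n = {np.median(chain[:, 1]):.2f}")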

  18. Analysis of femtosecond pump-probe photoelectron-photoion coincidence measurements applying Bayesian probability theory

    NASA Astrophysics Data System (ADS)

    Rumetshofer, M.; Heim, P.; Thaler, B.; Ernst, W. E.; Koch, M.; von der Linden, W.

    2018-06-01

    Ultrafast dynamical processes in photoexcited molecules can be observed with pump-probe measurements, in which information about the dynamics is obtained from the transient signal associated with the excited state. Background signals provoked by pump and/or probe pulses alone often obscure these excited-state signals. Simple subtraction of pump-only and/or probe-only measurements from the pump-probe measurement, as commonly applied, results in a degradation of the signal-to-noise ratio and, in the case of coincidence detection, the danger of overrated background subtraction. Coincidence measurements additionally suffer from false coincidences, requiring long data-acquisition times to keep erroneous signals at an acceptable level. Here we present a probabilistic approach based on Bayesian probability theory that overcomes these problems. For a pump-probe experiment with photoelectron-photoion coincidence detection, we reconstruct the interesting excited-state spectrum from pump-probe and pump-only measurements. This approach allows us to treat background and false coincidences consistently and on the same footing. We demonstrate that the Bayesian formalism has the following advantages over simple signal subtraction: (i) the signal-to-noise ratio is significantly increased, (ii) the pump-only contribution is not overestimated, (iii) false coincidences are excluded, (iv) prior knowledge, such as positivity, is consistently incorporated, (v) confidence intervals are provided for the reconstructed spectrum, and (vi) it is applicable to any experimental situation and noise statistics. Most importantly, by accounting for false coincidences, the Bayesian approach allows us to run experiments at higher ionization rates, resulting in a significant reduction of data acquisition times. The probabilistic approach is thoroughly scrutinized by challenging mock data. The application to pump-probe coincidence measurements on acetone molecules enables quantitative interpretations about the molecular decay dynamics and fragmentation behavior. All results underline the superiority of a consistent probabilistic approach over ad hoc estimations.
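
    The gain over naive subtraction can be illustrated with a deliberately simplified two-channel Poisson model, a stand-in for the paper's full coincidence formalism: the pump-probe channel counts signal plus background, the pump-only channel counts background alone, and with flat priors the posterior for the signal rate can be sampled by a small rejection step that, unlike direct subtraction, can never go negative. All counts and acquisition times below are invented.

        import numpy as np

        rng = np.random.default_rng(9)

        n_pp, t_pp = 130, 100.0   # pump-probe counts and acquisition time (assumed)
        n_po, t_po = 90, 100.0    # pump-only (background-only) counts and time

        draws = 20_000
        s_samples = np.empty(draws)
        for i in range(draws):
            # Background rate: gamma posterior from the pump-only channel (flat prior).
            b = rng.gamma(n_po + 1, 1.0 / t_po)
            # Given b and a flat prior on s >= 0, the total rate s + b has a
            # Gamma(n_pp + 1, t_pp) density truncated to values >= b; sample by rejection.
            tot = rng.gamma(n_pp + 1, 1.0 / t_pp)
            while tot < b:
                tot = rng.gamma(n_pp + 1, 1.0 / t_pp)
            s_samples[i] = tot - b

        print(f"posterior median signal rate: {np.median(s_samples):.3f}")
        print(f"95% credible interval: ({np.quantile(s_samples, 0.025):.3f}, "
              f"{np.quantile(s_samples, 0.975):.3f})")

    The posterior concentrates near the naive estimate (130 - 90)/100 when counts are ample, but for weak signals it stays properly bounded at zero, which is the behavior that permits the higher ionization rates discussed in the abstract.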

  19. An Introduction to Confidence Intervals for Both Statistical Estimates and Effect Sizes.

    ERIC Educational Resources Information Center

    Capraro, Mary Margaret

    This paper summarizes methods of estimating confidence intervals, including classical intervals and intervals for effect sizes. The recent American Psychological Association (APA) Task Force on Statistical Inference report suggested that confidence intervals should always be reported, and the fifth edition of the APA "Publication Manual"…

  20. Evaluation of confidence intervals for a steady-state leaky aquifer model

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    1999-01-01

    The fact that dependent variables of groundwater models are generally nonlinear functions of model parameters is shown to be a potentially significant factor in calculating accurate confidence intervals for both model parameters and functions of the parameters, such as the values of dependent variables calculated by the model. The Lagrangian method of Vecchia and Cooley [Vecchia, A.V. and Cooley, R.L., Water Resources Research, 1987, 23(7), 1237-1250] was used to calculate nonlinear Scheffe-type confidence intervals for the parameters and the simulated heads of a steady-state groundwater flow model covering 450 km² of a leaky aquifer. The nonlinear confidence intervals are compared to corresponding linear intervals. As suggested by the significant nonlinearity of the regression model, linear confidence intervals are often not accurate. The commonly made assumption that widths of linear confidence intervals always underestimate the actual (nonlinear) widths was not correct. Results show that nonlinear effects can cause the nonlinear intervals to be asymmetric and either larger or smaller than the linear approximations. Prior information on transmissivities helps reduce the size of the confidence intervals, with the most notable effects occurring for the parameters on which there is prior information and for head values in parameter zones for which there is prior information on the parameters.

  1. The Dopaminergic Midbrain Encodes the Expected Certainty about Desired Outcomes.

    PubMed

    Schwartenbeck, Philipp; FitzGerald, Thomas H B; Mathys, Christoph; Dolan, Ray; Friston, Karl

    2015-10-01

    Dopamine plays a key role in learning; however, its exact function in decision making and choice remains unclear. Recently, we proposed a generic model based on active (Bayesian) inference wherein dopamine encodes the precision of beliefs about optimal policies. Put simply, dopamine discharges reflect the confidence that a chosen policy will lead to desired outcomes. We designed a novel task to test this hypothesis, where subjects played a "limited offer" game in a functional magnetic resonance imaging experiment. Subjects had to decide how long to wait for a high offer before accepting a low offer, with the risk of losing everything if they waited too long. Bayesian model comparison showed that behavior strongly supported active inference, based on surprise minimization, over classical utility maximization schemes. Furthermore, midbrain activity, encompassing dopamine projection neurons, was accurately predicted by trial-by-trial variations in model-based estimates of precision. Our findings demonstrate that human subjects infer both optimal policies and the precision of those inferences, and thus support the notion that humans perform hierarchical probabilistic Bayesian inference. In other words, subjects have to infer both what they should do as well as how confident they are in their choices, where confidence may be encoded by dopaminergic firing. © The Author 2014. Published by Oxford University Press.

  2. The Dopaminergic Midbrain Encodes the Expected Certainty about Desired Outcomes

    PubMed Central

    Schwartenbeck, Philipp; FitzGerald, Thomas H. B.; Mathys, Christoph; Dolan, Ray; Friston, Karl

    2015-01-01

    Dopamine plays a key role in learning; however, its exact function in decision making and choice remains unclear. Recently, we proposed a generic model based on active (Bayesian) inference wherein dopamine encodes the precision of beliefs about optimal policies. Put simply, dopamine discharges reflect the confidence that a chosen policy will lead to desired outcomes. We designed a novel task to test this hypothesis, where subjects played a “limited offer” game in a functional magnetic resonance imaging experiment. Subjects had to decide how long to wait for a high offer before accepting a low offer, with the risk of losing everything if they waited too long. Bayesian model comparison showed that behavior strongly supported active inference, based on surprise minimization, over classical utility maximization schemes. Furthermore, midbrain activity, encompassing dopamine projection neurons, was accurately predicted by trial-by-trial variations in model-based estimates of precision. Our findings demonstrate that human subjects infer both optimal policies and the precision of those inferences, and thus support the notion that humans perform hierarchical probabilistic Bayesian inference. In other words, subjects have to infer both what they should do as well as how confident they are in their choices, where confidence may be encoded by dopaminergic firing. PMID:25056572

  3. Decision time and confidence predict choosers' identification performance in photographic showups

    PubMed Central

    Sauerland, Melanie; Sagana, Anna; Sporer, Siegfried L.; Wixted, John T.

    2018-01-01

    In stark contrast to the multitude of lineup studies that report on the link between decision time, confidence, and identification accuracy, only a few studies have looked at these associations for showups, with results varying widely across studies. We therefore set out to test the individual and combined value of decision time and post-decision confidence for diagnosing the accuracy of positive showup decisions using confidence-accuracy characteristic curves and Bayesian analyses. Three hundred eighty-four participants viewed a stimulus event and were subsequently presented with two showups, which could be target-present or target-absent. As expected, we found a negative decision time-accuracy and a positive post-decision confidence-accuracy correlation for showup selections. Confidence-accuracy characteristic curves demonstrated the expected additive effect of combining both postdictors. Likewise, Bayesian analyses taking into account all possible target-presence base rate values showed that fast and confident identification decisions were more diagnostic than slow or less confident decisions, with the combination of both being most diagnostic for postdicting accurate and inaccurate decisions. The postdictive value of decision time and post-decision confidence was higher when the prior probability that the suspect is the perpetrator was high compared to when it was low. The frequent use of showups in practice emphasizes the importance of these findings for court proceedings. Overall, these findings support the idea that courts should have most trust in showup identifications that were made fast and confidently, and least in showup identifications that were made slowly and with low confidence. PMID:29346394

  4. Decision time and confidence predict choosers' identification performance in photographic showups.

    PubMed

    Sauerland, Melanie; Sagana, Anna; Sporer, Siegfried L; Wixted, John T

    2018-01-01

    In stark contrast to the multitude of lineup studies that report on the link between decision time, confidence, and identification accuracy, only a few studies have looked at these associations for showups, with results varying widely across studies. We therefore set out to test the individual and combined value of decision time and post-decision confidence for diagnosing the accuracy of positive showup decisions using confidence-accuracy characteristic curves and Bayesian analyses. Three hundred eighty-four participants viewed a stimulus event and were subsequently presented with two showups, which could be target-present or target-absent. As expected, we found a negative decision time-accuracy and a positive post-decision confidence-accuracy correlation for showup selections. Confidence-accuracy characteristic curves demonstrated the expected additive effect of combining both postdictors. Likewise, Bayesian analyses taking into account all possible target-presence base rate values showed that fast and confident identification decisions were more diagnostic than slow or less confident decisions, with the combination of both being most diagnostic for postdicting accurate and inaccurate decisions. The postdictive value of decision time and post-decision confidence was higher when the prior probability that the suspect is the perpetrator was high compared to when it was low. The frequent use of showups in practice emphasizes the importance of these findings for court proceedings. Overall, these findings support the idea that courts should have most trust in showup identifications that were made fast and confidently, and least in showup identifications that were made slowly and with low confidence.

  5. Analysis of trend changes in Northern African palaeo-climate by using Bayesian inference

    NASA Astrophysics Data System (ADS)

    Schütz, Nadine; Trauth, Martin H.; Holschneider, Matthias

    2010-05-01

    Climate variability of Northern Africa is of high interest due to the climate-evolutionary linkages under study. The reconstruction of the palaeo-climate over long time scales, including the expected linkages (> 3 Ma), is mainly accessible through proxy data from deep sea drilling cores. Concentrating on published data sets, we try to decipher rhythms and trends and to detect correlations between different proxy time series using advanced mathematical methods. Our preliminary data are dust concentrations, an indicator for climatic changes such as humidity, from the ODP sites 659, 721 and 967 situated around Northern Africa. Our interest is in challenging the available time series with advanced statistical methods to detect significant trend changes and to compare different model assumptions. For that purpose, we want to avoid rescaling the time axis to obtain equidistant time steps for filtering methods. Additionally, we demand a plausible description of the errors for the estimated parameters, in terms of confidence intervals. Finally, depending on which model we restrict ourselves to, we also want insight into the parameter structure of the assumed models. To gain this information, we focus on Bayesian inference, formulating the problem as a linear mixed model so that the expectation and deviation are of linear structure. Using the Bayesian method, we can formulate the posterior density as a function of the model parameters and calculate this probability density in the parameter space. Depending on which parameters are of interest, we analytically and numerically marginalize the posterior with respect to the remaining parameters of less interest. We apply a simple linear mixed model to calculate the posterior densities of the ODP sites 659 and 721 concerning the last 5 Ma at maximum. From preliminary calculations on these data sets, we can confirm results gained by the method of breakfit regression combined with block bootstrapping ([1]). We obtain a significant change point around 1.63-1.82 Ma, which correlates with a global climate transition due to the establishment of the Walker circulation ([2]). Furthermore, we detect another significant change point around 2.7-3.2 Ma, which correlates with the end of the Pliocene warm period (permanent El Niño-like conditions) and the onset of a colder global climate ([3], [4]). The discussion of the algorithm, the results of calculated confidence intervals, the available information about the applied model in the parameter space, and the comparison of multiple change point models will be presented. [1] Trauth, M.H., et al., Quaternary Science Reviews, 28, 2009 [2] Wara, M.W., et al., Science, Vol. 309, 2005 [3] Chiang, J.C.H., Annual Review of Earth and Planetary Sciences, Vol. 37, 2009 [4] deMenocal, P., Earth and Planetary Science Letters, 220, 2004
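
    The change-point analysis sketched below is a simplified stand-in for the approach described above: for a single mean shift in a Gaussian series, the posterior over the change-point index is evaluated on a grid and a 95% credible set is read off. Segment means are profiled out rather than marginalized exactly, and the series is synthetic, so this only caricatures the full linear-mixed-model treatment.

        import numpy as np

        rng = np.random.default_rng(0)
        # synthetic record with a mean shift at index 60 (stand-in for a dust proxy series)
        y = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(1.5, 1.0, 40)])
        n = y.size

        log_post = np.full(n, -np.inf)
        for tau in range(5, n - 5):                  # interior change points only
            seg1, seg2 = y[:tau], y[tau:]
            # profile out the segment means; Gaussian likelihood with sigma fixed at 1
            rss = np.sum((seg1 - seg1.mean()) ** 2) + np.sum((seg2 - seg2.mean()) ** 2)
            log_post[tau] = -0.5 * rss               # flat prior over tau
        post = np.exp(log_post - log_post.max())
        post /= post.sum()

        # highest-posterior 95% credible set for the change-point index
        order = np.argsort(post)[::-1]
        cred = order[np.cumsum(post[order]) <= 0.95]
        print("MAP change point:", int(np.argmax(post)),
              "95% credible set:", cred.min(), "-", cred.max())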

  6. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

    PubMed

    Terry, Leann; Kelley, Ken

    2012-11-01

    Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
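
    The accuracy-in-parameter-estimation idea reduces to: increase n until the expected confidence interval width falls below a target. The sketch below does this for a Pearson correlation via Fisher's z rather than for a composite reliability coefficient, and it omits the assurance refinement, so it is a stand-in for the paper's methods, not a reimplementation.

        import numpy as np
        from scipy.stats import norm

        def expected_width_fisher_z(rho, n, conf=0.95):
            # CI for rho via Fisher z: arctanh(r) +/- z_crit / sqrt(n - 3), back-transformed
            zc = norm.ppf(0.5 + conf / 2)
            z0 = np.arctanh(rho)
            return np.tanh(z0 + zc / np.sqrt(n - 3)) - np.tanh(z0 - zc / np.sqrt(n - 3))

        def plan_n(rho, target_width, n_max=100000):
            # smallest n whose expected width is at most the target
            # (uses the point estimate only; the assurance step would add a margin)
            n = 10
            while expected_width_fisher_z(rho, n) > target_width and n < n_max:
                n += 1
            return n

        print(plan_n(rho=0.8, target_width=0.10))   # n needed for a CI no wider than 0.10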

  7. Confidence intervals for the between-study variance in random-effects meta-analysis using generalised heterogeneity statistics: should we use unequal tails?

    PubMed

    Jackson, Dan; Bowden, Jack

    2016-09-07

    Confidence intervals for the between study variance are useful in random-effects meta-analyses because they quantify the uncertainty in the corresponding point estimates. Methods for calculating these confidence intervals have been developed that are based on inverting hypothesis tests using generalised heterogeneity statistics. Whilst, under the random effects model, these new methods furnish confidence intervals with the correct coverage, the resulting intervals are usually very wide, making them uninformative. We discuss a simple strategy for obtaining 95 % confidence intervals for the between-study variance with a markedly reduced width, whilst retaining the nominal coverage probability. Specifically, we consider the possibility of using methods based on generalised heterogeneity statistics with unequal tail probabilities, where the tail probability used to compute the upper bound is greater than 2.5 %. This idea is assessed using four real examples and a variety of simulation studies. Supporting analytical results are also obtained. Our results provide evidence that using unequal tail probabilities can result in shorter 95 % confidence intervals for the between-study variance. We also show some further results for a real example that illustrates how shorter confidence intervals for the between-study variance can be useful when performing sensitivity analyses for the average effect, which is usually the parameter of primary interest. We conclude that using unequal tail probabilities when computing 95 % confidence intervals for the between-study variance, when using methods based on generalised heterogeneity statistics, can result in shorter confidence intervals. We suggest that those who find the case for using unequal tail probabilities convincing should use the '1-4 % split', where greater tail probability is allocated to the upper confidence bound. The 'width-optimal' interval that we present deserves further investigation.
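
    The unequal-tails idea is easy to demonstrate in a much simpler setting than generalised heterogeneity statistics: a chi-squared pivot for a normal variance. The sketch below compares the equal-tail 95% interval with a '1-4% split' allocating the greater tail probability to the upper bound; the inputs are invented.

        from scipy.stats import chi2

        def var_ci(s2, df, tail_low, tail_up):
            # pivot: df * s2 / sigma^2 ~ chi2(df); tail_low + tail_up = 0.05 for a 95% CI
            return (df * s2 / chi2.ppf(1 - tail_low, df),
                    df * s2 / chi2.ppf(tail_up, df))

        s2, df = 2.0, 10
        print("equal tails (2.5/2.5):", var_ci(s2, df, 0.025, 0.025))
        print("1-4% split           :", var_ci(s2, df, 0.01, 0.04))   # shorter interval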

  8. Error Quantification and Confidence Assessment of Aerothermal Model Predictions for Hypersonic Aircraft (Preprint)

    DTIC Science & Technology

    2013-09-01

    based confidence metric is used to compare several different model predictions with the experimental data. II. Aerothermal Model Definition and... whereas 5% measurement uncertainty is assumed for the aerodynamic pressure and heat flux measurements. Bayesian updating according... definitive conclusions for these particular aerodynamic models. However, given the confidence associated with the predictions for Run 30 (H/D

  9. Improved confidence intervals when the sample is counted an integer times longer than the blank.

    PubMed

    Potter, William Edward; Strzelczyk, Jadwiga Jodi

    2011-05-01

    Past computer solutions for confidence intervals in paired counting are extended to the case where the ratio of the sample count time to the blank count time is taken to be an integer, IRR. Previously, confidence intervals have been named Neyman-Pearson confidence intervals; more correctly they should have been named Neyman confidence intervals or simply confidence intervals. The technique utilized mimics a technique used by Pearson and Hartley to tabulate confidence intervals for the expected value of the discrete Poisson and Binomial distributions. The blank count and the contribution of the sample to the gross count are assumed to be Poisson distributed. The expected value of the blank count, in the sample count time, is assumed known. The net count, OC, is taken to be the gross count minus the product of IRR with the blank count. The probability density function (PDF) for the net count can be determined in a straightforward manner.
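
    The PDF of the net count described above can be computed by direct summation: condition on the blank count and accumulate Poisson probabilities for the matching gross counts. The sketch below assumes the abstract's setup (gross ~ Poisson(s + mu), blank ~ Poisson(mu/IRR), OC = gross - IRR x blank, with mu the known expected blank count in the sample count time); the parameter values are arbitrary.

        import numpy as np
        from scipy.stats import poisson

        def net_count_pmf(s, mu, irr, k_max=40, b_max=60):
            # gross ~ Poisson(s + mu); blank ~ Poisson(mu / irr); OC = gross - irr * blank
            b = np.arange(b_max + 1)
            pb = poisson.pmf(b, mu / irr)
            pmf = {}
            for k in range(-irr * b_max, k_max + 1):
                g = k + irr * b                          # gross counts giving OC = k
                pg = np.where(g >= 0, poisson.pmf(np.clip(g, 0, None), s + mu), 0.0)
                pmf[k] = float(np.sum(pb * pg))
            return pmf

        pmf = net_count_pmf(s=5.0, mu=3.0, irr=2)
        print("total mass:", sum(pmf.values()))          # ~1 if the grids are wide enough
        print("P(OC <= 0):", sum(p for k, p in pmf.items() if k <= 0))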

  10. Exact Scheffé-type confidence intervals for output from groundwater flow models: 2. Combined use of hydrogeologic information and calibration data

    USGS Publications Warehouse

    Cooley, Richard L.

    1993-01-01

    Calibration data (observed values corresponding to model-computed values of dependent variables) are incorporated into a general method of computing exact Scheffé-type confidence intervals analogous to the confidence intervals developed in part 1 (Cooley, this issue) for a function of parameters derived from a groundwater flow model. Parameter uncertainty is specified by a distribution of parameters conditioned on the calibration data. This distribution was obtained as a posterior distribution by applying Bayes' theorem to the hydrogeologically derived prior distribution of parameters from part 1 and a distribution of differences between the calibration data and corresponding model-computed dependent variables. Tests show that the new confidence intervals can be much smaller than the intervals of part 1 because the prior parameter variance-covariance structure is altered so that combinations of parameters that give poor model fit to the data are unlikely. The confidence intervals of part 1 and the new confidence intervals can be effectively employed in a sequential method of model construction whereby new information is used to reduce confidence interval widths at each stage.

  11. Graphing within-subjects confidence intervals using SPSS and S-Plus.

    PubMed

    Wright, Daniel B

    2007-02-01

    Within-subjects confidence intervals are often appropriate to report and to display. Loftus and Masson (1994) have reported methods to calculate these, and their use is becoming common. In the present article, procedures for calculating within-subjects confidence intervals in SPSS and S-Plus are presented (an R version is on the accompanying Web site). The procedure in S-Plus allows the user to report the bias corrected and adjusted bootstrap confidence intervals as well as the standard confidence intervals based on traditional methods. The presented code can be easily altered to fit the individual user's needs.
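
    For readers without SPSS or S-Plus, the normalization idea behind within-subjects intervals can be sketched in a few lines. The example below uses Cousineau-style subject centering with Morey's correction, which approximates but is not identical to the Loftus-Masson procedure; the data matrix is invented.

        import numpy as np
        from scipy.stats import t

        # rows = subjects, columns = within-subject conditions (toy numbers)
        y = np.array([[10., 12., 14.],
                      [11., 13., 16.],
                      [ 9., 11., 13.],
                      [12., 15., 17.]])
        n, k = y.shape

        # remove each subject's mean, add back the grand mean (Cousineau normalization)
        y_norm = y - y.mean(axis=1, keepdims=True) + y.mean()
        # Morey correction for the bias in the normalized variances
        sem = y_norm.std(axis=0, ddof=1) / np.sqrt(n) * np.sqrt(k / (k - 1))
        half = t.ppf(0.975, df=n - 1) * sem
        for j in range(k):
            print(f"condition {j}: {y[:, j].mean():.2f} +/- {half[j]:.2f}")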

  12. Small sample mediation testing: misplaced confidence in bootstrapped confidence intervals.

    PubMed

    Koopman, Joel; Howe, Michael; Hollenbeck, John R; Sin, Hock-Peng

    2015-01-01

    Bootstrapping is an analytical tool commonly used in psychology to test the statistical significance of the indirect effect in mediation models. Bootstrapping proponents have particularly advocated for its use for samples of 20-80 cases. This advocacy has been heeded, especially in the Journal of Applied Psychology, as researchers are increasingly utilizing bootstrapping to test mediation with samples in this range. We discuss reasons to be concerned with this escalation, and in a simulation study focused specifically on this range of sample sizes, we demonstrate not only that bootstrapping has insufficient statistical power to provide a rigorous hypothesis test in most conditions but also that bootstrapping has a tendency to exhibit an inflated Type I error rate. We then extend our simulations to investigate an alternative empirical resampling method as well as a Bayesian approach and demonstrate that they exhibit comparable statistical power to bootstrapping in small samples without the associated inflated Type I error. Implications for researchers testing mediation hypotheses in small samples are presented. For researchers wishing to use these methods in their own research, we have provided R syntax in the online supplemental materials. (c) 2015 APA, all rights reserved.
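
    A minimal version of the percentile bootstrap discussed above, for a single-mediator model with n = 50 (inside the 20-80 range at issue): resample cases, recompute the indirect effect a*b, and read off the percentile interval. The data are simulated and the bias-corrected variant is omitted.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 50
        x = rng.normal(size=n)
        m = 0.4 * x + rng.normal(size=n)        # mediator
        y = 0.3 * m + rng.normal(size=n)        # outcome

        def indirect(x, m, y):
            a = np.polyfit(x, m, 1)[0]          # slope of m on x
            X = np.column_stack([np.ones_like(x), m, x])
            b = np.linalg.lstsq(X, y, rcond=None)[0][1]   # slope of y on m given x
            return a * b

        boot = []
        for _ in range(5000):
            idx = rng.integers(0, n, n)         # resample cases with replacement
            boot.append(indirect(x[idx], m[idx], y[idx]))
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"ab = {indirect(x, m, y):.3f}, 95% percentile CI [{lo:.3f}, {hi:.3f}]")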

  13. Myocardial perfusion magnetic resonance imaging using sliding-window conjugate-gradient highly constrained back-projection reconstruction for detection of coronary artery disease.

    PubMed

    Ma, Heng; Yang, Jun; Liu, Jing; Ge, Lan; An, Jing; Tang, Qing; Li, Han; Zhang, Yu; Chen, David; Wang, Yong; Liu, Jiabin; Liang, Zhigang; Lin, Kai; Jin, Lixin; Bi, Xiaoming; Li, Kuncheng; Li, Debiao

    2012-04-15

    Myocardial perfusion magnetic resonance imaging (MRI) with sliding-window conjugate-gradient highly constrained back-projection reconstruction (SW-CG-HYPR) allows whole left ventricular coverage, improved temporal and spatial resolution and signal/noise ratio, and reduced cardiac motion-related image artifacts. The accuracy of this technique for detecting coronary artery disease (CAD) has not been determined in a large number of patients. We prospectively evaluated the diagnostic performance of myocardial perfusion MRI with SW-CG-HYPR in patients with suspected CAD. A total of 50 consecutive patients who were scheduled for coronary angiography with suspected CAD underwent myocardial perfusion MRI with SW-CG-HYPR at 3.0 T. The perfusion defects were interpreted qualitatively by 2 blinded observers and were correlated with x-ray angiographic stenoses ≥50%. The prevalence of CAD was 56%. In the per-patient analysis, the sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of SW-CG-HYPR were 96% (95% confidence interval 82% to 100%), 82% (95% confidence interval 60% to 95%), 87% (95% confidence interval 70% to 96%), 95% (95% confidence interval 74% to 100%), and 90% (95% confidence interval 82% to 98%), respectively. In the per-vessel analysis, the corresponding values were 98% (95% confidence interval 91% to 100%), 89% (95% confidence interval 80% to 94%), 86% (95% confidence interval 76% to 93%), 99% (95% confidence interval 93% to 100%), and 93% (95% confidence interval 89% to 97%), respectively. In conclusion, myocardial perfusion MRI using SW-CG-HYPR allows whole left ventricular coverage and high resolution and has high diagnostic accuracy in patients with suspected CAD. Copyright © 2012 Elsevier Inc. All rights reserved.
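
    The per-patient intervals above are the kind produced by standard exact methods for a binomial proportion. A sketch of the Clopper-Pearson interval follows; the counts (27 true positives out of 28 diseased patients) are illustrative values chosen to roughly match the reported sensitivity, not figures taken from the paper.

        from scipy.stats import beta

        def clopper_pearson(successes, n, conf=0.95):
            # exact binomial CI via the beta-distribution representation
            a = (1 - conf) / 2
            lo = beta.ppf(a, successes, n - successes + 1) if successes > 0 else 0.0
            hi = beta.ppf(1 - a, successes + 1, n - successes) if successes < n else 1.0
            return lo, hi

        # illustrative: 27 of 28 diseased patients detected (sensitivity ~96%)
        print(clopper_pearson(27, 28))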

  14. Daily average temperature and mortality among the elderly: a meta-analysis and systematic review of epidemiological evidence

    NASA Astrophysics Data System (ADS)

    Yu, Weiwei; Mengersen, Kerrie; Wang, Xiaoyu; Ye, Xiaofang; Guo, Yuming; Pan, Xiaochuan; Tong, Shilu

    2012-07-01

    The impact of climate change on the health of vulnerable groups such as the elderly has been of increasing concern. However, to date there has been no meta-analysis of current literature relating to the effects of temperature fluctuations upon mortality amongst the elderly. We synthesised risk estimates of the overall impact of daily mean temperature on elderly mortality across different continents. A comprehensive literature search was conducted using MEDLINE and PubMed to identify papers published up to December 2010. Selection criteria including suitable temperature indicators, endpoints, study-designs and identification of threshold were used. A two-stage Bayesian hierarchical model was performed to summarise the percent increase in mortality with a 1°C temperature increase (or decrease) with 95% confidence intervals in hot (or cold) days, with lagged effects also measured. Fifteen studies met the eligibility criteria and almost 13 million elderly deaths were included in this meta-analysis. In total, there was a 2-5% increase for a 1°C increment during hot temperature intervals, and a 1-2% increase in all-cause mortality for a 1°C decrease during cold temperature intervals. Lags of up to 9 days in exposure to cold temperature intervals were substantially associated with all-cause mortality, but no substantial lagged effects were observed for hot intervals. Thus, both hot and cold temperatures substantially increased mortality among the elderly, but the magnitude of heat-related effects seemed to be larger than that of cold effects within a global context.

  15. Small area variation in diabetes prevalence in Puerto Rico.

    PubMed

    Tierney, Edward F; Burrows, Nilka R; Barker, Lawrence E; Beckles, Gloria L; Boyle, James P; Cadwell, Betsy L; Kirtland, Karen A; Thompson, Theodore J

    2013-06-01

    To estimate the 2009 prevalence of diagnosed diabetes in Puerto Rico among adults ≥ 20 years of age in order to gain a better understanding of its geographic distribution so that policymakers can more efficiently target prevention and control programs. A Bayesian multilevel model was fitted to the combined 2008-2010 Behavioral Risk Factor Surveillance System and 2009 United States Census data to estimate diabetes prevalence for each of the 78 municipios (counties) in Puerto Rico. The mean unadjusted estimate for all counties was 14.3% (range by county, 9.9%-18.0%). The average width of the confidence intervals was 6.2%. Adjusted and unadjusted estimates differed little. These 78 county estimates are higher on average and showed less variability (i.e., had a smaller range) than the previously published estimates of the 2008 diabetes prevalence for all United States counties (mean, 9.9%; range, 3.0%-18.2%).

  16. Explorations in Statistics: Confidence Intervals

    ERIC Educational Resources Information Center

    Curran-Everett, Douglas

    2009-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This third installment of "Explorations in Statistics" investigates confidence intervals. A confidence interval is a range that we expect, with some level of confidence, to include the true value of a population parameter…

  17. ENSURF: multi-model sea level forecast - implementation and validation results for the IBIROOS and Western Mediterranean regions

    NASA Astrophysics Data System (ADS)

    Pérez, B.; Brouwer, R.; Beckers, J.; Paradis, D.; Balseiro, C.; Lyons, K.; Cure, M.; Sotillo, M. G.; Hackett, B.; Verlaan, M.; Fanjul, E. A.

    2012-03-01

    ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecast that makes use of several storm surge or circulation models and near-real-time tide gauge data in the region, with the following main goals: 1. providing easy access to existing forecasts, as well as to their performance and model validation, by means of an adequate visualization tool; 2. generating better forecasts of sea level, including confidence intervals, by means of the Bayesian Model Average technique (BMA). The Bayesian Model Average technique generates an overall forecast probability density function (PDF) by making a weighted average of the individual forecasts' PDFs; the weights represent the Bayesian likelihood that a model will give the correct forecast and are continuously updated based on the performance of the models during a recent training period. This implies that the technique needs the availability of sea level data from tide gauges in near-real time. The system was implemented for the European Atlantic facade (IBIROOS region) and Western Mediterranean coast based on the MATROOS visualization tool developed by Deltares. Results of validation of the different models and of the BMA implementation for the main harbours are presented for these regions, where this kind of activity is performed for the first time. The system is currently operational at Puertos del Estado and has proved useful in the detection of calibration problems in some of the circulation models, in the identification of systematic differences between baroclinic and barotropic models for sea level forecasts, and in demonstrating the feasibility of providing an overall probabilistic forecast based on the BMA method.
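
    The BMA combination step is a weighted mixture of per-model forecast densities. The sketch below combines three hypothetical Gaussian sea level forecasts with fixed weights and reads a 95% interval off the mixture; the training-period updating of the weights, central to the operational system, is omitted.

        import numpy as np
        from scipy.stats import norm

        # toy forecasts from three models for one harbour (all numbers invented)
        means = np.array([0.52, 0.61, 0.47])      # sea level forecasts, metres
        sigmas = np.array([0.05, 0.08, 0.06])     # per-model spreads
        weights = np.array([0.5, 0.2, 0.3])       # BMA weights from a training period

        x = np.linspace(0.2, 0.9, 701)
        pdf = np.sum(weights[:, None] * norm.pdf(x, means[:, None], sigmas[:, None]), axis=0)

        cdf = np.cumsum(pdf); cdf /= cdf[-1]      # numerical CDF of the mixture
        lo = x[np.searchsorted(cdf, 0.025)]
        hi = x[np.searchsorted(cdf, 0.975)]
        mean = np.trapz(x * pdf, x) / np.trapz(pdf, x)
        print(f"BMA mean forecast {mean:.3f} m, 95% interval [{lo:.3f}, {hi:.3f}] m")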

  18. Bayesian model averaging method for evaluating associations between air pollution and respiratory mortality: a time-series study

    PubMed Central

    Fang, Xin; Li, Runkui; Kan, Haidong; Bottai, Matteo; Fang, Fang

    2016-01-01

    Objective To demonstrate an application of Bayesian model averaging (BMA) with generalised additive mixed models (GAMM) and provide a novel modelling technique to assess the association between inhalable coarse particles (PM10) and respiratory mortality in time-series studies. Design A time-series study using a regional death registry between 2009 and 2010. Setting 8 districts in a large metropolitan area in Northern China. Participants 9559 permanent residents of the 8 districts who died of respiratory diseases between 2009 and 2010. Main outcome measures Per cent increase in daily respiratory mortality rate (MR) per interquartile range (IQR) increase of PM10 concentration and corresponding 95% confidence interval (CI) in single-pollutant and multipollutant (including NOx, CO) models. Results The Bayesian model averaged GAMM (GAMM+BMA) and the optimal GAMM of PM10, multipollutants and principal components (PCs) of multipollutants showed comparable results for the effect of PM10 on daily respiratory MR, that is, one IQR increase in PM10 concentration corresponded to 1.38% vs 1.39%, 1.81% vs 1.83% and 0.87% vs 0.88% increase, respectively, in daily respiratory MR. However, GAMM+BMA gave slightly but noticeably wider CIs for the single-pollutant model (−1.09 to 4.28 vs −1.08 to 3.93) and the PCs-based model (−2.23 to 4.07 vs −2.03 to 3.88). The CIs of the multiple-pollutant model from the two methods are similar, that is, −1.12 to 4.85 versus −1.11 to 4.83. Conclusions The BMA method may represent a useful tool for modelling uncertainty in time-series studies when evaluating the effect of air pollution on fatal health outcomes. PMID:27531727

  19. Diagnostic efficacy of microscopy, rapid diagnostic test and polymerase chain reaction for malaria using bayesian latent class analysis.

    PubMed

    Saha, Sreemanti; Narang, Rahul; Deshmukh, Pradeep; Pote, Kiran; Anvikar, Anup; Narang, Pratibha

    2017-01-01

    The diagnostic techniques for malaria are undergoing a change depending on the availability of newer diagnostics and the annual parasite index of infection in a particular area. At the country level, guidelines are available for the selection of diagnostic tests; however, at the local level, this decision is made based on the malaria situation in the area. The tests are evaluated against a gold standard, and if that standard has limitations, it becomes difficult to compare other available tests. Bayesian latent class analysis computes its internal standard rather than using the conventional gold standard and thereby helps in comparing various tests, including the conventional gold standard itself. In a cross-sectional study conducted in a tertiary care hospital setting, we evaluated smear microscopy, a rapid diagnostic test (RDT), and polymerase chain reaction (PCR) for the diagnosis of malaria using Bayesian latent class analysis. We found the magnitude of malaria to be 17.7% (95% confidence interval: 12.5%-23.9%) among the study subjects. In the present study, the sensitivity of microscopy was 63%, but it had very high specificity (99.4%). Sensitivity and specificity of RDT and PCR were high, with RDT having a marginally higher sensitivity (94% vs. 90%) and specificity (99% vs. 95%). On comparison of likelihood ratios (LRs), RDT had the highest LR for a positive test result (175) and the lowest LR for a negative test result (0.058) among the three tests. In settings like ours, conventional smear microscopy may be replaced with RDT; as we move toward elimination and facilities become available, PCR may be brought in to detect cases with lower parasitaemia.
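
    The likelihood ratios quoted above follow directly from the definitions LR+ = sensitivity/(1 - specificity) and LR- = (1 - sensitivity)/specificity. The sketch below applies these formulas to the rounded values in the abstract; the published LRs were presumably computed from unrounded estimates, so the outputs will not reproduce them exactly.

        def likelihood_ratios(sens, spec):
            # LR+ = P(T+|D+) / P(T+|D-);  LR- = P(T-|D+) / P(T-|D-)
            return sens / (1 - spec), (1 - sens) / spec

        for name, sens, spec in [("microscopy", 0.63, 0.994),
                                 ("RDT",        0.94, 0.99),
                                 ("PCR",        0.90, 0.95)]:
            lr_pos, lr_neg = likelihood_ratios(sens, spec)
            print(f"{name}: LR+ = {lr_pos:.1f}, LR- = {lr_neg:.3f}")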

  20. Using an R Shiny to Enhance the Learning Experience of Confidence Intervals

    ERIC Educational Resources Information Center

    Williams, Immanuel James; Williams, Kelley Kim

    2018-01-01

    Many students find understanding confidence intervals difficult, especially because of the amalgamation of concepts such as confidence levels, standard error, point estimates and sample sizes. An R Shiny application was created to assist the learning process of confidence intervals using graphics and data from the US National Basketball…

  1. Alternative Confidence Interval Methods Used in the Diagnostic Accuracy Studies

    PubMed Central

    Erdoğan, Semra; Gülhan, Orekıcı Temel

    2016-01-01

    Background/Aim. It is necessary to decide whether newly improved methods are better than the standard or reference test. To decide whether a new diagnostic test is better than the gold standard test/imperfect standard test, the differences of the estimated sensitivities/specificities are calculated with the help of information obtained from samples. However, to generalize this value to the population, it should be given with confidence intervals. The aim of this study is to evaluate, on a clinical application, the confidence interval methods developed for the difference between two dependent sensitivity/specificity values. Materials and Methods. In this study, confidence interval methods such as Asymptotic Intervals, Conditional Intervals, Unconditional Interval, Score Intervals, and Nonparametric Methods Based on Relative Effects Intervals are used. As the clinical application, data used in the diagnostic study by Dickel et al. (2010) have been taken as a sample. Results. The results for the alternative confidence interval methods for Nickel Sulfate, Potassium Dichromate, and Lanolin Alcohol are given in a table. Conclusion. When choosing among the confidence interval methods, researchers have to consider whether the case to be compared is a single ratio or a difference of dependent binary ratios, the correlation coefficient between the rates in the two dependent ratios, and the sample sizes. PMID:27478491

  2. Alternative Confidence Interval Methods Used in the Diagnostic Accuracy Studies.

    PubMed

    Erdoğan, Semra; Gülhan, Orekıcı Temel

    2016-01-01

    Background/Aim. It is necessary to decide whether newly improved methods are better than the standard or reference test. To decide whether a new diagnostic test is better than the gold standard test/imperfect standard test, the differences of the estimated sensitivities/specificities are calculated with the help of information obtained from samples. However, to generalize this value to the population, it should be given with confidence intervals. The aim of this study is to evaluate, on a clinical application, the confidence interval methods developed for the difference between two dependent sensitivity/specificity values. Materials and Methods. In this study, confidence interval methods such as Asymptotic Intervals, Conditional Intervals, Unconditional Interval, Score Intervals, and Nonparametric Methods Based on Relative Effects Intervals are used. As the clinical application, data used in the diagnostic study by Dickel et al. (2010) have been taken as a sample. Results. The results for the alternative confidence interval methods for Nickel Sulfate, Potassium Dichromate, and Lanolin Alcohol are given in a table. Conclusion. When choosing among the confidence interval methods, researchers have to consider whether the case to be compared is a single ratio or a difference of dependent binary ratios, the correlation coefficient between the rates in the two dependent ratios, and the sample sizes.

  3. Modified Confidence Intervals for the Mean of an Autoregressive Process.

    DTIC Science & Technology

    1985-08-01

    The first...of standard confidence intervals. There are several standard methods of setting confidence intervals in simulations, including the regenerative method, batch means, and time series methods. We will focus on improved confidence intervals for the mean of an autoregressive process, and as such our

  4. Publication Bias in Meta-Analysis: Confidence Intervals for Rosenthal's Fail-Safe Number.

    PubMed

    Fragkos, Konstantinos C; Tsagris, Michail; Frangos, Christos C

    2014-01-01

    The purpose of the present paper is to assess the efficacy of confidence intervals for Rosenthal's fail-safe number. Although Rosenthal's estimator is highly used by researchers, its statistical properties are largely unexplored. First of all, we developed statistical theory which allowed us to produce confidence intervals for Rosenthal's fail-safe number. This was produced by discerning whether the number of studies analysed in a meta-analysis is fixed or random. Each case produces different variance estimators. For a given number of studies and a given distribution, we provided five variance estimators. Confidence intervals are examined with a normal approximation and a nonparametric bootstrap. The accuracy of the different confidence interval estimates was then tested by methods of simulation under different distributional assumptions. The half normal distribution variance estimator has the best probability coverage. Finally, we provide a table of lower confidence intervals for Rosenthal's estimator.

  5. Publication Bias in Meta-Analysis: Confidence Intervals for Rosenthal's Fail-Safe Number

    PubMed Central

    Fragkos, Konstantinos C.; Tsagris, Michail; Frangos, Christos C.

    2014-01-01

    The purpose of the present paper is to assess the efficacy of confidence intervals for Rosenthal's fail-safe number. Although Rosenthal's estimator is highly used by researchers, its statistical properties are largely unexplored. First of all, we developed statistical theory which allowed us to produce confidence intervals for Rosenthal's fail-safe number. This was produced by discerning whether the number of studies analysed in a meta-analysis is fixed or random. Each case produces different variance estimators. For a given number of studies and a given distribution, we provided five variance estimators. Confidence intervals are examined with a normal approximation and a nonparametric bootstrap. The accuracy of the different confidence interval estimates was then tested by methods of simulation under different distributional assumptions. The half normal distribution variance estimator has the best probability coverage. Finally, we provide a table of lower confidence intervals for Rosenthal's estimator. PMID:27437470
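
    Rosenthal's fail-safe number itself is a one-line computation from the studies' z values; the confidence intervals developed in these papers require the variance estimators derived there, which are not reproduced here. A sketch with invented z scores, using the Stouffer combination and one-tailed alpha = .05:

        import numpy as np
        from scipy.stats import norm

        def fail_safe_n(z_values, alpha=0.05):
            # number of unpublished mean-zero studies needed to drag the
            # combined (Stouffer) z below the one-tailed significance cutoff
            k = len(z_values)
            z_crit = norm.ppf(1 - alpha)              # 1.645 for alpha = .05
            return (np.sum(z_values) ** 2) / z_crit ** 2 - k

        zs = np.array([2.1, 1.8, 2.5, 1.2, 2.9])      # illustrative study z scores
        print(round(fail_safe_n(zs)))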

  6. Trends and racial and ethnic disparities in the prevalence of pregestational type 1 and type 2 diabetes in Northern California: 1996-2014.

    PubMed

    Peng, Tiffany Y; Ehrlich, Samantha F; Crites, Yvonne; Kitzmiller, John L; Kuzniewicz, Michael W; Hedderson, Monique M; Ferrara, Assiamira

    2017-02-01

    Despite concern for adverse perinatal outcomes in women with diabetes mellitus before pregnancy, recent data on the prevalence of pregestational type 1 and type 2 diabetes mellitus in the United States are lacking. The purpose of this study was to estimate changes in the prevalence of overall pregestational diabetes mellitus (all types) and pregestational type 1 and type 2 diabetes mellitus and to estimate whether changes varied by race-ethnicity from 1996-2014. We conducted a cohort study among 655,428 pregnancies at a Northern California integrated health delivery system from 1996-2014. Logistic regression analyses provided estimates of prevalence and trends. The age-adjusted prevalence (per 100 deliveries) of overall pregestational diabetes mellitus increased from 1996-1999 to 2012-2014 (from 0.58 [95% confidence interval, 0.54-0.63] to 1.06 [95% confidence interval, 1.00-1.12]; P trend <.0001). Significant increases occurred in all racial-ethnic groups; the largest relative increase was among Hispanic women (121.8% [95% confidence interval, 84.4-166.7]); the smallest relative increase was among non-Hispanic white women (49.6% [95% confidence interval, 27.5-75.4]). The age-adjusted prevalence of pregestational type 1 and type 2 diabetes mellitus increased from 0.14 (95% confidence interval, 0.12-0.16) to 0.23 (95% confidence interval, 0.21-0.27; P trend <.0001) and from 0.42 (95% confidence interval, 0.38-0.46) to 0.78 (95% confidence interval, 0.73-0.83; P trend <.0001), respectively. The greatest relative increase in the prevalence of type 1 diabetes mellitus was in non-Hispanic white women (118.4% [95% confidence interval, 70.0-180.5]), who had the lowest increases in the prevalence of type 2 diabetes mellitus (13.6% [95% confidence interval, -8.0 to 40.1]). The greatest relative increase in the prevalence of type 2 diabetes mellitus was in Hispanic women (125.2% [95% confidence interval, 84.8-174.4]), followed by African American women (102.0% [95% confidence interval, 38.3-194.3]) and Asian women (93.3% [95% confidence interval, 48.9-150.9]). The prevalence of overall pregestational diabetes mellitus and pregestational type 1 and type 2 diabetes mellitus increased from 1996-1999 to 2012-2014 and racial-ethnic disparities were observed, possibly because of differing prevalence of maternal obesity. Targeted prevention efforts, preconception care, and disease management strategies are needed to reduce the burden of diabetes mellitus and its sequelae. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Statistical properties of four effect-size measures for mediation models.

    PubMed

    Miočević, Milica; O'Rourke, Holly P; MacKinnon, David P; Brown, Hendricks C

    2018-02-01

    This project examined the performance of classical and Bayesian estimators of four effect size measures for the indirect effect in a single-mediator model and a two-mediator model. Compared to the proportion and ratio mediation effect sizes, standardized mediation effect-size measures were relatively unbiased and efficient in the single-mediator model and the two-mediator model. Percentile and bias-corrected bootstrap interval estimates of ab/s_Y and ab(s_X)/s_Y in the single-mediator model outperformed interval estimates of the proportion and ratio effect sizes in terms of power, Type I error rate, coverage, imbalance, and interval width. For the two-mediator model, standardized effect-size measures were superior to the proportion and ratio effect-size measures. Furthermore, it was found that Bayesian point and interval summaries of posterior distributions of standardized effect-size measures reduced excessive relative bias for certain parameter combinations. The standardized effect-size measures are the best effect-size measures for quantifying mediated effects.

  8. The use of a physiologically-based extraction test to assess relationships between bioaccessible metals in urban soil and neurodevelopmental conditions in children.

    PubMed

    Hong, Jie; Wang, Yinding; McDermott, Suzanne; Cai, Bo; Aelion, C Marjorie; Lead, Jamie

    2016-05-01

    Intellectual disability (ID) and cerebral palsy (CP) are serious neurodevelopmental conditions, and low birth weight (LBW) is correlated with both ID and CP. The actual causes and mechanisms for each of these child outcomes are not well understood. In this study, the relationship between bioaccessible metal concentrations in urban soil and these child conditions was investigated. A physiologically based extraction test (PBET) mimicking gastric and intestinal processes was applied to measure the bioaccessibility of four metals (cadmium (Cd), chromium (Cr), nickel (Ni), and lead (Pb)) in urban soil, and a Bayesian kriging method was used to estimate metal concentrations at geocoded maternal residential sites. The results showed that bioaccessible concentrations of Cd, Ni, and Pb in the intestinal phase were statistically significantly associated with the child outcomes: lead and nickel were associated with ID, lead and cadmium were associated with LBW, and cadmium was associated with CP. The total concentrations and stomach-phase concentrations showed no significant effects in any of the analyses. For lead, an estimated threshold value was found that was statistically significant in predicting low birth weight. The change point test was statistically significant (p value = 0.045) at an intestinal-phase threshold level of 9.2 mg/kg (95% confidence interval 8.9-9.4, p value = 0.0016), which corresponds to 130.6 mg/kg of total Pb concentration in the soil. This is a narrow confidence interval for an important relationship. Published by Elsevier Ltd.

  9. More accurate, calibrated bootstrap confidence intervals for correlating two autocorrelated climate time series

    NASA Astrophysics Data System (ADS)

    Olafsdottir, Kristin B.; Mudelsee, Manfred

    2013-04-01

    Estimation of Pearson's correlation coefficient between two time series, to evaluate the influence of one time-dependent variable on another, is one of the most frequently used statistical methods in the climate sciences. Various methods are used to estimate confidence intervals to support the correlation point estimate. Many of them make strong mathematical assumptions regarding distributional shape and serial correlation, which are rarely met. More robust statistical methods are needed to increase the accuracy of the confidence intervals. Bootstrap confidence intervals are estimated in the Fortran 90 program PearsonT (Mudelsee, 2003), where the main intention was to get an accurate confidence interval for the correlation coefficient between two time series by taking into account the serial dependence of the process that generated the data. However, Monte Carlo experiments show that the coverage accuracy for smaller data sizes can be improved. Here we adapt the PearsonT program into a new version, called PearsonT3, by calibrating the confidence interval to increase the coverage accuracy. Calibration is a bootstrap resampling technique which basically performs a second bootstrap loop (resampling from the bootstrap resamples). Like the non-calibrated bootstrap confidence intervals, it offers robustness against the data distribution. A pairwise moving block bootstrap is used to preserve the serial correlation of both time series. The calibration is applied to standard-error-based bootstrap Student's t confidence intervals. The performance of the calibrated confidence intervals is examined with Monte Carlo simulations and compared with the performance of confidence intervals without calibration, that is, PearsonT. The coverage accuracy is evidently better for the calibrated confidence intervals, where the coverage error is acceptably small (i.e., within a few percentage points) already for data sizes as small as 20. One form of climate time series is output from numerical models which simulate the climate system. The method is applied to model data from the high-resolution ocean model INALT01, where the relationship between the Agulhas Leakage and the North Brazil Current is evaluated. Preliminary results show significant correlation between the two variables when there is a 10 year lag between them, which is roughly the time it takes the Agulhas Leakage water to reach the North Brazil Current. Mudelsee, M., 2003. Estimating Pearson's correlation coefficient with bootstrap confidence interval from serially dependent time series. Mathematical Geology 35, 651-665.
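
    The pairwise moving block bootstrap is easy to sketch: resample both series with the same block indices so that serial dependence and cross-dependence are preserved. The example below yields an ordinary (non-calibrated) percentile interval on synthetic AR(1)-style data; PearsonT3's calibration would wrap a second bootstrap loop around this one.

        import numpy as np

        rng = np.random.default_rng(7)
        n, block = 120, 10

        def ar1(phi, n):
            # simple AR(1) generator for toy autocorrelated series
            e = rng.normal(size=n); x = np.zeros(n)
            for t in range(1, n):
                x[t] = phi * x[t - 1] + e[t]
            return x

        common = ar1(0.6, n)                      # shared signal induces correlation
        x = common + 0.8 * ar1(0.6, n)
        y = common + 0.8 * ar1(0.6, n)

        def block_indices(n, block):
            starts = rng.integers(0, n - block + 1, size=int(np.ceil(n / block)))
            return np.concatenate([np.arange(s, s + block) for s in starts])[:n]

        boot = []
        for _ in range(2000):
            idx = block_indices(n, block)         # same blocks for both series (pairwise)
            boot.append(np.corrcoef(x[idx], y[idx])[0, 1])
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"r = {np.corrcoef(x, y)[0, 1]:.3f}, block-bootstrap 95% CI [{lo:.3f}, {hi:.3f}]")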

  10. The P Value Problem in Otolaryngology: Shifting to Effect Sizes and Confidence Intervals.

    PubMed

    Vila, Peter M; Townsend, Melanie Elizabeth; Bhatt, Neel K; Kao, W Katherine; Sinha, Parul; Neely, J Gail

    2017-06-01

    There is a lack of reporting effect sizes and confidence intervals in the current biomedical literature. The objective of this article is to present a discussion of the recent paradigm shift encouraging the use of reporting effect sizes and confidence intervals. Although P values help to inform us about whether an effect exists due to chance, effect sizes inform us about the magnitude of the effect (clinical significance), and confidence intervals inform us about the range of plausible estimates for the general population mean (precision). Reporting effect sizes and confidence intervals is a necessary addition to the biomedical literature, and these concepts are reviewed in this article.

  11. Confidence Intervals for Error Rates Observed in Coded Communications Systems

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2015-05-01

    We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful, if any, codeword errors are able to be simulated. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the lesser studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system.
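
    The moment-based BER interval described in the closing paragraph can be sketched directly: treat per-codeword bit-error counts as the independent unit, so that within-codeword dependence is absorbed into the sample variance. Everything below (codeword failure rate, burst-error model) is an invented toy channel, not the article's simulation.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(3)
        N, k = 10000, 1024                       # simulated codewords, bits per codeword
        # toy model: most codewords decode error-free; a failed decode yields a burst
        fail = rng.random(N) < 2e-3
        errors = np.where(fail, rng.binomial(k, 0.3, N), 0)   # bit errors per codeword

        ber_hat = errors.sum() / (N * k)
        # bit errors are dependent within a codeword, so base the CI on the first and
        # second sample moments of errors-per-codeword, not on a per-bit binomial model
        s = errors.std(ddof=1)
        half = norm.ppf(0.975) * s / (k * np.sqrt(N))
        print(f"BER = {ber_hat:.2e}, 95% CI [{ber_hat - half:.2e}, {ber_hat + half:.2e}]")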

  12. Bayesian statistics applied to the location of the source of explosions at Stromboli Volcano, Italy

    USGS Publications Warehouse

    Saccorotti, G.; Chouet, B.; Martini, M.; Scarpa, R.

    1998-01-01

    We present a method for determining the location and spatial extent of the source of explosions at Stromboli Volcano, Italy, based on a Bayesian inversion of the slowness vector derived from frequency-slowness analyses of array data. The method searches for source locations that minimize the error between the expected and observed slowness vectors. For a given set of model parameters, the conditional probability density function of slowness vectors is approximated by a Gaussian distribution of expected errors. The method is tested with synthetics using a five-layer velocity model derived for the north flank of Stromboli and a smoothed velocity model derived from a power-law approximation of the layered structure. Application to data from Stromboli allows for a detailed examination of uncertainties in source location due to experimental errors and incomplete knowledge of the Earth model. Although the solutions are not constrained in the radial direction, excellent resolution is achieved in both transverse and depth directions. Under the assumption that the horizontal extent of the source does not exceed the crater dimension, the 90% confidence region in the estimate of the explosive source location corresponds to a small volume extending from a depth of about 100 m to a maximum depth of about 300 m beneath the active vents, with a maximum likelihood source region located in the 120- to 180-m-depth interval.

  13. Wind power application research on the fusion of the determination and ensemble prediction

    NASA Astrophysics Data System (ADS)

    Lan, Shi; Lina, Xu; Yuzhu, Hao

    2017-07-01

    A fused wind speed product for wind farms is designed using wind speed products from the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble prediction together with professional numerical wind power model products based on Mesoscale Model 5 (MM5) and Beijing Rapid Update Cycle (BJ-RUC), which are suitable for short-term wind power forecasting and electric dispatch. A single-valued forecast is formed by calculating ensemble statistics of the Bayesian probabilistic forecast that represents the uncertainty of the ECMWF ensemble prediction. An autoregressive integrated moving average (ARIMA) model is used to improve the time resolution of the single-valued forecast, and, based on Bayesian model averaging (BMA) and the deterministic numerical model prediction, an optimal wind speed forecasting curve and confidence interval are provided. The results show that the fused forecast clearly improves accuracy relative to the existing numerical forecasting products. Compared with the 0-24 h existing deterministic forecast in the validation period, the mean absolute error (MAE) is decreased by 24.3% and the correlation coefficient (R) is increased by 12.5%. In comparison with the ECMWF ensemble forecast, the MAE is reduced by 11.7% and R is increased by 14.5%. Additionally, the MAE did not increase with increasing forecast lead time.

  14. Combining statistical inference and decisions in ecology.

    PubMed

    Williams, Perry J; Hooten, Mevin B

    2016-09-01

    Statistical decision theory (SDT) is a sub-field of decision theory that formally incorporates statistical investigation into a decision-theoretic framework to account for uncertainties in a decision problem. SDT provides a unifying analysis of three types of information: statistical results from a data set, knowledge of the consequences of potential choices (i.e., loss), and prior beliefs about a system. SDT links the theoretical development of a large body of statistical methods, including point estimation, hypothesis testing, and confidence interval estimation. The theory and application of SDT have mainly been developed and published in the fields of mathematics, statistics, operations research, and other decision sciences, but have had limited exposure in ecology. Thus, we provide an introduction to SDT for ecologists and describe its utility for linking the conventionally separate tasks of statistical investigation and decision making in a single framework. We describe the basic framework of both Bayesian and frequentist SDT, its traditional use in statistics, and discuss its application to decision problems that occur in ecology. We demonstrate SDT with two types of decisions: Bayesian point estimation and an applied management problem of selecting a prescribed fire rotation for managing a grassland bird species. Central to SDT, and decision theory in general, are loss functions. Thus, we also provide basic guidance and references for constructing loss functions for an SDT problem. © 2016 by the Ecological Society of America.
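
    The role of the loss function can be made concrete with posterior draws: different losses turn the same posterior into different point decisions. A sketch, with a gamma distribution standing in for a posterior (all values invented):

        import numpy as np

        rng = np.random.default_rng(11)
        posterior_draws = rng.gamma(shape=2.0, scale=1.5, size=100000)  # stand-in posterior

        # Bayes estimators minimize posterior expected loss:
        #   squared-error loss  -> posterior mean
        #   absolute-error loss -> posterior median
        #   asymmetric linear loss (overestimation penalized c times more)
        #     -> the 1/(1+c) posterior quantile
        c = 3.0
        print("squared loss  :", posterior_draws.mean())
        print("absolute loss :", np.median(posterior_draws))
        print("asymmetric    :", np.quantile(posterior_draws, 1 / (1 + c)))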

  15. Characterizing reliability in a product/process design-assurance program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerscher, W.J. III; Booker, J.M.; Bement, T.R.

    1997-10-01

    Over the years many advancing techniques in the area of reliability engineering have surfaced in the military sphere of influence, and one of these techniques is Reliability Growth Testing (RGT). Private industry has reviewed RGT as part of the solution to its reliability concerns, but many practical considerations have slowed its implementation. Its objective is to demonstrate the reliability requirement of a new product with a specified confidence. This paper speaks directly to that objective but discusses a somewhat different approach to achieving it. Rather than conducting testing as a continuum and developing statistical confidence bands around the results, this Bayesian updating approach starts with a reliability estimate characterized by large uncertainty and then proceeds to reduce the uncertainty by folding in fresh information in a Bayesian framework.

  16. Estimation for coefficient of variation of an extension of the exponential distribution under type-II censoring scheme

    NASA Astrophysics Data System (ADS)

    Bakoban, Rana A.

    2017-08-01

    The coefficient of variation [CV] has several applications in applied statistics. In this paper, we adopt Bayesian and non-Bayesian approaches for the estimation of the CV under type-II censored data from the extension of the exponential distribution [EED]. Point and interval estimates of the CV are obtained using both maximum likelihood and parametric bootstrap techniques. A Bayesian approach with the help of an MCMC method is also presented. A real data set is presented and analyzed, and the results are used to assess the theoretical findings.
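
    The bootstrap step above is easy to sketch for the CV. The example below uses a complete (uncensored) exponential sample, so it sidesteps the type-II censoring handled in the paper and illustrates only the percentile-bootstrap interval.

        import numpy as np

        rng = np.random.default_rng(5)
        # toy complete sample standing in for the censored lifetimes in the paper
        data = rng.exponential(scale=10.0, size=40)

        def cv(x):
            # coefficient of variation: standard deviation over mean
            return x.std(ddof=1) / x.mean()

        boot = [cv(rng.choice(data, size=data.size, replace=True)) for _ in range(5000)]
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"CV = {cv(data):.3f}, bootstrap 95% CI [{lo:.3f}, {hi:.3f}]")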

  17. Bayesian estimation of seasonal course of canopy leaf area index from hyperspectral satellite data

    NASA Astrophysics Data System (ADS)

    Varvia, Petri; Rautiainen, Miina; Seppänen, Aku

    2018-03-01

    In this paper, Bayesian inversion of a physically-based forest reflectance model is investigated to estimate boreal forest canopy leaf area index (LAI) from EO-1 Hyperion hyperspectral data. The data consist of multiple forest stands with different species compositions and structures, imaged in three phases of the growing season. The Bayesian estimates of canopy LAI are compared to reference estimates based on a spectral vegetation index. The forest reflectance model also contains other unknown variables in addition to LAI, for example leaf single scattering albedo and understory reflectance. In the Bayesian approach, these variables are estimated simultaneously with LAI. The feasibility and seasonal variation of these estimates are also examined. Credible intervals for the estimates are calculated and evaluated. The results show that the Bayesian inversion approach is significantly better than using a comparable spectral vegetation index regression.

  18. Comparison of Free-Breathing With Navigator-Triggered Technique in Diffusion Weighted Imaging for Evaluation of Small Hepatocellular Carcinoma: Effect on Image Quality and Intravoxel Incoherent Motion Parameters.

    PubMed

    Shan, Yan; Zeng, Meng-su; Liu, Kai; Miao, Xi-Yin; Lin, Jiang; Fu, Cai xia; Xu, Peng-ju

    2015-01-01

    To evaluate the effect on image quality and intravoxel incoherent motion (IVIM) parameters of small hepatocellular carcinoma (HCC) of the choice between free-breathing (FB) and navigator-triggered (NT) diffusion-weighted (DW) imaging. Thirty patients with 37 small HCCs underwent IVIM DW imaging using 12 b values (0-800 s/mm²) with 2 sequences: NT and FB. A biexponential analysis with the Bayesian method yielded the true diffusion coefficient (D), pseudodiffusion coefficient (D*), and perfusion fraction (f) in small HCCs and liver parenchyma. The apparent diffusion coefficient (ADC) was also calculated. The acquisition time and image quality scores were assessed for the 2 sequences. The independent-sample t test was used to compare image quality, signal intensity ratio, IVIM parameters, and ADC values between the 2 sequences; the reproducibility of IVIM parameters and ADC values between the 2 sequences was assessed with the Bland-Altman method (BA-LA). Image quality with the NT sequence was superior to that with FB acquisition (P = 0.02). The mean acquisition time for the FB scheme was shorter than that of the NT sequence (6 minutes 14 seconds vs 10 minutes 21 seconds ± 10 seconds; P < 0.01). The signal intensity ratio of small HCCs did not vary significantly between the 2 sequences. The ADC and IVIM parameters from the 2 sequences showed no significant difference. Reproducibility of the D* and f parameters in small HCC was poor (BA-LA: 95% confidence interval, -180.8% to 189.2% for D* and -133.8% to 174.9% for f). A moderate reproducibility of the D and ADC parameters was observed (BA-LA: 95% confidence interval, -83.5% to 76.8% for D and -74.4% to 88.2% for ADC) between the 2 sequences. The NT DW imaging technique offers no advantage in IVIM parameter measurements of small HCC except better image quality, whereas the FB technique offers greater confidence in fitted diffusion parameters for matched acquisition periods.

  19. Statistical Surrogate Models for Estimating Probability of High-Consequence Climate Change

    NASA Astrophysics Data System (ADS)

    Field, R.; Constantine, P.; Boslough, M.

    2011-12-01

    We have posed the climate change problem in a framework similar to that used in safety engineering, by acknowledging that probabilistic risk assessments focused on low-probability, high-consequence climate events are perhaps more appropriate than studies focused simply on best estimates. Properly exploring the tails of the distribution requires extensive sampling, which is not possible with existing coupled atmospheric models due to the high computational cost of each simulation. We have developed specialized statistical surrogate models (SSMs) that can be used to make predictions about the tails of the associated probability distributions. An SSM is different from a deterministic surrogate model in that it represents each climate variable of interest as a space/time random field, that is, a random variable for every fixed location in the atmosphere at all times. The SSM can be calibrated to available spatial and temporal data from existing climate databases, or to a collection of outputs from general circulation models. Because of its reduced size and complexity, realizing a large number of independent model outputs from an SSM is computationally straightforward, so that quantifying the risk associated with low-probability, high-consequence climate events becomes feasible. A Bayesian framework was also developed to provide quantitative measures of confidence, via Bayesian credible intervals, for assessing these risks. To illustrate the use of the SSM, we considered two collections of NCAR CCSM 3.0 output data. The first collection corresponds to average December surface temperature for the years 1990-1999, based on a collection of 8 different model runs obtained from the Program for Climate Model Diagnosis and Intercomparison (PCMDI). We calibrated the surrogate model to the available model data and made various point predictions. We also analyzed the average precipitation rate in June, July, and August over a 54-year period, assuming a cyclic Y2K ocean model. We applied the calibrated surrogate model to study the probability that the precipitation rate falls below certain thresholds and used the Bayesian approach to quantify our confidence in these predictions. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.

  20. Minimax confidence intervals in geomagnetism

    NASA Technical Reports Server (NTRS)

    Stark, Philip B.

    1992-01-01

    The present paper uses theory of Donoho (1989) to find lower bounds on the lengths of optimally short fixed-length confidence intervals (minimax confidence intervals) for Gauss coefficients of the field of degree 1-12 using the heat flow constraint. The bounds on optimal minimax intervals are about 40 percent shorter than Backus' intervals: no procedure for producing fixed-length confidence intervals, linear or nonlinear, can give intervals shorter than about 60 percent the length of Backus' in this problem. While both methods rigorously account for the fact that core field models are infinite-dimensional, the application of the techniques to the geomagnetic problem involves approximations and counterfactual assumptions about the data errors, and so these results are likely to be extremely optimistic estimates of the actual uncertainty in Gauss coefficients.

  1. Using Screencast Videos to Enhance Undergraduate Students' Statistical Reasoning about Confidence Intervals

    ERIC Educational Resources Information Center

    Strazzeri, Kenneth Charles

    2013-01-01

    The purposes of this study were to investigate (a) undergraduate students' reasoning about the concepts of confidence intervals (b) undergraduate students' interactions with "well-designed" screencast videos on sampling distributions and confidence intervals, and (c) how screencast videos improve undergraduate students' reasoning ability…

  2. Improved central confidence intervals for the ratio of Poisson means

    NASA Astrophysics Data System (ADS)

    Cousins, R. D.

    The problem of confidence intervals for the ratio of two unknown Poisson means was "solved" decades ago, but a closer examination reveals that the standard solution is far from optimal from the frequentist point of view. We construct a more powerful set of central confidence intervals, each of which is a (typically proper) subinterval of the corresponding standard interval. They also provide upper and lower confidence limits which are more restrictive than the standard limits. The construction follows Neyman's original prescription, though discreteness of the Poisson distribution and the presence of a nuisance parameter (one of the unknown means) lead to slightly conservative intervals. Philosophically, the issue of the appropriateness of the construction method is similar to the issue of conditioning on the margins in 2×2 contingency tables. From a frequentist point of view, the new set maintains (over) coverage of the unknown true value of the ratio of means at each stated confidence level, even though the new intervals are shorter than the old intervals by any measure (except for two cases where they are identical). As an example, when the number 2 is drawn from each Poisson population, the 90% CL central confidence interval on the ratio of means is (0.169, 5.196), rather than (0.108, 9.245). In the cited literature, such confidence intervals have applications in numerous branches of pure and applied science, including agriculture, wildlife studies, manufacturing, medicine, reliability theory, and elementary particle physics.
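
    The standard central interval that this construction improves upon can be reproduced from the conditional binomial argument; a short sketch, assuming equal exposures (it recovers the (0.108, 9.245) interval quoted above when 2 counts are drawn from each population):

        from scipy.stats import beta

        def standard_ratio_ci(x1, x2, cl=0.90):
            # Standard central CI for the ratio of two Poisson means
            # (equal exposures): conditional on n = x1 + x2,
            # x1 ~ Binomial(n, p) with p = lam1/(lam1 + lam2), so a
            # Clopper-Pearson interval for p maps to one for lam1/lam2.
            n, a = x1 + x2, (1 - cl) / 2
            p_lo = beta.ppf(a, x1, n - x1 + 1) if x1 > 0 else 0.0
            p_hi = beta.ppf(1 - a, x1 + 1, n - x1) if x1 < n else 1.0
            lo = p_lo / (1 - p_lo)
            hi = p_hi / (1 - p_hi) if p_hi < 1 else float("inf")
            return lo, hi

        # Reproduces the "standard" interval quoted in the abstract; the
        # paper's improved construction shrinks it to about (0.169, 5.196).
        print(standard_ratio_ci(2, 2))   # approx (0.108, 9.245)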

  3. CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.

    USGS Publications Warehouse

    Cooley, Richard L.; Vecchia, Aldo V.

    1987-01-01

    A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.
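
    A toy version of the Monte Carlo step, with a hypothetical two-parameter model standing in for a calibrated ground-water flow model; the parameter ranges, ordering constraint, and error scale are all illustrative:

        import numpy as np

        rng = np.random.default_rng(1)

        def model_output(k1, k2):
            # Toy stand-in for a prediction from a calibrated
            # ground-water flow model (e.g., head at a well).
            return 100.0 / k1 + 20.0 / k2

        # Extreme ranges for the parameters, with an ordering
        # constraint (k1 > k2) enforced by rejection.
        n = 100_000
        k1 = rng.uniform(5.0, 15.0, n)
        k2 = rng.uniform(1.0, 10.0, n)
        keep = k1 > k2
        out = model_output(k1[keep], k2[keep])

        # Confidence interval: parameter uncertainty only.
        ci = np.percentile(out, [2.5, 97.5])
        # Prediction interval: add random error in the dependent variable,
        # which widens the interval, as the hypothetical example found.
        pi = np.percentile(out + rng.normal(0.0, 2.0, out.size), [2.5, 97.5])
        print("95% CI:", ci, " 95% PI:", pi)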

  4. Bootstrapping Confidence Intervals for Robust Measures of Association.

    ERIC Educational Resources Information Center

    King, Jason E.

    A Monte Carlo simulation study was conducted to determine the bootstrap correction formula yielding the most accurate confidence intervals for robust measures of association. Confidence intervals were generated via the percentile, adjusted, BC, and BC(a) bootstrap procedures and applied to the Winsorized, percentage bend, and Pearson correlation…

  5. Interpretation of Confidence Interval Facing the Conflict

    ERIC Educational Resources Information Center

    Andrade, Luisa; Fernández, Felipe

    2016-01-01

    As literature has reported, it is usual that university students in statistics courses, and even statistics teachers, interpret the confidence level associated with a confidence interval as the probability that the parameter value will be between the lower and upper interval limits. To confront this misconception, class activities have been…

  6. Evaluating Independent Proportions for Statistical Difference, Equivalence, Indeterminacy, and Trivial Difference Using Inferential Confidence Intervals

    ERIC Educational Resources Information Center

    Tryon, Warren W.; Lewis, Charles

    2009-01-01

    Tryon presented a graphic inferential confidence interval (ICI) approach to analyzing two independent and dependent means for statistical difference, equivalence, replication, indeterminacy, and trivial difference. Tryon and Lewis corrected the reduction factor used to adjust descriptive confidence intervals (DCIs) to create ICIs and introduced…

  7. Four applications of permutation methods to testing a single-mediator model.

    PubMed

    Taylor, Aaron B; MacKinnon, David P

    2012-09-01

    Four applications of permutation tests to the single-mediator model are described and evaluated in this study. Permutation tests work by rearranging data in many possible ways in order to estimate the sampling distribution for the test statistic. The four applications to mediation evaluated here are the permutation test of ab, the permutation joint significance test, and the noniterative and iterative permutation confidence intervals for ab. A Monte Carlo simulation study was used to compare these four tests with the four best available tests for mediation found in previous research: the joint significance test, the distribution of the product test, and the percentile and bias-corrected bootstrap tests. We compared the different methods on Type I error, power, and confidence interval coverage. The noniterative permutation confidence interval for ab was the best performer among the new methods. It successfully controlled Type I error, had power nearly as good as the most powerful existing methods, and had better coverage than any existing method. The iterative permutation confidence interval for ab had lower power than do some existing methods, but it performed better than any other method in terms of coverage. The permutation confidence interval methods are recommended when estimating a confidence interval is a primary concern. SPSS and SAS macros that estimate these confidence intervals are provided.
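
    Among the comparison methods, the percentile bootstrap for ab is the easiest to sketch; a minimal version with synthetic single-mediator data (note this is one of the existing comparison methods, not the new permutation procedure, and the effect sizes below are illustrative):

        import numpy as np

        rng = np.random.default_rng(7)

        def ab_hat(x, m, y):
            # Indirect effect: a from M ~ X, b from Y ~ M + X (OLS).
            a = np.polyfit(x, m, 1)[0]
            X = np.column_stack([np.ones_like(x), m, x])
            b = np.linalg.lstsq(X, y, rcond=None)[0][1]
            return a * b

        n = 200
        x = rng.normal(size=n)
        m = 0.4 * x + rng.normal(size=n)
        y = 0.5 * m + 0.2 * x + rng.normal(size=n)

        boot = np.empty(2000)
        for i in range(boot.size):
            idx = rng.integers(0, n, n)     # resample cases with replacement
            boot[i] = ab_hat(x[idx], m[idx], y[idx])

        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"ab = {ab_hat(x, m, y):.3f}, 95% percentile CI: ({lo:.3f}, {hi:.3f})")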

  8. [Evaluation of estimation of prevalence ratio using bayesian log-binomial regression model].

    PubMed

    Gao, W L; Lin, H; Liu, X N; Ren, X W; Li, J S; Shen, X P; Zhu, S L

    2017-03-10

    To evaluate the estimation of the prevalence ratio (PR) using a Bayesian log-binomial regression model and its application, we estimated the PR of medical care-seeking prevalence relative to caregivers' recognition of risk signs of diarrhea in their infants by using a Bayesian log-binomial regression model in OpenBUGS software. The results showed that caregivers' recognition of an infant's risk signs of diarrhea was significantly associated with a 13% increase in medical care-seeking. Meanwhile, we compared the differences in the point and interval estimates of this PR, and the convergence of three models (model 1: not adjusting for covariates; model 2: adjusting for duration of caregivers' education; model 3: adjusting for distance between village and township and child month-age, based on model 2), between the Bayesian log-binomial regression model and the conventional log-binomial regression model. All three Bayesian log-binomial regression models converged, and the estimated PRs were 1.130 (95% CI: 1.005-1.265), 1.128 (95% CI: 1.001-1.264), and 1.132 (95% CI: 1.004-1.267), respectively. Conventional log-binomial regression models 1 and 2 converged, with PRs of 1.130 (95% CI: 1.055-1.206) and 1.126 (95% CI: 1.051-1.203), respectively, but model 3 failed to converge, so the COPY method was used to estimate the PR, which was 1.125 (95% CI: 1.051-1.200). In addition, the point and interval estimates of the PRs from the three Bayesian log-binomial regression models differed only slightly from those of the conventional log-binomial regression models, showing good consistency in estimating the PR. Therefore, the Bayesian log-binomial regression model can effectively estimate the PR with less risk of non-convergence and has advantages in application compared with the conventional log-binomial regression model.
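
    A hand-rolled sketch of the Bayesian log-binomial model (the study used OpenBUGS; here a minimal random-walk Metropolis sampler with flat priors and synthetic data illustrates the same model, with the log-binomial constraint that fitted risks stay below 1):

        import numpy as np

        rng = np.random.default_rng(3)

        # Synthetic data: binary exposure x, binary outcome y with
        # log(P(y=1)) = b0 + b1*x, so that PR = exp(b1).
        n = 500
        x = rng.integers(0, 2, n)
        p_true = np.exp(-1.6 + 0.12 * x)     # baseline risk ~0.20, PR ~1.13
        y = rng.random(n) < p_true

        def log_post(beta):
            # Log-posterior with flat priors; -inf if any fitted risk >= 1.
            p = np.exp(beta[0] + beta[1] * x)
            if np.any(p >= 1):
                return -np.inf
            return np.sum(np.where(y, np.log(p), np.log1p(-p)))

        beta = np.array([-1.0, 0.0])
        lp = log_post(beta)
        draws = []
        for _ in range(20000):
            prop = beta + rng.normal(0, 0.05, 2)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept step
                beta, lp = prop, lp_prop
            draws.append(beta[1])

        pr = np.exp(np.array(draws[5000:]))           # discard burn-in
        print("PR:", pr.mean().round(3),
              "95% CrI:", np.percentile(pr, [2.5, 97.5]).round(3))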

  9. Race, Ethnicity, Language, Social Class, and Health Communication Inequalities: A Nationally-Representative Cross-Sectional Study

    PubMed Central

    Viswanath, Kasisomayajula; Ackerson, Leland K.

    2011-01-01

    Background While mass media communications can be an important source of health information, there are substantial social disparities in health knowledge that may be related to media use. The purpose of this study is to investigate how the use of cancer-related health communications is patterned by race, ethnicity, language, and social class. Methodology/Principal Findings In a nationally-representative cross-sectional telephone survey, 5,187 U.S. adults provided information about demographic characteristics, cancer information seeking, and attention to and trust in health information from television, radio, newspaper, magazines, and the Internet. Cancer information seeking was lowest among Spanish-speaking Hispanics (odds ratio: 0.42; 95% confidence interval: 0.28–0.63) compared to non-Hispanic whites. Spanish-speaking Hispanics were more likely than non-Hispanic whites to pay attention to (odds ratio: 3.10; 95% confidence interval: 2.07–4.66) and trust (odds ratio: 2.61; 95% confidence interval: 1.53–4.47) health messages from the radio. Non-Hispanic blacks were more likely than non-Hispanic whites to pay attention to (odds ratio: 2.39; 95% confidence interval: 1.88–3.04) and trust (odds ratio: 2.16; 95% confidence interval: 1.61–2.90) health messages on television. Those who were college graduates tended to pay more attention to health information from newspapers (odds ratio: 1.98; 95% confidence interval: 1.42–2.75), magazines (odds ratio: 1.86; 95% confidence interval: 1.32–2.60), and the Internet (odds ratio: 4.74; 95% confidence interval: 2.70–8.31) and had less trust in cancer-related health information from television (odds ratio: 0.44; 95% confidence interval: 0.32–0.62) and radio (odds ratio: 0.54; 95% confidence interval: 0.34–0.86) compared to those who were not high school graduates. Conclusions/Significance Health media use is patterned by race, ethnicity, language and social class. Providing greater access to and enhancing the quality of health media by taking into account factors associated with social determinants may contribute to addressing social disparities in health. PMID:21267450

  10. Preconceptional and prenatal supplementary folic acid and multivitamin intake and autism spectrum disorders.

    PubMed

    Virk, Jasveer; Liew, Zeyan; Olsen, Jørn; Nohr, Ellen A; Catov, Janet M; Ritz, Beate

    2016-08-01

    To evaluate whether early folic acid supplementation during pregnancy prevents a diagnosis of autism spectrum disorders in offspring. Information on autism spectrum disorder diagnoses was obtained from the National Hospital Register and the Central Psychiatric Register. We estimated risk ratios for autism spectrum disorders for children whose mothers took folate or multivitamin supplements from 4 weeks before the last menstrual period through 8 weeks after it (-4 to 8 weeks), in three 4-week periods. We did not find an association between early folate or multivitamin intake and autism spectrum disorder (folic acid-adjusted risk ratio: 1.06, 95% confidence interval: 0.82-1.36; multivitamin-adjusted risk ratio: 1.00, 95% confidence interval: 0.82-1.22), autistic disorder (folic acid-adjusted risk ratio: 1.18, 95% confidence interval: 0.76-1.84; multivitamin-adjusted risk ratio: 1.22, 95% confidence interval: 0.87-1.69), Asperger's syndrome (folic acid-adjusted risk ratio: 0.85, 95% confidence interval: 0.46-1.53; multivitamin-adjusted risk ratio: 0.95, 95% confidence interval: 0.62-1.46), or pervasive developmental disorder-not otherwise specified (folic acid-adjusted risk ratio: 1.07, 95% confidence interval: 0.75-1.54; multivitamin-adjusted risk ratio: 0.87, 95% confidence interval: 0.65-1.17) compared with women reporting no supplement use in the same period. We did not find any evidence to corroborate previous reports of a reduced risk for autism spectrum disorders in offspring of women using folic acid supplements in early pregnancy. © The Author(s) 2015.

  11. Bayesian inference for disease prevalence using negative binomial group testing

    PubMed Central

    Pritchard, Nicholas A.; Tebbs, Joshua M.

    2011-01-01

    Group testing, also known as pooled testing, and inverse sampling are both widely used methods of data collection when the goal is to estimate a small proportion. Taking a Bayesian approach, we consider the new problem of estimating disease prevalence from group testing when inverse (negative binomial) sampling is used. Using different distributions to incorporate prior knowledge of disease incidence and different loss functions, we derive closed form expressions for posterior distributions and resulting point and credible interval estimators. We then evaluate our new estimators, on Bayesian and classical grounds, and apply our methods to a West Nile Virus data set. PMID:21259308

  12. The number of seizures needed in the EMU.

    PubMed

    Struck, Aaron F; Cole, Andrew J; Cash, Sydney S; Westover, M Brandon

    2015-11-01

    The purpose of this study was to develop a quantitative framework to estimate the likelihood of multifocal epilepsy based on the number of unifocal seizures observed in the epilepsy monitoring unit (EMU). Patient records from the EMU at Massachusetts General Hospital (MGH) from 2012 to 2014 were assessed for the presence of multifocal seizures as well as the presence of multifocal interictal discharges and multifocal structural imaging abnormalities during the course of the EMU admission. Risk factors for multifocal seizures were assessed using sensitivity and specificity analysis. A Kaplan-Meier survival analysis was used to estimate the risk of multifocal epilepsy for a given number of consecutive seizures. To overcome the limits of the Kaplan-Meier analysis, a parametric survival function was fit to the EMU subjects with multifocal seizures, and this was used to develop a Bayesian model to estimate the risk of multifocal seizures during an EMU admission. Multifocal interictal discharges were a significant predictor of multifocal seizures within an EMU admission (p < 0.01), albeit with only modest sensitivity (0.74) and specificity (0.69). Multifocal potentially epileptogenic lesions on MRI were not a significant predictor (p = 0.44). Kaplan-Meier analysis was limited by wide confidence intervals secondary to significant patient dropout and concern for informative censoring. The Bayesian framework provided estimates of the number of unifocal seizures needed to predict the absence of multifocal seizures. To achieve 90% confidence for the absence of multifocal seizures, three seizures are needed when the pretest probability of multifocal epilepsy is 20%, seven seizures for a pretest probability of 50%, and nine seizures for a pretest probability of 80%. These results provide a framework to assist clinicians in determining the utility of trying to capture a specific number of seizures in EMU evaluations of candidates for epilepsy surgery. Wiley Periodicals, Inc. © 2015 International League Against Epilepsy.
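
    The Bayesian step is a direct application of Bayes' rule to the fitted survival function s(k); a sketch with a hypothetical geometric stand-in for s(k) (this stand-in happens to reproduce the 3- and 7-seizure thresholds above but not the 9, underscoring that the study's fitted, non-geometric s(k) matters):

        def p_multifocal(k, prior, s):
            # Posterior probability of multifocal epilepsy after observing
            # k consecutive unifocal seizures; s(k) is the probability that
            # a multifocal patient's first k captured seizures are unifocal.
            return prior * s(k) / (prior * s(k) + (1 - prior))

        s = lambda k: 0.7 ** k   # hypothetical stand-in, NOT the fitted model

        for prior in (0.2, 0.5, 0.8):
            k = 0
            while p_multifocal(k, prior, s) > 0.10:   # 90% confidence of absence
                k += 1
            print(f"pretest {prior:.0%}: {k} consecutive unifocal seizures")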

  13. The number of seizures needed in the EMU

    PubMed Central

    Struck, Aaron F.; Cole, Andrew J.; Cash, Sydney S.; Westover, M. Brandon

    2016-01-01

    Summary Objective The purpose of this study was to develop a quantitative framework to estimate the likelihood of multifocal epilepsy based on the number of unifocal seizures observed in the epilepsy monitoring unit (EMU). Methods Patient records from the EMU at Massachusetts General Hospital (MGH) from 2012 to 2014 were assessed for the presence of multifocal seizures as well as the presence of multifocal interictal discharges and multifocal structural imaging abnormalities during the course of the EMU admission. Risk factors for multifocal seizures were assessed using sensitivity and specificity analysis. A Kaplan-Meier survival analysis was used to estimate the risk of multifocal epilepsy for a given number of consecutive seizures. To overcome the limits of the Kaplan-Meier analysis, a parametric survival function was fit to the EMU subjects with multifocal seizures, and this was used to develop a Bayesian model to estimate the risk of multifocal seizures during an EMU admission. Results Multifocal interictal discharges were a significant predictor of multifocal seizures within an EMU admission (p < 0.01), albeit with only modest sensitivity (0.74) and specificity (0.69). Multifocal potentially epileptogenic lesions on MRI were not a significant predictor (p = 0.44). Kaplan-Meier analysis was limited by wide confidence intervals secondary to significant patient dropout and concern for informative censoring. The Bayesian framework provided estimates of the number of unifocal seizures needed to predict the absence of multifocal seizures. To achieve 90% confidence for the absence of multifocal seizures, three seizures are needed when the pretest probability of multifocal epilepsy is 20%, seven seizures for a pretest probability of 50%, and nine seizures for a pretest probability of 80%. Significance These results provide a framework to assist clinicians in determining the utility of trying to capture a specific number of seizures in EMU evaluations of candidates for epilepsy surgery. PMID:26222350

  14. Applying Bootstrap Resampling to Compute Confidence Intervals for Various Statistics with R

    ERIC Educational Resources Information Center

    Dogan, C. Deha

    2017-01-01

    Background: Most of the studies in academic journals use p values to represent statistical significance. However, this is not a good indicator of practical significance. Although confidence intervals provide information about the precision of point estimation, they are, unfortunately, rarely used. The infrequent use of confidence intervals might…

  15. Reporting Confidence Intervals and Effect Sizes: Collecting the Evidence

    ERIC Educational Resources Information Center

    Zientek, Linda Reichwein; Ozel, Z. Ebrar Yetkiner; Ozel, Serkan; Allen, Jeff

    2012-01-01

    Confidence intervals (CIs) and effect sizes are essential to encourage meta-analytic thinking and to accumulate research findings. CIs provide a range of plausible values for population parameters with a degree of confidence that the parameter is in that particular interval. CIs also give information about how precise the estimates are. Comparison…

  16. Confidence Intervals for Effect Sizes: Applying Bootstrap Resampling

    ERIC Educational Resources Information Center

    Banjanovic, Erin S.; Osborne, Jason W.

    2016-01-01

    Confidence intervals for effect sizes (CIES) provide readers with an estimate of the strength of a reported statistic as well as the relative precision of the point estimate. These statistics offer more information and context than null hypothesis statistic testing. Although confidence intervals have been recommended by scholars for many years,…

  17. Estimating Tree Height-Diameter Models with the Bayesian Method

    PubMed Central

    Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei

    2014-01-01

    Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has a distinct advantage over the classical method in that the parameters to be estimated are treated as random variables. In this study, the classical and Bayesian methods were used to estimate the six height-diameter models. Both the classical and Bayesian methods showed that the Weibull model was the “best” model for data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against the Bayesian method with uninformative priors and the classical method. The results showed that the Bayesian method improved prediction accuracy, yielding narrower confidence bands for the predicted values than the classical method, and the credible bands of the parameters under informative priors were also narrower than those under uninformative priors or the classical method. The estimated posterior distributions for the parameters can be set as new priors when estimating the parameters using data2. PMID:24711733
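
    As a classical counterpart to this comparison, a least-squares fit of a Weibull-type height-diameter model; the exact model form and all data below are assumed for illustration, and the Bayesian analogue would place priors on a, b, and c:

        import numpy as np
        from scipy.optimize import curve_fit

        def weibull_hd(d, a, b, c):
            # Weibull-type height-diameter model; 1.3 m is breast height.
            return 1.3 + a * (1.0 - np.exp(-b * d ** c))

        # Synthetic diameters (cm) and heights (m).
        rng = np.random.default_rng(10)
        d = rng.uniform(5, 50, 150)
        h = weibull_hd(d, 22.0, 0.03, 1.2) + rng.normal(0, 1.0, d.size)

        popt, pcov = curve_fit(weibull_hd, d, h, p0=(20.0, 0.05, 1.0),
                               bounds=([0, 0, 0], [100, 1, 5]))
        se = np.sqrt(np.diag(pcov))
        # Approximate 95% confidence intervals for the parameters (classical).
        for name, est, s in zip("abc", popt, se):
            print(f"{name} = {est:.3f}, 95% CI: ({est - 1.96*s:.3f}, {est + 1.96*s:.3f})")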

  18. Estimating tree height-diameter models with the Bayesian method.

    PubMed

    Zhang, Xiongqing; Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei

    2014-01-01

    Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has a distinct advantage over the classical method in that the parameters to be estimated are treated as random variables. In this study, the classical and Bayesian methods were used to estimate the six height-diameter models. Both the classical and Bayesian methods showed that the Weibull model was the "best" model for data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against the Bayesian method with uninformative priors and the classical method. The results showed that the Bayesian method improved prediction accuracy, yielding narrower confidence bands for the predicted values than the classical method, and the credible bands of the parameters under informative priors were also narrower than those under uninformative priors or the classical method. The estimated posterior distributions for the parameters can be set as new priors when estimating the parameters using data2.

  19. Estimation of parameter uncertainty for an activated sludge model using Bayesian inference: a comparison with the frequentist method.

    PubMed

    Zonta, Zivko J; Flotats, Xavier; Magrí, Albert

    2014-08-01

    The procedure commonly used for the assessment of the parameters included in activated sludge models (ASMs) relies on the estimation of their optimal values within a confidence region (i.e., frequentist inference). Once optimal values are estimated, parameter uncertainty is computed through the covariance matrix. However, alternative approaches based on treating the model parameters as probability distributions (i.e., Bayesian inference) may be of interest. The aim of this work is to apply (and compare) both Bayesian and frequentist inference methods when assessing uncertainty for an ASM-type model that considers intracellular storage and biomass growth simultaneously. Practical identifiability was addressed exclusively considering respirometric profiles based on the oxygen uptake rate, with the aid of probabilistic global sensitivity analysis. Parameter uncertainty was estimated according to both the Bayesian and frequentist inferential procedures, and the results were compared in order to highlight the strengths and weaknesses of both approaches. Since it was demonstrated that Bayesian inference reduces to the frequentist approach under particular hypotheses, the former can be considered the more general methodology. Hence, the use of Bayesian inference is encouraged for tackling inferential issues in ASM environments.

  20. Bootstrap Confidence Intervals for Ordinary Least Squares Factor Loadings and Correlations in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.; Luo, Shanhong

    2010-01-01

    This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis. Coverage performances of "SE"-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile…

  1. Accuracy in parameter estimation for targeted effects in structural equation modeling: sample size planning for narrow confidence intervals.

    PubMed

    Lai, Keke; Kelley, Ken

    2011-06-01

    In addition to evaluating a structural equation model (SEM) as a whole, often the model parameters are of interest and confidence intervals for those parameters are formed. Given a model with a good overall fit, it is entirely possible for the targeted effects of interest to have very wide confidence intervals, thus giving little information about the magnitude of the population targeted effects. With the goal of obtaining sufficiently narrow confidence intervals for the model parameters of interest, sample size planning methods for SEM are developed from the accuracy in parameter estimation approach. One method plans for the sample size so that the expected confidence interval width is sufficiently narrow. An extended procedure ensures that the obtained confidence interval will be no wider than desired, with some specified degree of assurance. A Monte Carlo simulation study was conducted that verified the effectiveness of the procedures in realistic situations. The methods developed have been implemented in the MBESS package in R so that they can be easily applied by researchers. © 2011 American Psychological Association
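
    The logic of planning sample size for a target expected confidence-interval width is easiest to see for a single mean; a simplified sketch of that idea (the paper develops the analogous machinery for SEM parameters in the MBESS package for R, which this is not):

        from scipy.stats import t

        def n_for_ci_width(sigma, omega, conf=0.95):
            # Smallest n so the expected width of the CI for a mean,
            # 2 * t(n-1) * sigma / sqrt(n), does not exceed omega.
            # Iterates because the t critical value depends on n.
            n = 4
            while True:
                crit = t.ppf(1 - (1 - conf) / 2, df=n - 1)
                if 2 * crit * sigma / n ** 0.5 <= omega:
                    return n
                n += 1

        print(n_for_ci_width(sigma=1.0, omega=0.4))   # 99 for these inputs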

  2. Commentary on Holmes et al. (2007): resolving the debate on when extinction risk is predictable.

    PubMed

    Ellner, Stephen P; Holmes, Elizabeth E

    2008-08-01

    We reconcile the findings of Holmes et al. (Ecology Letters, 10, 2007, 1182) that 95% confidence intervals for quasi-extinction risk were narrow for many vertebrates of conservation concern, with previous theory predicting wide confidence intervals. We extend previous theory, concerning the precision of quasi-extinction estimates as a function of population dynamic parameters, prediction intervals and quasi-extinction thresholds, and provide an approximation that specifies the prediction interval and threshold combinations where quasi-extinction estimates are precise (vs. imprecise). This allows PVA practitioners to define the prediction interval and threshold regions of safety (low risk with high confidence), danger (high risk with high confidence), and uncertainty.

  3. Maternal and neonatal outcomes of antenatal anemia in a Scottish population: a retrospective cohort study.

    PubMed

    Rukuni, Ruramayi; Bhattacharya, Sohinee; Murphy, Michael F; Roberts, David; Stanworth, Simon J; Knight, Marian

    2016-05-01

    Antenatal anemia is a major public health problem in the UK, yet there is limited high quality evidence for associated poor clinical outcomes. The objectives of this study were to estimate the incidence and clinical outcomes of antenatal anemia in a Scottish population. A retrospective cohort study of 80 422 singleton pregnancies was conducted using data from the Aberdeen Maternal and Neonatal Databank between 1995 and 2012. Antenatal anemia was defined as haemoglobin ≤ 10 g/dl during pregnancy. Incidence was calculated with 95% confidence intervals and compared over time using a chi-squared test for trend. Multivariable logistic regression was used to adjust for confounding variables. Results are presented as adjusted odds ratios with 95% confidence intervals. The overall incidence of antenatal anemia was 9.3 cases/100 singleton pregnancies (95% confidence interval 9.1-9.5), decreasing from 16.9/100 to 4.1/100 singleton pregnancies between 1995 and 2012 (p < 0.001). Maternal anemia was associated with antepartum hemorrhage (adjusted odds ratio 1.26, 95% confidence interval 1.17-1.36), postpartum infection (adjusted odds ratio 1.89, 95% confidence interval 1.39-2.57), transfusion (adjusted odds ratio 1.87, 95% confidence interval 1.65-2.13) and stillbirth (adjusted odds ratio 1.42, 95% confidence interval 1.04-1.94), and with reduced odds of postpartum hemorrhage (adjusted odds ratio 0.92, 95% confidence interval 0.86-0.98) and low birthweight (adjusted odds ratio 0.77, 95% confidence interval 0.69-0.86). No other outcomes were statistically significant. This study shows the incidence of antenatal anemia is decreasing steadily within this Scottish population. However, given that anemia is a readily correctable risk factor for major causes of morbidity and mortality in the UK, further work is required to investigate appropriate preventive measures. © 2016 Nordic Federation of Societies of Obstetrics and Gynecology.

  4. Opioid analgesia in mechanically ventilated children: results from the multicenter Measuring Opioid Tolerance Induced by Fentanyl study.

    PubMed

    Anand, Kanwaljeet J S; Clark, Amy E; Willson, Douglas F; Berger, John; Meert, Kathleen L; Zimmerman, Jerry J; Harrison, Rick; Carcillo, Joseph A; Newth, Christopher J L; Bisping, Stephanie; Holubkov, Richard; Dean, J Michael; Nicholson, Carol E

    2013-01-01

    To examine the clinical factors associated with increased opioid dose among mechanically ventilated children in the pediatric intensive care unit. Prospective, observational study with 100% accrual of eligible patients. Seven pediatric intensive care units from tertiary-care children's hospitals in the Collaborative Pediatric Critical Care Research Network. Four hundred nineteen children treated with morphine or fentanyl infusions. None. Data on opioid use, concomitant therapy, demographic and explanatory variables were collected. Significant variability occurred in clinical practices, with up to 100-fold differences in baseline opioid doses, average daily or total doses, or peak infusion rates. Opioid exposure for 7 or 14 days required doubling of the daily opioid dose in 16% patients (95% confidence interval 12%-19%) and 20% patients (95% confidence interval 16%-24%), respectively. Among patients receiving opioids for longer than 3 days (n = 225), this occurred in 28% (95% confidence interval 22%-33%) and 35% (95% confidence interval 29%-41%) by 7 or 14 days, respectively. Doubling of the opioid dose was more likely to occur following opioid infusions for 7 days or longer (odds ratio 7.9, 95% confidence interval 4.3-14.3; p < 0.001) or co-therapy with midazolam (odds ratio 5.6, 95% confidence interval 2.4-12.9; p < 0.001), and it was less likely to occur if morphine was used as the primary opioid (vs. fentanyl) (odds ratio 0.48, 95% confidence interval 0.25-0.92; p = 0.03), for patients receiving higher initial doses (odds ratio 0.96, 95% confidence interval 0.95-0.98; p < 0.001), or if patients had prior pediatric intensive care unit admissions (odds ratio 0.37, 95% confidence interval 0.15-0.89; p = 0.03). Mechanically ventilated children require increasing opioid doses, often associated with prolonged opioid exposure or the need for additional sedation. Efforts to reduce prolonged opioid exposure and clinical practice variation may prevent the complications of opioid therapy.

  5. Spatial Distribution of the Coefficient of Variation and Bayesian Forecast for the Paleo-Earthquakes in Japan

    NASA Astrophysics Data System (ADS)

    Nomura, Shunichi; Ogata, Yosihiko

    2016-04-01

    We propose a Bayesian method of probability forecasting for recurrent earthquakes on inland active faults in Japan. Renewal processes with the Brownian Passage Time (BPT) distribution are applied to over half of the active faults in Japan by the Headquarters for Earthquake Research Promotion (HERP) of Japan. Long-term forecasting with the BPT distribution needs two parameters: the mean and the coefficient of variation (COV) of the recurrence intervals. HERP applies a common COV parameter to all of these faults because most of them have very few documented paleoseismic events, too few to estimate reliable COV values for the individual faults. However, different COV estimates have been proposed for the same paleoseismic catalog by related works. Applying different COV estimates can make a critical difference in the forecast, so the COV should be carefully selected for individual faults. Recurrence intervals on a fault are, on average, determined by the long-term slip rate caused by tectonic motion, but they fluctuate with nearby seismicity, which influences the surrounding stress field. The COVs of recurrence intervals depend on such stress perturbations and therefore show spatial trends due to the heterogeneity of tectonic motion and seismicity. Thus, we introduce a spatial structure on the COV parameter by Bayesian modeling with a Gaussian process prior, so that the COVs of closely located faults are correlated and take similar values. We find that the spatial trends in the estimated COV values coincide with the density of active faults in Japan. We also show Bayesian forecasts from the proposed model computed with a Markov chain Monte Carlo method. Our forecasts differ from HERP's, especially on the active faults where HERP's forecasts are very high or low.
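
    The BPT forecast probability itself is straightforward to compute, since BPT(mu, COV) is an inverse Gaussian distribution with mean mu and shape mu/COV²; a sketch with illustrative parameter values:

        from scipy.stats import invgauss

        def bpt_forecast(mu, cov, elapsed, horizon):
            # P(event in (t, t+horizon] | no event by t) for a BPT renewal
            # process. scipy's invgauss(mu_s, scale=s) has mean mu_s * s,
            # so mean mu and shape lam = mu/cov**2 map to mu_s = mu/lam.
            shape = mu / cov ** 2
            dist = invgauss(mu / shape, scale=shape)
            return (dist.cdf(elapsed + horizon) - dist.cdf(elapsed)) / dist.sf(elapsed)

        # Fault with mean recurrence 1000 yr, 800 yr since the last event:
        for cov in (0.24, 0.5):   # a low and a moderate COV
            print(cov, round(bpt_forecast(1000, cov, 800, 30), 4))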

  6. A Bayesian Attractor Model for Perceptual Decision Making

    PubMed Central

    Bitzer, Sebastian; Bruineberg, Jelle; Kiebel, Stefan J.

    2015-01-01

    Even for simple perceptual decisions, the mechanisms that the brain employs are still under debate. Although current consensus states that the brain accumulates evidence extracted from noisy sensory information, open questions remain about how this simple model relates to other perceptual phenomena such as flexibility in decisions, decision-dependent modulation of sensory gain, or confidence about a decision. We propose a novel approach of how perceptual decisions are made by combining two influential formalisms into a new model. Specifically, we embed an attractor model of decision making into a probabilistic framework that models decision making as Bayesian inference. We show that the new model can explain decision making behaviour by fitting it to experimental data. In addition, the new model combines for the first time three important features: First, the model can update decisions in response to switches in the underlying stimulus. Second, the probabilistic formulation accounts for top-down effects that may explain recent experimental findings of decision-related gain modulation of sensory neurons. Finally, the model computes an explicit measure of confidence which we relate to recent experimental evidence for confidence computations in perceptual decision tasks. PMID:26267143

  7. Confidence Intervals for the Mean: To Bootstrap or Not to Bootstrap

    ERIC Educational Resources Information Center

    Calzada, Maria E.; Gardner, Holly

    2011-01-01

    The results of a simulation conducted by a research team involving undergraduate and high school students indicate that when data is symmetric the student's "t" confidence interval for a mean is superior to the studied non-parametric bootstrap confidence intervals. When data is skewed and for sample sizes n greater than or equal to 10,…
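
    A coverage simulation of the kind described takes only a few lines; a sketch for the symmetric (normal) case, with illustrative sample size and trial counts (the percentile bootstrap typically undercovers slightly at small n, consistent with the finding above):

        import numpy as np
        from scipy.stats import t

        rng = np.random.default_rng(5)

        def covers_t(x, mu, conf=0.95):
            n = x.size
            half = t.ppf(1 - (1 - conf) / 2, n - 1) * x.std(ddof=1) / n ** 0.5
            return abs(x.mean() - mu) <= half

        def covers_boot(x, mu, B=1000, conf=0.95):
            means = np.array([rng.choice(x, x.size).mean() for _ in range(B)])
            lo, hi = np.percentile(means, [50 * (1 - conf), 50 * (1 + conf)])
            return lo <= mu <= hi

        trials, n, mu = 1000, 15, 0.0
        hits_t = hits_b = 0
        for _ in range(trials):
            x = rng.normal(mu, 1.0, n)      # symmetric case
            hits_t += covers_t(x, mu)
            hits_b += covers_boot(x, mu)
        print(f"t coverage: {hits_t/trials:.3f}, bootstrap: {hits_b/trials:.3f}")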

  8. Confidence Intervals Make a Difference: Effects of Showing Confidence Intervals on Inferential Reasoning

    ERIC Educational Resources Information Center

    Hoekstra, Rink; Johnson, Addie; Kiers, Henk A. L.

    2012-01-01

    The use of confidence intervals (CIs) as an addition or as an alternative to null hypothesis significance testing (NHST) has been promoted as a means to make researchers more aware of the uncertainty that is inherent in statistical inference. Little is known, however, about whether presenting results via CIs affects how readers judge the…

  9. Using Asymptotic Results to Obtain a Confidence Interval for the Population Median

    ERIC Educational Resources Information Center

    Jamshidian, M.; Khatoonabadi, M.

    2007-01-01

    Almost all introductory and intermediate level statistics textbooks include the topic of confidence interval for the population mean. Almost all these texts introduce the median as a robust measure of central tendency. Only a few of these books, however, cover inference on the population median and in particular confidence interval for the median.…
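
    One standard distribution-free construction inverts the sign test to pick order statistics as interval endpoints; a sketch (whether this matches the asymptotic approach of the article is not shown in the excerpt above):

        import numpy as np
        from scipy.stats import binom

        def median_ci(x, conf=0.95):
            # Distribution-free CI for the population median from order
            # statistics: (x_(d), x_(n+1-d)) with d the largest rank such
            # that P(Binomial(n, 1/2) <= d-1) <= (1-conf)/2, which
            # guarantees coverage of at least conf.
            x = np.sort(np.asarray(x, dtype=float))
            n, alpha = x.size, 1 - conf
            d = 1
            while binom.cdf(d, n, 0.5) <= alpha / 2:
                d += 1
            return x[d - 1], x[n - d]

        rng = np.random.default_rng(2)
        print(median_ci(rng.exponential(size=25)))   # skewed sample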

  10. ScoreRel CI: An Excel Program for Computing Confidence Intervals for Commonly Used Score Reliability Coefficients

    ERIC Educational Resources Information Center

    Barnette, J. Jackson

    2005-01-01

    An Excel program developed to assist researchers in the determination and presentation of confidence intervals around commonly used score reliability coefficients is described. The software includes programs to determine confidence intervals for Cronbach's alpha, Pearson r-based coefficients such as those used in test-retest and alternate forms…
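
    For the Pearson r-based coefficients mentioned, the usual interval uses the Fisher z transformation; a sketch of that textbook formula (this reimplements the standard construction, not the Excel program itself):

        import math
        from scipy.stats import norm

        def pearson_r_ci(r, n, conf=0.95):
            # CI for a Pearson correlation (e.g., a test-retest reliability
            # coefficient) via the Fisher z transformation.
            z = math.atanh(r)
            half = norm.ppf(1 - (1 - conf) / 2) / math.sqrt(n - 3)
            return math.tanh(z - half), math.tanh(z + half)

        print(pearson_r_ci(0.85, n=60))   # roughly (0.76, 0.91)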

  11. Confidence intervals from single observations in forest research

    Treesearch

    Harry T. Valentine; George M. Furnival; Timothy G. Gregoire

    1991-01-01

    A procedure for constructing confidence intervals and testing hypotheses from a single trial or observation is reviewed. The procedure requires a prior, fixed estimate or guess of the outcome of an experiment or sampling. Two examples of applications are described: a confidence interval is constructed for the expected outcome of a systematic sampling of a forested tract...

  12. Excellent amino acid racemization results from Holocene sand dollars

    NASA Astrophysics Data System (ADS)

    Kosnik, M.; Kaufman, D. S.; Kowalewski, M.; Whitacre, K.

    2015-12-01

    Amino acid racemization (AAR) is widely used as a cost-effective method to date molluscs in time-averaging and taphonomic studies, but it has not been attempted for echinoderms despite their paleobiological importance. Here we demonstrate the feasibility of AAR geochronology in Holocene-aged Peronella peronii (Echinodermata: Echinoidea) collected from Sydney Harbour (Australia). Using standard HPLC methods we determined the extent of AAR in 74 Peronella tests and performed replicate analyses on 18 tests. We sampled multiple areas of two individuals and identified the outer edge as a good sampling location. Multiple replicate analyses from the outer edge of 18 tests spanning the observed range of D/Ls yielded median coefficients of variation < 4% for Asp, Phe, Ala, and Glu D/L values, which overlaps with the analytical precision. Correlations between D/L values across 155 HPLC injections sampled from 74 individuals are also very high (Pearson r2 > 0.95) for these four amino acids. The ages of 11 individuals spanning the observed range of D/L values were determined using 14C analyses, and Bayesian model averaging was used to determine the best AAR age model. The averaged age model was mainly composed of time-dependent reaction kinetics models (TDK, 71%) based on phenylalanine (Phe, 94%). Modelled ages ranged from 14 to 5539 yrs, and the median 95% confidence interval for the 74 analysed individuals is ±28% of the modelled age. In comparison, the median 95% confidence interval for the 11 calibrated 14C ages was ±9% of the median age estimate. Overall, Peronella yields exceptionally high-quality AAR D/L values and appears to be an excellent substrate for AAR geochronology. This work opens the way for time-averaging and taphonomic studies of echinoderms similar to those in molluscs.

  13. The hepatitis C virus nonstructural protein 3 Q80K polymorphism is frequently detected and transmitted among HIV-infected MSM in the Netherlands.

    PubMed

    Newsum, Astrid M; Ho, Cynthia K Y; Lieveld, Faydra I; van de Laar, Thijs J W; Koekkoek, Sylvie M; Rebers, Sjoerd P; van der Meer, Jan T M; Wensing, Anne M J; Boland, Greet J; Arends, Joop E; van Erpecum, Karel J; Prins, Maria; Molenkamp, Richard; Schinkel, Janke

    2017-01-02

    The Q80K polymorphism is a naturally occurring resistance-associated variant in the hepatitis C virus (HCV) nonstructural protein 3 (NS3) region and is likely transmissible between hosts. This study describes the Q80K origin and prevalence among HCV risk groups in the Netherlands and examines whether Q80K is linked to specific transmission networks. Stored blood samples from HCV genotype 1a-infected patients were used for PCR and sequencing to reconstruct the NS3 maximum likelihood phylogeny. The most recent common ancestor was estimated with a coalescent-based model within a Bayesian statistical framework. Study participants (n = 150) were either MSM (39%), people who inject drugs (17%), or patients with other (15%) or unknown/unreported (29%) risk behavior. Overall 45% was coinfected with HIV. Q80K was present in 36% (95% confidence interval 28-44%) of patients throughout the sample collection period (2000-2015) and was most prevalent in MSM (52%, 95% confidence interval 38-65%). Five MSM-specific transmission clusters were identified, of which three exclusively contained sequences with Q80K. The HCV-1a most recent common ancestor in the Netherlands was estimated in 1914 (95% higher posterior density 1879-1944) and Q80K originated in 1957 (95% higher posterior density 1942-1970) within HCV-1a clade I. All Q80K lineages could be traced back to this single origin. Q80K is a highly stable and transmissible resistance-associated variant and was present in a large part of Dutch HIV-coinfected MSM. The introduction and expansion of Q80K variants in this key population suggest a founder effect, potentially jeopardizing future treatment with simeprevir.

  14. Vegetation Monitoring with Gaussian Processes and Latent Force Models

    NASA Astrophysics Data System (ADS)

    Camps-Valls, Gustau; Svendsen, Daniel; Martino, Luca; Campos, Manuel; Luengo, David

    2017-04-01

    Monitoring vegetation by biophysical parameter retrieval from Earth observation data is a challenging problem in which machine learning is currently a key player. Neural networks, kernel methods, and Gaussian Process (GP) regression have excelled in parameter retrieval tasks at both local and global scales. GP regression is based on solid Bayesian statistics, yields efficient and accurate parameter estimates, and provides advantages over competing machine learning approaches, such as confidence intervals for the predictions. However, GP models are hampered by a lack of interpretability, which has prevented their widespread adoption by a larger community. In this presentation we will summarize some of our latest developments to address this issue. We will review the main characteristics of GPs and their advantages in standard vegetation monitoring applications. Then, three advanced GP models will be introduced. First, we will derive sensitivity maps for the GP predictive function, which allow us to obtain feature rankings from the model and to assess the influence of examples on the solution. Second, we will introduce a Joint GP (JGP) model that combines in situ measurements and simulated radiative transfer data in a single GP model. The JGP regression provides more sensible confidence intervals for the predictions, respects the physics of the underlying processes, and allows for transferability across time and space. Finally, a latent force model (LFM) for GP modeling that encodes ordinary differential equations to blend data-driven modeling and physical models of the system is presented. The LFM performs multi-output regression, adapts to the signal characteristics, is able to cope with missing data in the time series, and provides explicit latent functions that allow system analysis and evaluation. Empirical evidence of the performance of these models will be presented through illustrative examples.
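
    The confidence-interval advantage of GP regression is visible in a few lines of code; a sketch using scikit-learn, with a synthetic vegetation-index-to-LAI relationship standing in for real retrieval data:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # Synthetic stand-in: vegetation index -> biophysical parameter (LAI).
        rng = np.random.default_rng(4)
        X = rng.uniform(0.1, 0.9, 60)[:, None]
        y = 6 * X.ravel() ** 1.5 + rng.normal(0, 0.3, 60)

        gp = GaussianProcessRegressor(kernel=RBF(0.2) + WhiteKernel(0.1),
                                      normalize_y=True).fit(X, y)

        Xs = np.linspace(0.1, 0.9, 9)[:, None]
        mean, std = gp.predict(Xs, return_std=True)   # predictive mean and SD
        for v, m, s in zip(Xs.ravel(), mean, std):
            print(f"VI={v:.2f}: LAI = {m:.2f} +/- {1.96*s:.2f} (95% interval)")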

  15. Estimation of the latent mediated effect with ordinal data using the limited-information and Bayesian full-information approaches.

    PubMed

    Chen, Jinsong; Zhang, Dake; Choi, Jaehwa

    2015-12-01

    It is common to encounter latent variables with ordinal data in social or behavioral research. Although a mediated effect of latent variables (latent mediated effect, or LME) with ordinal data may appear to be a straightforward combination of LME with continuous data and latent variables with ordinal data, the methodological challenges of combining the two are not trivial. This research covers model structures as complex as the LME and formulates both point and interval estimates of the LME for ordinal data using the Bayesian full-information approach. We also combine weighted least squares (WLS) estimation with the bias-corrected bootstrapping (BCB; Efron, Journal of the American Statistical Association, 82, 171-185, 1987) method or the traditional delta method as the limited-information approach. We evaluated the viability of these different approaches across various conditions through simulation studies, and provide an empirical example to illustrate the approaches. We found that the Bayesian approach with reasonably informative priors is preferred when both point and interval estimates are of interest and the sample size is 200 or above.

  16. Improving Bayesian credibility intervals for classifier error rates using maximum entropy empirical priors.

    PubMed

    Gustafsson, Mats G; Wallman, Mikael; Wickenberg Bolin, Ulrika; Göransson, Hanna; Fryknäs, M; Andersson, Claes R; Isaksson, Anders

    2010-06-01

    Successful use of classifiers that learn to make decisions from a set of patient examples requires robust methods for performance estimation. Recently, many promising approaches for determining an upper bound for the error rate of a single classifier have been reported, but the Bayesian credibility interval (CI) obtained from a conventional holdout test still delivers one of the tightest bounds. The conventional Bayesian CI becomes unacceptably large in real-world applications where the test set sizes are less than a few hundred. The source of this problem is the fact that the CI is determined exclusively by the results on the test examples. In other words, no information at all is provided by the uniform prior density distribution employed, which reflects a complete lack of prior knowledge about the unknown error rate. Therefore, the aim of the study reported here was to investigate a maximum entropy (ME) based approach to improved prior knowledge and Bayesian CIs, demonstrating its relevance for biomedical research and clinical practice. It is demonstrated how a refined non-uniform prior density distribution can be obtained by means of the ME principle using empirical results from a few designs and tests on non-overlapping sets of examples. Experimental results show that ME-based priors improve the CIs when applied to four quite different simulated and two real-world data sets. An empirically derived ME prior seems promising for improving the Bayesian CI for the unknown error rate of a designed classifier. Copyright 2010 Elsevier B.V. All rights reserved.
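
    The conventional interval criticized above is the Beta-posterior credibility interval from a holdout test; a sketch, with a hypothetical informative prior standing in for an ME-derived one (the actual ME construction is not reproduced here):

        from scipy.stats import beta

        def error_rate_ci(k, n, conf=0.95, a_prior=1.0, b_prior=1.0):
            # Bayesian credibility interval for a classifier's error rate
            # from a holdout test with k errors in n examples. The uniform
            # Beta(1, 1) prior gives the conventional interval; an
            # informative prior changes a_prior and b_prior.
            post = beta(a_prior + k, b_prior + n - k)
            return post.ppf((1 - conf) / 2), post.ppf(1 - (1 - conf) / 2)

        # Small holdout sets give wide conventional intervals:
        print(error_rate_ci(5, 50))                        # uniform prior
        print(error_rate_ci(5, 50, a_prior=2, b_prior=18)) # hypothetical prior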

  17. Depressive symptoms in nonresident african american fathers and involvement with their sons.

    PubMed

    Davis, R Neal; Caldwell, Cleopatra Howard; Clark, Sarah J; Davis, Matthew M

    2009-12-01

    Our objective was to determine whether paternal depressive symptoms were associated with less father involvement among African American fathers not living with their children (ie, nonresident fathers). We analyzed survey data for 345 fathers enrolled in a program for nonresident African American fathers and their preteen sons. Father involvement included measures of contact, closeness, monitoring, communication, and conflict. We used bivariate analyses and multivariate logistic regression analysis to examine associations between father involvement and depressive symptoms. Thirty-six percent of fathers reported moderate depressive symptoms, and 11% reported severe depressive symptoms. In bivariate analyses, depressive symptoms were associated with less contact, less closeness, low monitoring, and increased conflict. In multivariate analyses controlling for basic demographic features, fathers with moderate depressive symptoms were more likely to have less contact (adjusted odds ratio: 1.7 [95% confidence interval: 1.1-2.8]), less closeness (adjusted odds ratio: 2.1 [95% confidence interval: 1.3-3.5]), low monitoring (adjusted odds ratio: 2.7 [95% confidence interval: 1.4-5.2]), and high conflict (adjusted odds ratio: 2.1 [95% confidence interval: 1.2-3.6]). Fathers with severe depressive symptoms also were more likely to have less contact (adjusted odds ratio: 3.1 [95% confidence interval: 1.4-7.2]), less closeness (adjusted odds ratio: 2.6 [95% confidence interval: 1.2-5.7]), low monitoring (adjusted odds ratio: 2.8 [95% confidence interval: 1.1-7.1]), and high conflict (adjusted odds ratio: 2.6 [95% confidence interval: 1.1-5.9]). Paternal depressive symptoms may be an important, but modifiable, barrier for nonresident African American fathers willing to be more involved with their children.

  18. Ethnic variations in morbidity and mortality from lower respiratory tract infections: a retrospective cohort study.

    PubMed

    Simpson, Colin R; Steiner, Markus Fc; Cezard, Genevieve; Bansal, Narinder; Fischbacher, Colin; Douglas, Anne; Bhopal, Raj; Sheikh, Aziz

    2015-10-01

    There is evidence of substantial ethnic variations in asthma morbidity and the risk of hospitalisation, but the picture in relation to lower respiratory tract infections is unclear. We carried out an observational study to identify ethnic group differences for lower respiratory tract infections. A retrospective cohort study. Scotland. 4.65 million people on whom information was available from the 2001 census, followed from May 2001 to April 2010. Hospitalisations and deaths (any time following first hospitalisation) from lower respiratory tract infections were examined, and adjusted risk ratios and hazard ratios by ethnicity and sex were calculated. We multiplied ratios and confidence intervals by 100, so the reference Scottish White population's risk ratio and hazard ratio was 100. Among men, adjusted risk ratios for lower respiratory tract infection hospitalisation were lower in Other White British (80, 95% confidence interval 73-86) and Chinese (69, 95% confidence interval 56-84) populations and higher in Pakistani groups (152, 95% confidence interval 136-169). In women, results were mostly similar to those in men (e.g. Chinese 68, 95% confidence interval 56-82), although higher adjusted risk ratios were found among women of the Other South Asians group (145, 95% confidence interval 120-175). Survival (adjusted hazard ratio) following lower respiratory tract infection for Pakistani men (54, 95% confidence interval 39-74) and women (31, 95% confidence interval 18-53) was better than in the reference population. Substantial differences in the rates of lower respiratory tract infections amongst different ethnic groups in Scotland were found. Pakistani men and women had particularly high rates of lower respiratory tract infection hospitalisation. The reasons behind the high rates of lower respiratory tract infection in the Pakistani community now need to be investigated. © The Royal Society of Medicine.

  19. Risk factors of childhood asthma in children attending Lyari General Hospital.

    PubMed

    Kamran, Amber; Hanif, Shahina; Murtaza, Ghulam

    2015-06-01

    To determine the factors associated with asthma in children. The case-control study was conducted in the paediatrics clinic of Lyari General Hospital, Karachi, from May to October 2010. Children 1-15 years of age attending the clinic represented the cases, while the control group had children who were closely related (sibling or cousin) to the cases but did not have the symptoms of disease at the time. Data was collected through a proforma and analysed using SPSS 10. Of the total 346 subjects, 173 (50%) each comprised the two groups. According to univariable analysis, the risk factors were the presence of at least one smoker (odds ratio: 3.6; 95% confidence interval: 2.3-5.8), living in a kacha house (odds ratio: 16.2; 95% confidence interval: 3.8-69.5), living in a room without windows (odds ratio: 9.3; 95% confidence interval: 2.1-40.9) and living in houses without adequate sunlight (odds ratio: 1.6; 95% confidence interval: 1.2-2.4). Using multivariable modelling, family history of asthma (odds ratio: 5.9; 95% confidence interval: 3.1-11.6), presence of at least one smoker at home (odds ratio: 4.1; 95% confidence interval: 2.3-7.2), living in a room without a window (odds ratio: 5.5; 95% confidence interval: 1.15-26.3) and living in an area without adequate sunlight (odds ratio: 2.2; 95% confidence interval: 1.13-4.31) were found to be independent risk factors of asthma in children, adjusting for age, gender and history of weaning. Family history of asthma, children living with at least one smoker at home, a room without windows and living in an area without sunlight were major risk factors of childhood asthma.

  20. Standardized likelihood ratio test for comparing several log-normal means and confidence interval for the common mean.

    PubMed

    Krishnamoorthy, K; Oral, Evrim

    2017-12-01

    A standardized likelihood ratio test (SLRT) for testing the equality of means of several log-normal distributions is proposed. The properties of the SLRT, an available modified likelihood ratio test (MLRT) and a generalized variable (GV) test are evaluated by Monte Carlo simulation and compared. Evaluation studies indicate that the SLRT is accurate even for small samples, whereas the MLRT can be quite liberal for some parameter values, and the GV test is in general conservative and less powerful than the SLRT. Furthermore, a closed-form approximate confidence interval for the common mean of several log-normal distributions is developed using the method of variance estimate recovery, and compared with the generalized confidence interval with respect to coverage probabilities and precision. Simulation studies indicate that the proposed confidence interval is accurate and better than the generalized confidence interval in terms of coverage probabilities. The methods are illustrated using two examples.
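
    For context, interval constructions in this setting work on the log scale and back-transform. Below is a minimal Python sketch of the classical Cox approximation for the mean of a single log-normal sample; it illustrates the general idea only and is not the SLRT or the MOVER interval proposed in the record.

    ```python
    import numpy as np
    from scipy import stats

    def cox_lognormal_mean_ci(x, conf=0.95):
        """Cox's approximate CI for a log-normal mean, exp(mu + sigma^2/2):
        build the interval on the log scale, then exponentiate."""
        y = np.log(np.asarray(x, dtype=float))
        n = y.size
        ybar, s2 = y.mean(), y.var(ddof=1)
        point = ybar + s2 / 2.0
        se = np.sqrt(s2 / n + s2 ** 2 / (2.0 * (n - 1)))
        z = stats.norm.ppf(0.5 + conf / 2.0)
        return np.exp(point), (np.exp(point - z * se), np.exp(point + z * se))

    rng = np.random.default_rng(1)
    sample = rng.lognormal(mean=1.0, sigma=0.6, size=40)
    print(cox_lognormal_mean_ci(sample))   # true mean is exp(1.18) ~ 3.25
    ```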

  1. Calculation of Confidence Intervals for the Maximum Magnitude of Earthquakes in Different Seismotectonic Zones of Iran

    NASA Astrophysics Data System (ADS)

    Salamat, Mona; Zare, Mehdi; Holschneider, Matthias; Zöller, Gert

    2017-03-01

    The problem of estimating the maximum possible earthquake magnitude m_max has attracted growing attention in recent years. Due to sparse data, the role of uncertainties becomes crucial. In this work, we determine the uncertainties related to the maximum magnitude in terms of confidence intervals. Using an earthquake catalog of Iran, m_max is estimated for different predefined levels of confidence in six seismotectonic zones. Assuming the doubly truncated Gutenberg-Richter distribution as a statistical model for earthquake magnitudes, confidence intervals for the maximum possible magnitude of earthquakes are calculated in each zone. While the lower limit of the confidence interval is the magnitude of the maximum observed event, the upper limit is calculated from the catalog and the statistical model. For this aim, we use the original catalog, to which no declustering methods were applied, as well as a declustered version of the catalog. Based on the study by Holschneider et al. (Bull Seismol Soc Am 101(4):1649-1659, 2011), the confidence interval for m_max is frequently unbounded, especially if high levels of confidence are required. In this case, no information is gained from the data. Therefore, we elaborate for which settings finite confidence levels are obtained. In this work, Iran is divided into six seismotectonic zones, namely Alborz, Azerbaijan, Zagros, Makran, Kopet Dagh and Central Iran. Although calculations of the confidence interval in the Central Iran and Zagros seismotectonic zones are relatively acceptable for meaningful levels of confidence, results in Kopet Dagh, Alborz, Azerbaijan and Makran are much less promising. The results indicate that estimating m_max from an earthquake catalog alone is almost impossible for reasonable levels of confidence.
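
    The unbounded-interval phenomenon is easy to reproduce. The sketch below is a minimal illustration assuming a doubly truncated Gutenberg-Richter law with a known b-value (not the authors' full procedure): it solves for the one-sided upper confidence limit of m_max and returns infinity when the data cannot bound it.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def mmax_upper_limit(m_obs_max, m_min, b, n, alpha=0.05):
        """Upper limit of a one-sided confidence interval for m_max under a
        doubly truncated Gutenberg-Richter law, given n events above m_min
        and an observed maximum magnitude m_obs_max."""
        beta = b * np.log(10.0)

        def p_all_below(m_max):
            # P(all n magnitudes <= m_obs_max | true upper bound m_max)
            F = (1 - np.exp(-beta * (m_obs_max - m_min))) \
                / (1 - np.exp(-beta * (m_max - m_min)))
            return F ** n

        if p_all_below(1e3) > alpha:      # probability never drops to alpha:
            return np.inf                 # the interval is unbounded
        return brentq(lambda m: p_all_below(m) - alpha, m_obs_max + 1e-9, 1e3)

    print(mmax_upper_limit(m_obs_max=7.4, m_min=5.0, b=1.0, n=200))   # inf
    print(mmax_upper_limit(m_obs_max=7.4, m_min=5.0, b=1.0, n=2000))  # ~7.6
    ```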

  2. A predictive Bayesian approach to the design and analysis of bridging studies.

    PubMed

    Gould, A Lawrence; Jin, Tian; Zhang, Li Xin; Wang, William W B

    2012-09-01

    Pharmaceutical product development culminates in confirmatory trials whose evidence for the product's efficacy and safety supports regulatory approval for marketing. Regulatory agencies in countries whose patients were not included in the confirmatory trials often require confirmation of efficacy and safety in their patient populations, which may be accomplished by carrying out bridging studies to establish consistency for local patients of the effects demonstrated by the original trials. This article describes and illustrates an approach for designing and analyzing bridging studies that fully incorporates the information provided by the original trials. The approach determines probability contours or regions of joint predictive intervals for treatment effect and response variability, or endpoints of treatment effect confidence intervals, that are functions of the findings from the original trials, the sample sizes for the bridging studies, and possible deviations from complete consistency with the original trials. The bridging studies are judged consistent with the original trials if their findings fall within the probability contours or regions. Regulatory considerations determine the region definitions and appropriate probability levels. Producer and consumer risks provide a way to assess alternative region and probability choices. [Supplemental materials are available for this article. Go to the Publisher's online edition of the Journal of Biopharmaceutical Statistics for the following free supplemental resource: Appendix 2: R code for Calculations.].
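
    The core predictive calculation can be sketched in a few lines. The example below uses normal approximations and hypothetical numbers, and only illustrates the predictive-interval idea, not the paper's full machinery of probability contours and producer/consumer risks.

    ```python
    import numpy as np
    from scipy import stats

    def predictive_interval(effect_orig, se_orig, se_bridge, conf=0.95):
        """Predictive interval for the bridging study's estimated effect,
        assuming full consistency with the original trials: uncertainty
        combines the original estimate's SE with the new study's SE."""
        se_pred = np.hypot(se_orig, se_bridge)
        z = stats.norm.ppf(0.5 + conf / 2.0)
        return effect_orig - z * se_pred, effect_orig + z * se_pred

    # Hypothetical: original trials gave effect 4.0 (SE 0.8); the planned
    # bridging study has SE 1.5 at its intended sample size.
    lo, hi = predictive_interval(4.0, 0.8, 1.5)
    print(f"judged consistent if the bridging estimate is in ({lo:.2f}, {hi:.2f})")
    ```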

  3. Meta-analysis of few small studies in orphan diseases.

    PubMed

    Friede, Tim; Röver, Christian; Wandel, Simon; Neuenschwander, Beat

    2017-03-01

    Meta-analyses in orphan diseases and small populations generally face particular problems, including small numbers of studies, small study sizes and heterogeneity of results. However, the heterogeneity is difficult to estimate if only very few studies are included. Motivated by a systematic review in immunosuppression following liver transplantation in children, we investigate the properties of a range of commonly used frequentist and Bayesian procedures in simulation studies. Furthermore, the consequences for interval estimation of the common treatment effect in random-effects meta-analysis are assessed. The Bayesian credibility intervals using weakly informative priors for the between-trial heterogeneity exhibited coverage probabilities in excess of the nominal level for a range of scenarios considered. However, they tended to be shorter than those obtained by the Knapp-Hartung method, which were also conservative. In contrast, methods based on normal quantiles exhibited coverages well below the nominal levels in many scenarios. With very few studies, the performance of the Bayesian credibility intervals is of course sensitive to the specification of the prior for the between-trial heterogeneity. In conclusion, the use of weakly informative priors as exemplified by half-normal priors (with a scale of 0.5 or 1.0) for log odds ratios is recommended for applications in rare diseases. © 2016 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd.
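
    With only a handful of studies, the recommended model is small enough to compute without MCMC. The sketch below is a grid approximation (assuming a flat prior on the pooled effect mu and a half-normal prior on the between-trial SD tau; the trial data are hypothetical) that returns the posterior mean and a 95% credible interval for mu.

    ```python
    import numpy as np

    def bayes_meta(y, s, tau_scale=0.5, n_grid=400):
        """Random-effects meta-analysis by grid approximation: flat prior
        on the pooled effect mu, half-normal(tau_scale) prior on the
        between-trial SD tau. Returns posterior mean and 95% CrI for mu."""
        y, s = np.asarray(y, float), np.asarray(s, float)
        tau = np.linspace(1e-6, 5 * tau_scale, n_grid)
        mu = np.linspace(y.min() - 2, y.max() + 2, n_grid)
        T, M = np.meshgrid(tau, mu, indexing="ij")
        var = s[None, None, :] ** 2 + T[..., None] ** 2
        loglik = -0.5 * (np.log(var) + (y - M[..., None]) ** 2 / var).sum(-1)
        logprior = -0.5 * (T / tau_scale) ** 2        # half-normal on tau
        logpost = loglik + logprior
        post = np.exp(logpost - logpost.max())        # stable normalization
        post_mu = post.sum(axis=0)                    # marginalize out tau
        post_mu /= post_mu.sum()
        cdf = np.cumsum(post_mu)
        lo, hi = mu[np.searchsorted(cdf, [0.025, 0.975])]
        return float((mu * post_mu).sum()), (float(lo), float(hi))

    # Three hypothetical small trials: log odds ratios and standard errors
    print(bayes_meta(y=[-0.6, -0.2, -0.9], s=[0.4, 0.5, 0.6]))
    ```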

  4. Risk factors for low birth weight according to the multiple logistic regression model. A retrospective cohort study in José María Morelos municipality, Quintana Roo, Mexico.

    PubMed

    Franco Monsreal, José; Tun Cobos, Miriam Del Ruby; Hernández Gómez, José Ricardo; Serralta Peraza, Lidia Esther Del Socorro

    2018-01-17

    Low birth weight has been an enigma for science over time. There has been much research on its causes and effects. Low birth weight is an indicator that predicts the probability of a child surviving. In fact, there is an exponential relationship between weight deficit, gestational age, and perinatal mortality. Multiple logistic regression is one of the most expressive and versatile statistical instruments available for the analysis of data in clinical, epidemiological and public health settings. To assess in a multivariate fashion the importance of 17 independent variables in low birth weight (dependent variable) of children born in the Mayan municipality of José María Morelos, Quintana Roo, Mexico. An analytical observational epidemiological cohort study with retrospective data collection. Births that met the inclusion criteria occurred in the "Hospital Integral Jose Maria Morelos" of the Ministry of Health, corresponding to the Maya municipality of Jose Maria Morelos, during the period from August 1, 2014 to July 31, 2015. The total number of newborns recorded was 1,147; 84 of these (7.32%) had low birth weight. To estimate the independent association between the explanatory variables (potential risk factors) and the response variable, a multiple logistic regression analysis was performed using the IBM SPSS Statistics 22 software. In ascending numerical order, odds ratio values > 1 indicated the positive contribution of explanatory variables or possible risk factors: "unmarried" marital status (1.076, 95% confidence interval: 0.550 to 2.104); age at menarche ≤ 12 years (1.08, 95% confidence interval: 0.64 to 1.84); history of abortion(s) (1.14, 95% confidence interval: 0.44 to 2.93); maternal weight < 50 kg (1.51, 95% confidence interval: 0.83 to 2.76); number of prenatal consultations ≤ 5 (1.86, 95% confidence interval: 0.94 to 3.66); maternal age ≥ 36 years (3.5, 95% confidence interval: 0.40 to 30.47); maternal age ≤ 19 years (3.59, 95% confidence interval: 0.43 to 29.87); number of deliveries = 1 (3.86, 95% confidence interval: 0.33 to 44.85); personal pathological history (4.78, 95% confidence interval: 2.16 to 10.59); pathological obstetric history (5.01, 95% confidence interval: 1.66 to 15.18); maternal height < 150 cm (5.16, 95% confidence interval: 3.08 to 8.65); number of births ≥ 5 (5.99, 95% confidence interval: 0.51 to 69.99); and smoking (15.63, 95% confidence interval: 1.07 to 227.97). Four of the independent variables (personal pathological history, obstetric pathological history, maternal height < 150 cm and smoking) showed a significant positive contribution and thus can be considered clear risk factors for low birth weight. The use of the logistic regression model in the Mayan municipality of José María Morelos will allow the probability of low birth weight to be estimated for each pregnant woman in the future, which will be useful for the health authorities of the region.
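
    As a worked illustration of the modelling step, the sketch below fits a multiple logistic regression with statsmodels and reports odds ratios with 95% confidence intervals, the quantities tabulated above. The data are simulated with made-up effect sizes, since the study's records are not public.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 1147   # same cohort size as the study, purely for flavor
    df = pd.DataFrame({
        "smoking": rng.binomial(1, 0.05, n),
        "short_stature": rng.binomial(1, 0.20, n),   # height < 150 cm
        "path_history": rng.binomial(1, 0.10, n),
    })
    # Simulate low birth weight with assumed (hypothetical) log-odds effects
    lin = -2.8 + 2.7 * df["smoking"] + 1.6 * df["short_stature"] \
          + 1.5 * df["path_history"]
    df["low_weight"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))

    X = sm.add_constant(df[["smoking", "short_stature", "path_history"]])
    fit = sm.Logit(df["low_weight"], X).fit(disp=0)
    table = pd.concat([np.exp(fit.params).rename("OR"),
                       np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})],
                      axis=1)
    print(table)
    ```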

  5. Small area variation in diabetes prevalence in Puerto Rico

    PubMed Central

    Tierney, Edward F.; Burrows, Nilka R.; Barker, Lawrence E.; Beckles, Gloria L.; Boyle, James P.; Cadwell, Betsy L.; Kirtland, Karen A.; Thompson, Theodore J.

    2015-01-01

    Objective To estimate the 2009 prevalence of diagnosed diabetes in Puerto Rico among adults ≥ 20 years of age in order to gain a better understanding of its geographic distribution so that policymakers can more efficiently target prevention and control programs. Methods A Bayesian multilevel model was fitted to the combined 2008–2010 Behavioral Risk Factor Surveillance System and 2009 United States Census data to estimate diabetes prevalence for each of the 78 municipios (counties) in Puerto Rico. Results The mean unadjusted estimate for all counties was 14.3% (range by county, 9.9%–18.0%). The average width of the confidence intervals was 6.2%. Adjusted and unadjusted estimates differed little. Conclusions These 78 county estimates are higher on average and show less variability (i.e., have a smaller range) than the previously published estimates of the 2008 diabetes prevalence for all United States counties (mean, 9.9%; range, 3.0%–18.2%). PMID:23939364
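
    A minimal sketch of the shrinkage idea behind such small-area estimates (an empirical-Bayes beta-binomial stand-in with illustrative prior values, much simpler than the multilevel model actually fitted): small counties are pulled toward the overall rate, while large counties stay near their raw proportions.

    ```python
    import numpy as np
    from scipy import stats

    def eb_county_prevalence(cases, adults, a=2.0, b=12.0):
        """Posterior mean and 95% interval for county prevalence under a
        Beta(a, b) prior (a, b are illustrative, not fitted values)."""
        post_a = a + cases
        post_b = b + adults - cases
        mean = post_a / (post_a + post_b)
        lo = stats.beta.ppf(0.025, post_a, post_b)
        hi = stats.beta.ppf(0.975, post_a, post_b)
        return mean, lo, hi

    # Two hypothetical counties with the same raw rate (14%) but very
    # different sample sizes: the small county gets the wider interval
    # and the stronger pull toward the prior mean.
    print(eb_county_prevalence(np.array([7, 700]), np.array([50, 5000])))
    ```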

  6. Combining evidence using likelihood ratios in writer verification

    NASA Astrophysics Data System (ADS)

    Srihari, Sargur; Kovalenko, Dimitry; Tang, Yi; Ball, Gregory

    2013-01-01

    Forensic identification is the task of determining whether or not observed evidence arose from a known source. It involves determining a likelihood ratio (LR) - the ratio of the joint probability of the evidence and source under the identification hypothesis (that the evidence came from the source) and under the exclusion hypothesis (that the evidence did not arise from the source). In LR-based decision methods, particularly handwriting comparison, a variable number of pieces of input evidence is used. A decision based on many pieces of evidence can result in nearly the same LR as one based on few pieces of evidence. We consider methods for distinguishing between such situations. One of these is to provide confidence intervals together with the decisions and another is to combine the inputs using weights. We propose a new method that generalizes the Bayesian approach and uses an explicitly defined discount function. Empirical evaluation with several data sets, including synthetically generated ones and handwriting comparisons, shows the greater flexibility of the proposed method.
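
    A toy sketch of the discounting idea (the geometric discount here is an arbitrary choice for illustration, not the authors' function): down-weighting the log-LRs of successive items keeps many weak pieces of evidence from multiplying into the same combined LR as a few strong ones.

    ```python
    import numpy as np

    def combined_lr(lrs, discount=0.8):
        """Illustrative discounted combination of per-item likelihood
        ratios: log-LRs are weighted geometrically so that additional
        items give diminishing returns (discount=1.0 recovers the naive
        Bayes product of independent LRs)."""
        lrs = np.asarray(lrs, dtype=float)
        w = discount ** np.arange(lrs.size)
        return float(np.exp(np.sum(w * np.log(lrs))))

    print(combined_lr([3.0, 2.5, 4.0]))          # discounted combination
    print(combined_lr([3.0, 2.5, 4.0], 1.0))     # naive product = 30.0
    ```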

  7. Modeling T-cell activation using gene expression profiling and state-space models.

    PubMed

    Rangel, Claudia; Angus, John; Ghahramani, Zoubin; Lioumi, Maria; Sotheran, Elizabeth; Gaiba, Alessia; Wild, David L; Falciani, Francesco

    2004-06-12

    We have used state-space models to reverse engineer transcriptional networks from highly replicated gene expression profiling time series data obtained from a well-established model of T-cell activation. State-space models are a class of dynamic Bayesian networks that assume that the observed measurements depend on some hidden state variables that evolve according to Markovian dynamics. These hidden variables can capture effects that cannot be measured in a gene expression profiling experiment, e.g. genes that have not been included in the microarray, levels of regulatory proteins, the effects of messenger RNA and protein degradation, etc. Bootstrap confidence intervals are developed for parameters representing 'gene-gene' interactions over time. Our models represent the dynamics of T-cell activation and provide a methodology for the development of rational and experimentally testable hypotheses. Supplementary data and Matlab computer source code will be made available on the web at the URL given below. http://public.kgi.edu/~wild/LDS/index.htm

  8. Genome-wide QTL mapping of saltwater tolerance in sibling species of Anopheles (malaria vector) mosquitoes

    PubMed Central

    Smith, H A; White, B J; Kundert, P; Cheng, C; Romero-Severson, J; Andolfatto, P; Besansky, N J

    2015-01-01

    Although freshwater (FW) is the ancestral habitat for larval mosquitoes, multiple species independently evolved the ability to survive in saltwater (SW). Here, we use quantitative trait locus (QTL) mapping to investigate the genetic architecture of osmoregulation in Anopheles mosquitoes, vectors of human malaria. We analyzed 1134 backcross progeny from a cross between the obligate FW species An. coluzzii and its closely related euryhaline sibling species An. merus. Tests of 2387 markers with Bayesian interval mapping and machine learning (random forests) yielded six genomic regions associated with SW tolerance. Overlap in QTL regions from both approaches enhances confidence in QTL identification. Evidence exists for synergistic as well as disruptive epistasis among loci. Intriguingly, one QTL region containing ion transporters spans the 2Rop chromosomal inversion that distinguishes these species. Rather than a simple trait controlled by one or a few loci, our data are most consistent with a complex, polygenic mode of inheritance. PMID:25920668

  9. Reliability of confidence intervals calculated by bootstrap and classical methods using the FIA 1-ha plot design

    Treesearch

    H. T. Schreuder; M. S. Williams

    2000-01-01

    In simulation sampling from forest populations using sample sizes of 20, 40, and 60 plots, confidence intervals based on the bootstrap (accelerated, percentile, and t-distribution based) were calculated and compared with classical t confidence intervals for mapped populations and subdomains within those populations. A 68.1 ha mapped...
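
    The comparison is easy to reproduce in miniature. The sketch below simulates a skewed plot-level variable (a stand-in for the FIA data, which is not reproduced here) and contrasts the classical t interval for the mean with a percentile bootstrap interval.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    plots = rng.gamma(shape=2.0, scale=50.0, size=40)   # skewed plot values

    # Classical t interval for the mean
    n, xbar, se = plots.size, plots.mean(), stats.sem(plots)
    t_lo, t_hi = stats.t.interval(0.95, df=n - 1, loc=xbar, scale=se)

    # Percentile bootstrap interval (10,000 resamples of the plots)
    boot_means = rng.choice(plots, size=(10_000, n), replace=True).mean(axis=1)
    b_lo, b_hi = np.percentile(boot_means, [2.5, 97.5])

    print(f"classical t: ({t_lo:.1f}, {t_hi:.1f})")
    print(f"bootstrap:   ({b_lo:.1f}, {b_hi:.1f})")
    ```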

  10. Confidence Intervals for Proportion Estimates in Complex Samples. Research Report. ETS RR-06-21

    ERIC Educational Resources Information Center

    Oranje, Andreas

    2006-01-01

    Confidence intervals are an important tool to indicate uncertainty of estimates and to give an idea of probable values of an estimate if a different sample from the population was drawn or a different sample of measures was used. Standard symmetric confidence intervals for proportion estimates based on a normal approximation can yield bounds…
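
    One standard remedy for those boundary problems, under simple random sampling, is the Wilson score interval sketched below. For the complex samples discussed in the report one would additionally replace n by an effective sample size (n divided by the design effect), which is not shown here.

    ```python
    import numpy as np
    from scipy import stats

    def wilson_interval(successes, n, conf=0.95):
        """Wilson score interval for a proportion: bounds always stay in
        [0, 1] and behave well for small n or extreme proportions."""
        z = stats.norm.ppf(0.5 + conf / 2.0)
        p = successes / n
        center = (p + z ** 2 / (2 * n)) / (1 + z ** 2 / n)
        half = z * np.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / (1 + z ** 2 / n)
        return center - half, center + half

    print(wilson_interval(2, 20))   # compare with p +/- 1.96*sqrt(p(1-p)/n)
    ```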

  11. A Comparison of Methods for Estimating Confidence Intervals for Omega-Squared Effect Size

    ERIC Educational Resources Information Center

    Finch, W. Holmes; French, Brian F.

    2012-01-01

    Effect size use has been increasing in the past decade in many research areas. Confidence intervals associated with effect sizes are encouraged to be reported. Prior work has investigated the performance of confidence interval estimation with Cohen's d. This study extends this line of work to the analysis of variance case with more than two…
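
    The standard construction for such intervals pivots on the noncentral F distribution. The sketch below (hypothetical ANOVA numbers; it uses the common conversion omega^2 = lambda / (lambda + N)) inverts the noncentral F CDF to bound the noncentrality parameter and maps the bounds to the omega-squared scale.

    ```python
    import numpy as np
    from scipy import stats
    from scipy.optimize import brentq

    def omega_sq_ci(f_obs, df1, df2, n_total, conf=0.95):
        """CI for omega-squared from a one-way ANOVA F statistic, by
        pivoting the noncentral F distribution."""
        alpha = 1 - conf

        def lam_for(p):
            g = lambda lam: stats.ncf.cdf(f_obs, df1, df2, lam) - p
            if g(1e-8) <= 0.0:           # even lambda ~ 0 is too large
                return 0.0
            return brentq(g, 1e-8, 10.0 * n_total)

        lam_lo, lam_hi = lam_for(1 - alpha / 2), lam_for(alpha / 2)
        return lam_lo / (lam_lo + n_total), lam_hi / (lam_hi + n_total)

    # Hypothetical: F(2, 57) = 4.5 from a 3-group ANOVA with N = 60
    print(omega_sq_ci(4.5, 2, 57, 60))
    ```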

  12. Statistical surrogate models for prediction of high-consequence climate change.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Constantine, Paul; Field, Richard V., Jr.; Boslough, Mark Bruce Elrick

    2011-09-01

    In safety engineering, performance metrics are defined using probabilistic risk assessments focused on the low-probability, high-consequence tail of the distribution of possible events, as opposed to best estimates based on central tendencies. We frame the climate change problem and its associated risks in a similar manner. To properly explore the tails of the distribution requires extensive sampling, which is not possible with existing coupled atmospheric models due to the high computational cost of each simulation. We therefore propose the use of specialized statistical surrogate models (SSMs) for the purpose of exploring the probability law of various climate variables of interest. An SSM is different from a deterministic surrogate model in that it represents each climate variable of interest as a space/time random field. The SSM can be calibrated to available spatial and temporal data from existing climate databases, e.g., the Program for Climate Model Diagnosis and Intercomparison (PCMDI), or to a collection of outputs from a General Circulation Model (GCM), e.g., the Community Earth System Model (CESM) and its predecessors. Because of its reduced size and complexity, the realization of a large number of independent model outputs from an SSM becomes computationally straightforward, so that quantifying the risk associated with low-probability, high-consequence climate events becomes feasible. A Bayesian framework is developed to provide quantitative measures of confidence, via Bayesian credible intervals, in the use of the proposed approach to assess these risks.

  13. Investigating the Effect of Social Changes on Age-Specific Gun-Related Homicide Rates in New York City During the 1990s

    PubMed Central

    Messner, Steven F.; Tracy, Melissa; Vlahov, David; Goldmann, Emily; Tardiff, Kenneth J.; Galea, Sandro

    2010-01-01

    Objectives. We assessed whether New York City's gun-related homicide rates in the 1990s were associated with a range of social determinants of homicide rates. Methods. We used cross-sectional time-series data for 74 New York City police precincts from 1990 through 1999, and we estimated Bayesian hierarchical models with a spatial error term. Homicide rates were estimated separately for victims aged 15–24 years (youths), 25–34 years (young adults), and 35 years or older (adults). Results. Decreased cocaine consumption was associated with declining homicide rates in youths (posterior median [PM] = 0.25; 95% Bayesian confidence interval [BCI] = 0.07, 0.45) and adults (PM = 0.07; 95% BCI = 0.02, 0.12), and declining alcohol consumption was associated with fewer homicides in young adults (PM = 0.14; 95% BCI = 0.02, 0.25). Receipt of public assistance was associated with fewer homicides for young adults (PM = –104.20; 95% BCI = –182.0, –26.14) and adults (PM = –28.76; 95% BCI = –52.65, –5.01). Misdemeanor policing was associated with fewer homicides in adults (PM = –0.01; 95% BCI = –0.02, –0.001). Conclusions. Substance use prevention policies and expansion of the social safety net may substantially reduce homicide among the age groups that drive city homicide trends. PMID:20395590

  14. Flood quantile estimation at ungauged sites by Bayesian networks

    NASA Astrophysics Data System (ADS)

    Mediero, L.; Santillán, D.; Garrote, L.

    2012-04-01

    Estimating flood quantiles at a site for which no observed measurements are available is essential for water resources planning and management. Ungauged sites have no observations of the magnitude of floods, but some site and basin characteristics are known. The most common technique used is multiple regression analysis, which relates physical and climatic basin characteristics to flood quantiles. Regression equations are fitted from flood frequency data and basin characteristics at gauged sites. Regression equations are a rigid technique that assumes linear relationships between variables and cannot take measurement errors into account. In addition, the prediction intervals are estimated in a very simplistic way from the variance of the residuals of the estimated model. Bayesian networks are a probabilistic computational structure taken from the field of Artificial Intelligence, which has been widely and successfully applied to many scientific fields like medicine and informatics, although application to the field of hydrology is recent. Bayesian networks infer the joint probability distribution of several related variables from observations through nodes, which represent random variables, and links, which represent causal dependencies between them. A Bayesian network is more flexible than regression equations, as it captures non-linear relationships between variables. In addition, the probabilistic nature of Bayesian networks allows the different sources of estimation uncertainty to be taken into account, as they give a probability distribution as the result. A homogeneous region in the Tagus Basin was selected as a case study. A regression equation was fitted taking the basin area, the annual maximum 24-hour rainfall for a given recurrence interval and the mean height as explanatory variables. Flood quantiles at ungauged sites were estimated by Bayesian networks. Bayesian networks need to be learnt from a sufficiently large data set. As observational data were limited, a stochastic generator of synthetic data was developed. Synthetic basin characteristics were randomised, keeping the statistical properties of observed physical and climatic variables in the homogeneous region. The synthetic flood quantiles were stochastically generated using the regression equation as a basis. The learnt Bayesian network was validated by the reliability diagram, the Brier score and the ROC diagram, which are common measures used in the validation of probabilistic forecasts. In summary, flood quantile estimation through Bayesian networks supplies information about the prediction uncertainty, as the result is a probability distribution function of discharges. The Bayesian network model therefore has application as decision support for water resources planning and management.

  15. Patient, surgeon, and hospital disparities associated with benign hysterectomy approach and perioperative complications.

    PubMed

    Mehta, Ambar; Xu, Tim; Hutfless, Susan; Makary, Martin A; Sinno, Abdulrahman K; Tanner, Edward J; Stone, Rebecca L; Wang, Karen; Fader, Amanda N

    2017-05-01

    Hysterectomy is among the most common major surgical procedures performed in women. Approximately 450,000 hysterectomy procedures are performed each year in the United States for benign indications. However, little is known regarding contemporary US hysterectomy trends for women with benign disease with respect to operative technique and perioperative complications, and the association between these 2 factors with patient, surgeon, and hospital characteristics. We sought to describe contemporary hysterectomy trends and explore associations between patient, surgeon, and hospital characteristics with surgical approach and perioperative complications. Hysterectomies performed for benign indications by general gynecologists from July 2012 through September 2014 were analyzed in the all-payer Maryland Health Services Cost Review Commission database. We excluded hysterectomies performed by gynecologic oncologists, reproductive endocrinologists, and female pelvic medicine and reconstructive surgeons. We included both open hysterectomies and those performed by minimally invasive surgery, which included vaginal hysterectomies. Perioperative complications were defined using the Agency for Healthcare Research and Quality patient safety indicators. Surgeon hysterectomy volume during the 2-year study period was analyzed (0-5 cases annually = very low, 6-10 = low, 11-20 = medium, and ≥21 = high). We utilized logistic regression and negative binomial regression to identify patient, surgeon, and hospital characteristics associated with minimally invasive surgery utilization and perioperative complications, respectively. A total of 5660 hospitalizations were identified during the study period. Most patients (61.5%) had an open hysterectomy; 38.5% underwent a minimally invasive surgery procedure (25.1% robotic, 46.6% laparoscopic, 28.3% vaginal). Most surgeons (68.2%) were very low- or low-volume surgeons. Factors associated with a lower likelihood of undergoing minimally invasive surgery included older patient age (reference 45-64 years; 20-44 years: adjusted odds ratio, 1.16; 95% confidence interval, 1.05-1.28), black race (reference white; adjusted odds ratio, 0.70; 95% confidence interval, 0.63-0.78), Hispanic ethnicity (adjusted odds ratio, 0.62; 95% confidence interval, 0.48-0.80), smaller hospital (reference large; small: adjusted odds ratio, 0.26; 95% confidence interval, 0.15-0.45; medium: adjusted odds ratio, 0.87; 95% confidence interval, 0.79-0.96), medium hospital hysterectomy volume (reference ≥200 hysterectomies; 100-200: adjusted odds ratio, 0.78; 95% confidence interval, 0.71-0.87), and medium vs high surgeon volume (reference high; medium: adjusted odds ratio, 0.87; 95% confidence interval, 0.78-0.97). Complications occurred in 25.8% of open and 8.2% of minimally invasive hysterectomies (P < .0001). Minimally invasive hysterectomy (adjusted odds ratio, 0.22; 95% confidence interval, 0.17-0.27) and large hysterectomy volume hospitals (reference ≥200 hysterectomies; 1-100: adjusted odds ratio, 2.26; 95% confidence interval, 1.60-3.20; 101-200: adjusted odds ratio, 1.63; 95% confidence interval, 1.23-2.16) were associated with fewer complications, while patient payer, including Medicare (reference private; adjusted odds ratio, 1.86; 95% confidence interval, 1.33-2.61), Medicaid (adjusted odds ratio, 1.63; 95% confidence interval, 1.30-2.04), and self-pay status (adjusted odds ratio, 2.41; 95% confidence interval, 1.40-4.12), and very-low and low surgeon hysterectomy volume (reference ≥21 cases; 1-5 cases: adjusted odds ratio, 1.73; 95% confidence interval, 1.22-2.47; 6-10 cases: adjusted odds ratio, 1.60; 95% confidence interval, 1.11-2.23) were associated with perioperative complications. Use of minimally invasive hysterectomy for benign indications remains variable, with most patients undergoing open, more morbid procedures. Older and black patients and smaller hospitals are associated with open hysterectomy. Patient race and payer status, hysterectomy approach, and surgeon volume were associated with perioperative complications. Hysterectomies performed for benign indications by high-volume surgeons or by minimally invasive techniques may represent an opportunity to reduce preventable harm. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Opioid Analgesics and Adverse Outcomes among Hemodialysis Patients.

    PubMed

    Ishida, Julie H; McCulloch, Charles E; Steinman, Michael A; Grimes, Barbara A; Johansen, Kirsten L

    2018-05-07

    Patients on hemodialysis frequently experience pain and may be particularly vulnerable to opioid-related complications. However, data evaluating the risks of opioid use in patients on hemodialysis are limited. Using the US Renal Data System, we conducted a cohort study evaluating the association between opioid use (modeled as a time-varying exposure and expressed in standardized oral morphine equivalents) and time to first emergency room visit or hospitalization for altered mental status, fall, and fracture among 140,899 Medicare-covered adults receiving hemodialysis in 2011. We evaluated risk according to average daily total opioid dose (>60 mg, ≤60 mg, and per 60-mg dose increment) and specific agents (per 60-mg dose increment). The median age was 61 years, 52% were men, and 50% were white. Sixty-four percent received opioids, and 17% had an episode of altered mental status (15,658 events), fall (7646 events), or fracture (4151 events) in 2011. Opioid use was associated with risk for all outcomes in a dose-dependent manner: altered mental status (lower dose: hazard ratio, 1.28; 95% confidence interval, 1.23 to 1.34; higher dose: hazard ratio, 1.67; 95% confidence interval, 1.56 to 1.78; hazard ratio, 1.29 per 60 mg; 95% confidence interval, 1.26 to 1.33), fall (lower dose: hazard ratio, 1.28; 95% confidence interval, 1.21 to 1.36; higher dose: hazard ratio, 1.45; 95% confidence interval, 1.31 to 1.61; hazard ratio, 1.04 per 60 mg; 95% confidence interval, 1.03 to 1.05), and fracture (lower dose: hazard ratio, 1.44; 95% confidence interval, 1.33 to 1.56; higher dose: hazard ratio, 1.65; 95% confidence interval, 1.44 to 1.89; hazard ratio, 1.04 per 60 mg; 95% confidence interval, 1.04 to 1.05). All agents were associated with a significantly higher hazard of altered mental status, and several agents were associated with a significantly higher hazard of fall and fracture. Opioids were associated with adverse outcomes in patients on hemodialysis, and this risk was present even at lower dosing and for agents that guidelines have recommended for use. Copyright © 2018 by the American Society of Nephrology.

  17. Pregnancy outcome in joint hypermobility syndrome and Ehlers-Danlos syndrome.

    PubMed

    Sundelin, Heléne E K; Stephansson, Olof; Johansson, Kari; Ludvigsson, Jonas F

    2017-01-01

    An increased risk of preterm birth in women with joint hypermobility syndrome or Ehlers-Danlos syndrome is suspected. In this nationwide cohort study from 1997 through 2011, women with either joint hypermobility syndrome or Ehlers-Danlos syndrome or both disorders were identified through the Swedish Patient Register, and linked to the Medical Birth Register. Thereby, 314 singleton births to women with joint hypermobility syndrome/Ehlers-Danlos syndrome before delivery were identified. These births were compared with 1 247 864 singleton births to women without a diagnosis of joint hypermobility syndrome/Ehlers-Danlos syndrome. We used logistic regression, adjusted for maternal age, smoking, parity, and year of birth, to calculate adjusted odds ratios for adverse pregnancy outcomes. Maternal joint hypermobility syndrome/Ehlers-Danlos syndrome was not associated with any of our outcomes: preterm birth (adjusted odds ratio = 0.6, 95% confidence interval 0.3-1.2), preterm premature rupture of membranes (adjusted odds ratio = 0.8; 95% confidence interval 0.3-2.2), cesarean section (adjusted odds ratio = 0.9, 95% confidence interval 0.7-1.2), stillbirth (adjusted odds ratio = 1.1, 95% confidence interval 0.2-7.9), low Apgar score (adjusted odds ratio = 1.6, 95% confidence interval 0.7-3.6), small for gestational age (adjusted odds ratio = 0.9, 95% confidence interval 0.4-1.8) or large for gestational age (adjusted odds ratio = 1.2, 95% confidence interval 0.6-2.1). Examining only women with Ehlers-Danlos syndrome (n = 62), we found a higher risk of induction of labor (adjusted odds ratio = 2.6; 95% confidence interval 1.4-4.6) and amniotomy (adjusted odds ratio = 3.8; 95% confidence interval 2.0-7.1). No excess risks for adverse pregnancy outcome were seen in joint hypermobility syndrome. Women with joint hypermobility syndrome/Ehlers-Danlos syndrome do not seem to be at increased risk of adverse pregnancy outcome. © 2016 Nordic Federation of Societies of Obstetrics and Gynecology.

  18. Comprehension of confidence intervals - development and piloting of patient information materials for people with multiple sclerosis: qualitative study and pilot randomised controlled trial.

    PubMed

    Rahn, Anne C; Backhus, Imke; Fuest, Franz; Riemann-Lorenz, Karin; Köpke, Sascha; van de Roemer, Adrianus; Mühlhauser, Ingrid; Heesen, Christoph

    2016-09-20

    Presentation of confidence intervals alongside information about treatment effects can support informed treatment choices in people with multiple sclerosis. We aimed to develop and pilot-test different written patient information materials explaining confidence intervals in people with relapsing-remitting multiple sclerosis. Further, a questionnaire on comprehension of confidence intervals was developed and piloted. We developed different patient information versions aiming to explain confidence intervals. We used an illustrative example to test three different approaches: (1) short version, (2) "average weight" version and (3) "worm prophylaxis" version. Interviews were conducted using think-aloud and teach-back approaches to test feasibility and analysed using qualitative content analysis. To assess comprehension of confidence intervals, a six-item multiple choice questionnaire was developed and tested in a pilot randomised controlled trial using the online survey software UNIPARK. Here, the average weight version (intervention group) was tested against a standard patient information version on confidence intervals (control group). People with multiple sclerosis were invited to take part using existing mailing lists of people with multiple sclerosis in Germany and were randomised using the UNIPARK algorithm. Participants were blinded towards group allocation. The primary endpoint was comprehension of confidence intervals, assessed with the six-item multiple choice questionnaire, with six points representing perfect knowledge. Feasibility of the patient information versions was tested with 16 people with multiple sclerosis. For the pilot randomised controlled trial, 64 people with multiple sclerosis were randomised (intervention group: n = 36; control group: n = 28). More questions were answered correctly in the intervention group compared to the control group (mean 4.8 vs 3.8, mean difference 1.1 (95% CI 0.42-1.69), p = 0.002). The questionnaire's internal consistency was moderate (Cronbach's alpha = 0.56). The pilot phase shows promising results concerning acceptability and feasibility. Pilot randomised controlled trial results indicate that the patient information is well understood and that knowledge gain on confidence intervals can be assessed with a set of six questions. German Clinical Trials Register: DRKS00008561. Registered 8 June 2015.

  19. Ethnic Differences in Incidence and Outcomes of Childhood Nephrotic Syndrome.

    PubMed

    Banh, Tonny H M; Hussain-Shamsy, Neesha; Patel, Viral; Vasilevska-Ristovska, Jovanka; Borges, Karlota; Sibbald, Cathryn; Lipszyc, Deborah; Brooke, Josefina; Geary, Denis; Langlois, Valerie; Reddon, Michele; Pearl, Rachel; Levin, Leo; Piekut, Monica; Licht, Christoph P B; Radhakrishnan, Seetha; Aitken-Menezes, Kimberly; Harvey, Elizabeth; Hebert, Diane; Piscione, Tino D; Parekh, Rulan S

    2016-10-07

    Ethnic differences in outcomes among children with nephrotic syndrome are unknown. We conducted a longitudinal study at a single regional pediatric center comparing ethnic differences in incidence from 2001 to 2011 census data and longitudinal outcomes, including relapse rates, time to first relapse, frequently relapsing disease, and use of cyclophosphamide. Among 711 children, 24% were European, 33% were South Asian, 10% were East/Southeast Asian, and 33% were of other origins. Over 10 years, the overall incidence increased from 1.99/100,000 to 4.71/100,000 among children ages 1-18 years. In 2011, South Asians had a higher incidence rate ratio of 6.61 (95% confidence interval, 3.16 to 15.1) compared with Europeans. East/Southeast Asians had a similar incidence rate ratio (0.76; 95% confidence interval, 0.13 to 2.94) to Europeans. We determined outcomes in 455 children from the three largest ethnic groups with steroid-sensitive disease over a median of 4 years. South Asian and East/Southeast Asian children had significantly lower odds of frequently relapsing disease at 12 months (South Asian: adjusted odds ratio, 0.55; 95% confidence interval, 0.39 to 0.77; East/Southeast Asian: adjusted odds ratio, 0.42; 95% confidence interval, 0.34 to 0.51), fewer subsequent relapses (South Asian: adjusted odds ratio, 0.64; 95% confidence interval, 0.50 to 0.81; East/Southeast Asian: adjusted odds ratio, 0.47; 95% confidence interval, 0.24 to 0.91), lower risk of a first relapse (South Asian: adjusted hazard ratio, 0.74; 95% confidence interval, 0.67 to 0.83; East/Southeast Asian: adjusted hazard ratio, 0.65; 95% confidence interval, 0.63 to 0.68), and lower use of cyclophosphamide (South Asian: adjusted hazard ratio, 0.82; 95% confidence interval, 0.53 to 1.28; East/Southeast Asian: adjusted hazard ratio, 0.54; 95% confidence interval, 0.41 to 0.71) compared with European children. Despite the higher incidence among South Asians, South and East/Southeast Asian children have significantly less complicated clinical outcomes compared with Europeans. Copyright © 2016 by the American Society of Nephrology.

  20. Confidence intervals for correlations when data are not normal.

    PubMed

    Bishara, Anthony J; Hittner, James B

    2017-02-01

    With nonnormal data, the typical confidence interval of the correlation (Fisher z') may be inaccurate. The literature has been unclear as to which of several alternative methods should be used instead, and how extreme a violation of normality is needed to justify an alternative. Through Monte Carlo simulation, 11 confidence interval methods were compared, including Fisher z', two Spearman rank-order methods, the Box-Cox transformation, the rank-based inverse normal (RIN) transformation, and various bootstrap methods. Nonnormality often distorted the Fisher z' confidence interval; for example, it led to a 95% confidence interval that had actual coverage as low as 68%. Increasing the sample size sometimes worsened this problem. Inaccurate Fisher z' intervals could be predicted by a sample kurtosis of at least 2, an absolute sample skewness of at least 1, or significant violations of normality hypothesis tests. Only the Spearman rank-order and RIN transformation methods were universally robust to nonnormality. Among the bootstrap methods, an observed imposed bootstrap came closest to accurate coverage, though it often resulted in an overly long interval. The results suggest that sample nonnormality can justify avoidance of the Fisher z' interval in favor of a more robust alternative. R code for the relevant methods is provided in supplementary materials.
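
    For reference, the default method that the study stress-tests takes only a few lines. The sketch below implements the Fisher z' interval; per the findings above, with marked nonnormality (sample |skewness| >= 1 or kurtosis >= 2) a Spearman or RIN-based interval is preferable.

    ```python
    import numpy as np
    from scipy import stats

    def fisher_z_ci(r, n, conf=0.95):
        """Classical Fisher z' confidence interval for a correlation."""
        z = np.arctanh(r)                  # Fisher z transform
        se = 1.0 / np.sqrt(n - 3)
        crit = stats.norm.ppf(0.5 + conf / 2.0)
        return np.tanh(z - crit * se), np.tanh(z + crit * se)

    print(fisher_z_ci(r=0.45, n=50))
    ```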

  1. Confidence intervals and sample size calculations for the standardized mean difference effect size between two normal populations under heteroscedasticity.

    PubMed

    Shieh, G

    2013-12-01

    The use of effect sizes and associated confidence intervals in all empirical research has been strongly emphasized by journal publication guidelines. To help advance theory and practice in the social sciences, this article describes an improved procedure for constructing confidence intervals of the standardized mean difference effect size between two independent normal populations with unknown and possibly unequal variances. The presented approach has advantages over the existing formula in both theoretical justification and computational simplicity. In addition, simulation results show that the suggested one- and two-sided confidence intervals are more accurate in achieving the nominal coverage probability. The proposed estimation method provides a feasible alternative to the most commonly used measure of Cohen's d and the corresponding interval procedure when the assumption of homogeneous variances is not tenable. To further improve the potential applicability of the suggested methodology, the sample size procedures for precise interval estimation of the standardized mean difference are also delineated. The desired precision of a confidence interval is assessed with respect to the control of expected width and the assurance probability that the interval width falls within a designated value. Supplementary computer programs are developed to aid in the usefulness and implementation of the introduced techniques.
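
    The baseline that such procedures refine is the noncentral-t pivot under equal variances. The sketch below (illustrative numbers; not the paper's heteroscedastic procedure, which modifies this construction) inverts the noncentral t CDF to get a confidence interval for the standardized mean difference.

    ```python
    import numpy as np
    from scipy import stats
    from scipy.optimize import brentq

    def cohens_d_ci(d, n1, n2, conf=0.95):
        """CI for the standardized mean difference by pivoting the
        noncentral t distribution, assuming equal variances."""
        scale = np.sqrt(n1 * n2 / (n1 + n2))   # maps d to noncentrality
        df = n1 + n2 - 2
        t_obs = d * scale
        alpha = 1 - conf

        def ncp_for(p):
            g = lambda nc: stats.nct.cdf(t_obs, df, nc) - p
            width = 10.0 + 10.0 * abs(t_obs)
            return brentq(g, t_obs - width, t_obs + width)

        return ncp_for(1 - alpha / 2) / scale, ncp_for(alpha / 2) / scale

    print(cohens_d_ci(d=0.6, n1=30, n2=25))   # roughly (0.06, 1.14)
    ```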

  2. Self-evaluation of decision-making: A general Bayesian framework for metacognitive computation.

    PubMed

    Fleming, Stephen M; Daw, Nathaniel D

    2017-01-01

    People are often aware of their mistakes, and report levels of confidence in their choices that correlate with objective performance. These metacognitive assessments of decision quality are important for the guidance of behavior, particularly when external feedback is absent or sporadic. However, a computational framework that accounts for both confidence and error detection is lacking. In addition, accounts of dissociations between performance and metacognition have often relied on ad hoc assumptions, precluding a unified account of intact and impaired self-evaluation. Here we present a general Bayesian framework in which self-evaluation is cast as a "second-order" inference on a coupled but distinct decision system, computationally equivalent to inferring the performance of another actor. Second-order computation may ensue whenever there is a separation between internal states supporting decisions and confidence estimates over space and/or time. We contrast second-order computation against simpler first-order models in which the same internal state supports both decisions and confidence estimates. Through simulations we show that second-order computation provides a unified account of different types of self-evaluation often considered in separate literatures, such as confidence and error detection, and generates novel predictions about the contribution of one's own actions to metacognitive judgments. In addition, the model provides insight into why subjects' metacognition may sometimes be better or worse than task performance. We suggest that second-order computation may underpin self-evaluative judgments across a range of domains. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  3. The idiosyncratic nature of confidence

    PubMed Central

    Navajas, Joaquin; Hindocha, Chandni; Foda, Hebah; Keramati, Mehdi; Latham, Peter E; Bahrami, Bahador

    2017-01-01

    Confidence is the ‘feeling of knowing’ that accompanies decision making. Bayesian theory proposes that confidence is a function solely of the perceived probability of being correct. Empirical research has suggested, however, that different individuals may perform different computations to estimate confidence from uncertain evidence. To test this hypothesis, we collected confidence reports in a task where subjects made categorical decisions about the mean of a sequence. We found that for most individuals, confidence did indeed reflect the perceived probability of being correct. However, in approximately half of them, confidence also reflected a different probabilistic quantity: the perceived uncertainty in the estimated variable. We found that the contribution of both quantities was stable over weeks. We also observed that the influence of the perceived probability of being correct was stable across two tasks, one perceptual and one cognitive. Overall, our findings provide a computational interpretation of individual differences in human confidence. PMID:29152591

  4. Bayesian model averaging method for evaluating associations between air pollution and respiratory mortality: a time-series study.

    PubMed

    Fang, Xin; Li, Runkui; Kan, Haidong; Bottai, Matteo; Fang, Fang; Cao, Yang

    2016-08-16

    To demonstrate an application of Bayesian model averaging (BMA) with generalised additive mixed models (GAMM) and provide a novel modelling technique to assess the association between inhalable coarse particles (PM10) and respiratory mortality in time-series studies. A time-series study using a regional death registry between 2009 and 2010. 8 districts in a large metropolitan area in Northern China. 9559 permanent residents of the 8 districts who died of respiratory diseases between 2009 and 2010. Per cent increase in daily respiratory mortality rate (MR) per interquartile range (IQR) increase of PM10 concentration and corresponding 95% confidence interval (CI) in single-pollutant and multipollutant (including NOx, CO) models. The Bayesian model averaged GAMM (GAMM+BMA) and the optimal GAMM of PM10, multipollutants and principal components (PCs) of multipollutants showed comparable results for the effect of PM10 on daily respiratory MR, that is, one IQR increase in PM10 concentration corresponded to 1.38% vs 1.39%, 1.81% vs 1.83% and 0.87% vs 0.88% increase, respectively, in daily respiratory MR. However, GAMM+BMA gave slightly but noticeably wider CIs for the single-pollutant model (-1.09 to 4.28 vs -1.08 to 3.93) and the PCs-based model (-2.23 to 4.07 vs -2.03 to 3.88). The CIs of the multiple-pollutant model from the two methods are similar, that is, -1.12 to 4.85 versus -1.11 to 4.83. The BMA method may represent a useful tool for modelling uncertainty in time-series studies when evaluating the effect of air pollution on fatal health outcomes. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  5. Probabilistic Appraisal of Earthquake Hazard Parameters Deduced from a Bayesian Approach in the Northwest Frontier of the Himalayas

    NASA Astrophysics Data System (ADS)

    Yadav, R. B. S.; Tsapanos, T. M.; Bayrak, Yusuf; Koravos, G. Ch.

    2013-03-01

    A straightforward Bayesian statistic is applied in five broad seismogenic source zones of the northwest frontier of the Himalayas to estimate the earthquake hazard parameters (maximum regional magnitude M_max, β value of the G-R relationship and seismic activity rate or intensity λ). For this purpose, a reliable earthquake catalogue which is homogeneous for M_W ≥ 5.0 and complete during the period 1900 to 2010 is compiled. The Hindukush-Pamir Himalaya zone has been further divided into two seismic zones of shallow (h ≤ 70 km) and intermediate depth (h > 70 km) according to the variation of seismicity with depth in the subduction zone. The earthquake hazard parameters estimated by the Bayesian approach are more stable and reliable, with lower standard deviations, than those from other approaches, but the technique is more time consuming. In this study, quantiles of functions of distributions of true and apparent magnitudes for future time intervals of 5, 10, 20, 50 and 100 years are calculated with confidence limits for probability levels of 50, 70 and 90% in all seismogenic source zones. The zones of estimated M_max greater than 8.0 are related to the Sulaiman-Kirthar ranges, Hindukush-Pamir Himalaya and the Himalayan Frontal Thrusts belt, suggesting that these are the most seismically hazardous regions in the examined area. The lowest value of M_max (6.44) has been calculated in the Northern-Pakistan and Hazara syntaxis zone, which also has the lowest estimated activity rate (0.0023 events/day) of all the zones. The Himalayan Frontal Thrusts belt exhibits a higher earthquake magnitude (8.01) in the next 100 years at the 90% probability level than the other zones, which reveals that this zone is the most vulnerable to the occurrence of a great earthquake. The results obtained in this study are directly useful for probabilistic seismic hazard assessment in the examined region of the Himalaya.

  6. Population pharmacokinetic-pharmacodynamic modelling of mycophenolic acid in paediatric renal transplant recipients in the early post-transplant period.

    PubMed

    Dong, Min; Fukuda, Tsuyoshi; Cox, Shareen; de Vries, Marij T; Hooper, David K; Goebel, Jens; Vinks, Alexander A

    2014-11-01

    The purpose of this study was to develop a population pharmacokinetic and pharmacodynamic (PK-PD) model for mycophenolic acid (MPA) in paediatric renal transplant recipients in the early post-transplant period. A total of 214 MPA plasma concentration-time data points from 24 patients were available for PK model development. In 17 of the 24 patients, inosine monophosphate dehydrogenase (IMPDH) enzyme activity measurements (n = 97) in peripheral blood mononuclear cells were available for PK-PD modelling. The PK-PD model was developed using non-linear mixed effects modelling sequentially by (1) developing a population PK model and (2) incorporating IMPDH activity into a PK-PD model using post hoc Bayesian PK parameter estimates. Covariate analysis included patient demographics, co-medication and clinical laboratory data. Non-parametric bootstrapping and prediction-corrected visual predictive checks were performed to evaluate the final models. A two-compartment model with transit compartment absorption best described MPA PK. A non-linear relationship between dose and MPA exposure was observed and was described by a power function in the model. The final population PK parameter estimates (and their 95% confidence intervals) were CL/F, 22 (14.8, 25.2) l h(-1) 70 kg(-1); Vc/F, 45.4 (29.6, 55.6) l; Vp/F, 411 (152.6, 1472.6) l; Q/F, 22.4 (16.0, 32.5) l h(-1); Ka, 2.5 (1.45, 4.93) h(-1). Covariate analysis in the PK study identified body weight to be significantly correlated with CL/F. A simplified inhibitory Emax model adequately described the relationship between MPA concentration and IMPDH activity. The final population PK-PD parameter estimates (and their 95% confidence intervals) were: E0, 3.45 (2.61, 4.56) nmol h(-1) mg(-1) protein and EC50, 1.73 (1.16, 3.01) mg l(-1). Emax was fixed to 0. There were two African-American patients in our study cohorts and both had low IMPDH baseline activities (E0) compared with Caucasian patients (mean value 2.13 vs. 3.86 nmol h(-1) mg(-1) protein). An integrated population PK-PD model of MPA has been developed in paediatric renal transplant recipients. The current model provides information that will facilitate future studies and may be implemented in a Bayesian algorithm to allow a PK-PD guided therapeutic drug monitoring strategy. © 2014 The British Pharmacological Society.
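
    The concentration-effect relationship reported above is compact enough to state directly. The sketch below encodes the simplified inhibitory Emax model with the maximal-inhibition asymptote fixed at 0, plugging in the published point estimates for E0 and EC50.

    ```python
    def impdh_activity(conc, e0=3.45, ec50=1.73):
        """IMPDH activity (nmol h^-1 mg^-1 protein) vs MPA concentration
        (mg/l): activity falls from E0 toward 0 as concentration rises."""
        return e0 * (1.0 - conc / (ec50 + conc))

    for c in (0.0, 1.73, 10.0):
        print(c, round(impdh_activity(c), 2))   # half of E0 at the EC50
    ```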

  7. A comparison of confidence interval methods for the intraclass correlation coefficient in community-based cluster randomization trials with a binary outcome.

    PubMed

    Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan

    2016-04-01

    Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. However, confidence intervals constructed using the new approach combined with Smith's method provide nominal or close to nominal coverage when the intraclass correlation coefficient is small (<0.05), as is the case in most community intervention trials. This study concludes that when a binary outcome variable is measured in a small number of large clusters, confidence intervals for the intraclass correlation coefficient may be constructed by dividing existing clusters into sub-clusters (e.g. groups of 5) and using Smith's method. The resulting confidence intervals provide nominal or close to nominal coverage across a wide range of parameters when the intraclass correlation coefficient is small (<0.05). Application of this method should provide investigators with a better understanding of the uncertainty associated with a point estimator of the intraclass correlation coefficient used for determining the sample size needed for a newly designed community-based trial. © The Author(s) 2015.
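
    The estimator at the center of these comparisons is the one-way ANOVA ICC, which is short enough to sketch below (simulated binary data with a true ICC near 0.02; Smith's standard-error formula and the interval constructions themselves are omitted).

    ```python
    import numpy as np

    def anova_icc(clusters):
        """One-way ANOVA estimator of the ICC for a binary outcome,
        treating the 0/1 responses as continuous scores."""
        k = len(clusters)
        n_i = np.array([len(c) for c in clusters])
        N = n_i.sum()
        means = np.array([np.mean(c) for c in clusters])
        grand = np.concatenate(clusters).mean()
        msb = np.sum(n_i * (means - grand) ** 2) / (k - 1)
        msw = sum(np.sum((np.asarray(c) - m) ** 2)
                  for c, m in zip(clusters, means)) / (N - k)
        n0 = (N - np.sum(n_i ** 2) / N) / (k - 1)   # ANOVA average cluster size
        return (msb - msw) / (msb + (n0 - 1) * msw)

    # 8 large clusters; the Beta(4.9, 44.1) cluster effect implies a true
    # ICC of 1 / (4.9 + 44.1 + 1) ~= 0.02 and a mean prevalence of 0.10.
    rng = np.random.default_rng(3)
    clusters = [rng.binomial(1, p, size=300) for p in rng.beta(4.9, 44.1, size=8)]
    print(anova_icc(clusters))
    ```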

  8. Facebook and Twitter vaccine sentiment in response to measles outbreaks.

    PubMed

    Deiner, Michael S; Fathy, Cherie; Kim, Jessica; Niemeyer, Katherine; Ramirez, David; Ackley, Sarah F; Liu, Fengchen; Lietman, Thomas M; Porco, Travis C

    2017-11-01

    Social media posts regarding measles vaccination were classified as pro-vaccination, expressing vaccine hesitancy, uncertain, or irrelevant. Spearman correlations with Centers for Disease Control and Prevention-reported measles cases and differenced smoothed cumulative case counts over this period were reported (using time series bootstrap confidence intervals). A total of 58,078 Facebook posts and 82,993 tweets were identified from 4 January 2009 to 27 August 2016. Pro-vaccination posts were correlated with the US weekly reported cases (Facebook: Spearman correlation 0.22 (95% confidence interval: 0.09 to 0.34), Twitter: 0.21 (95% confidence interval: 0.06 to 0.34)). Vaccine-hesitant posts, however, were uncorrelated with measles cases in the United States (Facebook: 0.01 (95% confidence interval: -0.13 to 0.14), Twitter: 0.0011 (95% confidence interval: -0.12 to 0.12)). These findings may result from more consistent social media engagement by individuals expressing vaccine hesitancy, contrasted with media- or event-driven episodic interest on the part of individuals favoring current policy.

  10. Exact Scheffé-type confidence intervals for output from groundwater flow models: 1. Use of hydrogeologic information

    USGS Publications Warehouse

    Cooley, Richard L.

    1993-01-01

    A new method is developed to efficiently compute exact Scheffé-type confidence intervals for output (or other function of parameters) g(β) derived from a groundwater flow model. The method is general in that parameter uncertainty can be specified by any statistical distribution having a log probability density function (log pdf) that can be expanded in a Taylor series. However, for this study parameter uncertainty is specified by a statistical multivariate beta distribution that incorporates hydrogeologic information in the form of the investigator's best estimates of parameters and a grouping of random variables representing possible parameter values so that each group is defined by maximum and minimum bounds and an ordering according to increasing value. The new method forms the confidence intervals from maximum and minimum limits of g(β) on a contour of a linear combination of (1) the quadratic form for the parameters used by Cooley and Vecchia (1987) and (2) the log pdf for the multivariate beta distribution. Three example problems are used to compare characteristics of the confidence intervals for hydraulic head obtained using different weights for the linear combination. Different weights generally produced similar confidence intervals, whereas the method of Cooley and Vecchia (1987) often produced much larger confidence intervals.

  11. A confidence interval analysis of sampling effort, sequencing depth, and taxonomic resolution of fungal community ecology in the era of high-throughput sequencing.

    PubMed

    Oono, Ryoko

    2017-01-01

    High-throughput sequencing technology has helped microbial community ecologists explore ecological and evolutionary patterns at unprecedented scales. The benefits of a large sample size still typically outweigh that of greater sequencing depths per sample for accurate estimations of ecological inferences. However, excluding or not sequencing rare taxa may mislead the answers to the questions 'how and why are communities different?' This study evaluates the confidence intervals of ecological inferences from high-throughput sequencing data of foliar fungal endophytes as case studies through a range of sampling efforts, sequencing depths, and taxonomic resolutions to understand how technical and analytical practices may affect our interpretations. Increasing sampling size reliably decreased confidence intervals across multiple community comparisons. However, the effects of sequencing depths on confidence intervals depended on how rare taxa influenced the dissimilarity estimates among communities and did not significantly decrease confidence intervals for all community comparisons. A comparison of simulated communities under random drift suggests that sequencing depths are important in estimating dissimilarities between microbial communities under neutral selective processes. Confidence interval analyses reveal important biases as well as biological trends in microbial community studies that otherwise may be ignored when communities are only compared for statistically significant differences.

  12. The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution

    NASA Astrophysics Data System (ADS)

    Shin, H.; Heo, J.; Kim, T.; Jung, Y.

    2007-12-01

    The generalized logistic (GL) distribution has been widely used for frequency analysis. However, few studies have addressed confidence intervals, which indicate the prediction accuracy of the fitted distribution, for the GL distribution. In this paper, the estimation of confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM), and the asymptotic variances of each quantile estimator are derived as functions of sample size, return period, and parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived quantile confidence intervals. The results show that the relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase as the return period increases and decrease as the sample size increases. PWM performs better than the other methods in terms of RRMSE when the data are nearly symmetric, whereas ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was also applied to fit the distribution of annual maximum rainfall data; the estimated quantiles differ little between ML and PWM but distinctly for MOM.

  14. Primary repair of penetrating colon injuries: a systematic review.

    PubMed

    Singer, Marc A; Nelson, Richard L

    2002-12-01

    Primary repair of penetrating colon injuries is an appealing management option; however, uncertainty about its safety persists. This study was conducted to compare the morbidity and mortality of primary repair with fecal diversion in the management of penetrating colon injuries by use of a meta-analysis of randomized, prospective trials. We searched for prospective, randomized trials in MEDLINE (1966 to November 2001), the Cochrane Library, and EMBase using the terms colon, penetrating, injury, colostomy, prospective, and randomized. Studies were included if they were randomized, controlled trials that compared the outcomes of primary repair with fecal diversion in the management of penetrating colon injuries. Five studies were included. Reviewers performed data extraction independently. Outcomes evaluated from each trial included mortality, total complications, infectious complications, intra-abdominal infections, wound complications, penetrating abdominal trauma index, and length of stay. Peto odds ratios for combined effect were calculated with a 95 percent confidence interval for each outcome. Heterogeneity was also assessed for each outcome. The penetrating abdominal trauma index of included subjects did not differ significantly between studies. Mortality was not significantly different between groups (odds ratio, 1.70; 95 percent confidence interval, 0.51-5.66). However, total complications (odds ratio, 0.28; 95 percent confidence interval, 0.18-0.42), total infectious complications (odds ratio, 0.41; 95 percent confidence interval, 0.27-0.63), abdominal infections including dehiscence (odds ratio, 0.59; 95 percent confidence interval, 0.38-0.94), abdominal infections excluding dehiscence (odds ratio, 0.52; 95 percent confidence interval, 0.31-0.86), wound complications including dehiscence (odds ratio, 0.55; 95 percent confidence interval, 0.34-0.89), and wound complications excluding dehiscence (odds ratio, 0.43; 95 percent confidence interval, 0.25-0.76) all significantly favored primary repair. Meta-analysis of currently published randomized, controlled trials favors primary repair over fecal diversion for penetrating colon injuries.
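
    The Peto method named above pools 2x2 tables through observed-minus-expected event counts. A minimal sketch follows; the table values are hypothetical, not the trials' data.

        import numpy as np
        from scipy.stats import norm

        def peto_or(tables, alpha=0.05):
            """Peto fixed-effect pooled odds ratio from 2x2 tables.
            Each table: (events_treated, n_treated, events_control, n_control)."""
            o_minus_e, v = 0.0, 0.0
            for a, n1, c, n2 in tables:
                n, e = n1 + n2, a + c
                o_minus_e += a - n1 * e / n                    # observed minus expected
                v += n1 * n2 * e * (n - e) / (n ** 2 * (n - 1))  # hypergeometric variance
            log_or = o_minus_e / v
            z = norm.ppf(1 - alpha / 2)
            return (np.exp(log_or),
                    np.exp(log_or - z / np.sqrt(v)),
                    np.exp(log_or + z / np.sqrt(v)))

        # hypothetical trials: (events repair, n repair, events diversion, n diversion)
        trials = [(10, 67, 20, 62), (6, 56, 18, 53), (4, 43, 12, 45)]
        print(peto_or(trials))   # pooled OR below 1 favours primary repair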

  15. Bullying and mental health and suicidal behaviour among 14- to 15-year-olds in a representative sample of Australian children.

    PubMed

    Ford, Rebecca; King, Tania; Priest, Naomi; Kavanagh, Anne

    2017-09-01

    To provide the first Australian population-based estimates of the association between bullying and adverse mental health outcomes and suicidality among Australian adolescents. Analysis of data from 3537 adolescents, aged 14-15 years from Wave 6 of the K-cohort of Longitudinal Study of Australian Children was conducted. We used Poisson and linear regression to estimate associations between bullying type (none, relational-verbal, physical, both types) and role (no role, victim, bully, victim and bully), and mental health (measured by the Strengths and Difficulties Questionnaire, symptoms of anxiety and depression) and suicidality. Adolescents involved in bullying had significantly increased Strengths and Difficulties Questionnaire, depression and anxiety scores in all bullying roles and types. In terms of self-harm and suicidality, bully-victims had the highest risk of self-harm (prevalence rate ratio 4.7, 95% confidence interval [3.26, 6.83]), suicidal ideation (prevalence rate ratio 4.3, 95% confidence interval [2.83, 6.49]), suicidal plan (prevalence rate ratio 4.1, 95% confidence interval [2.54, 6.58]) and attempts (prevalence rate ratio 2.7, 95% confidence interval [1.39, 5.13]), followed by victims then bullies. The experience of both relational-verbal and physical bullying was associated with the highest risk of self-harm (prevalence rate ratio 4.6, 95% confidence interval [3.15, 6.60]), suicidal ideation or plans (prevalence rate ratio 4.6, 95% confidence interval [3.05, 6.95]; and 4.8, 95% confidence interval [3.01, 7.64], respectively) or suicide attempts (prevalence rate ratio 3.5, 95% confidence interval [1.90, 6.30]). This study presents the first national, population-based estimates of the associations between bullying by peers and mental health outcomes in Australian adolescents. The markedly increased risk of poor mental health outcomes, self-harm and suicidal ideation and behaviours among adolescents who experienced bullying highlights the importance of addressing bullying in school settings.

  16. Ethnic variations in morbidity and mortality from lower respiratory tract infections: a retrospective cohort study

    PubMed Central

    Steiner, Markus FC; Cezard, Genevieve; Bansal, Narinder; Fischbacher, Colin; Douglas, Anne; Bhopal, Raj; Sheikh, Aziz

    2015-01-01

    Objective There is evidence of substantial ethnic variations in asthma morbidity and the risk of hospitalisation, but the picture in relation to lower respiratory tract infections is unclear. We carried out an observational study to identify ethnic group differences for lower respiratory tract infections. Design A retrospective cohort study. Setting Scotland. Participants 4.65 million people on whom information was available from the 2001 census, followed from May 2001 to April 2010. Main outcome measures Hospitalisations and deaths (any time following first hospitalisation) from lower respiratory tract infections; adjusted risk ratios and hazard ratios by ethnicity and sex were calculated. We multiplied ratios and confidence intervals by 100, so the reference Scottish White population’s risk ratio and hazard ratio was 100. Results Among men, adjusted risk ratios for lower respiratory tract infection hospitalisation were lower in Other White British (80, 95% confidence interval 73–86) and Chinese (69, 95% confidence interval 56–84) populations and higher in Pakistani groups (152, 95% confidence interval 136–169). In women, results were mostly similar to those in men (e.g. Chinese 68, 95% confidence interval 56–82), although higher adjusted risk ratios were found among women of the Other South Asians group (145, 95% confidence interval 120–175). Survival (adjusted hazard ratio) following lower respiratory tract infection for Pakistani men (54, 95% confidence interval 39–74) and women (31, 95% confidence interval 18–53) was better than in the reference population. Conclusions Substantial differences in the rates of lower respiratory tract infections amongst different ethnic groups in Scotland were found. Pakistani men and women had particularly high rates of lower respiratory tract infection hospitalisation. The reasons behind the high rates of lower respiratory tract infection in the Pakistani community now need to be investigated. PMID:26152675

  17. Diagnostic accuracy of the Amsler grid and the preferential hyperacuity perimetry in the screening of patients with age-related macular degeneration: systematic review and meta-analysis.

    PubMed

    Faes, L; Bodmer, N S; Bachmann, L M; Thiel, M A; Schmid, M K

    2014-07-01

    To clarify the screening potential of the Amsler grid and preferential hyperacuity perimetry (PHP) in detecting or ruling out wet age-related macular degeneration (AMD). Medline, Scopus and Web of Science (by citation of reference) were searched. Checking of reference lists of review articles and of included articles complemented electronic searches. Papers were selected, assessed, and extracted in duplicate. Systematic review and meta-analysis. Twelve included studies enrolled 903 patients and allowed the construction of 27 two-by-two tables. Twelve tables reported on the Amsler grid and its modifications, twelve tables reported on the PHP, one table assessed the MCPT and two tables assessed the M-charts. All but two studies had a case-control design. The pooled sensitivity of studies assessing the Amsler grid was 0.78 (95% confidence interval; 0.64-0.87), and the pooled specificity was 0.97 (95% confidence interval; 0.91-0.99). The corresponding positive and negative likelihood ratios were 23.1 (95% confidence interval; 8.4-64.0) and 0.23 (95% confidence interval; 0.14-0.39), respectively. The pooled sensitivity of studies assessing the PHP was 0.85 (95% confidence interval; 0.80-0.89), and specificity was 0.87 (95% confidence interval; 0.82-0.91). The corresponding positive and negative likelihood ratios were 6.7 (95% confidence interval; 4.6-9.8) and 0.17 (95% confidence interval; 0.13-0.23). No pooling was possible for MCPT and M-charts. Results from small preliminary studies show promising test performance characteristics both for the Amsler grid and PHP to rule out wet AMD in the screening setting. To what extent these findings can be transferred to real clinical practice still needs to be established.

  18. On a full Bayesian inference for force reconstruction problems

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2018-05-01

    In a previous paper, the authors introduced a flexible methodology for reconstructing mechanical sources in the frequency domain from prior local information on both their nature and location over a linear, time-invariant structure. The proposed approach was derived from Bayesian statistics, because of its ability to account mathematically for the experimenter's prior knowledge. However, since only the Maximum a Posteriori estimate was computed, the posterior uncertainty about the regularized solution given the measured vibration field, the mechanical model and the regularization parameter was not assessed. To address this question, this paper fully exploits the Bayesian framework to provide, from a Markov Chain Monte Carlo algorithm, credible intervals and other statistical measures (mean, median, mode) for all the parameters of the force reconstruction problem.
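
    As a toy illustration of how an MCMC run yields credible intervals for a force reconstruction, the sketch below applies a random-walk Metropolis sampler to a small linear Gaussian inverse problem; the forward model, priors, and step size are invented stand-ins for the paper's formulation.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy forward model: measured response y = H @ f + noise, unknown forces f.
        H = rng.normal(size=(30, 3))
        f_true = np.array([2.0, -1.0, 0.5])
        y = H @ f_true + rng.normal(0, 0.3, 30)

        def log_post(f, sigma=0.3, tau=10.0):
            """Gaussian likelihood plus a broad zero-mean Gaussian prior on f."""
            return (-0.5 * np.sum((y - H @ f) ** 2) / sigma ** 2
                    - 0.5 * np.sum(f ** 2) / tau ** 2)

        # Random-walk Metropolis
        n_iter, step = 20000, 0.05
        chain = np.empty((n_iter, 3))
        f, lp = np.zeros(3), log_post(np.zeros(3))
        for i in range(n_iter):
            prop = f + rng.normal(0, step, 3)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                f, lp = prop, lp_prop
            chain[i] = f

        post = chain[5000:]                           # discard burn-in
        ci = np.percentile(post, [2.5, 50, 97.5], axis=0)
        print("2.5%, median, 97.5% per force:\n", ci.T)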

  19. The prognostic value of the QT interval and QT interval dispersion in all-cause and cardiac mortality and morbidity in a population of Danish citizens.

    PubMed

    Elming, H; Holm, E; Jun, L; Torp-Pedersen, C; Køber, L; Kircshoff, M; Malik, M; Camm, J

    1998-09-01

    To evaluate the prognostic value of the QT interval and QT interval dispersion in total and in cardiovascular mortality, as well as in cardiac morbidity, in a general population. The QT interval was measured in all leads from a standard 12-lead ECG in a random sample of 1658 women and 1797 men aged 30-60 years. QT interval dispersion was calculated from the maximal difference between QT intervals in any two leads. All-cause mortality over 13 years, and cardiovascular mortality as well as cardiac morbidity over 11 years, were the main outcome parameters. Subjects with a prolonged QT interval (430 ms or more) or prolonged QT interval dispersion (80 ms or more) were at higher risk of cardiovascular death and cardiac morbidity than subjects whose QT interval was less than 360 ms, or whose QT interval dispersion was less than 30 ms. Cardiovascular death relative risk ratios, adjusted for age, gender, myocardial infarct, angina pectoris, diabetes mellitus, arterial hypertension, smoking habits, serum cholesterol level, and heart rate were 2.9 for the QT interval (95% confidence interval 1.1-7.8) and 4.4 for QT interval dispersion (95% confidence interval 1.0-19.1). Fatal and non-fatal cardiac morbidity relative risk ratios were similar, at 2.7 (95% confidence interval 1.4-5.5) for the QT interval and 2.2 (95% confidence interval 1.1-4.0) for QT interval dispersion. Prolongation of the QT interval and QT interval dispersion independently affected the prognosis of cardiovascular mortality and cardiac fatal and non-fatal morbidity in a general population over 11 years.

  20. Numerical study on the sequential Bayesian approach for radioactive materials detection

    NASA Astrophysics Data System (ADS)

    Qingpei, Xiang; Dongfeng, Tian; Jianyu, Zhu; Fanhua, Hao; Ge, Ding; Jun, Zeng

    2013-01-01

    A new detection method, based on the sequential Bayesian approach proposed by Candy et al., offers new horizons for research on radioactive materials detection. Compared with commonly adopted detection methods grounded in classical statistical theory, the sequential Bayesian approach offers the advantage of shorter verification times when analysing spectra that contain low total counts, especially for complex radionuclide compositions. In this paper, a simulation experiment platform implementing the sequential Bayesian approach was developed. Event sequences of γ-rays consistent with the true parameters of a LaBr3(Ce) detector were generated using a Monte Carlo event-sequence generator to study the performance of the sequential Bayesian approach. The numerical experimental results are in accordance with those of Candy. Moreover, the relationship between the detection model and the event generator, represented respectively by the expected detection rate (Am) and the tested detection rate (Gm), is investigated. To achieve optimal performance for this processor, the interval of the tested detection rate as a function of the expected detection rate is also presented.
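
    A stylized version of the sequential update is easy to state: with per-interval counts that are Poisson(b) under background only and Poisson(b + s) when a source is present, the posterior odds are multiplied by the likelihood ratio after each interval. The sketch below assumes known rates and invents all numbers; it is a caricature of a Candy-style processor, not its implementation.

        import numpy as np
        from scipy.stats import poisson

        def sequential_posterior(counts, b=5.0, s=2.0, prior=0.5):
            """Sequentially updated posterior probability that a source is present,
            given per-interval counts; b = background rate, s = source rate."""
            log_odds = np.log(prior / (1 - prior))
            history = []
            for n in counts:
                log_odds += poisson.logpmf(n, b + s) - poisson.logpmf(n, b)
                history.append(1 / (1 + np.exp(-log_odds)))
            return np.array(history)

        rng = np.random.default_rng(3)
        counts = rng.poisson(7.0, 60)        # data actually contain a source (b + s = 7)
        p = sequential_posterior(counts)
        print("posterior after each of the first 10 intervals:", np.round(p[:10], 3))
        # Detection can be declared once p exceeds a chosen threshold, e.g. 0.95.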

  1. An empirical Bayesian and Buhlmann approach with non-homogenous Poisson process

    NASA Astrophysics Data System (ADS)

    Noviyanti, Lienda

    2015-12-01

    All general insurance companies in Indonesia have to adjust their current premium rates to the maximum and minimum limit rates in the new regulation established by the Financial Services Authority (Otoritas Jasa Keuangan / OJK). In this research, we estimated premium rates by means of the Bayesian and the Buhlmann approaches, using historical claim frequency and claim severity from five risk groups. We assumed a Poisson-distributed claim frequency and a Normal-distributed claim severity; in particular, we used a non-homogeneous Poisson process for estimating the parameters of the claim frequency. We found that the estimated premium rates are higher than the actual current rate. Relative to the OJK upper and lower limit rates, the estimates vary among the five risk groups: some fall within the interval and some fall outside it.
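
    For readers unfamiliar with the Buhlmann side of the comparison, here is a minimal sketch of the classical credibility premium with nonparametric variance components; the claim data are invented, and this deliberately omits the paper's non-homogeneous Poisson machinery.

        import numpy as np

        def buhlmann_premiums(X):
            """Buhlmann credibility premiums for r risk groups over n periods.
            X is an r x n array of per-period claim amounts (or frequencies)."""
            r, n = X.shape
            group_means = X.mean(axis=1)
            grand_mean = X.mean()
            v = X.var(axis=1, ddof=1).mean()            # expected process variance
            a = group_means.var(ddof=1) - v / n         # variance of hypothetical means
            a = max(a, 0.0)
            Z = n / (n + (v / a if a > 0 else np.inf))  # credibility factor
            return Z * group_means + (1 - Z) * grand_mean, Z

        # five risk groups, ten periods of historical claims (illustrative numbers)
        rng = np.random.default_rng(7)
        X = rng.poisson(rng.uniform(2, 8, size=(5, 1)), size=(5, 10)).astype(float)
        premiums, Z = buhlmann_premiums(X)
        print("credibility factor:", round(Z, 3), "premiums:", np.round(premiums, 2))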

  2. Intakes of magnesium, potassium, and calcium and the risk of stroke among men.

    PubMed

    Adebamowo, Sally N; Spiegelman, Donna; Flint, Alan J; Willett, Walter C; Rexrode, Kathryn M

    2015-10-01

    Intakes of magnesium, potassium, and calcium have been inversely associated with the incidence of hypertension, a known risk factor for stroke. However, only a few studies have examined intakes of these cations in relation to risk of stroke. The aim of this study was to investigate whether high intake of magnesium, potassium, and calcium is associated with reduced stroke risk among men. We prospectively examined the associations between intakes of magnesium, potassium, and calcium from diet and supplements, and the risk of incident stroke among 42 669 men in the Health Professionals Follow-up Study, aged 40 to 75 years and free of diagnosed cardiovascular disease and cancer at baseline in 1986. We calculated the hazard ratio of total, ischemic, and haemorrhagic strokes by quintiles of each cation intake, and of a combined dietary score of all three cations, using multivariate Cox proportional hazard models. During 24 years of follow-up, 1547 total stroke events were documented. In multivariate analyses, the relative risks and 95% confidence intervals of total stroke for men in the highest vs. lowest quintile were 0·87 (95% confidence interval, 0·74-1·02; P, trend = 0·04) for dietary magnesium, 0·89 (95% confidence interval, 0·76-1·05; P, trend = 0·10) for dietary potassium, and 0·89 (95% confidence interval, 0·75-1·04; P, trend = 0·25) for dietary calcium intake. The relative risk of total stroke for men in the highest vs. lowest quintile was 0·74 (95% confidence interval, 0·59-0·93; P, trend = 0·003) for supplemental magnesium, 0·66 (95% confidence interval, 0·50-0·86; P, trend = 0·002) for supplemental potassium, and 1·01 (95% confidence interval, 0·84-1·20; P, trend = 0·83) for supplemental calcium intake. For total intake (dietary and supplemental), the relative risk of total stroke for men in the highest vs. lowest quintile was 0·83 (95% confidence interval, 0·70-0·99; P, trend = 0·04) for magnesium, 0·88 (95% confidence interval, 0·75-4; P, trend = 6) for potassium, and 3 (95% confidence interval, 79-09; P, trend = 84) for calcium. Men in the highest quintile for a combined dietary score of all three cations had a multivariate relative risk of 0·79 (95% confidence interval, 0·67-0·92; P, trend = 0·008) for total stroke, compared with those in the lowest. A diet rich in magnesium, potassium, and calcium may contribute to reduced risk of stroke among men. Because of significant collinearity, the independent contribution of each cation is difficult to define. © 2015 World Stroke Organization.

  3. Neuraxial analgesia to increase the success rate of external cephalic version: a systematic review and meta-analysis of randomized controlled trials.

    PubMed

    Magro-Malosso, Elena Rita; Saccone, Gabriele; Di Tommaso, Mariarosaria; Mele, Michele; Berghella, Vincenzo

    2016-09-01

    External cephalic version is a medical procedure in which the fetus is externally manipulated to assume the cephalic presentation. The use of neuraxial analgesia for facilitating the version has been evaluated in several randomized clinical trials, but its potential effects are still controversial. The objective of the study was to evaluate the effectiveness of neuraxial analgesia as an intervention to increase the success rate of external cephalic version. Searches were performed in electronic databases with the use of a combination of text words related to external cephalic version and neuraxial analgesia from the inception of each database to January 2016. We included all randomized clinical trials of women, with a gestational age ≥36 weeks and breech or transverse fetal presentation, undergoing external cephalic version who were randomized to neuraxial analgesia, including spinal, epidural, or combined spinal-epidural techniques (ie, intervention group) or to a control group (either intravenous analgesia or no treatment). The primary outcome was successful external cephalic version. The summary measures were reported as relative risks or as mean differences with a 95% confidence interval. Nine randomized clinical trials (934 women) were included in this review. Women who received neuraxial analgesia had a significantly higher incidence of successful external cephalic version (58.4% vs 43.1%; relative risk, 1.44, 95% confidence interval, 1.27-1.64), cephalic presentation in labor (55.1% vs 40.2%; relative risk, 1.37, 95% confidence interval, 1.08-1.73), and vaginal delivery (54.0% vs 44.6%; relative risk, 1.21, 95% confidence interval, 1.04-1.41) compared with those who did not. Women who were randomized to the intervention group also had a significantly lower incidence of cesarean delivery (46.0% vs 55.3%; relative risk, 0.83, 95% confidence interval, 0.71-0.97), maternal discomfort (1.2% vs 9.3%; relative risk, 0.12, 95% confidence interval, 0.02-0.99), and lower pain, assessed by the visual analog scale pain score (mean difference, -4.52 points, 95% confidence interval, -5.35 to -3.69), compared with the control group. The incidences of emergency cesarean delivery (1.6% vs 2.5%; relative risk, 0.63, 95% confidence interval, 0.24-1.70), transient bradycardia (11.8% vs 8.3%; relative risk, 1.42, 95% confidence interval, 0.72-2.80), nonreassuring fetal testing, excluding transient bradycardia, after external cephalic version (6.9% vs 7.4%; relative risk, 0.93, 95% confidence interval, 0.53-1.64), and abruptio placentae (0.4% vs 0.4%; relative risk, 1.01, 95% confidence interval, 0.06-16.1) were similar. Administration of neuraxial analgesia significantly increases the success rate of external cephalic version among women with malpresentation at term or late preterm, which then significantly increases the incidence of vaginal delivery. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Colony-level assessment of Brucella and Leptospira in the Guadalupe fur seal, Isla Guadalupe, Mexico.

    PubMed

    Ziehl-Quirós, E Carolina; García-Aguilar, María C; Mellink, Eric

    2017-01-24

    The relatively small population size and restricted distribution of the Guadalupe fur seal Arctocephalus townsendi could make it highly vulnerable to infectious diseases. We performed a colony-level assessment in this species of the prevalence and presence of Brucella spp. and Leptospira spp., pathogenic bacteria that have been reported in several pinniped species worldwide. Forty-six serum samples were collected in 2014 from pups at Isla Guadalupe, the only place where the species effectively reproduces. Samples were tested for Brucella using 3 consecutive serological tests, and for Leptospira using the microscopic agglutination test. For each bacterium, a Bayesian approach was used to estimate the prevalence of exposure, and an epidemiological model was used to test the null hypothesis that the bacterium was present in the colony. No serum sample tested positive for Brucella, and the statistical analyses concluded that the colony was bacterium-free with a 96.3% confidence level. However, a Brucella surveillance program would be highly advisable. Twelve samples were positive (titers 1:50) to 1 or more serovars of Leptospira. The prevalence was calculated at 27.1% (95% credible interval: 15.6-40.3%), and the posterior analyses indicated that the colony was not Leptospira-free with a 100% confidence level. Serovars Icterohaemorrhagiae, Canicola, and Bratislava were detected, but only further research can unveil whether they affect the fur seal population.
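
    The prevalence machinery here is a textbook beta-binomial update, and the reported Leptospira interval is consistent with a uniform Beta(1, 1) prior. A sketch follows; the 5% design prevalence and the prior are our assumptions, and the paper's 96.3% freedom-from-disease confidence comes from an epidemiological model that also accounts for test performance, which this omits.

        import numpy as np
        from scipy.stats import beta

        n = 46
        # Brucella: 0 positives out of 46, uniform Beta(1, 1) prior
        a0, b0 = 1 + 0, 1 + n - 0
        print(np.round(beta.ppf([0.025, 0.975], a0, b0), 4))  # upper bound ~7.5%
        print(round(beta.cdf(0.05, a0, b0), 3))               # P(prevalence < 5%) ~ 0.91

        # Leptospira: 12 positives out of 46
        a1, b1 = 1 + 12, 1 + n - 12
        print(np.round(beta.ppf([0.025, 0.975], a1, b1), 3))  # ~(0.156, 0.403), cf. 15.6-40.3%
        print(round(a1 / (a1 + b1), 3))                       # posterior mean ~0.271, cf. 27.1%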

  5. Add-On Antihypertensive Medications to Angiotensin-Aldosterone System Blockers in Diabetes: A Comparative Effectiveness Study.

    PubMed

    Schroeder, Emily B; Chonchol, Michel; Shetterly, Susan M; Powers, J David; Adams, John L; Schmittdiel, Julie A; Nichols, Gregory A; O'Connor, Patrick J; Steiner, John F

    2018-05-07

    In individuals with diabetes, the comparative effectiveness of add-on antihypertensive medications added to an angiotensin-converting enzyme inhibitor or angiotensin II receptor blocker on the risk of significant kidney events is unknown. We used an observational, multicenter cohort of 21,897 individuals with diabetes to compare individuals who added β-blockers, dihydropyridine calcium channel blockers, loop diuretics, or thiazide diuretics to angiotensin-converting enzyme inhibitors or angiotensin II receptor blockers. We examined the hazard of significant kidney events, cardiovascular events, and death using Cox proportional hazard models with propensity score weighting. The composite significant kidney event end point was defined as the first occurrence of a ≥30% decline in eGFR to an eGFR <60 ml/min per 1.73 m², initiation of dialysis, or kidney transplant. The composite cardiovascular event end point was defined as the first occurrence of hospitalization for acute myocardial infarction, acute coronary syndrome, stroke, or congestive heart failure; coronary artery bypass grafting; or percutaneous coronary intervention, and it was only examined in those free of cardiovascular disease at baseline. Over a maximum of 5 years, there were 4707 significant kidney events, 1498 deaths, and 818 cardiovascular events. Compared with thiazide diuretics, hazard ratios for significant kidney events for β-blockers, calcium channel blockers, and loop diuretics were 0.81 (95% confidence interval, 0.74 to 0.89), 0.67 (95% confidence interval, 0.58 to 0.78), and 1.19 (95% confidence interval, 1.00 to 1.41), respectively. Compared with thiazide diuretics, hazard ratios of mortality for β-blockers, calcium channel blockers, and loop diuretics were 1.19 (95% confidence interval, 0.97 to 1.44), 0.73 (95% confidence interval, 0.52 to 1.03), and 1.67 (95% confidence interval, 1.31 to 2.13), respectively. Compared with thiazide diuretics, hazard ratios of cardiovascular events for β-blockers, calcium channel blockers, and loop diuretics were 1.65 (95% confidence interval, 1.39 to 1.96), 1.05 (95% confidence interval, 0.80 to 1.39), and 1.55 (95% confidence interval, 1.05 to 2.27), respectively. Compared with thiazide diuretics, calcium channel blockers were associated with a lower risk of significant kidney events and a similar risk of cardiovascular events. Copyright © 2018 by the American Society of Nephrology.

  6. Bayesian B-spline mapping for dynamic quantitative traits.

    PubMed

    Xing, Jun; Li, Jiahan; Yang, Runqing; Zhou, Xiaojing; Xu, Shizhong

    2012-04-01

    Owing to their ability and flexibility to describe individual gene expression at different time points, random regression (RR) analyses have become a popular procedure for the genetic analysis of dynamic traits whose phenotypes are collected over time. Specifically, when modelling the dynamic patterns of gene expression in the RR framework, B-splines have proved successful as an alternative to orthogonal polynomials. In the so-called Bayesian B-spline quantitative trait locus (QTL) mapping, B-splines are used to characterize the patterns of QTL effects and individual-specific time-dependent environmental errors over time, and the Bayesian shrinkage estimation method is employed to estimate model parameters. Extensive simulations demonstrate that (1) in terms of statistical power, Bayesian B-spline mapping outperforms interval mapping based on maximum likelihood; (2) for a simulated dataset with complicated growth curves generated by B-splines, Legendre polynomial-based Bayesian mapping is not capable of identifying the designed QTLs accurately, even when higher-order Legendre polynomials are considered; and (3) for a simulated dataset based on Legendre polynomials, Bayesian B-spline mapping can find the same QTLs as those identified by the Legendre polynomial analysis. All simulation results support the necessity and flexibility of B-splines in Bayesian mapping of dynamic traits. The proposed method is also applied to a real dataset, where QTLs controlling the growth trajectory of stem diameters in Populus are located.

  7. Maternal steroid therapy for fetuses with second-degree immune-mediated congenital atrioventricular block: a systematic review and meta-analysis.

    PubMed

    Ciardulli, Andrea; D'Antonio, Francesco; Magro-Malosso, Elena R; Manzoli, Lamberto; Anisman, Paul; Saccone, Gabriele; Berghella, Vincenzo

    2018-03-07

    To explore the effect of maternal fluorinated steroid therapy on fetuses affected by second-degree immune-mediated congenital atrioventricular block. Studies reporting the outcome of fetuses with second-degree immune-mediated congenital atrioventricular block diagnosed on prenatal ultrasound and treated with fluorinated steroids compared with those not treated were included. The primary outcome was the overall progression of congenital atrioventricular block to either continuous or intermittent third-degree congenital atrioventricular block at birth. Meta-analyses of proportions using a random-effects model and meta-analyses using individual-data random-effects logistic regression were performed. Five studies (71 fetuses) were included. The progression rate to third-degree congenital atrioventricular block at birth was 52% (95% confidence interval 23-79) in fetuses treated with steroids and 73% (95% confidence interval 39-94) in fetuses not receiving steroid therapy. The overall rate of regression to either first-degree, intermittent first-/second-degree or sinus rhythm in fetuses treated with steroids was 25% (95% confidence interval 12-41) compared with 23% (95% confidence interval 8-44) in those not treated. Stable (constant) second-degree congenital atrioventricular block at birth was present in 11% (95% confidence interval 2-27) of cases in the treated group and in none of the newborns in the untreated group, whereas complete regression to sinus rhythm occurred in 21% (95% confidence interval 6-42) of fetuses receiving steroids vs. 9% (95% confidence interval 0-41) of those untreated. There is still limited evidence as to the benefit of administering fluorinated steroids in terms of affecting the outcome of fetuses with second-degree immune-mediated congenital atrioventricular block. © 2018 Nordic Federation of Societies of Obstetrics and Gynecology.

  8. Active management of the third stage of labor with and without controlled cord traction: a systematic review and meta-analysis of randomized controlled trials.

    PubMed

    Du, Yongming; Ye, Man; Zheng, Feiyun

    2014-07-01

    To determine the specific effect of controlled cord traction in the third stage of labor in the prevention of postpartum hemorrhage. We searched PubMed, Scopus and Web of Science (inception to 30 October 2013). Randomized controlled trials comparing controlled cord traction with hands-off management in the third stage of labor were included. Five randomized controlled trials involving a total of 30 532 participants were eligible. No significant difference was found between the controlled cord traction and hands-off management groups with respect to the incidence of severe postpartum hemorrhage (relative risk 0.91, 95% confidence interval 0.77-1.08), need for blood transfusion (relative risk 0.96, 95% confidence interval 0.69-1.33) or therapeutic uterotonics (relative risk 0.94, 95% confidence interval 0.88-1.01). However, controlled cord traction reduced the incidence of postpartum hemorrhage in general (relative risk 0.93, 95% confidence interval 0.87-0.99; number-needed-to-treat 111, 95% confidence interval 61-666), as well as manual removal of the placenta (relative risk 0.70, 95% confidence interval 0.58-0.84) and the duration of the third stage of labor (mean difference -3.20, 95% confidence interval -3.21 to -3.19). Controlled cord traction appears to reduce the risk of any postpartum hemorrhage in a general sense, as well as manual removal of the placenta and the duration of the third stage of labor. However, the reduction in the occurrence of severe postpartum hemorrhage, need for additional uterotonics and blood transfusion is not statistically significant. © 2014 Nordic Federation of Societies of Obstetrics and Gynecology.

  9. Loss of DPC4/SMAD4 expression in primary gastrointestinal neuroendocrine tumors is associated with cancer-related death after resection.

    PubMed

    Roland, Christina L; Starker, Lee F; Kang, Y; Chatterjee, Deyali; Estrella, Jeannelyn; Rashid, Asif; Katz, Matthew H; Aloia, Thomas A; Lee, Jeffrey E; Dasari, Arvind; Yao, James C; Fleming, Jason B

    2017-03-01

    Gastrointestinal neuroendocrine tumors have frequent loss of DPC4/SMAD4 expression, a known tumor suppressor. The impact of SMAD4 loss on gastrointestinal neuroendocrine tumor aggressiveness or cancer-related patient outcomes is not defined. We examined the expression of SMAD4 in resected gastrointestinal neuroendocrine tumors and its impact on oncologic outcomes. Patients who underwent complete curative operative resection of gastrointestinal neuroendocrine tumors were identified retrospectively (n = 38). Immunohistochemical staining for SMAD4 expression was scored by a blinded pathologist and correlated with clinicopathologic features and oncologic outcomes. Twenty-nine percent of the gastrointestinal neuroendocrine tumors were SMAD4-negative and 71% SMAD4-positive. Median overall survival was 155 months (95% confidence interval, 102-208 months). Loss of SMAD4 was associated with both decreased median disease-free survival (28 months; 95% confidence interval, 16-40 months) compared with 223 months (95% confidence interval, 3-443 months) for SMAD4-positive patients (P = .03) and decreased median disease-specific survival (SMAD4-negative: 137 [95% confidence interval, 81-194] months versus SMAD4-positive: 204 [95% confidence interval, 143-264] months; P = .04). This translated into a decrease in median overall survival (SMAD4-negative: 125 [95% confidence interval, 51-214] months versus SMAD4-positive: 185 [95% confidence interval, 138-232] months; P = .02). Consistent with the known biology of the DPC4/SMAD4 gene, an absence of its protein expression in primary gastrointestinal neuroendocrine tumors was negatively associated with outcomes after curative operative resection. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Calculation of the confidence intervals for transformation parameters in the registration of medical images

    PubMed Central

    Bansal, Ravi; Staib, Lawrence H.; Laine, Andrew F.; Xu, Dongrong; Liu, Jun; Posecion, Lainie F.; Peterson, Bradley S.

    2010-01-01

    Images from different individuals typically cannot be registered precisely because anatomical features within the images differ across the people imaged and because the current methods for image registration have inherent technological limitations that interfere with perfect registration. Quantifying the inevitable error in image registration is therefore of crucial importance in assessing the effects that image misregistration may have on subsequent analyses in an imaging study. We have developed a mathematical framework for quantifying errors in registration by computing the confidence intervals of the estimated parameters (3 translations, 3 rotations, and 1 global scale) for the similarity transformation. The presence of noise in images and the variability in anatomy across individuals ensures that estimated registration parameters are always random variables. We assume a functional relation among intensities across voxels in the images, and we use the theory of nonlinear, least-squares estimation to show that the parameters are multivariate Gaussian distributed. We then use the covariance matrix of this distribution to compute the confidence intervals of the transformation parameters. These confidence intervals provide a quantitative assessment of the registration error across the images. Because transformation parameters are nonlinearly related to the coordinates of landmark points in the brain, we subsequently show that the coordinates of those landmark points are also multivariate Gaussian distributed. Using these distributions, we then compute the confidence intervals of the coordinates for landmark points in the image. Each of these confidence intervals in turn provides a quantitative assessment of the registration error at a particular landmark point. Because our method is computationally intensive, however, its current implementation is limited to assessing the error of the parameters in the similarity transformation across images. We assessed the performance of our method in computing the error in estimated similarity parameters by applying it to a real-world dataset. Our results showed that the size of the confidence intervals computed using our method decreased (i.e., our confidence in the registration of images from different individuals increased) for increasing amounts of blur in the images. Moreover, the size of the confidence intervals increased for increasing amounts of noise, misregistration, and differing anatomy. Thus, our method precisely quantified confidence in the registration of images that contain varying amounts of misregistration and varying anatomy across individuals. PMID:19138877
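
    The core computation, estimating parameters by nonlinear least squares and reading confidence intervals off the Gauss-Newton covariance, can be sketched in a few lines. The 1-D "registration" below (a shift and a scale aligning two curves) is a deliberately tiny stand-in for the 7-parameter similarity transform; all names and data are illustrative.

        import numpy as np
        from scipy.optimize import least_squares
        from scipy.stats import norm

        x = np.linspace(-3, 3, 200)
        rng = np.random.default_rng(5)
        g = np.exp(-((x - 0.4) / 1.1) ** 2) + rng.normal(0, 0.02, x.size)  # "moving image"

        def residuals(theta):
            s, c = theta                      # shift and scale to be estimated
            return np.exp(-((x - s) / c) ** 2) - g

        fit = least_squares(residuals, x0=[0.0, 1.0])
        dof = x.size - fit.x.size
        sigma2 = 2 * fit.cost / dof                          # cost is 0.5 * sum of squares
        cov = sigma2 * np.linalg.inv(fit.jac.T @ fit.jac)    # Gauss-Newton covariance
        se = np.sqrt(np.diag(cov))
        z = norm.ppf(0.975)
        for name, est, e in zip(["shift", "scale"], fit.x, se):
            print(f"{name}: {est:.3f}  95% CI ({est - z * e:.3f}, {est + z * e:.3f})")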

  11. A Bayesian elicitation of veterinary beliefs regarding systemic dry cow therapy: variation and importance for clinical trial design.

    PubMed

    Higgins, H M; Dryden, I L; Green, M J

    2012-09-15

    The two key aims of this research were: (i) to conduct a probabilistic elicitation to quantify the variation in veterinarians' beliefs regarding the efficacy of systemic antibiotics when used as an adjunct to intra-mammary dry cow therapy and (ii) to investigate (in a Bayesian statistical framework) the strength of future research evidence required (in theory) to change the beliefs of practising veterinary surgeons regarding the efficacy of systemic antibiotics, given their current clinical beliefs. The beliefs of 24 veterinarians in 5 practices in England were quantified as probability density functions. Classic multidimensional scaling revealed major variations in beliefs both within and between veterinary practices which included: confident optimism, confident pessimism and considerable uncertainty. Of the 9 veterinarians interviewed holding further cattle qualifications, 6 shared a confidently pessimistic belief in the efficacy of systemic therapy and whilst 2 were more optimistic, they were also more uncertain. A Bayesian model based on a synthetic dataset from a randomised clinical trial (showing no benefit with systemic therapy) predicted how each of the 24 veterinarians' prior beliefs would alter as the size of the clinical trial increased, assuming that practitioners would update their beliefs rationally in accordance with Bayes' theorem. The study demonstrated the usefulness of probabilistic elicitation for evaluating the diversity and strength of practitioners' beliefs. The major variation in beliefs observed raises interest in the veterinary profession's approach to prescribing essential medicines. Results illustrate the importance of eliciting prior beliefs when designing clinical trials in order to increase the chance that trial data are of sufficient strength to alter the clinical beliefs of practitioners and do not merely serve to satisfy researchers. Copyright © 2012 Elsevier B.V. All rights reserved.
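
    The trial-design point, that a confident prior needs a proportionally large trial to be overturned, can be caricatured with a conjugate beta-binomial update. Everything below (the prior's 100 pseudo-observations, the 30% prior mean, the 50% trial success rate) is an invented assumption, not the elicited beliefs from the study.

        from scipy.stats import beta

        # A confidently pessimistic prior: success probability believed to be ~0.30,
        # held with the weight of a 100-animal "pseudo-trial".
        a0, b0 = 30, 70

        for n in [25, 100, 400, 1600]:
            k = n // 2                      # synthetic trial observing a 50% success rate
            a, b = a0 + k, b0 + n - k       # conjugate Bayesian update
            lo, hi = beta.ppf([0.025, 0.975], a, b)
            print(f"n={n:5d}: posterior mean {a / (a + b):.3f}, "
                  f"95% CrI ({lo:.3f}, {hi:.3f})")
        # Only once the trial dwarfs the prior's 100 pseudo-observations does the
        # posterior concentrate near the trial's 50% rather than the prior's 30%.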

  12. Reliability of the identification of the systemic inflammatory response syndrome in critically ill infants and children.

    PubMed

    Juskewitch, Justin E; Prasad, Swati; Salas, Carlos F Santillan; Huskins, W Charles

    2012-01-01

    To assess interobserver reliability of the identification of episodes of the systemic inflammatory response syndrome in critically ill hospitalized infants and children. Retrospective, cross-sectional study of the application of the 2005 consensus definition of systemic inflammatory response syndrome in infants and children by two independent, trained reviewers using information in the electronic medical record. Eighteen-bed pediatric multidisciplinary medical/surgical pediatric intensive care unit. A randomly selected sample of children admitted consecutively to the pediatric intensive care unit between May 1 and September 30, 2009. None. Sixty infants and children were selected from a total of 343 admitted patients. Their median age was 3.9 yrs (interquartile range, 1.5-12.7), 57% were female, and 68% were Caucasian. Nineteen (32%) children were identified by both reviewers as having an episode of systemic inflammatory response syndrome (88% agreement, 95% confidence interval 78-94; κ = 0.75, 95% confidence interval 0.59-0.92). Among these 19 children, agreement between the reviewers for individual systemic inflammatory response syndrome criteria was: temperature (84%, 95% confidence interval 60-97); white blood cell count (89%, 95% confidence interval 67-99); respiratory rate (84%, 95% confidence interval 60-97); and heart rate (68%, 95% confidence interval 33-87). Episodes of systemic inflammatory response syndrome in critically ill infants and children can be identified reproducibly using the consensus definition.

  13. What was different about exposures reported by male Australian Gulf War veterans for the 1991 Persian Gulf War, compared with exposures reported for other deployments?

    PubMed

    Glass, Deborah C; Sim, Malcolm R; Kelsall, Helen L; Ikin, Jill F; McKenzie, Dean; Forbes, Andrew; Ittak, Peter

    2006-07-01

    This study identified chemical and environmental exposures specifically associated with the 1991 Persian Gulf War. Exposures were self-reported in a postal questionnaire, in the period of 2000-2002, by 1,424 Australian male Persian Gulf War veterans in relation to their 1991 Persian Gulf War deployment and by 625 Persian Gulf War veterans and 514 members of a military comparison group in relation to other active deployments. Six of 28 investigated exposures were experienced more frequently during the Persian Gulf War than during other deployments; these were exposure to smoke (odds ratio [OR], 4.4; 95% confidence interval, 3.0-6.6), exposure to dust (OR, 3.7; 95% confidence interval, 2.6-5.3), exposure to chemical warfare agents (OR, 3.9; 95% confidence interval, 2.1-7.9), use of respiratory protective equipment (OR, 13.6; 95% confidence interval, 7.6-26.8), use of nuclear, chemical, and biological protective suits (OR, 8.9; 95% confidence interval, 5.4-15.4), and entering/inspecting enemy equipment (OR, 3.1; 95% confidence interval, 2.1-4.8). Other chemical and environmental exposures were not specific to the Persian Gulf War deployment but were also reported in relation to other deployments. The number of exposures reported was related to service type and number of deployments but not to age or rank.

  14. Statin therapy in lower limb peripheral arterial disease: Systematic review and meta-analysis.

    PubMed

    Antoniou, George A; Fisher, Robert K; Georgiadis, George S; Antoniou, Stavros A; Torella, Francesco

    2014-11-01

    To investigate and analyse the existing evidence supporting statin therapy in patients with lower limb atherosclerotic arterial disease. A systematic search of electronic information sources was undertaken to identify studies comparing cardiovascular outcomes in patients with lower limb peripheral arterial disease treated with a statin and those not receiving a statin. Estimates were combined applying fixed- or random-effects models. Twelve observational cohort studies and two randomised trials reporting 19,368 patients were selected. Statin therapy was associated with reduced all-cause mortality (odds ratio 0.60, 95% confidence interval 0.46-0.78) and incidence of stroke (odds ratio 0.77, 95% confidence interval 0.67-0.89). A trend towards improved cardiovascular mortality (odds ratio 0.62, 95% confidence interval 0.35-1.11), myocardial infarction (odds ratio 0.62, 95% confidence interval 0.38-1.01), and the composite of death/myocardial infarction/stroke (odds ratio 0.91, 95% confidence interval 0.81-1.03) was identified. Meta-analyses of studies performing adjustments showed decreased all-cause mortality in statin users (hazard ratio 0.77, 95% confidence interval 0.68-0.86). Evidence supporting statins' protective role in patients with lower limb peripheral arterial disease is insufficient. Statin therapy seems to be effective in reducing all-cause mortality and the incidence of cerebrovascular events in patients diagnosed with peripheral arterial disease. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. On the appropriateness of applying chi-square distribution based confidence intervals to spectral estimates of helicopter flyover data

    NASA Technical Reports Server (NTRS)

    Rutledge, Charles K.

    1988-01-01

    The validity of applying chi-square based confidence intervals to far-field acoustic flyover spectral estimates was investigated. Simulated data, using a Kendall series and experimental acoustic data from the NASA/McDonnell Douglas 500E acoustics test, were analyzed. Statistical significance tests to determine the equality of distributions of the simulated and experimental data relative to theoretical chi-square distributions were performed. Bias and uncertainty errors associated with the spectral estimates were easily identified from the data sets. A model relating the uncertainty and bias errors to the estimates was developed, which aided in determining the appropriateness of the chi-square distribution based confidence intervals. Such confidence intervals were appropriate for nontonally associated frequencies of the experimental data but were inappropriate for tonally associated estimate distributions. The appropriateness at the tonally associated frequencies was indicated by the presence of bias error and nonconformity of the distributions to the theoretical chi-square distribution. A technique for determining appropriate confidence intervals at the tonally associated frequencies was suggested.
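
    The chi-square interval in question scales an averaged spectral estimate by quantiles of a chi-square with the estimate's equivalent degrees of freedom. A sketch using Welch averaging of non-overlapping segments follows (signal and settings invented); as the paper notes, the interval is only appropriate away from tonal components.

        import numpy as np
        from scipy.signal import welch
        from scipy.stats import chi2

        fs = 1024.0
        rng = np.random.default_rng(11)
        t = np.arange(0, 8, 1 / fs)
        x = np.sin(2 * np.pi * 120 * t) + rng.normal(0, 1, t.size)  # tone + noise

        nperseg = 512
        f, pxx = welch(x, fs=fs, nperseg=nperseg, noverlap=0)
        n_seg = x.size // nperseg
        dof = 2 * n_seg              # ~2 dof per averaged non-overlapping segment

        alpha = 0.05
        lower = dof * pxx / chi2.ppf(1 - alpha / 2, dof)
        upper = dof * pxx / chi2.ppf(alpha / 2, dof)
        # These limits are trustworthy where the spectrum is broadband; near the
        # 120 Hz tone the chi-square assumption (zero-mean Gaussian ordinates)
        # fails, which is the paper's point about tonally associated frequencies.
        print(f"dof = {dof}; CI width factor: {upper[10] / lower[10]:.2f}")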

  16. A flexible Bayesian assessment for the expected impact of data on prediction confidence for optimal sampling designs

    NASA Astrophysics Data System (ADS)

    Leube, Philipp; Geiges, Andreas; Nowak, Wolfgang

    2010-05-01

    Incorporating hydrogeological data, such as head and tracer data, into stochastic models of subsurface flow and transport helps to reduce prediction uncertainty. Considering the limited financial resources available for a data acquisition campaign, information needs towards the prediction goal should be satisfied in an efficient and task-specific manner. To find the best among a set of design candidates, an objective function is commonly evaluated, which measures the expected impact of data on prediction confidence prior to their collection. An appropriate approach to this task should be stochastically rigorous, master non-linear dependencies between data, parameters and model predictions, and allow for a wide variety of different data types. Existing methods fail to fulfill all these requirements simultaneously. For this reason, we introduce a new method, denoted as CLUE (Cross-bred Likelihood Uncertainty Estimator), that derives the essential distributions and measures of data utility within a generalized, flexible and accurate framework. The method makes use of Bayesian GLUE (Generalized Likelihood Uncertainty Estimator) and extends it to an optimal design method by marginalizing over the yet unknown data values. Operating in a purely Bayesian Monte-Carlo framework, CLUE is a strictly formal information processing scheme free of linearizations. It provides full flexibility with respect to the type of measurements (linear, non-linear, direct, indirect) and accounts for almost arbitrary sources of uncertainty (e.g. heterogeneity, geostatistical assumptions, boundary conditions, model concepts) via stochastic simulation and Bayesian model averaging. This helps to minimize the strength and impact of subjective prior assumptions that would be hard to defend prior to data collection. Our study focuses on evaluating two different uncertainty measures: (i) the expected conditional variance and (ii) the expected relative entropy of a given prediction goal. The applicability and advantages are shown in a synthetic example in which we consider a contaminant source posing a threat to a drinking water well in an aquifer. Furthermore, we assume uncertainty in geostatistical parameters, boundary conditions and the hydraulic gradient. The two measures evaluate the sensitivity to sampling locations of (1) general prediction confidence and (2) the exceedance probability of a legal regulatory threshold value.

  17. A comparison of statistical methods for evaluating matching performance of a biometric identification device: a preliminary report

    NASA Astrophysics Data System (ADS)

    Schuckers, Michael E.; Hawley, Anne; Livingstone, Katie; Mramba, Nona

    2004-08-01

    Confidence intervals are an important way to assess and estimate a parameter. In the case of biometric identification devices, several approaches to confidence intervals for an error rate have been proposed. Here we evaluate six of these methods. To complete this evaluation, we simulate data for a wide variety of parameter values; these data are simulated from a correlated binary distribution. We then determine how well these methods do what they say they do: capture the parameter inside the confidence interval. In addition, the average widths of the various confidence intervals are recorded for each set of parameters. The complete results of this simulation are presented graphically for easy comparison. We conclude by making a recommendation regarding which method performs best.
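
    The simulation design described, correlated binary data over a grid of parameters with coverage tallied against the nominal level, looks like the sketch below, which compares just two candidate intervals (Wald and Clopper-Pearson) under a beta-binomial model; the parameters and names are illustrative, not the authors' six methods.

        import numpy as np
        from scipy.stats import norm, beta

        def coverage(p=0.05, rho=0.1, n_subj=50, m=10, n_sim=2000, seed=0):
            """Empirical coverage of Wald and Clopper-Pearson intervals for an
            error rate when each subject contributes m correlated attempts."""
            rng = np.random.default_rng(seed)
            a, b = p * (1 - rho) / rho, (1 - p) * (1 - rho) / rho
            z = norm.ppf(0.975)
            n = n_subj * m
            hits = {"wald": 0, "clopper-pearson": 0}
            for _ in range(n_sim):
                k = rng.binomial(m, rng.beta(a, b, n_subj)).sum()
                ph = k / n
                se = np.sqrt(ph * (1 - ph) / n)
                hits["wald"] += ph - z * se <= p <= ph + z * se
                lo = beta.ppf(0.025, k, n - k + 1) if k > 0 else 0.0
                hi = beta.ppf(0.975, k + 1, n - k) if k < n else 1.0
                hits["clopper-pearson"] += lo <= p <= hi
            return {name: h / n_sim for name, h in hits.items()}

        print(coverage())   # both fall short of 0.95 because attempts are correlated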

  18. Factors Associated With Bites to a Child From a Dog Living in the Same Home: A Bi-National Comparison.

    PubMed

    Messam, Locksley L McV; Kass, Philip H; Chomel, Bruno B; Hart, Lynette A

    2018-01-01

    We conducted a veterinary clinic-based retrospective cohort study aimed at identifying child-, dog-, and home-environment factors associated with dog bites to children aged 5-15 years living in the same home as a dog in Kingston, Jamaica (n = 236) and San Francisco, USA (n = 61). Secondarily, we wished to compare these factors to risk factors for dog bites to the general public. Participant information was collected via interviewer-administered questionnaire using proxy respondents. Data were analyzed using log-binomial regression to estimate relative risks and associated 95% confidence intervals (CIs) for each exposure-dog bite relationship. Exploiting the correspondence between X% confidence intervals and X% Bayesian probability intervals obtained using a uniform prior distribution, for each exposure we calculated the probability that the true (population) RR was ≥1.25 (for positive associations) or ≤0.8 (for negative associations). Boys and younger children were at higher risk for bites than girls and older children, respectively. Dogs living in a home with no yard space were at an elevated risk (RR = 2.97; 95% CI: 1.06-8.33) of biting a child living in the same home, compared to dogs that had yard space. Dogs routinely allowed inside for some portion of the day (RR = 3.00; 95% CI: 0.94-9.62) and dogs routinely allowed to sleep in a family member's bedroom (RR = 2.82; 95% CI: 1.17-6.81) were also more likely to bite a child living in the home than those that were not. In San Francisco, but less so in Kingston, bites were inversely associated with the number of children in the home. In Kingston, but not in San Francisco, smaller breeds and dogs obtained for companionship were at higher risk for biting than larger breeds and dogs obtained for protection, respectively. Overall, for most exposures, the observed associations were consistent with population RRs of practical importance (i.e., RRs ≥1.25 or ≤0.8). Finally, we found substantial consistency between risk factors for bites to children and previously reported risk factors for general bites.
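
    The correspondence the authors exploit is that, under a flat prior and a normal likelihood on the log scale, a reported 95% CI doubles as a 95% posterior interval, so tail probabilities such as P(RR ≥ 1.25) follow directly. A sketch, using one of the abstract's estimates as input:

        import numpy as np
        from scipy.stats import norm

        def prob_rr_at_least(rr, lo, hi, threshold=1.25):
            """Probability that the population RR exceeds `threshold`, treating the
            reported 95% CI as a normal posterior for log(RR) under a flat prior."""
            mu = np.log(rr)
            se = (np.log(hi) - np.log(lo)) / (2 * norm.ppf(0.975))
            return 1 - norm.cdf(np.log(threshold), mu, se)

        # e.g. the no-yard-space estimate from the abstract: RR 2.97 (1.06-8.33)
        print(round(prob_rr_at_least(2.97, 1.06, 8.33), 3))   # P(RR >= 1.25) ~ 0.95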

  19. Inferring late-Holocene climate in the Ecuadorian Andes using a chironomid-based temperature inference model

    NASA Astrophysics Data System (ADS)

    Matthews-Bird, Frazer; Brooks, Stephen J.; Holden, Philip B.; Montoya, Encarni; Gosling, William D.

    2016-06-01

    Presented here is the first chironomid calibration data set for tropical South America. Surface sediments were collected from 59 lakes across Bolivia (15 lakes), Peru (32 lakes), and Ecuador (12 lakes) between 2004 and 2013, over an altitudinal gradient from 150 to 4655 m above sea level (a.s.l.) and between 0-17° S and 64-78° W. The study sites cover a mean annual temperature (MAT) gradient of 25 °C. In total, 55 chironomid taxa were identified in the 59 calibration data set lakes. When used as a single explanatory variable, MAT explains 12.9 % of the variance (λ1/λ2 = 1.431). Two inference models were developed, using weighted averaging (WA) and Bayesian methods. The best-performing model using conventional statistical methods was a WA (inverse) model (R2jack = 0.890; RMSEPjack = 2.404 °C, where RMSEP is the root mean squared error of prediction; mean biasjack = -0.017 °C; max biasjack = 4.665 °C). The Bayesian method produced a model with R2jack = 0.909, RMSEPjack = 2.373 °C, mean biasjack = 0.598 °C, and max biasjack = 3.158 °C. Both models were used to infer past temperatures from a ca. 3000-year record from the tropical Andes of Ecuador, Laguna Pindo. Inferred temperatures fluctuated around modern-day conditions but showed significant departures at certain intervals (ca. 1600 cal yr BP; ca. 3000-2500 cal yr BP). Both methods (WA and Bayesian) showed similar patterns of temperature variability, but the magnitude of the fluctuations differed. In general the WA method was more variable and often underestimated Holocene temperatures (by ca. -7 ± 2.5 °C relative to the modern period). The Bayesian method provided temperature anomaly estimates for cool periods that lay within the expected range of the Holocene (ca. -3 ± 3.4 °C). The error associated with both reconstructions is consistent with a constant temperature of 20 °C for the past 3000 years. We would caution, however, against over-interpretation at this stage. The reconstruction can currently only be deemed qualitative and requires more research before quantitative estimates can be generated with confidence. Increasing the number, and spread, of lakes in the calibration data set would enable the detection of smaller climate signals.
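
    For readers unfamiliar with weighted averaging, the core of a WA transfer function fits in a few lines: taxon optima are abundance-weighted means of the environmental variable over the calibration lakes, and inference is the abundance-weighted mean of the optima of the taxa present in a sample. The sketch below uses synthetic data and omits deshrinking and cross-validation, so it is a schematic of the approach rather than the published model.

    ```python
    # Schematic weighted-averaging (WA) transfer function on synthetic data.
    import numpy as np

    rng = np.random.default_rng(2)

    n_lakes, n_taxa = 59, 55
    mat = rng.uniform(0.0, 25.0, n_lakes)             # mean annual temperature
    abund = rng.dirichlet(np.ones(n_taxa), n_lakes)   # taxon relative abundances

    # Taxon optima: abundance-weighted mean temperature across calibration lakes
    optima = (abund * mat[:, None]).sum(axis=0) / abund.sum(axis=0)

    def wa_infer(sample_abund):
        """Infer temperature for one assemblage by weighted averaging."""
        return np.sum(sample_abund * optima) / np.sum(sample_abund)

    print(wa_infer(abund[0]), "vs. observed", mat[0])
    ```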

  20. Gravity dependence of the effect of optokinetic stimulation on the subjective visual vertical.

    PubMed

    Ward, Bryan K; Bockisch, Christopher J; Caramia, Nicoletta; Bertolini, Giovanni; Tarnutzer, Alexander Andrea

    2017-05-01

    Accurate and precise estimates of direction of gravity are essential for spatial orientation. According to Bayesian theory, multisensory vestibular, visual, and proprioceptive input is centrally integrated in a weighted fashion based on the reliability of the component sensory signals. For otolithic input, a decreasing signal-to-noise ratio was demonstrated with increasing roll angle. We hypothesized that the weights of vestibular (otolithic) and extravestibular (visual/proprioceptive) sensors are roll-angle dependent and predicted an increased weight of extravestibular cues with increasing roll angle, potentially following the Bayesian hypothesis. To probe this concept, the subjective visual vertical (SVV) was assessed in different roll positions (≤ ± 120°, steps = 30°, n = 10) with/without presenting an optokinetic stimulus (velocity = ± 60°/s). The optokinetic stimulus biased the SVV toward the direction of stimulus rotation for roll angles ≥ ± 30° (P < 0.005). Offsets grew from 3.9 ± 1.8° (upright) to 22.1 ± 11.8° (±120° roll tilt, P < 0.001). Trial-to-trial variability increased with roll angle, with a further, nonsignificant increase under optokinetic stimulation. Variability and optokinetic bias were correlated (R2 = 0.71, slope = 0.71, 95% confidence interval = 0.57-0.86). An optimal-observer model combining an optokinetic bias with vestibular input reproduced the measured errors closely. These findings support the hypothesis of weighted multisensory integration when estimating direction of gravity with optokinetic stimulation: visual input was weighted more when vestibular input became less reliable, i.e., at larger roll-tilt angles. However, according to Bayesian theory, the variability of combined cues is always lower than the variability of each source cue. If the observed increase in variability, although nonsignificant, is true, either it must depend on an additional source of variability added after SVV computation, or it would conflict with the Bayesian hypothesis. NEW & NOTEWORTHY Applying a rotating optokinetic stimulus while recording the subjective visual vertical at different whole body roll angles, we noted that the optokinetic-induced bias correlated with the roll angle. These findings support the hypothesis that the established optimal weighting of single-sensory cues by their reliability when estimating direction of gravity extends to the bias caused by visual self-motion stimuli. Copyright © 2017 the American Physiological Society.
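
    The weighting scheme invoked here is standard inverse-variance (reliability-based) cue combination. The toy numbers below are not the study's parameters; they only illustrate how a noisier otolith signal at large roll angles shifts the combined estimate toward the visual cue, and why Bayesian theory predicts reduced variability for the combined estimate.

    ```python
    # Toy inverse-variance cue combination; parameter values are invented.
    def combine(mu_vest, sigma_vest, mu_vis, sigma_vis):
        w_vest, w_vis = 1 / sigma_vest**2, 1 / sigma_vis**2
        mu = (w_vest * mu_vest + w_vis * mu_vis) / (w_vest + w_vis)
        sigma = (w_vest + w_vis) ** -0.5   # always below either cue's own SD
        return mu, sigma

    # Upright: reliable otoliths dominate. Large roll: noisy otoliths let the
    # rotating visual stimulus (biased estimate, here 10 deg) pull the percept.
    print(combine(0.0, 2.0, 10.0, 8.0))    # small visual bias
    print(combine(0.0, 12.0, 10.0, 8.0))   # large visual bias at high roll
    ```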

  1. Expression of Proteins Involved in Epithelial-Mesenchymal Transition as Predictors of Metastasis and Survival in Breast Cancer Patients

    DTIC Science & Technology

    2013-11-01

    [Abstract not available; only table-caption fragments were indexed. They indicate that unconditional logistic regression was used to estimate odds ratios (OR) and 95% confidence intervals (CI) for risk of node..., for risk of high-grade tumors..., and for the associations between each of the seven SNPs and... (reported Ptrend values: 0.78, 0.62, 0.75, 0.71, 0.67).]

  2. Stress Reduces Conception Probabilities across the Fertile Window: Evidence in Support of Relaxation

    PubMed Central

    Buck Louis, Germaine M.; Lum, Kirsten J.; Sundaram, Rajeshwari; Chen, Zhen; Kim, Sungduk; Lynch, Courtney D.; Schisterman, Enrique F.; Pyper, Cecilia

    2010-01-01

    Objective: To assess salivary stress biomarkers (cortisol and alpha-amylase) and female fecundity. Design: Prospective cohort design. Setting: United Kingdom. Patients: 274 women aged 18–40 years attempting pregnancy were followed until pregnant or for six menstrual cycles. Women collected basal saliva samples on day 6 of each cycle, and used fertility monitors to identify ovulation and pregnancy test kits for pregnancy detection. Main Outcome Measures: Exposures included salivary cortisol (μg/dL) and alpha-amylase (U/mL) concentrations. Fecundity was measured by time-to-pregnancy and the probability of pregnancy during the fertile window, as estimated from discrete-time survival and Bayesian modeling techniques, respectively. Results: Alpha-amylase but not cortisol concentrations were negatively associated with fecundity in the first cycle (fecundity odds ratio = 0.85; 95% confidence interval 0.67, 1.09) after adjusting for couples' ages, intercourse frequency, and alcohol consumption. Significant reductions in the probability of conception across the fertile window during the first cycle attempting pregnancy were observed for women whose salivary concentrations of alpha-amylase were in the upper quartiles in comparison to women in the lower quartiles (HPD −0.284; 95% interval −0.540, −0.029). Conclusions: Stress significantly reduced the probability of conception each day during the fertile window, possibly exerting its effect through the sympathetic medullar pathway. PMID:20688324

  3. Closed-form confidence intervals for functions of the normal mean and standard deviation.

    PubMed

    Donner, Allan; Zou, G Y

    2012-08-01

    Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.
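
    A hedged sketch of the variance-recovery idea for one such function, the upper 95% limit of agreement mu + 1.96*sigma: exact intervals for the mean (t-based) and for 1.96*sigma (chi-square-based) are combined into a closed-form interval by the MOVER formulas. The data are simulated; this is an illustration of the technique, not the authors' code.

    ```python
    # MOVER sketch for theta = mu + 1.96*sigma (upper 95% limit of agreement).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    x = rng.normal(10.0, 2.0, 30)
    n, xbar, s = len(x), x.mean(), x.std(ddof=1)

    tcrit = stats.t.ppf(0.975, n - 1)
    l1, u1 = xbar - tcrit * s / np.sqrt(n), xbar + tcrit * s / np.sqrt(n)

    # CI for 1.96*sigma from (n-1)s^2/sigma^2 ~ chi-square(n-1)
    l2 = 1.96 * s * np.sqrt((n - 1) / stats.chi2.ppf(0.975, n - 1))
    u2 = 1.96 * s * np.sqrt((n - 1) / stats.chi2.ppf(0.025, n - 1))

    est = xbar + 1.96 * s
    L = est - np.sqrt((xbar - l1) ** 2 + (1.96 * s - l2) ** 2)
    U = est + np.sqrt((u1 - xbar) ** 2 + (u2 - 1.96 * s) ** 2)
    print(f"limit of agreement: {est:.2f}, 95% CI ({L:.2f}, {U:.2f})")
    ```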

  4. Multiple Imputation in Two-Stage Cluster Samples Using The Weighted Finite Population Bayesian Bootstrap.

    PubMed

    Zhou, Hanzhi; Elliott, Michael R; Raghunathan, Trivellore E

    2016-06-01

    Multistage sampling is often employed in survey samples for cost and convenience. However, accounting for clustering features when generating datasets for multiple imputation is a nontrivial task, particularly when, as is often the case, cluster sampling is accompanied by unequal probabilities of selection, necessitating case weights. Thus, multiple imputation often ignores complex sample designs and assumes simple random sampling when generating imputations, even though failing to account for complex sample design features is known to yield biased estimates and confidence intervals that have incorrect nominal coverage. In this article, we extend a recently developed, weighted, finite-population Bayesian bootstrap procedure to generate synthetic populations conditional on complex sample design data that can be treated as simple random samples at the imputation stage, obviating the need to directly model design features for imputation. We develop two forms of this method: one where the probabilities of selection are known at the first and second stages of the design, and the other, more common in public use files, where only the final weight based on the product of the two probabilities is known. We show that this method has advantages in terms of bias, mean square error, and coverage properties over methods where sample designs are ignored, with little loss in efficiency, even when compared with correct fully parametric models. An application is made using the National Automotive Sampling System Crashworthiness Data System, a multistage, unequal probability sample of U.S. passenger vehicle crashes, which suffers from a substantial amount of missing data in "Delta-V," a key crash severity measure.
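
    As a rough illustration of the weighted Polya-urn resampling underlying such methods, the sketch below expands a small weighted sample into one synthetic population; repeating it many times would give posterior draws of population quantities. This is a simplified stand-in for the authors' algorithm, with invented data and no second-stage (cluster) structure.

    ```python
    # Simplified weighted Polya-urn draw of one synthetic population.
    import numpy as np

    rng = np.random.default_rng(4)

    def weighted_fpbb(values, weights):
        values = np.asarray(values, dtype=float)
        w = np.asarray(weights, dtype=float)
        n = len(values)
        N = int(round(w.sum()))            # implied population size
        counts = np.zeros(n)               # times each unit has been re-drawn
        draws = []
        for _ in range(N - n):
            p = (w - 1) + counts * (N - n) / n    # urn selection probabilities
            p = np.clip(p, 0, None)
            p /= p.sum()
            i = rng.choice(n, p=p)
            counts[i] += 1
            draws.append(values[i])
        return np.concatenate([values, draws])    # sample + Polya draws

    sample = [1.0, 2.0, 5.0, 7.0]
    weights = [2.0, 4.0, 2.0, 2.0]         # case weights summing to N = 10
    print(weighted_fpbb(sample, weights).mean())   # one draw of the pop. mean
    ```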

  5. P values in display items are ubiquitous and almost invariably significant: A survey of top science journals

    PubMed Central

    Cristea, Ioana Alina

    2018-01-01

    P values represent a widely used, but pervasively misunderstood and fiercely contested, method of scientific inference. Display items, such as figures and tables, often containing the main results, are an important source of P values. We conducted a survey comparing the overall use of P values and the occurrence of significant P values in display items of a sample of articles in the three top multidisciplinary journals (Nature, Science, PNAS) in 2017 and in 1997. We also examined the reporting of multiplicity corrections and its potential influence on the proportion of statistically significant P values. Our findings demonstrated substantial and growing reliance on P values in display items, with increases of 2.5 to 14.5 times in 2017 compared to 1997. The overwhelming majority of P values (94%, 95% confidence interval [CI] 92% to 96%) were statistically significant. Methods to adjust for multiplicity were almost non-existent in 1997, but were reported in many articles relying on P values in 2017 (Nature 68%, Science 48%, PNAS 38%). In their absence, almost all reported P values were statistically significant (98%, 95% CI 96% to 99%). Conversely, when any multiplicity corrections were described, 88% (95% CI 82% to 93%) of reported P values were statistically significant. Use of Bayesian methods was scant (2.5%), and only rarely (0.7%) did articles rely exclusively on Bayesian statistics. Overall, wider appreciation of the need for multiplicity corrections is a welcome evolution, but the rapid growth of reliance on P values and implausibly high rates of reported statistical significance are worrisome. PMID:29763472

  6. Multiple Imputation in Two-Stage Cluster Samples Using The Weighted Finite Population Bayesian Bootstrap

    PubMed Central

    Zhou, Hanzhi; Elliott, Michael R.; Raghunathan, Trivellore E.

    2017-01-01

    Multistage sampling is often employed in survey samples for cost and convenience. However, accounting for clustering features when generating datasets for multiple imputation is a nontrivial task, particularly when, as is often the case, cluster sampling is accompanied by unequal probabilities of selection, necessitating case weights. Thus, multiple imputation often ignores complex sample designs and assumes simple random sampling when generating imputations, even though failing to account for complex sample design features is known to yield biased estimates and confidence intervals that have incorrect nominal coverage. In this article, we extend a recently developed, weighted, finite-population Bayesian bootstrap procedure to generate synthetic populations conditional on complex sample design data that can be treated as simple random samples at the imputation stage, obviating the need to directly model design features for imputation. We develop two forms of this method: one where the probabilities of selection are known at the first and second stages of the design, and the other, more common in public use files, where only the final weight based on the product of the two probabilities is known. We show that this method has advantages in terms of bias, mean square error, and coverage properties over methods where sample designs are ignored, with little loss in efficiency, even when compared with correct fully parametric models. An application is made using the National Automotive Sampling System Crashworthiness Data System, a multistage, unequal probability sample of U.S. passenger vehicle crashes, which suffers from a substantial amount of missing data in “Delta-V,” a key crash severity measure. PMID:29226161

  7. A data-driven SVR model for long-term runoff prediction and uncertainty analysis based on the Bayesian framework

    NASA Astrophysics Data System (ADS)

    Liang, Zhongmin; Li, Yujie; Hu, Yiming; Li, Binquan; Wang, Jun

    2017-06-01

    Accurate and reliable long-term forecasting plays an important role in water resources management and utilization. In this paper, a hybrid model called SVR-HUP is presented to predict long-term runoff and quantify the prediction uncertainty. The model is built in three steps. First, appropriate predictors are selected according to the correlations between meteorological factors and runoff. Second, a support vector regression (SVR) model is structured and optimized based on the LibSVM toolbox and a genetic algorithm. Finally, using forecasted and observed runoff, a hydrologic uncertainty processor (HUP) based on a Bayesian framework is used to estimate the posterior probability distribution of the simulated values, and the associated prediction uncertainty is quantitatively analyzed. Six precision evaluation indexes, including the correlation coefficient (CC), relative root mean square error (RRMSE), relative error (RE), mean absolute percentage error (MAPE), Nash-Sutcliffe efficiency (NSE), and qualification rate (QR), are used to measure the prediction accuracy. As a case study, the proposed approach is applied in the Han River basin, South Central China. Three types of SVR models are established to forecast the monthly, flood-season, and annual runoff volumes. The results indicate that SVR yields satisfactory accuracy and reliability at all three scales. In addition, the results suggest that the HUP can not only quantify the prediction uncertainty via a confidence interval but also provide a more accurate single-value prediction than the initial SVR forecast. Thus, the SVR-HUP model provides an alternative method for long-term runoff forecasting.
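
    The post-processing idea can be mimicked with a much simpler stand-in for the HUP: regress observations on model simulations over a training period and use the fitted normal error model to turn each new SVR forecast into a predictive distribution. The sketch below does exactly that on synthetic data; it illustrates the SVR-plus-uncertainty-processor pipeline, not the paper's implementation.

    ```python
    # Illustrative SVR + normal linear uncertainty processor on synthetic data.
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(5)

    x = rng.uniform(0, 1, (200, 3))   # hypothetical meteorological predictors
    runoff = 50 + 30 * x[:, 0] + 10 * np.sin(6 * x[:, 1]) + rng.normal(0, 5, 200)

    train, test = slice(0, 150), slice(150, 200)
    svr = SVR(C=100.0, epsilon=1.0).fit(x[train], runoff[train])
    sim_train = svr.predict(x[train])

    # HUP-like step: observed = a + b * simulated + eps, eps ~ N(0, s^2)
    A = np.vstack([np.ones_like(sim_train), sim_train]).T
    (a, b), *_ = np.linalg.lstsq(A, runoff[train], rcond=None)
    s = np.std(runoff[train] - (a + b * sim_train), ddof=2)

    sim_test = svr.predict(x[test])
    mean = a + b * sim_test                       # updated point forecast
    lo, hi = mean - 1.96 * s, mean + 1.96 * s     # ~95% predictive interval
    print(mean[:3], lo[:3], hi[:3])
    ```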

  8. P values in display items are ubiquitous and almost invariably significant: A survey of top science journals.

    PubMed

    Cristea, Ioana Alina; Ioannidis, John P A

    2018-01-01

    P values represent a widely used, but pervasively misunderstood and fiercely contested, method of scientific inference. Display items, such as figures and tables, often containing the main results, are an important source of P values. We conducted a survey comparing the overall use of P values and the occurrence of significant P values in display items of a sample of articles in the three top multidisciplinary journals (Nature, Science, PNAS) in 2017 and in 1997. We also examined the reporting of multiplicity corrections and its potential influence on the proportion of statistically significant P values. Our findings demonstrated substantial and growing reliance on P values in display items, with increases of 2.5 to 14.5 times in 2017 compared to 1997. The overwhelming majority of P values (94%, 95% confidence interval [CI] 92% to 96%) were statistically significant. Methods to adjust for multiplicity were almost non-existent in 1997, but were reported in many articles relying on P values in 2017 (Nature 68%, Science 48%, PNAS 38%). In their absence, almost all reported P values were statistically significant (98%, 95% CI 96% to 99%). Conversely, when any multiplicity corrections were described, 88% (95% CI 82% to 93%) of reported P values were statistically significant. Use of Bayesian methods was scant (2.5%), and only rarely (0.7%) did articles rely exclusively on Bayesian statistics. Overall, wider appreciation of the need for multiplicity corrections is a welcome evolution, but the rapid growth of reliance on P values and implausibly high rates of reported statistical significance are worrisome.

  9. Assessing equity of healthcare utilization in rural China: results from nationally representative surveys from 1993 to 2008

    PubMed Central

    2013-01-01

    Background: The phenomenon of inequitable healthcare utilization in rural China interests policymakers and researchers; however, the inequity has not been actually measured to present the magnitude and trend using nationally representative data. Methods: Based on the National Health Service Survey (NHSS) in 1993, 1998, 2003, and 2008, a probit model with the probability of an outpatient visit and the probability of an inpatient visit as the dependent variables is applied to estimate need-predicted healthcare utilization. Furthermore, need-standardized healthcare utilization is assessed through the indirect standardization method. The concentration index is measured to reflect income-related inequity of healthcare utilization. Results: The concentration index of need-standardized outpatient utilization is 0.0486 [95% confidence interval (0.0399, 0.0574)], 0.0310 [95% confidence interval (0.0229, 0.0390)], 0.0167 [95% confidence interval (0.0069, 0.0264)] and −0.0108 [95% confidence interval (−0.0213, −0.0004)] in 1993, 1998, 2003 and 2008, respectively. For inpatient service, the concentration index is 0.0529 [95% confidence interval (0.0349, 0.0709)], 0.1543 [95% confidence interval (0.1356, 0.1730)], 0.2325 [95% confidence interval (0.2132, 0.2518)] and 0.1313 [95% confidence interval (0.1174, 0.1451)] in 1993, 1998, 2003 and 2008, respectively. Conclusions: Utilization of both outpatient and inpatient services was pro-rich in rural China, with the exception of outpatient service in 2008. With the same needs for healthcare, rich rural residents utilized more healthcare service than poor rural residents. Compared to utilization of outpatient service, utilization of inpatient service was more inequitable. Inequity of utilization of outpatient service reduced gradually from 1993 to 2008; meanwhile, inequity of inpatient service utilization increased dramatically from 1993 to 2003 and decreased significantly from 2003 to 2008. Recent attempts in China to increase coverage of insurance and primary healthcare could be a contributing factor to counteract the inequity of outpatient utilization, but better benefit packages and delivery strategies still need to be tested and scaled up to reduce future inequity in inpatient utilization in rural China. PMID:23688260
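
    For reference, the income-related concentration index used throughout this abstract has a compact empirical form, C = 2*cov(h, r)/mean(h), where r is the fractional rank of each individual in the income distribution; positive values indicate pro-rich utilization. A minimal sketch with synthetic data:

    ```python
    # Minimal empirical concentration index with synthetic data.
    import numpy as np

    rng = np.random.default_rng(6)

    income = rng.lognormal(10, 0.5, 1000)
    use = rng.poisson(0.5 + 0.5 * (income > np.median(income)))  # pro-rich use

    r = np.empty(len(income))
    r[np.argsort(income)] = (np.arange(len(income)) + 0.5) / len(income)

    C = 2 * np.cov(use, r)[0, 1] / use.mean()
    print(C)   # > 0: utilization concentrated among richer individuals
    ```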

  10. Exposure to power frequency electric fields and the risk of childhood cancer in the UK

    PubMed Central

    Skinner, J; Mee, T J; Blackwell, R P; Maslanyj, M P; Simpson, J; Allen, S G; Day, N E

    2002-01-01

    The United Kingdom Childhood Cancer Study, a population-based case–control study covering the whole of Great Britain, incorporated a pilot study measuring electric fields. Measurements were made in the homes of 473 children who were diagnosed with a malignant neoplasm between 1992 and 1996 and who were aged 0–14 at diagnosis, together with 453 controls matched on age, sex and geographical location. Exposure assessments comprised resultant spot measurements in the child's bedroom and the family living-room. Temporal stability of bedroom fields was investigated through continuous logging of the 48-h vertical component at the child's bedside supported by repeat spot measurements. The principal exposure metric used was the mean of the pillow and bed centre measurements. For the 273 cases and 276 controls with fully validated measures, comparing those with a measured electric field exposure ⩾20 V m−1 to those in a reference category of exposure <10 V m−1, odds ratios of 1.31 (95% confidence interval 0.68–2.54) for acute lymphoblastic leukaemia, 1.32 (95% confidence interval 0.73–2.39) for total leukaemia, 2.12 (95% confidence interval 0.78–5.78) for central nervous system cancers and 1.26 (95% confidence interval 0.77–2.07) for all malignancies were obtained. When considering the 426 cases and 419 controls with no invalid measures, the corresponding odds ratios were 0.86 (95% confidence interval 0.49–1.51) for acute lymphoblastic leukaemia, 0.93 (95% confidence interval 0.56–1.54) for total leukaemia, 1.43 (95% confidence interval 0.68–3.02) for central nervous system cancers and 0.90 (95% confidence interval 0.59–1.35) for all malignancies. With exposure modelled as a continuous variable, odds ratios for an increase in the principal metric of 10 V m−1 were close to unity for all disease categories, never differing significantly from one. British Journal of Cancer (2002) 87, 1257–1266. doi:10.1038/sj.bjc.6600602 www.bjcancer.com © 2002 Cancer Research UK PMID:12439715

  11. A risk score for predicting coronary artery disease in women with angina pectoris and abnormal stress test finding.

    PubMed

    Lo, Monica Y; Bonthala, Nirupama; Holper, Elizabeth M; Banks, Kamakki; Murphy, Sabina A; McGuire, Darren K; de Lemos, James A; Khera, Amit

    2013-03-15

    Women with angina pectoris and abnormal stress test findings commonly have no epicardial coronary artery disease (CAD) at catheterization. The aim of the present study was to develop a risk score to predict obstructive CAD in such patients. Data were analyzed from 337 consecutive women with angina pectoris and abnormal stress test findings who underwent cardiac catheterization at our center from 2003 to 2007. Forward selection multivariate logistic regression analysis was used to identify the independent predictors of CAD, defined by ≥50% diameter stenosis in ≥1 epicardial coronary artery. The independent predictors included age ≥55 years (odds ratio 2.3, 95% confidence interval 1.3 to 4.0), body mass index <30 kg/m(2) (odds ratio 1.9, 95% confidence interval 1.1 to 3.1), smoking (odds ratio 2.6, 95% confidence interval 1.4 to 4.8), low high-density lipoprotein cholesterol (odds ratio 2.9, 95% confidence interval 1.5 to 5.5), family history of premature CAD (odds ratio 2.4, 95% confidence interval 1.0 to 5.7), lateral abnormality on stress imaging (odds ratio 2.8, 95% confidence interval 1.5 to 5.5), and exercise capacity <5 metabolic equivalents (odds ratio 2.4, 95% confidence interval 1.1 to 5.6). Each variable was assigned 1 point, and the points were summed to constitute a risk score; there was a graded association between the score and prevalent CAD (ptrend <0.001). The risk score demonstrated good discrimination, with a cross-validated c-statistic of 0.745 (95% confidence interval 0.70 to 0.79), and an optimized cutpoint of a score ≤2 included 62% of the subjects and had a negative predictive value of 80%. In conclusion, a simple clinical risk score of 7 characteristics can help differentiate those more or less likely to have CAD among women with angina pectoris and abnormal stress test findings. This tool, if validated, could help to guide testing strategies in women with angina pectoris. Copyright © 2013 Elsevier Inc. All rights reserved.

  12. Time-varying nonstationary multivariate risk analysis using a dynamic Bayesian copula

    NASA Astrophysics Data System (ADS)

    Sarhadi, Ali; Burn, Donald H.; Concepción Ausín, María.; Wiper, Michael P.

    2016-03-01

    A time-varying risk analysis is proposed for an adaptive design framework in nonstationary conditions arising from climate change. A Bayesian, dynamic conditional copula is developed for modeling the time-varying dependence structure between mixed continuous and discrete multiattributes of multidimensional hydrometeorological phenomena. Joint Bayesian inference is carried out to fit the marginals and copula in an illustrative example using an adaptive Gibbs Markov Chain Monte Carlo (MCMC) sampler. Posterior mean estimates and credible intervals are provided for the model parameters, and the Deviance Information Criterion (DIC) is used to select the model that best captures different forms of nonstationarity over time. This study also introduces a fully Bayesian, time-varying joint return period for multivariate time-dependent risk analysis in nonstationary environments. The results demonstrate that the nature and the risk of extreme-climate multidimensional processes change over time under the impact of climate change, and accordingly long-term decision-making strategies should be updated based on the anomalies of the nonstationary environment.

  13. Uncovering robust patterns of microRNA co-expression across cancers using Bayesian Relevance Networks

    PubMed Central

    2017-01-01

    Co-expression networks have long been used as a tool for investigating the molecular circuitry governing biological systems. However, most algorithms for constructing co-expression networks were developed in the microarray era, before high-throughput sequencing—with its unique statistical properties—became the norm for expression measurement. Here we develop Bayesian Relevance Networks, an algorithm that uses Bayesian reasoning about expression levels to account for the differing levels of uncertainty in expression measurements between highly- and lowly-expressed entities, and between samples with different sequencing depths. It combines data from groups of samples (e.g., replicates) to estimate group expression levels and confidence ranges. It then computes uncertainty-moderated estimates of cross-group correlations between entities, and uses permutation testing to assess their statistical significance. Using large scale miRNA data from The Cancer Genome Atlas, we show that our Bayesian update of the classical Relevance Networks algorithm provides improved reproducibility in co-expression estimates and lower false discovery rates in the resulting co-expression networks. Software is available at www.perkinslab.ca. PMID:28817636

  14. Uncovering robust patterns of microRNA co-expression across cancers using Bayesian Relevance Networks.

    PubMed

    Ramachandran, Parameswaran; Sánchez-Taltavull, Daniel; Perkins, Theodore J

    2017-01-01

    Co-expression networks have long been used as a tool for investigating the molecular circuitry governing biological systems. However, most algorithms for constructing co-expression networks were developed in the microarray era, before high-throughput sequencing-with its unique statistical properties-became the norm for expression measurement. Here we develop Bayesian Relevance Networks, an algorithm that uses Bayesian reasoning about expression levels to account for the differing levels of uncertainty in expression measurements between highly- and lowly-expressed entities, and between samples with different sequencing depths. It combines data from groups of samples (e.g., replicates) to estimate group expression levels and confidence ranges. It then computes uncertainty-moderated estimates of cross-group correlations between entities, and uses permutation testing to assess their statistical significance. Using large scale miRNA data from The Cancer Genome Atlas, we show that our Bayesian update of the classical Relevance Networks algorithm provides improved reproducibility in co-expression estimates and lower false discovery rates in the resulting co-expression networks. Software is available at www.perkinslab.ca.
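
    A heavily simplified sketch of the idea (not the published software): per-group expression rates get conjugate Gamma-Poisson posteriors, posterior draws propagate measurement uncertainty into the cross-group correlation, and significance would then be assessed by permutation. Priors, data, and normalization below are invented.

    ```python
    # Simplified uncertainty-moderated correlation via Gamma-Poisson posteriors.
    import numpy as np

    rng = np.random.default_rng(7)

    def posterior_draws(counts, depths, n_draws=500, a0=0.5, b0=0.5):
        """Gamma(a0 + sum counts, b0 + sum depths) draws of a group's rate."""
        return rng.gamma(a0 + counts.sum(), 1.0 / (b0 + depths.sum()), n_draws)

    def bayesian_correlation(c1, c2, depths, n_draws=500):
        d1 = np.array([posterior_draws(c1[g], depths[g], n_draws)
                       for g in range(len(c1))])
        d2 = np.array([posterior_draws(c2[g], depths[g], n_draws)
                       for g in range(len(c2))])
        # correlate across groups within each posterior draw, then average
        rs = [np.corrcoef(d1[:, k], d2[:, k])[0, 1] for k in range(n_draws)]
        return float(np.mean(rs))

    # Three groups, two replicates each; entity B roughly tracks entity A.
    depths = [np.array([1e6, 1e6])] * 3
    a = [np.array([100, 120]), np.array([300, 280]), np.array([50, 60])]
    b = [np.array([210, 190]), np.array([580, 600]), np.array([130, 110])]
    print(bayesian_correlation(a, b, depths))   # high, but moderated by noise
    ```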

  15. Confidence Intervals for True Scores Using the Skew-Normal Distribution

    ERIC Educational Resources Information Center

    Garcia-Perez, Miguel A.

    2010-01-01

    A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…

  16. The Role of Short-Term Memory Capacity and Task Experience for Overconfidence in Judgment under Uncertainty

    ERIC Educational Resources Information Center

    Hansson, Patrik; Juslin, Peter; Winman, Anders

    2008-01-01

    Research with general knowledge items demonstrates extreme overconfidence when people estimate confidence intervals for unknown quantities, but close to zero overconfidence when the same intervals are assessed by probability judgment. In 3 experiments, the authors investigated if the overconfidence specific to confidence intervals derives from…

  17. Confidence Intervals for Weighted Composite Scores under the Compound Binomial Error Model

    ERIC Educational Resources Information Center

    Kim, Kyung Yong; Lee, Won-Chan

    2018-01-01

    Reporting confidence intervals with test scores helps test users make important decisions about examinees by providing information about the precision of test scores. Although a variety of estimation procedures based on the binomial error model are available for computing intervals for test scores, these procedures assume that items are randomly…

  18. Towards the estimation of effect measures in studies using respondent-driven sampling.

    PubMed

    Rotondi, Michael A

    2014-06-01

    Respondent-driven sampling (RDS) is an increasingly common sampling technique to recruit hidden populations. Statistical methods for RDS are not straightforward due to the correlation between individual outcomes and subject weighting; thus, analyses are typically limited to estimation of population proportions. This manuscript applies the method of variance estimates recovery (MOVER) to construct confidence intervals for effect measures such as risk difference (difference of proportions) or relative risk in studies using RDS. To illustrate the approach, MOVER is used to construct confidence intervals for differences in the prevalence of demographic characteristics between an RDS study and convenience study of injection drug users. MOVER is then applied to obtain a confidence interval for the relative risk between education levels and HIV seropositivity and current infection with syphilis, respectively. This approach provides a simple method to construct confidence intervals for effect measures in RDS studies. Since it only relies on a proportion and appropriate confidence limits, it can also be applied to previously published manuscripts.
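
    The MOVER recipe referred to here is short enough to show in full for a difference of two proportions: compute Wilson score limits for each proportion and combine them into a closed-form interval. In an actual RDS analysis the per-group proportions and limits would come from an RDS estimator; the counts below are invented.

    ```python
    # MOVER interval for a difference of proportions from Wilson score limits.
    from math import sqrt

    def wilson(x, n, z=1.96):
        p = x / n
        centre = (p + z * z / (2 * n)) / (1 + z * z / n)
        half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / (1 + z * z / n)
        return p, centre - half, centre + half

    def mover_diff(x1, n1, x2, n2):
        p1, l1, u1 = wilson(x1, n1)
        p2, l2, u2 = wilson(x2, n2)
        d = p1 - p2
        L = d - sqrt((p1 - l1) ** 2 + (u2 - p2) ** 2)
        U = d + sqrt((u1 - p1) ** 2 + (p2 - l2) ** 2)
        return d, L, U

    print(mover_diff(45, 120, 30, 130))   # prevalence difference with 95% CI
    ```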

  19. To P or Not to P: Backing Bayesian Statistics.

    PubMed

    Buchinsky, Farrel J; Chadha, Neil K

    2017-12-01

    In biomedical research, it is imperative to differentiate chance variation from truth before we generalize what we see in a sample of subjects to the wider population. For decades, we have relied on null hypothesis significance testing, where we calculate P values for our data to decide whether to reject a null hypothesis. This methodology is subject to substantial misinterpretation and errant conclusions. Instead of working backward by calculating the probability of our data if the null hypothesis were true, Bayesian statistics allow us instead to work forward, calculating the probability of our hypothesis given the available data. This methodology gives us a mathematical means of incorporating our "prior probabilities" from previous study data (if any) to produce new "posterior probabilities." Bayesian statistics tell us how confidently we should believe what we believe. It is time to embrace and encourage their use in our otolaryngology research.
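
    A toy example of the forward reasoning described here, using a conjugate beta-binomial update (numbers invented): a prior on a cure rate is combined with trial data, and the posterior directly answers how confidently we should believe the rate exceeds 25%.

    ```python
    # Conjugate beta-binomial update with invented numbers.
    from scipy import stats

    a0, b0 = 4, 16                  # prior: earlier studies suggest ~20% cure rate
    successes, failures = 12, 18    # new trial data

    posterior = stats.beta(a0 + successes, b0 + failures)
    print(posterior.mean())          # posterior mean cure rate
    print(1 - posterior.cdf(0.25))   # P(cure rate > 25% | data)
    print(posterior.interval(0.95))  # 95% credible interval
    ```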

  20. Generalized anxiety disorder prevalence and comorbidity with depression in coronary heart disease: a meta-analysis.

    PubMed

    Tully, Phillip J; Cosh, Suzanne M

    2013-12-01

    Generalized anxiety disorder prevalence and comorbidity with depression in coronary heart disease patients remain unquantified. Systematic searching of Medline, Embase, SCOPUS and PsycINFO databases revealed 1025 unique citations. Aggregate generalized anxiety disorder prevalence (12 studies, N = 3485) was 10.94 per cent (95% confidence interval: 7.8-13.99) and 13.52 per cent (95% confidence interval: 8.39-18.66) employing Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition criteria (random effects). Lifetime generalized anxiety disorder prevalence was 25.80 per cent (95% confidence interval: 20.84-30.77). In seven studies, modest correlation was evident between generalized anxiety disorder and depression, Fisher's Z = .30 (95% confidence interval: .19-.42), suggesting that each psychiatric disorder is best conceptualized as contributing unique variance to coronary heart disease prognosis.

  1. Autonomous motivation mediates the relation between goals for physical activity and physical activity behavior in adolescents.

    PubMed

    Duncan, Michael J; Eyre, Emma Lj; Bryant, Elizabeth; Seghers, Jan; Galbraith, Niall; Nevill, Alan M

    2017-04-01

    Overall, 544 children (mean age ± standard deviation = 14.2 ± 0.94 years) completed self-report measures of physical activity goal content, behavioral regulations, and physical activity behavior. Body mass index was determined from height and mass. The indirect effect of intrinsic goal content on physical activity was statistically significant via autonomous (b = 162.27; 95% confidence interval [89.73, 244.70]), but not controlled motivation (b = 5.30; 95% confidence interval [-39.05, 45.16]). The indirect effect of extrinsic goal content on physical activity was statistically significant via autonomous (b = 106.25; 95% confidence interval [63.74, 159.13]) but not controlled motivation (b = 17.28; 95% confidence interval [-31.76, 70.21]). Weight status did not alter these findings.

  2. PBPK modeling of the cis- and trans-permethrin isomers and their major urinary metabolites in rats

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willemin, Marie-Emilie; Sorbonne University, Université de Technologie de Compiègne, CNRS, UMR 7338 Biomechanics and Bioengineering, Centre de recherche Royallieu CS 60319, 60203 Compiègne Cedex; Desmots, Sophie

    2016-03-01

    Permethrin, a pyrethroid insecticide, is suspected to induce neuronal and hormonal disturbances in humans. The widespread exposure of populations has been confirmed by the detection of the urinary metabolites of permethrin in biomonitoring studies. Permethrin is a chiral molecule presenting two forms, the cis and the trans isomers. Because in vitro studies indicated a metabolic interaction between the trans and cis isomers of permethrin, we adapted and calibrated a PBPK model for trans- and cis-permethrin separately in rats. The model also describes the toxicokinetics of three urinary metabolites, cis- and trans-3-(2,2-dichlorovinyl)-2,2-dimethyl-(1-cyclopropane) carboxylic acid (cis- and trans-DCCA), 3-phenoxybenzoic acid (3-PBA) and 4′-OH-phenoxybenzoic acid (4′-OH-PBA). In vivo experiments performed in Sprague–Dawley rats were used to calibrate the PBPK model in a Bayesian framework. The model captured well the toxicokinetics of the permethrin isomers and their metabolites, including the rapid absorption, the accumulation in fat, the extensive metabolism of the parent compounds, and the rapid elimination of the metabolites in urine. Average hepatic clearances in rats were estimated to be 2.4 and 5.7 L/h/kg for cis- and trans-permethrin, respectively. High concentrations of the metabolite 4′-OH-PBA were measured in urine compared to cis- and trans-DCCA and 3-PBA. Confidence in the extended PBPK model was then confirmed by good predictions of published experimental data obtained using the isomer mixture. The extended PBPK model could be extrapolated to humans to predict the internal dose of exposure to permethrin from biomonitoring data in urine. - Highlights: • A PBPK model of the isomers of permethrin and its urinary metabolites was developed. • A quantitative link was established between permethrin and its biomarkers of exposure. • The Bayesian framework yields confidence intervals for the estimated parameters. • The PBPK model can be extrapolated to humans and used in a reverse dosimetry context.

  3. Confidence intervals for a difference between lognormal means in cluster randomization trials.

    PubMed

    Poirier, Julia; Zou, G Y; Koval, John

    2017-04-01

    Cluster randomization trials, in which intact social units are randomized to different interventions, have become popular in the last 25 years. Outcomes from these trials in many cases are positively skewed, following approximately lognormal distributions. When inference is focused on the difference between treatment arm arithmetic means, existing confidence interval procedures either make restrictive assumptions or are complex to implement. We approach this problem by assuming log-transformed outcomes from each treatment arm follow a one-way random effects model. The treatment arm means are functions of multiple parameters for which separate confidence intervals are readily available, suggesting that the method of variance estimates recovery may be applied to obtain closed-form confidence intervals. A simulation study showed that this simple approach performs well in small sample sizes in terms of empirical coverage, relatively balanced tail errors, and interval widths as compared to existing methods. The methods are illustrated using data arising from a cluster randomization trial investigating a critical pathway for the treatment of community acquired pneumonia.

  4. A Fast Surrogate-facilitated Data-driven Bayesian Approach to Uncertainty Quantification of a Regional Groundwater Flow Model with Structural Error

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.; Ye, M.; Liang, F.

    2016-12-01

    Due to simplification and/or misrepresentation of the real aquifer system, numerical groundwater flow and solute transport models are usually subject to model structural error. During model calibration, the hydrogeological parameters may be overly adjusted to compensate for unknown structural error. This may result in biased predictions when models are used to forecast aquifer response to new forcing. In this study, we extend a fully Bayesian method [Xu and Valocchi, 2015] to calibrate a real-world, regional groundwater flow model. The method uses a data-driven error model to describe model structural error and jointly infers model parameters and structural error. In this study, Bayesian inference is facilitated using high performance computing and fast surrogate models. The surrogate models are constructed using machine learning techniques to emulate the response simulated by the computationally expensive groundwater model. We demonstrate in the real-world case study that explicitly accounting for model structural error yields parameter posterior distributions that are substantially different from those derived by classical Bayesian calibration that does not account for model structural error. In addition, the Bayesian method with the error model gives significantly more accurate predictions along with reasonable credible intervals.

  5. Rediscovery of Good-Turing estimators via Bayesian nonparametrics.

    PubMed

    Favaro, Stefano; Nipoti, Bernardo; Teh, Yee Whye

    2016-03-01

    The problem of estimating discovery probabilities originated in the context of statistical ecology, and in recent years it has become popular due to its frequent appearance in challenging applications arising in genetics, bioinformatics, linguistics, designs of experiments, machine learning, etc. A full range of statistical approaches, parametric and nonparametric as well as frequentist and Bayesian, has been proposed for estimating discovery probabilities. In this article, we investigate the relationships between the celebrated Good-Turing approach, which is a frequentist nonparametric approach developed in the 1940s, and a Bayesian nonparametric approach recently introduced in the literature. Specifically, under the assumption of a two parameter Poisson-Dirichlet prior, we show that Bayesian nonparametric estimators of discovery probabilities are asymptotically equivalent, for a large sample size, to suitably smoothed Good-Turing estimators. As a by-product of this result, we introduce and investigate a methodology for deriving exact and asymptotic credible intervals to be associated with the Bayesian nonparametric estimators of discovery probabilities. The proposed methodology is illustrated through a comprehensive simulation study and the analysis of Expressed Sequence Tags data generated by sequencing a benchmark complementary DNA library. © 2015, The International Biometric Society.
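
    For orientation, the classical Good-Turing estimators discussed here are simple to state: with n observations and n_r species seen exactly r times, the discovery probability of a new species is n_1/n, and the total probability mass of species seen r times is (r+1)n_{r+1}/n. A toy sketch:

    ```python
    # Classical Good-Turing estimates from frequencies of frequencies (toy data).
    from collections import Counter

    sample = list("mississippi business bliss")
    counts = Counter(sample)              # "species" frequencies
    n = sum(counts.values())
    n_r = Counter(counts.values())        # n_r: number of species seen r times

    print("P(next observation is new):", n_r[1] / n)   # discovery probability
    for r in sorted(n_r):
        mass = (r + 1) * n_r.get(r + 1, 0) / n
        print(f"total probability of species seen {r} times: {mass:.3f}")
    ```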

  6. The Distribution of the Product Explains Normal Theory Mediation Confidence Interval Estimation.

    PubMed

    Kisbu-Sakarya, Yasemin; MacKinnon, David P; Miočević, Milica

    2014-05-01

    The distribution of the product has several useful applications. One of these applications is its use to form confidence intervals for the indirect effect as the product of 2 regression coefficients. The purpose of this article is to investigate how the moments of the distribution of the product explain normal theory mediation confidence interval coverage and imbalance. Values of the critical ratio for each random variable are used to demonstrate how the moments of the distribution of the product change across values of the critical ratio observed in research studies. Results of the simulation study showed that as skewness in absolute value increases, coverage decreases. And as skewness in absolute value and kurtosis increases, imbalance increases. The difference between testing the significance of the indirect effect using the normal theory versus the asymmetric distribution of the product is further illustrated with a real data example. This article is the first study to show the direct link between the distribution of the product and indirect effect confidence intervals and clarifies the results of previous simulation studies by showing why normal theory confidence intervals for indirect effects are often less accurate than those obtained from the asymmetric distribution of the product or from resampling methods.
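
    The contrast at issue can be seen in a few lines: the normal-theory (Sobel) interval for the indirect effect ab is symmetric, whereas the distribution of the product is skewed and kurtotic, which is what drives the coverage and imbalance results above. Below, the product distribution is approximated by Monte Carlo with invented coefficients and standard errors.

    ```python
    # Sobel (normal-theory) interval versus a Monte Carlo approximation to the
    # distribution of the product for an indirect effect ab; values invented.
    import numpy as np

    rng = np.random.default_rng(8)

    a, se_a = 0.40, 0.15      # path X -> M
    b, se_b = 0.35, 0.14      # path M -> Y (controlling for X)

    se_ab = np.sqrt(a**2 * se_b**2 + b**2 * se_a**2)
    print("Sobel:", a * b - 1.96 * se_ab, a * b + 1.96 * se_ab)   # symmetric

    draws = rng.normal(a, se_a, 200_000) * rng.normal(b, se_b, 200_000)
    print("Product:", np.percentile(draws, [2.5, 97.5]))          # asymmetric
    ```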

  7. Statistical inference for the within-device precision of quantitative measurements in assay validation.

    PubMed

    Liu, Jen-Pei; Lu, Li-Tien; Liao, C T

    2009-09-01

    Intermediate precision is one of the most important characteristics for evaluation of precision in assay validation. The current methods for evaluation of within-device precision recommended by the Clinical and Laboratory Standards Institute (CLSI) guideline EP5-A2 are based on the point estimator. On the other hand, in addition to point estimators, confidence intervals can provide a range for the within-device precision with a probability statement. Therefore, we suggest a confidence interval approach for assessment of the within-device precision. Furthermore, under the two-stage nested random-effects model recommended by the approved CLSI guideline EP5-A2, in addition to the current Satterthwaite's approximation and the modified large sample (MLS) methods, we apply the technique of generalized pivotal quantities (GPQ) to derive the confidence interval for the within-device precision. The data from the approved CLSI guideline EP5-A2 illustrate the applications of the confidence interval approach and comparison of results between the three methods. Results of a simulation study on the coverage probability and expected length of the three methods are reported. The proposed method of the GPQ-based confidence intervals is also extended to consider the between-laboratories variation for precision assessment.
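
    A hedged sketch of the GPQ construction for a one-way random-effects design (days nested within a device, replicates within days): pivotal draws for the within-day and between-day variance components are combined into draws for the within-device standard deviation, whose percentiles give the interval. Mean squares and design sizes below are invented.

    ```python
    # GPQ draws for the within-device SD sqrt(sigma_day^2 + sigma_error^2).
    import numpy as np

    rng = np.random.default_rng(10)

    d, n = 20, 2                   # days, replicates per day
    ms1, ms2 = 4.0, 1.5            # between-day and within-day mean squares
    df1, df2 = d - 1, d * (n - 1)

    K = 100_000
    s2_e = df2 * ms2 / rng.chisquare(df2, K)            # error-variance pivots
    s2_d = np.maximum(0.0, (df1 * ms1 / rng.chisquare(df1, K) - s2_e) / n)
    gpq = np.sqrt(s2_d + s2_e)     # pivotal draws for the within-device SD

    print(np.percentile(gpq, [2.5, 97.5]))   # 95% GPQ confidence interval
    ```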

  8. Prolonged corrected QT interval is predictive of future stroke events even in subjects without ECG-diagnosed left ventricular hypertrophy.

    PubMed

    Ishikawa, Joji; Ishikawa, Shizukiyo; Kario, Kazuomi

    2015-03-01

    We attempted to evaluate whether subjects who exhibit prolonged corrected QT (QTc) interval (≥440 ms in men and ≥460 ms in women) on ECG, with and without ECG-diagnosed left ventricular hypertrophy (ECG-LVH; Cornell product, ≥244 mV×ms), are at increased risk of stroke. Among the 10 643 subjects, there were a total of 375 stroke events during the follow-up period (128.7±28.1 months; 114 142 person-years). The subjects with prolonged QTc interval (hazard ratio, 2.13; 95% confidence interval, 1.22-3.73) had an increased risk of stroke even after adjustment for ECG-LVH (hazard ratio, 1.71; 95% confidence interval, 1.22-2.40). When we stratified the subjects into those with neither a prolonged QTc interval nor ECG-LVH, those with a prolonged QTc interval but without ECG-LVH, and those with ECG-LVH, multivariate-adjusted Cox proportional hazards analysis demonstrated that the subjects with prolonged QTc intervals but not ECG-LVH (1.2% of all subjects; incidence, 10.7%; hazard ratio, 2.70, 95% confidence interval, 1.48-4.94) and those with ECG-LVH (incidence, 7.9%; hazard ratio, 1.83; 95% confidence interval, 1.31-2.57) had an increased risk of stroke events, compared with those with neither a prolonged QTc interval nor ECG-LVH. In conclusion, prolonged QTc interval was associated with stroke risk even among patients without ECG-LVH in the general population. © 2014 American Heart Association, Inc.

  9. A Bayesian CUSUM plot: Diagnosing quality of treatment.

    PubMed

    Rosthøj, Steen; Jacobsen, Rikke-Line

    2017-12-01

    To present a CUSUM plot based on Bayesian diagnostic reasoning, displaying evidence in favour of "healthy" rather than "sick" quality of treatment (QOT), and to demonstrate a technique using Kaplan-Meier survival curves permitting application to case series with ongoing follow-up. For a case series with known final outcomes: consider each case a diagnostic test of good versus poor QOT (expected vs. increased failure rates), determine the likelihood ratio (LR) of the observed outcome, convert the LR to a weight by taking its logarithm to base 2, and add up the weights sequentially in a plot showing how many times the odds in favour of good QOT have been doubled. For a series with observed survival times and an expected survival curve: divide the curve into time intervals, determine "healthy" and specify "sick" risks of failure in each interval, construct a "sick" survival curve, determine the LR of survival or failure at the given observation times, convert to weights, and add up. The Bayesian plot was applied retrospectively to 39 children with acute lymphoblastic leukaemia with completed follow-up, using Nordic collaborative results as reference, showing equal odds between good and poor QOT. In the ongoing treatment trial, with 22 of 37 children still at risk for an event, QOT has been monitored with average survival curves as reference, with odds so far favouring good QOT 2:1. QOT in small patient series can be assessed with a Bayesian CUSUM plot, retrospectively when all treatment outcomes are known, but also in ongoing series with unfinished follow-up. © 2017 John Wiley & Sons, Ltd.
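
    The plotting rule itself is tiny; the sketch below shows the known-outcomes case with illustrative failure risks (not the leukaemia data): each observed outcome contributes log2 of its likelihood ratio under good versus poor QOT, and the cumulative sum counts doublings of the odds in favour of good QOT.

    ```python
    # Known-outcomes Bayesian CUSUM with illustrative failure risks.
    import numpy as np

    p_good, p_poor = 0.10, 0.20     # expected vs. increased failure probability
    outcomes = [0, 0, 1, 0, 0, 0, 1, 0, 0, 0]    # 1 = treatment failure

    weights = [np.log2((p_good if y else 1 - p_good) /
                       (p_poor if y else 1 - p_poor)) for y in outcomes]
    print(np.cumsum(weights))   # > 0 favours good QOT; units are odds doublings
    ```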

  10. Risk factors for pneumonic and ulceroglandular tularaemia in Finland: a population-based case-control study.

    PubMed

    Rossow, H; Ollgren, J; Klemets, P; Pietarinen, I; Saikku, J; Pekkanen, E; Nikkari, S; Syrjälä, H; Kuusi, M; Nuorti, J P

    2014-10-01

    Few population-based data are available on factors associated with pneumonic and ulceroglandular type B tularaemia. We conducted a case-control study during a large epidemic in 2000. Laboratory-confirmed case patients were identified through active surveillance and matched control subjects (age, sex, residency) from the national population information system. Data were collected using a self-administered questionnaire. A conditional logistic regression model addressing missing data with Bayesian full-likelihood modelling included 227 case patients and 415 control subjects; reported mosquito bites [adjusted odds ratio (aOR) 9·2, 95% confidence interval (CI) 4·4-22, population-attributable risk (PAR) 82%] and farming activities (aOR 4·3, 95% CI 2·5-7·2, PAR 32%) were independently associated with ulceroglandular tularaemia, whereas exposure to hay dust (aOR 6·6, 95% CI 1·9-25·4, PAR 48%) was associated with pneumonic tularaemia. Although the bulk of tularaemia type B disease burden is attributable to mosquito bites, risk factors for ulceroglandular and pneumonic forms of tularaemia are different, enabling targeting of prevention efforts accordingly.

  11. Calculating Confidence Intervals for Regional Economic Impacts of Recreation by Bootstrapping Visitor Expenditures

    Treesearch

    Donald B.K. English

    2000-01-01

    In this paper I use bootstrap procedures to develop confidence intervals for estimates of total industrial output generated per thousand tourist visits. Mean expenditures from replicated visitor expenditure data included weights to correct for response bias. Impacts were estimated with IMPLAN. Ninety percent interval endpoints were 6 to 16 percent above or below the...
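
    A schematic version of this bootstrap (with invented expenditures and a fixed stand-in multiplier replacing the IMPLAN step): resample visitors with replacement, recompute the impact estimate, and take the 5th and 95th percentiles for a 90% interval.

    ```python
    # Percentile bootstrap for output per thousand visits (toy data).
    import numpy as np

    rng = np.random.default_rng(9)

    spend = rng.gamma(2.0, 40.0, 400)    # expenditure per visit (invented)
    multiplier = 1.8                     # stand-in for the IMPLAN output response

    def impact(sample):
        return sample.mean() * 1000 * multiplier   # output per thousand visits

    boots = np.array([impact(rng.choice(spend, len(spend), replace=True))
                      for _ in range(5000)])
    print(impact(spend), np.percentile(boots, [5, 95]))   # 90% interval
    ```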

  12. Bayesian network modeling applied to coastal geomorphology: lessons learned from a decade of experimentation and application

    NASA Astrophysics Data System (ADS)

    Plant, N. G.; Thieler, E. R.; Gutierrez, B.; Lentz, E. E.; Zeigler, S. L.; Van Dongeren, A.; Fienen, M. N.

    2016-12-01

    We evaluate the strengths and weaknesses of Bayesian networks that have been used to address scientific and decision-support questions related to coastal geomorphology. We will provide an overview of coastal geomorphology research that has used Bayesian networks and describe what this approach can do and when it works (or fails to work). Over the past decade, Bayesian networks have been formulated to analyze the multi-variate structure and evolution of coastal morphology and associated human and ecological impacts. The approach relates observable system variables to each other by estimating discrete correlations. The resulting Bayesian networks make predictions that propagate errors, conduct inference via Bayes rule, or both. In scientific applications, the model results are useful for hypothesis testing, using confidence estimates to gauge the strength of tests, while applications to coastal resource management are aimed at decision support, where the probabilities of desired ecosystem outcomes are evaluated. The range of Bayesian-network applications to coastal morphology includes emulation of high-resolution wave transformation models to make oceanographic predictions, morphologic response to storms and/or sea-level rise, groundwater response to sea-level rise and morphologic variability, habitat suitability for endangered species, and assessment of monetary or human-life risk associated with storms. All of these examples are based on vast observational data sets, numerical model output, or both. We will discuss the progression of our experiments, which has included testing whether the Bayesian-network approach can be implemented and is appropriate for addressing basic and applied scientific problems, and evaluating the hindcast and forecast skill of these implementations. We will present and discuss calibration/validation tests that are used to assess the robustness of Bayesian-network models, and we will compare these results to tests of other models. This will demonstrate how Bayesian networks are used to extract new insights about coastal morphologic behavior, assess impacts to societal and ecological systems, and communicate probabilistic predictions to decision makers.

  13. Does blood transfusion affect intermediate survival after coronary artery bypass surgery?

    PubMed

    Mikkola, R; Heikkinen, J; Lahtinen, J; Paone, R; Juvonen, T; Biancari, F

    2013-01-01

    The aim of this study was to investigate the impact of transfusion of blood products on intermediate outcome after coronary artery bypass surgery. Complete data on perioperative blood transfusion in patients undergoing coronary artery bypass surgery were available from 2001 patients who were operated at our institution. Transfusion of any blood product (relative risk = 1.678, 95% confidence interval = 1.087-2.590) was an independent predictor of all-cause mortality. The additive effect of each blood product on all-cause mortality (relative risk = 1.401, 95% confidence interval = 1.203-1.630) and cardiac mortality (relative risk = 1.553, 95% confidence interval = 1.273-1.895) was evident when the sum of each blood product was included in the regression models. However, when single blood products were included in the regression model, transfusion of fresh frozen plasma/Octaplas® was the only blood product associated with increased risk of all-cause mortality (relative risk = 1.692, 95% confidence interval = 1.222-2.344) and cardiac mortality (relative risk = 2.125, 95% confidence interval = 1.414-3.194). The effect of blood product transfusion was particularly evident during the first three postoperative months. When follow-up was truncated at 3 months, transfusion of any blood product was a significant predictor of all-cause mortality (relative risk = 2.998, 95% confidence interval = 1.053-0.537). Analysis of patients who survived or had at least 3 months of potential follow-up showed that transfusion of any blood product was not associated with a significantly increased risk of intermediate all-cause mortality (relative risk = 1.430, 95% confidence interval = 0.880-2.323). Transfusion of any blood product is associated with a significant risk of all-cause and cardiac mortality after coronary artery bypass surgery. Such a risk seems to be limited to the early postoperative period and diminishes later on. Among blood products, perioperative use of fresh frozen plasma or Octaplas seems to be the main determinant of mortality.

  14. Rapid Contour-based Segmentation for 18F-FDG PET Imaging of Lung Tumors by Using ITK-SNAP: Comparison to Expert-based Segmentation.

    PubMed

    Besson, Florent L; Henry, Théophraste; Meyer, Céline; Chevance, Virgile; Roblot, Victoire; Blanchet, Elise; Arnould, Victor; Grimon, Gilles; Chekroun, Malika; Mabille, Laurence; Parent, Florence; Seferian, Andrei; Bulifon, Sophie; Montani, David; Humbert, Marc; Chaumet-Riffaud, Philippe; Lebon, Vincent; Durand, Emmanuel

    2018-04-03

    Purpose: To assess the performance of the ITK-SNAP software for fluorodeoxyglucose (FDG) positron emission tomography (PET) segmentation of complex-shaped lung tumors compared with an optimized, expert-based manual reference standard. Materials and Methods: Seventy-six FDG PET images of thoracic lesions were retrospectively segmented by using ITK-SNAP software. Each tumor was manually segmented by six raters to generate an optimized reference standard by using the simultaneous truth and performance level estimate algorithm. Four raters segmented 76 FDG PET images of lung tumors twice by using the ITK-SNAP active contour algorithm. Accuracy of the ITK-SNAP procedure was assessed by using the Dice coefficient and the Hausdorff metric. Interrater and intrarater reliability were estimated by using intraclass correlation coefficients of output volumes. Finally, the ITK-SNAP procedure was compared with currently recommended PET tumor delineation methods, on the basis of thresholding at 41% (VOI41) and 50% (VOI50) of the tumor's maximal metabolism intensity, where VOI denotes the volume of interest. Results: Accuracy estimates for the ITK-SNAP procedure indicated a Dice coefficient of 0.83 (95% confidence interval: 0.77, 0.89) and a Hausdorff distance of 12.6 mm (95% confidence interval: 9.82, 15.32). Interrater reliability was an intraclass correlation coefficient of 0.94 (95% confidence interval: 0.91, 0.96). The intrarater reliabilities were intraclass correlation coefficients above 0.97. Finally, the VOI41 and VOI50 accuracy metrics were as follows: Dice coefficient, 0.48 (95% confidence interval: 0.44, 0.51) and 0.34 (95% confidence interval: 0.30, 0.38), respectively, and Hausdorff distance, 25.6 mm (95% confidence interval: 21.7, 31.4) and 31.3 mm (95% confidence interval: 26.8, 38.4), respectively. Conclusion: ITK-SNAP is accurate and reliable for active-contour-based segmentation of heterogeneous thoracic PET tumors. ITK-SNAP surpassed the recommended PET methods compared with ground truth manual segmentation. © RSNA, 2018.
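
    The two agreement metrics used here are easy to compute directly; the sketch below evaluates the Dice coefficient, 2|A∩B|/(|A|+|B|), and a brute-force Hausdorff distance for two toy 3-D masks (not the study's images).

    ```python
    # Dice coefficient and brute-force Hausdorff distance for two toy 3-D masks.
    import numpy as np

    a = np.zeros((20, 20, 20), bool); a[5:15, 5:15, 5:15] = True  # segmentation A
    b = np.zeros((20, 20, 20), bool); b[6:16, 6:17, 5:15] = True  # segmentation B

    dice = 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    pa, pb = np.argwhere(a).astype(float), np.argwhere(b).astype(float)
    d = np.sqrt(((pa[:, None, :] - pb[None, :, :]) ** 2).sum(-1))  # all pairs
    hausdorff = max(d.min(axis=1).max(), d.min(axis=0).max())

    print(f"Dice = {dice:.3f}, Hausdorff = {hausdorff:.2f} voxels")
    ```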

  15. Amnioinfusion for meconium-stained liquor in labour.

    PubMed

    Hofmeyr, G J

    2000-01-01

    Amnioinfusion aims to prevent or relieve umbilical cord compression during labour by infusing a solution into the uterine cavity. It is also thought to dilute meconium when present in the amniotic fluid and so reduce the risk of meconium aspiration. However, it may be that the mechanism of effect is that it corrects oligohydramnios (reduced amniotic fluid), for which thick meconium staining is a marker. The objective of this review was to assess the effects of amnioinfusion for meconium-stained liquor on perinatal outcome. The Cochrane Pregnancy and Childbirth Group trials register and the Cochrane Controlled Trials Register were searched. Randomised trials comparing amnioinfusion with no amnioinfusion for women in labour with moderate or thick meconium-staining of the amniotic fluid were included. Eligibility and trial quality were assessed by one reviewer. Ten studies, most involving small numbers of participants, were included. Under standard perinatal surveillance, amnioinfusion was associated with a reduction in the following: heavy meconium staining of the liquor (relative risk 0.03, 95% confidence interval 0.01 to 0.15); variable fetal heart rate deceleration (relative risk 0.47, 95% confidence interval 0.24 to 0.90); and a trend towards reduced caesarean section overall (relative risk 0.83, 95% confidence interval 0.69 to 1.00). No perinatal deaths were reported. Under limited perinatal surveillance, amnioinfusion was associated with a reduction in the following: meconium aspiration syndrome (relative risk 0.24, 95% confidence interval 0.12 to 0.48); neonatal hypoxic ischaemic encephalopathy (relative risk 0.07, 95% confidence interval 0.01 to 0.56) and neonatal ventilation or intensive care unit admission (relative risk 0.56, 95% confidence interval 0.39 to 0.79); there was a trend towards reduced perinatal mortality (relative risk 0.34, 95% confidence interval 0.11 to 1.06). Amnioinfusion is associated with improvements in perinatal outcome, particularly in settings where facilities for perinatal surveillance are limited. The trials reviewed are too small to address the possibility of rare but serious maternal adverse effects of amnioinfusion.

  16. Amnioinfusion for meconium-stained liquor in labour.

    PubMed

    Hofmeyr, G J

    2002-01-01

    Amnioinfusion aims to prevent or relieve umbilical cord compression during labour by infusing a solution into the uterine cavity. It is also thought to dilute meconium when present in the amniotic fluid and so reduce the risk of meconium aspiration. However, it may be that the mechanism of effect is that it corrects oligohydramnios (reduced amniotic fluid), for which thick meconium staining is a marker. The objective of this review was to assess the effects of amnioinfusion for meconium-stained liquor on perinatal outcome. The Cochrane Pregnancy and Childbirth Group trials register (October 2001) and the Cochrane Controlled Trials Register (Issue 3, 2001) were searched. Randomised trials comparing amnioinfusion with no amnioinfusion for women in labour with moderate or thick meconium-staining of the amniotic fluid were included. Eligibility and trial quality were assessed by one reviewer. Twelve studies, most involving small numbers of participants, were included. Under standard perinatal surveillance, amnioinfusion was associated with a reduction in the following: heavy meconium staining of the liquor (relative risk 0.03, 95% confidence interval 0.01 to 0.15); variable fetal heart rate deceleration (relative risk 0.65, 95% confidence interval 0.49 to 0.88); and reduced caesarean section overall (relative risk 0.82, 95% confidence interval 0.69 to 0.97). No perinatal deaths were reported. Under limited perinatal surveillance, amnioinfusion was associated with a reduction in the following: meconium aspiration syndrome (relative risk 0.24, 95% confidence interval 0.12 to 0.48); neonatal hypoxic ischaemic encephalopathy (relative risk 0.07, 95% confidence interval 0.01 to 0.56) and neonatal ventilation or intensive care unit admission (relative risk 0.56, 95% confidence interval 0.39 to 0.79); there was a trend towards reduced perinatal mortality (relative risk 0.34, 95% confidence interval 0.11 to 1.06). Amnioinfusion is associated with improvements in perinatal outcome, particularly in settings where facilities for perinatal surveillance are limited. The trials reviewed are too small to address the possibility of rare but serious maternal adverse effects of amnioinfusion.

  17. Quantitative imaging biomarkers: Effect of sample size and bias on confidence interval coverage.

    PubMed

    Obuchowski, Nancy A; Bullen, Jennifer

    2017-01-01

    Introduction Quantitative imaging biomarkers (QIBs) are being increasingly used in medical practice and clinical trials. An essential first step in the adoption of a quantitative imaging biomarker is the characterization of its technical performance, i.e. precision and bias, through one or more performance studies. Then, given the technical performance, a confidence interval for a new patient's true biomarker value can be constructed. Estimating bias and precision can be problematic because rarely are both estimated in the same study, precision studies are usually quite small, and bias cannot be measured when there is no reference standard. Methods A Monte Carlo simulation study was conducted to assess factors affecting nominal coverage of confidence intervals for a new patient's quantitative imaging biomarker measurement and for change in the quantitative imaging biomarker over time. Factors considered include sample size for estimating bias and precision, effect of fixed and non-proportional bias, clustered data, and absence of a reference standard. Results Technical performance studies of a quantitative imaging biomarker should include at least 35 test-retest subjects to estimate precision and 65 cases to estimate bias. Confidence intervals for a new patient's quantitative imaging biomarker measurement constructed under the no-bias assumption provide nominal coverage as long as the fixed bias is <12%. For confidence intervals of the true change over time, linearity must hold and the slope of the regression of the measurements vs. true values should be between 0.95 and 1.05. The regression slope can be assessed adequately as long as fixed multiples of the measurand can be generated. Even small non-proportional bias greatly reduces confidence interval coverage. Multiple lesions in the same subject can be treated as independent when estimating precision. Conclusion Technical performance studies of quantitative imaging biomarkers require moderate sample sizes in order to provide robust estimates of bias and precision for constructing confidence intervals for new patients. Assumptions of linearity and non-proportional bias should be assessed thoroughly.
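    The interval construction this record studies is simple once precision is estimated. Below is a minimal sketch under the no-bias assumption, with the within-subject standard deviation (wSD) taken from a test-retest study; the function names and example values are illustrative, not the paper's.

```python
# Hedged sketch: 95% CIs for a new patient's true QIB value and for true
# change over time, assuming no bias and a known within-subject SD (wSD).
import math

def ci_new_measurement(y: float, wsd: float, z: float = 1.96):
    """95% CI for the true value underlying a single measurement y."""
    return (y - z * wsd, y + z * wsd)

def ci_change(y1: float, y2: float, wsd: float, z: float = 1.96):
    """95% CI for true change between two measurements on the same subject."""
    half = z * wsd * math.sqrt(2.0)  # two independent measurement errors
    return ((y2 - y1) - half, (y2 - y1) + half)

print(ci_new_measurement(4.2, wsd=0.3))  # e.g. (3.61, 4.79)
print(ci_change(4.2, 3.5, wsd=0.3))
```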

  18. Association between GFR Estimated by Multiple Methods at Dialysis Commencement and Patient Survival

    PubMed Central

    Wong, Muh Geot; Pollock, Carol A.; Cooper, Bruce A.; Branley, Pauline; Collins, John F.; Craig, Jonathan C.; Kesselhut, Joan; Luxton, Grant; Pilmore, Andrew; Harris, David C.

    2014-01-01

    Summary Background and objectives The Initiating Dialysis Early and Late study showed that planned early or late initiation of dialysis, based on the Cockcroft and Gault estimation of GFR, was associated with identical clinical outcomes. This study examined the association of all-cause mortality with estimated GFR at dialysis commencement, which was determined using multiple formulas. Design, setting, participants, & measurements Initiating Dialysis Early and Late trial participants were stratified into tertiles according to the estimated GFR measured by the Cockcroft and Gault, Modification of Diet in Renal Disease, or Chronic Kidney Disease-Epidemiology Collaboration formula at dialysis commencement. Patient survival was determined using a multivariable Cox proportional hazards regression model. Results Only Initiating Dialysis Early and Late trial participants who commenced dialysis were included in this study (n=768). A total of 275 patients died during the study. After adjustment for age, sex, racial origin, body mass index, diabetes, and cardiovascular disease, no significant differences in survival were observed between estimated GFR tertiles determined by Cockcroft and Gault (lowest tertile adjusted hazard ratio, 1.11; 95% confidence interval, 0.82 to 1.49; middle tertile hazard ratio, 1.29; 95% confidence interval, 0.96 to 1.74; highest tertile reference), Modification of Diet in Renal Disease (lowest tertile hazard ratio, 0.88; 95% confidence interval, 0.63 to 1.24; middle tertile hazard ratio, 1.20; 95% confidence interval, 0.90 to 1.61; highest tertile reference), and Chronic Kidney Disease-Epidemiology Collaboration equations (lowest tertile hazard ratio, 0.93; 95% confidence interval, 0.67 to 1.27; middle tertile hazard ratio, 1.15; 95% confidence interval, 0.86 to 1.54; highest tertile reference). Conclusion Estimated GFR at dialysis commencement was not significantly associated with patient survival, regardless of the formula used. However, a clinically important association cannot be excluded, because observed confidence intervals were wide. PMID:24178976

  19. VizieR Online Data Catalog: Fermi/GBM GRB time-resolved spectral catalog (Yu+, 2016)

    NASA Astrophysics Data System (ADS)

    Yu, H.-F.; Preece, R. D.; Greiner, J.; Bhat, P. N.; Bissaldi, E.; Briggs, M. S.; Cleveland, W. H.; Connaughton, V.; Goldstein, A.; von Kienlin, A.; Kouveliotou, C.; Mailyan, B.; Meegan, C. A.; Paciesas, W. S.; Rau, A.; Roberts, O. J.; Veres, P.; Wilson-Hodge, C.; Zhang, B.-B.; van Eerten, H. J.

    2016-01-01

    Time-resolved spectral analysis results of BEST models: for each spectrum, the GRB name using the Fermi GBM trigger designation, spectrum number within the individual burst, start time Tstart and end time Tstop of the time bin, BEST model, best-fit parameters of the BEST model, value of CSTAT per degree of freedom, and 10 keV-1 MeV photon and energy flux are given. Ep evolutionary trends: for each burst, the GRB name, number of spectra with Ep, Spearman's rank correlation coefficients between Ep and photon flux with 90%, 95%, and 99% confidence intervals, between Ep and energy flux with 90%, 95%, and 99% confidence intervals, and between Ep and time with 90%, 95%, and 99% confidence intervals, trends as determined by computer at the 90%, 95%, and 99% confidence levels, and trends as determined by human eyes are given. (2 data files).

  20. Prokinetics for the treatment of functional dyspepsia: Bayesian network meta-analysis.

    PubMed

    Yang, Young Joo; Bang, Chang Seok; Baik, Gwang Ho; Park, Tae Young; Shin, Suk Pyo; Suk, Ki Tae; Kim, Dong Joon

    2017-06-26

    Controversies persist regarding the effect of prokinetics for the treatment of functional dyspepsia (FD). This study aimed to assess the comparative efficacy of prokinetic agents for the treatment of FD. Randomized controlled trials (RCTs) of prokinetics for the treatment of FD were identified from core databases. Symptom response rates were extracted and analyzed using odds ratios (ORs). A Bayesian network meta-analysis was performed using the Markov chain Monte Carlo method in WinBUGS and NetMetaXL. In total, 25 RCTs, which included 4473 patients with FD who were treated with 6 different prokinetics or placebo, were identified and analyzed. Metoclopramide showed the best surface under the cumulative ranking curve (SUCRA) probability (92.5%), followed by trimebutine (74.5%) and mosapride (63.3%). However, the therapeutic efficacy of metoclopramide was not significantly different from that of trimebutine (OR: 1.32, 95% credible interval: 0.27-6.06), mosapride (OR: 1.99, 95% credible interval: 0.87-4.72), or domperidone (OR: 2.04, 95% credible interval: 0.92-4.60). Metoclopramide showed better efficacy than itopride (OR: 2.79, 95% credible interval: 1.29-6.21) and acotiamide (OR: 3.07, 95% credible interval: 1.43-6.75). Domperidone (SUCRA probability 62.9%) showed better efficacy than itopride (OR: 1.37, 95% credible interval: 1.07-1.77) and acotiamide (OR: 1.51, 95% credible interval: 1.04-2.18). Metoclopramide, trimebutine, mosapride, and domperidone showed better efficacy for the treatment of FD than itopride or acotiamide. Considering the adverse events related to metoclopramide or domperidone, the short-term use of these agents or the alternative use of trimebutine or mosapride could be recommended for the symptomatic relief of FD.
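    For readers unfamiliar with SUCRA, it is computed directly from the matrix of rank probabilities produced by the MCMC run. A minimal sketch, assuming a NumPy matrix rankprob[i, j] holding the probability that treatment i has rank j+1 (rank 1 = best); this is illustrative, not the authors' WinBUGS/NetMetaXL code.

```python
# Hedged sketch: surface under the cumulative ranking curve (SUCRA) from
# rank probabilities estimated by MCMC.
import numpy as np

def sucra(rankprob: np.ndarray) -> np.ndarray:
    """SUCRA_i = mean of the cumulative rank probabilities over ranks 1..a-1."""
    a = rankprob.shape[1]
    cum = np.cumsum(rankprob, axis=1)  # P(rank <= j) for each treatment
    return cum[:, : a - 1].sum(axis=1) / (a - 1)

# Toy example with three treatments:
rp = np.array([[0.7, 0.2, 0.1],
               [0.2, 0.5, 0.3],
               [0.1, 0.3, 0.6]])
print(sucra(rp))  # best treatment -> SUCRA closest to 1 (here 0.8, 0.45, 0.25)
```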

  1. Retrodiction for Bayesian multiple-hypothesis/multiple-target tracking in densely cluttered environment

    NASA Astrophysics Data System (ADS)

    Koch, Wolfgang

    1996-05-01

    Sensor data processing in a dense target/dense clutter environment is inevitably confronted with data association conflicts which correspond with the multiple hypothesis character of many modern approaches (MHT: multiple hypothesis tracking). In this paper we analyze the efficiency of retrodictive techniques that generalize standard fixed interval smoothing to MHT applications. 'Delayed estimation' based on retrodiction provides uniquely interpretable and accurate trajectories from ambiguous MHT output if a certain time delay is tolerated. In a Bayesian framework the theoretical background of retrodiction and its intimate relation to Bayesian MHT is sketched. By a simulated example with two closely-spaced targets, relatively low detection probabilities, and rather high false return densities, we demonstrate the benefits of retrodiction and quantitatively discuss the achievable track accuracies and the time delays involved for typical radar parameters.

  2. Bayesian estimation of dynamic matching function for U-V analysis in Japan

    NASA Astrophysics Data System (ADS)

    Kyo, Koki; Noda, Hideo; Kitagawa, Genshiro

    2012-05-01

    In this paper we propose a Bayesian method for analyzing unemployment dynamics. We derive a Beveridge curve for unemployment and vacancy (U-V) analysis from a Bayesian model based on a labor market matching function. In our framework, the efficiency of matching and the elasticities of new hiring with respect to unemployment and vacancy are regarded as time varying parameters. To construct a flexible model and obtain reasonable estimates in an underdetermined estimation problem, we treat the time varying parameters as random variables and introduce smoothness priors. The model is then described in a state space representation, enabling the parameter estimation to be carried out using Kalman filter and fixed interval smoothing. In such a representation, dynamic features of the cyclic unemployment rate and the structural-frictional unemployment rate can be accurately captured.
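    The estimation machinery named here (Kalman filter followed by fixed-interval smoothing) can be illustrated on the simplest possible state-space model. Below is a minimal sketch for a scalar local-level model with a Rauch-Tung-Striebel backward pass; the actual matching-function model is richer, and all variable names and values are ours.

```python
# Hedged sketch: Kalman filter + fixed-interval (RTS) smoother for a
# random-walk state observed with noise (not the authors' full model).
import numpy as np

def kalman_smooth(y, q, r, a0=0.0, p0=1e6):
    n = len(y)
    a_f, p_f = np.zeros(n), np.zeros(n)        # filtered mean / variance
    a_pred, p_pred = np.zeros(n), np.zeros(n)  # one-step predictions
    a, p = a0, p0
    for t in range(n):
        a_pred[t], p_pred[t] = a, p + q         # predict (random-walk state)
        k = p_pred[t] / (p_pred[t] + r)         # Kalman gain
        a = a_pred[t] + k * (y[t] - a_pred[t])  # measurement update
        p = (1 - k) * p_pred[t]
        a_f[t], p_f[t] = a, p
    a_s, p_s = a_f.copy(), p_f.copy()           # RTS backward pass
    for t in range(n - 2, -1, -1):
        g = p_f[t] / p_pred[t + 1]
        a_s[t] = a_f[t] + g * (a_s[t + 1] - a_pred[t + 1])
        p_s[t] = p_f[t] + g**2 * (p_s[t + 1] - p_pred[t + 1])
    return a_s, p_s

# Simulated data: slowly drifting state plus observation noise.
y = np.cumsum(np.random.normal(0, 0.1, 200)) + np.random.normal(0, 0.5, 200)
state, var = kalman_smooth(y, q=0.01, r=0.25)
```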

  3. Pregnancy and birth outcomes in couples with infertility with and without assisted reproductive technology: with an emphasis on US population-based studies.

    PubMed

    Luke, Barbara

    2017-09-01

    Infertility, defined as the inability to conceive within 1 year of unprotected intercourse, affects an estimated 80 million individuals worldwide, or 10-15% of couples of reproductive age. Assisted reproductive technology includes all infertility treatments to achieve conception; in vitro fertilization is the process by which an oocyte is fertilized by semen outside the body; non-in vitro fertilization assisted reproductive technology treatments include ovulation induction, artificial insemination, and intrauterine insemination. Use of assisted reproductive technology has risen steadily in the United States during the past 2 decades for several reasons, including childbearing at older maternal ages and increasing insurance coverage. The number of in vitro fertilization cycles in the United States has nearly doubled from 2000 through 2013, and currently 1.7% of all live births in the United States are the result of this technology. Since the birth of the first child from in vitro fertilization >35 years ago, >5 million babies have been born from in vitro fertilization, half within the past 6 years. It is estimated that 1% of singletons, 19% of twins, and 25% of triplet or higher multiples are due to in vitro fertilization, and 4%, 21%, and 52%, respectively, are due to non-in vitro fertilization assisted reproductive technology. Higher plurality at birth results in a >10-fold increase in the risks for prematurity and low birthweight in twins vs singletons (adjusted odds ratio, 11.84; 95% confidence interval, 10.56-13.27 and adjusted odds ratio, 10.68; 95% confidence interval, 9.45-12.08, respectively). The use of donor oocytes is associated with increased risks for pregnancy-induced hypertension (adjusted odds ratio, 1.43; 95% confidence interval, 1.14-1.78) and prematurity (adjusted odds ratio, 1.43; 95% confidence interval, 1.11-1.83). The use of thawed embryos is associated with higher risks for pregnancy-induced hypertension (adjusted odds ratio, 1.30; 95% confidence interval, 1.08-1.57) and large-for-gestation birthweight (adjusted odds ratio, 1.74; 95% confidence interval, 1.45-2.08). Among singletons, in vitro fertilization is associated with increased risk of severe maternal morbidity compared with fertile deliveries (vaginal: adjusted odds ratio, 2.27; 95% confidence interval, 1.78-2.88; cesarean: adjusted odds ratio, 1.67; 95% confidence interval, 1.40-1.98) and subfertile deliveries (vaginal: adjusted odds ratio, 1.97; 95% confidence interval, 1.30-3.00; cesarean: adjusted odds ratio, 1.75; 95% confidence interval, 1.30-2.35). Among twins, cesarean in vitro fertilization deliveries have significantly greater severe maternal morbidity compared with cesarean fertile deliveries (adjusted odds ratio, 1.48; 95% confidence interval, 1.14-1.93). Subfertility, with or without in vitro fertilization or non-in vitro fertilization infertility treatments to achieve a pregnancy, is associated with increased risks of adverse maternal and perinatal outcomes. The major risk from in vitro fertilization treatments, multiple births (and the associated excess of perinatal morbidity), has been reduced over time, with fewer and better-quality embryos being transferred. Copyright © 2017. Published by Elsevier Inc.

  4. Tinnitus and Auditory Perception After a History of Noise Exposure: Relationship to Auditory Brainstem Response Measures.

    PubMed

    Bramhall, Naomi F; Konrad-Martin, Dawn; McMillan, Garnett P

    2018-01-15

    To determine whether auditory brainstem response (ABR) wave I amplitude is associated with measures of auditory perception in young people with normal distortion product otoacoustic emissions (DPOAEs) and varying levels of noise exposure history. Tinnitus, loudness tolerance, and speech perception ability were measured in 31 young military Veterans and 43 non-Veterans (19 to 35 years of age) with normal pure-tone thresholds and DPOAEs. Speech perception was evaluated in quiet using Northwestern University Auditory Test (NU-6) word lists and in background noise using the words in noise (WIN) test. Loudness discomfort levels were measured using 1-, 3-, 4-, and 6-kHz pulsed pure tones. DPOAEs and ABRs were collected in each participant to assess outer hair cell and auditory nerve function. The probability of reporting tinnitus in this sample increased by a factor of 2.0 per 0.1 µV decrease in ABR wave I amplitude (95% Bayesian confidence interval, 1.1 to 5.0) for males and by a factor of 2.2 (95% confidence interval, 1.0 to 6.4) for females after adjusting for sex and DPOAE levels. Similar results were obtained in an alternate model adjusted for pure-tone thresholds in addition to sex and DPOAE levels. No apparent relationship was found between wave I amplitude and either loudness tolerance or speech perception in quiet or noise. Reduced ABR wave I amplitude was associated with an increased risk of tinnitus, even after adjusting for DPOAEs and sex. In contrast, wave III and V amplitudes had little effect on tinnitus risk. This suggests that changes in peripheral input at the level of the inner hair cell or auditory nerve may lead to increases in central gain that give rise to the perception of tinnitus. Although the extent of synaptopathy in the study participants cannot be measured directly, these findings are consistent with the prediction that tinnitus may be a perceptual consequence of cochlear synaptopathy.

  5. Family environment, hobbies and habits as psychosocial predictors of survival for surgically treated patients with breast cancer.

    PubMed

    Tominaga, K; Andow, J; Koyama, Y; Numao, S; Kurokawa, E; Ojima, M; Nagai, M

    1998-01-01

    Many psychosocial factors have been reported to influence the duration of survival of breast cancer patients. We have studied how family members, hobbies and habits of the patients may alter their psychosocial status. Female patients with surgically treated breast cancer diagnosed between 1986 and 1995 at the Tochigi Cancer Center Hospital, who provided information on the above-mentioned factors, were included. Their subsequent physical status was followed up in the outpatient clinic. The Cox regression model was used to evaluate the relationship between the factors examined and the duration of the patients' survival, adjusting for the patients' age, stage of disease at diagnosis and curability, as judged by the physician in charge after the treatment. The following factors were found to be significantly associated with the survival of surgically treated breast cancer patients: being a widow (hazard ratio 3.29; 95% confidence interval 1.32-8.20), having a hobby (hazard ratio 0.43; 95% confidence interval 0.23-0.82), number of hobbies (hazard ratio 0.64; 95% confidence interval 0.41-1.00), number of female children (hazard ratio 0.64; 95% confidence interval 0.42-0.98), being a smoker (hazard ratio 2.08; 95% confidence interval 1.02-4.26) and alcohol consumption (hazard ratio 0.10; 95% confidence interval 0.01-0.72). These results suggest that psychosocial factors, including the family environment, where patients receive emotional support from their spouse and children, hobbies and the patients' habits, may influence the duration of survival in surgically treated breast cancer patients.

  6. Taichi exercise for self-rated sleep quality in older people: a systematic review and meta-analysis.

    PubMed

    Du, Shizheng; Dong, Jianshu; Zhang, Heng; Jin, Shengji; Xu, Guihua; Liu, Zengxia; Chen, Lixia; Yin, Haiyan; Sun, Zhiling

    2015-01-01

    Self-reported sleep disorders are common in older adults, resulting in serious consequences. Non-pharmacological measures are important complementary interventions, among which Taichi exercise is a popular alternative. Some experiments have been performed; however, the effect of Taichi exercise in improving sleep quality in older people has yet to be validated by systematic review. Using systematic review and meta-analysis, this study aimed to examine the efficacy of Taichi exercise in promoting self-reported sleep quality in older adults. Systematic review and meta-analysis of randomized controlled studies. Four English databases (PubMed, Cochrane Library, Web of Science, and CINAHL) and four Chinese databases (CBMdisc, CNKI, VIP, and Wanfang) were searched through December 2013. Two reviewers independently selected eligible trials and conducted critical appraisal of methodological quality using the quality appraisal criteria for randomized controlled studies recommended by the Cochrane Handbook. A standardized data form was used to extract information. Meta-analysis was performed. Five randomized controlled studies met inclusion criteria. All suffered from some methodological flaws. The results of this study showed that Taichi has a large beneficial effect on sleep quality in older people, as indicated by decreases in the global Pittsburgh Sleep Quality Index score [standardized mean difference=-0.87, 95% confidence interval (-1.25, -0.49)], as well as its sub-domains of subjective sleep quality [standardized mean difference=-0.83, 95% confidence interval (-1.08, -0.57)], sleep latency [standardized mean difference=-0.75, 95% confidence interval (-1.42, -0.07)], sleep duration [standardized mean difference=-0.55, 95% confidence interval (-0.90, -0.21)], habitual sleep efficiency [standardized mean difference=-0.49, 95% confidence interval (-0.74, -0.23)], sleep disturbance [standardized mean difference=-0.44, 95% confidence interval (-0.69, -0.19)], and daytime dysfunction [standardized mean difference=-0.34, 95% confidence interval (-0.59, -0.09)]. Daytime sleepiness improvement was also observed. Weak evidence shows that Taichi exercise has a beneficial effect in improving self-rated sleep quality for older adults, suggesting that Taichi could be an effective alternative and complementary approach to existing therapies for older people with sleep problems. More rigorous experimental studies are required. Copyright © 2014 Elsevier Ltd. All rights reserved.
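    Pooled standardized mean differences of this kind are typically obtained with a random-effects model. Below is a minimal sketch of DerSimonian-Laird pooling with a 95% confidence interval; the effect sizes and variances are made up for illustration and are not the study's data.

```python
# Hedged sketch: DerSimonian-Laird random-effects pooling of effect sizes.
import numpy as np

def random_effects_pool(d, v):
    """Pool effect sizes d with within-study variances v;
    return (pooled effect, 95% CI)."""
    d, v = np.asarray(d, float), np.asarray(v, float)
    w = 1.0 / v                                           # fixed-effect weights
    q = np.sum(w * (d - np.sum(w * d) / w.sum()) ** 2)    # Cochran's Q
    df = len(d) - 1
    c = w.sum() - np.sum(w**2) / w.sum()
    tau2 = max(0.0, (q - df) / c)                         # between-study variance
    w_star = 1.0 / (v + tau2)                             # random-effects weights
    mu = np.sum(w_star * d) / w_star.sum()
    se = np.sqrt(1.0 / w_star.sum())
    return mu, (mu - 1.96 * se, mu + 1.96 * se)

mu, ci = random_effects_pool(d=[-0.9, -0.6, -1.1, -0.7, -1.0],
                             v=[0.05, 0.08, 0.10, 0.06, 0.09])
print(mu, ci)
```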

  7. Previous treatment, sputum-smear nonconversion, and suburban living: The risk factors of multidrug-resistant tuberculosis among Malaysians.

    PubMed

    Mohd Shariff, Noorsuzana; Shah, Shamsul Azhar; Kamaludin, Fadzilah

    2016-03-01

    The number of multidrug-resistant tuberculosis patients is increasing each year in many countries all around the globe, and Malaysia is no exception in facing this burdensome health problem. We aimed to investigate the factors that contribute to the occurrence of multidrug-resistant tuberculosis among Malaysian tuberculosis patients. An unmatched case-control study was conducted among tuberculosis patients who received antituberculosis treatment from April 2013 until April 2014. Cases were those diagnosed as pulmonary tuberculosis patients clinically, radiologically, and/or bacteriologically and confirmed to be resistant to both isoniazid and rifampicin through drug-sensitivity testing. Pulmonary tuberculosis patients who were sensitive to all first-line antituberculosis drugs and were treated during the same time period served as controls. A total of 150 tuberculosis patients were studied, of whom 120 were drug-susceptible controls. Factors found to be significantly associated with the occurrence of multidrug-resistant tuberculosis were being Indian or Chinese (odds ratio 3.17, 95% confidence interval 1.04-9.68; and odds ratio 6.23, 95% confidence interval 2.24-17.35, respectively), being unmarried (odds ratio 2.58, 95% confidence interval 1.09-6.09), living in suburban areas (odds ratio 2.58, 95% confidence interval 1.08-6.19), noncompliance (odds ratio 4.50, 95% confidence interval 1.71-11.82), previous treatment (odds ratio 8.91, 95% confidence interval 3.66-21.67), and positive sputum smears at the 2nd (odds ratio 7.00, 95% confidence interval 2.46-19.89) and 6th months of treatment (odds ratio 17.96, 95% confidence interval 3.51-91.99). Living in suburban areas, positive sputum smears in the 2nd month of treatment, and previous treatment were factors that independently contributed to the occurrence of multidrug-resistant tuberculosis. Those with positive smears in the second month of treatment, a history of previous treatment, or residence in suburban areas were found to have a higher probability of becoming multidrug resistant. The results presented here may facilitate improvements in the screening and detection process of drug-resistant patients in Malaysia in the future. Copyright © 2015 Asian-African Society for Mycobacteriology. Published by Elsevier Ltd. All rights reserved.

  8. Stapled versus handsewn methods for colorectal anastomosis surgery.

    PubMed

    Lustosa, S A; Matos, D; Atallah, A N; Castro, A A

    2001-01-01

    Randomized controlled trials comparing stapled with handsewn colorectal anastomosis have not shown either technique to be superior, perhaps because individual studies lacked statistical power. A systematic review, with pooled analysis of results, might provide a more definitive answer. To compare the safety and effectiveness of stapled and handsewn colorectal anastomosis, the following primary hypothesis was tested: the stapled technique is more effective because it decreases the level of complications. The RCT register of the Cochrane Review Group was searched for any trial or reference to a relevant trial (published, in press, or in progress). All publications were sought through computerised searches of EMBASE, LILACS, MEDLINE, and the Cochrane Controlled Clinical Trials Database, and through letters to industrial companies and authors. There were no limits on language, date, or other criteria. Included were all randomized clinical trials (RCTs) comparing endoluminal circular stapled with handsewn colorectal anastomosis in adult patients submitted electively to colorectal anastomosis. Outcomes assessed were: a) mortality; b) overall anastomotic dehiscence; c) clinical anastomotic dehiscence; d) radiological anastomotic dehiscence; e) stricture; f) anastomotic haemorrhage; g) reoperation; h) wound infection; i) anastomosis duration; j) hospital stay. Data were independently extracted by the two reviewers (SASL, DM) and cross-checked. The methodological quality of each trial was assessed by the same two reviewers. Details of the randomization (generation and concealment), blinding, whether an intention-to-treat analysis was done, and the number of patients lost to follow-up were recorded. The results of each RCT were summarised on an intention-to-treat basis in 2 x 2 tables for each outcome. External validity was defined by characteristics of the participants, the interventions and the outcomes. The RCTs were stratified according to the level of colorectal anastomosis. The Risk Difference method (random effects model) and NNT for dichotomous outcome measures, and weighted mean difference for continuous outcome measures, with the corresponding 95% confidence intervals, were presented in this review. Statistical heterogeneity was evaluated by using funnel plots and chi-square testing. Of the 1233 patients enrolled (in 9 trials), 622 were treated with stapled and 611 with handsewn sutures. The following main results were obtained: a) Mortality: result based on 901 patients; Risk Difference -0.6%, 95% Confidence Interval -2.8% to +1.6%. b) Overall Dehiscence: result based on 1233 patients; Risk Difference 0.2%, 95% Confidence Interval -5.0% to +5.3%. c) Clinical Anastomotic Dehiscence: result based on 1233 patients; Risk Difference -1.4%, 95% Confidence Interval -5.2% to +2.3%. d) Radiological Anastomotic Dehiscence: result based on 825 patients; Risk Difference 1.2%, 95% Confidence Interval -4.8% to +7.3%. e) Stricture: result based on 1042 patients; Risk Difference 4.6%, 95% Confidence Interval 1.2% to 8.1%; number needed to treat 17, 95% Confidence Interval 12 to 31. f) Anastomotic Haemorrhage: result based on 662 patients; Risk Difference 2.7%, 95% Confidence Interval -0.1% to +5.5%. g) Reoperation: result based on 544 patients; Risk Difference 3.9%, 95% Confidence Interval 0.3% to 7.4%. h) Wound Infection: result based on 567 patients; Risk Difference 1.0%, 95% Confidence Interval -2.2% to +4.3%. i) Anastomosis Duration: result based on one study (159 patients); Weighted Mean Difference -7.6 minutes, 95% Confidence Interval -12.9 to -2.2 minutes. j) Hospital Stay: result based on one study (159 patients); Weighted Mean Difference 2.0 days, 95% Confidence Interval -3.27 to +7.2 days. The evidence found was insufficient to demonstrate any superiority of stapled over handsewn techniques in colorectal anastomosis, regardless of the level of anastomosis.

  9. Etiological classifications of transient ischemic attacks: subtype classification by TOAST, CCS and ASCO--a pilot study.

    PubMed

    Amort, Margareth; Fluri, Felix; Weisskopf, Florian; Gensicke, Henrik; Bonati, Leo H; Lyrer, Philippe A; Engelter, Stefan T

    2012-01-01

    In patients with transient ischemic attacks (TIA), etiological classification systems are not well studied. The Trial of ORG 10172 in Acute Stroke Treatment (TOAST), the Causative Classification System (CCS), and the Atherosclerosis Small Vessel Disease Cardiac Source Other Cause (ASCO) classification may be useful to determine the underlying etiology. We aimed to test the feasibility of each of the 3 systems. Furthermore, we studied and compared their prognostic usefulness. In a single-center TIA registry prospectively ascertained over 2 years, we applied the 3 etiological classification systems. We compared the distribution of underlying etiologies and the rates of patients with determined versus undetermined etiology, and studied whether etiological subtyping distinguished TIA patients with versus without subsequent stroke or TIA within 3 months. The 3 systems were applicable in all 248 patients. A determined etiology with the highest level of causality was assigned similarly often with TOAST (35.9%), CCS (34.3%), and ASCO (38.7%). However, the frequency of undetermined causes differed significantly between the classification systems and was lowest for ASCO (TOAST: 46.4%; CCS: 37.5%; ASCO: 18.5%; p < 0.001). In TOAST, CCS, and ASCO, cardioembolism (19.4/14.5/18.5%) was the most common etiology, followed by atherosclerosis (11.7/12.9/14.5%). At 3 months, 33 patients (13.3%, 95% confidence interval 9.3-18.2%) had recurrent cerebral ischemic events. These were strokes in 13 patients (5.2%, 95% confidence interval 2.8-8.8%) and TIAs in 20 patients (8.1%, 95% confidence interval 5.0-12.2%). Patients with a determined etiology (high level of causality) had higher rates of subsequent strokes than those without a determined etiology [TOAST: 6.7% (95% confidence interval 2.5-14.1%) vs. 4.4% (95% confidence interval 1.8-8.9%); CCS: 9.3% (95% confidence interval 4.1-17.5%) vs. 3.1% (95% confidence interval 1.0-7.1%); ASCO: 9.4% (95% confidence interval 4.4-17.1%) vs. 2.6% (95% confidence interval 0.7-6.6%)]. However, this difference was only significant in the ASCO classification (p = 0.036). Using ASCO, there was neither an increase in risk of subsequent stroke among patients with incomplete diagnostic workup (at least one subtype scored 9) compared with patients with adequate workup (no subtype scored 9), nor among patients with multiple causes compared with patients with a single cause. In TIA patients, all etiological classification systems provided a similar distribution of underlying etiologies. The increase in stroke risk in TIA patients with determined versus undetermined etiology was most evident using the ASCO classification. Copyright © 2012 S. Karger AG, Basel.

  10. Likelihood-based confidence intervals for estimating floods with given return periods

    NASA Astrophysics Data System (ADS)

    Martins, Eduardo Sávio P. R.; Clarke, Robin T.

    1993-06-01

    This paper discusses aspects of the calculation of likelihood-based confidence intervals for T-year floods, with particular reference to (1) the two-parameter gamma distribution; (2) the Gumbel distribution; (3) the two-parameter log-normal distribution, and other distributions related to the normal by Box-Cox transformations. Calculation of the confidence limits is straightforward using the Nelder-Mead algorithm with a constraint incorporated, although care is necessary to ensure convergence either of the Nelder-Mead algorithm, or of the Newton-Raphson calculation of maximum-likelihood estimates. Methods are illustrated using records from 18 gauging stations in the basin of the River Itajai-Acu, State of Santa Catarina, southern Brazil. A small and restricted simulation compared likelihood-based confidence limits with those given by use of the central limit theorem; for the same confidence probability, the confidence limits of the simulation were wider than those of the central limit theorem, which failed more frequently to contain the true quantile being estimated. The paper discusses possible applications of likelihood-based confidence intervals in other areas of hydrological analysis.
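    A minimal sketch of the likelihood-based interval for a T-year flood under the Gumbel distribution, one of the cases discussed in this record: the likelihood is profiled over the scale parameter with the location tied to the T-year quantile, and the likelihood-ratio statistic is inverted. It uses generic SciPy optimizers rather than the constrained Nelder-Mead setup of the paper; the data are simulated and the search brackets are heuristic.

```python
# Hedged sketch: profile-likelihood 95% CI for the T-year Gumbel quantile.
import numpy as np
from scipy.optimize import minimize_scalar, brentq
from scipy.stats import gumbel_r, chi2

def gumbel_loglik(x, mu, beta):
    z = (x - mu) / beta
    return np.sum(-np.log(beta) - z - np.exp(-z))

def profile_loglik(q, x, T):
    """Maximize the likelihood over the scale, with the location tied to the
    T-year quantile q via mu = q - beta * y_p."""
    y_p = -np.log(-np.log(1.0 - 1.0 / T))
    obj = lambda lb: -gumbel_loglik(x, q - np.exp(lb) * y_p, np.exp(lb))
    return -minimize_scalar(obj).fun  # optimize log-scale, unconstrained

x = gumbel_r.rvs(loc=100.0, scale=20.0, size=40, random_state=1)  # fake record
T = 100
loc, scale = gumbel_r.fit(x)                       # maximum-likelihood fit
q_hat = gumbel_r.ppf(1 - 1.0 / T, loc, scale)      # T-year flood estimate
cut = gumbel_loglik(x, loc, scale) - 0.5 * chi2.ppf(0.95, 1)
g = lambda q: profile_loglik(q, x, T) - cut
lo = brentq(g, q_hat - 5 * scale, q_hat)           # heuristic brackets
hi = brentq(g, q_hat, q_hat + 10 * scale)
print(q_hat, (lo, hi))
```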

  11. Bayesian stock assessment of Pacific herring in Prince William Sound, Alaska.

    PubMed

    Muradian, Melissa L; Branch, Trevor A; Moffitt, Steven D; Hulson, Peter-John F

    2017-01-01

    The Pacific herring (Clupea pallasii) population in Prince William Sound, Alaska crashed in 1993 and has yet to recover, affecting food web dynamics in the Sound and impacting Alaskan communities. To help researchers design and implement the most effective monitoring, management, and recovery programs, a Bayesian assessment of Prince William Sound herring was developed by reformulating the current model used by the Alaska Department of Fish and Game. The Bayesian model estimated pre-fishery spawning biomass of herring age-3 and older in 2013 to be a median of 19,410 mt (95% credibility interval 12,150-31,740 mt), with a 54% probability that biomass in 2013 was below the management limit used to regulate fisheries in Prince William Sound. The main advantages of the Bayesian model are that it can more objectively weight different datasets and provide estimates of uncertainty for model parameters and outputs, unlike the weighted sum-of-squares used in the original model. In addition, the revised model could be used to manage herring stocks with a decision rule that considers both stock status and the uncertainty in stock status.

  12. Bayesian stock assessment of Pacific herring in Prince William Sound, Alaska

    PubMed Central

    Muradian, Melissa L.; Branch, Trevor A.; Moffitt, Steven D.; Hulson, Peter-John F.

    2017-01-01

    The Pacific herring (Clupea pallasii) population in Prince William Sound, Alaska crashed in 1993 and has yet to recover, affecting food web dynamics in the Sound and impacting Alaskan communities. To help researchers design and implement the most effective monitoring, management, and recovery programs, a Bayesian assessment of Prince William Sound herring was developed by reformulating the current model used by the Alaska Department of Fish and Game. The Bayesian model estimated pre-fishery spawning biomass of herring age-3 and older in 2013 to be a median of 19,410 mt (95% credibility interval 12,150–31,740 mt), with a 54% probability that biomass in 2013 was below the management limit used to regulate fisheries in Prince William Sound. The main advantages of the Bayesian model are that it can more objectively weight different datasets and provide estimates of uncertainty for model parameters and outputs, unlike the weighted sum-of-squares used in the original model. In addition, the revised model could be used to manage herring stocks with a decision rule that considers both stock status and the uncertainty in stock status. PMID:28222151
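    The posterior summaries quoted in these two records (the median, the 95% credibility interval, and the probability of being below a management limit) are direct functionals of the MCMC draws. A minimal sketch with toy lognormal draws and a hypothetical threshold; neither reproduces the assessment model.

```python
# Hedged sketch: posterior summaries from MCMC draws of spawning biomass.
import numpy as np

rng = np.random.default_rng(42)
biomass_draws = rng.lognormal(mean=np.log(19000), sigma=0.25, size=10_000)
limit = 20_000.0  # hypothetical management threshold, metric tons

median = np.median(biomass_draws)
cri = np.percentile(biomass_draws, [2.5, 97.5])  # 95% credibility interval
p_below = np.mean(biomass_draws < limit)         # P(biomass < limit | data)
print(median, cri, p_below)
```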

  13. In Silico Syndrome Prediction for Coronary Artery Disease in Traditional Chinese Medicine

    PubMed Central

    Lu, Peng; Chen, Jianxin; Zhao, Huihui; Gao, Yibo; Luo, Liangtao; Zuo, Xiaohan; Shi, Qi; Yang, Yiping; Yi, Jianqiang; Wang, Wei

    2012-01-01

    Coronary artery disease (CAD) is among the leading causes of death in the world. The differentiation of syndrome (ZHENG) is the criterion for diagnosis and treatment in TCM; therefore, in silico syndrome prediction can improve the performance of treatment. In this paper, we present a Bayesian network framework to construct a high-confidence syndrome predictor based on an optimum symptom subset collected by Support Vector Machine (SVM) feature selection. Syndromes of CAD can be divided into asthenia and sthenia syndromes. Because some patients present with several syndromes, and in keeping with the hierarchical character of syndromes, we first label every case with one of three syndrome types (asthenia, sthenia, or both). On the basis of these three syndrome classes, we apply SVM feature selection to obtain the optimum symptom subset and compare this subset with Markov blanket feature selection using ROC curves. Using this subset, six predictors of CAD syndromes are constructed by the Bayesian network technique. We also compare the Bayesian network with Naïve Bayes, C4.5, logistic regression, and radial basis function (RBF) network classifiers. In conclusion, the Bayesian network method based on the optimum symptoms provides a practical way to predict the six syndromes of CAD in TCM. PMID:22567030

  14. Topics in Bayesian Hierarchical Modeling and its Monte Carlo Computations

    NASA Astrophysics Data System (ADS)

    Tak, Hyung Suk

    The first chapter addresses a Beta-Binomial-Logit model that is a Beta-Binomial conjugate hierarchical model with covariate information incorporated via a logistic regression. Various researchers in the literature have unknowingly used improper posterior distributions or have given incorrect statements about posterior propriety because checking posterior propriety can be challenging due to the complicated functional form of a Beta-Binomial-Logit model. We derive data-dependent necessary and sufficient conditions for posterior propriety within a class of hyper-prior distributions that encompass those used in previous studies. Frequency coverage properties of several hyper-prior distributions are also investigated to see when and whether Bayesian interval estimates of random effects meet their nominal confidence levels. The second chapter deals with a time delay estimation problem in astrophysics. When the gravitational field of an intervening galaxy between a quasar and the Earth is strong enough to split light into two or more images, the time delay is defined as the difference between their travel times. The time delay can be used to constrain cosmological parameters and can be inferred from the time series of brightness data of each image. To estimate the time delay, we construct a Gaussian hierarchical model based on a state-space representation for irregularly observed time series generated by a latent continuous-time Ornstein-Uhlenbeck process. Our Bayesian approach jointly infers model parameters via a Gibbs sampler. We also introduce a profile likelihood of the time delay as an approximation of its marginal posterior distribution. The last chapter specifies a repelling-attracting Metropolis algorithm, a new Markov chain Monte Carlo method to explore multi-modal distributions in a simple and fast manner. This algorithm is essentially a Metropolis-Hastings algorithm with a proposal that consists of a downhill move in density that aims to make local modes repelling, followed by an uphill move in density that aims to make local modes attracting. The downhill move is achieved via a reciprocal Metropolis ratio so that the algorithm prefers downward movement. The uphill move does the opposite using the standard Metropolis ratio which prefers upward movement. This down-up movement in density increases the probability of a proposed move to a different mode.

  15. Differentiating Wheat Genotypes by Bayesian Hierarchical Nonlinear Mixed Modeling of Wheat Root Density.

    PubMed

    Wasson, Anton P; Chiu, Grace S; Zwart, Alexander B; Binns, Timothy R

    2017-01-01

    Ensuring future food security for a growing population while climate change and urban sprawl put pressure on agricultural land will require sustainable intensification of current farming practices. For the crop breeder this means producing higher crop yields with fewer resources due to greater environmental stresses. While easy gains in crop yield have been made mostly "above ground," little progress has been made "below ground"; and yet it is these root system traits that can improve productivity and resistance to drought stress. Wheat pre-breeders use soil coring and core-break counts to phenotype root architecture traits, with data collected on rooting density for hundreds of genotypes in small increments of depth. The measured densities form large datasets and are highly variable even within the same genotype; hence, any rigorous, comprehensive statistical analysis of such complex field data would be technically challenging. Traditionally, most attributes of the field data are therefore discarded in favor of simple numerical summary descriptors which retain much of the high variability exhibited by the raw data. This poses practical challenges: although plant scientists have established that root traits do drive resource capture in crops, traits that are more randomly (rather than genetically) determined are difficult to breed for. In this paper we develop a hierarchical nonlinear mixed modeling approach that utilizes the complete field data for wheat genotypes to fit, under the Bayesian paradigm, an "idealized" relative intensity function for the root distribution over depth. Our approach was used to determine heritability: how much of the variation between field samples was purely random vs. being mechanistically driven by the plant genetics? Based on the genotypic intensity functions, the overall heritability estimate was 0.62 (95% Bayesian confidence interval 0.52 to 0.71). Despite root count profiles that were statistically very noisy, our approach led to denoised profiles which exhibited rigorously discernible phenotypic traits. Profile-specific traits could be representative of a genotype, and thus, used as a quantitative tool to associate phenotypic traits with specific genotypes. This would allow breeders to select for whole root system distributions appropriate for sustainable intensification, and inform policy for mitigating crop yield risk and food insecurity.

  16. The Use of a Bayesian Hierarchy to Develop and Validate a Co-Morbidity Score to Predict Mortality for Linked Primary and Secondary Care Data from the NHS in England

    PubMed Central

    Card, Tim R.; West, Joe

    2016-01-01

    Background We have assessed whether the linkage between routine primary and secondary care records provided an opportunity to develop an improved population based co-morbidity score with the combined information on co-morbidities from both health care settings. Methods We extracted all people older than 20 years at the start of 2005 within the linkage between the Hospital Episodes Statistics, Clinical Practice Research Datalink, and Office for National Statistics death register in England. A random 50% sample was used to identify relevant diagnostic codes using a Bayesian hierarchy to share information between similar Read and ICD 10 code groupings. Internal validation of the score was performed in the remaining 50% and discrimination was assessed using Harrell’s C statistic. Comparisons were made over time, age, and consultation rate with the Charlson and Elixhauser indexes. Results 657,264 people were followed up from the 1st January 2005. 98 groupings of codes were derived from the Bayesian hierarchy, and 37 had an adjusted weighting of greater than zero in the Cox proportional hazards model. 11 of these groupings had a different weighting dependent on whether they were coded from hospital or primary care. The C statistic reduced from 0.88 (95% confidence interval 0.88–0.88) in the first year of follow up, to 0.85 (0.85–0.85) including all 5 years. When we stratified the linked score by consultation rate the association with mortality remained consistent, but there was a significant interaction with age, with improved discrimination and fit in those under 50 years old (C = 0.85, 0.83–0.87) compared to the Charlson (C = 0.79, 0.77–0.82) or Elixhauser index (C = 0.81, 0.79–0.83). Conclusions The use of linked population based primary and secondary care data developed a co-morbidity score that had improved discrimination, particularly in younger age groups, and had a greater effect when adjusting for co-morbidity than existing scores. PMID:27788230

  17. Bayesian alternative to the ISO-GUM's use of the Welch Satterthwaite formula

    NASA Astrophysics Data System (ADS)

    Kacker, Raghu N.

    2006-02-01

    In certain disciplines, uncertainty is traditionally expressed as an interval about an estimate for the value of the measurand. Development of such uncertainty intervals with a stated coverage probability based on the International Organization for Standardization (ISO) Guide to the Expression of Uncertainty in Measurement (GUM) requires a description of the probability distribution for the value of the measurand. The ISO-GUM propagates the estimates and their associated standard uncertainties for various input quantities through a linear approximation of the measurement equation to determine an estimate and its associated standard uncertainty for the value of the measurand. This procedure does not yield a probability distribution for the value of the measurand. The ISO-GUM suggests that under certain conditions motivated by the central limit theorem the distribution for the value of the measurand may be approximated by a scaled-and-shifted t-distribution with effective degrees of freedom obtained from the Welch-Satterthwaite (W-S) formula. The approximate t-distribution may then be used to develop an uncertainty interval with a stated coverage probability for the value of the measurand. We propose an approximate normal distribution based on a Bayesian uncertainty as an alternative to the t-distribution based on the W-S formula. A benefit of the approximate normal distribution based on a Bayesian uncertainty is that it greatly simplifies the expression of uncertainty by eliminating altogether the need for calculating effective degrees of freedom from the W-S formula. In the special case where the measurand is the difference between two means, each evaluated from statistical analyses of independent normally distributed measurements with unknown and possibly unequal variances, the probability distribution for the value of the measurand is known to be a Behrens-Fisher distribution. We compare the performance of the approximate normal distribution based on a Bayesian uncertainty and the approximate t-distribution based on the W-S formula with respect to the Behrens-Fisher distribution. The approximate normal distribution is simpler and better in this case. A thorough investigation of the relative performance of the two approximate distributions would require comparison for a range of measurement equations by numerical methods.
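    For reference, the W-S formula discussed throughout this record computes the effective degrees of freedom from the combined standard uncertainty and the input uncertainties; this is the standard ISO-GUM form.

```latex
% Welch-Satterthwaite effective degrees of freedom (standard ISO-GUM form).
% u_c(y): combined standard uncertainty; c_i: sensitivity coefficients;
% u(x_i): input standard uncertainties, each with nu_i degrees of freedom.
\nu_{\mathrm{eff}}
  = \frac{u_c^{4}(y)}{\sum_{i=1}^{N} \dfrac{c_i^{4}\, u^{4}(x_i)}{\nu_i}},
\qquad
u_c^{2}(y) = \sum_{i=1}^{N} c_i^{2}\, u^{2}(x_i).
```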

  18. Exact calculation of distributions on integers, with application to sequence alignment.

    PubMed

    Newberg, Lee A; Lawrence, Charles E

    2009-01-01

    Computational biology is replete with high-dimensional discrete prediction and inference problems. Dynamic programming recursions can be applied to several of the most important of these, including sequence alignment, RNA secondary-structure prediction, phylogenetic inference, and motif finding. In these problems, attention is frequently focused on some scalar quantity of interest, a score, such as an alignment score or the free energy of an RNA secondary structure. In many cases, score is naturally defined on integers, such as a count of the number of pairing differences between two sequence alignments, or else an integer score has been adopted for computational reasons, such as in the test of significance of motif scores. The probability distribution of the score under an appropriate probabilistic model is of interest, such as in tests of significance of motif scores, or in calculation of Bayesian confidence limits around an alignment. Here we present three algorithms for calculating the exact distribution of a score of this type; then, in the context of pairwise local sequence alignments, we apply the approach so as to find the alignment score distribution and Bayesian confidence limits.
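    The core idea, exact distributions on integers computed by dynamic programming, is easy to show in the simplest setting: when the total score is a sum of independent integer-valued per-position scores, the exact distribution is an iterated convolution. A minimal sketch follows; the paper's algorithms handle the harder, dependent alignment case.

```python
# Hedged sketch: exact PMF of an integer score that is a sum of independent
# per-position integer scores, built by repeated convolution (a DP recursion).
import numpy as np

def exact_score_distribution(per_position):
    """per_position: list of 1-D arrays, each summing to 1, where entry k is
    the probability of integer score k at that position. Returns the exact
    probability mass function of the total score."""
    dist = np.array([1.0])                         # P(total = 0) before any position
    for p in per_position:
        dist = np.convolve(dist, np.asarray(p))    # DP step: add one position
    return dist

# Toy example: three positions, each scoring 0, 1, or 2.
pmf = exact_score_distribution([[0.5, 0.3, 0.2]] * 3)
print(pmf.sum(), pmf)                              # sums to 1.0; support {0,...,6}
```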

  19. Robust Bayesian clustering.

    PubMed

    Archambeau, Cédric; Verleysen, Michel

    2007-01-01

    A new variational Bayesian learning algorithm for Student-t mixture models is introduced. This algorithm leads to (i) robust density estimation, (ii) robust clustering and (iii) robust automatic model selection. Gaussian mixture models are learning machines which are based on a divide-and-conquer approach. They are commonly used for density estimation and clustering tasks, but are sensitive to outliers. The Student-t distribution has heavier tails than the Gaussian distribution and is therefore less sensitive to any departure of the empirical distribution from Gaussianity. As a consequence, the Student-t distribution is suitable for constructing robust mixture models. In this work, we formalize the Bayesian Student-t mixture model as a latent variable model in a different way from Svensén and Bishop [Svensén, M., & Bishop, C. M. (2005). Robust Bayesian mixture modelling. Neurocomputing, 64, 235-252]. The main difference resides in the fact that it is not necessary to assume a factorized approximation of the posterior distribution on the latent indicator variables and the latent scale variables in order to obtain a tractable solution. Not neglecting the correlations between these unobserved random variables leads to a Bayesian model having an increased robustness. Furthermore, it is expected that the lower bound on the log-evidence is tighter. Based on this bound, the model complexity, i.e. the number of components in the mixture, can be inferred with a higher confidence.

  20. A Bayesian approach to meta-analysis of plant pathology studies.

    PubMed

    Mila, A L; Ngugi, H K

    2011-01-01

    Bayesian statistical methods are used for meta-analysis in many disciplines, including medicine, molecular biology, and engineering, but have not yet been applied for quantitative synthesis of plant pathology studies. In this paper, we illustrate the key concepts of Bayesian statistics and outline the differences between Bayesian and classical (frequentist) methods in the way parameters describing population attributes are considered. We then describe a Bayesian approach to meta-analysis and present a plant pathological example based on studies evaluating the efficacy of plant protection products that induce systemic acquired resistance for the management of fire blight of apple. In a simple random-effects model assuming a normal distribution of effect sizes and no prior information (i.e., a noninformative prior), the results of the Bayesian meta-analysis are similar to those obtained with classical methods. Implementing the same model with a Student's t distribution and a noninformative prior for the effect sizes, instead of a normal distribution, yields similar results for all but acibenzolar-S-methyl (Actigard) which was evaluated only in seven studies in this example. Whereas both the classical (P = 0.28) and the Bayesian analysis with a noninformative prior (95% credibility interval [CRI] for the log response ratio: -0.63 to 0.08) indicate a nonsignificant effect for Actigard, specifying a t distribution resulted in a significant, albeit variable, effect for this product (CRI: -0.73 to -0.10). These results confirm the sensitivity of the analytical outcome (i.e., the posterior distribution) to the choice of prior in Bayesian meta-analyses involving a limited number of studies. We review some pertinent literature on more advanced topics, including modeling of among-study heterogeneity, publication bias, analyses involving a limited number of studies, and methods for dealing with missing data, and show how these issues can be approached in a Bayesian framework. Bayesian meta-analysis can readily include information not easily incorporated in classical methods, and allow for a full evaluation of competing models. Given the power and flexibility of Bayesian methods, we expect them to become widely adopted for meta-analysis of plant pathology studies.
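    The simple random-effects model described in this record (normal likelihood for observed log response ratios, noninformative prior on the overall effect) can be fit with a short Gibbs sampler. A minimal sketch returning the posterior mean and a 95% credibility interval; the prior choices and all numbers are illustrative assumptions, not the paper's exact specification.

```python
# Hedged sketch: Gibbs sampler for a normal-normal random-effects
# meta-analysis with known within-study variances s2_i.
import numpy as np

rng = np.random.default_rng(0)

def gibbs_meta(y, s2, n_iter=20_000, burn=5_000, a=0.001, b=0.001):
    y, s2 = np.asarray(y, float), np.asarray(s2, float)
    k = len(y)
    mu, tau2 = y.mean(), y.var() + 1e-6
    draws = []
    for it in range(n_iter):
        prec = 1.0 / s2 + 1.0 / tau2                        # theta_i | rest
        m = (y / s2 + mu / tau2) / prec
        theta = rng.normal(m, np.sqrt(1.0 / prec))
        mu = rng.normal(theta.mean(), np.sqrt(tau2 / k))    # mu | rest (flat prior)
        tau2 = 1.0 / rng.gamma(a + k / 2.0,                 # tau2 | rest (IG prior)
                               1.0 / (b + 0.5 * np.sum((theta - mu) ** 2)))
        if it >= burn:
            draws.append(mu)
    lo, hi = np.percentile(draws, [2.5, 97.5])
    return np.mean(draws), (lo, hi)                         # 95% credibility interval

mu_hat, cri = gibbs_meta(y=[-0.42, -0.30, -0.55, -0.10, -0.38],
                         s2=[0.02, 0.04, 0.03, 0.05, 0.02])
print(mu_hat, cri)
```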

  1. New insights into faster computation of uncertainties

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Atreyee

    2012-11-01

    Heavy computation power, lengthy simulations, and an exhaustive number of model runs—often these seem like the only statistical tools that scientists have at their disposal when computing uncertainties associated with predictions, particularly in cases of environmental processes such as groundwater movement. However, calculation of uncertainties need not be as lengthy, a new study shows. Comparing two approaches—the classical Bayesian “credible interval” and a less commonly used regression-based “confidence interval” method—Lu et al. show that for many practical purposes both methods provide similar estimates of uncertainties. The advantage of the regression method is that it demands 10-1000 model runs, whereas the classical Bayesian approach requires 10,000 to millions of model runs.

  2. Bayesian analysis of factors associated with fibromyalgia syndrome subjects

    NASA Astrophysics Data System (ADS)

    Jayawardana, Veroni; Mondal, Sumona; Russek, Leslie

    2015-01-01

    Factors contributing to movement-related fear were assessed by Russek et al. (2014) for subjects with fibromyalgia (FM), based on data collected through a national internet survey of community-based individuals. The study focused on the following variables: the Activities-Specific Balance Confidence scale (ABC), the Primary Care Post-Traumatic Stress Disorder screen (PC-PTSD), the Tampa Scale of Kinesiophobia (TSK), a Joint Hypermobility Syndrome screen (JHS), the Vertigo Symptom Scale (VSS-SF), Obsessive-Compulsive Personality Disorder (OCPD), pain, work status, and physical activity, with the dependent variable from the "Revised Fibromyalgia Impact Questionnaire" (FIQR). The study presented in this paper revisits the same data with a Bayesian analysis in which appropriate priors were introduced for the variables selected in Russek's paper.

  3. Uncertainty analysis for effluent trading planning using a Bayesian estimation-based simulation-optimization modeling approach.

    PubMed

    Zhang, J L; Li, Y P; Huang, G H; Baetz, B W; Liu, J

    2017-06-01

    In this study, a Bayesian estimation-based simulation-optimization modeling approach (BESMA) is developed for identifying effluent trading strategies. BESMA incorporates nutrient fate modeling with the soil and water assessment tool (SWAT), Bayesian estimation, and probabilistic-possibilistic interval programming with fuzzy random coefficients (PPI-FRC) within a general framework. Based on the water quality protocols provided by SWAT, posterior distributions of parameters can be analyzed through Bayesian estimation, and the stochastic characteristics of nutrient loading can be investigated, providing the inputs for decision making. PPI-FRC can address multiple uncertainties in the form of intervals with fuzzy random boundaries, and the associated system risk, through incorporating the concepts of possibility and necessity measures. The possibility and necessity measures are suitable for optimistic and pessimistic decision making, respectively. BESMA is applied to a real case of effluent trading planning in the Xiangxihe watershed, China. A number of decision alternatives can be obtained under different trading ratios and treatment rates. The results can not only facilitate identification of optimal effluent-trading schemes, but also provide insight into the effects of trading ratio and treatment rate on decision making. The results also reveal that the decision maker's preference towards risk affects the choice of trading scheme as well as the system benefit. Compared with conventional optimization methods, BESMA is advantageous in (i) dealing with multiple uncertainties associated with randomness and fuzziness in effluent-trading planning within a multi-source, multi-reach and multi-period context; (ii) reflecting uncertainties existing in nutrient transport behaviors to improve the accuracy in water quality prediction; and (iii) supporting pessimistic and optimistic decision making for effluent trading as well as promoting diversity of decision alternatives. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Confidence Leak in Perceptual Decision Making.

    PubMed

    Rahnev, Dobromir; Koizumi, Ai; McCurdy, Li Yan; D'Esposito, Mark; Lau, Hakwan

    2015-11-01

    People live in a continuous environment in which the visual scene changes on a slow timescale. It has been shown that to exploit such environmental stability, the brain creates a continuity field in which objects seen seconds ago influence the perception of current objects. What is unknown is whether a similar mechanism exists at the level of metacognitive representations. In three experiments, we demonstrated a robust intertask confidence leak; that is, confidence in one's response on a given task or trial influenced confidence on the following task or trial. This confidence leak could not be explained by response priming or attentional fluctuations. Better ability to modulate confidence leak predicted higher capacity for metacognition as well as greater gray matter volume in the prefrontal cortex. A model based on normative principles from Bayesian inference explained the results by postulating that observers subjectively estimate the perceptual signal strength in a stable environment. These results point to the existence of a novel metacognitive mechanism mediated by regions in the prefrontal cortex. © The Author(s) 2015.

  5. Estimating equivalence with quantile regression

    USGS Publications Warehouse

    Cade, B.S.

    2011-01-01

    Equivalence testing and corresponding confidence interval estimates are used to provide more enlightened statistical statements about parameter estimates by relating them to intervals of effect sizes deemed to be of scientific or practical importance rather than just to an effect size of zero. Equivalence tests and confidence interval estimates are based on a null hypothesis that a parameter estimate is either outside (inequivalence hypothesis) or inside (equivalence hypothesis) an equivalence region, depending on the question of interest and assignment of risk. The former approach, often referred to as bioequivalence testing, is often used in regulatory settings because it reverses the burden of proof compared to a standard test of significance, following a precautionary principle for environmental protection. Unfortunately, many applications of equivalence testing focus on establishing average equivalence by estimating differences in means of distributions that do not have homogeneous variances. I discuss how to compare equivalence across quantiles of distributions using confidence intervals on quantile regression estimates that detect differences in heterogeneous distributions missed by focusing on means. I used one-tailed confidence intervals based on inequivalence hypotheses in a two-group treatment-control design for estimating bioequivalence of arsenic concentrations in soils at an old ammunition testing site and bioequivalence of vegetation biomass at a reclaimed mining site. Two-tailed confidence intervals based both on inequivalence and equivalence hypotheses were used to examine quantile equivalence for negligible trends over time for a continuous exponential model of amphibian abundance. © 2011 by the Ecological Society of America.
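    A sketch of the general recipe, using statsmodels' QuantReg with simulated data and an invented equivalence margin; the datasets and margins in the study itself differ:

    ```python
    # Hedged sketch: group comparison at an upper quantile via quantile regression,
    # with a one-sided bound checked against an invented equivalence margin.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 200
    group = np.repeat([0.0, 1.0], n)                   # 0 = reference site, 1 = treated site
    y = np.concatenate([rng.lognormal(1.0, 0.6, n),    # skewed, heteroscedastic responses
                        rng.lognormal(1.1, 0.8, n)])

    fit = sm.QuantReg(y, sm.add_constant(group)).fit(q=0.9)  # contrast at the 90th percentile

    # A 90% two-sided interval yields the 95% one-sided upper bound used in
    # inequivalence-style tests; compare it to a pre-specified margin (here +2.0).
    upper = fit.conf_int(alpha=0.10)[1, 1]
    print(f"difference at q=0.9: {fit.params[1]:.3f}; one-sided 95% upper bound: {upper:.3f}")
    print("equivalent within +2.0 units at this quantile:", bool(upper < 2.0))
    ```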

  6. Fasting glucose levels, incident diabetes, subclinical atherosclerosis and cardiovascular events in apparently healthy adults: A 12-year longitudinal study.

    PubMed

    Sitnik, Debora; Santos, Itamar S; Goulart, Alessandra C; Staniak, Henrique L; Manson, JoAnn E; Lotufo, Paulo A; Bensenor, Isabela M

    2016-11-01

    We aimed to study the association between fasting plasma glucose, diabetes incidence, and cardiovascular burden after 10-12 years. We evaluated the incidence of diabetes and cardiovascular events, carotid intima-media thickness, and coronary artery calcium scores at the ELSA-Brasil (Brazilian Longitudinal Study of Adult Health) baseline (2008-2010) in 1536 adults without diabetes in 1998. We used regression models to estimate associations with carotid intima-media thickness (in mm), coronary artery calcium scores (in Agatston points), and cardiovascular events according to fasting plasma glucose in 1998. The adjusted diabetes incidence rate was 9.8/1000 person-years (95% confidence interval: 7.7-13.6/1000 person-years). Incident diabetes was positively associated with higher fasting plasma glucose. Fasting plasma glucose levels 110-125 mg/dL were associated with higher carotid intima-media thickness (β = 0.028; 95% confidence interval: 0.003-0.053). Excluding those with incident diabetes, there was a borderline association between higher carotid intima-media thickness and fasting plasma glucose 110-125 mg/dL (β = 0.030; 95% confidence interval: -0.005 to 0.065). Incident diabetes was associated with higher carotid intima-media thickness (β = 0.034; 95% confidence interval: 0.015-0.053), coronary artery calcium scores ⩾400 (odds ratio = 2.84; 95% confidence interval: 1.17-6.91) and the combined outcome of a coronary artery calcium score ⩾400 or an incident cardiovascular event (odds ratio = 3.50; 95% confidence interval: 1.60-7.65). In conclusion, fasting plasma glucose in 1998 and incident diabetes were associated with higher cardiovascular burden. © The Author(s) 2016.

  7. Prevalence of infections among residents of Residential Care Homes for the Elderly in Hong Kong.

    PubMed

    Choy, C Sm; Chen, H; Yau, C Sw; Hsu, E K; Chik, N Y; Wong, A Ty

    2016-08-01

    A point prevalence study was conducted to examine the epidemiology of common infections among residents in Residential Care Homes for the Elderly in Hong Kong and their associated factors. Residential Care Homes for the Elderly in Hong Kong were selected by stratified single-stage cluster random sampling. All residents aged 65 years or above from the recruited homes were surveyed. Infections were identified using standardised definitions. Demographic and health information, including medical history, immunisation record, antibiotic use, and activities of daily living (as measured by the Barthel Index), was collected by a survey team to determine any associated factors. Data were collected from 3857 residents in 46 Residential Care Homes for the Elderly from February to May 2014. A total of 105 residents had at least one type of infection based on the survey definition. The overall prevalence of all infections was 2.7% (95% confidence interval, 2.2%-3.4%). The three most common infections were of the respiratory tract (1.3%; 95% confidence interval, 0.9%-1.9%), skin and soft tissue (0.7%; 95% confidence interval, 0.5%-1.0%), and urinary tract (0.5%; 95% confidence interval, 0.3%-0.9%). Total dependence in activities of daily living, as indicated by a low Barthel Index score of 0 to 20 (odds ratio=3.0; 95% confidence interval, 1.4-6.2), and presence of a wound or stoma (odds ratio=2.7; 95% confidence interval, 1.4-4.9) were significantly associated with presence of infection. This survey provides information about infections among residents in Residential Care Homes for the Elderly in the territory. Local data enable us to understand the burden of infections and formulate targeted measures for prevention.

  8. Influence of Objective Three-Dimensional Measures and Movement Images on Surgeon Treatment Planning for Lip Revision Surgery

    PubMed Central

    Trotman, Carroll-Ann; Phillips, Ceib; Faraway, Julian J.; Hartman, Terry; van Aalst, John A.

    2013-01-01

    Objective To determine whether a systematic evaluation of facial soft tissues of patients with cleft lip and palate, using facial video images and objective three-dimensional measurements of movement, changes surgeons' treatment plans for lip revision surgery. Design Prospective longitudinal study. Setting The University of North Carolina School of Dentistry. Patients, Participants A group of patients with repaired cleft lip and palate (n = 21), a noncleft control group (n = 37), and surgeons experienced in cleft care. Interventions Lip revision. Main Outcome Measures (1) facial photographic images; (2) facial video images during animations; (3) objective three-dimensional measurements of upper lip movement based on z scores; and (4) objective dynamic and visual three-dimensional measurement of facial soft tissue movement. Results With the use of the video images plus objective three-dimensional measures, changes were made to the problem list of the surgical treatment plan for 86% of the patients (95% confidence interval, 0.64 to 0.97) and the surgical goals for 71% of the patients (95% confidence interval, 0.48 to 0.89). The surgeon group varied in the percentage of patients for whom the problem list was modified, ranging from 24% (95% confidence interval, 8% to 47%) to 48% (95% confidence interval, 26% to 70%) of patients, and the percentage for whom the surgical goals were modified, ranging from 14% (95% confidence interval, 3% to 36%) to 48% (95% confidence interval, 26% to 70%) of patients. Conclusions For all surgeons, the additional assessment components of the systematic evaluation resulted in a change in clinical decision making for some patients. PMID:23855676

  9. Lower hospital mortality and complications after pediatric hematopoietic stem cell transplantation.

    PubMed

    Bratton, Susan L; Van Duker, Heather; Statler, Kimberly D; Pulsipher, Michael A; McArthur, Jennifer; Keenan, Heather T

    2008-03-01

    To assess protective and risk factors for mortality among pediatric patients during initial care after hematopoietic stem cell transplantation (HSCT) and to evaluate changes in hospital mortality. Retrospective cohort using the 1997, 2000, and 2003 Kids Inpatient Database, a probabilistic sample of children (<19 yrs) hospitalized in the United States with a procedure code for HSCT; there were no interventions. Hospital mortality significantly decreased from 12% in 1997 to 6% in 2003. Source of stem cells changed with increased use of cord blood. Rates of sepsis, graft versus host disease, and mechanical ventilation significantly decreased. Compared with autologous HSCT, patients who received an allogeneic HSCT without T-cell depletion were more likely to die (adjusted odds ratio, 2.4; 95% confidence interval, 1.5, 3.9), while children who received cord blood HSCT were at the greatest risk of hospital death (adjusted odds ratio, 4.8; 95% confidence interval, 2.6, 9.1). Mechanical ventilation (adjusted odds ratio, 26.32; 95% confidence interval, 16.3-42.2), dialysis (adjusted odds ratio, 12.9; 95% confidence interval, 4.7-35.4), and sepsis (adjusted odds ratio, 3.9; 95% confidence interval, 2.5-6.1) were all independently associated with death, while care in 2003 was associated with decreased risk (adjusted odds ratio, 0.4; 95% confidence interval, 0.2-0.7) of death. Hospital mortality after HSCT in children decreased over time as did complications including need for mechanical ventilation, graft versus host disease, and sepsis. Prevention of complications is essential as the need for invasive support continues to be associated with high mortality risk.

  10. High prevalence of refractive errors in a rural population: 'Nooravaran Salamat' Mobile Eye Clinic experience.

    PubMed

    Hashemi, Hassan; Rezvan, Farhad; Ostadimoghaddam, Hadi; Abdollahi, Majid; Hashemi, Maryam; Khabazkhoob, Mehdi

    2013-01-01

    The prevalence and determinants of myopia and hyperopia were determined in a rural population of Iran. Population-based cross-sectional study. Using random cluster sampling, 13 of the 83 villages of Khaf County in the northeast of Iran were selected. Data from 2001 people over the age of 15 years were analysed. Visual acuity measurement, non-cycloplegic refraction and eye examinations were done at the Mobile Eye Clinic. Outcomes were the prevalence of myopia and hyperopia, based on spherical equivalents worse than -0.5 dioptre and +0.5 dioptre, respectively. The prevalence of myopia, hyperopia and anisometropia in the total study sample was 28% (95% confidence interval: 25.9-30.2), 19.2% (95% confidence interval: 17.3-21.1), and 11.5% (95% confidence interval: 10.0-13.1), respectively. In the over 40 population, the prevalence of myopia and hyperopia was 32.5% (95% confidence interval: 28.9-36.1) and 27.9% (95% confidence interval: 24.5-31.3), respectively. In the multiple regression model for this group, myopia strongly correlated with cataract (odds ratio = 1.98 and 95% confidence interval: 1.33-2.93), and hyperopia only correlated with age (P < 0.001). The prevalence of high myopia and high hyperopia was 1.5% and 4.6%. In the multiple regression model, anisometropia significantly correlated with age (odds ratio = 1.04) and cataract (odds ratio = 5.2) (P < 0.001). The prevalence of myopia and anisometropia was higher than that in previous studies in urban populations of Iran, especially in the elderly. Cataract was the only variable that correlated with myopia and anisometropia. © 2013 The Authors. Clinical and Experimental Ophthalmology © 2013 Royal Australian and New Zealand College of Ophthalmologists.

  11. The Association Between Maternal Age and Cerebral Palsy Risk Factors.

    PubMed

    Schneider, Rilla E; Ng, Pamela; Zhang, Xun; Andersen, John; Buckley, David; Fehlings, Darcy; Kirton, Adam; Wood, Ellen; van Rensburg, Esias; Shevell, Michael I; Oskoui, Maryam

    2018-05-01

    Advanced maternal age is associated with higher frequencies of antenatal and perinatal conditions, as well as a higher risk of cerebral palsy in offspring. We explore the association between maternal age and specific cerebral palsy risk factors. Data were extracted from the Canadian Cerebral Palsy Registry. Maternal age was categorized as ≥35 years of age and less than 20 years of age at the time of birth. Chi-square and multivariate logistic regressions were performed to calculate odds ratios and their 95% confidence intervals. The final sample consisted of 1391 children with cerebral palsy, with 19% of children having mothers aged 35 or older and 4% of children having mothers below the age of 20. Univariate analyses showed that mothers aged 35 or older were more likely to have gestational diabetes (odds ratio 1.9, 95% confidence interval 1.3 to 2.8), to have a history of miscarriage (odds ratio 1.8, 95% confidence interval 1.3 to 2.4), to have undergone fertility treatments (odds ratio 2.4, 95% confidence interval 1.5 to 3.9), and to have delivered by Caesarean section (odds ratio 1.6, 95% confidence interval 1.2 to 2.2). These findings were supported by multivariate analyses. Children with mothers below the age of 20 were more likely to have a congenital malformation (odds ratio 2.4, 95% confidence interval 1.4 to 4.2), which is also supported by multivariate analysis. The risk factor profiles of children with cerebral palsy vary by maternal age. Future studies are warranted to further our understanding of the compound causal pathways leading to cerebral palsy and the observed greater prevalence of cerebral palsy with increasing maternal age. Copyright © 2018 Elsevier Inc. All rights reserved.

  12. Priorities for treatment, care and information if faced with serious illness: a comparative population-based survey in seven European countries.

    PubMed

    Higginson, Irene J; Gomes, Barbara; Calanzani, Natalia; Gao, Wei; Bausewein, Claudia; Daveson, Barbara A; Deliens, Luc; Ferreira, Pedro L; Toscani, Franco; Gysels, Marjolein; Ceulemans, Lucas; Simon, Steffen T; Cohen, Joachim; Harding, Richard

    2014-02-01

    Health-care costs are growing, yet there are few population-based data about people's priorities for end-of-life care to guide service development and aid discussions. We examined variations in people's priorities for treatment, care and information across seven European countries. Telephone survey of a random sample of households; we asked respondents their priorities if 'faced with a serious illness, like cancer, with limited time to live' and used multivariable logistic regressions to identify associated factors. Members of the general public aged ≥ 16 years residing in England, Flanders, Germany, Italy, the Netherlands, Portugal and Spain. In total, 9344 individuals were interviewed. Most people chose 'improve quality of life for the time they had left', ranging from 57% (95% confidence interval: 55%-60%, Italy) to 81% (95% confidence interval: 79%-83%, Spain). Only 2% (95% confidence interval: 1%-3%, England) to 6% (95% confidence interval: 4%-7%, Flanders) said extending life was most important, and 15% (95% confidence interval: 13%-17%, Spain) to 40% (95% confidence interval: 37%-43%, Italy) said quality and extension were equally important. Prioritising quality of life was associated with higher education in all countries (odds ratio = 1.3 (Flanders) to 7.9 (Italy)), experience of caregiving or bereavement (England, Germany, Portugal), prioritising pain/symptom control over having a positive attitude and preferring death in a hospice/palliative care unit. Those prioritising extending life had the highest home death preference of all groups. Health status did not affect priorities. Across all countries, extending life was prioritised by a minority, regardless of health status. Treatment and care need to be reoriented, with patient education and palliative care becoming mainstream for serious conditions such as cancer.

  13. Air pollution attributable postneonatal infant mortality in U.S. metropolitan areas: a risk assessment study

    PubMed Central

    Kaiser, Reinhard; Romieu, Isabelle; Medina, Sylvia; Schwartz, Joel; Krzyzanowski, Michal; Künzli, Nino

    2004-01-01

    Background The impact of outdoor air pollution on infant mortality has not been quantified. Methods Based on exposure-response functions from a U.S. cohort study, we assessed the attributable risk of postneonatal infant mortality in 23 U.S. metropolitan areas related to particulate matter <10 μm in diameter (PM10) as a surrogate of total air pollution. Results The estimated proportion of all-cause mortality, sudden infant death syndrome (normal birth weight infants only) and respiratory disease mortality (normal birth weight) attributable to PM10 above a chosen reference value of 12.0 μg/m3 was 6% (95% confidence interval 3–11%), 16% (95% confidence interval 9–23%) and 24% (95% confidence interval 7–44%), respectively. The expected number of infant deaths per year in the selected areas was 106 (95% confidence interval 53–185), 79 (95% confidence interval 46–111) and 15 (95% confidence interval 5–27), respectively. Approximately 75% of cases were from areas where the current levels are at or below the new U.S. PM2.5 standard of 15 μg/m3 (equivalent to 25 μg/m3 PM10). In a country where infant mortality rates and air pollution levels are relatively low, ambient air pollution as measured by particulate matter contributes to a substantial fraction of infant deaths, especially those due to sudden infant death syndrome and respiratory disease. Even if all counties complied with the new PM2.5 standard, the majority of the estimated burden would remain. Conclusion Given the inherent limitations of risk assessments, further studies are needed to support and quantify the relationship between infant mortality and air pollution. PMID:15128459
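    The attributable-fraction arithmetic behind such estimates can be illustrated in a few lines; the relative risk, exposure levels, and death count below are invented, not the study's values:

    ```python
    # Hedged arithmetic sketch of an attributable-fraction calculation; the relative
    # risk, exposure levels, and death count are invented, not the study's values.
    rr10 = 1.04          # hypothetical relative risk per 10 ug/m3 PM10
    e, e0 = 28.0, 12.0   # hypothetical mean exposure and reference level (ug/m3)
    deaths = 1500        # hypothetical observed postneonatal deaths

    rr_e = rr10 ** ((e - e0) / 10.0)   # risk at exposure e relative to e0
    af = (rr_e - 1.0) / rr_e           # attributable fraction
    print(f"attributable fraction: {af:.3f}; attributable deaths: {af * deaths:.0f}")
    ```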

  14. Confidence intervals for population allele frequencies: the general case of sampling from a finite diploid population of any size.

    PubMed

    Fung, Tak; Keenan, Kevin

    2014-01-01

    The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥95%), a sample size of >30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.
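    A hedged sketch of one way (not necessarily the paper's construction) to obtain such an interval: invert an exact hypergeometric test, since sampling allele copies from a finite diploid population is sampling without replacement.

    ```python
    # Hedged sketch (not necessarily the paper's construction): a confidence set for
    # a population allele frequency by inverting an exact hypergeometric test.
    from scipy.stats import hypergeom

    N, n = 500, 40             # hypothetical population and sample sizes (individuals)
    M, m = 2 * N, 2 * n        # allele copies in the population and in the sample
    k = 13                     # observed copies of the allele in the sample
    alpha = 0.05

    plausible = [K for K in range(M + 1)
                 if hypergeom.cdf(k, M, K, m) > alpha / 2        # K not too large
                 and hypergeom.sf(k - 1, M, K, m) > alpha / 2]   # K not too small
    lo, hi = min(plausible) / M, max(plausible) / M
    print(f"~95% CI for the population allele frequency: [{lo:.3f}, {hi:.3f}]")
    ```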

  15. ENSURF: multi-model sea level forecast - implementation and validation results for the IBIROOS and Western Mediterranean regions

    NASA Astrophysics Data System (ADS)

    Pérez, B.; Brower, R.; Beckers, J.; Paradis, D.; Balseiro, C.; Lyons, K.; Cure, M.; Sotillo, M. G.; Hacket, B.; Verlaan, M.; Alvarez Fanjul, E.

    2011-04-01

    ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecasting that makes use of storm surge and circulation models currently operational in Europe, as well as near-real-time tide gauge data in the region, with two main goals: (1) providing easy access to existing forecasts, as well as to their performance and validation, by means of an adequate visualization tool; and (2) generating better sea level forecasts, including confidence intervals, by means of the Bayesian Model Averaging (BMA) technique. The system was developed and implemented within the ECOOP (C.No. 036355) European project for the NOOS and IBIROOS regions, based on the MATROOS visualization tool developed by Deltares. Both systems are today operational at Deltares and Puertos del Estado, respectively. The Bayesian Model Averaging technique generates an overall forecast probability density function (PDF) by making a weighted average of the individual forecast PDFs; the weights represent the probability that a model will give the correct forecast PDF and are determined and updated operationally based on the performance of the models during a recent training period. This implies that the technique needs sea level data from tide gauges in near-real time. Validation results for the different models and the BMA implementation for the main harbours will be presented for the IBIROOS and Western Mediterranean regions, where this kind of activity is performed for the first time. The work has proved useful for detecting problems in some of the circulation models not previously well calibrated with sea level data, for identifying the differences between baroclinic and barotropic models for sea level applications, and for confirming the general improvement of the BMA forecasts.
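    The BMA combination step described above is easy to sketch: the forecast PDF is a weighted mixture of per-model PDFs. A minimal illustration with invented forecasts, spreads, and weights, assuming Gaussian per-model errors:

    ```python
    # Minimal sketch of the BMA combination step, assuming Gaussian per-model errors;
    # forecasts, spreads, and weights are invented for illustration.
    import numpy as np
    from scipy import stats

    forecasts = np.array([0.52, 0.47, 0.60])   # per-model sea level forecasts (m)
    spreads = np.array([0.05, 0.08, 0.06])     # per-model error spreads from training
    weights = np.array([0.5, 0.2, 0.3])        # BMA weights (sum to 1) from training

    x = np.linspace(0.2, 0.9, 1001)
    pdf = sum(w * stats.norm.pdf(x, f, s) for w, f, s in zip(weights, forecasts, spreads))

    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]                              # normalize the discretized mixture CDF
    lo, hi = np.interp([0.05, 0.95], cdf, x)    # equal-tailed 90% forecast interval
    print(f"BMA 90% forecast interval: [{lo:.3f}, {hi:.3f}] m")
    ```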

  16. Are Vancomycin Trough Concentrations Adequate for Optimal Dosing?

    PubMed Central

    Youn, Gilmer; Jones, Brenda; Jelliffe, Roger W.; Drusano, George L.; Rodvold, Keith A.; Lodise, Thomas P.

    2014-01-01

    The current vancomycin therapeutic guidelines recommend the use of only trough concentrations to manage the dosing of adults with Staphylococcus aureus infections. Both vancomycin efficacy and toxicity are likely to be related to the area under the plasma concentration-time curve (AUC). We assembled richly sampled vancomycin pharmacokinetic data from three studies comprising 47 adults with various levels of renal function. With Pmetrics, the nonparametric population modeling package for R, we compared AUCs estimated from models derived from trough-only and peak-trough depleted versions of the full data set and characterized the relationship between the vancomycin trough concentration and AUC. The trough-only and peak-trough depleted data sets underestimated the true AUCs compared to the full model by a mean (95% confidence interval) of 23% (11 to 33%; P = 0.0001) and 14% (7 to 19%; P < 0.0001), respectively. In contrast, using the full model as a Bayesian prior with trough-only data allowed 97% (93 to 102%; P = 0.23) accurate AUC estimation. On the basis of 5,000 profiles simulated from the full model, among adults with normal renal function and a therapeutic AUC of ≥400 mg · h/liter for an organism for which the vancomycin MIC is 1 mg/liter, approximately 60% are expected to have a trough concentration below the suggested minimum target of 15 mg/liter for serious infections, which could result in needlessly increased doses and a risk of toxicity. Our data indicate that adjustment of vancomycin doses on the basis of trough concentrations without a Bayesian tool results in poor achievement of maximally safe and effective drug exposures in plasma and that many adults can have an adequate vancomycin AUC with a trough concentration of <15 mg/liter. PMID:24165176
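    A hedged toy version of the Bayesian trough-based AUC estimation described above (a grid posterior for a one-compartment steady-state bolus model, not Pmetrics; all regimen, prior, and error values are invented):

    ```python
    # Hedged toy version of Bayesian trough-based AUC estimation (not Pmetrics):
    # one-compartment IV bolus at steady state, lognormal priors on CL and V,
    # grid posterior. All regimen, prior, and error values are invented.
    import numpy as np
    from scipy import stats

    dose, tau = 1000.0, 12.0     # mg every 12 h (hypothetical regimen)
    c_obs, sd = 11.0, 2.0        # observed trough (mg/L) and residual error

    cl = np.linspace(1.0, 12.0, 300)   # clearance grid (L/h)
    v = np.linspace(20.0, 80.0, 300)   # volume grid (L)
    CL, V = np.meshgrid(cl, v)

    k = CL / V
    trough = (dose / V) * np.exp(-k * tau) / (1.0 - np.exp(-k * tau))  # steady-state trough

    logpost = (stats.lognorm.logpdf(CL, s=0.4, scale=4.5)    # population prior on CL
               + stats.lognorm.logpdf(V, s=0.3, scale=45.0)  # population prior on V
               + stats.norm.logpdf(c_obs, loc=trough, scale=sd))
    post = np.exp(logpost - logpost.max())
    post /= post.sum()

    auc24 = 2.0 * dose / CL                                  # daily dose / clearance
    print(f"posterior mean AUC24: {np.sum(post * auc24):.0f} mg*h/L")
    ```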

  17. Sequential Designs Based on Bayesian Uncertainty Quantification in Sparse Representation Surrogate Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Ray -Bing; Wang, Weichung; Jeff Wu, C. F.

    A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction of sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. Numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.
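    The uncertainty-quantification ingredient described here, a normal prior on basis coefficients with posterior samples yielding credible intervals for predictions, can be sketched with conjugate Bayesian linear regression (illustrative basis and hyperparameters, not the OBSM implementation):

    ```python
    # Hedged sketch: normal prior on basis coefficients, posterior samples,
    # pointwise credible bands. Conjugate Bayesian linear regression with an
    # illustrative basis, not the OBSM implementation.
    import numpy as np

    rng = np.random.default_rng(2)
    x = np.sort(rng.uniform(0.0, 1.0, 30))
    y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.2, x.size)

    centers = np.linspace(0.0, 1.0, 25)                    # many Gaussian bumps
    def basis(t):
        return np.exp(-0.5 * ((t[:, None] - centers) / 0.08) ** 2)

    Phi = basis(x)
    tau2, sigma2 = 1.0, 0.04                               # prior and noise variances (assumed known)
    cov = np.linalg.inv(Phi.T @ Phi / sigma2 + np.eye(centers.size) / tau2)
    cov = (cov + cov.T) / 2.0                              # symmetrize for sampling
    mean = cov @ Phi.T @ y / sigma2

    coef = rng.multivariate_normal(mean, cov, size=2000)   # posterior coefficient draws
    xg = np.linspace(0.0, 1.0, 200)
    preds = coef @ basis(xg).T                             # posterior draws of the surface
    lo, hi = np.percentile(preds, [2.5, 97.5], axis=0)     # pointwise 95% credible band
    print(lo[:3], hi[:3])
    ```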

  18. A Bayesian sequential processor approach to spectroscopic portal system decisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sale, K; Candy, J; Breitfeller, E

    The development of faster, more reliable techniques to detect radioactive contraband in a portal type scenario is an extremely important problem, especially in this era of constant terrorist threats. Towards this goal, the development of a model-based, Bayesian sequential data processor for the detection problem is discussed. In the sequential processor each datum (detector energy deposit and pulse arrival time) is used to update the posterior probability distribution over the space of model parameters. The nature of the sequential processor approach is that a detection is produced as soon as it is statistically justified by the data, rather than waiting for a fixed counting interval before any analysis is performed. In this paper the Bayesian model-based approach, the physics and signal processing models, and the decision functions are discussed, along with the first results of our research.
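    A minimal sketch of the sequential idea (a generic likelihood-ratio update for Poisson arrival times with invented rates, prior, and threshold, not the authors' physics model): detection is declared as soon as the posterior odds cross the threshold.

    ```python
    # Hedged sketch of a sequential Bayesian detector (generic Poisson arrival-time
    # likelihood ratio, not the authors' physics model). Rates, prior, and threshold
    # are invented.
    import numpy as np

    rng = np.random.default_rng(3)
    bkg, src = 5.0, 3.0                  # hypothetical background and source rates (counts/s)
    log_odds = np.log(0.01 / 0.99)       # prior log-odds that a source is present
    log_threshold = np.log(99.0)         # declare detection at posterior odds > 99

    arrivals = rng.exponential(1.0 / (bkg + src), size=200)  # simulate "source present"

    for i, dt in enumerate(arrivals, start=1):
        # Exponential inter-arrival likelihood ratio: f_{bkg+src}(dt) / f_{bkg}(dt)
        log_odds += (np.log(bkg + src) - (bkg + src) * dt) - (np.log(bkg) - bkg * dt)
        if log_odds > log_threshold:
            print(f"detection declared after {i} events")
            break
    ```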

  19. Application of Bayesian model averaging to measurements of the primordial power spectrum

    NASA Astrophysics Data System (ADS)

    Parkinson, David; Liddle, Andrew R.

    2010-11-01

    Cosmological parameter uncertainties are often stated assuming a particular model, neglecting the model uncertainty, even when Bayesian model selection is unable to identify a conclusive best model. Bayesian model averaging is a method for assessing parameter uncertainties in situations where there is also uncertainty in the underlying model. We apply model averaging to the estimation of the parameters associated with the primordial power spectra of curvature and tensor perturbations. We use CosmoNest and MultiNest to compute the model evidences and posteriors, using cosmic microwave data from WMAP, ACBAR, BOOMERanG, and CBI, plus large-scale structure data from the SDSS DR7. We find that the model-averaged 95% credible interval for the spectral index using all of the data is 0.940

  20. Sequential Designs Based on Bayesian Uncertainty Quantification in Sparse Representation Surrogate Modeling

    DOE PAGES

    Chen, Ray -Bing; Wang, Weichung; Jeff Wu, C. F.

    2017-04-12

    A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction of sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. Numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.

  1. Fuzzy Intervals for Designing Structural Signature: An Application to Graphic Symbol Recognition

    NASA Astrophysics Data System (ADS)

    Luqman, Muhammad Muzzamil; Delalandre, Mathieu; Brouard, Thierry; Ramel, Jean-Yves; Lladós, Josep

    The motivation behind our work is to present a new methodology for symbol recognition. The proposed method employs a structural approach for representing visual associations in symbols and a statistical classifier for recognition. We vectorize a graphic symbol, encode its topological and geometrical information by an attributed relational graph, and compute a signature from this structural graph. We address the sensitivity of structural representations to noise by using data-adapted fuzzy intervals. The joint probability distribution of signatures is encoded by a Bayesian network, which serves as a mechanism for pruning irrelevant features and choosing a subset of interesting features from the structural signatures of the underlying symbol set. The Bayesian network is deployed in a supervised learning scenario for recognizing query symbols. The method has been evaluated for robustness against degradations and deformations on pre-segmented 2D linear architectural and electronic symbols from the GREC databases, and for its recognition abilities on symbols with context noise, i.e., cropped symbols.

  2. Coefficient Alpha Bootstrap Confidence Interval under Nonnormality

    ERIC Educational Resources Information Center

    Padilla, Miguel A.; Divers, Jasmin; Newton, Matthew

    2012-01-01

    Three different bootstrap methods for estimating confidence intervals (CIs) for coefficient alpha were investigated. In addition, the bootstrap methods were compared with the most promising coefficient alpha CI estimation methods reported in the literature. The CI methods were assessed through a Monte Carlo simulation utilizing conditions…
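    One common variant, the percentile bootstrap for coefficient alpha, is easy to sketch on simulated item scores; this illustrates the general approach rather than the specific methods compared in the study.

    ```python
    # Hedged sketch of a percentile bootstrap CI for coefficient alpha on simulated
    # item scores; one of several bootstrap variants, not necessarily those compared
    # in the study.
    import numpy as np

    rng = np.random.default_rng(4)
    n, k = 120, 6
    latent = rng.normal(size=(n, 1))
    items = 0.7 * latent + rng.normal(scale=0.7, size=(n, k))  # simulated item scores

    def cronbach_alpha(X):
        n_items = X.shape[1]
        item_vars = X.var(axis=0, ddof=1)
        total_var = X.sum(axis=1).var(ddof=1)
        return n_items / (n_items - 1) * (1.0 - item_vars.sum() / total_var)

    boots = np.array([cronbach_alpha(items[rng.integers(0, n, n)])  # resample persons
                      for _ in range(2000)])
    lo, hi = np.percentile(boots, [2.5, 97.5])
    print(f"alpha = {cronbach_alpha(items):.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
    ```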

  3. Four Bootstrap Confidence Intervals for the Binomial-Error Model.

    ERIC Educational Resources Information Center

    Lin, Miao-Hsiang; Hsiung, Chao A.

    1992-01-01

    Four bootstrap methods are identified for constructing confidence intervals for the binomial-error model. The extent to which similar results are obtained and the theoretical foundation of each method and its relevance and ranges of modeling the true score uncertainty are discussed. (SLD)

  4. Teach a Confidence Interval for the Median in the First Statistics Course

    ERIC Educational Resources Information Center

    Howington, Eric B.

    2017-01-01

    Few introductory statistics courses consider statistical inference for the median. This article argues in favour of adding a confidence interval for the median to the first statistics course. Several methods suitable for introductory statistics students are identified and briefly reviewed.
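    One such method is the classic order-statistic interval: with B ~ Binomial(n, 1/2) counting observations below the median, (X_(r), X_(s)) covers the median with probability cdf(s-1) - cdf(r-1). A short sketch with made-up data:

    ```python
    # Sketch of the classic order-statistic CI for the median with made-up data.
    import numpy as np
    from scipy.stats import binom

    x = np.sort(np.array([4.1, 5.6, 2.3, 7.8, 3.3, 6.0, 5.2, 4.9,
                          8.4, 3.9, 5.5, 6.7, 4.4, 5.1, 7.1]))
    n = x.size

    r = int(binom.ppf(0.025, n, 0.5))   # lower rank: cdf(r-1) < 0.025 <= cdf(r)
    s = n + 1 - r                       # symmetric upper rank
    coverage = binom.cdf(s - 1, n, 0.5) - binom.cdf(r - 1, n, 0.5)
    print(f"CI ({x[r - 1]}, {x[s - 1]}) covers the median with probability {coverage:.3f}")
    ```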

  5. Reliability estimation of an N-M-cold-standby redundancy system in a multicomponent stress-strength model with generalized half-logistic distribution

    NASA Astrophysics Data System (ADS)

    Liu, Yiming; Shi, Yimin; Bai, Xuchao; Zhan, Pei

    2018-01-01

    In this paper, we study estimation of the reliability of a multicomponent system, named the N-M-cold-standby redundancy system, based on a progressive Type-II censored sample. In the system, there are N subsystems, each consisting of M statistically independent strength components, and only one of these subsystems works under the impact of stresses at a time while the others remain as standbys. Whenever the working subsystem fails, one of the standbys takes its place. The system fails when all subsystems have failed. It is supposed that the underlying distributions of random strength and stress both belong to the generalized half-logistic distribution with different shape parameters. The reliability of the system is estimated using both classical and Bayesian statistical inference. The uniformly minimum variance unbiased estimator and maximum likelihood estimator for the reliability of the system are derived. Under a squared error loss function, the exact expression of the Bayes estimator for the reliability of the system is developed using the Gauss hypergeometric function. The asymptotic confidence interval and corresponding coverage probabilities are derived based on both the Fisher and the observed information matrices. The approximate highest probability density credible interval is constructed using the Monte Carlo method. Monte Carlo simulations are performed to compare the performance of the proposed reliability estimators. A real data set is also analyzed for an illustration of the findings.
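    A hedged Monte Carlo sketch of a simplified static reading of this system (the system survives a common stress if any of the N subsystems has all M components stronger than the stress); Weibull stand-ins replace the paper's generalized half-logistic model, and all parameters are illustrative:

    ```python
    # Hedged Monte Carlo sketch of a simplified static reading of the system.
    # Weibull stand-ins replace the paper's generalized half-logistic model.
    import numpy as np

    rng = np.random.default_rng(5)
    N, M, trials = 3, 4, 200_000

    stress = rng.weibull(2.0, size=trials) * 1.0               # one stress draw per trial
    strength = rng.weibull(2.5, size=(trials, N, M)) * 1.6     # component strengths

    subsystem_ok = (strength > stress[:, None, None]).all(axis=2)  # all M components hold
    system_ok = subsystem_ok.any(axis=1)                           # any subsystem suffices
    print(f"estimated reliability R = {system_ok.mean():.4f}")
    ```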

  6. Multiscale analysis of neural spike trains.

    PubMed

    Ramezan, Reza; Marriott, Paul; Chenouri, Shojaeddin

    2014-01-30

    This paper studies the multiscale analysis of neural spike trains, through both graphical and Poisson process approaches. We introduce the interspike interval plot, which simultaneously visualizes characteristics of neural spiking activity at different time scales. Using an inhomogeneous Poisson process framework, we discuss multiscale estimates of the intensity functions of spike trains. We also introduce the windowing effect for two multiscale methods. Using quasi-likelihood, we develop bootstrap confidence intervals for the multiscale intensity function. We provide a cross-validation scheme, to choose the tuning parameters, and study its unbiasedness. Studying the relationship between the spike rate and the stimulus signal, we observe that adjusting for the first spike latency is important in cross-validation. We show, through examples, that the correlation between spike trains and spike count variability can be multiscale phenomena. Furthermore, we address the modeling of the periodicity of the spike trains caused by a stimulus signal or by brain rhythms. Within the multiscale framework, we introduce intensity functions for spike trains with multiplicative and additive periodic components. Analyzing a dataset from the retinogeniculate synapse, we compare the fit of these models with the Bayesian adaptive regression splines method and discuss the limitations of the methodology. Computational efficiency, which is usually a challenge in the analysis of spike trains, is one of the highlights of these new models. In an example, we show that the reconstruction quality of a complex intensity function demonstrates the ability of the multiscale methodology to crack the neural code. Copyright © 2013 John Wiley & Sons, Ltd.

  7. Modeling stream fish distributions using interval-censored detection times.

    PubMed

    Ferreira, Mário; Filipe, Ana Filipa; Bardos, David C; Magalhães, Maria Filomena; Beja, Pedro

    2016-08-01

    Controlling for imperfect detection is important for developing species distribution models (SDMs). Occupancy-detection models based on the time needed to detect a species can be used to address this problem, but this is hindered when times to detection are not known precisely. Here, we extend the time-to-detection model to deal with detections recorded in time intervals and illustrate the method using a case study on stream fish distribution modeling. We collected electrofishing samples of six fish species across a Mediterranean watershed in Northeast Portugal. Based on a Bayesian hierarchical framework, we modeled the probability of water presence in stream channels, and the probability of species occupancy conditional on water presence, in relation to environmental and spatial variables. We also modeled time-to-first detection conditional on occupancy in relation to local factors, using modified interval-censored exponential survival models. Posterior distributions of occupancy probabilities derived from the models were used to produce species distribution maps. Simulations indicated that the modified time-to-detection model provided unbiased parameter estimates despite interval-censoring. There was a tendency for spatial variation in detection rates to be primarily influenced by depth and, to a lesser extent, stream width. Species occupancies were consistently affected by stream order, elevation, and annual precipitation. Bayesian P-values and AUCs indicated that all models had adequate fit and high discrimination ability, respectively. Mapping of predicted occupancy probabilities showed widespread distribution by most species, but uncertainty was generally higher in tributaries and upper reaches. The interval-censored time-to-detection model provides a practical solution to model occupancy-detection when detections are recorded in time intervals. This modeling framework is useful for developing SDMs while controlling for variation in detection rates, as it uses simple data that can be readily collected by field ecologists.
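    The interval-censored exponential likelihood at the heart of the method is compact: a detection in (a, b] contributes exp(-lam*a) - exp(-lam*b), and an occupied site with no detection by t_max contributes exp(-lam*t_max). A hedged MLE sketch with invented data, ignoring the occupancy layer of the full hierarchical model:

    ```python
    # Hedged MLE sketch of the interval-censored exponential likelihood (invented
    # data; the occupancy layer of the full model is ignored here).
    import numpy as np
    from scipy.optimize import minimize_scalar

    intervals = np.array([(0, 5), (5, 10), (0, 5), (10, 15), (5, 10)], dtype=float)
    n_nondetect, t_max = 3, 15.0   # occupied sites surveyed for t_max with no detection

    def neg_loglik(lam):
        a, b = intervals[:, 0], intervals[:, 1]
        ll = np.log(np.exp(-lam * a) - np.exp(-lam * b)).sum()
        ll += n_nondetect * (-lam * t_max)
        return -ll

    res = minimize_scalar(neg_loglik, bounds=(1e-6, 2.0), method="bounded")
    print(f"MLE detection rate: {res.x:.4f} per minute")
    ```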

  8. Increased calcium supplementation is associated with morbidity and mortality in the infant postoperative cardiac patient.

    PubMed

    Dyke, Peter C; Yates, Andrew R; Cua, Clifford L; Hoffman, Timothy M; Hayes, John; Feltes, Timothy F; Springer, Michelle A; Taeed, Roozbeh

    2007-05-01

    The purpose of this study was to assess the association of calcium replacement therapy with morbidity and mortality in infants after cardiac surgery involving cardiopulmonary bypass. Retrospective chart review. The cardiac intensive care unit at a tertiary care children's hospital. Infants undergoing cardiac surgery involving cardiopulmonary bypass between October 2002 and August 2004. None. Total calcium replacement (mg/kg calcium chloride given) for the first 72 postoperative hours was measured. Morbidity and mortality data were collected. The total volume of blood products given during the first 72 hrs was recorded. Infants with confirmed chromosomal deletions at the 22q11 locus were noted. Correlation and logistic regression analyses were used to generate odds ratios and 95% confidence intervals, with p < .05 being significant. One hundred seventy-one infants met inclusion criteria. Age was 4 ± 3 months and weight was 4.9 ± 1.7 kg at surgery. Six infants had deletions of chromosome 22q11. Infants who weighed less required more calcium replacement (r = -.28, p < .001). Greater calcium replacement correlated with a longer intensive care unit length of stay (r = .27, p < .001) and a longer total hospital length of stay (r = .23, p = .002). Greater calcium replacement was significantly associated with morbidity (liver dysfunction [odds ratio, 3.9; confidence interval, 2.1-7.3; p < .001], central nervous system complication [odds ratio, 1.8; confidence interval, 1.1-3.0; p = .02], infection [odds ratio, 1.5; confidence interval, 1.0-2.2; p < .04], extracorporeal membrane oxygenation [odds ratio, 5.0; confidence interval, 2.3-10.6; p < .001]) and mortality (odds ratio, 5.8; confidence interval, 5.8-5.9; p < .001). Greater calcium replacement was not associated with renal insufficiency (odds ratio, 1.5; confidence interval, 0.9-2.3; p = .07). Infants whose total calcium replacement was more than 1 SD above the mean received, on average, fewer blood products than the total study population. Greater calcium replacement is associated with increasing morbidity and mortality. Further investigation of the etiology and therapy of hypocalcemia in this population is warranted.

  9. WITHDRAWN: Amnioinfusion for meconium-stained liquor in labour.

    PubMed

    Hofmeyr, G Justus

    2009-01-21

    Amnioinfusion aims to prevent or relieve umbilical cord compression during labour by infusing a solution into the uterine cavity. It is also thought to dilute meconium when present in the amniotic fluid and so reduce the risk of meconium aspiration. However, it may be that the mechanism of effect is that it corrects oligohydramnios (reduced amniotic fluid), for which thick meconium staining is a marker. The objective of this review was to assess the effects of amnioinfusion for meconium-stained liquor on perinatal outcome. The Cochrane Pregnancy and Childbirth Group trials register (October 2001) and the Cochrane Controlled Trials Register (Issue 3, 2001) were searched. Randomised trials comparing amnioinfusion with no amnioinfusion for women in labour with moderate or thick meconium-staining of the amniotic fluid. Eligibility and trial quality were assessed by one reviewer. Twelve studies, most involving small numbers of participants, were included. Under standard perinatal surveillance, amnioinfusion was associated with a reduction in the following: heavy meconium staining of the liquor (relative risk 0.03, 95% confidence interval 0.01 to 0.15); variable fetal heart rate deceleration (relative risk 0.65, 95% confidence interval 0.49 to 0.88); and caesarean section overall (relative risk 0.82, 95% confidence interval 0.69 to 0.97). No perinatal deaths were reported. Under limited perinatal surveillance, amnioinfusion was associated with a reduction in the following: meconium aspiration syndrome (relative risk 0.24, 95% confidence interval 0.12 to 0.48); neonatal hypoxic ischaemic encephalopathy (relative risk 0.07, 95% confidence interval 0.01 to 0.56) and neonatal ventilation or intensive care unit admission (relative risk 0.56, 95% confidence interval 0.39 to 0.79); there was a trend towards reduced perinatal mortality (relative risk 0.34, 95% confidence interval 0.11 to 1.06). Amnioinfusion is associated with improvements in perinatal outcome, particularly in settings where facilities for perinatal surveillance are limited. The trials reviewed are too small to address the possibility of rare but serious maternal adverse effects of amnioinfusion.

  10. Reliability of clinical findings and magnetic resonance imaging for the diagnosis of chondromalacia patellae.

    PubMed

    Pihlajamäki, Harri K; Kuikka, Paavo-Ilari; Leppänen, Vesa-Veikko; Kiuru, Martti J; Mattila, Ville M

    2010-04-01

    This diagnostic study was performed to determine the correlation between anterior knee pain and chondromalacia patellae and to define the reliability of magnetic resonance imaging for the diagnosis of chondromalacia patellae. Fifty-six young adults (median age, 19.5 years) with anterior knee pain had magnetic resonance imaging of the knee followed by arthroscopy. The patellar chondral lesions identified by magnetic resonance imaging were compared with the arthroscopic findings. Arthroscopy confirmed the presence of chondromalacia patellae in twenty-five (45%) of the fifty-six knees, a synovial plica in twenty-five knees, a meniscal tear in four knees, and a femorotibial chondral lesion in four knees; normal anatomy was seen in six knees. No association was found between the severity of the chondromalacia patellae seen at arthroscopy and the clinical symptoms of anterior knee pain syndrome (p = 0.83). The positive predictive value for the ability of 1.0-T magnetic resonance imaging to detect chondromalacia patellae was 75% (95% confidence interval, 53% to 89%), the negative predictive value was 72% (95% confidence interval, 56% to 84%), the sensitivity was 60% (95% confidence interval, 41% to 77%), the specificity was 84% (95% confidence interval, 67% to 93%), and the diagnostic accuracy was 73% (95% confidence interval, 60% to 83%). The sensitivity was 13% (95% confidence interval, 2% to 49%) for grade-I lesions and 83% (95% confidence interval, 59% to 94%) for grade-II, III, or IV lesions. Chondromalacia patellae cannot be diagnosed on the basis of symptoms or with current physical examination methods. The present study demonstrated no correlation between the severity of chondromalacia patellae and the clinical symptoms of anterior knee pain syndrome. Thus, symptoms of anterior knee pain syndrome should not be used as an indication for knee arthroscopy. The sensitivity of 1.0-T magnetic resonance imaging was low for grade-I lesions but considerably higher for more severe (grade-II, III, or IV) lesions. Magnetic resonance imaging may be considered an accurate diagnostic tool for identification of more severe cases of chondromalacia patellae.

  11. Asymptomatic Intradialytic Supraventricular Arrhythmias and Adverse Outcomes in Patients on Hemodialysis

    PubMed Central

    Pérez de Prado, Armando; López-Gómez, Juan M.; Quiroga, Borja; Goicoechea, Marian; García-Prieto, Ana; Torres, Esther; Reque, Javier; Luño, José

    2016-01-01

    Background and objectives Supraventricular arrhythmias are associated with high morbidity and mortality. Nevertheless, this condition has received little attention in patients on hemodialysis. The objective of this study was to analyze the incidence of intradialysis supraventricular arrhythmia and its long–term prognostic value. Design, setting, participants, & measurements We designed an observational and prospective study in a cohort of patients on hemodialysis with a 10-year follow-up period. All patients were recruited for study participation and were not recruited for clinical indications. The study population comprised 77 patients (42 men and 35 women; mean age =58±15 years old) with sinus rhythm monitored using a Holter electrocardiogram over six consecutive hemodialysis sessions at recruitment. Results Hypertension was present in 68.8% of patients, and diabetes was present in 29.9% of patients. Supraventricular arrhythmias were recorded in 38 patients (49.3%); all of these were short, asymptomatic, and self-limiting. Age (hazard ratio, 1.04 per year; 95% confidence interval, 1.00 to 1.08) and right atrial enlargement (hazard ratio, 4.29; 95% confidence interval, 1.30 to 14.09) were associated with supraventricular arrhythmia in the multivariate analysis. During a median follow-up of 40 months, 57 patients died, and cardiovascular disease was the main cause of death (52.6%). The variables associated with all-cause mortality in the Cox model were age (hazard ratio, 1.04 per year; 95% confidence interval, 1.00 to 1.08), C-reactive protein (hazard ratio, 1.04 per 1 mg/L; 95% confidence interval, 1.00 to 1.08), and supraventricular arrhythmia (hazard ratio, 3.21; 95% confidence interval, 1.29 to 7.96). Patients with supraventricular arrhythmia also had a higher risk of nonfatal cardiovascular events (hazard ratio, 4.32; 95% confidence interval, 2.11 to 8.83) and symptomatic atrial fibrillation during follow-up (hazard ratio, 17.19; 95% confidence interval, 2.03 to 145.15). Conclusions The incidence of intradialysis supraventricular arrhythmia was high in our hemodialysis study population. Supraventricular arrhythmias were short, asymptomatic, and self-limiting, and although silent, these arrhythmias were independently associated with mortality and cardiovascular events. PMID:27697781

  12. Asymptomatic Intradialytic Supraventricular Arrhythmias and Adverse Outcomes in Patients on Hemodialysis.

    PubMed

    Verde, Eduardo; Pérez de Prado, Armando; López-Gómez, Juan M; Quiroga, Borja; Goicoechea, Marian; García-Prieto, Ana; Torres, Esther; Reque, Javier; Luño, José

    2016-12-07

    Supraventricular arrhythmias are associated with high morbidity and mortality. Nevertheless, this condition has received little attention in patients on hemodialysis. The objective of this study was to analyze the incidence of intradialysis supraventricular arrhythmia and its long-term prognostic value. We designed an observational and prospective study in a cohort of patients on hemodialysis with a 10-year follow-up period. All patients were recruited for study participation and were not recruited for clinical indications. The study population comprised 77 patients (42 men and 35 women; mean age =58±15 years old) with sinus rhythm monitored using a Holter electrocardiogram over six consecutive hemodialysis sessions at recruitment. Hypertension was present in 68.8% of patients, and diabetes was present in 29.9% of patients. Supraventricular arrhythmias were recorded in 38 patients (49.3%); all of these were short, asymptomatic, and self-limiting. Age (hazard ratio, 1.04 per year; 95% confidence interval, 1.00 to 1.08) and right atrial enlargement (hazard ratio, 4.29; 95% confidence interval, 1.30 to 14.09) were associated with supraventricular arrhythmia in the multivariate analysis. During a median follow-up of 40 months, 57 patients died, and cardiovascular disease was the main cause of death (52.6%). The variables associated with all-cause mortality in the Cox model were age (hazard ratio, 1.04 per year; 95% confidence interval, 1.00 to 1.08), C-reactive protein (hazard ratio, 1.04 per 1 mg/L; 95% confidence interval, 1.00 to 1.08), and supraventricular arrhythmia (hazard ratio, 3.21; 95% confidence interval, 1.29 to 7.96). Patients with supraventricular arrhythmia also had a higher risk of nonfatal cardiovascular events (hazard ratio, 4.32; 95% confidence interval, 2.11 to 8.83) and symptomatic atrial fibrillation during follow-up (hazard ratio, 17.19; 95% confidence interval, 2.03 to 145.15). The incidence of intradialysis supraventricular arrhythmia was high in our hemodialysis study population. Supraventricular arrhythmias were short, asymptomatic, and self-limiting, and although silent, these arrhythmias were independently associated with mortality and cardiovascular events. Copyright © 2016 by the American Society of Nephrology.

  13. Long-term Results of an Obesity Program in an Ethnically Diverse Pediatric Population

    PubMed Central

    Nowicka, Paulina; Shaw, Melissa; Yu, Sunkyung; Dziura, James; Chavent, Georgia; O'Malley, Grace; Serrecchia, John B.; Tamborlane, William V.; Caprio, Sonia

    2011-01-01

    OBJECTIVE: To determine if beneficial effects of a weight-management program could be sustained for up to 24 months in a randomized trial in an ethnically diverse obese population. PATIENTS AND METHODS: There were 209 obese children (BMI > 95th percentile), ages 8 to 16, of mixed ethnic backgrounds, randomly assigned to the intensive lifestyle intervention or the clinic control group. The control group received counseling every 6 months, and the intervention group received a family-based program, which included exercise, nutrition, and behavior modification. Lifestyle intervention sessions occurred twice weekly for the first 6 months, then twice monthly for the second 6 months; for the last 12 months there was no active intervention. There were 174 children who completed the 12 months of the randomized trial. Follow-up data were available for 76 of these children at 24 months. There were no statistical differences in dropout rates among ethnic groups or in any other aspect. RESULTS: Treatment effect was sustained at 24 months in the intervention versus control group for BMI z score (−0.16 [95% confidence interval: −0.23 to −0.09]), BMI (−2.8 kg/m2 [95% confidence interval: −4.0 to −1.6 kg/m2]), percent body fat (−4.2% [95% confidence interval: −6.4% to −2.0%]), total body fat mass (−5.8 kg [95% confidence interval: −9.1 kg to −2.6 kg]), total cholesterol (−13.0 mg/dL [95% confidence interval: −21.7 mg/dL to −4.2 mg/dL]), low-density lipoprotein cholesterol (−10.4 mg/dL [95% confidence interval: −18.3 mg/dL to −2.4 mg/dL]), and homeostasis model assessment of insulin resistance (−2.05 [95% confidence interval: −2.48 to −1.75]). CONCLUSIONS: This study, unprecedented because of the high degree of obesity and ethnically diverse backgrounds of children, reveals that benefits of an intensive lifestyle program can be sustained 12 months after completing the active intervention phase. PMID:21300674

  14. Psychosocial and nonclinical factors predicting hospital utilization in patients of a chronic disease management program: a prospective observational study.

    PubMed

    Tran, Mark W; Weiland, Tracey J; Phillips, Georgina A

    2015-01-01

    Psychosocial factors such as marital status (odds ratio, 3.52; 95% confidence interval, 1.43-8.69; P = .006) and nonclinical factors such as outpatient nonattendances (odds ratio, 2.52; 95% confidence interval, 1.22-5.23; P = .013) and referrals made (odds ratio, 1.20; 95% confidence interval, 1.06-1.35; P = .003) predict hospital utilization for patients in a chronic disease management program. Along with optimizing patients' clinical condition according to prescribed medical guidelines and supporting patient self-management, addressing psychosocial and nonclinical issues is important in attempting to avoid hospital utilization for people with chronic illnesses.

  15. Transmission potential of influenza A/H7N9, February to May 2013, China

    PubMed Central

    2013-01-01

    Background On 31 March 2013, the first human infections with the novel influenza A/H7N9 virus were reported in Eastern China. The outbreak expanded rapidly in geographic scope and size, with a total of 132 laboratory-confirmed cases reported by 3 June 2013, in 10 Chinese provinces and Taiwan. The incidence of A/H7N9 cases has stalled in recent weeks, presumably as a consequence of live bird market closures in the most heavily affected areas. Here we compare the transmission potential of influenza A/H7N9 with that of other emerging pathogens and evaluate the impact of intervention measures in an effort to guide pandemic preparedness. Methods We used a Bayesian approach combined with a SEIR (Susceptible-Exposed-Infectious-Removed) transmission model fitted to daily case data to assess the reproduction number (R) of A/H7N9 by province and to evaluate the impact of live bird market closures in April and May 2013. Simulation studies helped quantify the performance of our approach in the context of an emerging pathogen, where human-to-human transmission is limited and most cases arise from spillover events. We also used alternative approaches to estimate R based on individual-level information on prior exposure and compared the transmission potential of influenza A/H7N9 with that of other recent zoonoses. Results Estimates of R for the A/H7N9 outbreak were below the epidemic threshold required for sustained human-to-human transmission and remained near 0.1 throughout the study period, with broad 95% credible intervals by the Bayesian method (0.01 to 0.49). The Bayesian estimation approach was dominated by the prior distribution, however, due to relatively little information contained in the case data. We observe a statistically significant deceleration in growth rate after 6 April 2013, which is consistent with a reduction in A/H7N9 transmission associated with the preemptive closure of live bird markets. Although confidence intervals are broad, the estimated transmission potential of A/H7N9 appears lower than that of recent zoonotic threats, including avian influenza A/H5N1, swine influenza H3N2sw and Nipah virus. Conclusion Although uncertainty remains high in R estimates for H7N9 due to limited epidemiological information, all available evidence points to a low transmission potential. Continued monitoring of the transmission potential of A/H7N9 is critical in the coming months as intervention measures may be relaxed and seasonal factors could promote disease transmission in colder months. PMID:24083506
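
    As a rough illustration of the estimation problem described above (a toy stand-in, not the authors' full SEIR machinery), the sketch below fits a spillover-plus-transmission Poisson model to synthetic daily counts and computes a grid posterior for R; the spillover rate, prior, and data are all invented for the example.

```python
import numpy as np

# Synthetic daily case counts: mostly zoonotic spillover, weak human-to-human spread.
rng = np.random.default_rng(42)
true_R, spillover, n_days = 0.1, 3.0, 60      # assumed values, not the H7N9 data
cases = np.zeros(n_days)
for t in range(1, n_days):
    cases[t] = rng.poisson(spillover + true_R * cases[t - 1])

# Grid posterior for R under a prior that mildly favors small R (an assumption).
R_grid = np.linspace(1e-3, 1.99, 400)
log_prior = 3 * np.log(1 - R_grid / 2)
log_lik = np.array([np.sum(cases[1:] * np.log(spillover + R * cases[:-1])
                           - (spillover + R * cases[:-1])) for R in R_grid])
lp = log_prior + log_lik
cdf = np.cumsum(np.exp(lp - lp.max()))
cdf /= cdf[-1]
med, lo, hi = (R_grid[np.searchsorted(cdf, q)] for q in (0.5, 0.025, 0.975))
print(f"posterior median R = {med:.2f}, 95% credible interval ({lo:.2f}, {hi:.2f})")
```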

  16. Bayesian Statistical Analysis of Historical and Late Holocene Rates of Sea-Level Change

    NASA Astrophysics Data System (ADS)

    Cahill, Niamh; Parnell, Andrew; Kemp, Andrew; Horton, Benjamin

    2014-05-01

    A fundamental concern associated with climate change is the rate at which sea levels are rising. Studies of past sea level (particularly beyond the instrumental data range) allow modern sea-level rise to be placed in a more complete context. Considering this, we perform a Bayesian statistical analysis on historical and late Holocene rates of sea-level change. The data that form the input to the statistical model are tide-gauge measurements and proxy reconstructions from cores of coastal sediment. The aims are to estimate rates of sea-level rise, to determine when modern rates of sea-level rise began, and to observe how these rates have been changing over time. Many of the current methods for doing this use simple linear regression to estimate rates. This is often inappropriate as it is too rigid and it can ignore uncertainties that arise as part of the data collection exercise. This can lead to overconfidence in the sea-level trends being characterized. The proposed Bayesian model places a Gaussian process prior on the rate process (i.e., the process that determines how rates of sea level are changing over time). The likelihood of the observed data is the integral of this process. When dealing with proxy reconstructions, this is set in an errors-in-variables framework so as to take account of age uncertainty. It is also necessary, in this case, for the model to account for glacio-isostatic adjustment, which introduces a covariance between individual age and sea-level observations. This method provides a flexible fit and it allows for the direct estimation of the rate process with full consideration of all sources of uncertainty. Analysis of tide-gauge datasets and proxy reconstructions in this way means that changing rates of sea level can be estimated more comprehensively and accurately than previously possible. The model captures the continuous and dynamic evolution of sea-level change, and results show that not only are modern sea levels rising but that the rates of rise are continuously increasing. Analysis of a global tide-gauge record (Church and White, 2011) indicated that the rate of sea-level rise has increased continuously since 1880 AD and is currently 2.57 mm/yr (95% credible interval of 1.71 to 4.35 mm/yr). Application of the model to a proxy reconstruction from North Carolina (Kemp et al., 2011) indicated that the mean rate of rise in this locality since the middle of the 19th century (current rate of 2.66 mm/yr with a 95% credible interval of 1.29 to 4.59 mm/yr) is in agreement with results from the tide-gauge analysis and is unprecedented in at least the last 2000 years.
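
    A toy version of the integrated-GP idea (ignoring the age errors and glacio-isostatic adjustment handled by the full model) fits in a few lines: place a squared-exponential GP prior on the rate, note that the sea-level observations are a linear (cumulative-sum) transform of the rate plus noise, and read the posterior rate off the usual Gaussian conditioning formulas. All numbers below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1850.0, 2012.0, 2.0)                  # synthetic biennial grid
dt = t[1] - t[0]
true_rate = 0.5 + 2.0 / (1 + np.exp(-(t - 1940) / 20))       # mm/yr, accelerating
y = np.cumsum(true_rate) * dt + rng.normal(0, 5.0, t.size)   # noisy sea level, mm

# Squared-exponential GP prior on the rate process r(t); hyperparameters assumed
ell, tau, sigma = 40.0, 2.0, 5.0
K = tau**2 * np.exp(-0.5 * ((t[:, None] - t[None, :]) / ell) ** 2)
A = np.tril(np.ones((t.size, t.size))) * dt         # discrete integration operator

# Observations y = A r + noise, so linear-Gaussian conditioning gives the rate
S = A @ K @ A.T + sigma**2 * np.eye(t.size)
rate_mean = K @ A.T @ np.linalg.solve(S, y)
rate_sd = np.sqrt(np.diag(K - K @ A.T @ np.linalg.solve(S, A @ K)))
print(f"rate in {t[-1]:.0f}: {rate_mean[-1]:.2f} +/- {2 * rate_sd[-1]:.2f} mm/yr (2 sd)")
```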

  17. Confidence Interval Coverage for Cohen's Effect Size Statistic

    ERIC Educational Resources Information Center

    Algina, James; Keselman, H. J.; Penfield, Randall D.

    2006-01-01

    Kelley compared three methods for setting a confidence interval (CI) around Cohen's standardized mean difference statistic: the noncentral-"t"-based, percentile (PERC) bootstrap, and bias-corrected and accelerated (BCA) bootstrap methods under three conditions of nonnormality, eight cases of sample size, and six cases of population…
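
    Of the three methods compared, the percentile bootstrap is the simplest to sketch: resample each group with replacement, recompute the standardized mean difference, and take the empirical 2.5th and 97.5th percentiles. The samples below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.normal(0.5, 1, 40), rng.normal(0.0, 1, 40)   # two hypothetical groups

def cohens_d(a, b):
    # Standardized mean difference with the pooled standard deviation
    sp = np.sqrt(((a.size - 1) * a.var(ddof=1) + (b.size - 1) * b.var(ddof=1))
                 / (a.size + b.size - 2))
    return (a.mean() - b.mean()) / sp

boot = np.array([cohens_d(rng.choice(x, x.size), rng.choice(y, y.size))
                 for _ in range(5000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"d = {cohens_d(x, y):.2f}, 95% percentile bootstrap CI ({lo:.2f}, {hi:.2f})")
```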

  18. Persistent opioid use following Cesarean delivery: patterns and predictors among opioid naïve women

    PubMed Central

    Bateman, Brian T.; Franklin, Jessica M.; Bykov, Katsiaryna; Avorn, Jerry; Shrank, William H.; Brennan, Troyen A.; Landon, Joan E.; Rathmell, James P.; Huybrechts, Krista F.; Fischer, Michael A.; Choudhry, Niteesh K.

    2016-01-01

    Background The incidence of opioid-related death in women has increased five-fold over the past decade. For many women, their initial opioid exposure will occur in the setting of routine medical care. Approximately 1 in 3 deliveries in the U.S. is by Cesarean, and opioids are commonly prescribed for post-surgical pain management. Objective The objective of this study was to determine the risk that opioid-naïve women prescribed opioids after Cesarean delivery will subsequently become consistent prescription opioid users in the year following delivery, and to identify predictors of this behavior. Study Design We identified women in a database of commercial insurance beneficiaries who underwent Cesarean delivery and who were opioid-naïve in the year prior to delivery. To identify persistent users of opioids, we used trajectory models, which group together patients with similar patterns of medication filling during follow-up, based on patterns of opioid dispensing in the year following Cesarean delivery. We then constructed a multivariable logistic regression model to identify independent risk factors for membership in the persistent user group. Results 285 of 80,127 (0.36%; 95% confidence interval, 0.32 to 0.40) opioid-naïve women became persistent opioid users (identified using trajectory models based on monthly patterns of opioid dispensing) following Cesarean delivery. Demographics and baseline comorbidity predicted such use with moderate discrimination (c statistic = 0.73). Significant predictors included a history of cocaine abuse (risk 7.41%; adjusted odds ratio 6.11, 95% confidence interval 1.03 to 36.31) and other illicit substance abuse (2.36%; adjusted odds ratio 2.78, 95% confidence interval 1.12 to 6.91), tobacco use (1.45%; adjusted odds ratio 3.04, 95% confidence interval 2.03 to 4.55), back pain (0.69%; adjusted odds ratio 1.74, 95% confidence interval 1.33 to 2.29), migraines (0.91%; adjusted odds ratio 2.14, 95% confidence interval 1.58 to 2.90), antidepressant use (1.34%; adjusted odds ratio 3.19, 95% confidence interval 2.41 to 4.23) and benzodiazepine use (1.99%; adjusted odds ratio 3.72, 95% confidence interval 2.64 to 5.26) in the year prior to Cesarean delivery. Conclusions A very small proportion of opioid-naïve women (approximately 1 in 300) become persistent prescription opioid users following Cesarean delivery. Pre-existing psychiatric comorbidity, certain pain conditions, and substance use/abuse conditions identifiable at the time of initial opioid prescribing were predictors of persistent use. PMID:26996986

  19. Emergency department patient satisfaction survey in Imam Reza Hospital, Tabriz, Iran

    PubMed Central

    2011-01-01

    Introduction Patient satisfaction is an important indicator of the quality of care and service delivery in the emergency department (ED). The objective of this study was to evaluate patient satisfaction with the Emergency Department of Imam Reza Hospital in Tabriz, Iran. Methods This study was carried out for 1 week during all shifts. Trained researchers used the standard Press Ganey questionnaire. Patients were asked to complete the questionnaire prior to discharge. The study questionnaire included 30 questions based on a Likert scale. Descriptive and analytical statistics were computed using SPSS version 13. Results Five hundred patients who attended our ED were included in this study. The highest satisfaction rates were observed for physicians' communication with patients (82.5%), security guards' courtesy (78.3%) and nurses' communication with patients (78%). The average waiting time for the first visit to a physician was 24 min 15 s. The overall satisfaction rate was dependent on the mean waiting time. The mean waiting time for a low rate of satisfaction was 47 min 11 s, with a confidence interval of (19.31, 74.51) minutes, and for a very good level of satisfaction it was 14 min 57 s, with a confidence interval of (10.58, 18.57) minutes. Approximately 63% of the patients rated their general satisfaction with the emergency setting as good or very good. On the whole, the patient satisfaction rate at the lowest level was 7.7% with a confidence interval of (5.1, 10.4), and at the low level it was 5.8% with a confidence interval of (3.7, 7.9). The rate of satisfaction for the mediocre level was 23.3% with a confidence interval of (19.1, 27.5); for the high level of satisfaction it was 28.3% with a confidence interval of (22.9, 32.8), and for the very high level of satisfaction, this rate was 32.9% with a confidence interval of (28.4, 37.4). Conclusion The study findings indicated the need for evidence-based interventions in emergency care services in areas such as medical care, nursing care, courtesy of staff, physical comfort and waiting time. Efforts should focus on shortening waiting intervals and improving patients' perceptions about waiting in the ED, and also improving the overall cleanliness of the emergency room. PMID:21407998

  20. Application of multivariate probabilistic (Bayesian) networks to substance use disorder risk stratification and cost estimation.

    PubMed

    Weinstein, Lawrence; Radano, Todd A; Jack, Timothy; Kalina, Philip; Eberhardt, John S

    2009-09-16

    This paper explores the use of machine learning and Bayesian classification models to develop broadly applicable risk stratification models to guide disease management of health plan enrollees with substance use disorder (SUD). While the high costs and morbidities associated with SUD are understood by payers, who manage it through utilization review, acute interventions, coverage and cost limitations, and disease management, the literature shows mixed results for these modalities in improving patient outcomes and controlling cost. Our objective is to evaluate the potential of data mining methods to identify novel risk factors for chronic disease and stratification of enrollee utilization, which can be used to develop new methods for targeting disease management services to maximize benefits to both enrollees and payers. For our evaluation, we used DecisionQ machine learning algorithms to build Bayesian network models of a representative sample of data licensed from Thomson-Reuters' MarketScan consisting of 185,322 enrollees with three full-year claim records. Data sets were prepared, and a stepwise learning process was used to train a series of Bayesian belief networks (BBNs). The BBNs were validated using a 10 percent holdout set. The networks were highly predictive: the risk-stratification BBNs produced areas under the curve (AUC) of 0.948 (95 percent confidence interval [CI], 0.944-0.951) and 0.736 (95 percent CI, 0.721-0.752) for SUD-positive enrollees, and 0.951 (95 percent CI, 0.947-0.954) and 0.738 (95 percent CI, 0.727-0.750) for SUD-negative enrollees. The cost estimation models produced areas under the curve ranging from 0.72 (95 percent CI, 0.708-0.731) to 0.961 (95 percent CI, 0.95-0.971). We were able to successfully model a large, heterogeneous population of commercial enrollees, applying state-of-the-art machine learning technology to develop complex and accurate multivariate models that support near-real-time scoring of novel payer populations based on historic claims and diagnostic data. Initial validation results indicate that we can stratify enrollees with SUD diagnoses into different cost categories with a high degree of sensitivity and specificity, and the most challenging issue becomes one of policy. Due to the social stigma associated with the disease and ethical issues pertaining to access to care and individual versus societal benefit, a thoughtful dialogue needs to occur about the appropriate way to implement these technologies.

  1. Comparison of WBRT alone, SRS alone, and their combination in the treatment of one or more brain metastases: Review and meta-analysis.

    PubMed

    Khan, Muhammad; Lin, Jie; Liao, Guixiang; Li, Rong; Wang, Baiyao; Xie, Guozhu; Zheng, Jieling; Yuan, Yawei

    2017-07-01

    Whole brain radiotherapy has been a standard treatment of brain metastases. Stereotactic radiosurgery delivers more focal, aggressive radiation with greater normal-tissue sparing, but at the cost of worse local and distant control. This meta-analysis was performed to assess and compare the effectiveness of whole brain radiotherapy alone, stereotactic radiosurgery alone, and their combination in the treatment of brain metastases based on randomized controlled trial studies. Electronic databases (PubMed, MEDLINE, Embase, and Cochrane Library) were searched to identify randomized controlled trial studies that compared treatment outcomes of whole brain radiotherapy and stereotactic radiosurgery. This meta-analysis was performed using the Review Manager (RevMan) software (version 5.2) provided by the Cochrane Collaboration. The data used were hazard ratios with 95% confidence intervals calculated for time-to-event data extracted from survival curves and local tumor control rate curves. Odds ratios with 95% confidence intervals were calculated for dichotomous data, while mean differences with 95% confidence intervals were calculated for continuous data. Fixed-effects or random-effects models were adopted according to heterogeneity. Five studies (n = 763) meeting the inclusion criteria were included in this meta-analysis. All the included studies were randomized controlled trials. The sample size ranged from 27 to 331. In total, 202 patients (26%) received whole brain radiotherapy alone, 196 (26%) received stereotactic radiosurgery alone, and 365 (48%) received whole brain radiotherapy plus stereotactic radiosurgery. No significant survival benefit was observed for any treatment approach; the hazard ratio was 1.19 (95% confidence interval: 0.96-1.43, p = 0.12) based on three randomized controlled trials for whole brain radiotherapy only compared to whole brain radiotherapy plus stereotactic radiosurgery, and the hazard ratio was 1.03 (95% confidence interval: 0.82-1.29, p = 0.81) for stereotactic radiosurgery only compared to the combined approach. Local control was best achieved when whole brain radiotherapy was combined with stereotactic radiosurgery. Hazard ratio 2.05 (95% confidence interval: 1.36-3.09, p = 0.0006) and hazard ratio 1.84 (95% confidence interval: 1.26-2.70, p = 0.002) were obtained from comparing whole brain radiotherapy only and stereotactic radiosurgery only to whole brain radiotherapy + stereotactic radiosurgery, respectively. There was no difference in adverse events between treatments; odds ratio 1.16 (95% confidence interval: 0.77-1.76, p = 0.48) and odds ratio 0.92 (95% confidence interval: 0.59-1.42, p = 0.71) for whole brain radiotherapy + stereotactic radiosurgery versus whole brain radiotherapy only and whole brain radiotherapy + stereotactic radiosurgery versus stereotactic radiosurgery only, respectively. Adding stereotactic radiosurgery to whole brain radiotherapy provides better local control as compared to whole brain radiotherapy only and stereotactic radiosurgery only, with no difference in radiation-related toxicities.
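
    When, as here, the extracted data are hazard ratios with 95% confidence intervals, the fixed-effect pooling step that RevMan performs can be reproduced by inverse-variance weighting on the log scale; the per-trial values below are made up, not those of the included studies.

```python
import numpy as np

# Hypothetical per-trial hazard ratios with 95% CI limits (illustrative only)
hr = np.array([1.10, 1.35, 1.05])
lo = np.array([0.80, 0.95, 0.75])
hi = np.array([1.51, 1.92, 1.47])

log_hr = np.log(hr)
se = (np.log(hi) - np.log(lo)) / (2 * 1.96)   # back out the SE from the CI width
w = 1 / se**2                                 # inverse-variance weights
pooled = np.sum(w * log_hr) / np.sum(w)
pooled_se = 1 / np.sqrt(np.sum(w))
ci = np.exp([pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se])
print(f"pooled HR = {np.exp(pooled):.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```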

  2. Maternal and neonatal outcomes after bariatric surgery; a systematic review and meta-analysis: do the benefits outweigh the risks?

    PubMed

    Kwong, Wilson; Tomlinson, George; Feig, Denice S

    2018-02-15

    Obesity during pregnancy is associated with a number of adverse obstetric outcomes that include gestational diabetes mellitus, macrosomia, and preeclampsia. Increasing evidence shows that bariatric surgery may decrease the risk of these outcomes. Our aim was to evaluate the benefits and risks of bariatric surgery in obese women according to obstetric outcomes. We performed a systematic literature search using MEDLINE, Embase, Cochrane, Web of Science, and PubMed from inception up to December 12, 2016. Studies were included if they evaluated patients who underwent bariatric surgery, reported subsequent pregnancy outcomes, and compared these outcomes with a control group. Two reviewers extracted study outcomes independently, and risk of bias was assessed with the use of the Newcastle-Ottawa Quality Assessment Scale. Pooled odds ratios for each outcome were estimated with the DerSimonian and Laird random effects model. After a review of 2616 abstracts, 20 cohort studies and approximately 2.8 million subjects (8364 of whom had bariatric surgery) were included in the meta-analysis. In our primary analysis, patients who underwent bariatric surgery showed reduced rates of gestational diabetes mellitus (odds ratio, 0.20; 95% confidence interval, 0.11-0.37; number needed to benefit, 5), large-for-gestational-age infants (odds ratio, 0.31; 95% confidence interval, 0.17-0.59; number needed to benefit, 6), gestational hypertension (odds ratio, 0.38; 95% confidence interval, 0.19-0.76; number needed to benefit, 11), all hypertensive disorders (odds ratio, 0.38; 95% confidence interval, 0.27-0.53; number needed to benefit, 8), postpartum hemorrhage (odds ratio, 0.32; 95% confidence interval, 0.08-1.37; number needed to benefit, 21), and caesarean delivery rates (odds ratio, 0.50; 95% confidence interval, 0.38-0.67; number needed to benefit, 9); however, the same group showed an increase in small-for-gestational-age infants (odds ratio, 2.16; 95% confidence interval, 1.34-3.48; number needed to harm, 21), intrauterine growth restriction (odds ratio, 2.16; 95% confidence interval, 1.34-3.48; number needed to harm, 66), and preterm deliveries (odds ratio, 1.35; 95% confidence interval, 1.02-1.79; number needed to harm, 35) when compared with control subjects who were matched for presurgery body mass index. There were no differences in rates of preeclampsia, neonatal intensive care unit admissions, stillbirths, malformations, and neonatal death. Malabsorptive surgeries resulted in a greater increase in small-for-gestational-age infants (P = .0466) and a greater decrease in large-for-gestational-age infants (P < .0001) compared with restrictive surgeries. There were no differences in outcomes when we used administrative databases vs clinical charts. Although bariatric surgery is associated with a reduction in the risk of several adverse obstetric outcomes, there is a potential for an increased risk of other important outcomes that should be considered when bariatric surgery is discussed with reproductive-age women. Copyright © 2018 Elsevier Inc. All rights reserved.
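
    The DerSimonian and Laird random-effects model used for pooling here has a compact closed form: estimate the between-study variance tau^2 from Cochran's Q, then re-weight each study by its total (within- plus between-study) variance. A sketch with invented study values:

```python
import numpy as np

# Hypothetical log odds ratios and standard errors from four studies
yi = np.log(np.array([0.25, 0.18, 0.40, 0.15]))
se = np.array([0.30, 0.45, 0.35, 0.50])
wi = 1 / se**2

# DerSimonian-Laird estimate of the between-study variance tau^2
fixed = np.sum(wi * yi) / np.sum(wi)
Q = np.sum(wi * (yi - fixed) ** 2)
c = np.sum(wi) - np.sum(wi**2) / np.sum(wi)
tau2 = max(0.0, (Q - (len(yi) - 1)) / c)

# Random-effects weights and pooled odds ratio
wr = 1 / (se**2 + tau2)
mu, se_mu = np.sum(wr * yi) / np.sum(wr), 1 / np.sqrt(np.sum(wr))
print(f"pooled OR = {np.exp(mu):.2f}, 95% CI ({np.exp(mu - 1.96 * se_mu):.2f}, "
      f"{np.exp(mu + 1.96 * se_mu):.2f}), tau^2 = {tau2:.3f}")
```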

  3. Coefficient Omega Bootstrap Confidence Intervals: Nonnormal Distributions

    ERIC Educational Resources Information Center

    Padilla, Miguel A.; Divers, Jasmin

    2013-01-01

    The performance of the normal theory bootstrap (NTB), the percentile bootstrap (PB), and the bias-corrected and accelerated (BCa) bootstrap confidence intervals (CIs) for coefficient omega was assessed through a Monte Carlo simulation under conditions not previously investigated. Of particular interests were nonnormal Likert-type and binary items.…

  4. WASP (Write a Scientific Paper) using Excel - 6: Standard error and confidence interval.

    PubMed

    Grech, Victor

    2018-03-01

    The calculation of descriptive statistics includes the standard error and confidence interval, inevitable components of data analysis in inferential statistics. This paper provides pointers on how to calculate these in Microsoft Excel™. Copyright © 2018 Elsevier B.V. All rights reserved.
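
    The calculation itself is short in any language. The sketch below mirrors what the paper does in Excel (where STDEV.S, COUNT, and CONFIDENCE.T are the corresponding worksheet functions) on an arbitrary sample:

```python
import numpy as np
from scipy import stats

data = np.array([4.6, 5.1, 4.8, 5.5, 4.9, 5.2, 4.7, 5.0])  # any numeric sample
n = data.size
se = data.std(ddof=1) / np.sqrt(n)         # standard error of the mean
t_crit = stats.t.ppf(0.975, df=n - 1)      # two-sided 95% critical value
ci = (data.mean() - t_crit * se, data.mean() + t_crit * se)
print(f"mean = {data.mean():.2f}, SE = {se:.3f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```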

  5. Robust Confidence Interval for a Ratio of Standard Deviations

    ERIC Educational Resources Information Center

    Bonett, Douglas G.

    2006-01-01

    Comparing variability of test scores across alternate forms, test conditions, or subpopulations is a fundamental problem in psychometrics. A confidence interval for a ratio of standard deviations is proposed that performs as well as the classic method with normal distributions and performs dramatically better with nonnormal distributions. A simple…

  6. The microcomputer scientific software series 2: general linear model--regression.

    Treesearch

    Harold M. Rauscher

    1983-01-01

    The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...

  7. Toward Using Confidence Intervals to Compare Correlations

    ERIC Educational Resources Information Center

    Zou, Guang Yong

    2007-01-01

    Confidence intervals are widely accepted as a preferred way to present study results. They encompass significance tests and provide an estimate of the magnitude of the effect. However, comparisons of correlations still rely heavily on significance testing. The persistence of this practice is caused primarily by the lack of simple yet accurate…

  8. Women seeking treatment for advanced pelvic organ prolapse have decreased body image and quality of life.

    PubMed

    Jelovsek, J Eric; Barber, Matthew D

    2006-05-01

    Women who seek treatment for pelvic organ prolapse strive for an improvement in quality of life. Body image has been shown to be an important component of differences in quality of life. To date, there are no data on body image in patients with advanced pelvic organ prolapse. Our objective was to compare body image and quality of life in women with advanced pelvic organ prolapse with normal controls. We used a case-control study design. Cases were defined as subjects who presented to a tertiary urogynecology clinic with advanced pelvic organ prolapse (stage 3 or 4). Controls were defined as subjects who presented to a tertiary care gynecology or women's health clinic for an annual visit with normal pelvic floor support (stage 0 or 1) and without urinary incontinence. All patients completed a valid and reliable body image scale and a generalized (12-item Short Form Health Survey; SF-12) and condition-specific (Pelvic Floor Distress Inventory-20) quality-of-life scale. Linear and logistic regression analyses were performed to adjust for possible confounding variables. Forty-seven case and 51 control subjects were enrolled. After controlling for age, race, parity, previous hysterectomy, and medical comorbidities, subjects with advanced pelvic organ prolapse were more likely to feel self-conscious (adjusted odds ratio 4.7; 95% confidence interval 1.4 to 18, P = .02), less likely to feel physically attractive (adjusted odds ratio 11; 95% confidence interval 2.9 to 51, P < .001), less likely to feel feminine (adjusted odds ratio 4.0; 95% confidence interval 1.2 to 15, P = .03), and less likely to feel sexually attractive (adjusted odds ratio 4.6; 95% confidence interval 1.4 to 17, P = .02) than normal controls. The groups were similar in their feeling of dissatisfaction with appearance when dressed, difficulty looking at themselves naked, avoiding people because of appearance, and overall dissatisfaction with their body. Subjects with advanced pelvic organ prolapse had significantly lower quality of life on the physical scale of the SF-12 (mean 42; 95% confidence interval 39 to 45 versus mean 50; 95% confidence interval 47 to 53, P < .009). However, no differences between groups were noted on the mental scale of the SF-12 (mean 51; 95% confidence interval 50 to 54 versus mean 50; 95% confidence interval 47 to 52, P = .56). Additionally, subjects with advanced pelvic organ prolapse scored significantly worse on the prolapse, urinary, and colorectal scales and overall summary score of the Pelvic Floor Distress Inventory-20 than normal controls (mean summary score 104; 95% confidence interval 90 to 118 versus mean 29; 95% confidence interval 16 to 43, P < .0001), indicating a decrease in condition-specific quality of life. Worsening body image correlated with lower quality of life on both the physical and mental scales of the SF-12 as well as the prolapse, urinary, and colorectal scales and overall summary score of the Pelvic Floor Distress Inventory-20 in subjects with advanced pelvic organ prolapse. Women seeking treatment for advanced pelvic organ prolapse have decreased body image and overall quality of life. Body image may be a key determinant for quality of life in patients with advanced prolapse and may be an important outcome measure for treatment evaluation in clinical trials.

  9. Exercise during pregnancy in normal-weight women and risk of preterm birth: a systematic review and meta-analysis of randomized controlled trials.

    PubMed

    Di Mascio, Daniele; Magro-Malosso, Elena Rita; Saccone, Gabriele; Marhefka, Gregary D; Berghella, Vincenzo

    2016-11-01

    Preterm birth is the major cause of perinatal mortality in the United States. In the past, pregnant women have been recommended to not exercise because of presumed risks of preterm birth. Physical activity has been theoretically related to preterm birth because it increases the release of catecholamines, especially norepinephrine, which might stimulate myometrial activity. Conversely, exercise may reduce the risk of preterm birth by other mechanisms such as decreased oxidative stress or improved placenta vascularization. Therefore, the safety of exercise regarding preterm birth and its effects on gestational age at delivery remain controversial. The objective of the study was to evaluate the effects of exercise during pregnancy on the risk of preterm birth. MEDLINE, EMBASE, Web of Sciences, Scopus, ClinicalTrial.gov, OVID, and Cochrane Library were searched from the inception of each database to April 2016. Selection criteria included only randomized clinical trials of pregnant women randomized before 23 weeks to an aerobic exercise regimen or not. Types of participants included women of normal weight with uncomplicated, singleton pregnancies without any obstetric contraindication to physical activity. The summary measures were reported as relative risk or as mean difference with 95% confidence intervals. The primary outcome was the incidence of preterm birth <37 weeks. Of the 2059 women included in the meta-analysis, 1022 (49.6%) were randomized to the exercise group and 1037 (50.4%) to the control group. Aerobic exercise lasted about 35-90 minutes 3-4 times per week. Women who were randomized to aerobic exercise had a similar incidence of preterm birth of <37 weeks (4.5% vs 4.4%; relative risk, 1.01, 95% confidence interval, 0.68-1.50) and a similar mean gestational age at delivery (mean difference, 0.05 week, 95% confidence interval, -0.07 to 0.17) compared with controls. Women in the exercise group had a significantly higher incidence of vaginal delivery (73.6% vs 67.5%; relative risk, 1.09, 95% confidence interval, 1.04-1.15) and a significantly lower incidence of cesarean delivery (17.9% vs 22%; relative risk, 0.82, 95% confidence interval, 0.69-0.97) compared with controls. The incidence of operative vaginal delivery (12.9% vs 16.5%; relative risk, 0.78, 95% confidence interval, 0.61-1.01) was similar in both groups. Women in the exercise group had a significantly lower incidence of gestational diabetes mellitus (2.9% vs 5.6%; relative risk, 0.51, 95% confidence interval, 0.31-0.82) and a significantly lower incidence of hypertensive disorders (1.0% vs 5.6%; relative risk, 0.21, 95% confidence interval, 0.09-0.45) compared with controls. No differences in low birthweight (5.2% vs 4.7%; relative risk, 1.11, 95% confidence interval, 0.72-1.73) and mean birthweight (mean difference, -10.46 g, 95% confidence interval, -47.10 to 26.21) between the exercise group and controls were found. Aerobic exercise for 35-90 minutes 3-4 times per week during pregnancy can be safely performed by normal-weight women with singleton, uncomplicated gestations because this is not associated with an increased risk of preterm birth or with a reduction in mean gestational age at delivery. Exercise was associated with a significantly higher incidence of vaginal delivery and a significantly lower incidence of cesarean delivery, with a significantly lower incidence of gestational diabetes mellitus and hypertensive disorders and therefore should be encouraged. Copyright © 2016. Published by Elsevier Inc.

  10. Factors Associated With Bites to a Child From a Dog Living in the Same Home: A Bi-National Comparison

    PubMed Central

    Messam, Locksley L. McV.; Kass, Philip H.; Chomel, Bruno B.; Hart, Lynette A.

    2018-01-01

    We conducted a veterinary clinic-based retrospective cohort study aimed at identifying child-, dog-, and home-environment factors associated with dog bites to children aged 5–15 years living in the same home as a dog in Kingston, Jamaica (n = 236) and San Francisco, USA (n = 61). Secondarily, we wished to compare these factors to risk factors for dog bites to the general public. Participant information was collected via interviewer-administered questionnaire using proxy respondents. Data were analyzed using log-binomial regression to estimate relative risks and associated 95% confidence intervals (CIs) for each exposure–dog bite relationship. Exploiting the correspondence between X% confidence intervals and X% Bayesian probability intervals obtained using a uniform prior distribution, for each exposure, we calculated probabilities of the true (population) RRs ≥ 1.25 or ≤0.8, for positive or negative associations, respectively. Boys and younger children were at higher risk for bites than girls and older children, respectively. Dogs living in a home with no yard space were at an elevated risk (RR = 2.97; 95% CI: 1.06–8.33) of biting a child living in the same home, compared to dogs that had yard space. Dogs routinely allowed inside for some portion of the day (RR = 3.00; 95% CI: 0.94–9.62) and dogs routinely allowed to sleep in a family member's bedroom (RR = 2.82; 95% CI: 1.17–6.81) were also more likely to bite a child living in the home than those that were not. In San Francisco, but less so in Kingston, bites were inversely associated with the number of children in the home. While in Kingston, but not in San Francisco, smaller breeds and dogs obtained for companionship were at higher risk for biting than larger breeds and dogs obtained for protection, respectively. Overall, for most exposures, the observed associations were consistent with population RRs of practical importance (i.e., RRs ≥ 1.25 or ≤0.8). Finally, we found substantial consistency between risk factors for bites to children and previously reported risk factors for general bites. PMID:29780810
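
    The correspondence the authors exploit is easy to operationalize: under a uniform prior, the normal-theory CI for the log relative risk doubles as a posterior, so tail probabilities follow directly. A sketch using the reported no-yard-space estimate (RR = 2.97; 95% CI, 1.06-8.33):

```python
import numpy as np
from scipy import stats

def prob_rr_exceeds(rr, lo, hi, threshold=1.25):
    """P(true RR >= threshold), treating the normal-theory CI for log RR as a
    posterior under a uniform prior (the correspondence used in the paper)."""
    se = (np.log(hi) - np.log(lo)) / (2 * 1.96)
    return 1 - stats.norm.cdf((np.log(threshold) - np.log(rr)) / se)

print(f"P(RR >= 1.25) = {prob_rr_exceeds(2.97, 1.06, 8.33):.2f}")
```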

  11. A simple method for assessing occupational exposure via the one-way random effects model.

    PubMed

    Krishnamoorthy, K; Mathew, Thomas; Peng, Jie

    2016-11-01

    A one-way random effects model is postulated for the log-transformed shift-long personal exposure measurements, where the random effect in the model represents an effect due to the worker. Simple closed-form confidence intervals are proposed for the relevant parameters of interest using the method of variance estimates recovery (MOVER). The performance of the confidence bounds is evaluated and compared with those based on the generalized confidence interval approach. Comparison studies indicate that the proposed MOVER confidence bounds are better than the generalized confidence bounds for the overall mean exposure and an upper percentile of the exposure distribution. The proposed methods are illustrated using a few examples involving industrial hygiene data.
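
    MOVER itself is a simple recipe: given point estimates and separate confidence limits for two parameters, limits for their sum are recovered from the squared distances of each limit to its own estimate. The generic sum version below (with invented numbers) conveys the idea; the paper's intervals for the overall mean and upper percentile are specific applications of it.

```python
import numpy as np

def mover_sum(est1, ci1, est2, ci2):
    """MOVER confidence limits for theta1 + theta2 from the point estimates
    and the individual (lower, upper) confidence limits of each parameter."""
    l = est1 + est2 - np.sqrt((est1 - ci1[0]) ** 2 + (est2 - ci2[0]) ** 2)
    u = est1 + est2 + np.sqrt((ci1[1] - est1) ** 2 + (ci2[1] - est2) ** 2)
    return l, u

# Hypothetical components of a log-scale exposure limit
print(mover_sum(1.20, (0.95, 1.45), 0.80, (0.55, 1.30)))
```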

  12. Bayesian Optimal Interval Design: A Simple and Well-Performing Design for Phase I Oncology Trials

    PubMed Central

    Yuan, Ying; Hess, Kenneth R.; Hilsenbeck, Susan G.; Gilbert, Mark R.

    2016-01-01

    Despite more than two decades of publications that offer more innovative model-based designs, the classical 3+3 design remains the most dominant phase I trial design in practice. In this article, we introduce a new trial design, the Bayesian optimal interval (BOIN) design. The BOIN design is easy to implement in a way similar to the 3+3 design, but is more flexible for choosing the target toxicity rate and cohort size and yields a substantially better performance that is comparable to that of more complex model-based designs. The BOIN design contains the 3+3 design and the accelerated titration design as special cases, thus linking it to established phase I approaches. A numerical study shows that the BOIN design generally outperforms the 3+3 design and the modified toxicity probability interval (mTPI) design. The BOIN design is more likely than the 3+3 design to correctly select the maximum tolerated dose (MTD) and allocate more patients to the MTD. Compared to the mTPI design, the BOIN design has a substantially lower risk of overdosing patients and generally a higher probability of correctly selecting the MTD. User-friendly software is freely available to facilitate the application of the BOIN design. PMID:27407096
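
    Part of the BOIN design's practical appeal is that its dose-finding rule reduces to comparing the observed toxicity rate at the current dose with two fixed boundaries. The sketch below computes those boundaries from the design's published formulas with the default phi1 = 0.6*phi and phi2 = 1.4*phi; the cohort outcome is hypothetical.

```python
import numpy as np

def boin_boundaries(target, phi1=None, phi2=None):
    """Escalation/de-escalation boundaries of the BOIN design; the defaults
    phi1 = 0.6*target and phi2 = 1.4*target follow the original proposal."""
    phi1 = 0.6 * target if phi1 is None else phi1
    phi2 = 1.4 * target if phi2 is None else phi2
    lam_e = (np.log((1 - phi1) / (1 - target))
             / np.log(target * (1 - phi1) / (phi1 * (1 - target))))
    lam_d = (np.log((1 - target) / (1 - phi2))
             / np.log(phi2 * (1 - target) / (target * (1 - phi2))))
    return lam_e, lam_d

lam_e, lam_d = boin_boundaries(0.30)       # target DLT rate of 30%
n_treated, n_dlt = 6, 1                    # hypothetical cohort outcome
p_hat = n_dlt / n_treated
print(f"lambda_e = {lam_e:.3f}, lambda_d = {lam_d:.3f}")
print("escalate" if p_hat <= lam_e else "de-escalate" if p_hat >= lam_d else "stay")
```

    For a 30% toxicity target this reproduces the familiar 0.236/0.358 boundaries, so with 1 toxicity in 6 patients the rule says to escalate.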

  13. Bayes Factor Approaches for Testing Interval Null Hypotheses

    ERIC Educational Resources Information Center

    Morey, Richard D.; Rouder, Jeffrey N.

    2011-01-01

    Psychological theories are statements of constraint. The role of hypothesis testing in psychology is to test whether specific theoretical constraints hold in data. Bayesian statistics is well suited to the task of finding supporting evidence for constraint, because it allows for comparing evidence for 2 hypotheses against each another. One issue…

  14. Simulation data for an estimation of the maximum theoretical value and confidence interval for the correlation coefficient.

    PubMed

    Rocco, Paolo; Cilurzo, Francesco; Minghetti, Paola; Vistoli, Giulio; Pedretti, Alessandro

    2017-10-01

    The data presented in this article are related to the article titled "Molecular Dynamics as a tool for in silico screening of skin permeability" (Rocco et al., 2017) [1]. Knowledge of the confidence interval and maximum theoretical value of the correlation coefficient r can prove useful to estimate the reliability of developed predictive models, in particular when there is great variability in compiled experimental datasets. In this Data in Brief article, data from purposely designed numerical simulations are presented to show how much the maximum r value is worsened by increasing the data uncertainty. The corresponding confidence interval of r is determined by using the Fisher r → Z transform.
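
    The Fisher r → Z construction referenced above is a one-liner in practice: transform r, build a normal interval on the z scale with standard error 1/sqrt(n - 3), and back-transform. A sketch with an arbitrary r and n:

```python
import numpy as np
from scipy import stats

def fisher_ci(r, n, conf=0.95):
    """Confidence interval for a correlation via the Fisher r -> z transform."""
    z = np.arctanh(r)                    # Fisher transform of r
    se = 1 / np.sqrt(n - 3)
    zcrit = stats.norm.ppf(0.5 + conf / 2)
    return np.tanh([z - zcrit * se, z + zcrit * se])

print(fisher_ci(0.80, n=50))             # e.g. r = 0.80 from 50 data points
```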

  15. A Bayesian approach to reliability and confidence

    NASA Technical Reports Server (NTRS)

    Barnes, Ron

    1989-01-01

    The historical evolution of NASA's interest in quantitative measures of reliability assessment is outlined. The introduction of some quantitative methodologies into the Vehicle Reliability Branch of the Safety, Reliability and Quality Assurance (SR and QA) Division at Johnson Space Center (JSC) was noted along with the development of the Extended Orbiter Duration--Weakest Link study which will utilize quantitative tools for a Bayesian statistical analysis. Extending the earlier work of NASA sponsor, Richard Heydorn, researchers were able to produce a consistent Bayesian estimate for the reliability of a component and hence by a simple extension for a system of components in some cases where the rate of failure is not constant but varies over time. Mechanical systems in general have this property since the reliability usually decreases markedly as the parts degrade over time. While they have been able to reduce the Bayesian estimator to a simple closed form for a large class of such systems, the form for the most general case needs to be attacked by the computer. Once a table is generated for this form, researchers will have a numerical form for the general solution. With this, the corresponding probability statements about the reliability of a system can be made in the most general setting. Note that the utilization of uniform Bayesian priors represents a worst case scenario in the sense that as researchers incorporate more expert opinion into the model, they will be able to improve the strength of the probability calculations.
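
    For a pass/fail test record with a constant failure rate, the closed form alluded to above is the familiar conjugate update: a uniform Beta(1, 1) prior plus binomial data gives a Beta posterior for component reliability, which propagates to a series system by powering posterior draws. The counts below are hypothetical.

```python
import numpy as np
from scipy import stats

s, n = 48, 50                            # hypothetical successes out of n trials
post = stats.beta(1 + s, 1 + n - s)      # posterior under the uniform prior
print(f"posterior mean reliability = {post.mean():.3f}")
print(f"95% credible interval = ({post.ppf(0.025):.3f}, {post.ppf(0.975):.3f})")

# Series system of k independent, identical components: system reliability R^k
k = 4
draws = post.rvs(100_000, random_state=0) ** k
print(f"system 95% CrI = ({np.quantile(draws, 0.025):.3f}, "
      f"{np.quantile(draws, 0.975):.3f})")
```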

  16. Selected aspects of prior and likelihood information for a Bayesian classifier in a road safety analysis.

    PubMed

    Nowakowska, Marzena

    2017-04-01

    The development of a Bayesian logistic regression model for classifying road accident severity is discussed. Informative priors already exploited in the literature (method of moments, maximum likelihood estimation, and two-stage Bayesian updating), along with an original Boot prior proposal, are investigated for situations where no expert opinion is available. In addition, two possible approaches to updating the priors, in the form of unbalanced and balanced training data sets, are presented. The obtained Bayesian logistic models are assessed on the basis of the deviance information criterion (DIC), highest probability density (HPD) intervals, and coefficients of variation estimated for the model parameters. Verification of model accuracy is based on sensitivity, specificity, and the harmonic mean of sensitivity and specificity, all calculated from a test data set. The models obtained from the balanced training data set have better classification quality than those obtained from the unbalanced training data set. The two-stage Bayesian updating prior model and the Boot prior model, both identified with the use of the balanced training data set, outperform the non-informative, method-of-moments, and maximum likelihood estimation prior models. It is important to note that one should be careful when interpreting the parameters, since different priors can lead to different models. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. A review and comparison of Bayesian and likelihood-based inferences in beta regression and zero-or-one-inflated beta regression.

    PubMed

    Liu, Fang; Eugenio, Evercita C

    2018-04-01

    Beta regression is an increasingly popular statistical technique in medical research for modeling outcomes that assume values in (0, 1), such as proportions and patient-reported outcomes. When outcomes take values in the intervals [0,1), (0,1], or [0,1], zero-or-one-inflated beta (zoib) regression can be used. We provide a thorough review of beta regression and zoib regression in their modeling, inferential, and computational aspects via likelihood-based and Bayesian approaches. Via simulation studies, we demonstrate the statistical and practical importance of correctly modeling the inflation at zero/one rather than replacing such values ad hoc with values close to zero or one; the latter approach can lead to biased estimates and invalid inferences. We also show via simulation studies that the likelihood-based approach is in general computationally faster than the MCMC algorithms used in Bayesian inference, but runs the risk of non-convergence, large biases, and sensitivity to starting values in the optimization algorithm, especially with clustered/correlated data, data with sparse inflation at zero and one, and data that warrant regularization of the likelihood. The disadvantages of the regular likelihood-based approach make the Bayesian approach an attractive alternative in these cases. Software packages and tools for fitting beta and zoib regressions in both the likelihood-based and Bayesian frameworks are also reviewed.
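
    The zoib likelihood under discussion is a three-part mixture: point masses at 0 and 1 plus a beta density on the interior, usually in the mean-precision parameterization. A minimal log-likelihood (with fixed, invented parameter values; a real fit would maximize this or sample its posterior):

```python
import numpy as np
from scipy import stats

def zoib_loglik(y, p0, p1, mu, phi):
    """Log-likelihood of a zero-or-one-inflated beta model: mass p0 at 0, mass
    p1 at 1, and a Beta(mu*phi, (1-mu)*phi) density on the open interval."""
    y = np.asarray(y, dtype=float)
    a, b = mu * phi, (1 - mu) * phi
    interior = stats.beta.logpdf(np.clip(y, 1e-12, 1 - 1e-12), a, b)
    ll = np.where(y == 0, np.log(p0),
                  np.where(y == 1, np.log(p1),
                           np.log1p(-(p0 + p1)) + interior))
    return ll.sum()

y = [0.0, 0.12, 0.55, 0.83, 1.0, 0.40]   # toy proportions on [0, 1]
print(zoib_loglik(y, p0=0.15, p1=0.10, mu=0.5, phi=5.0))
```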

  18. Bayesian approach to estimate AUC, partition coefficient and drug targeting index for studies with serial sacrifice design.

    PubMed

    Wang, Tianli; Baron, Kyle; Zhong, Wei; Brundage, Richard; Elmquist, William

    2014-03-01

    The current study presents a Bayesian approach to non-compartmental analysis (NCA) that provides accurate and precise estimates of AUC(0-∞) and of any NCA parameter derived from it. In order to assess the performance of the proposed method, 1,000 simulated datasets were generated in different scenarios. A Bayesian method was used to estimate the tissue and plasma AUC(0-∞) values and the tissue-to-plasma AUC(0-∞) ratio. The posterior medians and the coverage of 95% credible intervals for the true parameter values were examined. The method was applied to laboratory data from a mouse brain distribution study with a serial sacrifice design for illustration. The Bayesian NCA approach is accurate and precise in point estimation of the AUC(0-∞) and the partition coefficient under a serial sacrifice design. It also provides a consistently good variance estimate, even considering the variability of the data and the physiological structure of the pharmacokinetic model. The application in the case study obtained a physiologically reasonable posterior distribution of AUC, with a posterior median close to the value estimated by classic Bailer-type methods. This Bayesian NCA approach for sparse data analysis provides statistical inference on the variability of AUC(0-∞)-based parameters such as the partition coefficient and the drug targeting index, so that the comparison of these parameters following destructive sampling becomes statistically feasible.
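
    The classic Bailer-type estimator used as the benchmark has a short closed form under destructive sampling: a trapezoidal AUC built from the per-time-point means, with variance assembled from the per-time-point standard errors. A sketch with invented concentrations:

```python
import numpy as np

def bailer_auc(times, conc_by_time):
    """Bailer-type AUC(0-tlast) and its variance for serial sacrifice designs,
    where each time point comes from a different group of animals."""
    times = np.asarray(times, dtype=float)
    means = np.array([np.mean(c) for c in conc_by_time])
    sem2 = np.array([np.var(c, ddof=1) / len(c) for c in conc_by_time])
    w = np.empty_like(times)
    w[0] = (times[1] - times[0]) / 2
    w[-1] = (times[-1] - times[-2]) / 2
    w[1:-1] = (times[2:] - times[:-2]) / 2        # interior trapezoid weights
    return w @ means, w**2 @ sem2

times = [0.25, 1, 4, 8, 24]                       # h; hypothetical design
conc = [[12, 10, 14], [22, 25, 20], [15, 13, 17], [8, 9, 7], [2, 3, 2]]  # ng/mL
auc, var = bailer_auc(times, conc)
print(f"AUC = {auc:.1f} ng*h/mL, SE = {np.sqrt(var):.1f}")
```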

  19. A Pragmatic Bayesian Perspective on Correlation Analysis. The exoplanetary gravity - stellar activity case

    NASA Astrophysics Data System (ADS)

    Figueira, P.; Faria, J. P.; Adibekyan, V. Zh.; Oshagh, M.; Santos, N. C.

    2016-11-01

    We apply the Bayesian framework to assess the presence of a correlation between two quantities. To do so, we estimate the probability distribution of the parameter of interest, ρ, characterizing the strength of the correlation. We provide an implementation of these ideas and concepts using the Python programming language and the pyMC module in a very short (~130 lines of code, heavily commented) and user-friendly program. We used this tool to assess the presence and properties of the correlation between planetary surface gravity and stellar activity level as measured by the log(R'_HK) indicator. The results of the Bayesian analysis are qualitatively similar to those obtained via p-value analysis, and support the presence of a correlation in the data. The results are more robust in their derivation and more informative, revealing interesting features such as asymmetric posterior distributions or markedly different credible intervals, and allowing for a deeper exploration. We encourage readers interested in this kind of problem to apply our code to their own scientific problems. A full understanding of the Bayesian framework can only be gained through the insight that comes from handling priors, assessing the convergence of Monte Carlo runs, and a multitude of other practical problems. We hope to contribute so that Bayesian analysis becomes a tool in the toolkit of researchers, and that they come to understand its advantages and limitations through experience.
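
    A compressed version of the same idea in modern PyMC (the v5 API rather than the pyMC module the authors used) puts a flat prior on rho and, after standardizing both variables, uses the conditional bivariate-normal likelihood y | x ~ N(rho*x, 1 - rho^2). This is a simplification of the authors' program, run here on synthetic data.

```python
import numpy as np
import pymc as pm
import arviz as az

rng = np.random.default_rng(3)
x = rng.normal(size=60)
y = 0.5 * x + rng.normal(scale=np.sqrt(0.75), size=60)   # true rho = 0.5
xs, ys = (x - x.mean()) / x.std(), (y - y.mean()) / y.std()

with pm.Model():
    rho = pm.Uniform("rho", -1, 1)           # flat prior on the correlation
    pm.Normal("obs", mu=rho * xs, sigma=pm.math.sqrt(1 - rho**2), observed=ys)
    idata = pm.sample(2000, tune=1000, chains=2, random_seed=0, progressbar=False)

print(az.summary(idata, var_names=["rho"], hdi_prob=0.95))
```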

  20. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Wagler, Amy E.

    2014-01-01

    Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

  1. Likelihood-Based Confidence Intervals in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Oort, Frans J.

    2011-01-01

    In exploratory or unrestricted factor analysis, all factor loadings are free to be estimated. In oblique solutions, the correlations between common factors are free to be estimated as well. The purpose of this article is to show how likelihood-based confidence intervals can be obtained for rotated factor loadings and factor correlations, by…

  2. Characterizing the Mathematics Anxiety Literature Using Confidence Intervals as a Literature Review Mechanism

    ERIC Educational Resources Information Center

    Zientek, Linda Reichwein; Yetkiner, Z. Ebrar; Thompson, Bruce

    2010-01-01

    The authors report the contextualization of effect sizes within mathematics anxiety research, and more specifically within research using the Mathematics Anxiety Rating Scale (MARS) and the MARS for Adolescents (MARS-A). The effect sizes from 45 studies were characterized by graphing confidence intervals (CIs) across studies involving (a) adults…

  3. Statistical inference for remote sensing-based estimates of net deforestation

    Treesearch

    Ronald E. McRoberts; Brian F. Walters

    2012-01-01

    Statistical inference requires expression of an estimate in probabilistic terms, usually in the form of a confidence interval. An approach to constructing confidence intervals for remote sensing-based estimates of net deforestation is illustrated. The approach is based on post-classification methods using two independent forest/non-forest classifications because...

  4. Estimating Standardized Linear Contrasts of Means with Desired Precision

    ERIC Educational Resources Information Center

    Bonett, Douglas G.

    2009-01-01

    L. Wilkinson and the Task Force on Statistical Inference (1999) recommended reporting confidence intervals for measures of effect sizes. If the sample size is too small, the confidence interval may be too wide to provide meaningful information. Recently, K. Kelley and J. R. Rausch (2006) used an iterative approach to computer-generate tables of…

  5. UNDERSTANDING SYSTEMATIC MEASUREMENT ERROR IN THERMAL-OPTICAL ANALYSIS FOR PM BLACK CARBON USING RESPONSE SURFACES AND SURFACE CONFIDENCE INTERVALS

    EPA Science Inventory

    Results from a NIST-EPA Interagency Agreement on Understanding Systematic Measurement Error in Thermal-Optical Analysis for PM Black Carbon Using Response Surfaces and Surface Confidence Intervals will be presented at the American Association for Aerosol Research (AAAR) 24th Annu...

  6. The impact of effort-reward imbalance on quality of life among Japanese working men.

    PubMed

    Watanabe, Mayumi; Tanaka, Katsutoshi; Aratake, Yutaka; Kato, Noritada; Sakata, Yumi

    2008-07-01

    Health-related quality of life (HRQL) is an important measure of health outcome in working and healthy populations. Here, we investigated the impact of effort-reward imbalance (ERI), a representative work-stress model, on HRQL of Japanese working men. The study targeted 1,096 employees from a manufacturing plant in Japan. To assess HRQL and ERI, participants were surveyed using the Japanese version of the Short-Form 8 Health Survey (SF-8) and effort-reward imbalance model. Of the 1,096 employees, 1,057 provided valid responses to the questionnaire. For physical summary scores, the adjusted effort-reward imbalance odds ratios of middle vs. bottom and top vs. bottom tertiles were 0.24 (95% confidence interval, 0.08-0.70) and 0.09 (95% confidence interval, 0.03-0.28), respectively. For mental summary scores, ratios were 0.21 (95% confidence interval, 0.07-0.63) and 0.07 (95% confidence interval, 0.02-0.25), respectively. These findings demonstrate that effort-reward imbalance is independently associated with HRQL among Japanese employees.

  7. Simultaneous Force Regression and Movement Classification of Fingers via Surface EMG within a Unified Bayesian Framework.

    PubMed

    Baldacchino, Tara; Jacobs, William R; Anderson, Sean R; Worden, Keith; Rowson, Jennifer

    2018-01-01

    This contribution presents a novel methodology for myoelectric control using surface electromyographic (sEMG) signals recorded during finger movements. A multivariate Bayesian mixture of experts (MoE) model is introduced which provides a powerful method for modeling force regression at the fingertips, while also performing finger movement classification as a by-product of the modeling algorithm. Bayesian inference of the model allows uncertainties to be naturally incorporated into the model structure. This method is tested using data from the publicly released NinaPro database, which consists of sEMG recordings for 6 degree-of-freedom force activations for 40 intact subjects. The results demonstrate that the MoE model achieves similar performance compared to the benchmark set by the authors of NinaPro for finger force regression. Additionally, inherent to the Bayesian framework is the inclusion of uncertainty in the model parameters, naturally providing confidence bounds on the force regression predictions. Furthermore, the integrated clustering step allows a detailed investigation into classification of the finger movements, without incurring any extra computational effort. Subsequently, a systematic approach to assessing the importance of the number of electrodes needed for accurate control is performed via sensitivity analysis techniques. A slight degradation in regression performance is observed for a reduced number of electrodes, while classification performance is unaffected.

  8. An adaptive Gaussian process-based method for efficient Bayesian experimental design in groundwater contaminant source identification problems: ADAPTIVE GAUSSIAN PROCESS-BASED INVERSION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jiangjiang; Li, Weixuan; Zeng, Lingzao

    Surrogate models are commonly used in Bayesian approaches such as Markov Chain Monte Carlo (MCMC) to avoid repetitive CPU-demanding model evaluations. However, the approximation error of a surrogate may lead to biased estimations of the posterior distribution. This bias can be corrected by constructing a very accurate surrogate or by implementing MCMC in a two-stage manner. Since the two-stage MCMC requires extra original model evaluations, the computational cost is still high. If measurement information is incorporated, a locally accurate approximation of the original model can be adaptively constructed with low computational cost. Based on this idea, we propose a Gaussian process (GP) surrogate-based Bayesian experimental design and parameter estimation approach for groundwater contaminant source identification problems. A major advantage of the GP surrogate is that it provides a convenient estimation of the approximation error, which can be incorporated in the Bayesian formula to avoid over-confident estimation of the posterior distribution. The proposed approach is tested with a numerical case study. Without sacrificing estimation accuracy, the new approach achieves about a 200-fold speed-up compared to our previous work using two-stage MCMC.
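
    The key trick (folding the surrogate's own predictive variance into the likelihood so the posterior is not over-confident where the surrogate is inaccurate) can be shown in one dimension with scikit-learn's GP; the forward model, noise levels, and parameter below are stand-ins for the groundwater solver, not the paper's setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def forward_model(theta):
    # Stand-in for an expensive groundwater transport solver
    return np.sin(3 * theta) + 0.5 * theta

rng = np.random.default_rng(7)
theta_train = rng.uniform(0, 2, 12)[:, None]
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-8)
gp.fit(theta_train, forward_model(theta_train.ravel()))

y_obs, sigma_obs = forward_model(0.9) + 0.05, 0.1   # one noisy measurement

def log_post(theta):
    mu, sd = gp.predict(np.atleast_2d(theta), return_std=True)
    # Inflate the observation noise with the GP's predictive variance
    var = sigma_obs**2 + sd[0] ** 2
    return -0.5 * ((y_obs - mu[0]) ** 2 / var + np.log(var))

grid = np.linspace(0, 2, 200)
lp = np.array([log_post(t) for t in grid])
print(f"posterior mode near theta = {grid[lp.argmax()]:.2f}")
```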

  9. Sex hormones and the risk of type 2 diabetes mellitus: A 9-year follow up among elderly men in Finland.

    PubMed

    Salminen, Marika; Vahlberg, Tero; Räihä, Ismo; Niskanen, Leo; Kivelä, Sirkka-Liisa; Irjala, Kerttu

    2015-05-01

    To analyze whether sex hormone levels predict the incidence of type 2 diabetes among elderly Finnish men. This was a prospective population-based study with a 9-year follow-up period. The study population in the municipality of Lieto, Finland, consisted of elderly (age ≥64 years) men free of type 2 diabetes at baseline in 1998-1999 (n = 430). Body mass index- and cardiovascular disease-adjusted hazard ratios and their 95% confidence intervals for type 2 diabetes predicted by testosterone, free testosterone, sex hormone-binding globulin, luteinizing hormone, and the testosterone/luteinizing hormone ratio were estimated. A total of 30 new cases of type 2 diabetes developed during the follow-up period. After adjustment, only higher levels of testosterone (hazard ratio for one-unit increase 0.93, 95% confidence interval 0.87-0.99, P = 0.020) and free testosterone (hazard ratio for 10-unit increase 0.96, 95% confidence interval 0.91-1.00, P = 0.044) were associated with a lower risk of incident type 2 diabetes during follow-up. These associations (0.94, 95% confidence interval 0.87-1.00, P = 0.050 and 0.95, 95% confidence interval 0.90-1.00, P = 0.035, respectively) persisted even after additional adjustment for sex hormone-binding globulin. Higher levels of testosterone and free testosterone independently predicted a reduced risk of type 2 diabetes in the elderly men. © 2014 Japan Geriatrics Society.

  10. N-acetyltransferase 2 gene polymorphism as a biomarker for susceptibility to bladder cancer in Bangladeshi population.

    PubMed

    Hosen, Md Bayejid; Islam, Jahidul; Salam, Md Abdus; Islam, Md Fakhrul; Hawlader, M Zakir Hossain; Kabir, Yearul

    2015-03-01

    To investigate the association of the three most common single nucleotide polymorphisms of the N-acetyltransferase 2 gene, together with cigarette smoking, with the risk of developing bladder cancer and its aggressiveness. A case-control study on 102 bladder cancer patients and 140 control subjects was conducted. The genomic DNA was extracted from peripheral white blood cells, and N-acetyltransferase 2 alleles were differentiated by polymerase chain reaction-based restriction fragment length polymorphism methods. Bladder cancer risk was estimated as odds ratios and 95% confidence intervals using binary logistic regression models adjusting for age and gender. Overall, N-acetyltransferase 2 slow genotypes were associated with bladder cancer risk (odds ratio=4.45; 95% confidence interval=2.26-8.77). Cigarette smokers with slow genotypes were found to have a sixfold increased risk of developing bladder cancer (odds ratio=6.05; 95% confidence interval=2.23-15.82). Patients with slow acetylating genotypes were more prone to develop high-grade (odds ratio=6.63; 95% confidence interval=1.15-38.13; P<0.05) and invasive (odds ratio=10.6; 95% confidence interval=1.00-111.5; P=0.05) tumors. The N-acetyltransferase 2 slow genotype together with tobacco smoking increases bladder cancer risk. Patients with N-acetyltransferase 2 slow genotypes were more likely to develop a high-grade and invasive tumor. The N-acetyltransferase 2 slow genotype is an important genetic determinant for bladder cancer in the Bangladeshi population. © 2014 Wiley Publishing Asia Pty Ltd.

  11. Sodium-glucose cotransporter 2 (SGLT2) inhibitors and fracture risk in patients with type 2 diabetes mellitus: A meta-analysis.

    PubMed

    Ruanpeng, Darin; Ungprasert, Patompong; Sangtian, Jutarat; Harindhanavudhi, Tasma

    2017-09-01

    Sodium-glucose cotransporter 2 (SGLT2) inhibitors could potentially alter calcium and phosphate homeostasis and may increase the risk of bone fracture. The current meta-analysis was conducted to investigate the fracture risk among patients with type 2 diabetes mellitus treated with SGLT2 inhibitors. Randomized controlled trials that compared the efficacy of SGLT2 inhibitors with placebo were identified. The risk ratios of fracture among patients who received SGLT2 inhibitors versus placebo were extracted from each study. Pooled risk ratios and 95% confidence intervals were calculated using a random-effects Mantel-Haenszel analysis. A total of 20 studies with 8286 patients treated with SGLT2 inhibitors were included. The pooled risk ratio of bone fracture in patients receiving SGLT2 inhibitors versus placebo was 0.67 (95% confidence interval, 0.42-1.07). The pooled risk ratio for canagliflozin, dapagliflozin, and empagliflozin was 0.66 (95% confidence interval, 0.37-1.19), 0.84 (95% confidence interval, 0.22-3.18), and 0.57 (95% confidence interval, 0.20-1.59), respectively. An increased risk of bone fracture among patients with type 2 diabetes mellitus treated with SGLT2 inhibitors compared with placebo was not observed in this meta-analysis. However, the results were limited by the short duration of treatment/follow-up and the low incidence of the event of interest. Copyright © 2017 John Wiley & Sons, Ltd.
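
    The pooling step can be made concrete with a standard random-effects (DerSimonian-Laird, inverse-variance) calculation on invented per-study counts. The paper reports a random-effects Mantel-Haenszel analysis; the sketch below is the common inverse-variance variant, shown only to illustrate how a pooled risk ratio and its 95% confidence interval arise, not to reproduce the study's computation.

```python
import numpy as np

# Invented per-study 2x2 counts: (events_trt, n_trt, events_ctl, n_ctl).
studies = [(3, 500, 5, 480), (1, 300, 2, 310), (4, 800, 6, 790)]

log_rr, var = [], []
for e1, n1, e0, n0 in studies:
    rr = (e1 / n1) / (e0 / n0)
    log_rr.append(np.log(rr))
    var.append(1/e1 - 1/n1 + 1/e0 - 1/n0)   # variance of log risk ratio
log_rr, var = np.array(log_rr), np.array(var)

# DerSimonian-Laird between-study variance tau^2.
w = 1 / var
mu_fixed = np.sum(w * log_rr) / np.sum(w)
Q = np.sum(w * (log_rr - mu_fixed)**2)
df = len(studies) - 1
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random-effects pooled estimate and 95% CI.
w_re = 1 / (var + tau2)
mu_re = np.sum(w_re * log_rr) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
print(f"pooled RR = {np.exp(mu_re):.2f}, "
      f"95% CI = {np.exp(mu_re - 1.96*se_re):.2f}-{np.exp(mu_re + 1.96*se_re):.2f}")
```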

  12. Influence of gender role attitudes on smoking and drinking among girls from Jujuy, Argentina.

    PubMed

    Mejia, Raul; Kaplan, Celia P; Alderete, Ethel; Gregorich, Steven E; Pérez-Stable, Eliseo J

    2013-09-01

    To evaluate the effect of gender role attitudes on tobacco and alcohol use among Argentinean girls. Cross-sectional survey of 10th grade students attending 27 randomly selected schools in Jujuy, Argentina. Questions about tobacco and alcohol use were adapted from global youth surveys. Five items with 5-point agreement-disagreement response options assessed attitudes toward egalitarian (higher score) gender roles. In total, 2133 girls aged 13-18 years responded (71% Indigenous, 22% mixed Indigenous/European, and 7% European). Of these, 60% had ever smoked, 32% were current smokers, 58% had ever drunk alcohol, 27% had drunk in the previous month, and 13% had had ≥5 drinks on one occasion. The mean response to the gender role scale was 3.49 (95% confidence interval 3.41-3.57) out of 5, tending toward egalitarian attitudes. Logistic regression models using the gender role scale score as the main predictor and adjusting for demographic and social confounders showed that egalitarian gender role attitudes were associated with ever smoking (odds ratio = 1.25; 95% confidence interval 1.09-1.44), ever drinking (odds ratio = 1.24; 95% confidence interval 1.10-1.40), drinking in the prior month (odds ratio = 1.21; 95% confidence interval 1.07-1.37), and ≥5 drinks on one occasion (odds ratio = 1.15; 95% confidence interval 1.00-1.33), but not with current smoking. Girls in Jujuy who reported more egalitarian gender role attitudes had higher odds of smoking or drinking. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. Hostility and the risk of peptic ulcer in the GAZEL cohort.

    PubMed

    Lemogne, Cédric; Schuster, Jean-Pierre; Levenstein, Susan; Melchior, Maria; Nabi, Hermann; Ducimetière, Pierre; Limosin, Frédéric; Goldberg, Marcel; Zins, Marie; Consoli, Silla M

    2015-02-01

    Evidence for an association between hostility and peptic ulcer mainly relies on cross-sectional studies. Prospective studies are rare and have not used a validated measure of hostility. This prospective study aimed to examine the association between hostility and peptic ulcer in the large-scale French GAZEL cohort. In 1993, 14,674 participants completed the Buss and Durkee Hostility Inventory. Participants were followed up annually from 1994 to 2011. Diagnosis of peptic ulcer was self-reported. The association between hostility scores and ulcer incidence was measured by hazard ratios (HR) and 95% confidence intervals computed through Cox regression. Among 13,539 participants free of peptic ulcer history at baseline, 816 reported a peptic ulcer during a mean follow-up of 16.8 years. Adjusting for potential confounders, including smoking, occupational grade, and a proxy for nonsteroidal anti-inflammatory drug exposure, ulcer incidence was positively associated with total hostility (HR per SD: 1.23, confidence interval: 1.14-1.31), behavioral hostility (HR per SD: 1.13, confidence interval: 1.05-1.21), cognitive hostility (HR per SD: 1.26, confidence interval: 1.18-1.35), and irritability (HR per SD: 1.20, confidence interval: 1.12-1.29). The risk of peptic ulcer increased from the lowest to the highest quartile for all hostility measures (p for linear trend < .05). Hostility might be associated with an increased risk of peptic ulcer. Should these results be replicated, further studies would be needed to explore the underlying mechanisms.

  14. Evolutionary history of Stratiomyidae (Insecta: Diptera): the molecular phylogeny of a diverse family of flies.

    PubMed

    Brammer, Colin A; von Dohlen, Carol D

    2007-05-01

    Stratiomyidae is a cosmopolitan family of Brachycera (Diptera) that contains over 2800 species. This study focused on the relationships of members of the subfamily Clitellariinae, which has had a complicated taxonomic history. To investigate the monophyly of the Clitellariinae, the relationships of its genera, and the ages of Stratiomyidae lineages, representatives of all 12 subfamilies of Stratiomyidae, totaling 68 taxa, were included in a phylogenetic reconstruction. A Xylomyidae representative, Solva sp., was used as an outgroup. Sequences of EF-1alpha and 28S rRNA genes were analyzed under maximum parsimony with bootstrapping, and under Bayesian methods, to recover the best estimate of phylogeny. A chronogram with estimated dates for all nodes in the phylogeny was generated with the program r8s, and divergence dates and confidence intervals were further explored with the program multidivtime. All subfamilies of Stratiomyidae with more than one representative were found to be monophyletic, except for Stratiomyinae and Clitellariinae. Clitellariinae were distributed among five separate clades in the phylogeny, and Raphiocerinae were nested within Stratiomyinae. Dating analysis suggested an early Cretaceous origin for the common ancestor of extant Stratiomyidae, and a radiation of several major Stratiomyidae lineages in the Late Cretaceous.

  15. Albumin treatment regimen for type 1 hepatorenal syndrome: a dose-response meta-analysis.

    PubMed

    Salerno, Francesco; Navickis, Roberta J; Wilkes, Mahlon M

    2015-11-25

    Recommended treatment for type 1 hepatorenal syndrome consists of albumin and a vasoconstrictor. The optimal albumin dose remains poorly characterized. This meta-analysis aimed to determine the impact of albumin dose on treatment outcomes. Clinical studies of type 1 hepatorenal syndrome treatment with albumin and vasoconstrictor were sought. Search terms included: hepatorenal syndrome; albumin; vasoconstrictor; terlipressin; midodrine; octreotide; noradrenaline; and norepinephrine. A meta-analysis was performed of hepatorenal syndrome reversal and survival in relation to albumin dose. Nineteen clinical studies with 574 total patients were included, comprising 8 randomized controlled trials, 8 prospective studies, and 3 retrospective studies. The pooled percentage of patients achieving hepatorenal syndrome reversal was 49.5% (95% confidence interval, 40.0-59.1%). Increments of 100 g in cumulative albumin dose were accompanied by significantly increased survival (hazard ratio, 1.15; 95% confidence interval, 1.02-1.31; p = 0.023). A non-significant increase of similar magnitude in hepatorenal syndrome reversal was also observed (odds ratio, 1.15; 95% confidence interval, 0.97-1.37; p = 0.10). Expected survival rates at 30 days among patients receiving cumulative albumin doses of 200, 400, and 600 g were 43.2% (95% confidence interval, 36.4-51.3%), 51.4% (95% confidence interval, 46.3-57.1%), and 59.0% (95% confidence interval, 51.9-67.2%), respectively. Neither survival nor hepatorenal syndrome reversal was significantly affected by vasoconstrictor dose or type, treatment duration, age, baseline serum creatinine, bilirubin or albumin, baseline mean arterial pressure, or study design, size, or time period. This meta-analysis suggests a dose-response relationship between infused albumin and survival in patients with type 1 hepatorenal syndrome. It provides the best current evidence on the potential role of albumin dose selection in improving treatment outcomes for type 1 hepatorenal syndrome and furnishes guidance for the design of future dose-ranging studies.

  16. Prevalence of orofacial clefts and risks for nonsyndromic cleft lip with or without cleft palate in newborns at a university hospital from West Mexico.

    PubMed

    Corona-Rivera, Jorge Román; Bobadilla-Morales, Lucina; Corona-Rivera, Alfredo; Peña-Padilla, Christian; Olvera-Molina, Sandra; Orozco-Martín, Miriam A; García-Cruz, Diana; Ríos-Flores, Izabel M; Gómez-Rodríguez, Brian Gabriel; Rivas-Soto, Gemma; Pérez-Molina, J Jesús

    2018-02-19

    We determined the overall prevalence of typical orofacial clefts and the potential risks for nonsyndromic cleft lip with or without cleft palate in a university hospital in West Mexico. For the prevalence, 227 liveborn infants with typical orofacial clefts were included from a total of 81,193 births during 2009-2016 at the "Dr. Juan I. Menchaca" Civil Hospital of Guadalajara (Guadalajara, Jalisco, Mexico). To evaluate potential risks, a case-control study was conducted among 420 newborns, including only those 105 patients with nonsyndromic cleft lip with or without cleft palate (cases) and 315 infants without birth defects (controls). Data were analyzed using multivariable logistic regression, expressed as adjusted odds ratios with 95% confidence intervals. The overall prevalence of typical orofacial clefts was 28 per 10,000 (95% confidence interval: 24.3-31.6), or 1 per 358 live births. The mean values for pre-pregnancy weight, antepartum weight, and pre-pregnancy body mass index were statistically higher among the mothers of cases. Infants with nonsyndromic cleft lip with or without cleft palate had a significantly higher risk for a previous history of any type of congenital anomaly (adjusted odds ratio: 2.7; 95% confidence interval: 1.4-5.1), history of a relative with cleft lip with or without cleft palate (adjusted odds ratio: 19.6; 95% confidence interval: 8.2-47.1), and first-trimester exposures to progestogens (adjusted odds ratio: 6.8; 95% confidence interval: 1.8-25.3), hyperthermia (adjusted odds ratio: 3.4; 95% confidence interval: 1.1-10.6), and common cold (adjusted odds ratio: 3.6; 95% confidence interval: 1.1-11.9). These risks could help explain the high prevalence of orofacial clefts in our region of Mexico; notably, except for a history of relatives with cleft lip with or without cleft palate, most are amenable to modification. © 2018 Japanese Teratology Society.

  17. Fixed ratio combinations of glucagon like peptide 1 receptor agonists with basal insulin: a systematic review and meta-analysis.

    PubMed

    Liakopoulou, Paraskevi; Liakos, Aris; Vasilakou, Despoina; Athanasiadou, Eleni; Bekiari, Eleni; Kazakos, Kyriakos; Tsapas, Apostolos

    2017-06-01

    Basal insulin controls primarily fasting plasma glucose but causes hypoglycaemia and weight gain, whilst glucagon like peptide 1 receptor agonists induce weight loss without increasing risk for hypoglycaemia. We conducted a systematic review and meta-analysis of randomised controlled trials to investigate the efficacy and safety of fixed ratio combinations of basal insulin with glucagon like peptide 1 receptor agonists. We searched Medline, Embase, and the Cochrane Library as well as conference abstracts up to December 2016. We assessed change in haemoglobin A1c, body weight, and incidence of hypoglycaemia and gastrointestinal adverse events. We included eight studies with 5732 participants in the systematic review. Switch from basal insulin to fixed ratio combinations with a glucagon like peptide 1 receptor agonist was associated with a 0.72% reduction in haemoglobin A1c (95% confidence interval -1.03 to -0.41; I² = 93%) and a 2.35 kg reduction in body weight (95% confidence interval -3.52 to -1.19; I² = 93%), reducing also risk for hypoglycaemia (odds ratio 0.70; 95% confidence interval 0.57 to 0.86; I² = 85%) but increasing incidence of nausea (odds ratio 6.89; 95% confidence interval 3.73-12.74; I² = 79%). Similarly, switching patients from treatment with a glucagon like peptide 1 receptor agonist to a fixed ratio combination with basal insulin was associated with a 0.94% reduction in haemoglobin A1c (95% confidence interval -1.11 to -0.77) and an increase in body weight of 2.89 kg (95% confidence interval 2.17-3.61). Fixed ratio combinations of basal insulin with glucagon like peptide 1 receptor agonists improve glycaemic control whilst balancing out risk for hypoglycaemia and gastrointestinal side effects.

  18. Neonatal Infection in Children With Cerebral Palsy: A Registry-Based Cohort Study.

    PubMed

    Smilga, Anne-Sophie; Garfinkle, Jarred; Ng, Pamela; Andersen, John; Buckley, David; Fehlings, Darcy; Kirton, Adam; Wood, Ellen; van Rensburg, Esias; Shevell, Michael; Oskoui, Maryam

    2018-03-01

    The goal of this study was to explore the association between neonatal infection and outcomes in children with cerebral palsy. We conducted a retrospective cohort study using the Canadian CP Registry. Neonatal infection was defined as meeting one of the following criteria: (1) septicemia, (2) septic shock, or (3) administration of antibiotics for ≥10 days. Phenotypic profiles of children with cerebral palsy with and without an antecedent neonatal infection were compared. Subgroup analysis was performed, stratified by gestational age (term versus preterm). Of the 1229 registry participants, 505 (41.1%) were preterm, and 192 (15.6%) met the criteria for neonatal infection, with 29% of preterm children having had a neonatal infection compared with 6.5% of term-born children. Children with prior neonatal infection were more likely to have a white matter injury (odds ratio 2.2, 95% confidence interval 1.5 to 3.2), spastic diplegic neurological subtype (odds ratio 1.6, 95% confidence interval 1.1 to 2.3), and sensorineural auditory impairment (odds ratio 2.1, 95% confidence interval 1.4 to 3.3). Among preterm children, neonatal infection was not associated with a difference in phenotypic profile. Term-born children with neonatal infection were more likely to have spastic triplegia or quadriplegia (odds ratio 2.4, 95% confidence interval 1.3 to 4.3), concomitant white matter and cortical injury (odds ratio 4.1, 95% confidence interval 1.6 to 10.3), and more severe gross motor impairment (Gross Motor Function Classification System IV to V) (odds ratio 2.6, 95% confidence interval 1.4 to 4.8) compared with preterm children. These findings suggest an effect of systemic infection on the developing brain in term-born infants, and the possibility of developing targeted therapeutic and preventive strategies to reduce cerebral palsy morbidity. Copyright © 2017. Published by Elsevier Inc.

  19. T-category remains an important prognostic factor for oropharyngeal carcinoma in the era of human papillomavirus.

    PubMed

    Mackenzie, P; Pryor, D; Burmeister, E; Foote, M; Panizza, B; Burmeister, B; Porceddu, S

    2014-10-01

    To determine prognostic factors for locoregional relapse (LRR), distant relapse, and all-cause death in a contemporary cohort of locoregionally advanced oropharyngeal squamous cell carcinoma (OSCC) treated with definitive chemoradiotherapy or radiotherapy alone. OSCC patients treated with definitive radiotherapy between 2005 and 2010 were identified from a prospective head and neck database. Patient age, gender, smoking history, human papillomavirus (HPV) status, T- and N-category, lowest involved nodal level, and gross tumour volume of the primary (GTV-p) and nodal (GTV-n) disease were analysed in relation to LRR, distant relapse, and death by univariate and multivariate analysis. In total, 130 patients were identified (88 HPV positive), with a median follow-up of 42 months. On multivariate analysis, HPV status was a significant predictor of LRR (hazard ratio 0.15; 95% confidence interval 0.05-0.51) and death (hazard ratio 0.29; 95% confidence interval 0.14-0.59) but not distant relapse (hazard ratio 0.53, 95% confidence interval 0.22-1.27). Increasing T-category was associated with a higher risk of LRR (hazard ratio 1.80 for T3/4 versus T1/2; 95% confidence interval 1.08-2.99), death (hazard ratio 1.37, 95% confidence interval 1.06-1.77), and distant relapse (hazard ratio 1.35; 95% confidence interval 1.00-1.83). Increasing GTV-p was associated with increased risk of distant relapse and death. N3 disease and low neck nodes were significant for LRR, distant relapse, and death on univariate analysis only. Tumour HPV status was the strongest predictor of LRR and death. T-category is more predictive of distant relapse and may provide additional prognostic value for LRR and death when accounting for HPV status. Copyright © 2014 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  20. The Hospital Anxiety and Depression Scale (HADS) and the 9-item Patient Health Questionnaire (PHQ-9) as screening instruments for depression in patients with cancer.

    PubMed

    Hartung, Tim J; Friedrich, Michael; Johansen, Christoffer; Wittchen, Hans-Ulrich; Faller, Herman; Koch, Uwe; Brähler, Elmar; Härter, Martin; Keller, Monika; Schulz, Holger; Wegscheider, Karl; Weis, Joachim; Mehnert, Anja

    2017-11-01

    Depression screening in patients with cancer is recommended by major clinical guidelines, although the evidence on individual screening tools is limited for this population. Here, the authors assess and compare the diagnostic accuracy of 2 established screening instruments: the depression modules of the 9-item Patient Health Questionnaire (PHQ-9) and the Hospital Anxiety and Depression Scale (HADS-D), in a representative sample of patients with cancer. This multicenter study was conducted with a proportional, stratified, random sample of 2141 patients with cancer across all major tumor sites and treatment settings. The PHQ-9 and HADS-D were assessed and compared in terms of diagnostic accuracy and receiver operating characteristic (ROC) curves for Diagnostic and Statistical Manual of Mental Disorders, 4th edition diagnosis of major depressive disorder using the Composite International Diagnostic Interview for Oncology as the criterion standard. The diagnostic accuracy of the PHQ-9 and HADS-D was fair for diagnosing major depressive disorder, with areas under the ROC curves of 0.78 (95% confidence interval, 0.76-0.79) and 0.75 (95% confidence interval, 0.74-0.77), respectively. The 2 questionnaires did not differ significantly in their areas under the ROC curves (P = .15). The PHQ-9 with a cutoff score ≥7 had the best screening performance, with a sensitivity of 83% (95% confidence interval, 78%-89%) and a specificity of 61% (95% confidence interval, 59%-63%). The American Society of Clinical Oncology guideline screening algorithm had a sensitivity of 44% (95% confidence interval, 36%-51%) and a specificity of 84% (95% confidence interval, 83%-85%). In patients with cancer, the screening performance of both the PHQ-9 and the HADS-D was limited compared with a standardized diagnostic interview. Costs and benefits of routinely screening all patients with cancer should be weighed carefully. Cancer 2017;123:4236-4243. © 2017 American Cancer Society.
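
    Screening statistics such as the reported sensitivity of 83% (95% CI, 78%-89%) are binomial proportions, so their confidence intervals can be computed directly. A minimal sketch using the Wilson score interval on hypothetical counts (not the study's data):

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical screening results against a diagnostic interview:
tp, fn = 120, 25    # depressed patients: test positive / test negative
tn, fp = 1220, 776  # non-depressed patients: test negative / test positive

sens, sens_ci = tp / (tp + fn), wilson_ci(tp, tp + fn)
spec, spec_ci = tn / (tn + fp), wilson_ci(tn, tn + fp)
print(f"sensitivity = {sens:.2f} (95% CI {sens_ci[0]:.2f}-{sens_ci[1]:.2f})")
print(f"specificity = {spec:.2f} (95% CI {spec_ci[0]:.2f}-{spec_ci[1]:.2f})")
```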

  1. Variable impact on mortality of AIDS-defining events diagnosed during combination antiretroviral therapy: not all AIDS-defining conditions are created equal.

    PubMed

    Mocroft, Amanda; Sterne, Jonathan A C; Egger, Matthias; May, Margaret; Grabar, Sophie; Furrer, Hansjakob; Sabin, Caroline; Fatkenheuer, Gerd; Justice, Amy; Reiss, Peter; d'Arminio Monforte, Antonella; Gill, John; Hogg, Robert; Bonnet, Fabrice; Kitahata, Mari; Staszewski, Schlomo; Casabona, Jordi; Harris, Ross; Saag, Michael

    2009-04-15

    The extent to which mortality differs following individual acquired immunodeficiency syndrome (AIDS)-defining events (ADEs) has not been assessed among patients initiating combination antiretroviral therapy. We analyzed data from 31,620 patients with no prior ADEs who started combination antiretroviral therapy. Cox proportional hazards models were used to estimate mortality hazard ratios for each ADE that occurred in >50 patients, after stratification by cohort and adjustment for sex, HIV transmission group, number of antiretroviral drugs initiated, regimen, age, date of starting combination antiretroviral therapy, and CD4+ cell count and HIV RNA load at initiation of combination antiretroviral therapy. ADEs that occurred in <50 patients were grouped together to form a "rare ADEs" category. During a median follow-up period of 43 months (interquartile range, 19-70 months), 2880 ADEs were diagnosed in 2262 patients; 1146 patients died. The most common ADEs were esophageal candidiasis (in 360 patients), Pneumocystis jiroveci pneumonia (320 patients), and Kaposi sarcoma (308 patients). The greatest mortality hazard ratio was associated with non-Hodgkin's lymphoma (hazard ratio, 17.59; 95% confidence interval, 13.84-22.35) and progressive multifocal leukoencephalopathy (hazard ratio, 10.0; 95% confidence interval, 6.70-14.92). Three groups of ADEs were identified on the basis of the ranked hazard ratios with bootstrapped confidence intervals: severe (non-Hodgkin's lymphoma and progressive multifocal leukoencephalopathy [hazard ratio, 7.26; 95% confidence interval, 5.55-9.48]), moderate (cryptococcosis, cerebral toxoplasmosis, AIDS dementia complex, disseminated Mycobacterium avium complex, and rare ADEs [hazard ratio, 2.35; 95% confidence interval, 1.76-3.13]), and mild (all other ADEs [hazard ratio, 1.47; 95% confidence interval, 1.08-2.00]). In the combination antiretroviral therapy era, mortality rates subsequent to an ADE depend on the specific diagnosis. The proposed classification of ADEs may be useful in clinical end point trials, prognostic studies, and patient management.
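
    The bootstrapped confidence intervals used above to rank hazard ratios can be illustrated generically: resample subjects with replacement, recompute the ratio statistic, and take percentiles. The sketch below bootstraps a crude rate ratio on synthetic survival data; it is a simplification for illustration, not a reproduction of the stratified Cox analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic person-time data for two hypothetical groups (not the cohort's):
def simulate(n, rate):
    t = rng.exponential(1 / rate, n)        # time to event
    c = rng.uniform(0, 8, n)                # censoring time
    return np.minimum(t, c), (t <= c)       # observed time, event indicator

time_a, event_a = simulate(400, 0.30)       # e.g. a high-mortality ADE group
time_b, event_b = simulate(400, 0.05)       # reference group

def rate_ratio(ta, ea, tb, eb):
    return (ea.sum() / ta.sum()) / (eb.sum() / tb.sum())

obs = rate_ratio(time_a, event_a, time_b, event_b)

# Percentile bootstrap: resample subjects within each group.
boots = []
for _ in range(2000):
    ia = rng.integers(0, len(time_a), len(time_a))
    ib = rng.integers(0, len(time_b), len(time_b))
    boots.append(rate_ratio(time_a[ia], event_a[ia], time_b[ib], event_b[ib]))
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"rate ratio = {obs:.2f}, bootstrap 95% CI = {lo:.2f}-{hi:.2f}")
```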

  2. Out-of-range INR values and outcomes among new warfarin patients with non-valvular atrial fibrillation.

    PubMed

    Nelson, Winnie W; Wang, Li; Baser, Onur; Damaraju, Chandrasekharrao V; Schein, Jeffrey R

    2015-02-01

    Although warfarin is efficacious for stroke prevention in non-valvular atrial fibrillation, many warfarin-treated patients are sub-optimally managed. To evaluate the association of international normalized ratio control and clinical outcomes among new warfarin patients with non-valvular atrial fibrillation. Adult non-valvular atrial fibrillation patients (≥18 years) initiating warfarin treatment were selected from the US Veterans Health Administration dataset between 10/2007 and 9/2012. Valid international normalized ratio values were examined from the warfarin initiation date through the earliest of the first clinical outcome, end of warfarin exposure, or death. Each patient contributed multiple in-range and out-of-range time periods. The relative risk ratios of clinical outcomes associated with international normalized ratio control were estimated. A total of 34,346 patients were included for analysis. During the warfarin exposure period, the incidence of events per 100 person-years was highest when patients had an international normalized ratio <2: 13.66 for acute coronary syndrome; 10.30 for ischemic stroke; 2.93 for transient ischemic attack; 1.81 for systemic embolism; and 4.55 for major bleeding. Poisson regression confirmed that during periods with international normalized ratio <2, patients were at increased risk of developing acute coronary syndrome (relative risk ratio: 7.9; 95% confidence interval 6.9-9.1), ischemic stroke (relative risk ratio: 7.6; 95% confidence interval 6.5-8.9), transient ischemic attack (relative risk ratio: 8.2; 95% confidence interval 6.1-11.2), systemic embolism (relative risk ratio: 6.3; 95% confidence interval 4.4-8.9), and major bleeding (relative risk ratio: 2.6; 95% confidence interval 2.2-3.0). During time periods with international normalized ratio >3, patients had a significantly increased risk of major bleeding (relative risk ratio: 1.5; 95% confidence interval 1.2-2.0). In a Veterans Health Administration non-valvular atrial fibrillation population, exposure to out-of-range international normalized ratio values was associated with a significantly increased risk of adverse clinical outcomes.
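
    The incidence figures per 100 person-years and the relative risk ratios above both reduce to Poisson rate arithmetic. A minimal sketch of a crude rate ratio with a Wald 95% confidence interval, using invented event counts and person-time (not the study's data):

```python
import math

# Illustrative counts: events and person-years by INR control status.
events_low_inr, py_low = 240, 2345.0    # INR < 2 periods (hypothetical)
events_in_range, py_in = 310, 21000.0   # INR 2-3 periods (hypothetical)

rate_low = 100 * events_low_inr / py_low      # per 100 person-years
rate_in = 100 * events_in_range / py_in
rr = rate_low / rate_in
se = math.sqrt(1 / events_low_inr + 1 / events_in_range)  # SE of log rate ratio
lo, hi = rr * math.exp(-1.96 * se), rr * math.exp(1.96 * se)
print(f"rate ratio = {rr:.1f}, 95% CI = {lo:.1f}-{hi:.1f}")
```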

  3. The prevalence of diagnosed Tourette syndrome in Canada: A national population-based study.

    PubMed

    Yang, Jaeun; Hirsch, Lauren; Martino, Davide; Jette, Nathalie; Roberts, Jodie; Pringsheim, Tamara

    2016-11-01

    The objective of this study was to examine: (1) the prevalence of diagnosed Tourette syndrome in Canada by sex in youth (aged 12-17) and adults and (2) socioeconomic factors in this population. The majority of epidemiological studies of tics have focused on children and youth, with few studies describing the prevalence of tics in adult populations. Canadian data on Tourette syndrome prevalence were derived from the Canadian Community Health Survey 2010 and 2011 cycles, a Statistics Canada population-based cross-sectional survey that collects information related to health status. We determined the prevalence of diagnosed Tourette syndrome and examined sociodemographic factors, including age, sex, education, income, employment, and birthplace. Overall, 122,884 Canadians participated in the surveys, with 122 participants diagnosed with Tourette syndrome. The prevalence of Tourette syndrome was higher in males in youth: 6.03 per 1000 (95% confidence interval: 3.24-8.81) in males versus 0.48 per 1000 (95% confidence interval: 0.05-0.91) in females, with a prevalence risk ratio of 5.31 (95% confidence interval: 2.38-11.81). In adults, the prevalence of Tourette syndrome was 0.89 per 1000 (95% confidence interval: 0.48-1.29) in males versus 0.44 (95% confidence interval: 0.16-0.71) in females, with a prevalence risk ratio of 1.93 (95% confidence interval: 1.21-3.08). After adjusting for age and sex, adults with Tourette syndrome had lower odds of receiving postsecondary education or being employed and higher odds of having income lower than the median and receiving governmental support. Data on the prevalence of Tourette syndrome in adults are scarce because most studies focus on children. Our data demonstrate a decreasing prevalence risk ratio for sex in adults compared to children. A diagnosis of Tourette syndrome is associated with lower education, income, and employment in adulthood. © 2016 International Parkinson and Movement Disorder Society.

  4. Stroke preparedness in children: translating knowledge into behavioral intent: a systematic review and meta-analysis.

    PubMed

    Ottawa, Cassandra; Sposato, Luciano A; Nabbouh, Fadl; Saposnik, Gustavo

    2015-10-01

    If translated into behavioral intent, improved stroke knowledge may contribute to better outcomes. Children are an attractive target population since they can drive familial behavioral changes. However, the impact of interventions on stroke knowledge among children is unclear. We performed a systematic review and meta-analysis to investigate whether educational interventions targeting children improve stroke knowledge and lead to behavioral changes. We searched Ovid, PubMed, and Embase between January 2000 and December 2014. We included studies written in English reporting the number of children aged 6-15 years undergoing educational interventions on stroke and providing the results for baseline and early and late postintervention tests. We compared the proportion of correct answers between baseline, early, and late responses for two endpoints: knowledge and behavioral intent. Of the initial 58 articles found, we included nine that met the inclusion criteria. Compared with baseline tests (51·7%, 95% confidence interval 40·9-62·4), there was improvement in stroke knowledge in early (74·0%, 95% confidence interval 64·4-82·5, P = 0·002) and late (67·3%, 95% confidence interval 55·4-78·2, P = 0·027) responses. There was improvement in the early (92·1%, 95% confidence interval 86·0-96·6, P < 0·001) and late (83·9%, 95% confidence interval 73·5-92·1, P = 0·001) responses for behavioral intent compared with the baseline assessment (63·8%, 95% confidence interval 53·5-73·4). Children are a potentially attractive target population for improvement in stroke knowledge and behavioral intent, both in the short and long term. Our findings may support the implementation of large-scale stroke educational initiatives targeting children. © 2015 World Stroke Organization.

  5. Suicide in patients with gastric cancer: a population-based study.

    PubMed

    Sugawara, Akitomo; Kunieda, Etsuo

    2016-09-01

    We conducted this study to examine the rate of suicide in patients with gastric cancer and to identify factors associated with increased risk of suicide using the Surveillance, Epidemiology, and End Results database. The database was queried for patients who were diagnosed with gastric cancer from 1998 to 2011. The rate of suicide and the standardized mortality ratio were calculated. Multivariable analyses were conducted to identify factors associated with increased risk of suicide. A total of 65,535 patients with 109,597 person-years of follow-up were included. A total of 68 patients died of suicide. The age-adjusted rate of suicide was 34.6 per 100,000 person-years (standardized mortality ratio, 4.07; 95% confidence interval, 3.18-5.13). The rate of suicide was highest within the first 3 months after cancer diagnosis (standardized mortality ratio, 67.67; 95% confidence interval, 40.74-106.15). Results of multivariable analyses showed that male sex (incidence rate ratio, 7.15; 95% confidence interval, 3.05-16.78; P < 0.0001), White race (incidence rate ratio, 3.23; 95% confidence interval, 1.00-10.35; P = 0.0491), unmarried status (incidence rate ratio, 2.01; 95% confidence interval, 1.22-3.30; P = 0.0060) and distant stage disease (incidence rate ratio, 2.90; 95% confidence interval, 1.72-4.92; P < 0.0001) were significantly associated with increased risk of suicide. Patients with gastric cancer have an ~4-fold higher risk of suicide compared with the general US population. The suicide risk is highest within the first 3 months after diagnosis. Male sex, White race, unmarried status and distant stage disease are significantly associated with increased risk of suicide. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
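
    A standardized mortality ratio is observed over expected deaths, and its exact 95% confidence interval follows from chi-square quantiles of the Poisson distribution. The sketch below uses the 68 observed suicides reported above, with the expected count back-calculated from the reported SMR of 4.07, so it is illustrative rather than a reproduction of the study's exact computation:

```python
from scipy.stats import chi2

observed = 68                  # suicides reported in the cohort
expected = observed / 4.07     # back-calculated from the reported SMR (illustrative)

smr = observed / expected
# Exact Poisson limits for the observed count, scaled by the expected count:
lo = chi2.ppf(0.025, 2 * observed) / 2 / expected
hi = chi2.ppf(0.975, 2 * (observed + 1)) / 2 / expected
print(f"SMR = {smr:.2f}, exact 95% CI = {lo:.2f}-{hi:.2f}")
```

    Running this reproduces an interval of roughly 3.2-5.2, consistent with the 3.18-5.13 reported in the abstract.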

  6. Integrating palliative care across settings: A retrospective cohort study of a hospice home care programme for cancer patients.

    PubMed

    Tan, Woan Shin; Lee, Angel; Yang, Sze Yee; Chan, Susan; Wu, Huei Yaw; Ng, Charis Wei Ling; Heng, Bee Hoon

    2016-07-01

    Terminally ill patients often transition between care settings at the end of life because of their complex care needs. Fragmented care can result in poor quality of care. We aimed to evaluate the impact of an integrated hospice home care programme on acute care service usage and on the share of home deaths. The retrospective study cohort comprised patients who were diagnosed with cancer, had an expected prognosis of 1 year or less, and were referred to a home hospice. The intervention group comprised deceased patients enrolled in the integrated hospice home care programme between September 2012 and June 2014. The historical comparison group comprised deceased patients who were referred to other home hospices between January 2007 and January 2011. There were 321 cases and 593 comparator subjects. Relative to the comparator group, the share of hospital deaths was significantly lower for programme participants (12.1% versus 42.7%). After adjusting for differences at baseline, the intervention group had statistically significantly fewer emergency department visits at 30 days (incidence rate ratio: 0.38; 95% confidence interval: 0.31-0.47), 60 days (incidence rate ratio: 0.61; 95% confidence interval: 0.54-0.69) and 90 days (incidence rate ratio: 0.69; 95% confidence interval: 0.62-0.77) prior to death. Similar results held for the number of hospitalisations at 30 days (incidence rate ratio: 0.48; 95% confidence interval: 0.40-0.58), 60 days (incidence rate ratio: 0.71; 95% confidence interval: 0.62-0.82) and 90 days (incidence rate ratio: 0.77; 95% confidence interval: 0.68-0.88) prior to death. Our results demonstrate that integrating services between acute care and home hospice care can reduce acute care service usage. © The Author(s) 2016.

  7. Prevalence of dry eye syndrome in an adult population.

    PubMed

    Hashemi, Hassan; Khabazkhoob, Mehdi; Kheirkhah, Ahmad; Emamian, Mohammad Hassan; Mehravaran, Shiva; Shariati, Mohammad; Fotouhi, Akbar

    2014-04-01

    To determine the prevalence of dry eye syndrome in the general 40- to 64-year-old population of Shahroud, Iran. Population-based cross-sectional study. Through cluster sampling, 6311 people were selected and 5190 participated. Assessment of dry eye was performed in a random subsample of 1008 people. Subjective assessment of dry eye syndrome was performed using the Ocular Surface Disease Index questionnaire. In addition, the following objective tests of dry eye syndrome were employed: Schirmer test, tear break-up time, and fluorescein and Rose Bengal staining using the Oxford grading scheme. Those with an Ocular Surface Disease Index score ≥23 were considered symptomatic, and dry eye syndrome was defined as having symptoms and at least one positive objective sign. The prevalence of dry eye syndrome was 8.7% (95% confidence interval 6.9-10.6). Assessment of signs showed an abnormal Schirmer score in 17.8% (95% confidence interval 15.5-20.0), abnormal tear break-up time in 34.2% (95% confidence interval 29.5-38.8), corneal fluorescein staining (≥1) in 11.3% (95% confidence interval 8.5-14.1) and Rose Bengal staining (≥3 for cornea and/or conjunctiva) in 4.9% (95% confidence interval 3.4-6.5). According to the Ocular Surface Disease Index scores, 18.3% (95% confidence interval 15.9-20.6) had dry eye syndrome symptoms. The prevalence of dry eye syndrome was significantly higher in women (P = 0.010) and not significantly associated with age (P = 0.291). The objective dry eye syndrome signs significantly increased with age. Based on these findings, the prevalence of dry eye syndrome in the studied population is in the mid-range; it is higher in women, and objective tests tend to turn abnormal at older ages. Pterygium was associated with dry eye syndrome and increased its symptoms. © 2013 Royal Australian and New Zealand College of Ophthalmologists.

  8. The diagnostic value of narrow-band imaging for early and invasive lung cancer: a meta-analysis.

    PubMed

    Zhu, Juanjuan; Li, Wei; Zhou, Jihong; Chen, Yuqing; Zhao, Chenling; Zhang, Ting; Peng, Wenjia; Wang, Xiaojing

    2017-07-01

    This study aimed to compare the ability of narrow-band imaging to detect early and invasive lung cancer with that of conventional pathological analysis and white-light bronchoscopy. We searched the PubMed, EMBASE, Sinomed, and China National Knowledge Infrastructure databases for relevant studies. Meta-disc software was used to perform data analysis, meta-regression analysis, sensitivity analysis, and heterogeneity testing, and STATA software was used to determine whether publication bias was present, as well as to calculate the relative risks for the sensitivity and specificity of narrow-band imaging versus those of white-light bronchoscopy for the detection of early and invasive lung cancer. A random-effects model was used to assess diagnostic efficacy where a high degree of between-study heterogeneity was noted. The database search identified six studies including 578 patients. The pooled sensitivity and specificity of narrow-band imaging were 86% (95% confidence interval: 83-88%) and 81% (95% confidence interval: 77-84%), respectively, and the pooled sensitivity and specificity of white-light bronchoscopy were 70% (95% confidence interval: 66-74%) and 66% (95% confidence interval: 62-70%), respectively. The pooled relative risks for the sensitivity and specificity of narrow-band imaging versus those of white-light bronchoscopy were 1.33 (95% confidence interval: 1.07-1.67) and 1.09 (95% confidence interval: 0.84-1.42), respectively. Sensitivity analysis showed that narrow-band imaging exhibited good diagnostic efficacy for detecting early and invasive lung cancer and that the results were stable. Narrow-band imaging was superior to white-light bronchoscopy for detecting early and invasive lung cancer; however, the specificities of the two modalities did not differ significantly.

  9. Antecedents and neuroimaging patterns in cerebral palsy with epilepsy and cognitive impairment: a population-based study in children born at term.

    PubMed

    Ahlin, Kristina; Jacobsson, Bo; Nilsson, Staffan; Himmelmann, Kate

    2017-07-01

    Antecedents of accompanying impairments in cerebral palsy and their relation to neuroimaging patterns need to be explored. A population-based study of 309 children with cerebral palsy born at term between 1983 and 1994. Prepartum, intrapartum, and postpartum variables previously studied as antecedents of cerebral palsy type and motor severity were analyzed in children with cerebral palsy and cognitive impairment and/or epilepsy, and in children with cerebral palsy without these accompanying impairments. Neuroimaging patterns and their relation to identified antecedents were analyzed. Data were retrieved from the cerebral palsy register of western Sweden, and from obstetric and neonatal records. Children with cerebral palsy and accompanying impairments more often had lower birthweight (odds ratio 0.5 per kg, 95% confidence interval 0.3-0.8), brain maldevelopment known at birth (p = 0.007, odds ratio ∞) and neonatal infection (odds ratio 5.4, 95% confidence interval 1.04-28.4). Moreover, neuroimaging patterns of maldevelopment (odds ratio 7.2, 95% confidence interval 2.9-17.2), cortical/subcortical lesions (odds ratio 5.3, 95% confidence interval 2.3-12.2) and basal ganglia lesions (odds ratio 7.6, 95% confidence interval 1.4-41.3) were more common, whereas white matter injury was found significantly less often (odds ratio 0.2, 95% confidence interval 0.1-0.5). In most children with maldevelopment, the intrapartum and postpartum periods were uneventful (p < 0.05). Cerebral maldevelopment was associated with prepartum antecedents, whereas subcortical/cortical and basal ganglia lesions were associated with intrapartum and postpartum antecedents. No additional factor other than those related to motor impairment was associated with epilepsy and cognitive impairment in cerebral palsy. The timing of antecedents deemed important for the development of cerebral palsy with accompanying impairments was supported by neuroimaging patterns. © 2017 Nordic Federation of Societies of Obstetrics and Gynecology.

  10. Breast cancer biology varies by method of detection and may contribute to overdiagnosis.

    PubMed

    Hayse, Brandon; Hooley, Regina J; Killelea, Brigid K; Horowitz, Nina R; Chagpar, Anees B; Lannin, Donald R

    2016-08-01

    Recently, it has been suggested that screening mammography may result in some degree of overdiagnosis (ie, detection of breast cancers that would never become clinically important within the lifespan of the patient). The extent and biology of these overdiagnosed cancers, however, are not well understood, and the effect of newer screening modalities on overdiagnosis is unknown. We performed a retrospective review of a prospectively collected database of breast cancers diagnosed at the Yale Breast Center from 2004-2014. The mode of initial presentation was categorized into 5 groups: screening mammogram, screening magnetic resonance imaging, screening ultrasonography, self-detected masses, and physician-detected masses. Compared with cancers presenting with masses, cancers detected by image-based screening were more likely to present with ductal carcinoma-in-situ or T1 cancers (P < .001). In addition to a simple stage shift, however, cancers detected by image-based screening were also more likely to be luminal and low-grade cancers, whereas symptomatic cancers were more likely to be high-grade and triple-negative (P < .001 for each). On a multivariate analysis adjusting for age, race, and tumor size, cancers detected by mammogram, US, and magnetic resonance imaging had greater odds of being luminal (odds ratio 1.8, 95% confidence interval, 1.5-2.3; odds ratio 2.2, 95% confidence interval, 1.1-4.7; and odds ratio 4.7, 95% confidence interval, 2.1-10.6, respectively), and low-grade (odds ratio 2.2, 95% confidence interval, 1.6-2.9; odds ratio 4.9, 95% confidence interval, 2.7-8.9; and odds ratio 4.6, 95% confidence interval, 2.6-8.1, respectively) compared with cancers presenting as self-detected masses. Screening detects cancers with more indolent biology, potentially contributing to the observed rate of overdiagnosis. With magnetic resonance imaging and US being used more commonly for screening, the rate of overdiagnosis may increase further. Copyright © 2016. Published by Elsevier Inc.

  11. Bayesian dose-response analysis for epidemiological studies with complex uncertainty in dose estimation.

    PubMed

    Kwon, Deukwoo; Hoffman, F Owen; Moroz, Brian E; Simon, Steven L

    2016-02-10

    Most conventional risk analysis methods rely on a single best estimate of exposure per person, which does not allow for adjustment for exposure-related uncertainty. Here, we propose a Bayesian model averaging method to properly quantify the relationship between radiation dose and disease outcomes by accounting for shared and unshared uncertainty in estimated dose. Our Bayesian risk analysis method utilizes multiple realizations of sets (vectors) of doses generated by a two-dimensional Monte Carlo simulation method that properly separates shared and unshared errors in dose estimation. The exposure model used in this work is taken from a study of the risk of thyroid nodules among a cohort of 2376 subjects who were exposed to fallout from nuclear testing in Kazakhstan. We assessed the performance of our method through an extensive series of simulations and comparisons against conventional regression risk analysis methods. When the estimated doses contain relatively small amounts of uncertainty, the Bayesian method using multiple a priori plausible draws of dose vectors gave similar results to the conventional regression-based methods of dose-response analysis. However, when large and complex mixtures of shared and unshared uncertainties are present, the Bayesian method using multiple dose vectors had significantly lower relative bias than conventional regression-based risk analysis methods and better coverage, that is, a markedly increased capability to include the true risk coefficient within the 95% credible interval of the Bayesian-based risk estimate. An evaluation of the dose-response using our method is presented for an epidemiological study of thyroid disease following radiation exposure. Copyright © 2015 John Wiley & Sons, Ltd.
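
    The core idea — propagating many plausible dose vectors, with shared and unshared error components, through the dose-response model rather than a single best-estimate dose — can be sketched as follows. The logistic risk model, fixed intercept, error magnitudes, and grid posterior are hypothetical simplifications for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)
n, M = 500, 200

# Simulate "true" doses and binary outcomes under a toy logistic model.
true_dose = rng.lognormal(0, 1, n)
beta_true = 0.4
p = 1 / (1 + np.exp(-(-2 + beta_true * true_dose)))
y = rng.binomial(1, p)

# M plausible dose-vector realizations: one shared multiplicative error per
# realization plus unshared per-subject errors, as in 2D Monte Carlo.
doses = np.empty((M, n))
for m in range(M):
    shared = rng.lognormal(0, 0.2)            # affects everyone in realization m
    unshared = rng.lognormal(0, 0.3, n)       # subject-specific
    doses[m] = true_dose * shared * unshared

# For each realization, a crude grid posterior for beta (intercept fixed at
# its true value for simplicity); then average the posteriors across vectors.
grid = np.linspace(0, 1, 101)
mix = np.zeros_like(grid)
for m in range(M):
    ll = np.array([np.sum(y * (-2 + b * doses[m])
                          - np.log1p(np.exp(-2 + b * doses[m]))) for b in grid])
    post = np.exp(ll - ll.max())
    mix += post / post.sum()
mix /= M

mean_beta = np.sum(grid * mix)
cdf = np.cumsum(mix)
lo, hi = grid[np.searchsorted(cdf, 0.025)], grid[np.searchsorted(cdf, 0.975)]
print(f"beta ~ {mean_beta:.2f}, 95% credible interval ~ {lo:.2f}-{hi:.2f}")
```

    Averaging the per-realization posteriors, rather than conditioning on one dose vector, is what lets the shared dosimetry uncertainty widen the credible interval.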

  12. Peritoneal Dialysis Access Revision in Children: Causes, Interventions, and Outcomes.

    PubMed

    Borzych-Duzalka, Dagmara; Aki, T Fazil; Azocar, Marta; White, Colin; Harvey, Elizabeth; Mir, Sevgi; Adragna, Marta; Serdaroglu, Erkin; Sinha, Rajiv; Samaille, Charlotte; Vanegas, Juan Jose; Kari, Jameela; Barbosa, Lorena; Bagga, Arvind; Galanti, Monica; Yavascan, Onder; Leozappa, Giovanna; Szczepanska, Maria; Vondrak, Karel; Tse, Kei-Chiu; Schaefer, Franz; Warady, Bradley A

    2017-01-06

    Little published information is available about access failure in children undergoing chronic peritoneal dialysis. Our objectives were to evaluate frequency, risk factors, interventions, and outcome of peritoneal dialysis access revision. Data were derived from 824 incident and 1629 prevalent patients from 105 pediatric nephrology centers enrolled in the International Pediatric Peritoneal Dialysis Network Registry between 2007 and 2015. In total, 452 access revisions were recorded in 321 (13%) of 2453 patients over 3134 patient-years of follow-up, resulting in an overall access revision rate of 0.14 per treatment year. Among 824 incident patients, 186 (22.6%) underwent 188 access revisions over 1066 patient-years, yielding an access revision rate of 0.17 per treatment year; 83% of access revisions in incident patients were reported within the first year of peritoneal dialysis treatment. Catheter survival rates in incident patients were 84%, 80%, 77%, and 73% at 12, 24, 36, and 48 months, respectively. By multivariate logistic regression analysis, risk of access revision was associated with younger age (odds ratio, 0.93; 95% confidence interval, 0.92 to 0.95; P<0.001), diagnosis of congenital anomalies of the kidney and urinary tract (odds ratio, 1.28; 95% confidence interval, 1.03 to 1.59; P=0.02), coexisting ostomies (odds ratio, 1.42; 95% confidence interval, 1.07 to 1.87; P=0.01), presence of swan neck tunnel with curled intraperitoneal portion (odds ratio, 1.30; 95% confidence interval, 1.04 to 1.63; P=0.02), and high gross national income (odds ratio, 1.10; 95% confidence interval, 1.02 to 1.19; P=0.01). Main reasons for access revisions included mechanical malfunction (60%), peritonitis (16%), exit site infection (12%), and leakage (6%). Need for access revision increased the risk of peritoneal dialysis technique failure or death (hazard ratio, 1.35; 95% confidence interval, 1.10 to 1.65; P=0.003). Access dysfunction due to mechanical causes doubled the risk of technique failure compared with infectious causes (hazard ratio, 1.95; 95% confidence interval, 1.20 to 2.30; P=0.03). Peritoneal dialysis catheter revisions are common in pediatric patients on peritoneal dialysis and complicate provision of chronic peritoneal dialysis. Attention to potentially modifiable risk factors by pediatric nephrologists and pediatric surgeons should be encouraged. Copyright © 2016 by the American Society of Nephrology.

  14. Toxoplasma gondii exposure and epilepsy: A matched case-control study in a public hospital in northern Mexico.

    PubMed

    Alvarado-Esquivel, Cosme; Rico-Almochantaf, Yazmin Del Rosario; Hernández-Tinoco, Jesús; Quiñones-Canales, Gerardo; Sánchez-Anguiano, Luis Francisco; Torres-González, Jorge; Ramírez-Valles, Eda Guadalupe; Minjarez-Veloz, Andrea

    2018-01-01

    This study aimed to determine the association between infection with Toxoplasma gondii and epilepsy in patients treated at a public hospital in the northern Mexican city of Durango. We performed an age- and gender-matched case-control study of 99 patients suffering from epilepsy and 99 without epilepsy. Sera of participants were analyzed for anti-T. gondii IgG and IgM antibodies using commercially available enzyme-linked immunoassays. Seropositive samples were further analyzed for detection of T. gondii DNA by polymerase chain reaction. Anti-T. gondii IgG antibodies were found in 10 (10.1%) of the 99 cases and in 6 (6.1%) of the 99 controls (odds ratio = 1.74; 95% confidence interval: 0.60-4.99; p = 0.43). High (>150 IU/mL) levels of anti-T. gondii IgG antibodies were found in 6 of the 99 cases and in 4 of the 99 controls (odds ratio = 1.53; 95% confidence interval: 0.41-5.60; p = 0.74). Anti-T. gondii IgM antibodies were found in 2 of the 10 IgG-seropositive cases, and in 2 of the 6 IgG-seropositive controls (odds ratio = 0.50; 95% confidence interval: 0.05-4.97; p = 0.60). T. gondii DNA was not found in any of the 10 anti-T. gondii IgG-positive patients. Bivariate analysis of IgG seropositivity to T. gondii and International Statistical Classification of Diseases and Related Health Problems, 10th Revision codes of epilepsy showed an association between seropositivity and the G40.1 code (odds ratio = 22.0; 95% confidence interval: 2.59-186.5; p = 0.008). Logistic regression analysis showed an association between T. gondii infection and consumption of goat meat (odds ratio = 6.5; 95% confidence interval: 1.22-34.64; p = 0.02), unwashed raw vegetables (odds ratio = 26.3; 95% confidence interval: 2.61-265.23; p = 0.006), and tobacco use (odds ratio = 6.2; 95% confidence interval: 1.06-36.66; p = 0.04). Results suggest that T. gondii infection does not increase the risk of epilepsy in our setting; however, infection might be linked to specific types of epilepsy. Factors associated with T. gondii infection found in this study may aid in the design of preventive measures against toxoplasmosis.

  15. Association of protein tyrosine phosphatase, non-receptor type 22 +1858C→T polymorphism and susceptibility to vitiligo: Systematic review and meta-analysis.

    PubMed

    Agarwal, Silky; Changotra, Harish

    2017-01-01

    The protein tyrosine phosphatase, non-receptor type 22 gene, which encodes lymphoid tyrosine phosphatase, is considered a susceptibility gene marker associated with several autoimmune diseases. Several studies have examined the association of the protein tyrosine phosphatase, non-receptor type 22 +1858C→T polymorphism with vitiligo, but with conflicting results, and earlier meta-analyses included fewer publications. We performed a meta-analysis of a total of seven studies comprising 2094 cases and 3613 controls to evaluate the possible association of the protein tyrosine phosphatase, non-receptor type 22 +1858C→T polymorphism with vitiligo susceptibility. We conducted a literature search in PubMed, Google Scholar and Dogpile for all papers published on the protein tyrosine phosphatase, non-receptor type 22 +1858C→T polymorphism and vitiligo risk up to June 2016. Data analysis was performed with RevMan 5.3 and Comprehensive Meta-Analysis v3.0 software. Meta-analysis showed an overall significant association of the protein tyrosine phosphatase, non-receptor type 22 +1858C→T polymorphism with vitiligo in all models (allelic model [T vs. C]: odds ratio = 1.50, 95% confidence interval [1.32-1.71], P < 0.001; dominant model [TT + CT vs. CC]: odds ratio = 1.61, 95% confidence interval [1.16-2.24], P = 0.004; recessive model [TT vs. CT + CC]: odds ratio = 4.82, 95% confidence interval [1.11-20.92], P = 0.04; homozygous model [TT vs. CC]: odds ratio = 5.34, 95% confidence interval [1.23-23.24], P = 0.03; co-dominant model [CT vs. CC]: odds ratio = 1.52, 95% confidence interval [1.09-2.13], P = 0.01). No publication bias was detected in the funnel plot study. The limited number of ethnicity-specific studies and the inability to stratify data by gender or vitiligo type are limitations of the present meta-analysis. Stratifying data by ethnicity showed an association of the protein tyrosine phosphatase, non-receptor type 22 +1858C→T polymorphism with vitiligo in the European population (odds ratio = 1.53, 95% confidence interval [1.34-1.75], P < 0.001) but not in the Asian population (odds ratio = 0.59, 95% confidence interval [0.26-1.32], P = 0.2). In conclusion, the protein tyrosine phosphatase, non-receptor type 22 +1858 T allele predisposes European individuals to vitiligo.

  16. A Bayesian approach to identifying structural nonlinearity using free-decay response: Application to damage detection in composites

    USGS Publications Warehouse

    Nichols, J.M.; Link, W.A.; Murphy, K.D.; Olson, C.C.

    2010-01-01

    This work discusses a Bayesian approach to approximating the distribution of parameters governing nonlinear structural systems. Specifically, we use a Markov chain Monte Carlo method for sampling the posterior parameter distributions, thus producing both point and interval estimates for the parameters. The method is first used to identify both linear and nonlinear parameters in a multiple degree-of-freedom structural system using free-decay vibrations. The approach is then applied to the problem of identifying the location, size, and depth of delamination in a model composite beam. The influence of additive Gaussian noise on the response data is explored with respect to the quality of the resulting parameter estimates.
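
    A toy version of this identification procedure is sketched below: random-walk Metropolis over the damping ratio and natural frequency of a single degree-of-freedom oscillator observed in noisy free decay. The oscillator, priors, noise level, and proposal scales are all hypothetical, and the setup is far simpler than the paper's multiple degree-of-freedom and delamination models.

```python
import numpy as np

rng = np.random.default_rng(3)

# Free-decay displacement of a linear SDOF oscillator (hypothetical system).
def decay(t, zeta, wn):
    wd = wn * np.sqrt(1 - zeta**2)          # damped natural frequency
    return np.exp(-zeta * wn * t) * np.cos(wd * t)

t = np.linspace(0, 5, 250)
zeta_true, wn_true, noise = 0.03, 10.0, 0.02
y = decay(t, zeta_true, wn_true) + rng.normal(0, noise, t.size)

def log_post(zeta, wn):
    """Gaussian likelihood with known noise; uniform priors on (zeta, wn)."""
    if not (0 < zeta < 0.2 and 5 < wn < 15):
        return -np.inf
    r = y - decay(t, zeta, wn)
    return -0.5 * np.sum(r**2) / noise**2

# Random-walk Metropolis over the two parameters.
theta = np.array([0.05, 9.0])
lp, chain = log_post(*theta), []
for _ in range(20000):
    prop = theta + rng.normal(0, [0.005, 0.05])
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain[5000:])              # drop burn-in
print("posterior means:", chain.mean(axis=0))
print("95% intervals:", np.percentile(chain, [2.5, 97.5], axis=0).T)
```

    The percentile columns of the retained chain give exactly the kind of interval estimates the abstract refers to.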

  17. Characterization of Myocardial Repolarization Reserve in Adolescent Females With Anorexia Nervosa.

    PubMed

    Padfield, Gareth J; Escudero, Carolina A; DeSouza, Astrid M; Steinberg, Christian; Gibbs, Karen; Puyat, Joseph H; Lam, Pei Yoong; Sanatani, Shubhayan; Sherwin, Elizabeth; Potts, James E; Sandor, George; Krahn, Andrew D

    2016-02-09

    Patients with anorexia nervosa exhibit abnormal myocardial repolarization and are susceptible to sudden cardiac death. Exercise testing is useful in unmasking QT prolongation in disorders associated with abnormal repolarization. We characterized QT adaptation during exercise in anorexia. Sixty-one adolescent female patients with anorexia nervosa and 45 age- and sex-matched healthy volunteers performed symptom-limited cycle ergometry during 12-lead ECG monitoring. Changes in the QT interval during exercise were measured, and QT/RR-interval slopes were determined by using mixed-effects regression modeling. Patients had significantly lower body mass index than controls; however, resting heart rates and QT/QTc intervals were similar at baseline. Patients had shorter exercise times (13.7±4.5 versus 20.6±4.5 minutes; P<0.001) and lower peak heart rates (159±20 versus 184±9 beats/min; P<0.001). The mean QTc intervals were longer at peak exercise in patients (442±29 versus 422±19 ms; P<0.001). During submaximal exertion at comparable heart rates (114±6 versus 115±11 beats/min; P=0.54), the QTc interval had prolonged significantly more in patients than controls (37±28 versus 24±25 ms; P<0.016). The RR/QT slope, best described by a curvilinear relationship, was more gradual in patients than in controls (13.4; 95% confidence interval, 12.8-13.9 versus 15.8; 95% confidence interval, 15.3-16.4 ms QT change per 10% change in RR interval; P<0.001) and steepest in patients within the highest body mass index tertile versus the lowest (13.9; 95% confidence interval, 12.9-14.9 versus 12.3; 95% confidence interval, 11.3-13.3; P=0.026). Despite the absence of manifest QT prolongation, adolescent anorexic females have impaired repolarization reserve in comparison with healthy controls. Further study may identify impaired QT dynamics as a risk factor for arrhythmias in anorexia nervosa. © 2016 American Heart Association, Inc.

  18. Accuracy assessment of percent canopy cover, cover type, and size class

    Treesearch

    H. T. Schreuder; S. Bain; R. C. Czaplewski

    2003-01-01

    Truth for percent vegetation cover and cover type is obtained from very large-scale photography (VLSP); truth for stand structure, as measured by size classes, and for vegetation types comes from a combination of VLSP and ground sampling. We recommend using the Kappa statistic with bootstrap confidence intervals for overall accuracy, and similarly bootstrap confidence intervals for percent...
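
    The recommended bootstrap interval for the Kappa statistic can be sketched as below; the three classes, agreement rate, and sample size are arbitrary stand-ins for real map/reference labels.

```python
# Percentile bootstrap confidence interval for Cohen's kappa on
# synthetic map/reference label pairs.
import numpy as np

rng = np.random.default_rng(0)
n = 300
truth = rng.integers(0, 3, n)                    # e.g., 3 cover-type classes
mapped = np.where(rng.uniform(size=n) < 0.8, truth, rng.integers(0, 3, n))

def kappa(a, b, k=3):
    cm = np.zeros((k, k))
    np.add.at(cm, (a, b), 1)                     # confusion matrix
    p = cm / cm.sum()
    po = np.trace(p)                             # observed agreement
    pe = p.sum(1) @ p.sum(0)                     # chance agreement
    return (po - pe) / (1 - pe)

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                  # resample paired labels
    boot.append(kappa(truth[idx], mapped[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"kappa = {kappa(truth, mapped):.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```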

  19. Confidence Intervals for Effect Sizes: Compliance and Clinical Significance in the "Journal of Consulting and Clinical Psychology"

    ERIC Educational Resources Information Center

    Odgaard, Eric C.; Fowler, Robert L.

    2010-01-01

    Objective: In 2005, the "Journal of Consulting and Clinical Psychology" ("JCCP") became the first American Psychological Association (APA) journal to require statistical measures of clinical significance, plus effect sizes (ESs) and associated confidence intervals (CIs), for primary outcomes (La Greca, 2005). As this represents the single largest…

  20. Optimal and Most Exact Confidence Intervals for Person Parameters in Item Response Theory Models

    ERIC Educational Resources Information Center

    Doebler, Anna; Doebler, Philipp; Holling, Heinz

    2013-01-01

    The common way to calculate confidence intervals for item response theory models is to assume that the standardized maximum likelihood estimator for the person parameter [theta] is normally distributed. However, this approximation is often inadequate for short and medium test lengths. As a result, the coverage probabilities fall below the given…

  1. SIMREL: Software for Coefficient Alpha and Its Confidence Intervals with Monte Carlo Studies

    ERIC Educational Resources Information Center

    Yurdugul, Halil

    2009-01-01

    This article describes SIMREL, a software program designed for the simulation of alpha coefficients and the estimation of its confidence intervals. SIMREL runs on two alternatives. In the first one, if SIMREL is run for a single data file, it performs descriptive statistics, principal components analysis, and variance analysis of the item scores…

  2. Performing Contrast Analysis in Factorial Designs: From NHST to Confidence Intervals and Beyond

    ERIC Educational Resources Information Center

    Wiens, Stefan; Nilsson, Mats E.

    2017-01-01

    Because of the continuing debates about statistics, many researchers may feel confused about how to analyze and interpret data. Current guidelines in psychology advocate the use of effect sizes and confidence intervals (CIs). However, researchers may be unsure about how to extract effect sizes from factorial designs. Contrast analysis is helpful…

  3. A Comparison of the β-Substitution Method and a Bayesian Method for Analyzing Left-Censored Data

    PubMed Central

    Huynh, Tran; Quick, Harrison; Ramachandran, Gurumurthy; Banerjee, Sudipto; Stenzel, Mark; Sandler, Dale P.; Engel, Lawrence S.; Kwok, Richard K.; Blair, Aaron; Stewart, Patricia A.

    2016-01-01

    Classical statistical methods for analyzing exposure data with values below the detection limits are well described in the occupational hygiene literature, but an evaluation of a Bayesian approach for handling such data is currently lacking. Here, we first describe a Bayesian framework for analyzing censored data. We then present the results of a simulation study conducted to compare the β-substitution method with a Bayesian method for exposure datasets drawn from lognormal distributions and mixed lognormal distributions with varying sample sizes, geometric standard deviations (GSDs), and censoring for single and multiple limits of detection. For each set of factors, estimates for the arithmetic mean (AM), geometric mean (GM), GSD, and the 95th percentile (X0.95) of the exposure distribution were obtained. We evaluated the performance of each method using relative bias, the root mean squared error (rMSE), and coverage (the proportion of the computed 95% uncertainty intervals containing the true value). The Bayesian method using non-informative priors and the β-substitution method were generally comparable in bias and rMSE when estimating the AM and GM. For the GSD and the 95th percentile, the Bayesian method with non-informative priors was more biased and had a higher rMSE than the β-substitution method, but use of more informative priors generally improved the Bayesian method’s performance, making both the bias and the rMSE more comparable to the β-substitution method. An advantage of the Bayesian method is that it provided estimates of uncertainty for these parameters of interest and good coverage, whereas the β-substitution method only provided estimates of uncertainty for the AM, and coverage was not as consistent. Selection of one or the other method depends on the needs of the practitioner, the availability of prior information, and the distribution characteristics of the measurement data. We suggest the use of Bayesian methods if the practitioner has the computational resources and prior information, as the method would generally provide accurate estimates and also provides the distributions of all of the parameters, which could be useful for making decisions in some applications. PMID:26209598
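
    The core of the Bayesian framework for left-censored data is that each non-detect contributes the lognormal probability mass below its detection limit to the likelihood. A minimal Metropolis sketch, with a single detection limit, flat priors, and synthetic data (all assumptions, not the study's setup):

```python
# Bayesian treatment of left-censored lognormal exposure data:
# non-detects enter the likelihood through the log-CDF at the LOD.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true = stats.lognorm(s=0.7, scale=np.exp(0.5)).rvs(60, random_state=rng)
lod = 1.0
detected = true[true >= lod]
n_cens = np.sum(true < lod)                       # only the count is kept

def log_post(mu, sigma):
    if sigma <= 0:
        return -np.inf                            # flat prior, sigma > 0
    ll = stats.norm.logpdf(np.log(detected), mu, sigma).sum()
    ll += n_cens * stats.norm.logcdf(np.log(lod), mu, sigma)
    return ll

theta = np.array([0.0, 1.0])
lp = log_post(*theta)
draws = []
for i in range(15000):
    prop = theta + rng.normal(0, 0.08, 2)
    lp_p = log_post(*prop)
    if np.log(rng.uniform()) < lp_p - lp:
        theta, lp = prop, lp_p
    if i > 3000:                                  # discard burn-in
        draws.append(theta)

mu_s, sig_s = np.array(draws).T
x95 = np.exp(mu_s + 1.645 * sig_s)                # 95th percentile per draw
print("X0.95 median and 95% interval:", np.percentile(x95, [50, 2.5, 97.5]))
```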

  4. A Comparison of Metamodeling Techniques via Numerical Experiments

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2016-01-01

    This paper presents a comparative analysis of a few metamodeling techniques using numerical experiments for the single input-single output case. These experiments enable comparing the models' predictions with the phenomenon they are aiming to describe as more data is made available. These techniques include (i) prediction intervals associated with a least squares parameter estimate, (ii) Bayesian credible intervals, (iii) Gaussian process models, and (iv) interval predictor models. Aspects being compared are computational complexity, accuracy (i.e., the degree to which the resulting prediction conforms to the actual Data Generating Mechanism), reliability (i.e., the probability that new observations will fall inside the predicted interval), sensitivity to outliers, extrapolation properties, ease of use, and asymptotic behavior. The numerical experiments describe typical application scenarios that challenge the underlying assumptions supporting most metamodeling techniques.
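
    As a concrete instance of technique (i), the sketch below computes a classical 95% prediction interval from a least-squares fit of a straight line; the model and data are illustrative.

```python
# Least-squares prediction interval for a new observation at x0,
# including both parameter uncertainty and new-observation noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 25)
y = 2.0 + 0.5 * x + rng.normal(0, 0.8, x.size)

X = np.column_stack([np.ones_like(x), x])         # design matrix
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
dof = x.size - 2
s2 = resid @ resid / dof                          # residual variance
XtX_inv = np.linalg.inv(X.T @ X)

x0 = np.array([1.0, 7.5])                         # new input, with intercept
pred = x0 @ beta
se = np.sqrt(s2 * (1 + x0 @ XtX_inv @ x0))
t = stats.t.ppf(0.975, dof)
print(f"prediction {pred:.2f}, 95% PI [{pred - t*se:.2f}, {pred + t*se:.2f}]")
```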

  5. Assessment and Classification of Service Learning: A Case Study of CS/EE Students

    PubMed Central

    Wang, Yu-Tseng; Lai, Pao-Lien; Chen, Jen-Yeu

    2014-01-01

    This study investigates undergraduate students in computer science/electrical engineering (CS/EE) in Taiwan to measure their perceived benefits from service learning coursework. In addition, their confidence in their professional disciplines and its correlation with service learning experiences are examined. The results show that students take positive attitudes toward service learning and that their perceived benefits from service learning are correlated with their confidence in their professional disciplines. Furthermore, this study builds a knowledge model from Bayesian network (BN) classifiers and term frequency-inverse document frequency (TF-IDF) features for counseling students on the optimal choice of service learning. PMID:25295294
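
    The modeling idea, TF-IDF features feeding a Bayesian classifier, can be sketched as follows; naive Bayes stands in here as the simplest Bayesian network classifier, and the toy texts and labels are invented:

```python
# TF-IDF features plus a naive Bayes classifier on toy service-learning
# descriptions; a stand-in for the paper's BN/TF-IDF knowledge model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "tutored programming at a community center",
    "maintained network equipment for a local school",
    "taught basic electronics to high school students",
    "built a website for a nonprofit organization",
]
labels = ["teaching", "infrastructure", "teaching", "infrastructure"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)
print(model.predict(["helped students debug their code"]))
```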

  6. Truth, models, model sets, AIC, and multimodel inference: a Bayesian perspective

    USGS Publications Warehouse

    Barker, Richard J.; Link, William A.

    2015-01-01

    Statistical inference begins with viewing data as realizations of stochastic processes. Mathematical models provide partial descriptions of these processes; inference is the process of using the data to obtain a more complete description of the stochastic processes. Wildlife and ecological scientists have become increasingly concerned with the conditional nature of model-based inference: what if the model is wrong? Over the last 2 decades, Akaike's Information Criterion (AIC) has been widely and increasingly used in wildlife statistics for 2 related purposes, first for model choice and second to quantify model uncertainty. We argue that for the second of these purposes, the Bayesian paradigm provides the natural framework for describing uncertainty associated with model choice and provides the most easily communicated basis for model weighting. Moreover, Bayesian arguments provide the sole justification for interpreting model weights (including AIC weights) as coherent (mathematically self-consistent) model probabilities. This interpretation requires treating the model as an exact description of the data-generating mechanism. We discuss the implications of this assumption, and conclude that more emphasis is needed on model checking to provide confidence in the quality of inference.
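
    For reference, AIC weights, the quantity whose probabilistic interpretation the paper examines, are computed from AIC differences as below; the AIC values are hypothetical.

```python
# AIC model weights: w_i proportional to exp(-0.5 * (AIC_i - AIC_min)).
import numpy as np

aic = np.array([102.3, 104.1, 110.8])             # one value per model
delta = aic - aic.min()
w = np.exp(-0.5 * delta)
w /= w.sum()
print("AIC weights:", np.round(w, 3))             # sum to 1 across the set
```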

  7. Uncertain deduction and conditional reasoning.

    PubMed

    Evans, Jonathan St B T; Thompson, Valerie A; Over, David E

    2015-01-01

    There has been a paradigm shift in the psychology of deductive reasoning. Many researchers no longer think it is appropriate to ask people to assume premises and decide what necessarily follows, with the results evaluated by binary extensional logic. Most everyday and scientific inference is made from more or less confidently held beliefs and not assumptions, and the relevant normative standard is Bayesian probability theory. We argue that the study of "uncertain deduction" should directly ask people to assign probabilities to both premises and conclusions, and report an experiment using this method. We assess this reasoning by two Bayesian metrics: probabilistic validity and coherence according to probability theory. On both measures, participants perform above chance in conditional reasoning, but they do much better when statements are grouped as inferences, rather than evaluated in separate tasks.
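
    One of the two metrics, probabilistic validity in Adams' sense, deems an inference valid when the conclusion's uncertainty (one minus its probability) does not exceed the summed uncertainties of the premises. A tiny sketch, with hypothetical response probabilities:

```python
# Probabilistic (p-) validity check: the conclusion's uncertainty must
# not exceed the total uncertainty of the premises.
def p_valid(premise_probs, conclusion_prob):
    slack = sum(1 - p for p in premise_probs)     # total premise uncertainty
    return (1 - conclusion_prob) <= slack

# Modus ponens with confidently held premises:
print(p_valid([0.9, 0.85], 0.8))   # True: 0.20 <= 0.25
print(p_valid([0.9, 0.85], 0.7))   # False: 0.30 > 0.25
```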

  8. Trust from the past: Bayesian Personalized Ranking based Link Prediction in Knowledge Graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Baichuan; Choudhury, Sutanay; Al-Hasan, Mohammad

    2016-02-01

    Estimating the confidence for a link is a critical task for Knowledge Graph construction. Link prediction, or predicting the likelihood of a link in a knowledge graph based on prior state, is a key research direction within this area. We propose a Latent Feature Embedding based link recommendation model for the prediction task and utilize a Bayesian Personalized Ranking based optimization technique for learning models for each predicate. Experimental results on large-scale knowledge bases such as YAGO2 show that our approach achieves substantially higher performance than several state-of-the-art approaches. Furthermore, we also study the performance of the link prediction algorithm in terms of topological properties of the Knowledge Graph and present a linear regression model to reason about its expected level of accuracy.
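
    A compact sketch of the optimization idea named above, Bayesian Personalized Ranking over latent feature embeddings, on a toy head/tail link matrix; the dimensions, learning rate, and regularization are arbitrary choices, not the paper's settings.

```python
# BPR with latent embeddings: stochastic gradient ascent on
# ln sigmoid(score_pos - score_neg) minus L2 regularization.
import numpy as np

rng = np.random.default_rng(4)
n_heads, n_tails, k = 20, 30, 8
pos_set = {(int(rng.integers(n_heads)), int(rng.integers(n_tails)))
           for _ in range(80)}                    # observed links
pos = list(pos_set)
H = rng.normal(0, 0.1, (n_heads, k))              # head-entity embeddings
T = rng.normal(0, 0.1, (n_tails, k))              # tail-entity embeddings
lr, reg = 0.05, 0.01

for _ in range(5000):
    h, t_pos = pos[rng.integers(len(pos))]
    t_neg = int(rng.integers(n_tails))            # sampled negative tail
    if (h, t_neg) in pos_set:
        continue
    x = H[h] @ (T[t_pos] - T[t_neg])              # score difference
    g = 1.0 / (1.0 + np.exp(x))                   # d[ln sigmoid(x)]/dx
    hu = H[h].copy()
    H[h]     += lr * (g * (T[t_pos] - T[t_neg]) - reg * H[h])
    T[t_pos] += lr * (g * hu - reg * T[t_pos])
    T[t_neg] += lr * (-g * hu - reg * T[t_neg])

print("mean score of observed links:",
      round(float(np.mean([H[h] @ T[t] for h, t in pos])), 3))
```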

  9. Balmer Filaments in Tycho’s Supernova Remnant: An Interplay between Cosmic-ray and Broad-neutral Precursors

    NASA Astrophysics Data System (ADS)

    Knežević, Sladjana; Läsker, Ronald; van de Ven, Glenn; Font, Joan; Raymond, John C.; Bailer-Jones, Coryn A. L.; Beckman, John; Morlino, Giovanni; Ghavamian, Parviz; Hughes, John P.; Heng, Kevin

    2017-09-01

    We present Hα spectroscopic observations and detailed modeling of the Balmer filaments in the supernova remnant (SNR) Tycho (SN 1572). We used GHαFaS (Galaxy Hα Fabry-Pérot Spectrometer) on the William Herschel Telescope with a 3.4′ × 3.4′ field of view, 0.2″ pixel scale, and σ_instr = 8.1 km s⁻¹ resolution at 1″ seeing for ˜10 hr, resulting in 82 spatial-spectral bins that resolve the narrow Hα line in the entire SN 1572 northeastern rim. For the first time, we can therefore mitigate artificial line broadening from unresolved differential motion and probe Hα emission parameters in varying shock and ambient medium conditions. The broad Hα line remains unresolved within the spectral coverage of 392 km s⁻¹. We employed Bayesian inference to obtain reliable parameter confidence intervals and to quantify the evidence for models with multiple line components. The median Hα narrow-line (NL) FWHM of all bins and models is W_NL = (54.8 ± 1.8) km s⁻¹ at the 95% confidence level, varying within [35, 72] km s⁻¹ between bins and clearly broadened compared to the intrinsic (thermal) ≈20 km s⁻¹. Possible line splits are accounted for, significant in ≈18% of the filament, and presumably due to remaining projection effects. We also find widespread evidence for intermediate-line emission of a broad-neutral precursor, with a median W_IL = (180 ± 14) km s⁻¹ (95% confidence). Finally, we present a measurement of the remnant’s systemic velocity, V_LSR = -34 km s⁻¹, and map differential line-of-sight motions. Our results confirm the existence and interplay of shock precursors in Tycho’s remnant. In particular, we show that suprathermal NL emission is near-universal in SN 1572 and that, in the absence of an alternative explanation, collisionless SNR shocks constitute a viable acceleration source for Galactic TeV cosmic-ray protons.

  10. A probability metric for identifying high-performing facilities: an application for pay-for-performance programs.

    PubMed

    Shwartz, Michael; Peköz, Erol A; Burgess, James F; Christiansen, Cindy L; Rosen, Amy K; Berlowitz, Dan

    2014-12-01

    Two approaches are commonly used for identifying high-performing facilities on a performance measure: one, that the facility is in a top quantile (eg, quintile or quartile); and two, that a confidence interval is below (or above) the average of the measure for all facilities. This type of yes/no designation often does not do well in distinguishing high-performing from average-performing facilities. To illustrate an alternative continuous-valued metric for profiling facilities--the probability a facility is in a top quantile--and show the implications of using this metric for profiling and pay-for-performance. We created a composite measure of quality from fiscal year 2007 data based on 28 quality indicators from 112 Veterans Health Administration nursing homes. A Bayesian hierarchical multivariate normal-binomial model was used to estimate shrunken rates of the 28 quality indicators, which were combined into a composite measure using opportunity-based weights. Rates were estimated using Markov Chain Monte Carlo methods as implemented in WinBUGS. The probability metric was calculated from the simulation replications. Our probability metric allowed better discrimination of high performers than the point or interval estimate of the composite score. In a pay-for-performance program, a smaller top quantile (eg, a quintile) resulted in more resources being allocated to the highest performers, whereas a larger top quantile (eg, being above the median) distinguished less among high performers and allocated more resources to average performers. The probability metric has potential but needs to be evaluated by stakeholders in different types of delivery systems.
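
    Given posterior draws of each facility's composite score, the proposed metric is just the share of draws in which the facility ranks in the top quantile. A sketch with simulated draws standing in for MCMC output:

```python
# Probability that each facility is in the top quintile, computed from
# posterior draws of composite quality scores (simulated here).
import numpy as np

rng = np.random.default_rng(5)
n_fac, n_draws = 112, 4000
true_quality = rng.normal(0, 1, n_fac)
draws = true_quality + rng.normal(0, 0.5, (n_draws, n_fac))

cutoff = int(np.ceil(0.8 * n_fac))                # top-quintile rank boundary
ranks = draws.argsort(1).argsort(1)               # rank per draw, 0 = worst
p_top = (ranks >= cutoff).mean(0)                 # Pr(top 20%) per facility
best = np.argsort(-p_top)[:5]
print("facilities most likely in top quintile:", best, np.round(p_top[best], 2))
```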

  11. Ecological persistence in the Late Mississippian (Serpukhovian, Namurian A) Megafloral record of the Upper Silesian Basin, Czech Republic

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gastaldo, R.A.; Purkynova, E.; Simunek, Z.

    2009-05-15

    The Serpukhovian (Namurian A) stratigraphy of the Ostrava Formation, Upper Silesian Coal Basin, Czech Republic, consists of coal-bearing paralic sediments underlain by marine deposits in a cyclothemic nature similar to those in the Pennsylvanian of Euramerica. The thickness of the formation exceeds 3000 m, in which >170 coals are identified in a foreland basin setting. Fifty-five genetic cycles are identified in the present study, using transgressional erosional surfaces as lower and upper boundaries. Terrestrial plant-macrofossil assemblages are preserved within each cycle, mostly associated with coals, and these represent a sampling of the coastal plain vegetation. New high-precision isotope dilution-thermal ionization mass spectrometry U-Pb ages on zircons from tonsteins of two coals provide chronometric constraints for the Serpukhovian. Unweighted Pair Group Method with Arithmetic Mean clustering and Bayesian statistical classification group macrofloral assemblages into four distinct stratigraphic clusters, with assemblages persisting for <18 cycles before compositional change. Cycle duration, based on Ludmila (328.84±0.16 Ma) and Karel (328.01±0.08 Ma) tonsteins, overlaps the short-period (100 kyr) eccentricity cycle at the 95% confidence interval. These dates push the beginning of the Serpukhovian several million years deeper in time. An estimate for the Visean-Serpukhovian boundary is proposed at ~330 Ma. Late Mississippian wetland ecosystems persisted for >1.8 million years before regional perturbation, extirpation, or extinction of taxa occurred. Significant changes in the composition of macrofloral clusters occur across major marine intervals.

  12. Frequency of depression in type 2 diabetes mellitus and an analysis of predictive factors.

    PubMed

    Arshad, Abdul Rehman; Alvi, Kamran Yousaf

    2016-04-01

    To determine the frequency of depression in patients with type 2 diabetes mellitus and to identify predictive factors. The observational study was carried out at 1 Mountain Medical Battalion, Bagh, Azad Kashmir, Pakistan, from June 2013 to May 2014, and comprised type 2 diabetic patients who were not using anti-depressants and did not have a history of other psychiatric illnesses. Demographic data, duration of diabetes, presence of hypertension and type of treatment were recorded, and body mass index was calculated. The Patient Health Questionnaire-9, translated into Urdu, was administered during face-to-face interviews. Scores >5 indicated depression, which was classified into different grades of severity using standard cut-off values. Of the 133 patients, 51 (38.35%) were depressed. Depression was mild in 34 (26%), moderate in 12 (9.6%), moderately severe in 4 (2.9%) and severe in 1 (0.7%) patient. On univariate binary logistic regression, female gender (odds ratio = 3.07; 95% confidence interval = 1.43, 6.59), less education (odds ratio = 0.90; 95% confidence interval = 0.84, 0.97), shorter duration of diabetes (odds ratio = 0.87; 95% confidence interval = 0.80, 0.96) and higher body mass index (odds ratio = 1.41; 95% confidence interval = 1.05, 1.25) were significantly associated with depression. Only shorter duration of diabetes (odds ratio = 0.90; 95% confidence interval = 0.82, 0.99) remained significant after adjustment for confounders. Age, level of education, glycaemic control and type of treatment did not predict depression. A significant proportion of type 2 diabetics were depressed. Shorter duration of diabetes reliably predicted depression in these patients.

  13. EXACT DISTRIBUTIONS OF INTRACLASS CORRELATION AND CRONBACH'S ALPHA WITH GAUSSIAN DATA AND GENERAL COVARIANCE.

    PubMed

    Kistner, Emily O; Muller, Keith E

    2004-09-01

    Intraclass correlation and Cronbach's alpha are widely used to describe reliability of tests and measurements. Even with Gaussian data, exact distributions are known only for compound symmetric covariance (equal variances and equal correlations). Recently, large sample Gaussian approximations were derived for the distribution functions. New exact results allow calculating the exact distribution function and other properties of intraclass correlation and Cronbach's alpha, for Gaussian data with any covariance pattern, not just compound symmetry. Probabilities are computed in terms of the distribution function of a weighted sum of independent chi-square random variables. New F approximations for the distribution functions of intraclass correlation and Cronbach's alpha are much simpler and faster to compute than the exact forms. Assuming the covariance matrix is known, the approximations typically provide sufficient accuracy, even with as few as ten observations. Either the exact or approximate distributions may be used to create confidence intervals around an estimate of reliability. Monte Carlo simulations led to a number of conclusions. Correctly assuming that the covariance matrix is compound symmetric leads to accurate confidence intervals, as was expected from previously known results. However, assuming and estimating a general covariance matrix produces somewhat optimistically narrow confidence intervals with 10 observations. Increasing sample size to 100 gives essentially unbiased coverage. Incorrectly assuming compound symmetry leads to pessimistically large confidence intervals, with pessimism increasing with sample size. In contrast, incorrectly assuming general covariance introduces only a modest optimistic bias in small samples. Hence the new methods seem preferable for creating confidence intervals, except when compound symmetry definitely holds.
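
    One commonly used F approximation (a Feldt-type interval) treats (1 - alpha_hat)/(1 - alpha) as F-distributed with n-1 and (n-1)(k-1) degrees of freedom under compound symmetry; the sketch below applies that pivot to synthetic exchangeable items. Treat the pivot and code as an illustration of the idea, not a substitute for the paper's exact results.

```python
# Cronbach's alpha with an F-approximation confidence interval,
# assuming compound symmetry (exchangeable items).
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n, k = 100, 8                                     # subjects, items
subject = rng.normal(0, 1, (n, 1))
items = subject + rng.normal(0, 1, (n, k))        # exchangeable items

v_items = items.var(axis=0, ddof=1).sum()
v_total = items.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - v_items / v_total)

df1, df2 = n - 1, (n - 1) * (k - 1)
f_lo, f_hi = stats.f.ppf([0.025, 0.975], df1, df2)
ci = (1 - (1 - alpha) / f_lo, 1 - (1 - alpha) / f_hi)
print(f"alpha = {alpha:.3f}, approx 95% CI [{ci[0]:.3f}, {ci[1]:.3f}]")
```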

  14. A prospective observational cohort study to assess the incidence of acute otitis media among children 0-5 years of age in Southern Brazil.

    PubMed

    Lanzieri, Tatiana M; Cunha, Clóvis Arns da; Cunha, Rejane B; Arguello, D Fermin; Devadiga, Raghavendra; Sanchez, Nervo; Barria, Eduardo Ortega

    To estimate acute otitis media incidence among young children and the impact on quality of life of parents/caregivers in a southern Brazilian city. Prospective cohort study including children 0-5 years of age registered at a private pediatric practice. Acute otitis media episodes diagnosed by a pediatrician and the impact on quality of life of parents/caregivers were assessed during a 12-month follow-up. During September 2008-March 2010, of 1,136 children enrolled in the study, 1,074 (95%) were followed: 55.0% were ≤2 years of age, 52.3% males, 94.7% white, and 69.2% had previously received pneumococcal vaccine in private clinics. Acute otitis media incidence per 1000 person-years was 95.7 (95% confidence interval: 77.2-117.4) overall, 105.5 (95% confidence interval: 78.3-139.0) in children ≤2 years of age and 63.6 (95% confidence interval: 43.2-90.3) in children 3-5 years of age. Acute otitis media incidence per 1000 person-years was 86.3 (95% confidence interval: 65.5-111.5) and 117.1 (95% confidence interval: 80.1-165.3) among vaccinated and unvaccinated children, respectively. In all, 68.9% of parents reported worsening of their overall quality of life. Acute otitis media incidence among unvaccinated children in our study may be useful as baseline data to assess the impact of pneumococcal vaccine introduction in the Brazilian National Immunization Program in April 2010. Copyright © 2017 Sociedade Brasileira de Infectologia. Published by Elsevier Editora Ltda. All rights reserved.

  15. Risk factors for gametocyte carriage in uncomplicated falciparum malaria in children before and after artemisinin-based combination treatments.

    PubMed

    Sowunmi, Akintunde; Okuboyejo, Titilope M; Gbotosho, Grace O; Happi, Christian T

    2011-01-01

    Artemisinin-based combination treatments (ACTs) are the recommended first-line antimalarials globally, but their influence on the risk factors associated with gametocyte carriage has had little evaluation in endemic areas. The risk factors associated with gametocytaemia at presentation and after ACTs were evaluated in 835 children assigned to artesunate, artesunate-amodiaquine, artesunate-mefloquine or artemether-lumefantrine. Gametocyte carriage at enrolment was 8.4%. During follow-up, 24 patients (2.8%) developed gametocytaemia, of which 83% (20 patients) had developed by day 7 following treatment. In a multiple regression model, 2 factors were independent risk factors for the presence of gametocytaemia at enrolment, namely age <3 years (adjusted odds ratio 2.03, 95% confidence interval 1.01-4.05; p = 0.04) and enrolment before 2009 (adjusted odds ratio 4.2, 95% confidence interval 2.09-8.44; p < 0.001). Haematocrit <25% and parasitaemia <50,000/μl blood were associated with an increased risk of gametocytaemia. Following treatment, 3 factors were independent risk factors for gametocytaemia, namely gametocytaemia at enrolment (adjusted odds ratio 46.39, 95% confidence interval 22.3-96.46; p < 0.0001) and treatment with artesunate (adjusted odds ratio 6.74, 95% confidence interval 1.79-25.27; p = 0.005) or artesunate-mefloquine (adjusted odds ratio 9.66, 95% confidence interval 2.87-32.46; p < 0.0001) relative to other ACTs. ACTs modified the risk factors associated with gametocyte carriage after use. Copyright © 2012 S. Karger AG, Basel.

  16. Weighted regression analysis and interval estimators

    Treesearch

    Donald W. Seegrist

    1974-01-01

    A method is presented for deriving the weighted least squares estimators for the parameters of a multiple regression model. Confidence intervals for expected values, and prediction intervals for the means of future samples, are given.
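
    In matrix form the weighted least squares estimator is (X'WX)^{-1}X'Wy with W the diagonal matrix of weights; a minimal sketch on heteroscedastic synthetic data:

```python
# Weighted least squares: observations with known, unequal variances
# get weights proportional to 1/variance.
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(1, 10, 40)
sigma = 0.2 * x                                   # heteroscedastic noise
y = 1.0 + 2.0 * x + rng.normal(0, sigma)

X = np.column_stack([np.ones_like(x), x])
W = np.diag(1 / sigma**2)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # (X'WX)^-1 X'Wy
cov = np.linalg.inv(X.T @ W @ X)                  # estimator covariance
se = np.sqrt(np.diag(cov))
print("estimates:", beta.round(3), "standard errors:", se.round(3))
```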

  17. Intraclass Correlation Coefficients in Hierarchical Design Studies with Discrete Response Variables: A Note on a Direct Interval Estimation Procedure

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2015-01-01

    A latent variable modeling procedure that can be used to evaluate intraclass correlation coefficients in two-level settings with discrete response variables is discussed. The approach is readily applied when the purpose is to furnish confidence intervals at prespecified confidence levels for these coefficients in setups with binary or ordinal…

  18. Bayesian approach to the assessment of the population-specific risk of inhibitors in hemophilia A patients: a case study

    PubMed Central

    Cheng, Ji; Iorio, Alfonso; Marcucci, Maura; Romanov, Vadim; Pullenayegum, Eleanor M; Marshall, John K; Thabane, Lehana

    2016-01-01

    Background Developing inhibitors is a rare event during the treatment of hemophilia A. The multifaceted nature of, and the uncertainty surrounding, inhibitor development further complicate the process of estimating the inhibitor rate from the limited data. Bayesian statistical modeling provides a useful tool in generating, enhancing, and exploring the evidence through incorporating all the available information. Methods We built our Bayesian analysis using three study cases to estimate the inhibitor rates of patients with hemophilia A in three different scenarios: Case 1, a single cohort of previously treated patients (PTPs) or previously untreated patients; Case 2, a meta-analysis of PTP cohorts; and Case 3, a previously unexplored patient population – patients with baseline low-titer inhibitor or history of inhibitor development. The data used in this study were extracted from three published ADVATE (antihemophilic factor [recombinant] is a product of Baxter for treating hemophilia A) post-authorization surveillance studies. Noninformative and informative priors were applied to Bayesian standard (Case 1) or random-effects (Case 2 and Case 3) logistic models. Bayesian probabilities of satisfying three meaningful thresholds of the risk of developing a clinically significant inhibitor (10/100, 5/100 [high rates], and 1/86 [the Food and Drug Administration mandated cutoff rate in PTPs]) were calculated. The effect of discounting prior information or scaling up the study data was evaluated. Results Results based on noninformative priors were similar to the classical approach. Using priors from PTPs lowered the point estimate and narrowed the 95% credible intervals (Case 1: from 1.3 [0.5, 2.7] to 0.8 [0.5, 1.1]; Case 2: from 1.9 [0.6, 6.0] to 0.8 [0.5, 1.1]; Case 3: from 2.3 [0.5, 6.8] to 0.7 [0.5, 1.1]). All probabilities of satisfying a threshold of 1/86 were above 0.65. Increasing the number of patients by two and ten times substantially narrowed the credible intervals for the single cohort study (1.4 [0.7, 2.3] and 1.4 [1.1, 1.8], respectively). Increasing the number of studies by two and ten times for the multiple study scenarios (Case 2: 1.9 [0.6, 4.0] and 1.9 [1.5, 2.6]; Case 3: 2.4 [0.9, 5.0] and 2.6 [1.9, 3.5], respectively) had a similar effect. Conclusion The Bayesian approach, as a robust, transparent, and reproducible analytic method, can be efficiently used to estimate the inhibitor rate of hemophilia A in complex clinical settings. PMID:27822129
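
    The essence of the single-cohort calculation (Case 1) is conjugate updating: with a Beta prior on the inhibitor rate and x events in n patients, the posterior is Beta(a+x, b+n-x), and the probability of satisfying a threshold is a posterior tail area. The counts below are invented, not the ADVATE data:

```python
# Beta-Binomial posterior for a rare-event rate and the probability of
# satisfying clinically meaningful thresholds.
from scipy import stats

a, b = 1, 1                                       # near-noninformative prior
x, n = 5, 500                                     # inhibitors / patients (hypothetical)
post = stats.beta(a + x, b + n - x)

for thr in (10 / 100, 5 / 100, 1 / 86):
    print(f"Pr(rate < {thr:.4f}) = {post.cdf(thr):.3f}")
print("95% credible interval:", [round(q, 4) for q in post.ppf([0.025, 0.975])])
```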

  20. Monte Carlo simulation of parameter confidence intervals for non-linear regression analysis of biological data using Microsoft Excel.

    PubMed

    Lambert, Ronald J W; Mytilinaios, Ioannis; Maitland, Luke; Brown, Angus M

    2012-08-01

    This study describes a method to obtain parameter confidence intervals from the fitting of non-linear functions to experimental data, using the SOLVER and Analysis ToolPaK Add-In of the Microsoft Excel spreadsheet. Previously we have shown that Excel can fit complex multiple functions to biological data, obtaining values equivalent to those returned by more specialized statistical or mathematical software. However, a disadvantage of using the Excel method was the inability to return confidence intervals for the computed parameters or the correlations between them. Using a simple Monte-Carlo procedure within the Excel spreadsheet (without recourse to programming), SOLVER can provide parameter estimates (up to 200 at a time) for multiple 'virtual' data sets, from which the required confidence intervals and correlation coefficients can be obtained. The general utility of the method is exemplified by applying it to the analysis of the growth of Listeria monocytogenes, the growth inhibition of Pseudomonas aeruginosa by chlorhexidine and the further analysis of the electrophysiological data from the compound action potential of the rodent optic nerve. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
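
    A Python analogue of the spreadsheet procedure (fit, simulate 'virtual' datasets from the fitted curve plus resampled residuals, refit, and read off percentile intervals), with an illustrative saturating-growth model:

```python
# Monte Carlo (residual-bootstrap) confidence intervals for nonlinear
# regression parameters, mirroring the Excel/SOLVER workflow.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(8)

def growth(t, a, k):
    return a * (1 - np.exp(-k * t))               # simple saturating model

t = np.linspace(0, 10, 30)
y = growth(t, 5.0, 0.6) + rng.normal(0, 0.2, t.size)

popt, _ = curve_fit(growth, t, y, p0=(1.0, 0.1))
resid = y - growth(t, *popt)

boot = []
for _ in range(500):
    y_sim = growth(t, *popt) + rng.choice(resid, t.size, replace=True)
    p_sim, _ = curve_fit(growth, t, y_sim, p0=popt)
    boot.append(p_sim)
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print("a in", (lo[0].round(3), hi[0].round(3)),
      "k in", (lo[1].round(3), hi[1].round(3)))
```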

  1. Influences of the Tamarisk Leaf Beetle (Diorhabda carinulata) on the diet of insectivorous birds along the Dolores River in Southwestern Colorado

    USGS Publications Warehouse

    Puckett, Sarah L.; van Riper, Charles

    2014-01-01

    We examined the effects of a biologic control agent, the tamarisk leaf beetle (Diorhabda carinulata), on native avifauna in southwestern Colorado, specifically addressing whether and to what degree birds eat tamarisk leaf beetles. In 2010, we documented avian foraging behavior, characterized the arthropod community, sampled bird diets, and undertook an experiment to determine whether tamarisk leaf beetles are palatable to birds. We observed that tamarisk leaf beetles compose 24.0 percent (95-percent-confidence interval, 19.9-27.4 percent) and 35.4 percent (95-percent-confidence interval, 32.4-45.1 percent) of arthropod abundance and biomass in the study area, respectively. Birds ate few tamarisk leaf beetles, despite a superabundance of D. carinulata in the environment. The frequency of occurrence of tamarisk leaf beetles in bird diets was 2.1 percent (95-percent-confidence interval, 1.3-2.9 percent) by abundance and 3.4 percent (95-percent-confidence interval, 2.6-4.2 percent) by biomass. Thus, tamarisk leaf beetles probably do not contribute significantly to the diets of birds in areas where biologic control of tamarisk is being applied.

  2. Post-traumatic stress disorder in adolescents after a hurricane.

    PubMed

    Garrison, C Z; Weinrich, M W; Hardin, S B; Weinrich, S; Wang, L

    1993-10-01

    A school-based study conducted in 1990, 1 year after Hurricane Hugo, investigated the frequency and correlates of post-traumatic stress disorder (PTSD) in 1,264 adolescents aged 11-17 years residing in selected South Carolina communities. Data were collected via a 174-item self-administered questionnaire that included a PTSD symptom scale. A computer algorithm that applied decision rules of the Diagnostic and Statistical Manual of Mental Disorders, Third Edition, Revised to the symptoms reported was used to assign a diagnosis of PTSD and to designate the number of individuals who met the reexperiencing (20%), avoidance (9%), and arousal (18%) criteria. Rates of PTSD were lowest in black males (1.5%) and higher, but similar, in the remaining groups (3.8-6.2%). Results from a multivariable logistic model indicated that exposure to the hurricane (odds ratio (OR) = 1.26, 95% confidence interval 1.13-1.41), experiencing other violent traumatic events (OR = 2.46, 95% confidence interval 1.75-3.44), being white (OR = 2.03, 95% confidence interval 1.12-3.69) and being female (OR = 2.17, 95% confidence interval 1.15-4.10) were significant correlates of PTSD.

  3. Bendectin and human congenital malformations.

    PubMed

    Shiono, P H; Klebanoff, M A

    1989-08-01

    The relationship between Bendectin exposure during the first trimester of pregnancy and the occurrence of congenital malformations was prospectively studied in 31,564 newborns registered in the Northern California Kaiser Permanente Birth Defects Study. The odds ratio for any major malformation and Bendectin use was 1.0 (95% confidence interval 0.8-1.4). There were 58 categories of congenital malformations; three of them were statistically associated with Bendectin exposure (microcephaly--odds ratio = 5.3, 95% confidence interval = 1.8-15.6; congenital cataract--odds ratio = 5.3, 95% confidence interval = 1.2-24.3; lung malformations (ICD-8 codes 484.4-484.8)--odds ratio = 4.6, 95% confidence interval = 1.9-10.9). This is exactly the number of associations that would be expected by chance. An independent study (the Collaborative Perinatal Project) was used to determine whether vomiting during pregnancy in the absence of Bendectin use was associated with these three malformations. Two of the three (microcephaly and cataract) had strong positive associations with vomiting in the absence of Bendectin use. We conclude that there is no increase in the overall rate of major malformations after exposure to Bendectin and that the three associations found between Bendectin and individual malformations are unlikely to be causal.

  4. Confidence intervals for expected moments algorithm flood quantile estimates

    USGS Publications Warehouse

    Cohn, Timothy A.; Lane, William L.; Stedinger, Jery R.

    2001-01-01

    Historical and paleoflood information can substantially improve flood frequency estimates if appropriate statistical procedures are properly applied. However, the Federal guidelines for flood frequency analysis, set forth in Bulletin 17B, rely on an inefficient “weighting” procedure that fails to take advantage of historical and paleoflood information. This has led researchers to propose several more efficient alternatives including the Expected Moments Algorithm (EMA), which is attractive because it retains Bulletin 17B's statistical structure (method of moments with the Log Pearson Type 3 distribution) and thus can be easily integrated into flood analyses employing the rest of the Bulletin 17B approach. The practical utility of EMA, however, has been limited because no closed‐form method has been available for quantifying the uncertainty of EMA‐based flood quantile estimates. This paper addresses that concern by providing analytical expressions for the asymptotic variance of EMA flood‐quantile estimators and confidence intervals for flood quantile estimates. Monte Carlo simulations demonstrate the properties of such confidence intervals for sites where a 25‐ to 100‐year streamgage record is augmented by 50 to 150 years of historical information. The experiments show that the confidence intervals, though not exact, should be acceptable for most purposes.

  5. Bayesian statistics and Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Koch, K. R.

    2018-03-01

    The Bayesian approach allows an intuitive way to derive the methods of statistics. Probability is defined as a measure of the plausibility of statements or propositions. Three rules are sufficient to obtain the laws of probability. If the statements refer to the numerical values of variables, the so-called random variables, univariate and multivariate distributions follow. They lead to the point estimation by which unknown quantities, i.e. unknown parameters, are computed from measurements. The unknown parameters are random variables, whereas in traditional statistics, which is not founded on Bayes' theorem, they are fixed quantities. Bayesian statistics therefore recommends itself for Monte Carlo methods, which generate random variates from given distributions. Monte Carlo methods, of course, can also be applied in traditional statistics. The unknown parameters are introduced as functions of the measurements, and the Monte Carlo methods give the covariance matrix and the expectation of these functions. A confidence region is derived where the unknown parameters are situated with a given probability. Following a method of traditional statistics, hypotheses are tested by determining whether a value for an unknown parameter lies inside or outside the confidence region. The error propagation of a random vector by the Monte Carlo methods is presented as an application. If the random vector results from a nonlinearly transformed vector, its covariance matrix and its expectation follow from the Monte Carlo estimate. This saves a considerable amount of derivatives to be computed, and errors of the linearization are avoided. The Monte Carlo method is therefore efficient. If the functions of the measurements are given by a sum of two or more random vectors with different multivariate distributions, the resulting distribution is generally not known. The Monte Carlo methods are then needed to obtain the covariance matrix and the expectation of the sum.
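
    The error-propagation application reads, in code, as follows: draw samples of the random vector, push them through the nonlinear transform, and take the sample expectation and covariance of the result. The transform and moments below are made up:

```python
# Monte Carlo error propagation through a nonlinear transform,
# avoiding linearization entirely.
import numpy as np

rng = np.random.default_rng(9)
mean = np.array([10.0, 0.5])
cov = np.array([[0.04, 0.002],
                [0.002, 0.0009]])
x = rng.multivariate_normal(mean, cov, 100_000)

def transform(v):                                 # nonlinear mapping
    r, theta = v[:, 0], v[:, 1]
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

y = transform(x)
print("expectation:", y.mean(0).round(4))
print("covariance:\n", np.cov(y.T).round(5))
```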

  6. Bayesian assessment of the expected data impact on prediction confidence in optimal sampling design

    NASA Astrophysics Data System (ADS)

    Leube, P. C.; Geiges, A.; Nowak, W.

    2012-02-01

    Incorporating hydro(geo)logical data, such as head and tracer data, into stochastic models of (subsurface) flow and transport helps to reduce prediction uncertainty. Because of financial limitations for investigation campaigns, information needs toward modeling or prediction goals should be satisfied efficiently and rationally. Optimal design techniques find the best one among a set of investigation strategies. They optimize the expected impact of data on prediction confidence or related objectives prior to data collection. We introduce a new optimal design method, called PreDIA(gnosis) (Preposterior Data Impact Assessor). PreDIA derives the relevant probability distributions and measures of data utility within a fully Bayesian, generalized, flexible, and accurate framework. It extends the bootstrap filter (BF) and related frameworks to optimal design by marginalizing utility measures over the yet unknown data values. PreDIA is a strictly formal information-processing scheme free of linearizations. It works with arbitrary simulation tools, provides full flexibility concerning measurement types (linear, nonlinear, direct, indirect), allows for any desired task-driven formulations, and can account for various sources of uncertainty (e.g., heterogeneity, geostatistical assumptions, boundary conditions, measurement values, model structure uncertainty, a large class of model errors) via Bayesian geostatistics and model averaging. Existing methods fail to simultaneously provide these crucial advantages, which our method buys at relatively higher computational cost. We demonstrate the applicability and advantages of PreDIA over conventional linearized methods in a synthetic example of subsurface transport. In the example, we show that informative data is often invisible for linearized methods that confuse zero correlation with statistical independence. Hence, PreDIA will often lead to substantially better sampling designs. Finally, we extend our example to specifically highlight the consideration of conceptual model uncertainty.

  7. Bayesian Optimal Interval Design: A Simple and Well-Performing Design for Phase I Oncology Trials.

    PubMed

    Yuan, Ying; Hess, Kenneth R; Hilsenbeck, Susan G; Gilbert, Mark R

    2016-09-01

    Despite more than two decades of publications that offer more innovative model-based designs, the classical 3 + 3 design remains the most dominant phase I trial design in practice. In this article, we introduce a new trial design, the Bayesian optimal interval (BOIN) design. The BOIN design is easy to implement in a way similar to the 3 + 3 design, but is more flexible for choosing the target toxicity rate and cohort size and yields a substantially better performance that is comparable with that of more complex model-based designs. The BOIN design contains the 3 + 3 design and the accelerated titration design as special cases, thus linking it to established phase I approaches. A numerical study shows that the BOIN design generally outperforms the 3 + 3 design and the modified toxicity probability interval (mTPI) design. The BOIN design is more likely than the 3 + 3 design to correctly select the MTD and allocate more patients to the MTD. Compared with the mTPI design, the BOIN design has a substantially lower risk of overdosing patients and generally a higher probability of correctly selecting the MTD. User-friendly software is freely available to facilitate the application of the BOIN design. Clin Cancer Res; 22(17); 4291-301. ©2016 AACR. ©2016 American Association for Cancer Research.
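
    The escalation/de-escalation cutoffs that define the BOIN intervals follow from the published derivation (Liu and Yuan, 2015) with default alternatives phi1 = 0.6*phi and phi2 = 1.4*phi around the target toxicity rate phi; the sketch below reproduces those formulas for illustration and is not a validated trial tool.

```python
# BOIN interval boundaries: escalate if the observed toxicity rate at
# the current dose is <= lam_e, de-escalate if >= lam_d.
import numpy as np

def boin_boundaries(phi, phi1=None, phi2=None):
    phi1 = 0.6 * phi if phi1 is None else phi1    # highest "under-dosing" rate
    phi2 = 1.4 * phi if phi2 is None else phi2    # lowest "over-dosing" rate
    lam_e = (np.log((1 - phi1) / (1 - phi))
             / np.log(phi * (1 - phi1) / (phi1 * (1 - phi))))
    lam_d = (np.log((1 - phi) / (1 - phi2))
             / np.log(phi2 * (1 - phi) / (phi * (1 - phi2))))
    return lam_e, lam_d

lam_e, lam_d = boin_boundaries(0.30)
print(f"escalate if rate <= {lam_e:.3f}, de-escalate if >= {lam_d:.3f}")
# For a target of 0.30 this gives roughly 0.236 and 0.358.
```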

  8. Associations between maternal periconceptional exposure to secondhand tobacco smoke and major birth defects.

    PubMed

    Hoyt, Adrienne T; Canfield, Mark A; Romitti, Paul A; Botto, Lorenzo D; Anderka, Marlene T; Krikov, Sergey V; Tarpey, Morgan K; Feldkamp, Marcia L

    2016-11-01

    While associations between secondhand smoke and a few birth defects (namely, oral clefts and neural tube defects) have been noted in the scientific literature, to our knowledge, there is no single or comprehensive source of population-based information on its associations with a range of birth defects among nonsmoking mothers. We utilized data from the National Birth Defects Prevention Study, a large population-based multisite case-control study, to examine associations between maternal reports of periconceptional exposure to secondhand smoke in the household or workplace/school and major birth defects. The multisite National Birth Defects Prevention Study is the largest case-control study of birth defects to date in the United States. We selected cases from birth defect groups having >100 total cases, as well as all nonmalformed controls (10,200), from delivery years 1997 through 2009; 44 birth defects were examined. After excluding cases and controls from multiple births and those whose mothers reported active smoking or pregestational diabetes, we analyzed data on periconceptional secondhand smoke exposure, encompassing the period from 1 month prior to conception through the first trimester. For craniosynostosis, we additionally examined the effect of exposure in the second and third trimesters because of this defect's potential sensitivity to teratogens throughout pregnancy. Covariates included in all final models of birth defects with ≥5 exposed mothers were study site, previous live births, time between estimated date of delivery and interview date, maternal age at estimated date of delivery, race/ethnicity, education, body mass index, nativity, household income divided by number of people supported by this income, periconceptional alcohol consumption, and folic acid supplementation. For each birth defect examined, we used logistic regression analyses to estimate both crude and adjusted odds ratios and 95% confidence intervals for both isolated and total case groups for various sources of exposure (household only; workplace/school only; household and workplace/school; household or workplace/school). The prevalence of secondhand smoke exposure across all sources ranged from 12.9% to 27.8% for cases and from 14.5% to 15.8% for controls. The adjusted odds ratios for any vs no secondhand smoke exposure in the household or workplace/school and isolated birth defects were significantly elevated for neural tube defects (anencephaly: adjusted odds ratio, 1.66; 95% confidence interval, 1.22-2.25; and spina bifida: adjusted odds ratio, 1.49; 95% confidence interval, 1.20-1.86); orofacial clefts (cleft lip without cleft palate: adjusted odds ratio, 1.41; 95% confidence interval, 1.10-1.81; cleft lip with or without cleft palate: adjusted odds ratio, 1.24; 95% confidence interval, 1.05-1.46; cleft palate alone: adjusted odds ratio, 1.31; 95% confidence interval, 1.06-1.63); bilateral renal agenesis (adjusted odds ratio, 1.99; 95% confidence interval, 1.05-3.75); amniotic band syndrome-limb body wall complex (adjusted odds ratio, 1.66; 95% confidence interval, 1.10-2.51); and atrial septal defects, secundum (adjusted odds ratio, 1.37; 95% confidence interval, 1.09-1.72). There were no significant inverse associations observed. Additional studies replicating the findings are needed to better understand the moderate positive associations observed between periconceptional secondhand smoke and several birth defects in this analysis. Increased odds ratios resulting from chance (eg, multiple comparisons) or recall bias cannot be ruled out. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. New estimates of elasticity of demand for healthcare in rural China.

    PubMed

    Zhou, Zhongliang; Su, Yanfang; Gao, Jianmin; Xu, Ling; Zhang, Yaoguang

    2011-12-01

    Only a few empirical studies have reported the own-price elasticity of demand for health care in rural China. Research on the income elasticity of demand for health care and on the cross-price elasticity of demand between inpatient and outpatient services in rural China is likewise lacking. However, elasticity of demand is informative for evaluating current policy and for guiding further policy making. Our study contributes to the literature by estimating three elasticities (own-price, cross-price, and income elasticity of demand for health care) based on nationally representative data. We aim to answer three empirical questions with regard to health expenditure in rural China: (1) Which service is more sensitive to price change, outpatient or inpatient service? (2) Is outpatient service a substitute or complement to inpatient service? and (3) Does demand for inpatient services grow faster than demand for outpatient services with income growth? Based on data from a National Health Services Survey, a Probit regression model with probability of outpatient visit and probability of inpatient visit as dependent variables and a zero-truncated negative binomial regression model with outpatient visits as dependent variable were constructed to isolate the effects of price and income on demand for health care. Both pooled and separated regressions for 2003 and 2008 were conducted with tests of robustness. Own-price elasticities of demand for first outpatient visit, outpatient visits among users and first inpatient visit are -0.519 [95% confidence interval (-0.703, -0.336)], -0.547 [95% confidence interval (-0.747, -0.347)] and -0.372 [95% confidence interval (-0.517, -0.226)], respectively. Cross-price elasticities of demand for first outpatient visit, outpatient visits among users and first inpatient visit are 0.073 [95% confidence interval (-0.176, 0.322)], 0.308 [95% confidence interval (0.087, 0.528)], and 0.059 [95% confidence interval (-0.085, 0.204)], respectively. Income elasticities of demand for first outpatient visit, outpatient visits among users and first inpatient visit are 0.098 [95% confidence interval (0.018, 0.178)], 0.136 [95% confidence interval (0.028, 0.245)] and 0.521 [95% confidence interval (0.438, 0.605)], respectively. The aforementioned results are for 2008; a similar pattern holds for 2003 and for the pooled data from the two periods. First, no significant difference is detected between the sensitivity of outpatient services and the sensitivity of inpatient services in response to own-price changes. Second, inpatient services are substitutes for outpatient services. Third, the growth of inpatient services is faster than the growth in outpatient services in response to income growth. The major findings from this paper suggest refining insurance policy in rural China. First, from a cost-effectiveness perspective, changing outpatient price is at least as effective as changing inpatient price to adjust demand for health care. Second, the current national guideline of healthcare reform to increase the reimbursement rate for inpatient services will crowd out outpatient services; however, we have no evidence about the change in demand for inpatient services if insurance covers outpatient services. Third, a referral system and gate-keeping system should be established to guide rural patients to utilize outpatient services. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  10. Confidence limit calculation for antidotal potency ratio derived from lethal dose 50

    PubMed Central

    Manage, Ananda; Petrikovics, Ilona

    2013-01-01

    AIM: To describe confidence interval calculation for antidotal potency ratios using the bootstrap method. METHODS: The nonparametric bootstrap method, invented by Efron, can easily be adapted to construct confidence intervals in situations like this. The bootstrap is a resampling method in which the bootstrap samples are obtained by resampling from the original sample. RESULTS: The described confidence interval calculation using the bootstrap method does not require knowledge of the sampling distribution of the antidotal potency ratio. This can be a substantial help for toxicologists, who are directed to employ the Dixon up-and-down method, with its lower number of animals, to determine lethal dose 50 values for characterizing the investigated toxic molecules and, eventually, the antidotal protection afforded by the test antidotal systems. CONCLUSION: The described method can serve as a useful tool in various other applications. The simplicity of the method makes it easy to perform the calculation in most programming software packages. PMID:25237618
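
    A sketch of the described bootstrap interval for an antidotal potency ratio (LD50 with antidote over LD50 without). To keep the example self-contained, it resamples hypothetical per-run LD50 estimates rather than raw Dixon up-and-down data:

```python
# Percentile bootstrap CI for a ratio of LD50 estimates.
import numpy as np

rng = np.random.default_rng(10)
ld50_control = np.array([4.8, 5.2, 5.0, 4.6, 5.3])     # mg/kg, invented
ld50_antidote = np.array([9.1, 10.2, 9.8, 8.7, 10.5])  # mg/kg, invented

ratios = []
for _ in range(10000):
    c = rng.choice(ld50_control, ld50_control.size, replace=True)
    a = rng.choice(ld50_antidote, ld50_antidote.size, replace=True)
    ratios.append(a.mean() / c.mean())
lo, hi = np.percentile(ratios, [2.5, 97.5])
print(f"APR = {ld50_antidote.mean()/ld50_control.mean():.2f}, "
      f"95% bootstrap CI [{lo:.2f}, {hi:.2f}]")
```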

  11. Differentiating Wheat Genotypes by Bayesian Hierarchical Nonlinear Mixed Modeling of Wheat Root Density

    PubMed Central

    Wasson, Anton P.; Chiu, Grace S.; Zwart, Alexander B.; Binns, Timothy R.

    2017-01-01

    Ensuring future food security for a growing population while climate change and urban sprawl put pressure on agricultural land will require sustainable intensification of current farming practices. For the crop breeder this means producing higher crop yields with less resources due to greater environmental stresses. While easy gains in crop yield have been made mostly “above ground,” little progress has been made “below ground”; and yet it is these root system traits that can improve productivity and resistance to drought stress. Wheat pre-breeders use soil coring and core-break counts to phenotype root architecture traits, with data collected on rooting density for hundreds of genotypes in small increments of depth. The measured densities are both large datasets and highly variable even within the same genotype, hence, any rigorous, comprehensive statistical analysis of such complex field data would be technically challenging. Traditionally, most attributes of the field data are therefore discarded in favor of simple numerical summary descriptors which retain much of the high variability exhibited by the raw data. This poses practical challenges: although plant scientists have established that root traits do drive resource capture in crops, traits that are more randomly (rather than genetically) determined are difficult to breed for. In this paper we develop a hierarchical nonlinear mixed modeling approach that utilizes the complete field data for wheat genotypes to fit, under the Bayesian paradigm, an “idealized” relative intensity function for the root distribution over depth. Our approach was used to determine heritability: how much of the variation between field samples was purely random vs. being mechanistically driven by the plant genetics? Based on the genotypic intensity functions, the overall heritability estimate was 0.62 (95% Bayesian confidence interval was 0.52 to 0.71). Despite root count profiles that were statistically very noisy, our approach led to denoised profiles which exhibited rigorously discernible phenotypic traits. Profile-specific traits could be representative of a genotype, and thus, used as a quantitative tool to associate phenotypic traits with specific genotypes. This would allow breeders to select for whole root system distributions appropriate for sustainable intensification, and inform policy for mitigating crop yield risk and food insecurity. PMID:28303148

  12. Patterns of gestational weight gain and birthweight outcomes in the Eunice Kennedy Shriver National Institute of Child Health and Human Development Fetal Growth Studies-Singletons: a prospective study.

    PubMed

    Pugh, Sarah J; Albert, Paul S; Kim, Sungduk; Grobman, William; Hinkle, Stefanie N; Newman, Roger B; Wing, Deborah A; Grantz, Katherine L

    2017-09-01

    Inadequate or excessive total gestational weight gain is associated with increased risks of small- and large-for-gestational-age births, respectively, but evidence is sparse regarding overall and trimester-specific patterns of gestational weight gain in relation to these risks. Characterizing the interrelationship between patterns of gestational weight gain across trimesters can reveal whether the trajectory of gestational weight gain in the first trimester sets the path for gestational weight gain in subsequent trimesters, thereby serving as an early marker for at-risk pregnancies. We sought to describe overall trajectories of gestational weight gain across gestation and assess the risk of adverse birthweight outcomes associated with the overall trajectory and whether the timing of gestational weight gain (first vs second/third trimester) is differentially associated with adverse outcomes. We conducted a secondary analysis of a prospective cohort of 2802 singleton pregnancies from 12 US prenatal centers (2009 through 2013). Small and large for gestational age were calculated using sex-specific birthweight references <5th, <10th, or ≥90th percentiles, respectively. At each of the research visits, women's weight was measured following a standardized anthropometric protocol. Maternal weight at antenatal clinical visits was also abstracted from the prenatal records. Semiparametric group-based latent class trajectory models estimated overall gestational weight gain and separate first- and second-/third-trimester trajectories to assess tracking. Robust Poisson regression was used to estimate the relative risk of small- and large-for-gestational-age outcomes by the probability of trajectory membership. We tested whether relationships were modified by prepregnancy body mass index. There were 2779 women with a mean of 15 (SD 5) weights measured across gestation. Four distinct gestational weight gain trajectories were identified based on the lowest Bayesian information criterion value, classifying 10.0%, 41.8%, 39.2%, and 9.0% of the population from lowest to highest weight gain trajectories, with an inflection at 14 weeks. The average rate in each trajectory group from lowest to highest for 0-<14 weeks was -0.20, 0.04, 0.21, and 0.52 kg/wk and for 14-39 weeks was 0.29, 0.48, 0.63, and 0.79 kg/wk, respectively; the second lowest gaining trajectory resembled the Institute of Medicine recommendations and was designated as the reference, with the other trajectories classified as low, moderate-high, or high. Accuracy of assignment was assessed and found to be high (median posterior probability 0.99, interquartile range 0.99-1.00). Compared with the referent trajectory, a low overall trajectory, but not other trajectories, was associated with a 1.55-fold (95% confidence interval, 1.06-2.25) and 1.58-fold (95% confidence interval, 0.88-2.82) increased risk of small-for-gestational-age <10th and <5th, respectively, while a moderate-high and high trajectory were associated with a 1.78-fold (95% confidence interval, 1.31-2.41) and 2.45-fold (95% confidence interval, 1.66-3.61) increased risk of large for gestational age, respectively.
In a separate analysis investigating whether early (<14 weeks) gestational weight gain tracked with later (≥14 weeks) gestational weight gain, only 49% (n = 127) of women in the low first-trimester trajectory group continued as low in the second/third trimester and had a 1.59-fold increased risk of small for gestational age; for the other 51% (n = 129) of women without a subsequently low second-/third-trimester gestational weight gain trajectory, there was no increased risk of small for gestational age (relative risk, 0.75; 95% confidence interval, 0.47-1.38). Prepregnancy body mass index did not modify the association between gestational weight gain trajectory and small for gestational age (P = 0.52) or large for gestational age (P = 0.69). Our findings are reassuring for women who experience weight loss or excessive weight gain in the first trimester; however, the risk of small or large for gestational age is significantly increased if women gain weight below or above the reference trajectory in the second/third trimester. Published by Elsevier Inc.
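
    The relative risks above are the output of robust ("modified") Poisson regression, which pairs a Poisson likelihood with a sandwich covariance so that exponentiated coefficients estimate relative risks for a binary outcome. A minimal sketch in Python with statsmodels, using invented group sizes and risks rather than the study's data:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(42)

    # Hypothetical cohort: indicator for membership in a "low" trajectory
    # (vs. the reference trajectory) and a binary small-for-gestational-age
    # outcome with an elevated risk in the low group.
    n = 2779
    low = rng.binomial(1, 0.10, n)
    sga = rng.binomial(1, np.where(low == 1, 0.15, 0.10))

    # Poisson likelihood + robust (HC0 sandwich) covariance: exponentiated
    # coefficients estimate relative risks for the binary outcome.
    X = sm.add_constant(low)
    fit = sm.GLM(sga, X, family=sm.families.Poisson()).fit(cov_type="HC0")
    rr = np.exp(fit.params[1])
    lo, hi = np.exp(fit.conf_int()[1])
    print(f"RR = {rr:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
    ```

    The same pattern extends to several trajectory indicators and covariates by adding columns to X.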

  13. Pulmonary disease in cystic fibrosis: assessment with chest CT at chest radiography dose levels.

    PubMed

    Ernst, Caroline W; Basten, Ines A; Ilsen, Bart; Buls, Nico; Van Gompel, Gert; De Wachter, Elke; Nieboer, Koenraad H; Verhelle, Filip; Malfroot, Anne; Coomans, Danny; De Maeseneer, Michel; de Mey, Johan

    2014-11-01

    To investigate a computed tomographic (CT) protocol with iterative reconstruction at conventional radiography dose levels for the assessment of structural lung abnormalities in patients with cystic fibrosis (CF). In this institutional review board-approved study, 38 patients with CF (age range, 6-58 years; 21 patients <18 years and 17 patients >18 years) underwent investigative CT (at minimal exposure settings combined with iterative reconstruction) as a replacement for yearly follow-up posteroanterior chest radiography. Verbal informed consent was obtained from all patients or their parents. CT images were randomized and rated independently by two radiologists with use of the Bhalla scoring system. In addition, mosaic perfusion was evaluated. As reference, the previous available conventional chest CT scan was used. Differences in Bhalla scores were assessed with the χ² test and intraclass correlation coefficients (ICCs). Radiation doses for CT and radiography were assessed for adults (>18 years) and children (<18 years) separately by using technical dose descriptors and estimated effective dose. Differences in dose were assessed with the Mann-Whitney U test. The median effective dose for the investigative protocol was 0.04 mSv (95% confidence interval [CI]: 0.034 mSv, 0.10 mSv) for children and 0.05 mSv (95% CI: 0.04 mSv, 0.08 mSv) for adults. These doses were much lower than those with conventional CT (median: 0.52 mSv [95% CI: 0.31 mSv, 3.90 mSv] for children and 1.12 mSv [95% CI: 0.57 mSv, 3.15 mSv] for adults) and of the same order of magnitude as those for conventional radiography (median: 0.012 mSv [95% CI: 0.006 mSv, 0.022 mSv] for children and 0.012 mSv [95% CI: 0.005 mSv, 0.031 mSv] for adults). All images were rated at least as diagnostically acceptable. Very good agreement was found in overall Bhalla score (ICC, 0.96) with regard to the severity of bronchiectasis (ICC, 0.87) and sacculations and abscesses (ICC, 0.84). Interobserver agreement was excellent (ICC, 0.86-1). For patients with CF, a dedicated chest CT protocol can replace the two yearly follow-up chest radiographic examinations without major dose penalty and with similar diagnostic quality compared with conventional CT.

  14. Nondepressive Psychosocial Factors and CKD Outcomes in Black Americans.

    PubMed

    Lunyera, Joseph; Davenport, Clemontina A; Bhavsar, Nrupen A; Sims, Mario; Scialla, Julia; Pendergast, Jane; Hall, Rasheeda; Tyson, Crystal C; Russell, Jennifer St Clair; Wang, Wei; Correa, Adolfo; Boulware, L Ebony; Diamantidis, Clarissa J

    2018-02-07

    Established risk factors for CKD do not fully account for risk of CKD in black Americans. We studied the association of nondepressive psychosocial factors with risk of CKD in the Jackson Heart Study. We used principal component analysis to identify underlying constructs from 12 psychosocial baseline variables (perceived daily, lifetime, and burden of lifetime discrimination; stress; anger in; anger out; hostility; pessimism; John Henryism; spirituality; perceived social status; and social support). Using multivariable models adjusted for demographics and comorbidity, we examined the association of psychosocial variables with baseline CKD prevalence, eGFR decline, and incident CKD during follow-up. Of 3390 (64%) Jackson Heart Study participants with the required data, 656 (19%) had prevalent CKD. Those with CKD (versus no CKD) had lower perceived daily (mean [SD] score = 7.6 [8.5] versus 9.7 [9.0]) and lifetime discrimination (2.5 [2.0] versus 3.1 [2.2]), lower perceived stress (4.2 [4.0] versus 5.2 [4.4]), higher hostility (12.1 [5.2] versus 11.5 [4.8]), higher John Henryism (30.0 [4.8] versus 29.7 [4.4]), and higher pessimism (2.3 [2.2] versus 2.0 [2.1]; all P < 0.05). Principal component analysis identified three factors from the 12 psychosocial variables: factor 1, life stressors (perceived discrimination, stress); factor 2, moods (anger, hostility); and factor 3, coping strategies (John Henryism, spirituality, social status, social support). After adjustments, factor 1 (life stressors) was negatively associated with prevalent CKD at baseline among women only: odds ratio, 0.76 (95% confidence interval, 0.65 to 0.89). After a median follow-up of 8 years, identified psychosocial factors were not significantly associated with eGFR decline (life stressors: β = 0.08; 95% confidence interval, -0.02 to 0.17; moods: β = 0.03; 95% confidence interval, -0.06 to 0.13; coping: β = -0.02; 95% confidence interval, -0.12 to 0.08) or incident CKD (life stressors: odds ratio, 1.07; 95% confidence interval, 0.88 to 1.29; moods: odds ratio, 1.02; 95% confidence interval, 0.84 to 1.24; coping: odds ratio, 0.91; 95% confidence interval, 0.75 to 1.11). Greater life stressors were associated with lower prevalence of CKD at baseline in the Jackson Heart Study. However, psychosocial factors were not associated with risk of CKD over a median follow-up of 8 years. This article contains a podcast at https://www.asn-online.org/media/podcast/CJASN/2018_01_03_CJASNPodcast_18_2_L.mp3. Copyright © 2018 by the American Society of Nephrology.

  15. Bayesian posterior distributions without Markov chains.

    PubMed

    Cole, Stephen R; Chu, Haitao; Greenland, Sander; Hamra, Ghassan; Richardson, David B

    2012-03-01

    Bayesian posterior parameter distributions are often simulated using Markov chain Monte Carlo (MCMC) methods. However, MCMC methods are not always necessary and do not help the uninitiated understand Bayesian inference. As a bridge to understanding Bayesian inference, the authors illustrate a transparent rejection sampling method. In example 1, they illustrate rejection sampling using 36 cases and 198 controls from a case-control study (1976-1983) assessing the relation between residential exposure to magnetic fields and the development of childhood cancer. Results from rejection sampling (odds ratio (OR) = 1.69, 95% posterior interval (PI): 0.57, 5.00) were similar to MCMC results (OR = 1.69, 95% PI: 0.58, 4.95) and approximations from data-augmentation priors (OR = 1.74, 95% PI: 0.60, 5.06). In example 2, the authors apply rejection sampling to a cohort study of 315 human immunodeficiency virus seroconverters (1984-1998) to assess the relation between viral load after infection and 5-year incidence of acquired immunodeficiency syndrome, adjusting for (continuous) age at seroconversion and race. In this more complex example, rejection sampling required a notably longer run time than MCMC sampling but remained feasible and again yielded similar results. The transparency of the proposed approach comes at a price of being less broadly applicable than MCMC.
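
    The rejection-sampling scheme described here needs only a prior, a likelihood, and a uniform accept/reject step: draw parameters from the prior and keep each draw with probability proportional to its likelihood. A sketch using the study's group sizes but hypothetical exposure counts (the abstract does not report them) and independent uniform priors:

    ```python
    import numpy as np
    from scipy.stats import binom

    rng = np.random.default_rng(1)

    # Hypothetical 2x2 counts: exposed cases a of n1 = 36,
    # exposed controls c of n0 = 198 (counts invented for illustration).
    a, n1 = 8, 36
    c, n0 = 30, 198

    draws = 200_000
    p1 = rng.uniform(0, 1, draws)   # prior: exposure probability in cases
    p0 = rng.uniform(0, 1, draws)   # prior: exposure probability in controls

    # Rejection step: accept (p1, p0) with probability L(data) / L_max.
    like = binom.pmf(a, n1, p1) * binom.pmf(c, n0, p0)
    lmax = binom.pmf(a, n1, a / n1) * binom.pmf(c, n0, c / n0)
    accepted = rng.uniform(0, 1, draws) < like / lmax

    # Accepted draws are samples from the joint posterior; transform to an OR.
    odds_ratio = (p1 / (1 - p1)) / (p0 / (1 - p0))
    post = odds_ratio[accepted]
    lo, med, hi = np.percentile(post, [2.5, 50, 97.5])
    print(f"posterior median OR = {med:.2f}, 95% PI = ({lo:.2f}, {hi:.2f})")
    ```

    As the abstract notes, this transparency comes at a cost: the acceptance rate collapses as the number of parameters grows, which is why MCMC remains the default for higher-dimensional models.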

  16. Scheduling viability tests for seeds in long-term storage based on a Bayesian Multi-Level Model

    USDA-ARS?s Scientific Manuscript database

    Genebank managers conduct viability tests on stored seeds so they can replace lots that have viability near a critical threshold, such as 50 or 85% germination. Currently, these tests are typically scheduled at uniform intervals; testing every 5 years is common. A manager needs to balance the cost...

  17. Confidence Intervals for the Probability of Superiority Effect Size Measure and the Area under a Receiver Operating Characteristic Curve

    ERIC Educational Resources Information Center

    Ruscio, John; Mullen, Tara

    2012-01-01

    It is good scientific practice to report an appropriate estimate of effect size and a confidence interval (CI) to indicate the precision with which a population effect was estimated. For comparisons of 2 independent groups, a probability-based effect size estimator (A) that is equal to the area under a receiver operating characteristic curve…
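
    The probability-of-superiority measure A counts pairwise wins, with half-credit for ties, between two independent groups and equals the area under the ROC curve. A sketch on simulated data, with a percentile bootstrap as one simple (not necessarily best-covering) interval:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def prob_superiority(x, y):
        """A = P(X > Y) + 0.5 * P(X = Y); equals the area under the ROC curve."""
        gt = (x[:, None] > y[None, :]).sum()
        eq = (x[:, None] == y[None, :]).sum()
        return (gt + 0.5 * eq) / (len(x) * len(y))

    # Hypothetical scores for two independent groups.
    x = rng.normal(0.5, 1.0, 40)
    y = rng.normal(0.0, 1.0, 40)

    a_hat = prob_superiority(x, y)
    boot = np.array([
        prob_superiority(rng.choice(x, len(x)), rng.choice(y, len(y)))
        for _ in range(2000)
    ])
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"A = {a_hat:.2f}, bootstrap 95% CI = ({lo:.2f}, {hi:.2f})")
    ```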

  18. The bacterial meningitis score to distinguish bacterial from aseptic meningitis in children from Sao Paulo, Brazil.

    PubMed

    Mekitarian Filho, Eduardo; Horita, Sérgio Massaru; Gilio, Alfredo Elias; Alves, Anna Cláudia Dominguez; Nigrovic, Lise E

    2013-09-01

    In a retrospective cohort of 494 children with meningitis in Sao Paulo, Brazil, the Bacterial Meningitis Score identified all the children with bacterial meningitis (sensitivity 100%, 95% confidence interval: 92-100% and negative predictive value 100%, 95% confidence interval: 98-100%). Addition of cerebrospinal fluid lactate to the score did not improve clinical prediction rule performance.
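
    With a sensitivity estimate of exactly 100%, a Wald interval collapses, but the exact (Clopper-Pearson) interval keeps an informative lower bound. A sketch with a hypothetical case count chosen only so that the lower limit matches the reported 92%:

    ```python
    from scipy.stats import beta

    # Hypothetical: the score flags all n = 44 bacterial-meningitis cases.
    x, n = 44, 44
    # Clopper-Pearson limits from the beta distribution; the boundary cases
    # (x = 0 or x = n) pin the corresponding limit at 0 or 1.
    lower = beta.ppf(0.025, x, n - x + 1) if x > 0 else 0.0
    upper = beta.ppf(0.975, x + 1, n - x) if x < n else 1.0
    print(f"sensitivity = {x / n:.0%}, exact 95% CI = ({lower:.1%}, {upper:.1%})")
    ```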

  19. Spacecraft utility and the development of confidence intervals for criticality of anomalies

    NASA Technical Reports Server (NTRS)

    Williams, R. E.

    1980-01-01

    The concept of spacecraft utility, a measure of its performance in orbit, is discussed and its formulation is described. Performance is defined in terms of the malfunctions that occur and the criticality to the mission of these malfunctions. Different approaches to establishing average or expected values of criticality are discussed and confidence intervals are developed for parameters used in the computation of utility.

  20. Prevalence Estimates of Complicated Syphilis.

    PubMed

    Dombrowski, Julia C; Pedersen, Rolf; Marra, Christina M; Kerani, Roxanne P; Golden, Matthew R

    2015-12-01

    We reviewed 68 cases of possible neurosyphilis among 573 syphilis cases in King County, WA, from 3rd January 2012 to 30th September 2013; 7.9% (95% confidence interval, 5.8%-10.5%) had vision or hearing changes, and 3.5% (95% confidence interval, 2.2%-5.4%) had both symptoms and objective confirmation of complicated syphilis with either abnormal cerebrospinal fluid or an abnormal ophthalmologic examination.
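
    Intervals such as 7.9% (5.8%-10.5%) can be re-derived from the counts; 7.9% of 573 corresponds to roughly 45 events, an inferred rather than reported numerator. A quick check with statsmodels:

    ```python
    from statsmodels.stats.proportion import proportion_confint

    # Hypothetical counts consistent with the reported 7.9%: 45 of 573.
    count, nobs = 45, 573
    for method in ("beta", "wilson"):   # "beta" is the exact Clopper-Pearson interval
        lo, hi = proportion_confint(count, nobs, alpha=0.05, method=method)
        print(f"{method:>7}: {count / nobs:.1%} (95% CI {lo:.1%}-{hi:.1%})")
    ```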

  1. Speech and language adverse effects after thalamotomy and deep brain stimulation in patients with movement disorders: A meta-analysis.

    PubMed

    Alomar, Soha; King, Nicolas K K; Tam, Joseph; Bari, Ausaf A; Hamani, Clement; Lozano, Andres M

    2017-01-01

    The thalamus has been a surgical target for the treatment of various movement disorders. Commonly used therapeutic modalities include ablative and nonablative procedures. A major clinical side effect of thalamic surgery is the appearance of speech problems. This review summarizes the data on the development of speech problems after thalamic surgery. A systematic review and meta-analysis was performed using nine databases, including Medline, Web of Science, and Cochrane Library. We also checked for articles by searching citing and cited articles. We retrieved studies between 1960 and September 2014. Of a total of 2,320 patients, 19.8% (confidence interval: 14.8-25.9) had speech difficulty after thalamotomy. Speech difficulty occurred in 15% (confidence interval: 9.8-22.2) of those treated unilaterally and 40.6% (confidence interval: 29.5-52.8) of those treated bilaterally. Speech impairment was noticed 2- to 3-fold more commonly after left-sided procedures (40.7% vs. 15.2%). Of the 572 patients who underwent DBS, 19.4% (confidence interval: 13.1-27.8) experienced speech difficulty. Subgroup analysis revealed that this complication occurs in 10.2% (confidence interval: 7.4-13.9) of patients treated unilaterally and 34.6% (confidence interval: 21.6-50.4) treated bilaterally. After thalamotomy, the risk was higher in Parkinson's patients compared to patients with essential tremor: 19.8% versus 4.5% in the unilateral group and 42.5% versus 13.9% in the bilateral group. After DBS, this rate was higher in essential tremor patients. Both lesioning and stimulation thalamic surgery produce adverse effects on speech. Left-sided and bilateral procedures are approximately 3-fold more likely to cause speech difficulty. This effect was higher after thalamotomy compared to DBS. In the thalamotomy group, the risk was higher in Parkinson's patients, whereas in the DBS group it was higher in patients with essential tremor. Understanding the pathophysiology of speech disturbance after thalamic procedures is a priority. © 2017 International Parkinson and Movement Disorder Society.

  2. Outcomes after helicopter versus ground emergency medical services for major trauma--propensity score and instrumental variable analyses: a retrospective nationwide cohort study.

    PubMed

    Tsuchiya, Asuka; Tsutsumi, Yusuke; Yasunaga, Hideo

    2016-11-29

    Because of a lack of randomized controlled trials and the methodological weakness of currently available observational studies, the benefits of helicopter emergency medical services (HEMS) over ground emergency medical services (GEMS) for major trauma patients remain uncertain. The aim of this retrospective nationwide cohort study was to compare the mortality of adults with serious traumatic injuries who were transported by HEMS and GEMS, and to analyze the effects of HEMS in various subpopulations. Using the Japan Trauma Data Bank, we evaluated all adult patients who had an injury severity score ≥ 16 transported by HEMS or GEMS during the daytime between 2004 and 2014. We compared in-hospital mortality between patients transported by HEMS and GEMS using propensity score matching, inverse probability of treatment weighting and instrumental variable analyses to adjust for measured and unmeasured confounding factors. Eligible patients (n = 21,286) from 192 hospitals included 4128 transported by HEMS and 17,158 transported by GEMS. In the propensity score-matched model, there was a significant difference in the in-hospital mortality between HEMS and GEMS groups (22.2 vs. 24.5%, risk difference -2.3% [95% confidence interval, -4.2 to -0.5]; number needed to treat, 43 [95% confidence interval, 24 to 220]). The inverse probability of treatment weighting (20.8% vs. 23.9%; risk difference, -3.9% [95% confidence interval, -5.7 to -2.1]; number needed to treat, 26 [95% confidence interval, 17 to 48]) and instrumental variable analyses showed similar results (risk difference, -6.5% [95% confidence interval, -9.2 to -3.8]; number needed to treat, 15 [95% confidence interval, 11 to 27]). HEMS transport was significantly associated with lower in-hospital mortality after falls, compression injuries, severe chest injuries, extremity (including pelvic) injuries, and traumatic arrest on arrival to the emergency department. HEMS was associated with a significantly lower mortality than GEMS in adult patients with major traumatic injuries after adjusting for measured and unmeasured confounders.
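
    Of the three adjustment strategies used in this study, inverse probability of treatment weighting is the most compact to illustrate: each patient is weighted by the inverse of the estimated probability of the transport mode actually received, balancing measured confounders between groups. A self-contained sketch on simulated data (all variable names and effect sizes are invented):

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)

    # Hypothetical data: severity confounds both transport mode and death.
    n = 5000
    severity = rng.normal(0, 1, n)
    hems = rng.binomial(1, 1 / (1 + np.exp(-(severity - 1.0))))  # sicker -> HEMS
    death = rng.binomial(1, 1 / (1 + np.exp(-(-1.5 + 0.8 * severity - 0.3 * hems))))

    # 1) Propensity score: P(HEMS | severity) from logistic regression.
    ps = sm.Logit(hems, sm.add_constant(severity)).fit(disp=0).predict()

    # 2) Inverse-probability-of-treatment weights.
    w = np.where(hems == 1, 1 / ps, 1 / (1 - ps))

    # 3) Weighted risk difference (HEMS minus GEMS).
    p1 = np.average(death[hems == 1], weights=w[hems == 1])
    p0 = np.average(death[hems == 0], weights=w[hems == 0])
    print(f"IPTW risk difference = {100 * (p1 - p0):.1f} percentage points")
    ```

    A bootstrap over the whole pipeline (refitting the propensity model in each resample) is one common way to obtain the confidence intervals reported in studies like this one.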

  3. Determinants of waterpipe use amongst adolescents in Northern Sweden: a survey of use pattern, risk perception, and environmental factors.

    PubMed

    Ramji, Rathi; Arnetz, Judy; Nilsson, Maria; Jamil, Hikmet; Norström, Fredrik; Maziak, Wasim; Wiklund, Ywonne; Arnetz, Bengt

    2015-09-15

    Determinants of waterpipe use in adolescents are believed to differ from those for other tobacco products, but there is a lack of studies of possible social, cultural, or psychological aspects of waterpipe use in this population. This study applied a socioecological model to explore waterpipe use and its relationship to other tobacco use in Swedish adolescents. A total of 106 adolescents who attended an urban high school in northern Sweden responded to an anonymous questionnaire. Prevalence rates for waterpipe use were examined in relation to socio-demographics, peer pressure, sensation-seeking behavior, harm perception, environmental factors, and depression. Thirty-three percent reported ever having smoked waterpipe (ever use), with 30% having done so during the last 30 days (current use). Among waterpipe ever users, 60% had ever smoked cigarettes in comparison to 32% of non-waterpipe smokers (95% confidence interval 1.4-7.9). The odds of having ever smoked waterpipe were three times higher among male high school seniors as well as students with lower grades. Waterpipe ever users had three times higher odds of higher levels of sensation-seeking (95% confidence interval 1.2-9.5) and of scoring high on the depression scales (95% confidence interval 1.6-6.8) than non-users. The odds of waterpipe ever use were four times higher for those who perceived waterpipe products to have a pleasant smell compared to cigarettes (95% confidence interval 1.7-9.8). Waterpipe ever users were twice as likely to have seen waterpipe use on television compared to non-users (95% confidence interval 1.1-5.7). The odds of having friends who smoked regularly were eight times higher for waterpipe ever users than non-users (95% confidence interval 2.1-31.2). The current study reports a high use of waterpipe in a select group of students in northern Sweden. The study highlights the importance of examining socioecological determinants of use, including peer pressure and exposure to media marketing, as well as mental health among users.

  4. To be involved or not to be involved: a survey of public preferences for self-involvement in decision-making involving mental capacity (competency) within Europe.

    PubMed

    Daveson, Barbara A; Bausewein, Claudia; Murtagh, Fliss E M; Calanzani, Natalia; Higginson, Irene J; Harding, Richard; Cohen, Joachim; Simon, Steffen T; Deliens, Luc; Bechinger-English, Dorothee; Hall, Sue; Koffman, Jonathan; Ferreira, Pedro Lopes; Toscani, Franco; Gysels, Marjolein; Ceulemans, Lucas; Haugen, Dagny F; Gomes, Barbara

    2013-05-01

    The Council of Europe has recommended that member states of the European Union encourage their citizens to make decisions about their healthcare before they lose the capacity to do so. However, it is unclear whether the public wants to make such decisions beforehand. To examine public preferences for self-involvement in end-of-life care decision-making and identify associated factors. A population-based survey with 9344 adults in England, Belgium, Germany, Italy, the Netherlands, Portugal and Spain. Across countries, 74% preferred self-involvement when capable; 44% preferred self-involvement when incapable (for example, through a living will). Four factors were associated with a preference for self-involvement across capacity and incapacity scenarios, respectively: higher educational attainment ((odds ratio = 1.93-2.77), (odds ratio = 1.33-1.80)); female gender ((odds ratio = 1.27, 95% confidence interval = 1.14-1.41), (odds ratio = 1.30, 95% confidence interval = 1.20-1.42)); younger-middle age ((30-59 years: odds ratio = 1.24-1.40), (50-59 years: odds ratio = 1.23, 95% confidence interval = 1.04-1.46)) and valuing quality over quantity of life or valuing both equally ((odds ratio = 1.49-1.58), (odds ratio = 1.35-1.53)). Those with increased financial hardship (odds ratio = 0.64-0.83) and a preference to die in hospital (not a palliative care unit) (odds ratio = 0.73, 95% confidence interval = 0.60-0.88), a nursing home or residential care (odds ratio = 0.73, 95% confidence interval = 0.54-0.99) were less likely to prefer self-involvement when capable. For the incapacity scenario, single people were more likely to prefer self-involvement (odds ratio = 1.34, 95% confidence interval = 1.18-1.53). Self-involvement in decision-making is important to the European public. However, a large proportion of the public prefer not to make decisions about their care in advance of incapacity. Financial hardship, educational attainment, age, and preferences regarding quality and quantity of life require further examination; these factors should be considered in relation to policy.

  5. Practice Patterns and Outcomes for Pemetrexed Plus Platinum Doublet as Neoadjuvant Chemotherapy in Adenocarcinomas of Lung: Looking Beyond the Usual Paradigm.

    PubMed

    Noronha, V; Zanwar, S; Joshi, A; Patil, V M; Mahajan, A; Janu, A; Agarwal, J P; Bhargava, P; Kapoor, A; Prabhash, K

    2018-01-01

    Neoadjuvant chemotherapy (NACT) is the standard of care in non-small cell lung cancers (NSCLC) with locally advanced N2 disease. There is a scarcity of data for the pemetrexed-platinum regimen as NACT. Also, apart from N2 disease, the role of NACT in locally advanced NSCLCs for tumour downstaging is unclear. Non-metastatic adenocarcinomas of the lung treated with pemetrexed-platinum-based NACT were analysed. The patients with locoregionally advanced N2 disease and those who were borderline candidates for upfront definitive treatment were planned for NACT after discussion in a multidisciplinary clinic. In total, four cycles of 3-weekly pemetrexed and platinum were delivered in the combined neoadjuvant and adjuvant setting. A response assessment was carried out using RECIST criteria. Progression-free survival (PFS) and overall survival were calculated using the Kaplan-Meier method. Of 114 patients, 96 evaluable patients received NACT with pemetrexed-platinum. The most common indication for NACT was N2 disease at baseline (46.8%). The objective response rate was 36.4% (95% confidence interval 22-52%), including two complete and 32 partial responses, whereas 12.5% of patients had progressive disease on NACT. The median PFS was 14 months (95% confidence interval 10.7-17.3) and the median overall survival was 22 months (95% confidence interval 15.6-28.4) at a median follow-up of 16 months. There was a significant improvement in the overall survival of patients undergoing definitive therapy versus no definitive therapy (median overall survival 25 months [95% confidence interval 19.6-30.4] versus 12 months [95% confidence interval 3.2-20.7], respectively; P = 0.015, hazard ratio 0.56 [95% confidence interval 0.3-0.9]). Among patients who could not undergo definitive chemoradiation upfront due to dosimetric constraints (n = 34), 24 (70.6%) patients finally underwent definitive therapy after NACT. Pemetrexed-platinum-based NACT seems to be an effective option, and many borderline cases where upfront definitive therapy is not feasible may become amenable to definitive therapy after NACT. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
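
    The median PFS and overall survival above come from Kaplan-Meier curves. The product-limit estimator is a short computation: at each event time, multiply the running survival by one minus the ratio of events to patients still at risk. A sketch on simulated, partly censored follow-up times:

    ```python
    import numpy as np

    def kaplan_meier(time, event):
        """Product-limit estimate; event = 1 for progression/death, 0 = censored."""
        curve, s = [], 1.0
        for t in np.unique(time):
            d = np.sum((time == t) & (event == 1))  # events at time t
            n_at_risk = np.sum(time >= t)           # patients still at risk
            if d > 0:
                s *= 1.0 - d / n_at_risk
            curve.append((t, s))
        return curve

    # Simulated PFS data (months) for 96 patients, ~30% censored.
    rng = np.random.default_rng(5)
    t = rng.exponential(14, 96).round(1)
    e = rng.binomial(1, 0.7, 96)

    curve = kaplan_meier(t, e)
    median = next(t for t, s in curve if s <= 0.5)
    print(f"Kaplan-Meier median PFS ~ {median} months")
    ```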

  6. Provider use of a participatory decision-making style with youth and caregivers and satisfaction with pediatric asthma visits.

    PubMed

    Sleath, Betsy; Carpenter, Delesha M; Coyne, Imelda; Davis, Scott A; Hayes Watson, Claire; Loughlin, Ceila E; Garcia, Nacire; Reuland, Daniel S; Tudor, Gail E

    2018-01-01

    We conducted a randomized controlled trial to test the effectiveness of an asthma question prompt list with video intervention to engage the youth during clinic visits. We examined whether the intervention was associated with 1) providers including youth and caregiver inputs more into asthma treatment regimens, 2) youth and caregivers rating providers as using more of a participatory decision-making style, and 3) youth and caregivers being more satisfied with visits. English- or Spanish-speaking youth aged 11-17 years with persistent asthma and their caregivers were recruited from four pediatric clinics and randomized to the intervention or usual care groups. The youth in the intervention group watched the video with their caregivers on an iPad and completed a one-page asthma question prompt list before their clinic visits. All visits were audiotaped. Generalized estimating equations were used to analyze the data. Forty providers and their patients (n=359) participated in this study. Providers included youth input into the asthma management treatment regimens during 2.5% of visits and caregiver input during 3.3% of visits. The youth in the intervention group were significantly more likely to rate their providers as using more of a participatory decision-making style (odds ratio=1.7, 95% confidence interval=1.1, 2.5). White caregivers were significantly more likely to rate the providers as more participatory (odds ratio=2.3, 95% confidence interval=1.2, 4.4). Youth (beta=4.9, 95% confidence interval=3.3, 6.5) and caregivers (beta=7.5, 95% confidence interval=3.1, 12.0) who rated their providers as being more participatory were significantly more satisfied with their visits. Youth (beta=-1.9, 95% confidence interval=-3.4, -0.4) and caregivers (beta=-8.8, 95% confidence interval=-16.2, -1.3) who spoke Spanish at home were less satisfied with visits. The intervention did not increase the inclusion of youth and caregiver inputs into asthma treatment regimens. However, it did increase the youth's perception of participatory decision-making style of the providers, and this in turn was associated with greater satisfaction.
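
    Generalized estimating equations account for the clustering of visits within providers while targeting a population-averaged odds ratio. A sketch with simulated provider-level clustering (all numbers invented):

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(11)

    # Hypothetical visit-level data clustered within 40 providers.
    n_prov, per_prov = 40, 9
    provider = np.repeat(np.arange(n_prov), per_prov)
    intervention = rng.binomial(1, 0.5, n_prov)[provider]
    prov_effect = rng.normal(0, 0.5, n_prov)[provider]   # shared within cluster
    logit = -0.5 + 0.5 * intervention + prov_effect
    rated_participatory = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    # GEE with an exchangeable working correlation handles the clustering.
    X = sm.add_constant(intervention)
    gee = sm.GEE(rated_participatory, X, groups=provider,
                 family=sm.families.Binomial(),
                 cov_struct=sm.cov_struct.Exchangeable()).fit()
    lo, hi = np.exp(gee.conf_int()[1])
    print(f"OR = {np.exp(gee.params[1]):.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
    ```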

  7. Clinical impact and predictors of complete ST segment resolution after primary percutaneous coronary intervention: A subanalysis of the ATLANTIC Trial.

    PubMed

    Fabris, Enrico; van 't Hof, Arnoud; Hamm, Christian W; Lapostolle, Frédéric; Lassen, Jens F; Goodman, Shaun G; Ten Berg, Jurriën M; Bolognese, Leonardo; Cequier, Angel; Chettibi, Mohamed; Hammett, Christopher J; Huber, Kurt; Janzon, Magnus; Merkely, Béla; Storey, Robert F; Zeymer, Uwe; Cantor, Warren J; Tsatsaris, Anne; Kerneis, Mathieu; Diallo, Abdourahmane; Vicaut, Eric; Montalescot, Gilles

    2017-08-01

    In the ATLANTIC (Administration of Ticagrelor in the catheterization laboratory or in the Ambulance for New ST elevation myocardial Infarction to open the Coronary artery) trial the early use of aspirin, anticoagulation, and ticagrelor coupled with very short medical contact-to-balloon times represent good indicators of optimal treatment of ST-elevation myocardial infarction and an ideal setting to explore which factors may influence coronary reperfusion beyond a well-established pre-hospital system. This study sought to evaluate predictors of complete ST-segment resolution after percutaneous coronary intervention in ST-elevation myocardial infarction patients enrolled in the ATLANTIC trial. ST-segment analysis was performed on electrocardiograms recorded at the time of inclusion (pre-hospital electrocardiogram), and one hour after percutaneous coronary intervention (post-percutaneous coronary intervention electrocardiogram) by an independent core laboratory. Complete ST-segment resolution was defined as ≥70% ST-segment resolution. Complete ST-segment resolution occurred post-percutaneous coronary intervention in 54.9% (n = 800/1456) of patients and predicted lower 30-day composite major adverse cardiovascular and cerebrovascular events (odds ratio 0.35, 95% confidence interval 0.19-0.65; p<0.01), definite stent thrombosis (odds ratio 0.18, 95% confidence interval 0.02-0.88; p=0.03), and total mortality (odds ratio 0.43, 95% confidence interval 0.19-0.97; p=0.04). In multivariate analysis, independent negative predictors of complete ST-segment resolution were the time from symptoms to pre-hospital electrocardiogram (odds ratio 0.91, 95% confidence interval 0.85-0.98; p<0.01) and diabetes mellitus (odds ratio 0.6, 95% confidence interval 0.44-0.83; p<0.01); pre-hospital ticagrelor treatment showed a favorable trend for complete ST-segment resolution (odds ratio 1.22, 95% confidence interval 0.99-1.51; p=0.06). This study confirmed that post-percutaneous coronary intervention complete ST-segment resolution is a valid surrogate marker for cardiovascular clinical outcomes. In the current era of ST-elevation myocardial infarction reperfusion, patients' delay and diabetes mellitus are independent predictors of poor reperfusion and need specific attention in the future.

  8. Circulating tocopherols and risk of coronary artery disease: A systematic review and meta-analysis.

    PubMed

    Li, Guangxiao; Li, Ying; Chen, Xin; Sun, Hao; Hou, Xiaowen; Shi, Jingpu

    2016-05-01

    Circulating levels of tocopherols have been suggested to be associated with risk of coronary artery disease. However, the results from previous studies remain controversial. Therefore, we conducted a meta-analysis based on observational studies to evaluate the association between circulating tocopherols and coronary artery disease risk for the first time. Meta-analysis. PubMed, Embase and Cochrane databases were searched to retrieve articles published between January 1995 and May 2015. Articles were included if they provided sufficient information to calculate the weighted mean difference and its corresponding 95% confidence interval. The circulating level of total tocopherols was significantly lower in coronary artery disease patients than in controls (weighted mean difference -4.33 μmol/l, 95% confidence interval -6.74 to -1.91, P < 0.01). However, circulating α-tocopherol alone was not significantly associated with coronary artery disease risk. Results from subgroup analyses showed that a lower level of circulating total tocopherols was associated with higher coronary artery disease risk only in studies with a higher sex ratio in cases (<2, weighted mean difference -0.07 μmol/l, 95% confidence interval -1.15 to 1.00, P = 0.90; ≥2, weighted mean difference -6.00 μmol/l, 95% confidence interval -9.76 to -2.22, P < 0.01). Similarly, a lower level of circulating total tocopherols was associated with early-onset coronary artery disease rather than late-onset coronary artery disease (<60 years, weighted mean difference -5.40 μmol/l, 95% confidence interval -9.22 to -1.57, P < 0.01; ≥60 years, weighted mean difference -1.37 μmol/l, 95% confidence interval -3.48 to 0.74, P = 0.20). We also found some discrepancies in circulating total tocopherols when the studies were stratified by matching status and assay methods. Our findings suggest that a deficiency in circulating total tocopherols might be associated with higher coronary artery disease risk, whereas circulating α-tocopherol alone was not. Further prospective studies are warranted to confirm our findings. © The European Society of Cardiology 2015.
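
    Pooled weighted mean differences of this kind are typically produced by a random-effects meta-analysis. A compact DerSimonian-Laird implementation, run on made-up per-study mean differences and standard errors rather than the trials actually reviewed:

    ```python
    import numpy as np

    # Hypothetical per-study mean differences (umol/l) and standard errors.
    y = np.array([-6.0, -2.1, -5.4, -0.5, -4.8])
    se = np.array([1.9, 1.2, 2.4, 1.0, 1.6])

    # DerSimonian-Laird estimate of between-study variance tau^2.
    w = 1 / se**2
    ybar = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - ybar) ** 2)
    tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

    # Random-effects weights fold tau^2 into each study's variance.
    w_star = 1 / (se**2 + tau2)
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se_pooled = 1 / np.sqrt(np.sum(w_star))
    lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
    print(f"pooled WMD = {pooled:.2f} umol/l, 95% CI = ({lo:.2f}, {hi:.2f}), "
          f"tau^2 = {tau2:.2f}")
    ```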

  9. Association of Preoperative Urinary Uromodulin with AKI after Cardiac Surgery.

    PubMed

    Garimella, Pranav S; Jaber, Bertrand L; Tighiouart, Hocine; Liangos, Orfeas; Bennett, Michael R; Devarajan, Prasad; El-Achkar, Tarek M; Sarnak, Mark J

    2017-01-06

    AKI is a serious complication after cardiac surgery. Although high urinary concentrations of the tubular protein uromodulin, a marker of tubular health, are associated with less AKI in animal models, its relationship in humans is unknown. A post hoc analysis of a prospective cohort study of 218 adults undergoing on-pump cardiac surgery between 2004 and 2011 was conducted. Multivariable logistic and linear regression analyses were used to evaluate the associations of preoperative urinary uromodulin-to-creatinine ratio with postoperative AKI (defined as a rise in serum creatinine of >0.3 mg/dl or >1.5 times baseline); severe AKI (doubling of creatinine or need for dialysis) and peak postoperative serum creatinine over the first 72 hours. Mean age was 68 years, 27% were women, 95% were white, and the median uromodulin-to-creatinine ratio was 10.0 μg/g. AKI developed in 64 (29%) patients. Lower urinary uromodulin-to-creatinine ratio was associated with higher odds for AKI (odds ratio, 1.49 per 1-SD lower uromodulin; 95% confidence interval, 1.04 to 2.13), which was marginally attenuated after multivariable adjustment (odds ratio, 1.43; 95% confidence interval, 0.99 to 2.07). The lowest uromodulin-to-creatinine ratio quartile was also associated with higher odds for AKI relative to the highest quartile (odds ratio, 2.94; 95% confidence interval, 1.19 to 7.26), which was slightly attenuated after multivariable adjustment (odds ratio, 2.43; 95% confidence interval, 0.91 to 6.48). A uromodulin-to-creatinine ratio below the median was associated with higher adjusted odds for severe AKI, although this did not reach statistical significance (odds ratio, 4.03; 95% confidence interval, 0.87 to 18.70). Each 1-SD lower uromodulin-to-creatinine ratio was associated with a higher adjusted mean peak serum creatinine (0.07 mg/dl per SD; 95% confidence interval, 0.02 to 0.13). Lower uromodulin-to-creatinine ratio is associated with higher odds of AKI and higher peak serum creatinine after cardiac surgery. Additional studies are needed to confirm these preliminary results. Copyright © 2016 by the American Society of Nephrology.

  10. Association of Preoperative Urinary Uromodulin with AKI after Cardiac Surgery

    PubMed Central

    Garimella, Pranav S.; Jaber, Bertrand L.; Tighiouart, Hocine; Liangos, Orfeas; Bennett, Michael R.; Devarajan, Prasad; El-Achkar, Tarek M.

    2017-01-01

    Background and objectives AKI is a serious complication after cardiac surgery. Although high urinary concentrations of the tubular protein uromodulin, a marker of tubular health, are associated with less AKI in animal models, its relationship in humans is unknown. Design, setting, participants, & measurements A post hoc analysis of a prospective cohort study of 218 adults undergoing on-pump cardiac surgery between 2004 and 2011 was conducted. Multivariable logistic and linear regression analyses were used to evaluate the associations of preoperative urinary uromodulin-to-creatinine ratio with postoperative AKI (defined as a rise in serum creatinine of >0.3 mg/dl or >1.5 times baseline); severe AKI (doubling of creatinine or need for dialysis) and peak postoperative serum creatinine over the first 72 hours. Results Mean age was 68 years, 27% were women, 95% were white, and the median uromodulin-to-creatinine ratio was 10.0 μg/g. AKI developed in 64 (29%) patients. Lower urinary uromodulin-to-creatinine ratio was associated with higher odds for AKI (odds ratio, 1.49 per 1-SD lower uromodulin; 95% confidence interval, 1.04 to 2.13), which was marginally attenuated after multivariable adjustment (odds ratio, 1.43; 95% confidence interval, 0.99 to 2.07). The lowest uromodulin-to-creatinine ratio quartile was also associated with higher odds for AKI relative to the highest quartile (odds ratio, 2.94; 95% confidence interval, 1.19 to 7.26), which was slightly attenuated after multivariable adjustment (odds ratio, 2.43; 95% confidence interval, 0.91 to 6.48). A uromodulin-to-creatinine ratio below the median was associated with higher adjusted odds for severe AKI, although this did not reach statistical significance (odds ratio, 4.03; 95% confidence interval, 0.87 to 18.70). Each 1-SD lower uromodulin-to-creatinine ratio was associated with a higher adjusted mean peak serum creatinine (0.07 mg/dl per SD; 95% confidence interval, 0.02 to 0.13). Conclusions Lower uromodulin-to-creatinine ratio is associated with higher odds of AKI and higher peak serum creatinine after cardiac surgery. Additional studies are needed to confirm these preliminary results. PMID:27797887

  11. Association of CKD with Outcomes Among Patients Undergoing Transcatheter Aortic Valve Implantation

    PubMed Central

    Kaier, Klaus; Kaleschke, Gerrit; Gebauer, Katrin; Meyborg, Matthias; Malyar, Nasser M.; Freisinger, Eva; Baumgartner, Helmut; Reinecke, Holger; Reinöhl, Jochen

    2017-01-01

    Background and objectives Despite the multiple reported associations of CKD with reduced cardiovascular and overall prognoses, the association of CKD with outcomes of patients undergoing transcatheter aortic valve implantation has still not been well described. Design, setting, participants, & measurements Data from all hospitalized patients who underwent transcatheter aortic valve implantation procedures between January 1, 2010 and December 31, 2013 in Germany were evaluated regarding the influence of CKD, even in the earlier stages, on morbidity, in-hospital outcomes, and costs. Results A total of 28,716 patients were treated with transcatheter aortic valve implantation. A total of 11,189 (39.0%) suffered from CKD. Patients with CKD were predominantly women; had higher rates of comorbidities, such as coronary artery disease, heart failure at New York Heart Association 3/4, peripheral artery disease, and diabetes; and had a 1.3-fold higher estimated logistic European System for Cardiac Operative Risk Evaluation value. In-hospital mortality was independently associated with CKD stage ≥3 (up to odds ratio, 1.71; 95% confidence interval, 1.35 to 2.17; P<0.05), bleeding was independently associated with CKD stage ≥4 (up to odds ratio, 1.82; 95% confidence interval, 1.47 to 2.24; P<0.001), and AKI was independently associated with CKD stages 3 (odds ratio, 1.83; 95% confidence interval, 1.62 to 2.06) and 4 (odds ratio, 2.33; 95% confidence interval, 1.92 to 2.83; both P<0.001). The stroke risk, in contrast, was lower for patients with CKD stages 4 (odds ratio, 0.23; 95% confidence interval, 0.16 to 0.33) and 5 (odds ratio, 0.24; 95% confidence interval, 0.15 to 0.39; both P<0.001). Lengths of hospital stay were, on average, 1.2-fold longer, whereas reimbursements were, on average, only 1.03-fold higher in patients who suffered from CKD. Conclusions This analysis illustrates for the first time on a nationwide basis the association of CKD with adverse outcomes in patients who underwent transcatheter aortic valve implantation. Thus, classification of CKD stages before transcatheter aortic valve implantation is important for appropriate risk stratification. PMID:28289067

  12. Association of CKD with Outcomes Among Patients Undergoing Transcatheter Aortic Valve Implantation.

    PubMed

    Lüders, Florian; Kaier, Klaus; Kaleschke, Gerrit; Gebauer, Katrin; Meyborg, Matthias; Malyar, Nasser M; Freisinger, Eva; Baumgartner, Helmut; Reinecke, Holger; Reinöhl, Jochen

    2017-05-08

    Despite the multiple reported associations of CKD with reduced cardiovascular and overall prognoses, the association of CKD with outcomes of patients undergoing transcatheter aortic valve implantation has still not been well described. Data from all hospitalized patients who underwent transcatheter aortic valve implantation procedures between January 1, 2010 and December 31, 2013 in Germany were evaluated regarding the influence of CKD, even in the earlier stages, on morbidity, in-hospital outcomes, and costs. A total of 28,716 patients were treated with transcatheter aortic valve implantation. A total of 11,189 (39.0%) suffered from CKD. Patients with CKD were predominantly women; had higher rates of comorbidities, such as coronary artery disease, heart failure at New York Heart Association 3/4, peripheral artery disease, and diabetes; and had a 1.3-fold higher estimated logistic European System for Cardiac Operative Risk Evaluation value. In-hospital mortality was independently associated with CKD stage ≥3 (up to odds ratio, 1.71; 95% confidence interval, 1.35 to 2.17; P<0.05), bleeding was independently associated with CKD stage ≥4 (up to odds ratio, 1.82; 95% confidence interval, 1.47 to 2.24; P<0.001), and AKI was independently associated with CKD stages 3 (odds ratio, 1.83; 95% confidence interval, 1.62 to 2.06) and 4 (odds ratio, 2.33; 95% confidence interval, 1.92 to 2.83; both P<0.001). The stroke risk, in contrast, was lower for patients with CKD stages 4 (odds ratio, 0.23; 95% confidence interval, 0.16 to 0.33) and 5 (odds ratio, 0.24; 95% confidence interval, 0.15 to 0.39; both P<0.001). Lengths of hospital stay were, on average, 1.2-fold longer, whereas reimbursements were, on average, only 1.03-fold higher in patients who suffered from CKD. This analysis illustrates for the first time on a nationwide basis the association of CKD with adverse outcomes in patients who underwent transcatheter aortic valve implantation. Thus, classification of CKD stages before transcatheter aortic valve implantation is important for appropriate risk stratification. Copyright © 2017 by the American Society of Nephrology.

  13. Combined metformin-clomiphene in clomiphene-resistant polycystic ovary syndrome: a systematic review and meta-analysis of randomized controlled trials.

    PubMed

    Abu Hashim, Hatem; Foda, Osama; Ghayaty, Essam

    2015-09-01

    Our objective was to compare the effectiveness of metformin plus clomiphene citrate vs. gonadotrophins, laparoscopic ovarian diathermy, aromatase inhibitors, N-acetyl-cysteine, and other insulin sensitizers + clomiphene for improving fertility outcomes in women with clomiphene-resistant polycystic ovary syndrome. PubMed, SCOPUS and CENTRAL databases were searched until April 2014 with the key words: PCOS, polycystic ovary syndrome, metformin, clomiphene citrate, ovulation induction and pregnancy. The search was limited to articles conducted with humans and published in English. The PRISMA statement was followed. Twelve randomized controlled trials (n = 1411 women) were included. The main outcomes were ovulation and clinical pregnancy rates per woman randomized. Compared with gonadotrophins, the metformin + clomiphene combination resulted in significantly fewer ovulations (odds ratio 0.25; 95% confidence interval 0.15-0.41; p < 0.00001, 3 trials, I² = 85%, n = 323) and pregnancies (odds ratio 0.45; 95% confidence interval 0.27-0.75; p = 0.002, 3 trials, I² = 0%, n = 323). No significant differences were found when metformin + clomiphene was compared with laparoscopic ovarian diathermy (odds ratio 0.88; 95% confidence interval 0.53-1.47; p = 0.62, 1 trial, n = 282; odds ratio 0.96; 95% confidence interval 0.60-1.54; p = 0.88, 2 trials, I² = 0%, n = 332, for ovulation and pregnancy rates, respectively). Likewise, no differences were observed in comparison with aromatase inhibitors (odds ratio 0.88; 95% confidence interval 0.58-1.34; p = 0.55, 3 trials, I² = 3%, n = 409; odds ratio 0.85; 95% confidence interval 0.53-1.36; p = 0.50, 2 trials, n = 309, for ovulation and pregnancy rates, respectively). There is evidence for the superiority of gonadotrophins, but the metformin + clomiphene combination is mainly relevant for clomiphene-resistant polycystic ovary syndrome patients and, if not effective, a next step could be gonadotrophins. More attempts with metformin + clomiphene are only relevant if there is limited access to gonadotrophins. © 2015 Nordic Federation of Societies of Obstetrics and Gynecology.

  14. Optimization of Biomathematical Model Predictions for Cognitive Performance Impairment in Individuals: Accounting for Unknown Traits and Uncertain States in Homeostatic and Circadian Processes

    PubMed Central

    Van Dongen, Hans P. A.; Mott, Christopher G.; Huang, Jen-Kuang; Mollicone, Daniel J.; McKenzie, Frederic D.; Dinges, David F.

    2007-01-01

    Current biomathematical models of fatigue and performance do not accurately predict cognitive performance for individuals with a priori unknown degrees of trait vulnerability to sleep loss, do not predict performance reliably when initial conditions are uncertain, and do not yield statistically valid estimates of prediction accuracy. These limitations diminish their usefulness for predicting the performance of individuals in operational environments. To overcome these 3 limitations, a novel modeling approach was developed, based on the expansion of a statistical technique called Bayesian forecasting. The expanded Bayesian forecasting procedure was implemented in the two-process model of sleep regulation, which has been used to predict performance on the basis of the combination of a sleep homeostatic process and a circadian process. Employing the two-process model with the Bayesian forecasting procedure to predict performance for individual subjects in the face of unknown traits and uncertain states entailed subject-specific optimization of 3 trait parameters (homeostatic build-up rate, circadian amplitude, and basal performance level) and 2 initial state parameters (initial homeostatic state and circadian phase angle). Prior information about the distribution of the trait parameters in the population at large was extracted from psychomotor vigilance test (PVT) performance measurements in 10 subjects who had participated in a laboratory experiment with 88 h of total sleep deprivation. The PVT performance data of 3 additional subjects in this experiment were set aside beforehand for use in prospective computer simulations. The simulations involved updating the subject-specific model parameters every time the next performance measurement became available, and then predicting performance 24 h ahead. Comparison of the predictions to the subjects' actual data revealed that as more data became available for the individuals at hand, the performance predictions became increasingly more accurate and had progressively smaller 95% confidence intervals, as the model parameters converged efficiently to those that best characterized each individual. Even when more challenging simulations were run (mimicking a change in the initial homeostatic state; simulating the data to be sparse), the predictions were still considerably more accurate than would have been achieved by the two-process model alone. Although the work described here is still limited to periods of consolidated wakefulness with stable circadian rhythms, the results obtained thus far indicate that the Bayesian forecasting procedure can successfully overcome some of the major outstanding challenges for biomathematical prediction of cognitive performance in operational settings. Citation: Van Dongen HPA; Mott CG; Huang JK; Mollicone DJ; McKenzie FD; Dinges DF. Optimization of biomathematical model predictions for cognitive performance impairment in individuals: accounting for unknown traits and uncertain states in homeostatic and circadian processes. SLEEP 2007;30(9):1129-1143. PMID:17910385
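
    The heart of the Bayesian forecasting procedure is sequential updating: begin with a population prior over an individual's trait parameters and sharpen it with each new performance measurement, so that predictions and their intervals tighten as data accumulate. A deliberately simplified one-parameter linear stand-in for the two-process model (all constants hypothetical; baseline and noise level assumed known for brevity):

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # One subject's "true" trait (impairment build-up rate) and noisy
    # PVT-like scores over 88 h of simulated wakefulness.
    true_rate, noise_sd = 1.8, 3.0
    hours = np.arange(4, 88, 4.0)
    scores = 10 + true_rate * hours + rng.normal(0, noise_sd, len(hours))

    # Population prior over the trait parameter, discretized on a grid.
    grid = np.linspace(0.5, 3.5, 301)
    log_post = -0.5 * ((grid - 2.0) / 0.6) ** 2     # prior: N(2.0, 0.6^2)

    for h, s in zip(hours, scores):
        # Bayes update with each measurement as it becomes available.
        log_post += -0.5 * ((s - (10 + grid * h)) / noise_sd) ** 2
        post = np.exp(log_post - log_post.max())
        post /= post.sum()
        mean = np.sum(grid * post)
        sd = np.sqrt(np.sum((grid - mean) ** 2 * post))
        # Predict 24 h ahead; the +/- band shrinks as data accumulate.
        print(f"t={h:4.0f}h rate={mean:.2f}+/-{1.96 * sd:.2f} "
              f"pred(+24h)={10 + mean * (h + 24):6.1f}")
    ```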

  15. [Uncertainty characterization approaches for ecological risk assessment of polycyclic aromatic hydrocarbon in Taihu Lake].

    PubMed

    Guo, Guang-Hui; Wu, Feng-Chang; He, Hong-Ping; Feng, Cheng-Lian; Zhang, Rui-Qing; Li, Hui-Xian

    2012-04-01

    Probabilistic approaches, such as Monte Carlo sampling (MCS) and Latin hypercube sampling (LHS), and non-probabilistic approaches, such as interval analysis, fuzzy set theory, and variance propagation, were used to characterize uncertainties associated with the risk assessment of ΣPAH8 in surface water of Taihu Lake. The results from MCS and LHS were represented as probability distributions of hazard quotients of ΣPAH8 in surface waters of Taihu Lake: the confidence intervals of the hazard quotient at the 90% confidence level were 0.00018-0.89 and 0.00017-0.92, with means of 0.37 and 0.35, respectively. In addition, the probabilities that the hazard quotients from MCS and LHS exceed the threshold of 1 were 9.71% and 9.68%, respectively. The sensitivity analysis suggested the toxicity data contributed the most to the resulting distribution of quotients. The hazard quotient of ΣPAH8 to aquatic organisms ranged from 0.00017 to 0.99 using interval analysis. The confidence interval at the 90% confidence level was (0.0015, 0.0163) using fuzzy set theory and (0.00016, 0.88) based on variance propagation. These results indicated that the ecological risk of ΣPAH8 to aquatic organisms was low. Each method has its own advantages and limitations rooted in its underlying theory; therefore, the appropriate method should be selected case by case to quantify the effects of uncertainties on the ecological risk assessment. The approach based on probabilistic theory was selected as the most appropriate method to assess the risk of ΣPAH8 in surface water of Taihu Lake, providing a scientific foundation for the management and control of organic pollutants in water.
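
    Both probabilistic schemes propagate the input distributions through the hazard quotient HQ = exposure / toxicity threshold; Latin hypercube sampling differs from plain Monte Carlo only in stratifying the uniform draws before pushing them through each input's inverse CDF. A sketch with invented lognormal inputs, not the Taihu Lake data:

    ```python
    import numpy as np
    from scipy.stats import lognorm, qmc

    rng = np.random.default_rng(13)
    n = 10_000

    # Hypothetical lognormal inputs for HQ = exposure / toxicity threshold.
    exp_dist = lognorm(s=1.0, scale=0.05)   # exposure concentration
    tox_dist = lognorm(s=0.8, scale=1.0)    # predicted-no-effect concentration

    # Plain Monte Carlo sampling.
    hq_mcs = exp_dist.rvs(n, random_state=rng) / tox_dist.rvs(n, random_state=rng)

    # Latin hypercube sampling: stratified uniforms through the inverse CDFs.
    u = qmc.LatinHypercube(d=2, seed=13).random(n)
    hq_lhs = exp_dist.ppf(u[:, 0]) / tox_dist.ppf(u[:, 1])

    for name, hq in (("MCS", hq_mcs), ("LHS", hq_lhs)):
        lo, hi = np.percentile(hq, [5, 95])
        print(f"{name}: mean={hq.mean():.3f}, 90% interval=({lo:.5f}, {hi:.3f}), "
              f"P(HQ>1)={np.mean(hq > 1):.2%}")
    ```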

  16. Modeling epilepsy disparities among ethnic groups in Philadelphia, PA

    PubMed Central

    Wheeler, David C.; Waller, Lance A.; Elliott, John O.

    2014-01-01

    The Centers for Disease Control and Prevention defined epilepsy as an emerging public health issue in a recent report and emphasized the importance of epilepsy studies in minorities and people of low socioeconomic status. Previous research has suggested that the incidence rate for epilepsy is positively associated with various measures of social and economic disadvantage. In response, we utilize hierarchical Bayesian models to analyze health disparities in epilepsy and seizure risks among multiple ethnicities in the city of Philadelphia, Pennsylvania. The goals of the analysis are to highlight any overall significant disparities in epilepsy risks between the populations of Caucasians, African Americans, and Hispanics in the study area during the years 2002–2004 and to visualize the spatial pattern of epilepsy risks by ethnicity to indicate where certain ethnic populations were most adversely affected by epilepsy within the study area. Results of the Bayesian model indicate that Hispanics have the highest epilepsy risk overall, followed by African Americans, and then Caucasians. There are significant increases in relative risk for both African Americans and Hispanics when compared with Caucasians, as indicated by the posterior mean estimates of 2.09 with a 95 per cent credible interval of (1.67, 2.62) for African Americans and 2.97 with a 95 per cent credible interval of (2.37, 3.71) for Hispanics. Results also demonstrate that using a Bayesian analysis in combination with geographic information system (GIS) technology can reveal spatial patterns in patient data and highlight areas of disparity in epilepsy risk among subgroups of the population. PMID:18381676
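
    The credible intervals above come from a hierarchical spatial model, but the mechanics of a Bayesian relative-risk estimate can be seen in the far simpler non-spatial gamma-Poisson conjugate pair: observed cases y against expected cases e, with a gamma prior on the relative risk. Counts and prior here are hypothetical:

    ```python
    from scipy.stats import gamma

    # Observed cases y and expected cases e from a reference population,
    # with a weak Gamma(shape 0.5, rate 0.5) prior on the relative risk.
    y, e = 120, 60                                  # hypothetical counts
    post = gamma(0.5 + y, scale=1.0 / (0.5 + e))    # RR | y ~ Gamma(a + y, rate b + e)
    lo, hi = post.ppf([0.025, 0.975])
    print(f"posterior RR = {post.mean():.2f}, "
          f"95% credible interval = ({lo:.2f}, {hi:.2f})")
    ```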

  17. Analysis and comparison of safety models using average daily, average hourly, and microscopic traffic.

    PubMed

    Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie

    2018-02-01

    There have been many traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, little research has compared the performance of these three types of safety studies, and few previous studies have examined whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for the real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in the three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at the daily and hourly levels, and the real-time model was also used at 5 min intervals. The results showed that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequency estimates. Copyright © 2017 Elsevier Ltd. All rights reserved.
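
    A Bayesian Poisson-lognormal model places a normal random effect on the log mean of each count, absorbing the extra-Poisson variation typical of crash data. A minimal intercept-only sampler written from scratch on simulated counts (real applications would add covariates and use established MCMC software):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulate overdispersed daily crash counts:
    # y_i ~ Poisson(exp(beta0 + eps_i)), eps_i ~ N(0, sigma^2).
    n, beta0_true, sigma_true = 200, 1.0, 0.5
    y = rng.poisson(np.exp(beta0_true + rng.normal(0, sigma_true, n)))

    beta0, sigma, eps = 0.0, 1.0, np.zeros(n)
    keep = []
    for it in range(4000):
        # 1) Random-walk Metropolis for each random effect (independent
        #    given beta0 and sigma, so vectorized accept/reject is valid).
        prop = eps + rng.normal(0, 0.3, n)
        logr = (y * prop - np.exp(beta0 + prop) - 0.5 * (prop / sigma) ** 2) - \
               (y * eps - np.exp(beta0 + eps) - 0.5 * (eps / sigma) ** 2)
        eps = np.where(np.log(rng.uniform(size=n)) < logr, prop, eps)
        # 2) Random-walk Metropolis for the intercept (flat prior).
        b = beta0 + rng.normal(0, 0.1)
        if np.log(rng.uniform()) < np.sum(y * b - np.exp(b + eps)) - \
                                   np.sum(y * beta0 - np.exp(beta0 + eps)):
            beta0 = b
        # 3) Conjugate inverse-gamma update for sigma^2 (weak IG(2, 1) prior).
        sigma = np.sqrt(1.0 / rng.gamma(2.0 + n / 2,
                                        1.0 / (1.0 + 0.5 * np.sum(eps ** 2))))
        if it >= 2000:               # discard burn-in
            keep.append(beta0)

    lo, hi = np.percentile(keep, [2.5, 97.5])
    print(f"beta0: posterior mean {np.mean(keep):.2f}, 95% CrI ({lo:.2f}, {hi:.2f})")
    ```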

  18. Associations of High-Grade Glioma With Glioma Risk Alleles and Histories of Allergy and Smoking

    PubMed Central

    Lachance, Daniel H.; Yang, Ping; Johnson, Derek R.; Decker, Paul A.; Kollmeyer, Thomas M.; McCoy, Lucie S.; Rice, Terri; Xiao, Yuanyuan; Ali-Osman, Francis; Wang, Frances; Stoddard, Shawn M.; Sprau, Debra J.; Kosel, Matthew L.; Wiencke, John K.; Wiemels, Joseph L.; Patoka, Joseph S.; Davis, Faith; McCarthy, Bridget; Rynearson, Amanda L.; Worra, Joel B.; Fridley, Brooke L.; O’Neill, Brian Patrick; Buckner, Jan C.; Il’yasova, Dora; Jenkins, Robert B.; Wrensch, Margaret R.

    2011-01-01

    Glioma risk has consistently been inversely associated with allergy history but not with smoking history despite putative biologic plausibility. Data from 855 high-grade glioma cases and 1,160 controls from 4 geographic regions of the United States during 1997–2008 were analyzed for interactions between allergy and smoking histories and inherited variants in 5 established glioma risk regions: 5p15.3 (TERT), 8q24.21 (CCDC26/MLZE), 9p21.3 (CDKN2B), 11q23.3 (PHLDB1/DDX6), and 20q13.3 (RTEL1). The inverse relation between allergy and glioma was stronger among those who did not (allergy-glioma odds ratio = 0.40, 95% confidence interval: 0.28, 0.58) versus those who did (allergy-glioma odds ratio = 0.76, 95% confidence interval: 0.59, 0.97; P for interaction = 0.02) carry the 9p21.3 risk allele. However, the inverse association with allergy was stronger among those who carried (allergy-glioma odds ratio = 0.44, 95% confidence interval: 0.29, 0.68) versus those who did not carry (allergy-glioma odds ratio = 0.68, 95% confidence interval: 0.54, 0.86) the 20q13.3 glioma risk allele, but this interaction was not statistically significant (P = 0.14). No relation was observed between glioma risk and smoking (odds ratio = 0.92, 95% confidence interval: 0.77, 1.10; P = 0.37), and there were no interactions for glioma risk of smoking history with any of the risk alleles. The authors' observations are consistent with a recent report that the inherited glioma risk variants in chromosome regions 9p21.3 and 20q13.3 may modify the inverse association of allergy and glioma. PMID:21742680

  19. Gastric dilation-volvulus in dogs attending UK emergency-care veterinary practices: prevalence, risk factors and survival.

    PubMed

    O'Neill, D G; Case, J; Boag, A K; Church, D B; McGreevy, P D; Thomson, P C; Brodbelt, D C

    2017-11-01

    To report prevalence, risk factors and clinical outcomes for presumptive gastric dilation-volvulus diagnosed among an emergency-care population of UK dogs. The study used a cross-sectional design using emergency-care veterinary clinical records from the VetCompass Programme spanning September 1, 2012 to February 28, 2014 and risk factor analysis using multivariable logistic regression modelling. The study population comprised 77,088 dogs attending 50 Vets Now clinics. Overall, 492 dogs had presumptive gastric dilation-volvulus diagnoses, giving a prevalence of 0·64% (95% confidence interval: 0·58 to 0·70%). Compared with cross-bred dogs, breeds with the highest odds ratios for the diagnosis of presumptive gastric dilation-volvulus were the Great Dane (odds ratio: 114·3, 95% confidence interval 55·1 to 237·1, P<0·001), Akita (odds ratio: 84·4, 95% confidence interval 33·6 to 211·9, P<0·001) and Dogue de Bordeaux (odds ratio: 82·9, 95% confidence interval 39·0 to 176·3, P<0·001). Odds increased as dogs aged up to 12 years, and neutered male dogs had 1·3 (95% confidence interval 1·0 to 1·8, P=0·041) times the odds compared with entire females. Of the cases that were presented alive, 49·7% survived to discharge overall, but 79·3% of surgical cases survived to discharge. Approximately 80% of surgically managed cases survived to discharge. Certain large breeds were highly predisposed. © 2017 British Small Animal Veterinary Association.

  20. Diagnostic accuracy of intracellular mycobacterium tuberculosis detection for tuberculous meningitis.

    PubMed

    Feng, Guo-dong; Shi, Ming; Ma, Lei; Chen, Ping; Wang, Bing-ju; Zhang, Min; Chang, Xiao-lin; Su, Xiu-chu; Yang, Yi-ning; Fan, Xin-hong; Dai, Wen; Liu, Ting-ting; He, Ying; Bian, Ting; Duan, Li-xin; Li, Jin-ge; Hao, Xiao-ke; Liu, Jia-yun; Xue, Xin; Song, Yun-zhang; Wu, Hai-qin; Niu, Guo-qiang; Zhang, Li; Han, Cui-juan; Lin, Hong; Lin, Zhi-hui; Liu, Jian-jun; Jian, Qian; Zhang, Jin-she; Tian, Ye; Zhou, Bai-yu; Wang, Jing; Xue, Chang-hu; Han, Xiao-fang; Wang, Jian-feng; Wang, Shou-lian; Thwaites, Guy E; Zhao, Gang

    2014-02-15

    Early diagnosis and treatment of tuberculous meningitis saves lives, but current laboratory diagnostic tests lack sensitivity. We investigated whether the detection of intracellular bacteria by a modified Ziehl-Neelsen stain and early secretory antigen target (ESAT)-6 in cerebrospinal fluid leukocytes improves tuberculous meningitis diagnosis. Cerebrospinal fluid specimens from patients with suspected tuberculous meningitis were stained by conventional Ziehl-Neelsen stain, a modified Ziehl-Neelsen stain involving cytospin slides with Triton processing, and an ESAT-6 immunocytochemical stain. Acid-fast bacteria and ESAT-6-expressing leukocytes were detected by microscopy. All tests were performed prospectively in a central laboratory by experienced technicians masked to the patients' final diagnosis. Two hundred and eighty patients with suspected tuberculous meningitis were enrolled. Thirty-seven had Mycobacterium tuberculosis cultured from cerebrospinal fluid; 40 had a microbiologically confirmed alternative diagnosis; the rest had probable or possible tuberculous meningitis according to published criteria. Against a clinical diagnostic gold standard the sensitivity of conventional Ziehl-Neelsen stain was 3.3% (95% confidence interval, 1.6-6.7%), compared with 82.9% (95% confidence interval, 77.4-87.3%) for modified Ziehl-Neelsen stain and 75.1% (95% confidence interval, 68.8-80.6%) for ESAT-6 immunostain. Intracellular bacteria were seen in 87.8% of the slides positive by the modified Ziehl-Neelsen stain. The specificity of modified Ziehl-Neelsen and ESAT-6 stain was 85.0% (95% confidence interval, 69.4-93.8%) and 90.0% (95% confidence interval, 75.4-96.7%), respectively. Enhanced bacterial detection by simple modification of the Ziehl-Neelsen stain and an ESAT-6 intracellular stain improve the laboratory diagnosis of tuberculous meningitis.
