Sample records for trial Bayesian analysis

  1. Bayesian survival analysis in clinical trials: What methods are used in practice?

    PubMed

    Brard, Caroline; Le Teuff, Gwénaël; Le Deley, Marie-Cécile; Hampson, Lisa V

    2017-02-01

    Background Bayesian statistics are an appealing alternative to the traditional frequentist approach to designing, analysing, and reporting of clinical trials, especially in rare diseases. Time-to-event endpoints are widely used in many medical fields. There are additional complexities to designing Bayesian survival trials which arise from the need to specify a model for the survival distribution. The objective of this article was to critically review the use and reporting of Bayesian methods in survival trials. Methods A systematic review of clinical trials using Bayesian survival analyses was performed through PubMed and Web of Science databases. This was complemented by a full text search of the online repositories of pre-selected journals. Cost-effectiveness, dose-finding studies, meta-analyses, and methodological papers using clinical trials were excluded. Results In total, 28 articles met the inclusion criteria, 25 were original reports of clinical trials and 3 were re-analyses of a clinical trial. Most trials were in oncology (n = 25), were randomised controlled (n = 21) phase III trials (n = 13), and half considered a rare disease (n = 13). Bayesian approaches were used for monitoring in 14 trials and for the final analysis only in 14 trials. In the latter case, Bayesian survival analyses were used for the primary analysis in four cases, for the secondary analysis in seven cases, and for the trial re-analysis in three cases. Overall, 12 articles reported fitting Bayesian regression models (semi-parametric, n = 3; parametric, n = 9). Prior distributions were often incompletely reported: 20 articles did not define the prior distribution used for the parameter of interest. Over half of the trials used only non-informative priors for monitoring and the final analysis (n = 12) when it was specified. Indeed, no articles fitting Bayesian regression models placed informative priors on the parameter of interest. The prior for the treatment effect was based on historical data in only four trials. Decision rules were pre-defined in eight cases when trials used Bayesian monitoring, and in only one case when trials adopted a Bayesian approach to the final analysis. Conclusion Few trials implemented a Bayesian survival analysis and few incorporated external data into priors. There is scope to improve the quality of reporting of Bayesian methods in survival trials. Extension of the Consolidated Standards of Reporting Trials statement for reporting Bayesian clinical trials is recommended.

  2. A comparison of two worlds: How does Bayes hold up to the status quo for the analysis of clinical trials?

    PubMed

    Pressman, Alice R; Avins, Andrew L; Hubbard, Alan; Satariano, William A

    2011-07-01

    There is a paucity of literature comparing Bayesian analytic techniques with traditional approaches for analyzing clinical trials using real trial data. We compared Bayesian and frequentist group sequential methods using data from two published clinical trials. We chose two widely accepted frequentist rules, O'Brien-Fleming and Lan-DeMets, and conjugate Bayesian priors. Using the nonparametric bootstrap, we estimated a sampling distribution of stopping times for each method. Because current practice dictates the preservation of an experiment-wise false positive rate (Type I error), we approximated these error rates for our Bayesian and frequentist analyses with the posterior probability of detecting an effect in a simulated null sample. Thus for the data-generated distribution represented by these trials, we were able to compare the relative performance of these techniques. No final outcomes differed from those of the original trials. However, the timing of trial termination differed substantially by method and varied by trial. For one trial, group sequential designs of either type dictated early stopping of the study. In the other, stopping times were dependent upon the choice of spending function and prior distribution. Results indicate that trialists ought to consider Bayesian methods in addition to traditional approaches for analysis of clinical trials. Though findings from this small sample did not demonstrate either method to consistently outperform the other, they did suggest the need to replicate these comparisons using data from varied clinical trials in order to determine the conditions under which the different methods would be most efficient. Copyright © 2011 Elsevier Inc. All rights reserved.
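
    As a minimal illustration of the Bayesian side of the comparison described in this record, the Python sketch below (illustrative counts, not the trials' data) updates conjugate Beta priors at an interim look and computes the posterior probability of benefit, alongside the interim z-statistic that a frequentist group sequential rule such as O'Brien-Fleming would compare against its boundary:

      import numpy as np
      from scipy import stats

      # Illustrative interim data for a two-arm trial with a binary endpoint
      # (hypothetical numbers, not from the trials analysed in the article).
      events_trt, n_trt = 30, 100
      events_ctl, n_ctl = 45, 100

      # Conjugate Beta(1, 1) priors for each arm's event probability.
      post_trt = stats.beta(1 + events_trt, 1 + n_trt - events_trt)
      post_ctl = stats.beta(1 + events_ctl, 1 + n_ctl - events_ctl)

      # Monte Carlo estimate of the posterior probability that treatment
      # reduces the event rate; a Bayesian monitoring rule might stop when
      # this exceeds, say, 0.99.
      rng = np.random.default_rng(1)
      draws_trt = post_trt.rvs(100_000, random_state=rng)
      draws_ctl = post_ctl.rvs(100_000, random_state=rng)
      prob_benefit = np.mean(draws_trt < draws_ctl)

      # Frequentist comparison at the same look: two-proportion z test, to be
      # judged against a group sequential boundary (e.g. O'Brien-Fleming).
      p1, p2 = events_trt / n_trt, events_ctl / n_ctl
      pooled = (events_trt + events_ctl) / (n_trt + n_ctl)
      z = (p2 - p1) / np.sqrt(pooled * (1 - pooled) * (1 / n_trt + 1 / n_ctl))

      print(f"P(treatment better | data) = {prob_benefit:.3f}, interim z = {z:.2f}")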

  3. A comparison of two worlds: How does Bayes hold up to the status quo for the analysis of clinical trials?

    PubMed Central

    Pressman, Alice R.; Avins, Andrew L.; Hubbard, Alan; Satariano, William A.

    2014-01-01

    Background There is a paucity of literature comparing Bayesian analytic techniques with traditional approaches for analyzing clinical trials using real trial data. Methods We compared Bayesian and frequentist group sequential methods using data from two published clinical trials. We chose two widely accepted frequentist rules, O'Brien–Fleming and Lan–DeMets, and conjugate Bayesian priors. Using the nonparametric bootstrap, we estimated a sampling distribution of stopping times for each method. Because current practice dictates the preservation of an experiment-wise false positive rate (Type I error), we approximated these error rates for our Bayesian and frequentist analyses with the posterior probability of detecting an effect in a simulated null sample. Thus for the data-generated distribution represented by these trials, we were able to compare the relative performance of these techniques. Results No final outcomes differed from those of the original trials. However, the timing of trial termination differed substantially by method and varied by trial. For one trial, group sequential designs of either type dictated early stopping of the study. In the other, stopping times were dependent upon the choice of spending function and prior distribution. Conclusions Results indicate that trialists ought to consider Bayesian methods in addition to traditional approaches for analysis of clinical trials. Though findings from this small sample did not demonstrate either method to consistently outperform the other, they did suggest the need to replicate these comparisons using data from varied clinical trials in order to determine the conditions under which the different methods would be most efficient. PMID:21453792

  4. Prior elicitation and Bayesian analysis of the Steroids for Corneal Ulcers Trial.

    PubMed

    See, Craig W; Srinivasan, Muthiah; Saravanan, Somu; Oldenburg, Catherine E; Esterberg, Elizabeth J; Ray, Kathryn J; Glaser, Tanya S; Tu, Elmer Y; Zegans, Michael E; McLeod, Stephen D; Acharya, Nisha R; Lietman, Thomas M

    2012-12-01

    To elicit expert opinion on the use of adjunctive corticosteroid therapy in bacterial corneal ulcers. To perform a Bayesian analysis of the Steroids for Corneal Ulcers Trial (SCUT), using expert opinion as a prior probability. The SCUT was a placebo-controlled trial assessing visual outcomes in patients receiving topical corticosteroids or placebo as adjunctive therapy for bacterial keratitis. Questionnaires were conducted at scientific meetings in India and North America to gauge expert consensus on the perceived benefit of corticosteroids as adjunct treatment. Bayesian analysis, using the questionnaire data as a prior probability and the primary outcome of SCUT as a likelihood, was performed. For comparison, an additional Bayesian analysis was performed using the results of the SCUT pilot study as a prior distribution. Indian respondents believed there to be a 1.21 Snellen line improvement, and North American respondents believed there to be a 1.24 line improvement with corticosteroid therapy. The SCUT primary outcome found a non-significant 0.09 Snellen line benefit with corticosteroid treatment. The results of the Bayesian analysis estimated a slightly greater benefit than did the SCUT primary analysis (0.19 lines versus 0.09 lines). Indian and North American experts had similar expectations on the effectiveness of corticosteroids in bacterial corneal ulcers: that corticosteroids would markedly improve visual outcomes. Bayesian analysis produced results very similar to those produced by the SCUT primary analysis. The similarity in result is likely due to the large sample size of SCUT and helps validate the results of SCUT.
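
    The mechanics of this prior-to-posterior update can be sketched as a conjugate normal-normal combination; the prior SD and trial standard error below are assumptions (the abstract gives only point estimates), so the output illustrates the pattern rather than reproducing the published 0.19-line figure:

      import numpy as np

      # Prior from expert elicitation: roughly 1.2 Snellen lines of benefit.
      # The prior SD and the trial SE are illustrative assumptions; the
      # abstract reports only the point estimates.
      prior_mean, prior_sd = 1.2, 1.0
      trial_mean, trial_se = 0.09, 0.20   # SCUT primary outcome (SE assumed)

      # Conjugate normal-normal update: precision-weighted average.
      w_prior = 1 / prior_sd**2
      w_trial = 1 / trial_se**2
      post_var = 1 / (w_prior + w_trial)
      post_mean = post_var * (w_prior * prior_mean + w_trial * trial_mean)

      print(f"posterior mean = {post_mean:.2f} lines, sd = {np.sqrt(post_var):.2f}")
      # With a trial this large relative to a vague prior, the posterior stays
      # close to the trial estimate, which is the qualitative pattern reported
      # above (0.19 versus 0.09 lines).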

  5. Making Sense of a Negative Clinical Trial Result: A Bayesian Analysis of a Clinical Trial of Lorazepam and Diazepam for Pediatric Status Epilepticus.

    PubMed

    Chamberlain, Daniel B; Chamberlain, James M

    2017-01-01

    We demonstrate the application of a Bayesian approach to a recent negative clinical trial result. A Bayesian analysis of such a trial can provide a more useful interpretation of results and can incorporate previous evidence. This was a secondary analysis of the efficacy and safety results of the Pediatric Seizure Study, a randomized clinical trial of lorazepam versus diazepam for pediatric status epilepticus. We included the published results from the only prospective pediatric study of status in a Bayesian hierarchic model, and we performed sensitivity analyses on the amount of pooling between studies. We evaluated 3 summary analyses for the results: superiority, noninferiority (margin <-10%), and practical equivalence (within ±10%). Consistent with the original study's classic analysis of study results, we did not demonstrate superiority of lorazepam over diazepam. There is a 95% probability that the true efficacy of lorazepam is in the range of 66% to 80%. For both the efficacy and safety outcomes, there was greater than 95% probability that lorazepam is noninferior to diazepam, and there was greater than 90% probability that the 2 medications are practically equivalent. The results were largely driven by the current study because of the sample sizes of our study (n=273) and the previous pediatric study (n=61). Because Bayesian analysis estimates the probability of one or more hypotheses, such an approach can provide more useful information about the meaning of the results of a negative trial outcome. In the case of pediatric status epilepticus, it is highly likely that lorazepam is noninferior and practically equivalent to diazepam. Copyright © 2016 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.
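
    A minimal sketch of the three summary analyses described here, assuming a binary efficacy outcome, flat Beta priors, and illustrative counts (not the trial's data):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)

      # Illustrative response counts for the two arms (not the trial's data).
      succ_lor, n_lor = 100, 136   # lorazepam
      succ_dia, n_dia = 98, 137    # diazepam

      # Beta(1, 1) priors; independent posteriors for each arm's efficacy.
      lor = stats.beta(1 + succ_lor, 1 + n_lor - succ_lor).rvs(200_000, random_state=rng)
      dia = stats.beta(1 + succ_dia, 1 + n_dia - succ_dia).rvs(200_000, random_state=rng)
      diff = lor - dia

      # Probabilities for the three summary analyses described in the abstract.
      p_superior    = np.mean(diff > 0)
      p_noninferior = np.mean(diff > -0.10)        # margin of -10 percentage points
      p_equivalent  = np.mean(np.abs(diff) < 0.10) # practical equivalence within +/-10%

      print(f"P(superior) = {p_superior:.2f}, P(noninferior) = {p_noninferior:.2f}, "
            f"P(practically equivalent) = {p_equivalent:.2f}")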

  6. A Bayesian Analysis of a Randomized Clinical Trial Comparing Antimetabolite Therapies for Non-Infectious Uveitis.

    PubMed

    Browne, Erica N; Rathinam, Sivakumar R; Kanakath, Anuradha; Thundikandy, Radhika; Babu, Manohar; Lietman, Thomas M; Acharya, Nisha R

    2017-02-01

    To conduct a Bayesian analysis of a randomized clinical trial (RCT) for non-infectious uveitis using expert opinion as a subjective prior belief. A RCT was conducted to determine which antimetabolite, methotrexate or mycophenolate mofetil, is more effective as an initial corticosteroid-sparing agent for the treatment of intermediate, posterior, and pan-uveitis. Before the release of trial results, expert opinion on the relative effectiveness of these two medications was collected via online survey. Members of the American Uveitis Society executive committee were invited to provide an estimate for the relative decrease in efficacy with a 95% credible interval (CrI). A prior probability distribution was created from experts' estimates. A Bayesian analysis was performed using the constructed expert prior probability distribution and the trial's primary outcome. A total of 11 of the 12 invited uveitis specialists provided estimates. Eight of 11 experts (73%) believed mycophenolate mofetil is more effective. The group prior belief was that the odds of treatment success for patients taking mycophenolate mofetil were 1.4-fold the odds of those taking methotrexate (95% CrI 0.03-45.0). The odds of treatment success with mycophenolate mofetil compared to methotrexate was 0.4 from the RCT (95% confidence interval 0.1-1.2) and 0.7 (95% CrI 0.2-1.7) from the Bayesian analysis. A Bayesian analysis combining expert belief with the trial's result did not indicate preference for one drug. However, the wide credible interval leaves open the possibility of a substantial treatment effect. This suggests clinical equipoise necessary to allow a larger, more definitive RCT.
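
    A rough sketch of this kind of expert-prior update on the log odds ratio scale; the normal approximations and the standard deviations back-calculated from the reported intervals are assumptions, so the output will not exactly match the published posterior of 0.7 (95% CrI 0.2-1.7):

      import numpy as np

      # Approximate the expert prior and the trial result as normal
      # distributions on the log odds ratio; the SDs are rough back-
      # calculations from the reported 95% intervals (an assumption).
      prior_mean = np.log(1.4)
      prior_sd   = (np.log(45.0) - np.log(0.03)) / (2 * 1.96)
      trial_mean = np.log(0.4)
      trial_se   = (np.log(1.2) - np.log(0.1)) / (2 * 1.96)

      w_prior, w_trial = 1 / prior_sd**2, 1 / trial_se**2
      post_var  = 1 / (w_prior + w_trial)
      post_mean = post_var * (w_prior * prior_mean + w_trial * trial_mean)
      lo, hi = post_mean - 1.96 * np.sqrt(post_var), post_mean + 1.96 * np.sqrt(post_var)

      print(f"posterior OR = {np.exp(post_mean):.2f} "
            f"(95% CrI {np.exp(lo):.2f} to {np.exp(hi):.2f})")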

  7. Assessing noninferiority in a three-arm trial using the Bayesian approach.

    PubMed

    Ghosh, Pulak; Nathoo, Farouk; Gönen, Mithat; Tiwari, Ram C

    2011-07-10

    Non-inferiority trials, which aim to demonstrate that a test product is not worse than a competitor by more than a pre-specified small amount, are of great importance to the pharmaceutical community. As a result, methodology for designing and analyzing such trials is required, and developing new methods for such analysis is an important area of statistical research. The three-arm trial consists of a placebo, a reference and an experimental treatment, and simultaneously tests the superiority of the reference over the placebo along with comparing this reference to an experimental treatment. In this paper, we consider the analysis of non-inferiority trials using Bayesian methods which incorporate both parametric as well as semi-parametric models. The resulting testing approach is both flexible and robust. The benefit of the proposed Bayesian methods is assessed via simulation, based on a study examining home-based blood pressure interventions. Copyright © 2011 John Wiley & Sons, Ltd.

  8. Prior Elicitation and Bayesian Analysis of the Steroids for Corneal Ulcers Trial

    PubMed Central

    See, Craig W.; Srinivasan, Muthiah; Saravanan, Somu; Oldenburg, Catherine E.; Esterberg, Elizabeth J.; Ray, Kathryn J.; Glaser, Tanya S.; Tu, Elmer Y.; Zegans, Michael E.; McLeod, Stephen D.; Acharya, Nisha R.; Lietman, Thomas M.

    2013-01-01

    Purpose To elicit expert opinion on the use of adjunctive corticosteroid therapy in bacterial corneal ulcers. To perform a Bayesian analysis of the Steroids for Corneal Ulcers Trial (SCUT), using expert opinion as a prior probability. Methods The SCUT was a placebo-controlled trial assessing visual outcomes in patients receiving topical corticosteroids or placebo as adjunctive therapy for bacterial keratitis. Questionnaires were conducted at scientific meetings in India and North America to gauge expert consensus on the perceived benefit of corticosteroids as adjunct treatment. Bayesian analysis, using the questionnaire data as a prior probability and the primary outcome of SCUT as a likelihood, was performed. For comparison, an additional Bayesian analysis was performed using the results of the SCUT pilot study as a prior distribution. Results Indian respondents believed there to be a 1.21 Snellen line improvement, and North American respondents believed there to be a 1.24 line improvement with corticosteroid therapy. The SCUT primary outcome found a non-significant 0.09 Snellen line benefit with corticosteroid treatment. The results of the Bayesian analysis estimated a slightly greater benefit than did the SCUT primary analysis (0.19 lines versus 0.09 lines). Conclusion Indian and North American experts had similar expectations on the effectiveness of corticosteroids in bacterial corneal ulcers: that corticosteroids would markedly improve visual outcomes. Bayesian analysis produced results very similar to those produced by the SCUT primary analysis. The similarity in result is likely due to the large sample size of SCUT and helps validate the results of SCUT. PMID:23171211

  9. Application of Bayesian Approach in Cancer Clinical Trial

    PubMed Central

    Bhattacharjee, Atanu

    2014-01-01

    The Bayesian approach to clinical trials offers advantages over classical methods and is beneficial from the design phase through to analysis. It allows direct statements to be made about the drug treatment effect, and complex computational problems are straightforward to handle with Bayesian techniques. The approach is feasible only when prior information about the data is available, and inference is established through posterior estimates. However, the method also has limitations. The objective of this work was to explore the merits and demerits of the Bayesian approach in cancer research. The review will help clinical researchers in oncology to appreciate both the limitations and the power of Bayesian techniques. PMID:29147387

  10. The Bayesian New Statistics: Hypothesis testing, estimation, meta-analysis, and power analysis from a Bayesian perspective.

    PubMed

    Kruschke, John K; Liddell, Torrin M

    2018-02-01

    In the practice of data analysis, there is a conceptual distinction between hypothesis testing, on the one hand, and estimation with quantified uncertainty on the other. Among frequentists in psychology, a shift of emphasis from hypothesis testing to estimation has been dubbed "the New Statistics" (Cumming 2014). A second conceptual distinction is between frequentist methods and Bayesian methods. Our main goal in this article is to explain how Bayesian methods achieve the goals of the New Statistics better than frequentist methods. The article reviews frequentist and Bayesian approaches to hypothesis testing and to estimation with confidence or credible intervals. The article also describes Bayesian approaches to meta-analysis, randomized controlled trials, and power analysis.

  11. Bayesian statistical inference enhances the interpretation of contemporary randomized controlled trials.

    PubMed

    Wijeysundera, Duminda N; Austin, Peter C; Hux, Janet E; Beattie, W Scott; Laupacis, Andreas

    2009-01-01

    Randomized trials generally use "frequentist" statistics based on P-values and 95% confidence intervals. Frequentist methods have limitations that might be overcome, in part, by Bayesian inference. To illustrate these advantages, we re-analyzed randomized trials published in four general medical journals during 2004. We used Medline to identify randomized superiority trials with two parallel arms, individual-level randomization and dichotomous or time-to-event primary outcomes. Studies with P<0.05 in favor of the intervention were deemed "positive"; otherwise, they were "negative." We used several prior distributions and exact conjugate analyses to calculate Bayesian posterior probabilities for clinically relevant effects. Of 88 included studies, 39 were positive using a frequentist analysis. Although the Bayesian posterior probabilities of any benefit (relative risk or hazard ratio<1) were high in positive studies, these probabilities were lower and variable for larger benefits. The positive studies had only moderate probabilities for exceeding the effects that were assumed for calculating the sample size. By comparison, there were moderate probabilities of any benefit in negative studies. Bayesian and frequentist analyses complement each other when interpreting the results of randomized trials. Future reports of randomized trials should include both.
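
    A minimal sketch of the kind of posterior probability computed in such re-analyses, assuming a vague prior and a normal approximation to an illustrative (hypothetical) hazard ratio estimate:

      import numpy as np
      from scipy import stats

      # Illustrative "negative" trial result: hazard ratio 0.85 (95% CI 0.70-1.03),
      # with HR = 0.75 assumed when the sample size was calculated. All numbers
      # are hypothetical. Under a vague prior, the posterior for log(HR) is
      # approximately this normal distribution.
      log_hr = np.log(0.85)
      se     = (np.log(1.03) - np.log(0.70)) / (2 * 1.96)
      post   = stats.norm(log_hr, se)

      print(f"P(any benefit, HR < 1.00)             = {post.cdf(np.log(1.00)):.2f}")
      print(f"P(clinically relevant, HR < 0.90)     = {post.cdf(np.log(0.90)):.2f}")
      print(f"P(at least the design effect, < 0.75) = {post.cdf(np.log(0.75)):.2f}")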

  12. Bayesian randomized clinical trials: From fixed to adaptive design.

    PubMed

    Yin, Guosheng; Lam, Chi Kin; Shi, Haolun

    2017-08-01

    Randomized controlled studies are the gold standard for phase III clinical trials. Using α-spending functions to control the overall type I error rate, group sequential methods are well established and have been dominating phase III studies. Bayesian randomized design, on the other hand, can be viewed as a complement instead of competitive approach to the frequentist methods. For the fixed Bayesian design, the hypothesis testing can be cast in the posterior probability or Bayes factor framework, which has a direct link to the frequentist type I error rate. Bayesian group sequential design relies upon Bayesian decision-theoretic approaches based on backward induction, which is often computationally intensive. Compared with the frequentist approaches, Bayesian methods have several advantages. The posterior predictive probability serves as a useful and convenient tool for trial monitoring, and can be updated at any time as the data accrue during the trial. The Bayesian decision-theoretic framework possesses a direct link to the decision making in the practical setting, and can be modeled more realistically to reflect the actual cost-benefit analysis during the drug development process. Other merits include the possibility of hierarchical modeling and the use of informative priors, which would lead to a more comprehensive utilization of information from both historical and longitudinal data. From fixed to adaptive design, we focus on Bayesian randomized controlled clinical trials and make extensive comparisons with frequentist counterparts through numerical studies. Copyright © 2017 Elsevier Inc. All rights reserved.
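
    The posterior-probability/Bayes-factor framing for a fixed Bayesian design can be sketched for a single-arm binary endpoint as follows; the point null, the Beta prior under the alternative, and the counts are illustrative assumptions:

      import numpy as np
      from scipy.special import betaln, gammaln

      def log_binom_coeff(n, y):
          return gammaln(n + 1) - gammaln(y + 1) - gammaln(n - y + 1)

      # Illustrative single-arm data: 24 responses in 60 patients.
      n, y = 60, 24
      p0 = 0.30                 # point null response rate
      a, b = 1.0, 1.0           # Beta prior on the response rate under H1

      # Marginal likelihoods: binomial under H0, beta-binomial under H1.
      log_m0 = log_binom_coeff(n, y) + y * np.log(p0) + (n - y) * np.log(1 - p0)
      log_m1 = log_binom_coeff(n, y) + betaln(y + a, n - y + b) - betaln(a, b)

      bf10 = np.exp(log_m1 - log_m0)
      prior_odds = 1.0                          # equal prior weight on H0 and H1
      post_prob_h1 = bf10 * prior_odds / (1 + bf10 * prior_odds)

      print(f"BF10 = {bf10:.2f}, posterior P(H1 | data) = {post_prob_h1:.2f}")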

  13. Using Bayesian Adaptive Trial Designs for Comparative Effectiveness Research: A Virtual Trial Execution.

    PubMed

    Luce, Bryan R; Connor, Jason T; Broglio, Kristine R; Mullins, C Daniel; Ishak, K Jack; Saunders, Elijah; Davis, Barry R

    2016-09-20

    Bayesian and adaptive clinical trial designs offer the potential for more efficient processes that result in lower sample sizes and shorter trial durations than traditional designs. To explore the use and potential benefits of Bayesian adaptive clinical trial designs in comparative effectiveness research. Virtual execution of ALLHAT (Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial) as if it had been done according to a Bayesian adaptive trial design. Comparative effectiveness trial of antihypertensive medications. Patient data sampled from the more than 42 000 patients enrolled in ALLHAT with publicly available data. Number of patients randomly assigned between groups, trial duration, observed numbers of events, and overall trial results and conclusions. The Bayesian adaptive approach and original design yielded similar overall trial conclusions. The Bayesian adaptive trial randomly assigned more patients to the better-performing group and would probably have ended slightly earlier. This virtual trial execution required limited resampling of ALLHAT patients for inclusion in RE-ADAPT (REsearch in ADAptive methods for Pragmatic Trials). Involvement of a data monitoring committee and other trial logistics were not considered. In a comparative effectiveness research trial, Bayesian adaptive trial designs are a feasible approach and potentially generate earlier results and allocate more patients to better-performing groups. National Heart, Lung, and Blood Institute.

  14. Application of Multiple Imputation for Missing Values in Three-Way Three-Mode Multi-Environment Trial Data

    PubMed Central

    Tian, Ting; McLachlan, Geoffrey J.; Dieters, Mark J.; Basford, Kaye E.

    2015-01-01

    It is a common occurrence in plant breeding programs to observe missing values in three-way three-mode multi-environment trial (MET) data. We proposed modifications of models for estimating missing observations for these data arrays, and developed a novel approach in terms of hierarchical clustering. Multiple imputation (MI) was used in four ways: multiple agglomerative hierarchical clustering, normal distribution model, normal regression model, and predictive mean match. The latter three models used both Bayesian analysis and non-Bayesian analysis, while the first approach used a clustering procedure with randomly selected attributes and assigned real values from the nearest neighbour to the one with missing observations. Different proportions of data entries in six complete datasets were randomly selected to be missing and the MI methods were compared based on the efficiency and accuracy of estimating those values. The results indicated that the models using Bayesian analysis had slightly higher accuracy of estimation performance than those using non-Bayesian analysis but they were more time-consuming. However, the novel approach of multiple agglomerative hierarchical clustering demonstrated the best overall performance. PMID:26689369

  15. Application of Multiple Imputation for Missing Values in Three-Way Three-Mode Multi-Environment Trial Data.

    PubMed

    Tian, Ting; McLachlan, Geoffrey J; Dieters, Mark J; Basford, Kaye E

    2015-01-01

    It is a common occurrence in plant breeding programs to observe missing values in three-way three-mode multi-environment trial (MET) data. We proposed modifications of models for estimating missing observations for these data arrays, and developed a novel approach in terms of hierarchical clustering. Multiple imputation (MI) was used in four ways: multiple agglomerative hierarchical clustering, normal distribution model, normal regression model, and predictive mean match. The latter three models used both Bayesian analysis and non-Bayesian analysis, while the first approach used a clustering procedure with randomly selected attributes and assigned real values from the nearest neighbour to the one with missing observations. Different proportions of data entries in six complete datasets were randomly selected to be missing and the MI methods were compared based on the efficiency and accuracy of estimating those values. The results indicated that the models using Bayesian analysis had slightly higher accuracy of estimation performance than those using non-Bayesian analysis but they were more time-consuming. However, the novel approach of multiple agglomerative hierarchical clustering demonstrated the best overall performance.
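
    A toy single-variable sketch of one of the four imputation components mentioned above, predictive mean matching (hypothetical data; the study applied it to three-way MET arrays and combined it with multiple imputation):

      import numpy as np

      rng = np.random.default_rng(7)

      # Toy data: yield (y) depends on a covariate x; about 20% of y is missing.
      n = 100
      x = rng.normal(size=n)
      y = 2.0 + 1.5 * x + rng.normal(scale=0.5, size=n)
      missing = rng.random(n) < 0.2
      y_obs = np.where(missing, np.nan, y)

      # Fit a least-squares line on the complete cases.
      obs = ~missing
      X_obs = np.column_stack([np.ones(obs.sum()), x[obs]])
      coef, *_ = np.linalg.lstsq(X_obs, y_obs[obs], rcond=None)

      # Predictive mean matching: for each missing entry, borrow the observed y
      # whose predicted mean is closest to the missing entry's predicted mean.
      pred = coef[0] + coef[1] * x
      y_imp = y_obs.copy()
      for i in np.where(missing)[0]:
          donor = np.argmin(np.abs(pred[obs] - pred[i]))
          y_imp[i] = y_obs[obs][donor]

      print(f"imputed {missing.sum()} values; "
            f"mean(observed) = {np.nanmean(y_obs):.2f}, mean(completed) = {y_imp.mean():.2f}")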

  16. Bayesian model selection techniques as decision support for shaping a statistical analysis plan of a clinical trial: An example from a vertigo phase III study with longitudinal count data as primary endpoint

    PubMed Central

    Adrion, Christine; Mansmann, Ulrich

    2012-01-01

    Background A statistical analysis plan (SAP) is a critical link between how a clinical trial is conducted and the clinical study report. To secure objective study results, regulatory bodies expect that the SAP will meet requirements in pre-specifying inferential analyses and other important statistical techniques. To write a good SAP for model-based sensitivity and ancillary analyses involves non-trivial decisions on and justification of many aspects of the chosen setting. In particular, trials with longitudinal count data as primary endpoints pose challenges for model choice and model validation. In the random effects setting, frequentist strategies for model assessment and model diagnosis are complex and not easily implemented and have several limitations. Therefore, it is of interest to explore Bayesian alternatives which provide the needed decision support to finalize a SAP. Methods We focus on generalized linear mixed models (GLMMs) for the analysis of longitudinal count data. A series of distributions with over- and under-dispersion is considered. Additionally, the structure of the variance components is modified. We perform a simulation study to investigate the discriminatory power of Bayesian tools for model criticism in different scenarios derived from the model setting. We apply the findings to the data from an open clinical trial on vertigo attacks. These data are seen as pilot data for an ongoing phase III trial. To fit GLMMs we use a novel Bayesian computational approach based on integrated nested Laplace approximations (INLAs). The INLA methodology enables the direct computation of leave-one-out predictive distributions. These distributions are crucial for Bayesian model assessment. We evaluate competing GLMMs for longitudinal count data according to the deviance information criterion (DIC) or probability integral transform (PIT), and by using proper scoring rules (e.g. the logarithmic score). Results The instruments under study provide excellent tools for preparing decisions within the SAP in a transparent way when structuring the primary analysis, sensitivity or ancillary analyses, and specific analyses for secondary endpoints. The mean logarithmic score and DIC discriminate well between different model scenarios. It becomes obvious that the naive choice of a conventional random effects Poisson model is often inappropriate for real-life count data. The findings are used to specify an appropriate mixed model employed in the sensitivity analyses of an ongoing phase III trial. Conclusions The proposed Bayesian methods are not only appealing for inference but notably provide a sophisticated insight into different aspects of model performance, such as forecast verification or calibration checks, and can be applied within the model selection process. The mean of the logarithmic score is a robust tool for model ranking and is not sensitive to sample size. Therefore, these Bayesian model selection techniques offer helpful decision support for shaping sensitivity and ancillary analyses in a statistical analysis plan of a clinical trial with longitudinal count data as the primary endpoint. PMID:22962944

  17. Bayesian model selection techniques as decision support for shaping a statistical analysis plan of a clinical trial: an example from a vertigo phase III study with longitudinal count data as primary endpoint.

    PubMed

    Adrion, Christine; Mansmann, Ulrich

    2012-09-10

    A statistical analysis plan (SAP) is a critical link between how a clinical trial is conducted and the clinical study report. To secure objective study results, regulatory bodies expect that the SAP will meet requirements in pre-specifying inferential analyses and other important statistical techniques. To write a good SAP for model-based sensitivity and ancillary analyses involves non-trivial decisions on and justification of many aspects of the chosen setting. In particular, trials with longitudinal count data as primary endpoints pose challenges for model choice and model validation. In the random effects setting, frequentist strategies for model assessment and model diagnosis are complex and not easily implemented and have several limitations. Therefore, it is of interest to explore Bayesian alternatives which provide the needed decision support to finalize a SAP. We focus on generalized linear mixed models (GLMMs) for the analysis of longitudinal count data. A series of distributions with over- and under-dispersion is considered. Additionally, the structure of the variance components is modified. We perform a simulation study to investigate the discriminatory power of Bayesian tools for model criticism in different scenarios derived from the model setting. We apply the findings to the data from an open clinical trial on vertigo attacks. These data are seen as pilot data for an ongoing phase III trial. To fit GLMMs we use a novel Bayesian computational approach based on integrated nested Laplace approximations (INLAs). The INLA methodology enables the direct computation of leave-one-out predictive distributions. These distributions are crucial for Bayesian model assessment. We evaluate competing GLMMs for longitudinal count data according to the deviance information criterion (DIC) or probability integral transform (PIT), and by using proper scoring rules (e.g. the logarithmic score). The instruments under study provide excellent tools for preparing decisions within the SAP in a transparent way when structuring the primary analysis, sensitivity or ancillary analyses, and specific analyses for secondary endpoints. The mean logarithmic score and DIC discriminate well between different model scenarios. It becomes obvious that the naive choice of a conventional random effects Poisson model is often inappropriate for real-life count data. The findings are used to specify an appropriate mixed model employed in the sensitivity analyses of an ongoing phase III trial. The proposed Bayesian methods are not only appealing for inference but notably provide a sophisticated insight into different aspects of model performance, such as forecast verification or calibration checks, and can be applied within the model selection process. The mean of the logarithmic score is a robust tool for model ranking and is not sensitive to sample size. Therefore, these Bayesian model selection techniques offer helpful decision support for shaping sensitivity and ancillary analyses in a statistical analysis plan of a clinical trial with longitudinal count data as the primary endpoint.
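
    A stripped-down sketch of scoring competing count models with the logarithmic score, using plug-in moment fits on simulated over-dispersed counts rather than the full GLMM/INLA machinery described above:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)

      # Simulate over-dispersed counts (e.g. vertigo attacks per observation period).
      true_mean, true_disp = 4.0, 2.0          # negative binomial mean and dispersion
      p = true_disp / (true_disp + true_mean)
      counts = stats.nbinom(true_disp, p).rvs(200, random_state=rng)

      # Plug-in moment fits; the article instead fits full GLMMs via INLA.
      mu = counts.mean()
      var = counts.var(ddof=1)
      size = mu**2 / max(var - mu, 1e-9)        # NB "size" from mean and variance
      p_hat = size / (size + mu)

      # Mean logarithmic score (here: higher is better) for each predictive model.
      log_score_pois = stats.poisson(mu).logpmf(counts).mean()
      log_score_nb   = stats.nbinom(size, p_hat).logpmf(counts).mean()

      print(f"mean log score: Poisson = {log_score_pois:.3f}, "
            f"negative binomial = {log_score_nb:.3f}")
      # The Poisson model, which ignores over-dispersion, gets the worse score,
      # mirroring the point above that a naive Poisson choice is often inappropriate.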

  18. A Bayesian Analysis of a Randomized Clinical Trial Comparing Antimetabolite Therapies for Non-Infectious Uveitis

    PubMed Central

    Browne, Erica N; Rathinam, Sivakumar R; Kanakath, Anuradha; Thundikandy, Radhika; Babu, Manohar; Lietman, Thomas M; Acharya, Nisha R

    2017-01-01

    Purpose To conduct a Bayesian analysis of a randomized clinical trial (RCT) for non-infectious uveitis using expert opinion as a subjective prior belief. Methods A RCT was conducted to determine which antimetabolite, methotrexate or mycophenolate mofetil, is more effective as an initial corticosteroid-sparing agent for the treatment of intermediate, posterior, and pan-uveitis. Before the release of trial results, expert opinion on the relative effectiveness of these two medications was collected via online survey. Members of the American Uveitis Society executive committee were invited to provide an estimate for the relative decrease in efficacy with a 95% credible interval (CrI). A prior probability distribution was created from experts’ estimates. A Bayesian analysis was performed using the constructed expert prior probability distribution and the trial’s primary outcome. Results 11 of 12 invited uveitis specialists provided estimates. Eight of 11 experts (73%) believed mycophenolate mofetil is more effective. The group prior belief was that the odds of treatment success for patients taking mycophenolate mofetil were 1.4-fold the odds of those taking methotrexate (95% CrI 0.03–45.0). The odds of treatment success with mycophenolate mofetil compared to methotrexate was 0.4 from the RCT (95% confidence interval 0.1–1.2) and 0.7 (95% CrI 0.2–1.7) from the Bayesian analysis. Conclusions A Bayesian analysis combining expert belief with the trial’s result did not indicate preference for one drug. However, the wide credible interval leaves open the possibility of a substantial treatment effect. This suggests clinical equipoise necessary to allow a larger, more definitive RCT. PMID:27982726

  19. A Bayesian-frequentist two-stage single-arm phase II clinical trial design.

    PubMed

    Dong, Gaohong; Shih, Weichung Joe; Moore, Dirk; Quan, Hui; Marcella, Stephen

    2012-08-30

    It is well-known that both frequentist and Bayesian clinical trial designs have their own advantages and disadvantages. To have better properties inherited from these two types of designs, we developed a Bayesian-frequentist two-stage single-arm phase II clinical trial design. This design allows both early acceptance and rejection of the null hypothesis (H0). The measures (for example probability of trial early termination, expected sample size, etc.) of the design properties under both frequentist and Bayesian settings are derived. Moreover, under the Bayesian setting, the upper and lower boundaries are determined with predictive probability of trial success outcome. Given a beta prior and a sample size for stage I, based on the marginal distribution of the responses at stage I, we derived Bayesian Type I and Type II error rates. By controlling both frequentist and Bayesian error rates, the Bayesian-frequentist two-stage design has special features compared with other two-stage designs. Copyright © 2012 John Wiley & Sons, Ltd.

  20. A Bayesian Missing Data Framework for Generalized Multiple Outcome Mixed Treatment Comparisons

    ERIC Educational Resources Information Center

    Hong, Hwanhee; Chu, Haitao; Zhang, Jing; Carlin, Bradley P.

    2016-01-01

    Bayesian statistical approaches to mixed treatment comparisons (MTCs) are becoming more popular because of their flexibility and interpretability. Many randomized clinical trials report multiple outcomes with possible inherent correlations. Moreover, MTC data are typically sparse (although richer than standard meta-analysis, comparing only two…

  21. Bayesian data analysis in observational comparative effectiveness research: rationale and examples.

    PubMed

    Olson, William H; Crivera, Concetta; Ma, Yi-Wen; Panish, Jessica; Mao, Lian; Lynch, Scott M

    2013-11-01

    Many comparative effectiveness research and patient-centered outcomes research studies will need to be observational for one or both of two reasons: first, randomized trials are expensive and time-consuming; and second, only observational studies can answer some research questions. It is generally recognized that there is a need to increase the scientific validity and efficiency of observational studies. Bayesian methods for the design and analysis of observational studies are scientifically valid and offer many advantages over frequentist methods, including, importantly, the ability to conduct comparative effectiveness research/patient-centered outcomes research more efficiently. Bayesian data analysis is being introduced into outcomes studies that we are conducting. Our purpose here is to describe our view of some of the advantages of Bayesian methods for observational studies and to illustrate both realized and potential advantages by describing studies we are conducting in which various Bayesian methods have been or could be implemented.

  22. Bayesian methods including nonrandomized study data increased the efficiency of postlaunch RCTs.

    PubMed

    Schmidt, Amand F; Klugkist, Irene; Klungel, Olaf H; Nielen, Mirjam; de Boer, Anthonius; Hoes, Arno W; Groenwold, Rolf H H

    2015-04-01

    Findings from nonrandomized studies on safety or efficacy of treatment in patient subgroups may trigger postlaunch randomized clinical trials (RCTs). In the analysis of such RCTs, results from nonrandomized studies are typically ignored. This study explores the trade-off between bias and power of Bayesian RCT analysis incorporating information from nonrandomized studies. A simulation study was conducted to compare frequentist with Bayesian analyses using noninformative and informative priors in their ability to detect interaction effects. In simulated subgroups, the effect of a hypothetical treatment differed between subgroups (odds ratio 1.00 vs. 2.33). Simulations varied in sample size, proportions of the subgroups, and specification of the priors. As expected, the results for the informative Bayesian analyses were more biased than those from the noninformative Bayesian analysis or frequentist analysis. However, because of a reduction in posterior variance, informative Bayesian analyses were generally more powerful to detect an effect. In scenarios where the informative priors were in the opposite direction of the RCT data, type 1 error rates could be 100% and power 0%. Bayesian methods incorporating data from nonrandomized studies can meaningfully increase power of interaction tests in postlaunch RCTs. Copyright © 2015 Elsevier Inc. All rights reserved.
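
    A small sketch of the bias-power trade-off under a normal approximation: an informative prior on the interaction effect (here on the log odds ratio scale, with made-up numbers) shrinks the posterior SD and raises the probability of detecting effect modification, but a prior pointing the wrong way pulls the estimate toward or past the null:

      import numpy as np
      from scipy import stats

      def posterior(prior_mean, prior_sd, est, se):
          """Normal-normal combination of a prior and an RCT estimate."""
          w0, w1 = 1 / prior_sd**2, 1 / se**2
          var = 1 / (w0 + w1)
          return var * (w0 * prior_mean + w1 * est), np.sqrt(var)

      # Illustrative interaction effect on the log odds ratio scale:
      # the RCT alone estimates log(2.33) with a large standard error.
      rct_est, rct_se = np.log(2.33), 0.60

      for label, pm, psd in [("noninformative prior",       0.0,          100.0),
                             ("informative, concordant",    np.log(2.0),    0.40),
                             ("informative, opposite sign", np.log(0.5),    0.40)]:
          m, s = posterior(pm, psd, rct_est, rct_se)
          p_interaction = 1 - stats.norm(m, s).cdf(0.0)   # P(interaction > 0 | data)
          print(f"{label:28s} posterior mean = {m:+.2f}, sd = {s:.2f}, "
                f"P(effect modification) = {p_interaction:.2f}")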

  23. Analysis of phase II methodologies for single-arm clinical trials with multiple endpoints in rare cancers: An example in Ewing's sarcoma.

    PubMed

    Dutton, P; Love, S B; Billingham, L; Hassan, A B

    2018-05-01

    Trials run in either rare diseases, such as rare cancers, or rare sub-populations of common diseases are challenging in terms of identifying, recruiting and treating sufficient patients in a sensible period. Treatments for rare diseases are often designed for other disease areas and then later proposed as possible treatments for the rare disease after initial phase I testing is complete. To ensure the trial is in the best interests of the patient participants, frequent interim analyses are needed to force the trial to stop promptly if the treatment is futile or toxic. These non-definitive phase II trials should also be stopped for efficacy to accelerate research progress if the treatment proves to be particularly promising. In this paper, we review frequentist and Bayesian methods that have been adapted to incorporate two binary endpoints and frequent interim analyses. The Eurosarc Trial of Linsitinib in advanced Ewing Sarcoma (LINES) is used as a motivating example and provides a suitable platform to compare these approaches. The Bayesian approach provides greater design flexibility, but does not provide additional value over the frequentist approaches in a single trial setting when the prior is non-informative. However, Bayesian designs are able to borrow from any previous experience, using prior information to improve efficiency.

  24. The influence of baseline marijuana use on treatment of cocaine dependence: application of an informative-priors Bayesian approach.

    PubMed

    Green, Charles; Schmitz, Joy; Lindsay, Jan; Pedroza, Claudia; Lane, Scott; Agnelli, Rob; Kjome, Kimberley; Moeller, F Gerard

    2012-01-01

    Marijuana use is prevalent among patients with cocaine dependence and often non-exclusionary in clinical trials of potential cocaine medications. The dual-focus of this study was to (1) examine the moderating effect of baseline marijuana use on response to treatment with levodopa/carbidopa for cocaine dependence; and (2) apply an informative-priors, Bayesian approach for estimating the probability of a subgroup-by-treatment interaction effect. A secondary data analysis of two previously published, double-blind, randomized controlled trials provided complete data for the historical (Study 1: N = 64 placebo), and current (Study 2: N = 113) data sets. Negative binomial regression evaluated Treatment Effectiveness Scores (TES) as a function of medication condition (levodopa/carbidopa, placebo), baseline marijuana use (days in past 30), and their interaction. Bayesian analysis indicated that there was a 96% chance that baseline marijuana use predicts differential response to treatment with levodopa/carbidopa. Simple effects indicated that among participants receiving levodopa/carbidopa the probability that baseline marijuana confers harm in terms of reducing TES was 0.981; whereas the probability that marijuana confers harm within the placebo condition was 0.163. For every additional day of marijuana use reported at baseline, participants in the levodopa/carbidopa condition demonstrated a 5.4% decrease in TES; while participants in the placebo condition demonstrated a 4.9% increase in TES. The potential moderating effect of marijuana on cocaine treatment response should be considered in future trial designs. Applying Bayesian subgroup analysis proved informative in characterizing this patient-treatment interaction effect.

  25. The Influence of Baseline Marijuana Use on Treatment of Cocaine Dependence: Application of an Informative-Priors Bayesian Approach

    PubMed Central

    Green, Charles; Schmitz, Joy; Lindsay, Jan; Pedroza, Claudia; Lane, Scott; Agnelli, Rob; Kjome, Kimberley; Moeller, F. Gerard

    2012-01-01

    Background: Marijuana use is prevalent among patients with cocaine dependence and often non-exclusionary in clinical trials of potential cocaine medications. The dual-focus of this study was to (1) examine the moderating effect of baseline marijuana use on response to treatment with levodopa/carbidopa for cocaine dependence; and (2) apply an informative-priors, Bayesian approach for estimating the probability of a subgroup-by-treatment interaction effect. Method: A secondary data analysis of two previously published, double-blind, randomized controlled trials provided complete data for the historical (Study 1: N = 64 placebo), and current (Study 2: N = 113) data sets. Negative binomial regression evaluated Treatment Effectiveness Scores (TES) as a function of medication condition (levodopa/carbidopa, placebo), baseline marijuana use (days in past 30), and their interaction. Results: Bayesian analysis indicated that there was a 96% chance that baseline marijuana use predicts differential response to treatment with levodopa/carbidopa. Simple effects indicated that among participants receiving levodopa/carbidopa the probability that baseline marijuana confers harm in terms of reducing TES was 0.981; whereas the probability that marijuana confers harm within the placebo condition was 0.163. For every additional day of marijuana use reported at baseline, participants in the levodopa/carbidopa condition demonstrated a 5.4% decrease in TES; while participants in the placebo condition demonstrated a 4.9% increase in TES. Conclusion: The potential moderating effect of marijuana on cocaine treatment response should be considered in future trial designs. Applying Bayesian subgroup analysis proved informative in characterizing this patient-treatment interaction effect. PMID:23115553

  26. A Bayesian approach for incorporating economic factors in sample size design for clinical trials of individual drugs and portfolios of drugs.

    PubMed

    Patel, Nitin R; Ankolekar, Suresh

    2007-11-30

    Classical approaches to clinical trial design ignore economic factors that determine economic viability of a new drug. We address the choice of sample size in Phase III trials as a decision theory problem using a hybrid approach that takes a Bayesian view from the perspective of a drug company and a classical Neyman-Pearson view from the perspective of regulatory authorities. We incorporate relevant economic factors in the analysis to determine the optimal sample size to maximize the expected profit for the company. We extend the analysis to account for risk by using a 'satisficing' objective function that maximizes the chance of meeting a management-specified target level of profit. We extend the models for single drugs to a portfolio of clinical trials and optimize the sample sizes to maximize the expected profit subject to budget constraints. Further, we address the portfolio risk and optimize the sample sizes to maximize the probability of achieving a given target of expected profit.
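
    A compact sketch of the hybrid idea, assuming a normal prior on a standardized effect, a one-sided frequentist success criterion, and made-up economic figures; expected profit is maximized over a grid of sample sizes:

      import numpy as np
      from scipy import stats

      # Company prior on the standardized treatment effect (Bayesian view).
      prior_mean, prior_sd = 0.25, 0.15

      # Regulatory success: two-arm z-test at one-sided alpha (frequentist view).
      alpha = 0.025
      z_alpha = stats.norm.ppf(1 - alpha)

      # Simple economics (all figures hypothetical, in $ millions).
      value_if_approved = 500.0
      cost_per_patient  = 0.05
      fixed_cost        = 20.0

      deltas = np.linspace(-0.2, 0.7, 400)
      prior_w = stats.norm(prior_mean, prior_sd).pdf(deltas)
      prior_w /= prior_w.sum()

      best = None
      for n_per_arm in range(100, 2001, 50):
          se = np.sqrt(2.0 / n_per_arm)
          power_given_delta = 1 - stats.norm.cdf(z_alpha - deltas / se)
          p_success = np.sum(prior_w * power_given_delta)   # prior-averaged "assurance"
          profit = (p_success * value_if_approved
                    - 2 * n_per_arm * cost_per_patient - fixed_cost)
          if best is None or profit > best[1]:
              best = (n_per_arm, profit, p_success)

      print(f"optimal n per arm = {best[0]}, expected profit = {best[1]:.0f}M, "
            f"P(success) = {best[2]:.2f}")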

  27. How Recent History Affects Perception: The Normative Approach and Its Heuristic Approximation

    PubMed Central

    Raviv, Ofri; Ahissar, Merav; Loewenstein, Yonatan

    2012-01-01

    There is accumulating evidence that prior knowledge about expectations plays an important role in perception. The Bayesian framework is the standard computational approach to explain how prior knowledge about the distribution of expected stimuli is incorporated with noisy observations in order to improve performance. However, it is unclear what information about the prior distribution is acquired by the perceptual system over short periods of time and how this information is utilized in the process of perceptual decision making. Here we address this question using a simple two-tone discrimination task. We find that the “contraction bias”, in which small magnitudes are overestimated and large magnitudes are underestimated, dominates the pattern of responses of human participants. This contraction bias is consistent with the Bayesian hypothesis in which the true prior information is available to the decision-maker. However, a trial-by-trial analysis of the pattern of responses reveals that the contribution of most recent trials to performance is overweighted compared with the predictions of a standard Bayesian model. Moreover, we study participants' performance in atypical distributions of stimuli and demonstrate substantial deviations from the ideal Bayesian detector, suggesting that the brain utilizes a heuristic approximation of the Bayesian inference. We propose a biologically plausible model, in which decision in the two-tone discrimination task is based on a comparison between the second tone and an exponentially-decaying average of the first tone and past tones. We show that this model accounts for both the contraction bias and the deviations from the ideal Bayesian detector hypothesis. These findings demonstrate the power of Bayesian-like heuristics in the brain, as well as their limitations in their failure to fully adapt to novel environments. PMID:23133343

  28. Analysis of phase II methodologies for single-arm clinical trials with multiple endpoints in rare cancers: An example in Ewing’s sarcoma

    PubMed Central

    Dutton, P; Love, SB; Billingham, L; Hassan, AB

    2016-01-01

    Trials run in either rare diseases, such as rare cancers, or rare sub-populations of common diseases are challenging in terms of identifying, recruiting and treating sufficient patients in a sensible period. Treatments for rare diseases are often designed for other disease areas and then later proposed as possible treatments for the rare disease after initial phase I testing is complete. To ensure the trial is in the best interests of the patient participants, frequent interim analyses are needed to force the trial to stop promptly if the treatment is futile or toxic. These non-definitive phase II trials should also be stopped for efficacy to accelerate research progress if the treatment proves to be particularly promising. In this paper, we review frequentist and Bayesian methods that have been adapted to incorporate two binary endpoints and frequent interim analyses. The Eurosarc Trial of Linsitinib in advanced Ewing Sarcoma (LINES) is used as a motivating example and provides a suitable platform to compare these approaches. The Bayesian approach provides greater design flexibility, but does not provide additional value over the frequentist approaches in a single trial setting when the prior is non-informative. However, Bayesian designs are able to borrow from any previous experience, using prior information to improve efficiency. PMID:27587590

  29. The utility of Bayesian predictive probabilities for interim monitoring of clinical trials

    PubMed Central

    Connor, Jason T.; Ayers, Gregory D; Alvarez, JoAnn

    2014-01-01

    Background Bayesian predictive probabilities can be used for interim monitoring of clinical trials to estimate the probability of observing a statistically significant treatment effect if the trial were to continue to its predefined maximum sample size. Purpose We explore settings in which Bayesian predictive probabilities are advantageous for interim monitoring compared to Bayesian posterior probabilities, p-values, conditional power, or group sequential methods. Results For interim analyses that address prediction hypotheses, such as futility monitoring and efficacy monitoring with lagged outcomes, only predictive probabilities properly account for the amount of data remaining to be observed in a clinical trial and have the flexibility to incorporate additional information via auxiliary variables. Limitations Computational burdens limit the feasibility of predictive probabilities in many clinical trial settings. The specification of prior distributions brings additional challenges for regulatory approval. Conclusions The use of Bayesian predictive probabilities enables the choice of logical interim stopping rules that closely align with the clinical decision making process. PMID:24872363
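
    A minimal sketch of a Bayesian predictive probability at an interim look, assuming a single-arm binary endpoint with a Beta prior and illustrative numbers; the beta-binomial distribution of future responses is combined with the final success criterion:

      import numpy as np
      from scipy import stats

      # Single-arm trial with a binary endpoint (illustrative numbers).
      n_max, n_obs, y_obs = 100, 40, 14        # interim look: 14/40 responses
      p0 = 0.25                                # null response rate
      a, b = 1.0, 1.0                          # Beta prior
      success_threshold = 0.975                # final claim: P(p > p0 | all data) > 0.975

      n_rem = n_max - n_obs
      post_a, post_b = a + y_obs, b + n_obs - y_obs

      # Predictive distribution of the number of future responses is beta-binomial.
      future = np.arange(n_rem + 1)
      pred_pmf = stats.betabinom(n_rem, post_a, post_b).pmf(future)

      # For each possible final dataset, check whether the success criterion is met.
      final_success = np.array([
          stats.beta(post_a + k, post_b + (n_rem - k)).sf(p0) > success_threshold
          for k in future
      ])

      predictive_prob = np.sum(pred_pmf[final_success])
      print(f"predictive probability of trial success = {predictive_prob:.3f}")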

  30. Meta-Analysis of a Continuous Outcome Combining Individual Patient Data and Aggregate Data: A Method Based on Simulated Individual Patient Data

    ERIC Educational Resources Information Center

    Yamaguchi, Yusuke; Sakamoto, Wataru; Goto, Masashi; Staessen, Jan A.; Wang, Jiguang; Gueyffier, Francois; Riley, Richard D.

    2014-01-01

    When some trials provide individual patient data (IPD) and the others provide only aggregate data (AD), meta-analysis methods for combining IPD and AD are required. We propose a method that reconstructs the missing IPD for AD trials by a Bayesian sampling procedure and then applies an IPD meta-analysis model to the mixture of simulated IPD and…

  31. A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.

    PubMed

    Yu, Qingzhao; Zhu, Lin; Zhu, Han

    2017-11-01

    Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potentials to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to more efficiently attribute newly recruited patients to different treatment arms. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence on design by changing the prior distributions. Simulation studies are applied to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when total sample size is fixed, the proposed design can obtain greater power and/or cost smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes. The proposed method can further reduce required sample size. Copyright © 2017 John Wiley & Sons, Ltd.
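
    For a binary outcome and a difference in proportions, the variance-minimizing randomization rate is the Neyman allocation; a small sketch with illustrative interim counts (the paper's design also derives critical values and power, which are omitted here):

      import numpy as np

      def neyman_allocation(p1_hat, p2_hat):
          """Fraction of new patients assigned to arm 1 that minimizes the variance
          of the estimated difference in response rates (Neyman allocation)."""
          s1 = np.sqrt(p1_hat * (1 - p1_hat))
          s2 = np.sqrt(p2_hat * (1 - p2_hat))
          return s1 / (s1 + s2)

      # Illustrative interim data; posterior means under Beta(1, 1) priors.
      succ1, n1 = 18, 40
      succ2, n2 = 30, 40
      p1_hat = (1 + succ1) / (2 + n1)
      p2_hat = (1 + succ2) / (2 + n2)

      r = neyman_allocation(p1_hat, p2_hat)
      print(f"allocate {r:.2f} of the next patients to arm 1 and {1 - r:.2f} to arm 2")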

  32. Bayesian adaptive trials offer advantages in comparative effectiveness trials: an example in status epilepticus.

    PubMed

    Connor, Jason T; Elm, Jordan J; Broglio, Kristine R

    2013-08-01

    We present a novel Bayesian adaptive comparative effectiveness trial comparing three treatments for status epilepticus that uses adaptive randomization with potential early stopping. The trial will enroll 720 unique patients in emergency departments and uses a Bayesian adaptive design. The trial design is compared to a trial without adaptive randomization and produces an efficient trial in which a higher proportion of patients are likely to be randomized to the most effective treatment arm while generally using fewer total patients and offers higher power than an analogous trial with fixed randomization when identifying a superior treatment. When one treatment is superior to the other two, the trial design provides better patient care, higher power, and a lower expected sample size. Copyright © 2013 Elsevier Inc. All rights reserved.
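
    A sketch of response-adaptive randomization for three arms with a binary outcome (illustrative counts, not the trial's algorithm): Beta posteriors give the probability that each arm is best, and allocation probabilities are proportional to a tempered version of those probabilities:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)

      # Interim successes / patients in three treatment arms (illustrative).
      successes = np.array([20, 26, 31])
      patients  = np.array([60, 60, 60])

      # Beta(1, 1) posteriors; Monte Carlo probability that each arm is best.
      draws = np.column_stack([
          stats.beta(1 + s, 1 + n - s).rvs(100_000, random_state=rng)
          for s, n in zip(successes, patients)
      ])
      p_best = np.bincount(np.argmax(draws, axis=1), minlength=3) / draws.shape[0]

      # Allocation probabilities proportional to P(best)^gamma (gamma tempers the skew).
      gamma = 0.5
      alloc = p_best**gamma / np.sum(p_best**gamma)

      for arm in range(3):
          print(f"arm {arm + 1}: P(best) = {p_best[arm]:.2f}, "
                f"next-patient allocation = {alloc[arm]:.2f}")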

  33. Rationale of a novel study design for the BIOFLOW V study, a prospective, randomized multicenter study to assess the safety and efficacy of the Orsiro sirolimus-eluting coronary stent system using a Bayesian approach.

    PubMed

    Doros, Gheorghe; Massaro, Joseph M; Kandzari, David E; Waksman, Ron; Koolen, Jacques J; Cutlip, Donald E; Mauri, Laura

    2017-11-01

    The traditional study design submitted to the Food and Drug Administration to test newer drug-eluting stents (DES) for marketing approval is the prospective randomized controlled trial. However, several DES have extensive clinical data from trials conducted outside the United States that have led to utilization of a novel design using the Bayesian approach. This design was proposed for testing DES with bioresorbable polymer compared with DES most commonly in use today that use durable polymers for drug elution. This prospective, multicenter, randomized, controlled trial is designed to assess the safety and efficacy of the Orsiro bioresorbable polymer sirolimus-eluting stent (BP SES). Up to 1,334 subjects with up to 3 de novo or restenotic coronary artery lesions who qualify for percutaneous coronary intervention with stenting will be randomized 2:1 to the BP SES versus the Xience durable polymer everolimus-eluting stent (DP EES). Data from this trial will be combined with data from 2 similarly designed trials that also randomize subjects to BP SES and DP EES (BIOFLOW II, N=452 and BIOFLOW IV, N=579) by using a Bayesian approach. The primary end point is target lesion failure at 12 months post index procedure, defined as cardiac death, target vessel myocardial infarction, or clinically driven target lesion revascularization, and the primary analysis is a test of noninferiority of the BP SES versus DP EES on the primary end point according to a noninferiority delta of 3.85%. Secondary end points include stent thrombosis and the individual components of target lesion failure. Subjects will be followed for 5 years after randomization. The BIOFLOW V trial offers an opportunity to assess clinical outcomes in patients treated with coronary revascularization using the Orsiro BP SES relative to a commonly used DP EES. The use of a Bayesian analysis combines a large randomized cohort of patients with two smaller contributing randomized trials to augment the efficiency of the comparison. Copyright © 2017 Elsevier Inc. All rights reserved.
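
    A rough sketch of the primary noninferiority comparison using simple pooling of illustrative (not actual) counts across three trials and flat Beta priors; the study's Bayesian analysis is more elaborate, but the check against the 3.85% margin looks like this:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(11)

      # Illustrative 12-month target lesion failure counts (events, n) per trial,
      # as (BP SES arm, DP EES arm); these are NOT the BIOFLOW results.
      trials = [((40, 660), (22, 330)),   # hypothetical pivotal cohort, 2:1 randomization
                ((18, 300), (10, 150)),
                ((23, 385), (13, 190))]

      # Simple pooling across trials (the actual analysis weights the prior studies).
      e_bp = sum(t[0][0] for t in trials); n_bp = sum(t[0][1] for t in trials)
      e_dp = sum(t[1][0] for t in trials); n_dp = sum(t[1][1] for t in trials)

      bp = stats.beta(1 + e_bp, 1 + n_bp - e_bp).rvs(200_000, random_state=rng)
      dp = stats.beta(1 + e_dp, 1 + n_dp - e_dp).rvs(200_000, random_state=rng)

      margin = 0.0385
      p_noninferior = np.mean(bp - dp < margin)
      print(f"P(BP SES noninferior to DP EES, margin 3.85%) = {p_noninferior:.3f}")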

  14. A tutorial on Bayesian bivariate meta-analysis of mixed binary-continuous outcomes with missing treatment effects.

    PubMed

    Gajic-Veljanoski, Olga; Cheung, Angela M; Bayoumi, Ahmed M; Tomlinson, George

    2016-05-30

    Bivariate random-effects meta-analysis (BVMA) is a method of data synthesis that accounts for treatment effects measured on two outcomes. BVMA gives more precise estimates of the population mean and predicted values than two univariate random-effects meta-analyses (UVMAs). BVMA also addresses bias from incomplete reporting of outcomes. A few tutorials have covered technical details of BVMA of categorical or continuous outcomes. Limited guidance is available on how to analyze datasets that include trials with mixed continuous-binary outcomes where treatment effects on one outcome or the other are not reported. Given the advantages of Bayesian BVMA for handling missing outcomes, we present a tutorial for Bayesian BVMA of incompletely reported treatment effects on mixed bivariate outcomes. This step-by-step approach can serve as a model for our intended audience, the methodologist familiar with Bayesian meta-analysis, looking for practical advice on fitting bivariate models. To facilitate application of the proposed methods, we include our WinBUGS code. As an example, we use aggregate-level data from published trials to demonstrate the estimation of the effects of vitamin K and bisphosphonates on two correlated bone outcomes, fracture and bone mineral density. We present datasets where reporting of the pairs of treatment effects on both outcomes was 'partially' complete (i.e., pairs completely reported in some trials), and we outline steps for modeling the incompletely reported data. To assess what is gained from the additional work required by BVMA, we compare the resulting estimates to those from separate UVMAs. We discuss methodological findings and make four recommendations. Copyright © 2015 John Wiley & Sons, Ltd.

  15. Bayesian methodology for the design and interpretation of clinical trials in critical care medicine: a primer for clinicians.

    PubMed

    Kalil, Andre C; Sun, Junfeng

    2014-10-01

    To review Bayesian methodology and its utility to clinical decision making and research in the critical care field. Clinical, epidemiological, and biostatistical studies on Bayesian methods in PubMed and Embase from their inception to December 2013. Bayesian methods have been extensively used by a wide range of scientific fields, including astronomy, engineering, chemistry, genetics, physics, geology, paleontology, climatology, cryptography, linguistics, ecology, and computational sciences. The application of medical knowledge in clinical research is analogous to the application of medical knowledge in clinical practice. Bedside physicians have to make most diagnostic and treatment decisions on critically ill patients every day without clear-cut evidence-based medicine (more subjective than objective evidence). Similarly, clinical researchers have to make most decisions about trial design with limited available data. Bayesian methodology allows both subjective and objective aspects of knowledge to be formally measured and transparently incorporated into the design, execution, and interpretation of clinical trials. In addition, various degrees of knowledge and several hypotheses can be tested at the same time in a single clinical trial without the risk of multiplicity. Notably, the Bayesian technology is naturally suited for the interpretation of clinical trial findings for the individualized care of critically ill patients and for the optimization of public health policies. We propose that the application of the versatile Bayesian methodology in conjunction with the conventional statistical methods is not only ripe for actual use in critical care clinical research but it is also a necessary step to maximize the performance of clinical trials and its translation to the practice of critical care medicine.

  16. Which is best for osteoporotic vertebral compression fractures: balloon kyphoplasty, percutaneous vertebroplasty or non-surgical treatment? A study protocol for a Bayesian network meta-analysis

    PubMed Central

    Kan, Shun-Li; Yuan, Zhi-Fang; Chen, Ling-Xiao; Sun, Jing-Cheng; Ning, Guang-Zhi; Feng, Shi-Qing

    2017-01-01

    Introduction Osteoporotic vertebral compression fractures (OVCFs) commonly cause both acute and chronic back pain, substantial spinal deformity, functional disability and decreased quality of life and increase the risk of future vertebral fractures and mortality. Percutaneous vertebroplasty (PVP), balloon kyphoplasty (BK) and non-surgical treatment (NST) are mostly used for the treatment of OVCFs. However, which treatment is preferred is unknown. The purpose of this study is to comprehensively review the literature and ascertain the relative efficacy and safety of BK, PVP and NST for patients with OVCFs using a Bayesian network meta-analysis. Methods and analysis We will comprehensively search PubMed, EMBASE and the Cochrane Central Register of Controlled Trials, to include randomised controlled trials that compare BK, PVP or NST for treating OVCFs. The risk of bias for individual studies will be assessed according to the Cochrane Handbook. Bayesian network meta-analysis will be performed to compare the efficacy and safety of BK, PVP and NST. The quality of evidence will be evaluated by GRADE. Ethics and dissemination Ethical approval and patient consent are not required since this study is a meta-analysis based on published studies. The results of this network meta-analysis will be submitted to a peer-reviewed journal for publication. PROSPERO registration number CRD42016039452; Pre-results. PMID:28093431

  17. Comparing energy sources for surgical ablation of atrial fibrillation: a Bayesian network meta-analysis of randomized, controlled trials.

    PubMed

    Phan, Kevin; Xie, Ashleigh; Kumar, Narendra; Wong, Sophia; Medi, Caroline; La Meir, Mark; Yan, Tristan D

    2015-08-01

    Simplified maze procedures involving radiofrequency, cryoenergy and microwave energy sources have been increasingly utilized for surgical treatment of atrial fibrillation as an alternative to the traditional cut-and-sew approach. In the absence of direct comparisons, a Bayesian network meta-analysis offers an alternative means of assessing the relative effects of different treatments using indirect evidence. A Bayesian meta-analysis of indirect evidence was performed using 16 published randomized trials identified from 6 databases. Rank probability analysis was used to rank each intervention in terms of its probability of having the best outcome. Sinus rhythm prevalence beyond the 12-month follow-up was similar between the cut-and-sew, microwave and radiofrequency approaches, which were all ranked better than cryoablation (respectively, 39, 36, and 25 vs 1%). The cut-and-sew maze was ranked worst in terms of mortality outcomes compared with microwave, radiofrequency and cryoenergy (2 vs 19, 34, and 24%, respectively). The cut-and-sew maze procedure was associated with significantly lower stroke rates compared with microwave ablation [odds ratio <0.01; 95% confidence interval 0.00, 0.82], and ranked the best in terms of pacemaker requirements compared with microwave, radiofrequency and cryoenergy (81 vs 14, 1 and <0.01%, respectively). Bayesian rank probability analysis shows that the cut-and-sew approach is associated with the best outcomes in terms of sinus rhythm prevalence and stroke outcomes, and remains the gold standard approach for AF treatment. Given the limitations of indirect comparison analysis, these results should be viewed with caution and not over-interpreted. © The Author 2014. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
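
    Rank probabilities of this kind are typically read off posterior samples of each treatment's effect. The sketch below shows the counting step on simulated placeholder draws (the location and scale values are illustrative, not the network meta-analysis posteriors).

        import numpy as np

        rng = np.random.default_rng(2)
        treatments = ["cut-and-sew", "microwave", "radiofrequency", "cryoablation"]
        # hypothetical posterior draws of log-odds of sinus rhythm for each treatment
        draws = rng.normal(loc=[0.8, 0.75, 0.6, 0.1], scale=0.3, size=(20000, 4))

        ranks = (-draws).argsort(axis=1).argsort(axis=1)        # 0 = best in each draw
        rank_prob = np.stack([(ranks == r).mean(axis=0) for r in range(4)])
        for i, name in enumerate(treatments):
            print(name, "P(best) =", round(rank_prob[0, i], 3))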

  18. A Bayesian sequential design using alpha spending function to control type I error.

    PubMed

    Zhu, Han; Yu, Qingzhao

    2017-10-01

    We propose in this article a Bayesian sequential design using alpha spending functions to control the overall type I error in phase III clinical trials. We provide algorithms to calculate critical values, power, and sample sizes for the proposed design. Sensitivity analysis is implemented to check the effects of different prior distributions, and conservative priors are recommended. We compare the power and actual sample sizes of the proposed Bayesian sequential design with different alpha spending functions through simulations. We also compare the power of the proposed method with a frequentist sequential design using the same alpha spending function. Simulations show that, at the same sample size, the proposed method provides larger power than the corresponding frequentist sequential design. It also has larger power than a traditional Bayesian sequential design which sets equal critical values for all interim analyses. Among the alpha spending functions compared, the O'Brien-Fleming alpha spending function has the largest power and is the most conservative, in the sense that, at the same sample size, the null hypothesis is least likely to be rejected at an early stage of the trial. Finally, we show that adding a futility stopping step to the Bayesian sequential design can reduce the overall type I error and the actual sample sizes.
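
    For orientation, the sketch below evaluates the standard Lan-DeMets approximations to the O'Brien-Fleming and Pocock alpha spending functions, which dictate how much of the overall type I error may be "spent" by information fraction t. This is generic group-sequential machinery, not code from the paper; it simply shows why the O'Brien-Fleming form spends almost no alpha early on.

        import numpy as np
        from scipy.stats import norm

        def obrien_fleming_spend(t, alpha=0.05):
            # Lan-DeMets O'Brien-Fleming-type spending function (two-sided alpha)
            return 2.0 * (1.0 - norm.cdf(norm.ppf(1.0 - alpha / 2.0) / np.sqrt(t)))

        def pocock_spend(t, alpha=0.05):
            # Lan-DeMets Pocock-type spending function
            return alpha * np.log(1.0 + (np.e - 1.0) * t)

        info_fractions = np.array([0.25, 0.5, 0.75, 1.0])
        print("OBF cumulative alpha:   ", np.round(obrien_fleming_spend(info_fractions), 4))
        print("Pocock cumulative alpha:", np.round(pocock_spend(info_fractions), 4))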

  19. Toward evidence-based medical statistics: a Bayesian analysis of double-blind placebo-controlled antidepressant trials in the treatment of anxiety disorders.

    PubMed

    Monden, Rei; de Vos, Stijn; Morey, Richard; Wagenmakers, Eric-Jan; de Jonge, Peter; Roest, Annelieke M

    2016-12-01

    The Food and Drug Administration (FDA) uses a p < 0.05 null-hypothesis significance testing framework to evaluate "substantial evidence" for drug efficacy. This framework only allows dichotomous conclusions and does not quantify the strength of evidence supporting efficacy. The efficacy of FDA-approved antidepressants for the treatment of anxiety disorders was re-evaluated in a Bayesian framework that quantifies the strength of the evidence. Data from 58 double-blind placebo-controlled trials were retrieved from the FDA for the second-generation antidepressants for the treatment of anxiety disorders. Bayes factors (BFs) were calculated for all treatment arms compared to placebo and were compared with the corresponding p-values and the FDA conclusion categories. BFs ranged from 0.07 to 131,400, indicating a range from no support for efficacy to strong evidence of efficacy. Results also indicate varying strength of evidence among trials with p < 0.05. In sum, there were large differences in BFs across trials. Among trials providing "substantial evidence" according to the FDA, only 27 out of 59 dose groups obtained strong support for efficacy according to the typically used cutoff of BF ≥ 20. The Bayesian framework can provide valuable information on the strength of the evidence for drug efficacy. Copyright © 2016 John Wiley & Sons, Ltd.
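
    The exact Bayes factor specification used in the paper is not reproduced here. As a hedged illustration of the general idea, the sketch below applies the common BIC approximation to the Bayes factor for a two-sample drug-versus-placebo comparison (roughly corresponding to a unit-information prior), driven by the t statistic; the simulated symptom-change data are entirely made up.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        drug    = rng.normal(loc=-12.0, scale=8.0, size=150)   # hypothetical symptom change
        placebo = rng.normal(loc=-9.0,  scale=8.0, size=150)

        t, _ = stats.ttest_ind(drug, placebo)
        n = drug.size + placebo.size
        # BIC approximation: BF10 ~ (1 + t^2/(n-2))^(n/2) / sqrt(n)
        bf10 = (1.0 + t**2 / (n - 2)) ** (n / 2.0) / np.sqrt(n)
        print(f"t = {t:.2f}, approximate BF10 = {bf10:.1f}")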

  20. Inhaled Cannabis for Chronic Neuropathic Pain: A Meta-analysis of Individual Patient Data.

    PubMed

    Andreae, Michael H; Carter, George M; Shaparin, Naum; Suslov, Kathryn; Ellis, Ronald J; Ware, Mark A; Abrams, Donald I; Prasad, Hannah; Wilsey, Barth; Indyk, Debbie; Johnson, Matthew; Sacks, Henry S

    2015-12-01

    Chronic neuropathic pain, the most frequent condition affecting the peripheral nervous system, remains underdiagnosed and difficult to treat. Inhaled cannabis may alleviate chronic neuropathic pain. Our objective was to synthesize the evidence on the use of inhaled cannabis for chronic neuropathic pain. We performed a systematic review and a meta-analysis of individual patient data. We registered our protocol with PROSPERO CRD42011001182. We searched in Cochrane Central, PubMed, EMBASE, and AMED. We considered all randomized controlled trials investigating chronic painful neuropathy and comparing inhaled cannabis with placebo. We pooled treatment effects following a hierarchical random-effects Bayesian responder model for the population-averaged subject-specific effect. Our evidence synthesis of individual patient data from 178 participants with 405 observed responses in 5 randomized controlled trials following patients for days to weeks provides evidence that inhaled cannabis results in short-term reductions in chronic neuropathic pain for 1 in every 5 to 6 patients treated (number needed to treat = 5.6 with a Bayesian 95% credible interval ranging between 3.4 and 14). Our inferences were insensitive to model assumptions, priors, and parameter choices. We caution that the small number of studies and participants, the short follow-up, shortcomings in allocation concealment, and considerable attrition limit the conclusions that can be drawn from the review. The Bayes factor is 332, corresponding to a posterior probability of effect of 99.7%. This novel Bayesian meta-analysis of individual patient data from 5 randomized trials suggests that inhaled cannabis may provide short-term relief for 1 in 5 to 6 patients with neuropathic pain. Pragmatic trials are needed to evaluate the long-term benefits and risks of this treatment. Copyright © 2015 American Pain Society. Published by Elsevier Inc. All rights reserved.
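
    As a minimal illustration of how a responder-rate difference maps to a number needed to treat with a Bayesian credible interval, the sketch below uses simple Beta posteriors on made-up counts; it is not the hierarchical responder model used in the meta-analysis.

        import numpy as np

        rng = np.random.default_rng(4)
        resp_c, n_c = 48, 180     # hypothetical responders / patients on inhaled cannabis
        resp_p, n_p = 24, 175     # hypothetical responders / patients on placebo

        p_c = rng.beta(1 + resp_c, 1 + n_c - resp_c, 100000)
        p_p = rng.beta(1 + resp_p, 1 + n_p - resp_p, 100000)
        rd = p_c - p_p                              # posterior of the risk difference

        lo, hi = np.percentile(rd, [2.5, 97.5])
        print("posterior median NNT:", round(1 / np.median(rd), 1))
        print("95% credible interval for NNT:", round(1 / hi, 1), "to", round(1 / lo, 1))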

  1. Diagnostic accuracy of a bayesian latent group analysis for the detection of malingering-related poor effort.

    PubMed

    Ortega, Alonso; Labrenz, Stephan; Markowitsch, Hans J; Piefke, Martina

    2013-01-01

    In the last decade, different statistical techniques have been introduced to improve assessment of malingering-related poor effort. In this context, we have recently shown preliminary evidence that a Bayesian latent group model may help to optimize classification accuracy using a simulation research design. In the present study, we conducted two analyses. Firstly, we evaluated how accurately this Bayesian approach can distinguish between participants answering in an honest way (honest response group) and participants feigning cognitive impairment (experimental malingering group). Secondly, we tested the accuracy of our model in the differentiation between patients who had real cognitive deficits (cognitively impaired group) and participants who belonged to the experimental malingering group. All Bayesian analyses were conducted using the raw scores of a visual recognition forced-choice task (2AFC), the Test of Memory Malingering (TOMM, Trial 2), and the Word Memory Test (WMT, primary effort subtests). The first analysis showed 100% accuracy for the Bayesian model in distinguishing participants of both groups with all effort measures. The second analysis showed outstanding overall accuracy of the Bayesian model when estimates were obtained from the 2AFC and the TOMM raw scores. Diagnostic accuracy of the Bayesian model diminished when using the WMT total raw scores. Despite this, overall diagnostic accuracy can still be considered excellent. The most plausible explanation for this decrement is the low performance in verbal recognition and fluency tasks of some patients of the cognitively impaired group. Additionally, the Bayesian model provides individual estimates, p(z_i | D), of examinees' effort levels. In conclusion, both high classification accuracy levels and Bayesian individual estimates of effort may be very useful for clinicians when assessing for effort in medico-legal settings.
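
    A greatly simplified sketch of the underlying idea (not the authors' latent group model): a two-component binomial mixture for a forced-choice test, giving the posterior probability that an examinee's score comes from a "poor effort" component rather than an "honest responding" component. The accuracy rates, prior, and score below are all assumptions chosen for illustration.

        import numpy as np
        from scipy.stats import binom

        n_items = 50
        score = 27                         # hypothetical number of correct responses

        p_honest, p_feign = 0.95, 0.55     # assumed accuracy under honest vs poor effort
        prior_feign = 0.5                  # neutral prior over group membership

        like_honest = binom.pmf(score, n_items, p_honest)
        like_feign  = binom.pmf(score, n_items, p_feign)
        post_feign = (prior_feign * like_feign) / (
            prior_feign * like_feign + (1 - prior_feign) * like_honest)
        print(f"P(poor effort | score) = {post_feign:.3f}")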

  2. The impact of using informative priors in a Bayesian cost-effectiveness analysis: an application of endovascular versus open surgical repair for abdominal aortic aneurysms in high-risk patients.

    PubMed

    McCarron, C Elizabeth; Pullenayegum, Eleanor M; Thabane, Lehana; Goeree, Ron; Tarride, Jean-Eric

    2013-04-01

    Bayesian methods have been proposed as a way of synthesizing all available evidence to inform decision making. However, few practical applications of the use of Bayesian methods for combining patient-level data (i.e., trial) with additional evidence (e.g., literature) exist in the cost-effectiveness literature. The objective of this study was to compare a Bayesian cost-effectiveness analysis using informative priors to a standard non-Bayesian nonparametric method to assess the impact of incorporating additional information into a cost-effectiveness analysis. Patient-level data from a previously published nonrandomized study were analyzed using traditional nonparametric bootstrap techniques and bivariate normal Bayesian models with vague and informative priors. Two different types of informative priors were considered to reflect different valuations of the additional evidence relative to the patient-level data (i.e., "face value" and "skeptical"). The impact of using different distributions and valuations was assessed in a sensitivity analysis. Models were compared in terms of incremental net monetary benefit (INMB) and cost-effectiveness acceptability frontiers (CEAFs). The bootstrapping and Bayesian analyses using vague priors provided similar results. The most pronounced impact of incorporating the informative priors was the increase in estimated life years in the control arm relative to what was observed in the patient-level data alone. Consequently, the incremental difference in life years originally observed in the patient-level data was reduced, and the INMB and CEAF changed accordingly. The results of this study demonstrate the potential impact and importance of incorporating additional information into an analysis of patient-level data, suggesting this could alter decisions as to whether a treatment should be adopted and whether more information should be acquired.

  3. A Bayesian pick-the-winner design in a randomized phase II clinical trial.

    PubMed

    Chen, Dung-Tsa; Huang, Po-Yu; Lin, Hui-Yi; Chiappori, Alberto A; Gabrilovich, Dmitry I; Haura, Eric B; Antonia, Scott J; Gray, Jhanelle E

    2017-10-24

    Many phase II clinical trials evaluate unique experimental drugs/combinations through multi-arm design to expedite the screening process (early termination of ineffective drugs) and to identify the most effective drug (pick the winner) to warrant a phase III trial. Various statistical approaches have been developed for the pick-the-winner design but have been criticized for lack of objective comparison among the drug agents. We developed a Bayesian pick-the-winner design by integrating a Bayesian posterior probability with Simon two-stage design in a randomized two-arm clinical trial. The Bayesian posterior probability, as the rule to pick the winner, is defined as probability of the response rate in one arm higher than in the other arm. The posterior probability aims to determine the winner when both arms pass the second stage of the Simon two-stage design. When both arms are competitive (i.e., both passing the second stage), the Bayesian posterior probability performs better to correctly identify the winner compared with the Fisher exact test in the simulation study. In comparison to a standard two-arm randomized design, the Bayesian pick-the-winner design has a higher power to determine a clear winner. In application to two studies, the approach is able to perform statistical comparison of two treatment arms and provides a winner probability (Bayesian posterior probability) to statistically justify the winning arm. We developed an integrated design that utilizes Bayesian posterior probability, Simon two-stage design, and randomization into a unique setting. It gives objective comparisons between the arms to determine the winner.
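
    The core quantity in this design is the posterior probability that the response rate in one arm exceeds that in the other, computed once both arms pass the Simon two-stage criteria. The sketch below shows that calculation with Beta posteriors; the counts and the Jeffreys-type prior are illustrative assumptions, not the trial's data.

        import numpy as np

        rng = np.random.default_rng(5)
        resp_a, n_a = 14, 36      # hypothetical responders / patients in arm A
        resp_b, n_b = 10, 36      # hypothetical responders / patients in arm B

        pa = rng.beta(0.5 + resp_a, 0.5 + n_a - resp_a, 100000)   # Jeffreys-type prior
        pb = rng.beta(0.5 + resp_b, 0.5 + n_b - resp_b, 100000)
        winner_prob = np.mean(pa > pb)
        print(f"P(arm A response rate > arm B) = {winner_prob:.3f}")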

  4. Bayesian Computation Emerges in Generic Cortical Microcircuits through Spike-Timing-Dependent Plasticity

    PubMed Central

    Nessler, Bernhard; Pfeiffer, Michael; Buesing, Lars; Maass, Wolfgang

    2013-01-01

    The principles by which networks of neurons compute, and how spike-timing dependent plasticity (STDP) of synaptic weights generates and maintains their computational function, are unknown. Preceding work has shown that soft winner-take-all (WTA) circuits, where pyramidal neurons inhibit each other via interneurons, are a common motif of cortical microcircuits. We show through theoretical analysis and computer simulations that Bayesian computation is induced in these network motifs through STDP in combination with activity-dependent changes in the excitability of neurons. The fundamental components of this emergent Bayesian computation are priors that result from adaptation of neuronal excitability and implicit generative models for hidden causes that are created in the synaptic weights through STDP. In fact, a surprising result is that STDP is able to approximate a powerful principle for fitting such implicit generative models to high-dimensional spike inputs: Expectation Maximization. Our results suggest that the experimentally observed spontaneous activity and trial-to-trial variability of cortical neurons are essential features of their information processing capability, since their functional role is to represent probability distributions rather than static neural codes. Furthermore it suggests networks of Bayesian computation modules as a new model for distributed information processing in the cortex. PMID:23633941

  5. Bayesian approach for assessing non-inferiority in a three-arm trial with pre-specified margin.

    PubMed

    Ghosh, Samiran; Ghosh, Santu; Tiwari, Ram C

    2016-02-28

    Non-inferiority trials are becoming increasingly popular for comparative effectiveness research. However, inclusion of the placebo arm, whenever possible, gives rise to a three-arm trial, which rests on less burdensome assumptions than a standard two-arm non-inferiority trial. Most past developments for the three-arm trial define the margin as a pre-specified fraction of the unknown effect size of the reference drug, that is, without directly specifying a fixed non-inferiority margin. However, some recent developments take a more direct approach with a pre-specified fixed margin, albeit in the frequentist setup. The Bayesian paradigm provides a natural path to integrate historical and current trials' information via sequential learning. In this paper, we propose a Bayesian approach for simultaneous testing of non-inferiority and assay sensitivity in a three-arm trial with normal responses. For the experimental arm, in the absence of historical information, non-informative priors are assumed under two situations, namely when (i) the variance is known and (ii) the variance is unknown. A Bayesian decision criterion is derived and compared with the frequentist method using simulation studies. Finally, several published clinical trial examples are reanalyzed to demonstrate the benefit of the proposed procedure. Copyright © 2015 John Wiley & Sons, Ltd.

  6. A Bayesian paradigm for decision-making in proof-of-concept trials.

    PubMed

    Pulkstenis, Erik; Patra, Kaushik; Zhang, Jianliang

    2017-01-01

    Decision-making is central to every phase of drug development, and especially at the proof-of-concept stage, where risk and evidence must be weighed carefully, often in the presence of significant uncertainty. The decision whether to proceed to large, expensive Phase 3 trials has significant implications for patients and sponsors alike. Recent experience has shown that Phase 3 failure rates remain high. We present a flexible Bayesian quantitative decision-making paradigm that evaluates evidence relative to achieving a multilevel target product profile. A framework for operating characteristics is provided that allows the drug developer to design a proof-of-concept trial in light of its ability to support decision-making rather than merely achieve statistical significance. Operating characteristics are shown to be superior to traditional p-value-based methods. In addition, discussion related to sample size considerations, application to interim futility analysis and incorporation of prior historical information is provided.
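
    A hedged sketch of a quantitative Go/No-Go rule against a two-level target product profile follows; this is a common construction, not necessarily the authors' exact criteria. With an approximately normal posterior for the treatment effect, "Go" here requires reasonable posterior support for the target value and strong support for the minimally acceptable value. All numbers and thresholds are illustrative assumptions.

        import numpy as np
        from scipy.stats import norm

        post_mean, post_sd = 0.35, 0.15       # hypothetical posterior for the effect size
        min_value, target_value = 0.20, 0.40  # hypothetical two-level target product profile

        p_exceeds_min    = 1 - norm.cdf(min_value,    post_mean, post_sd)
        p_exceeds_target = 1 - norm.cdf(target_value, post_mean, post_sd)

        go    = (p_exceeds_target > 0.30) and (p_exceeds_min > 0.80)
        no_go = (p_exceeds_min < 0.20)
        print(round(p_exceeds_min, 3), round(p_exceeds_target, 3),
              "GO" if go else ("NO-GO" if no_go else "CONSIDER"))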

  7. A Bayesian comparative effectiveness trial in action: developing a platform for multisite study adaptive randomization.

    PubMed

    Brown, Alexandra R; Gajewski, Byron J; Aaronson, Lauren S; Mudaranthakam, Dinesh Pal; Hunt, Suzanne L; Berry, Scott M; Quintana, Melanie; Pasnoor, Mamatha; Dimachkie, Mazen M; Jawdat, Omar; Herbelin, Laura; Barohn, Richard J

    2016-08-31

    In the last few decades, the number of trials using Bayesian methods has grown rapidly. Publications prior to 1990 included only three clinical trials that used Bayesian methods, but that number quickly jumped to 19 in the 1990s and to 99 from 2000 to 2012. While this literature provides many examples of Bayesian Adaptive Designs (BAD), none of the papers that are available walks the reader through the detailed process of conducting a BAD. This paper fills that gap by describing the BAD process used for one comparative effectiveness trial (Patient Assisted Intervention for Neuropathy: Comparison of Treatment in Real Life Situations) that can be generalized for use by others. A BAD was chosen with efficiency in mind. Response-adaptive randomization allows the potential for substantially smaller sample sizes, and can provide faster conclusions about which treatment or treatments are most effective. An Internet-based electronic data capture tool, which features a randomization module, facilitated data capture across study sites and an in-house computation software program was developed to implement the response-adaptive randomization. A process for adapting randomization with minimal interruption to study sites was developed. A new randomization table can be generated quickly and can be seamlessly integrated in the data capture tool with minimal interruption to study sites. This manuscript is the first to detail the technical process used to evaluate a multisite comparative effectiveness trial using adaptive randomization. An important opportunity for the application of Bayesian trials is in comparative effectiveness trials. The specific case study presented in this paper can be used as a model for conducting future clinical trials using a combination of statistical software and a web-based application. ClinicalTrials.gov Identifier: NCT02260388 , registered on 6 October 2014.

  8. Bayesian adaptive phase II screening design for combination trials.

    PubMed

    Cai, Chunyan; Yuan, Ying; Johnson, Valen E

    2013-01-01

    Trials of combination therapies for the treatment of cancer are playing an increasingly important role in the battle against this disease. To more efficiently handle the large number of combination therapies that must be tested, we propose a novel Bayesian phase II adaptive screening design to simultaneously select among possible treatment combinations involving multiple agents. Our design is based on formulating the selection procedure as a Bayesian hypothesis testing problem in which the superiority of each treatment combination is equated to a single hypothesis. During the trial conduct, we use the current values of the posterior probabilities of all hypotheses to adaptively allocate patients to treatment combinations. Simulation studies show that the proposed design substantially outperforms the conventional multiarm balanced factorial trial design. The proposed design yields a significantly higher probability for selecting the best treatment while allocating substantially more patients to efficacious treatments. The proposed design is most appropriate for the trials combining multiple agents and screening out the efficacious combination to be further investigated. The proposed Bayesian adaptive phase II screening design substantially outperformed the conventional complete factorial design. Our design allocates more patients to better treatments while providing higher power to identify the best treatment at the end of the trial.

  9. Clarifying the debate on population-based screening for breast cancer with mammography

    PubMed Central

    Chen, Tony Hsiu-Hsi; Yen, Amy Ming-Fang; Fann, Jean Ching-Yuan; Gordon, Paula; Chen, Sam Li-Sheng; Chiu, Sherry Yueh-Hsia; Hsu, Chen-Yang; Chang, King-Jen; Lee, Won-Chul; Yeoh, Khay Guan; Saito, Hiroshi; Promthet, Supannee; Hamashima, Chisato; Maidin, Alimin; Robinson, Fredie; Zhao, Li-Zhong

    2017-01-01

    Abstract Background: The recent controversy about using mammography to screen for breast cancer, based on randomized controlled trials conducted over 3 decades in Western countries, has not only eclipsed the paradigm of evidence-based medicine, but also leaves health decision-makers in countries where breast cancer screening is still being considered facing a dilemma over whether to adopt or abandon such a well-established screening modality. Methods: We reanalyzed the empirical data from the Health Insurance Plan trial in 1963 to the UK age trial in 1991 and their follow-up data published until 2015. We first performed Bayesian conjugated meta-analyses on the heterogeneity of attendance rate, sensitivity, and over-detection and their impacts on advanced-stage breast cancer and death from breast cancer across trials, using Bayesian Poisson fixed- and random-effects regression models. A Bayesian meta-analysis of a causal model was then developed to assess a cascade of causal relationships regarding the impact of both attendance and sensitivity on the 2 main outcomes. Results: The causes of heterogeneity responsible for the disparities across the trials were clearly manifested in 3 components. The attendance rate ranged from 61.3% to 90.4%. The sensitivity estimates show substantial variation from 57.26% to 87.97% but improved with time from 64% in 1963 to 82% in 1980 when Bayesian conjugated meta-analysis was conducted in chronological order. The percentage of over-detection shows a wide range from 0% to 28%, adjusting for long lead-time. The impacts of the attendance rate and sensitivity on the 2 main outcomes were statistically significant. Causal inference made by linking these causal relationships, with emphasis on the heterogeneity of the attendance rate and sensitivity, accounted for the variation in the reduction of advanced breast cancer (none to 30%) and of mortality (none to 31%). We estimated a 33% (95% CI: 24-42%) and 13% (95% CI: 6-20%) breast cancer mortality reduction for the best scenario (90% attendance rate and 95% sensitivity) and the poor scenario (30% attendance rate and 55% sensitivity), respectively. Conclusion: Elucidating the scenarios from high to low performance and learning from the experiences of these trials helps screening policy-makers consider how to avoid the errors made in ineffective studies and emulate the effective ones in order to save women's lives. PMID:28099330

  10. Adaptive designs in clinical trials.

    PubMed

    Bowalekar, Suresh

    2011-01-01

    In addition to the expensive and lengthy process of developing a new medicine, the attrition rate in clinical research was on the rise, resulting in stagnation in the development of new compounds. As a consequence, the US Food and Drug Administration released a critical path initiative document in 2004, highlighting the need for developing innovative trial designs. One of the innovations suggested was the use of adaptive designs for clinical trials. Thus, post critical path initiative, there is a growing interest in using adaptive designs for the development of pharmaceutical products. Adaptive designs are expected to have great potential to reduce the number of patients and the duration of a trial, and to involve relatively less exposure to the new drug. Adaptive designs are not new in the sense that the task of interim analysis (IA)/review of the accumulated data used in adaptive designs existed in the past too. However, such reviews/analyses of accumulated data were not necessarily planned at the stage of planning the clinical trial, and the methods used were not necessarily compliant with the clinical trial process. The Bayesian approach commonly used in adaptive designs was developed by Thomas Bayes in the 18th century, about a hundred years prior to the development of modern statistical methods by the father of modern statistics, Sir Ronald A. Fisher, but the complexity involved in the Bayesian approach prevented its use in real-life practice. The advances in the field of computer and information technology over the last three to four decades have changed the scenario, and Bayesian techniques are being used in adaptive designs in addition to other sequential methods used in IA. This paper attempts to describe the various adaptive designs in clinical trials and the views of stakeholders about the feasibility of using them, without going into mathematical complexities.

  11. A Bayesian Hybrid Adaptive Randomisation Design for Clinical Trials with Survival Outcomes.

    PubMed

    Moatti, M; Chevret, S; Zohar, S; Rosenberger, W F

    2016-01-01

    Response-adaptive randomisation designs have been proposed to improve the efficiency of phase III randomised clinical trials and improve the outcomes of the clinical trial population. In the setting of failure time outcomes, Zhang and Rosenberger (2007) developed a response-adaptive randomisation approach that targets an optimal allocation, based on a fixed sample size. The aim of this research is to propose a response-adaptive randomisation procedure for survival trials with an interim monitoring plan, based on the following optimal criterion: for fixed variance of the estimated log hazard ratio, what allocation minimizes the expected hazard of failure? We demonstrate the utility of the design by redesigning a clinical trial on multiple myeloma. To handle continuous monitoring of data, we propose a Bayesian response-adaptive randomisation procedure, where the log hazard ratio is the effect measure of interest. Combining the prior with the normal likelihood, the mean posterior estimate of the log hazard ratio allows derivation of the optimal target allocation. We perform a simulation study to assess and compare the performance of this proposed Bayesian hybrid adaptive design to those of fixed, sequential or adaptive - either frequentist or fully Bayesian - designs. Non-informative normal priors for the log hazard ratio were used, as well as mixtures of enthusiastic and skeptical priors. Stopping rules based on the posterior distribution of the log hazard ratio were computed. The method is then illustrated by redesigning a phase III randomised clinical trial of chemotherapy in patients with multiple myeloma, with a mixture of normal priors elicited from experts. As expected, there was a reduction in the proportion of observed deaths in the adaptive vs. non-adaptive designs; this reduction was maximized using a Bayes mixture prior, with no clear-cut improvement from using a fully Bayesian procedure. The use of stopping rules allows a slight decrease in the observed proportion of deaths under the alternative hypothesis compared with the adaptive designs with no stopping rules. Such Bayesian hybrid adaptive survival trials may be promising alternatives to traditional designs, reducing the duration of survival trials, as well as optimizing the ethical concerns for patients enrolled in the trial.
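
    The conjugate step mentioned in the abstract (normal prior on the log hazard ratio combined with an approximately normal interim likelihood) is sketched below, together with an illustrative posterior-probability stopping check. The prior, interim estimate, and threshold are placeholders, not values from the myeloma trial.

        import numpy as np
        from scipy.stats import norm

        prior_mean, prior_var = 0.0, 1.0      # weakly informative prior on the log HR
        loghr_hat, se = -0.30, 0.18           # hypothetical interim estimate and its SE

        post_var  = 1.0 / (1.0 / prior_var + 1.0 / se**2)
        post_mean = post_var * (prior_mean / prior_var + loghr_hat / se**2)

        p_benefit = norm.cdf(0.0, post_mean, np.sqrt(post_var))   # P(log HR < 0)
        stop_for_efficacy = p_benefit > 0.99                      # illustrative rule
        print(round(post_mean, 3), round(np.sqrt(post_var), 3),
              round(p_benefit, 3), stop_for_efficacy)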

  12. Development of a Bayesian response-adaptive trial design for the Dexamethasone for Excessive Menstruation study.

    PubMed

    Holm Hansen, Christian; Warner, Pamela; Parker, Richard A; Walker, Brian R; Critchley, Hilary Od; Weir, Christopher J

    2017-12-01

    It is often unclear what specific adaptive trial design features lead to an efficient design which is also feasible to implement. This article describes the preparatory simulation study for a Bayesian response-adaptive dose-finding trial design. Dexamethasone for Excessive Menstruation aims to assess the efficacy of Dexamethasone in reducing excessive menstrual bleeding and to determine the best dose for further study. To maximise learning about the dose response, patients receive placebo or an active dose with randomisation probabilities adapting based on evidence from patients already recruited. The dose-response relationship is estimated using a flexible Bayesian Normal Dynamic Linear Model. Several competing design options were considered including: number of doses, proportion assigned to placebo, adaptation criterion, and number and timing of adaptations. We performed a fractional factorial study using SAS software to simulate virtual trial data for candidate adaptive designs under a variety of scenarios and to invoke WinBUGS for Bayesian model estimation. We analysed the simulated trial results using Normal linear models to estimate the effects of each design feature on empirical type I error and statistical power. Our readily-implemented approach using widely available statistical software identified a final design which performed robustly across a range of potential trial scenarios.

  13. Bayesian adaptive phase II screening design for combination trials

    PubMed Central

    Cai, Chunyan; Yuan, Ying; Johnson, Valen E

    2013-01-01

    Background Trials of combination therapies for the treatment of cancer are playing an increasingly important role in the battle against this disease. To more efficiently handle the large number of combination therapies that must be tested, we propose a novel Bayesian phase II adaptive screening design to simultaneously select among possible treatment combinations involving multiple agents. Methods Our design is based on formulating the selection procedure as a Bayesian hypothesis testing problem in which the superiority of each treatment combination is equated to a single hypothesis. During the trial conduct, we use the current values of the posterior probabilities of all hypotheses to adaptively allocate patients to treatment combinations. Results Simulation studies show that the proposed design substantially outperforms the conventional multiarm balanced factorial trial design. The proposed design yields a significantly higher probability for selecting the best treatment while allocating substantially more patients to efficacious treatments. Limitations The proposed design is most appropriate for the trials combining multiple agents and screening out the efficacious combination to be further investigated. Conclusions The proposed Bayesian adaptive phase II screening design substantially outperformed the conventional complete factorial design. Our design allocates more patients to better treatments while providing higher power to identify the best treatment at the end of the trial. PMID:23359875

  14. Bayesian imperfect information analysis for clinical recurrent data

    PubMed Central

    Chang, Chih-Kuang; Chang, Chi-Chang

    2015-01-01

    In medical research, clinical practice must often be undertaken with imperfect information from limited resources. This study applied Bayesian imperfect-information value analysis to a clinical decision-making problem for recurrent events, producing likelihood functions and posterior distributions for realistic situations. Three kinds of failure models are considered, and our methods are illustrated with an analysis of imperfect information from a trial of immunotherapy in the treatment of chronic granulomatous disease. In addition, we present evidence toward a better understanding of the differing behaviors along with concomitant variables. Based on the results of simulations, the imperfect-information value of the concomitant variables was evaluated, and different realistic situations were compared to see which could yield more accurate results for medical decision-making. PMID:25565853

  15. Use of Bayesian Decision Analysis to Minimize Harm in Patient-Centered Randomized Clinical Trials in Oncology.

    PubMed

    Montazerhodjat, Vahid; Chaudhuri, Shomesh E; Sargent, Daniel J; Lo, Andrew W

    2017-09-14

    Randomized clinical trials (RCTs) currently apply the same statistical threshold of alpha = 2.5% for controlling for false-positive results or type 1 error, regardless of the burden of disease or patient preferences. Is there an objective and systematic framework for designing RCTs that incorporates these considerations on a case-by-case basis? To apply Bayesian decision analysis (BDA) to cancer therapeutics to choose an alpha and sample size that minimize the potential harm to current and future patients under both null and alternative hypotheses. We used the National Cancer Institute (NCI) Surveillance, Epidemiology, and End Results (SEER) database and data from the 10 clinical trials of the Alliance for Clinical Trials in Oncology. The NCI SEER database was used because it is the most comprehensive cancer database in the United States. The Alliance trial data was used owing to the quality and breadth of data, and because of the expertise in these trials of one of us (D.J.S.). The NCI SEER and Alliance data have already been thoroughly vetted. Computations were replicated independently by 2 coauthors and reviewed by all coauthors. Our prior hypothesis was that an alpha of 2.5% would not minimize the overall expected harm to current and future patients for the most deadly cancers, and that a less conservative alpha may be necessary. Our primary study outcomes involve measuring the potential harm to patients under both null and alternative hypotheses using NCI and Alliance data, and then computing BDA-optimal type 1 error rates and sample sizes for oncology RCTs. We computed BDA-optimal parameters for the 23 most common cancer sites using NCI data, and for the 10 Alliance clinical trials. For RCTs involving therapies for cancers with short survival times, no existing treatments, and low prevalence, the BDA-optimal type 1 error rates were much higher than the traditional 2.5%. For cancers with longer survival times, existing treatments, and high prevalence, the corresponding BDA-optimal error rates were much lower, in some cases even lower than 2.5%. Bayesian decision analysis is a systematic, objective, transparent, and repeatable process for deciding the outcomes of RCTs that explicitly incorporates burden of disease and patient preferences.
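
    A very simplified sketch of the underlying idea, not the authors' full BDA framework: choose the significance level that minimizes an expected harm that weighs a false positive against a false negative, where the false-negative cost is larger for more severe diseases. Power below is for a two-sided z-test at a fixed sample size; all costs, effect sizes and sample sizes are illustrative assumptions.

        import numpy as np
        from scipy.stats import norm

        def power(alpha, effect=0.3, n_per_arm=200):
            # two-sided z-test power for a standardized effect size
            z = norm.ppf(1 - alpha / 2)
            ncp = effect * np.sqrt(n_per_arm / 2.0)
            return 1 - norm.cdf(z - ncp) + norm.cdf(-z - ncp)

        def expected_harm(alpha, p_null=0.5, harm_fp=1.0, harm_fn=3.0):
            # harm_fn > harm_fp encodes a deadly disease with no good alternative
            return p_null * alpha * harm_fp + (1 - p_null) * (1 - power(alpha)) * harm_fn

        alphas = np.linspace(0.001, 0.2, 400)
        harms = np.array([expected_harm(a) for a in alphas])
        print("harm-minimizing alpha:", round(alphas[harms.argmin()], 3))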

  16. Use of Bayesian Decision Analysis to Minimize Harm in Patient-Centered Randomized Clinical Trials in Oncology

    PubMed Central

    Montazerhodjat, Vahid; Chaudhuri, Shomesh E.; Sargent, Daniel J.

    2017-01-01

    Importance Randomized clinical trials (RCTs) currently apply the same statistical threshold of alpha = 2.5% for controlling for false-positive results or type 1 error, regardless of the burden of disease or patient preferences. Is there an objective and systematic framework for designing RCTs that incorporates these considerations on a case-by-case basis? Objective To apply Bayesian decision analysis (BDA) to cancer therapeutics to choose an alpha and sample size that minimize the potential harm to current and future patients under both null and alternative hypotheses. Data Sources We used the National Cancer Institute (NCI) Surveillance, Epidemiology, and End Results (SEER) database and data from the 10 clinical trials of the Alliance for Clinical Trials in Oncology. Study Selection The NCI SEER database was used because it is the most comprehensive cancer database in the United States. The Alliance trial data was used owing to the quality and breadth of data, and because of the expertise in these trials of one of us (D.J.S.). Data Extraction and Synthesis The NCI SEER and Alliance data have already been thoroughly vetted. Computations were replicated independently by 2 coauthors and reviewed by all coauthors. Main Outcomes and Measures Our prior hypothesis was that an alpha of 2.5% would not minimize the overall expected harm to current and future patients for the most deadly cancers, and that a less conservative alpha may be necessary. Our primary study outcomes involve measuring the potential harm to patients under both null and alternative hypotheses using NCI and Alliance data, and then computing BDA-optimal type 1 error rates and sample sizes for oncology RCTs. Results We computed BDA-optimal parameters for the 23 most common cancer sites using NCI data, and for the 10 Alliance clinical trials. For RCTs involving therapies for cancers with short survival times, no existing treatments, and low prevalence, the BDA-optimal type 1 error rates were much higher than the traditional 2.5%. For cancers with longer survival times, existing treatments, and high prevalence, the corresponding BDA-optimal error rates were much lower, in some cases even lower than 2.5%. Conclusions and Relevance Bayesian decision analysis is a systematic, objective, transparent, and repeatable process for deciding the outcomes of RCTs that explicitly incorporates burden of disease and patient preferences. PMID:28418507

  17. Systemic antibiotics in the treatment of aggressive periodontitis. A systematic review and a Bayesian Network meta-analysis.

    PubMed

    Rabelo, Cleverton Correa; Feres, Magda; Gonçalves, Cristiane; Figueiredo, Luciene C; Faveri, Marcelo; Tu, Yu-Kang; Chambrone, Leandro

    2015-07-01

    The aim of this study was to assess the effect of systemic antibiotic therapy on the treatment of aggressive periodontitis (AgP). This study was conducted and reported in accordance with the PRISMA statement. The MEDLINE, EMBASE and CENTRAL databases were searched up to June 2014 for randomized clinical trials comparing the treatment of subjects with AgP with either scaling and root planing (SRP) alone or associated with systemic antibiotics. Bayesian network meta-analysis was prepared using Bayesian random-effects hierarchical models and the outcomes reported at 6-month post-treatment. Out of 350 papers identified, 14 studies were eligible. Greater gain in clinical attachment (CA) (mean difference [MD]: 1.08 mm; p < 0.0001) and reduction in probing depth (PD) (MD: 1.05 mm; p < 0.00001) were observed for SRP + metronidazole (Mtz), and for SRP + Mtz + amoxicillin (Amx) (MD: 0.45 mm, MD: 0.53 mm, respectively; p < 0.00001) than for SRP alone/placebo. Bayesian network meta-analysis showed additional benefits in CA gain and PD reduction when SRP was associated with systemic antibiotics. SRP plus systemic antibiotics led to an additional clinical effect compared with SRP alone in the treatment of AgP. Of the antibiotic protocols available for inclusion in the Bayesian network meta-analysis, Mtz and Mtz/Amx provided the most beneficial outcomes. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  18. Enhancing pediatric clinical trial feasibility through the use of Bayesian statistics.

    PubMed

    Huff, Robin A; Maca, Jeff D; Puri, Mala; Seltzer, Earl W

    2017-11-01

    Background Pediatric clinical trials commonly experience recruitment challenges including limited number of patients and investigators, inclusion/exclusion criteria that further reduce the patient pool, and a competitive research landscape created by pediatric regulatory commitments. To overcome these challenges, innovative approaches are needed. Methods This article explores the use of Bayesian statistics to improve pediatric trial feasibility, using pediatric Type-2 diabetes as an example. Data for six therapies approved for adults were used to perform simulations to determine the impact on pediatric trial size. Results When the number of adult patients contributing to the simulation was assumed to be the same as the number of patients to be enrolled in the pediatric trial, the pediatric trial size was reduced by 75-78% when compared with a frequentist statistical approach, but was associated with a 34-45% false-positive rate. In subsequent simulations, greater control was exerted over the false-positive rate by decreasing the contribution of the adult data. A 30-33% reduction in trial size was achieved when false-positives were held to less than 10%. Conclusion Reducing the trial size through the use of Bayesian statistics would facilitate completion of pediatric trials, enabling drugs to be labeled appropriately for children.
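
    The trade-off described here can be illustrated with a power-prior-style down-weighting of the adult evidence; this is a generic borrowing scheme, not necessarily the simulation model used in the article, and every number below is a hypothetical placeholder. Simulating under a true null effect in children shows how heavier borrowing of a positive adult result inflates the false-positive rate.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(6)
        adult_effect, adult_se = 0.3, 0.15      # hypothetical adult treatment-effect estimate
        n_ped, sd = 60, 1.0                     # small pediatric trial, assumed known outcome SD

        def false_positive_rate(a0, n_sims=20000, threshold=0.975):
            """a0 is the borrowing weight: 0 = no borrowing, 1 = full borrowing."""
            se_ped = sd * np.sqrt(2.0 / n_ped)
            ped_hat = rng.normal(0.0, se_ped, n_sims)      # null: no true effect in children
            prior_prec = a0 / adult_se**2                  # discounted adult precision
            post_var = 1.0 / (prior_prec + 1.0 / se_ped**2)
            post_mean = post_var * (prior_prec * adult_effect + ped_hat / se_ped**2)
            p_effect = 1 - norm.cdf(0.0, post_mean, np.sqrt(post_var))
            return np.mean(p_effect > threshold)

        for a0 in (0.0, 0.3, 1.0):
            print(f"a0 = {a0}: simulated false-positive rate = {false_positive_rate(a0):.3f}")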

  19. Coping with Trial-to-Trial Variability of Event Related Signals: A Bayesian Inference Approach

    NASA Technical Reports Server (NTRS)

    Ding, Mingzhou; Chen, Youghong; Knuth, Kevin H.; Bressler, Steven L.; Schroeder, Charles E.

    2005-01-01

    In electro-neurophysiology, single-trial brain responses to a sensory stimulus or a motor act are commonly assumed to result from the linear superposition of a stereotypic event-related signal (e.g. the event-related potential or ERP) that is invariant across trials and some ongoing brain activity often referred to as noise. To extract the signal, one performs an ensemble average of the brain responses over many identical trials to attenuate the noise. To date, this simple signal-plus-noise (SPN) model has been the dominant approach in cognitive neuroscience. Mounting empirical evidence has shown that the assumptions underlying this model may be overly simplistic. More realistic models have been proposed that account for the trial-to-trial variability of the event-related signal as well as the possibility of multiple differentially varying components within a given ERP waveform. The variable-signal-plus-noise (VSPN) model, which has been demonstrated to provide the foundation for separation and characterization of multiple differentially varying components, has the potential to provide a rich source of information for questions related to neural functions that complement the SPN model. Thus, being able to estimate the amplitude and latency of each ERP component on a trial-by-trial basis provides a critical link between the perceived benefits of the VSPN model and its many concrete applications. In this paper we describe a Bayesian approach to deal with this issue and the resulting strategy is referred to as the differentially Variable Component Analysis (dVCA). We compare the performance of dVCA on simulated data with Independent Component Analysis (ICA) and analyze neurobiological recordings from monkeys performing cognitive tasks.

  20. Commensurate Priors for Incorporating Historical Information in Clinical Trials Using General and Generalized Linear Models

    PubMed Central

    Hobbs, Brian P.; Sargent, Daniel J.; Carlin, Bradley P.

    2014-01-01

    Assessing between-study variability in the context of conventional random-effects meta-analysis is notoriously difficult when incorporating data from only a small number of historical studies. In order to borrow strength, historical and current data are often assumed to be fully homogeneous, but this can have drastic consequences for power and Type I error if the historical information is biased. In this paper, we propose empirical and fully Bayesian modifications of the commensurate prior model (Hobbs et al., 2011) extending Pocock (1976), and evaluate their frequentist and Bayesian properties for incorporating patient-level historical data using general and generalized linear mixed regression models. Our proposed commensurate prior models lead to preposterior admissible estimators that facilitate bias-variance trade-offs different from those offered by pre-existing methodologies for incorporating historical data from a small number of studies. We also provide a sample analysis of a colon cancer trial comparing time-to-disease progression using a Weibull regression model. PMID:24795786

  1. Bayesian approach for counting experiment statistics applied to a neutrino point source analysis

    NASA Astrophysics Data System (ADS)

    Bose, D.; Brayeur, L.; Casier, M.; de Vries, K. D.; Golup, G.; van Eijndhoven, N.

    2013-12-01

    In this paper we present a model independent analysis method following Bayesian statistics to analyse data from a generic counting experiment and apply it to the search for neutrinos from point sources. We discuss a test statistic defined following a Bayesian framework that will be used in the search for a signal. In case no signal is found, we derive an upper limit without the introduction of approximations. The Bayesian approach allows us to obtain the full probability density function for both the background and the signal rate. As such, we have direct access to any signal upper limit. The upper limit derivation directly compares with a frequentist approach and is robust in the case of low-counting observations. Furthermore, it also allows previous upper limits obtained by other analyses to be taken into account via prior information, without the ad hoc application of trial factors. To investigate the validity of the presented Bayesian approach, we have applied this method to the public IceCube 40-string configuration data for 10 nearby blazars and we have obtained a flux upper limit, which is in agreement with the upper limits determined via a frequentist approach. Furthermore, the upper limit obtained compares well with the previously published result of IceCube, using the same data set.
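
    A minimal sketch of the basic counting-experiment calculation follows: with observed counts, a known expected background and a flat prior on the non-negative signal rate, the posterior is proportional to the Poisson likelihood, and an upper limit is a quantile of that posterior. This is illustrative only; the paper additionally treats background uncertainty and prior information from earlier limits, and the numbers below are hypothetical.

        import numpy as np
        from scipy.stats import poisson

        n_obs, b = 4, 2.7                        # hypothetical observed counts and expected background
        s_grid = np.linspace(0.0, 30.0, 30001)   # grid over the signal rate s >= 0

        post = poisson.pmf(n_obs, s_grid + b)    # flat prior on s, truncated at 0
        post /= np.trapz(post, s_grid)           # normalize the posterior density
        cdf = np.cumsum(post) * (s_grid[1] - s_grid[0])
        upper_limit_90 = s_grid[np.searchsorted(cdf, 0.90)]
        print(f"90% credible upper limit on the signal rate: {upper_limit_90:.2f}")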

  2. Evidence and Clinical Trials.

    NASA Astrophysics Data System (ADS)

    Goodman, Steven N.

    1989-11-01

    This dissertation explores the use of a mathematical measure of statistical evidence, the log likelihood ratio, in clinical trials. The methods and thinking behind the use of an evidential measure are contrasted with traditional methods of analyzing data, which depend primarily on a p-value as an estimate of the statistical strength of an observed data pattern. It is contended that neither the behavioral dictates of Neyman-Pearson hypothesis testing methods nor the coherency dictates of Bayesian methods are realistic models on which to base inference. The use of the likelihood alone is applied to four aspects of trial design or conduct: the calculation of sample size, the monitoring of data, testing for the equivalence of two treatments, and meta-analysis--the combining of results from different trials. Finally, a more general model of statistical inference, using belief functions, is used to see if it is possible to separate the assessment of evidence from our background knowledge. It is shown that traditional and Bayesian methods can be modeled as two ends of a continuum of structured background knowledge, with traditional methods summarizing evidence at the point of maximum likelihood assuming no structure, and Bayesian methods assuming complete knowledge. Both schools are seen to be missing a concept of ignorance--uncommitted belief. This concept provides the key to understanding the problem of sampling to a foregone conclusion and the role of frequency properties in statistical inference. The conclusion is that statistical evidence cannot be defined independently of background knowledge, and that frequency properties of an estimator are an indirect measure of uncommitted belief. Several likelihood summaries need to be used in clinical trials, with the quantitative disparity between summaries being an indirect measure of our ignorance. This conclusion is linked with parallel ideas in the philosophy of science and cognitive psychology.

  3. Meta-analysis of few small studies in orphan diseases.

    PubMed

    Friede, Tim; Röver, Christian; Wandel, Simon; Neuenschwander, Beat

    2017-03-01

    Meta-analyses in orphan diseases and small populations generally face particular problems, including small numbers of studies, small study sizes and heterogeneity of results. However, the heterogeneity is difficult to estimate if only very few studies are included. Motivated by a systematic review in immunosuppression following liver transplantation in children, we investigate the properties of a range of commonly used frequentist and Bayesian procedures in simulation studies. Furthermore, the consequences for interval estimation of the common treatment effect in random-effects meta-analysis are assessed. The Bayesian credibility intervals using weakly informative priors for the between-trial heterogeneity exhibited coverage probabilities in excess of the nominal level for a range of scenarios considered. However, they tended to be shorter than those obtained by the Knapp-Hartung method, which were also conservative. In contrast, methods based on normal quantiles exhibited coverages well below the nominal levels in many scenarios. With very few studies, the performance of the Bayesian credibility intervals is of course sensitive to the specification of the prior for the between-trial heterogeneity. In conclusion, the use of weakly informative priors as exemplified by half-normal priors (with a scale of 0.5 or 1.0) for log odds ratios is recommended for applications in rare diseases. © 2016 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd.

  4. Transcatheter closure, mini-invasive closure and open-heart surgical repair for treatment of perimembranous ventricular septal defects in children: a protocol for a network meta-analysis.

    PubMed

    You, Tao; Yi, Kang; Ding, Zhao-Hong; Hou, Xiao-Dong; Liu, Xing-Guang; Wang, Xin-Kuan; Ge, Long; Tian, Jin-Hui

    2017-06-21

    Both transcatheter device closure and surgical repair are effective treatments with excellent midterm outcomes for perimembranous ventricular septal defects (pmVSDs) in children. The mini-invasive periventricular device occlusion technique has become prevalent in research and application, but evidence is limited for the assessment of transcatheter closure, mini-invasive closure and open-heart surgical repair. This study comprehensively compares the efficacy, safety and costs of transcatheter closure, mini-invasive closure and open-heart surgical repair for treatment of pmVSDs in children using Bayesian network meta-analysis. A systematic search will be performed using the Chinese Biomedical Literature Database, China National Knowledge Infrastructure, PubMed, EMBASE.com and the Cochrane Central Register of Controlled Trials to include randomised controlled trials and prospective or retrospective cohort studies comparing the efficacy, safety and costs of transcatheter closure, mini-invasive closure and open-heart surgical repair. The risk of bias for the included prospective or retrospective cohort studies will be evaluated according to the risk of bias in non-randomised studies of interventions (ROBINS-I) tool. For randomised controlled trials, we will use the risk of bias tool from the Cochrane Handbook version 5.1.0. A Bayesian network meta-analysis will be conducted using R-3.3.2 software. Ethical approval and patient consent are not required since this study is a network meta-analysis based on published trials. The results of this network meta-analysis will be submitted to a peer-reviewed journal for publication. CRD42016053352. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  5. Alveolar ridge preservation after tooth extraction: a Bayesian Network meta-analysis of grafting materials efficacy on prevention of bone height and width reduction.

    PubMed

    Iocca, Oreste; Farcomeni, Alessio; Pardiñas Lopez, Simon; Talib, Huzefa S

    2017-01-01

    To conduct a traditional meta-analysis and a Bayesian Network meta-analysis to synthesize the information coming from randomized controlled trials on different socket grafting materials and combine the resulting indirect evidence in order to make inferences on treatments that have not been compared directly. RCTs were identified for inclusion in the systematic review and subsequent statistical analysis. Bone height and width remodelling were selected as the summary measures for comparison. First, a series of pairwise meta-analyses were performed and the overall mean difference (MD) in mm with 95% CI was calculated between grafted versus non-grafted sockets. Then, a Bayesian Network meta-analysis was performed to draw indirect conclusions on which grafting materials can be considered most likely the best compared to the others. From the six included studies, seven comparisons were obtained. Traditional meta-analysis showed statistically significant results in favour of grafting the socket compared to no-graft both for height (MD 1.02, 95% CI 0.44-1.59, p < 0.001) and for width (MD 1.52, 95% CI 1.18-1.86, p < 0.000001) remodelling. Bayesian Network meta-analysis allowed a ranking of the interventions by efficacy to be obtained. On the basis of the results of the present analysis, socket grafting seems to be more favourable than unassisted socket healing. Moreover, Bayesian Network meta-analysis indicates that freeze-dried bone graft plus membrane is most likely to be the most effective in reducing bone height remodelling. Autologous bone marrow was most likely to be the most effective when width remodelling was considered. Studies with larger samples and less risk of bias should be conducted in the future in order to further strengthen the results of this analysis. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  6. A Bayesian prediction model between a biomarker and the clinical endpoint for dichotomous variables.

    PubMed

    Jiang, Zhiwei; Song, Yang; Shou, Qiong; Xia, Jielai; Wang, William

    2014-12-20

    Early biomarkers are helpful for predicting clinical endpoints and for evaluating efficacy in clinical trials even if the biomarker cannot replace clinical outcome as a surrogate. The building and evaluation of an association model between biomarkers and clinical outcomes are two equally important concerns regarding the prediction of clinical outcome. This paper addresses both issues in a Bayesian framework. A Bayesian meta-analytic approach is proposed to build a prediction model between the biomarker and clinical endpoint for dichotomous variables. Compared with other Bayesian methods, the proposed model only requires trial-level summary data of historical trials in model building. By using extensive simulations, we evaluate the link function and the application conditions of the proposed Bayesian model under scenarios of (i) equal positive predictive value (PPV) and negative predictive value (NPV) and (ii) higher NPV and lower PPV. In the simulations, patient-level data are generated to evaluate the meta-analytic model. PPV and NPV are employed to describe the patient-level relationship between the biomarker and the clinical outcome. The minimum number of historical trials to be included in building the model is also considered. It is seen from the simulations that the logit link function performs better than the odds and cloglog functions under both scenarios. PPV/NPV ≥0.5 for equal PPV and NPV, and PPV + NPV ≥1 for higher NPV and lower PPV are proposed in order to predict the clinical outcome accurately and precisely when the proposed model is considered. Twenty historical trials are required to be included in model building when PPV and NPV are equal. For unequal PPV and NPV, the minimum number of historical trials for model building is proposed to be five. A hypothetical example shows an application of the proposed model in global drug development. The proposed Bayesian model is able to predict well the clinical endpoint from the observed biomarker data for dichotomous variables as long as the conditions are satisfied. It could be applied in drug development, although the practical problems arising in applications have to be studied in further research.

  7. Iterative Assessment of Statistically-Oriented and Standard Algorithms for Determining Muscle Onset with Intramuscular Electromyography.

    PubMed

    Tenan, Matthew S; Tweedell, Andrew J; Haynes, Courtney A

    2017-12-01

    The onset of muscle activity, as measured by electromyography (EMG), is a commonly applied metric in biomechanics. Intramuscular EMG is often used to examine deep musculature and there are currently no studies examining the effectiveness of algorithms for intramuscular EMG onset. The present study examines standard surface EMG onset algorithms (linear envelope, Teager-Kaiser Energy Operator, and sample entropy) and novel algorithms (time series mean-variance analysis, sequential/batch processing with parametric and nonparametric methods, and Bayesian changepoint analysis). Thirteen male and five female subjects had intramuscular EMG collected during isolated biceps brachii and vastus lateralis contractions, resulting in 103 trials. EMG onset was visually determined twice by three blinded reviewers. Since the reliability of visual onset was high (ICC(1,1) = 0.92), the mean of the six visual assessments was contrasted with the algorithmic approaches. Poorly performing algorithms were stepwise eliminated via (1) root mean square error analysis, (2) algorithm failure to identify onset/premature onset, (3) linear regression analysis, and (4) Bland-Altman plots. The top performing algorithms were all based on Bayesian changepoint analysis of rectified EMG and were statistically indistinguishable from visual analysis. Bayesian changepoint analysis has the potential to produce more reliable, accurate, and objective intramuscular EMG onset results than standard methodologies.
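
    The study's top-performing algorithms were based on Bayesian changepoint analysis; the authors' exact implementation is not reproduced here, but the following simplified Python sketch conveys the idea: a uniform prior over candidate onset samples combined with a plug-in Gaussian likelihood for the pre- and post-onset segments of a simulated rectified signal. All signal parameters are invented.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    # Simulated rectified EMG: baseline noise, then activity starting at sample 600.
    signal = np.abs(np.concatenate([rng.normal(0, 1.0, 600), rng.normal(0, 4.0, 400)]))

    n = signal.size
    candidates = np.arange(20, n - 20)      # candidate onset samples, uniform prior
    log_lik = np.empty(candidates.size)
    for i, tau in enumerate(candidates):
        pre, post = signal[:tau], signal[tau:]
        # Plug-in Gaussian log likelihood for each segment.
        log_lik[i] = (norm.logpdf(pre, pre.mean(), pre.std()).sum()
                      + norm.logpdf(post, post.mean(), post.std()).sum())

    # Normalise to a posterior over the changepoint (the uniform prior cancels).
    post_prob = np.exp(log_lik - log_lik.max())
    post_prob /= post_prob.sum()
    print("estimated onset sample:", candidates[np.argmax(post_prob)])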

  8. BOP2: Bayesian optimal design for phase II clinical trials with simple and complex endpoints.

    PubMed

    Zhou, Heng; Lee, J Jack; Yuan, Ying

    2017-09-20

    We propose a flexible Bayesian optimal phase II (BOP2) design that is capable of handling simple (e.g., binary) and complicated (e.g., ordinal, nested, and co-primary) endpoints under a unified framework. We use a Dirichlet-multinomial model to accommodate different types of endpoints. At each interim, the go/no-go decision is made by evaluating a set of posterior probabilities of the events of interest, which is optimized to maximize power or minimize the number of patients under the null hypothesis. Unlike other existing Bayesian designs, the BOP2 design explicitly controls the type I error rate, thereby bridging the gap between Bayesian designs and frequentist designs. In addition, the stopping boundary of the BOP2 design can be enumerated prior to the onset of the trial. These features make the BOP2 design accessible to a wide range of users and regulatory agencies and particularly easy to implement in practice. Simulation studies show that the BOP2 design has favorable operating characteristics with higher power and lower risk of incorrectly terminating the trial than some existing Bayesian phase II designs. The software to implement the BOP2 design is freely available at www.trialdesign.org. Copyright © 2017 John Wiley & Sons, Ltd.
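
    The BOP2 decision rule is built on posterior probabilities of the events of interest; the full design, including its type I error calibration, is implemented at www.trialdesign.org. The Python sketch below shows only the basic ingredient for a single binary endpoint with a conjugate Beta prior; the prior parameters, interim data and decision boundary are illustrative assumptions, not the optimized BOP2 boundary.

    from scipy.stats import beta

    # Interim data (illustrative): 6 responses in 15 patients.
    responses, n = 6, 15

    # Beta(1, 1) prior on the response rate; null response rate of 0.3.
    a0, b0, p_null = 1.0, 1.0, 0.30

    # Posterior probability that the true response rate exceeds the null value.
    post_prob = 1.0 - beta.cdf(p_null, a0 + responses, b0 + n - responses)

    # Go/no-go rule: continue only if the posterior probability clears a boundary;
    # BOP2 chooses this boundary to control the type I error rate (0.80 is arbitrary).
    print("Pr(p > p_null | data) =", round(post_prob, 3))
    print("decision:", "go" if post_prob > 0.80 else "no-go")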

  9. Bayesian predictive power: choice of prior and some recommendations for its use as probability of success in drug development.

    PubMed

    Rufibach, Kaspar; Burger, Hans Ulrich; Abt, Markus

    2016-09-01

    Bayesian predictive power, the expectation of the power function with respect to a prior distribution for the true underlying effect size, is routinely used in drug development to quantify the probability of success of a clinical trial. Choosing the prior is crucial for the properties and interpretability of Bayesian predictive power. We review recommendations on the choice of prior for Bayesian predictive power and explore its features as a function of the prior. The density of power values induced by a given prior is derived analytically and its shape characterized. We find that for a typical clinical trial scenario, this density has a u-shape very similar, but not equal, to a β-distribution. Alternative priors are discussed, and practical recommendations to assess the sensitivity of Bayesian predictive power to its input parameters are provided. Copyright © 2016 John Wiley & Sons, Ltd.
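
    Bayesian predictive power is the expectation of the power function with respect to a prior on the true effect. The following Python sketch evaluates it by Monte Carlo for a two-arm trial with a normally distributed endpoint and a two-sided z-test; the sample size, outcome standard deviation and prior parameters are illustrative assumptions, not values from the paper.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)

    # Planned trial: n per arm, outcome SD, two-sided alpha (illustrative values).
    n, sd, alpha = 100, 1.0, 0.05
    se = sd * np.sqrt(2.0 / n)                      # SE of the difference in means
    z_crit = norm.ppf(1.0 - alpha / 2.0)

    def power(delta):
        # Power of the two-sided z-test for a true difference delta.
        return norm.cdf(delta / se - z_crit) + norm.cdf(-delta / se - z_crit)

    # Prior for the true effect, e.g. derived from earlier data (illustrative).
    prior_draws = rng.normal(loc=0.25, scale=0.15, size=100_000)

    # Bayesian predictive power = expectation of the power function under the prior.
    bpp = power(prior_draws).mean()
    print(f"frequentist power at the prior mean: {power(0.25):.3f}")
    print(f"Bayesian predictive power:           {bpp:.3f}")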

  10. The choice of sample size: a mixed Bayesian / frequentist approach.

    PubMed

    Pezeshk, Hamid; Nematollahi, Nader; Maroufy, Vahed; Gittins, John

    2009-04-01

    Sample size computations are largely based on frequentist or classical methods. In the Bayesian approach the prior information on the unknown parameters is taken into account. In this work we consider a fully Bayesian approach to the sample size determination problem which was introduced by Grundy et al. and developed by Lindley. This approach treats the problem as a decision problem and employs a utility function to find the optimal sample size of a trial. Furthermore, we assume that a regulatory authority, which is deciding on whether or not to grant a licence to a new treatment, uses a frequentist approach. We then find the optimal sample size for the trial by maximising the expected net benefit, which is the expected benefit of subsequent use of the new treatment minus the cost of the trial.
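
    The following Python sketch illustrates the general idea of choosing a sample size by maximising an expected net benefit when a frequentist regulator decides on licensing; the utility structure, prior and cost figures are invented for illustration and are much simpler than the formulation of Grundy et al. and Lindley.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(2)

    # Illustrative inputs: prior on the treatment effect, outcome SD, economics.
    prior_mean, prior_sd, sd = 0.2, 0.1, 1.0
    future_patients, benefit_per_unit, cost_per_patient = 20_000, 1.0, 10.0
    z_crit = norm.ppf(0.975)     # positive and significant at the two-sided 5% level

    def expected_net_benefit(n, sims=50_000):
        delta = rng.normal(prior_mean, prior_sd, sims)       # true effects from prior
        se = sd * np.sqrt(2.0 / n)
        est = rng.normal(delta, se)                          # trial estimates
        approved = est / se > z_crit                         # licence granted
        gain = np.where(approved, benefit_per_unit * delta * future_patients, 0.0)
        return gain.mean() - cost_per_patient * 2 * n        # minus trial cost (2 arms)

    sizes = np.arange(50, 1001, 50)
    enb = [expected_net_benefit(n) for n in sizes]
    print("optimal n per arm:", sizes[int(np.argmax(enb))])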

  11. Bayesian Design of Superiority Clinical Trials for Recurrent Events Data with Applications to Bleeding and Transfusion Events in Myelodysplastic Syndrome

    PubMed Central

    Chen, Ming-Hui; Zeng, Donglin; Hu, Kuolung; Jia, Catherine

    2014-01-01

    In many biomedical studies, patients may experience the same type of recurrent event repeatedly over time, such as bleeding, multiple infections and disease. In this article, we propose a Bayesian design for a pivotal clinical trial in which lower risk myelodysplastic syndromes (MDS) patients are treated with MDS disease modifying therapies. One of the key study objectives is to demonstrate the investigational product (treatment) effect on reduction of platelet transfusion and bleeding events while receiving MDS therapies. In this context, we propose a new Bayesian approach for the design of superiority clinical trials using recurrent events frailty regression models. Historical recurrent events data from an already completed phase 2 trial are incorporated into the Bayesian design via the partial borrowing power prior of Ibrahim et al. (2012, Biometrics 68, 578–586). An efficient Gibbs sampling algorithm, a predictive data generation algorithm, and a simulation-based algorithm are developed for sampling from the fitting posterior distribution, generating the predictive recurrent events data, and computing various design quantities such as the type I error rate and power, respectively. An extensive simulation study is conducted to compare the proposed method to the existing frequentist methods and to investigate various operating characteristics of the proposed design. PMID:25041037

  12. A Calibrated Power Prior Approach to Borrow Information from Historical Data with Application to Biosimilar Clinical Trials.

    PubMed

    Pan, Haitao; Yuan, Ying; Xia, Jielai

    2017-11-01

    A biosimilar refers to a follow-on biologic intended to be approved for marketing based on biosimilarity to an existing patented biological product (i.e., the reference product). To develop a biosimilar product, it is essential to demonstrate biosimilarity between the follow-on biologic and the reference product, typically through two-arm randomized trials. We propose a Bayesian adaptive design for trials to evaluate biosimilar products. To take advantage of the abundant historical data on the efficacy of the reference product that is typically available at the time a biosimilar product is developed, we propose the calibrated power prior, which allows our design to adaptively borrow information from the historical data according to the congruence between the historical data and the new data collected from the current trial. We propose a new measure, the Bayesian biosimilarity index, to measure the similarity between the biosimilar and the reference product. During the trial, we evaluate the Bayesian biosimilarity index in a group sequential fashion based on the accumulating interim data, and stop the trial early once there is enough information to conclude or reject the similarity. Extensive simulation studies show that the proposed design has higher power than traditional designs. We applied the proposed design to a biosimilar trial for treating rheumatoid arthritis.
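
    A fixed-discount power prior for a binary endpoint illustrates the borrowing mechanism that the calibrated power prior builds on. In the proposed design the discount parameter a0 is chosen adaptively from the congruence between the historical and current data, whereas in the Python sketch below it is simply fixed; all counts and parameters are illustrative.

    from scipy.stats import beta

    # Historical data on the reference product and current trial data (illustrative).
    x_hist, n_hist = 120, 200
    x_curr, n_curr = 55, 100

    # Power prior: discount the historical likelihood by a0 in [0, 1]. The calibrated
    # power prior would set a0 from the congruence of the two data sets; here it is fixed.
    a0 = 0.5
    a_prior, b_prior = 1.0, 1.0                     # initial Beta(1, 1) prior

    post = beta(a_prior + a0 * x_hist + x_curr,
                b_prior + a0 * (n_hist - x_hist) + (n_curr - x_curr))

    print("posterior mean response rate:", round(post.mean(), 3))
    print("95% credible interval:", [round(q, 3) for q in post.ppf([0.025, 0.975])])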

  13. Design considerations and analysis planning of a phase 2a proof of concept study in rheumatoid arthritis in the presence of possible non-monotonicity.

    PubMed

    Liu, Feng; Walters, Stephen J; Julious, Steven A

    2017-10-02

    It is important to quantify the dose response for a drug in phase 2a clinical trials so the optimal doses can then be selected for subsequent late phase trials. In a phase 2a clinical trial of a new lead drug being developed for the treatment of rheumatoid arthritis (RA), a U-shaped dose response curve was observed. In the light of this result, further research was undertaken to design an efficient phase 2a proof of concept (PoC) trial for a follow-on compound using the lessons learnt from the lead compound. The planned analysis for the phase 2a trial for GSK123456 was a Bayesian Emax model, which assumes the dose-response relationship follows a monotonic sigmoid "S"-shaped curve. This model was found to be suboptimal for modelling the U-shaped dose response observed in the data from this trial, and alternative approaches needed to be considered for the next compound, for which a normal dynamic linear model (NDLM) is proposed. This paper compares the statistical properties of the Bayesian Emax and NDLM models; both models are evaluated using simulation in the context of an adaptive phase 2a PoC design under a variety of assumed dose response curves: linear, Emax, U-shaped, and flat response. It is shown that the NDLM method is flexible and can handle a wide variety of dose-responses, including monotonic and non-monotonic relationships. In comparison with the NDLM, the Emax model excelled, with a higher probability of selecting the ED90 and a smaller average sample size, when the true dose response followed an Emax-like curve. In addition, the type I error, the probability of incorrectly concluding a drug may work when it does not, is inflated with the Bayesian NDLM model in all scenarios, which would represent a development risk to the pharmaceutical company. The bias, which is the difference between the estimated effect from the Emax and NDLM models and the simulated value, is comparable if the true dose response follows a placebo-like curve, an Emax-like curve, or a log-linear curve under the fixed dose allocation, no adaptive allocation, half-adaptive and adaptive scenarios. The bias, though, is significantly increased for the Emax model if the true dose response follows a U-shaped curve. In most cases the Bayesian Emax model works effectively and efficiently, with low bias and a good probability of success, in the case of a monotonic dose response. However, if there is a belief that the dose response could be non-monotonic then the NDLM is the superior model to assess the dose response.

  14. Quantifying model-structure- and parameter-driven uncertainties in spring wheat phenology prediction with Bayesian analysis

    DOE PAGES

    Alderman, Phillip D.; Stanfill, Bryan

    2016-10-06

    Recent international efforts have brought renewed emphasis on the comparison of different agricultural systems models. Thus far, analysis of model-ensemble simulated results has not clearly differentiated between ensemble prediction uncertainties due to model structural differences per se and those due to parameter value uncertainties. Additionally, despite increasing use of Bayesian parameter estimation approaches with field-scale crop models, inadequate attention has been given to the full posterior distributions for estimated parameters. The objectives of this study were to quantify the impact of parameter value uncertainty on prediction uncertainty for modeling spring wheat phenology using Bayesian analysis and to assess the relative contributions of model-structure-driven and parameter-value-driven uncertainty to overall prediction uncertainty. This study used a random walk Metropolis algorithm to estimate parameters for 30 spring wheat genotypes using nine phenology models based on multi-location trial data for days to heading and days to maturity. Across all cases, parameter-driven uncertainty accounted for between 19 and 52% of predictive uncertainty, while model-structure-driven uncertainty accounted for between 12 and 64%. Here, this study demonstrated the importance of quantifying both model-structure- and parameter-value-driven uncertainty when assessing overall prediction uncertainty in modeling spring wheat phenology. More generally, Bayesian parameter estimation provided a useful framework for quantifying and analyzing sources of prediction uncertainty.
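
    As an illustration of the kind of Bayesian machinery used here, the Python sketch below runs a random walk Metropolis sampler for a deliberately simple model (a single Gaussian mean with a normal prior); it is not one of the nine phenology models, and all values are invented.

    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic "days to heading" observations for one genotype (illustrative).
    data = rng.normal(60.0, 5.0, size=25)
    sigma = 5.0                                   # residual SD treated as known here

    def log_post(mu):
        # Normal(55, 10^2) prior on mu plus the Gaussian log likelihood.
        log_prior = -0.5 * ((mu - 55.0) / 10.0) ** 2
        log_lik = -0.5 * np.sum(((data - mu) / sigma) ** 2)
        return log_prior + log_lik

    # Random walk Metropolis with a symmetric Gaussian proposal.
    draws, mu, current_lp = [], 55.0, log_post(55.0)
    for _ in range(20_000):
        prop = mu + rng.normal(0.0, 1.0)
        prop_lp = log_post(prop)
        if np.log(rng.uniform()) < prop_lp - current_lp:
            mu, current_lp = prop, prop_lp
        draws.append(mu)

    posterior = np.array(draws[5_000:])           # discard burn-in
    print("posterior mean:", round(posterior.mean(), 2),
          "95% CrI:", np.round(np.percentile(posterior, [2.5, 97.5]), 2))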

  15. Significance testing of clinical data using virus dynamics models with a Markov chain Monte Carlo method: application to emergence of lamivudine-resistant hepatitis B virus.

    PubMed Central

    Burroughs, N J; Pillay, D; Mutimer, D

    1999-01-01

    Bayesian analysis using a virus dynamics model is demonstrated to facilitate hypothesis testing of patterns in clinical time-series. Our Markov chain Monte Carlo implementation demonstrates that the viraemia time-series observed in two sets of hepatitis B patients on antiviral (lamivudine) therapy, chronic carriers and liver transplant patients, are significantly different, overcoming clinical trial design differences that question the validity of non-parametric tests. We show that lamivudine-resistant mutants grow faster in transplant patients than in chronic carriers, which probably explains the differences in emergence times and failure rates between these two sets of patients. Incorporation of dynamic models into Bayesian parameter analysis is of general applicability in medical statistics. PMID:10643081

  16. A Bayesian approach to the statistical analysis of device preference studies.

    PubMed

    Fu, Haoda; Qu, Yongming; Zhu, Baojin; Huster, William

    2012-01-01

    Drug delivery devices are required to have excellent technical specifications to deliver drugs accurately, and in addition, the devices should provide a satisfactory experience to patients because this can have a direct effect on drug compliance. To compare patients' experience with two devices, cross-over studies with patient-reported outcomes (PRO) as response variables are often used. Because of the strength of cross-over designs, each subject can directly compare the two devices by using the PRO variables, and variables indicating preference (preferring A, preferring B, or no preference) can be easily derived. Traditionally, methods based on frequentist statistics can be used to analyze such preference data, but the frequentist methods have some limitations. Recently, Bayesian methods have been considered acceptable by the US Food and Drug Administration for designing and analyzing device studies. In this paper, we propose a Bayesian statistical method to analyze the data from preference trials. We demonstrate that the new Bayesian estimator enjoys some optimal properties versus the frequentist estimator. Copyright © 2012 John Wiley & Sons, Ltd.

  17. Adaptability and phenotypic stability of common bean genotypes through Bayesian inference.

    PubMed

    Corrêa, A M; Teodoro, P E; Gonçalves, M C; Barroso, L M A; Nascimento, M; Santos, A; Torres, F E

    2016-04-27

    This study used Bayesian inference to investigate the genotype x environment interaction in common bean grown in Mato Grosso do Sul State, and it also evaluated the efficiency of using informative and minimally informative a priori distributions. Six trials were conducted in randomized blocks, and the grain yield of 13 common bean genotypes was assessed. To represent the minimally informative a priori distributions, a probability distribution with high variance was used, and a meta-analysis concept was adopted to represent the informative a priori distributions. Bayes factors were used to conduct comparisons between the a priori distributions. The Bayesian inference was effective for the selection of upright common bean genotypes with high adaptability and phenotypic stability using the Eberhart and Russell method. Bayes factors indicated that the use of informative a priori distributions provided more accurate results than minimally informative a priori distributions. According to Bayesian inference, the EMGOPA-201, BAMBUÍ, CNF 4999, CNF 4129 A 54, and CNFv 8025 genotypes had specific adaptability to favorable environments, while the IAPAR 14 and IAC CARIOCA ETE genotypes had specific adaptability to unfavorable environments.

  18. Neural Mechanisms for Integrating Prior Knowledge and Likelihood in Value-Based Probabilistic Inference

    PubMed Central

    Ting, Chih-Chung; Yu, Chia-Chen; Maloney, Laurence T.

    2015-01-01

    In Bayesian decision theory, knowledge about the probabilities of possible outcomes is captured by a prior distribution and a likelihood function. The prior reflects past knowledge and the likelihood summarizes current sensory information. The two combined (integrated) form a posterior distribution that allows estimation of the probability of different possible outcomes. In this study, we investigated the neural mechanisms underlying Bayesian integration using a novel lottery decision task in which both prior knowledge and likelihood information about reward probability were systematically manipulated on a trial-by-trial basis. Consistent with Bayesian integration, as sample size increased, subjects tended to weigh likelihood information more compared with prior information. Using fMRI in humans, we found that the medial prefrontal cortex (mPFC) correlated with the mean of the posterior distribution, a statistic that reflects the integration of prior knowledge and likelihood of reward probability. Subsequent analysis revealed that both prior and likelihood information were represented in mPFC and that the neural representations of prior and likelihood in mPFC reflected changes in the behaviorally estimated weights assigned to these different sources of information in response to changes in the environment. Together, these results establish the role of mPFC in prior-likelihood integration and highlight its involvement in representing and integrating these distinct sources of information. PMID:25632152

  19. Bayesian selective response-adaptive design using the historical control.

    PubMed

    Kim, Mi-Ok; Harun, Nusrat; Liu, Chunyan; Khoury, Jane C; Broderick, Joseph P

    2018-06-13

    High quality historical control data, if incorporated, may reduce sample size, trial cost, and duration. A too optimistic use of the data, however, may result in bias under prior-data conflict. Motivated by well-publicized two-arm comparative trials in stroke, we propose a Bayesian design that both adaptively incorporates historical control data and selectively adapts the treatment allocation ratios within an ongoing trial in response to the relative treatment effects. The proposed design differs from existing designs that borrow from historical controls. As opposed to reducing the number of subjects assigned to the control arm blindly, this design does so adaptively, in response to the relative treatment effects, only if evaluation of the cumulated current trial data combined with the historical control suggests the superiority of the intervention arm. We used the effective historical sample size approach to quantify borrowed information on the control arm and modified the treatment allocation rules of the doubly adaptive biased coin design to incorporate the quantity. The modified allocation rules were then implemented under the Bayesian framework with commensurate priors addressing prior-data conflict. Trials were also more frequently concluded early, in line with the underlying truth, reducing trial cost and duration, and yielded parameter estimates with smaller standard errors. © 2018 The Authors. Statistics in Medicine Published by John Wiley & Sons, Ltd.

  20. Bayesian multivariate hierarchical transformation models for ROC analysis.

    PubMed

    O'Malley, A James; Zou, Kelly H

    2006-02-15

    A Bayesian multivariate hierarchical transformation model (BMHTM) is developed for receiver operating characteristic (ROC) curve analysis based on clustered continuous diagnostic outcome data with covariates. Two special features of this model are that it incorporates non-linear monotone transformations of the outcomes and that multiple correlated outcomes may be analysed. The mean, variance, and transformation components are all modelled parametrically, enabling a wide range of inferences. The general framework is illustrated by focusing on two problems: (1) analysis of the diagnostic accuracy of a covariate-dependent univariate test outcome requiring a Box-Cox transformation within each cluster to map the test outcomes to a common family of distributions; (2) development of an optimal composite diagnostic test using multivariate clustered outcome data. In the second problem, the composite test is estimated using discriminant function analysis and compared to the test derived from logistic regression analysis where the gold standard is a binary outcome. The proposed methodology is illustrated on prostate cancer biopsy data from a multi-centre clinical trial.

  2. A selection model for accounting for publication bias in a full network meta-analysis.

    PubMed

    Mavridis, Dimitris; Welton, Nicky J; Sutton, Alex; Salanti, Georgia

    2014-12-30

    Copas and Shi suggested a selection model to explore the potential impact of publication bias via sensitivity analysis based on assumptions for the probability of publication of trials conditional on the precision of their results. Chootrakool et al. extended this model to three-arm trials but did not fully account for the implications of the consistency assumption, and their model is difficult to generalize for complex network structures with more than three treatments. Fitting these selection models within a frequentist setting requires maximization of a complex likelihood function, and identification problems are common. We have previously presented a Bayesian implementation of the selection model when multiple treatments are compared with a common reference treatment. We now present a general model suitable for complex, full network meta-analysis that accounts for consistency when adjusting results for publication bias. We developed a design-by-treatment selection model to describe the mechanism by which studies with different designs (sets of treatments compared in a trial) and precision may be selected for publication. We fit the model in a Bayesian setting because it avoids the numerical problems encountered in the frequentist setting, it is generalizable with respect to the number of treatments and study arms, and it provides a flexible framework for sensitivity analysis using external knowledge. Our model accounts for the additional uncertainty arising from publication bias more successfully compared to the standard Copas model or its previous extensions. We illustrate the methodology using a published triangular network for the failure of vascular graft or arterial patency. Copyright © 2014 John Wiley & Sons, Ltd.

  3. Bayesian network meta-analysis for cluster randomized trials with binary outcomes.

    PubMed

    Uhlmann, Lorenz; Jensen, Katrin; Kieser, Meinhard

    2017-06-01

    Network meta-analysis is becoming a common approach to combine direct and indirect comparisons of several treatment arms. In recent research, there have been various developments and extensions of the standard methodology. Simultaneously, cluster randomized trials are experiencing an increased popularity, especially in the field of health services research, where, for example, medical practices are the units of randomization but the outcome is measured at the patient level. Combination of the results of cluster randomized trials is challenging. In this tutorial, we examine and compare different approaches for the incorporation of cluster randomized trials in a (network) meta-analysis. Furthermore, we provide practical insight on the implementation of the models. In simulation studies, it is shown that some of the examined approaches lead to unsatisfying results. However, there are alternatives which are suitable to combine cluster randomized trials in a network meta-analysis as they are unbiased and reach accurate coverage rates. In conclusion, the methodology can be extended in such a way that an adequate inclusion of the results obtained in cluster randomized trials becomes feasible. Copyright © 2016 John Wiley & Sons, Ltd.

  4. A Bayesian predictive two-stage design for phase II clinical trials.

    PubMed

    Sambucini, Valeria

    2008-04-15

    In this paper, we propose a Bayesian two-stage design for phase II clinical trials, which represents a predictive version of the single threshold design (STD) recently introduced by Tan and Machin. The STD two-stage sample sizes are determined specifying a minimum threshold for the posterior probability that the true response rate exceeds a pre-specified target value and assuming that the observed response rate is slightly higher than the target. Unlike the STD, we do not refer to a fixed experimental outcome, but take into account the uncertainty about future data. In both stages, the design aims to control the probability of getting a large posterior probability that the true response rate exceeds the target value. Such a probability is expressed in terms of prior predictive distributions of the data. The performance of the design is based on the distinction between analysis and design priors, recently introduced in the literature. The properties of the method are studied when all the design parameters vary.

  5. Pathfinding to an optimal strategy of revascularization in primary coronary intervention in patients with multivessel disease: a network meta-analysis of randomized trials.

    PubMed

    Komócsi, András; Kehl, Dániel; d'Ascenso, Fabrizio; DiNicolantonio, James; Vorobcsuk, András

    2017-03-01

    In ST-segment elevation myocardial infarction (STEMI), current guidelines discourage treatment of the non-culprit lesions at the time of the primary intervention. Latest trials have challenged this strategy suggesting benefit of early complete revascularization. We performed a Bayesian multiple treatment network meta-analysis of randomized clinical trials (RCTs) in STEMI on culprit-only intervention (CO) versus different timing multivessel revascularization, including immediate (IM), same hospitalization (SH) or later staged (ST). Outcome parameters were pooled with a random-effects model. For multiple-treatment meta-analysis, a Bayesian Markov chain Monte Carlo method was used. Eight RCTs involving 2077 patients were identified. ST and IM revascularization was associated with a decrease in major adverse cardiac events (MACEs) compared to culprit-only approach (risk ratio [RR]: 0.43 credible interval [CrI]: 0.22-0.77 and RR: 0.36 CrI: 0.24-0.54, respectively). IM was superior to SH (RR: 0.49 CrI: 0.29-0.80). With regards to myocardial infarction IM was superior to SH (RR: 0.18 CrI: 0.02-0.99). The posterior probability of being the best choice of treatment regarding the frequency of MACEs was 71.2% for IM, 28.5% for ST, 0.3% for SH and 0.05% for culprit-only approach. Results from RCTs indicate that immediate or staged revascularization of non-culprit lesions reduces major adverse events in patients after primary percutaneous coronary intervention. Differences in MACEs suggest superiority of the immediate or staged intervention; however, further randomized trials are needed to determine the optimal timing of revascularization of the non-culprit lesions.

  6. Dealing with missing standard deviation and mean values in meta-analysis of continuous outcomes: a systematic review.

    PubMed

    Weir, Christopher J; Butcher, Isabella; Assi, Valentina; Lewis, Stephanie C; Murray, Gordon D; Langhorne, Peter; Brady, Marian C

    2018-03-07

    Rigorous, informative meta-analyses rely on availability of appropriate summary statistics or individual participant data. For continuous outcomes, especially those with naturally skewed distributions, summary information on the mean or variability often goes unreported. While full reporting of original trial data is the ideal, we sought to identify methods for handling unreported mean or variability summary statistics in meta-analysis. We undertook two systematic literature reviews to identify methodological approaches used to deal with missing mean or variability summary statistics. Five electronic databases were searched, in addition to the Cochrane Colloquium abstract books and the Cochrane Statistics Methods Group mailing list archive. We also conducted cited reference searching and emailed topic experts to identify recent methodological developments. Details recorded included the description of the method, the information required to implement the method, any underlying assumptions and whether the method could be readily applied in standard statistical software. We provided a summary description of the methods identified, illustrating selected methods in example meta-analysis scenarios. For missing standard deviations (SDs), following screening of 503 articles, fifteen methods were identified in addition to those reported in a previous review. These included Bayesian hierarchical modelling at the meta-analysis level; summary statistic level imputation based on observed SD values from other trials in the meta-analysis; a practical approximation based on the range; and algebraic estimation of the SD based on other summary statistics. Following screening of 1124 articles for methods estimating the mean, one approximate Bayesian computation approach and three papers based on alternative summary statistics were identified. Illustrative meta-analyses showed that when replacing a missing SD the approximation using the range minimised loss of precision and generally performed better than omitting trials. When estimating missing means, a formula using the median, lower quartile and upper quartile performed best in preserving the precision of the meta-analysis findings, although in some scenarios, omitting trials gave superior results. Methods based on summary statistics (minimum, maximum, lower quartile, upper quartile, median) reported in the literature facilitate more comprehensive inclusion of randomised controlled trials with missing mean or variability summary statistics within meta-analyses.
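
    Some of the summary-statistic-based imputations discussed above can be written down in a few lines. The Python sketch below shows widely used textbook approximations (range-based and IQR-based SD, quartile-based mean); the exact formulas evaluated in the review may differ, and all example numbers are invented.

    # Common approximations for reconstructing a missing mean or SD from other
    # reported summary statistics, assuming roughly normal data.

    def sd_from_range(minimum, maximum):
        # Rough approximation: the range of a roughly normal sample spans about 4 SDs.
        return (maximum - minimum) / 4.0

    def sd_from_iqr(q1, q3):
        # For normal data the interquartile range is about 1.35 SDs.
        return (q3 - q1) / 1.35

    def mean_from_quartiles(q1, median, q3):
        # Simple average of the three reported quartiles.
        return (q1 + median + q3) / 3.0

    # Example: a trial reporting median 12, quartiles 8 and 18, and range 2 to 35.
    print("imputed mean:", mean_from_quartiles(8, 12, 18))
    print("imputed SD (range):", sd_from_range(2, 35))
    print("imputed SD (IQR):  ", round(sd_from_iqr(8, 18), 2))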

  7. HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python.

    PubMed

    Wiecki, Thomas V; Sofer, Imri; Frank, Michael J

    2013-01-01

    The diffusion model is a commonly used tool to infer latent psychological processes underlying decision-making, and to link them to neural mechanisms based on response times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of response time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision-making parameters. This paper will first describe the theoretical background of the drift diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs/
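
    Basic usage of the HDDM toolbox follows the pattern sketched below, adapted from the general workflow described in the paper and documentation; the CSV file name and the 'condition' column are placeholders, and sampler settings are illustrative.

    import hddm

    # Load response-time data; HDDM expects columns such as 'rt', 'response' and
    # 'subj_idx' (the file name here is a placeholder).
    data = hddm.load_csv('mydata.csv')

    # Hierarchical drift-diffusion model; drift rate allowed to vary by condition.
    model = hddm.HDDM(data, depends_on={'v': 'condition'})

    # Draw posterior samples via MCMC and summarise the group-level parameters.
    model.sample(2000, burn=200)
    model.print_stats()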

  8. Bayesian data analysis for newcomers.

    PubMed

    Kruschke, John K; Liddell, Torrin M

    2018-02-01

    This article explains the foundational concepts of Bayesian data analysis using virtually no mathematical notation. Bayesian ideas already match your intuitions from everyday reasoning and from traditional data analysis. Simple examples of Bayesian data analysis are presented that illustrate how the information delivered by a Bayesian analysis can be directly interpreted. Bayesian approaches to null-value assessment are discussed. The article clarifies misconceptions about Bayesian methods that newcomers might have acquired elsewhere. We discuss prior distributions and explain how they are not a liability but an important asset. We discuss the relation of Bayesian data analysis to Bayesian models of mind, and we briefly discuss what methodological problems Bayesian data analysis is not meant to solve. After you have read this article, you should have a clear sense of how Bayesian data analysis works and the sort of information it delivers, and why that information is so intuitive and useful for drawing conclusions from data.

  9. Bayesian sample size calculations in phase II clinical trials using a mixture of informative priors.

    PubMed

    Gajewski, Byron J; Mayo, Matthew S

    2006-08-15

    A number of researchers have discussed phase II clinical trials from a Bayesian perspective. A recent article by Mayo and Gajewski focuses on sample size calculations, which they determine by specifying an informative prior distribution and then calculating a posterior probability that the true response will exceed a prespecified target. In this article, we extend these sample size calculations to include a mixture of informative prior distributions. The mixture comes from several sources of information. For example, consider information from two (or more) clinicians. The first clinician is pessimistic about the drug and the second clinician is optimistic. We tabulate the results for sample size design using the fact that the simple mixture of Betas is a conjugate family for the Beta-Binomial model. We discuss the theoretical framework for these types of Bayesian designs and show that the Bayesian designs in this paper approximate this theoretical framework. Copyright 2006 John Wiley & Sons, Ltd.
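
    Because a mixture of Beta priors is conjugate for binomial data, the posterior is again a mixture of Betas, with component weights updated by the beta-binomial marginal likelihoods. The Python sketch below carries out this update for a pessimistic and an optimistic clinician prior; all parameter values are illustrative rather than those tabulated in the paper.

    import numpy as np
    from math import comb
    from scipy.special import betaln
    from scipy.stats import beta

    # Two elicited priors for the response rate, a pessimist and an optimist,
    # mixed with equal weight (all values illustrative).
    components = [(2.0, 8.0), (8.0, 2.0)]
    weights = np.array([0.5, 0.5])

    x, n = 9, 20                                   # observed responses

    # Marginal (beta-binomial) likelihood of the data under each component.
    marg = np.array([comb(n, x) * np.exp(betaln(a + x, b + n - x) - betaln(a, b))
                     for a, b in components])

    # The posterior is again a mixture of Betas with updated weights and parameters.
    post_weights = weights * marg / np.sum(weights * marg)
    post_components = [(a + x, b + n - x) for a, b in components]

    post_mean = sum(w * beta(a, b).mean()
                    for w, (a, b) in zip(post_weights, post_components))
    print("posterior component weights:", np.round(post_weights, 3))
    print("posterior mean response rate:", round(post_mean, 3))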

  10. Bronchoscopic lung-volume reduction with Exhale airway stents for emphysema (EASE trial): randomised, sham-controlled, multicentre trial.

    PubMed

    Shah, P L; Slebos, D-J; Cardoso, P F G; Cetti, E; Voelker, K; Levine, B; Russell, M E; Goldin, J; Brown, M; Cooper, J D; Sybrecht, G W

    2011-09-10

    Airway bypass is a bronchoscopic lung-volume reduction procedure for emphysema whereby transbronchial passages into the lung are created to release trapped air, supported with paclitaxel-coated stents to ease the mechanics of breathing. The aim of the EASE (Exhale airway stents for emphysema) trial was to evaluate safety and efficacy of airway bypass in people with severe homogeneous emphysema. We undertook a randomised, double-blind, sham-controlled study in 38 specialist respiratory centres worldwide. We recruited 315 patients who had severe hyperinflation (ratio of residual volume [RV] to total lung capacity of ≥0·65). By computer using a random number generator, we randomly allocated participants (in a 2:1 ratio) to either airway bypass (n=208) or sham control (107). We divided investigators into team A (masked), who completed pre-procedure and post-procedure assessments, and team B (unmasked), who only did bronchoscopies without further interaction with patients. Participants were followed up for 12 months. The 6-month co-primary efficacy endpoint required 12% or greater improvement in forced vital capacity (FVC) and 1 point or greater decrease in the modified Medical Research Council dyspnoea score from baseline. The composite primary safety endpoint incorporated five severe adverse events. We did Bayesian analysis to show the posterior probability that airway bypass was superior to sham control (success threshold, 0·965). Analysis was by intention to treat. This study is registered with ClinicalTrials.gov, number NCT00391612. All recruited patients were included in the analysis. At 6 months, no difference between treatment arms was noted with respect to the co-primary efficacy endpoint (30 of 208 for airway bypass vs 12 of 107 for sham control; posterior probability 0·749, below the Bayesian success threshold of 0·965). The 6-month composite primary safety endpoint was 14·4% (30 of 208) for airway bypass versus 11·2% (12 of 107) for sham control (judged non-inferior, with a posterior probability of 1·00 [Bayesian success threshold >0·95]). Although our findings showed safety and transient improvements, no sustainable benefit was recorded with airway bypass in patients with severe homogeneous emphysema. Broncus Technologies. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. Risk analysis of new oral anticoagulants for gastrointestinal bleeding and intracranial hemorrhage in atrial fibrillation patients: a systematic review and network meta-analysis.

    PubMed

    Xu, Wei-Wei; Hu, Shen-Jiang; Wu, Tao

    2017-07-01

    Antithrombotic therapy using new oral anticoagulants (NOACs) in patients with atrial fibrillation (AF) has been generally shown to have a favorable risk-benefit profile. Since there has been dispute about the risks of gastrointestinal bleeding (GIB) and intracranial hemorrhage (ICH), we sought to conduct a systematic review and network meta-analysis using Bayesian inference to analyze the risks of GIB and ICH in AF patients taking NOACs. We analyzed data from 20 randomized controlled trials of 91 671 AF patients receiving anticoagulants, antiplatelet drugs, or placebo. Bayesian network meta-analysis of two different evidence networks was performed using a binomial likelihood model, based on a network in which different agents (and doses) were treated as separate nodes. Odds ratios (ORs) and 95% confidence intervals (CIs) were modeled using Markov chain Monte Carlo methods. Indirect comparisons with the Bayesian model confirmed that aspirin+clopidogrel significantly increased the risk of GIB in AF patients compared to the placebo (OR 0.33, 95% CI 0.01-0.92). Warfarin was identified as greatly increasing the risk of ICH compared to edoxaban 30 mg (OR 3.42, 95% CI 1.22-7.24) and dabigatran 110 mg (OR 3.56, 95% CI 1.10-8.45). We further ranked the NOACs for the lowest risk of GIB (apixaban 5 mg) and ICH (apixaban 5 mg, dabigatran 110 mg, and edoxaban 30 mg). Bayesian network meta-analysis of treatment of non-valvular AF patients with anticoagulants suggested that NOACs do not increase risks of GIB and/or ICH, compared to each other.

  12. Do Bayesian adaptive trials offer advantages for comparative effectiveness research? Protocol for the RE-ADAPT study

    PubMed Central

    Luce, Bryan R; Broglio, Kristine R; Ishak, K Jack; Mullins, C Daniel; Vanness, David J; Fleurence, Rachael; Saunders, Elijah; Davis, Barry R

    2013-01-01

    Background Randomized clinical trials, particularly for comparative effectiveness research (CER), are frequently criticized for being overly restrictive or untimely for health-care decision making. Purpose Our prospectively designed REsearch in ADAptive methods for Pragmatic Trials (RE-ADAPT) study is a ‘proof of concept’ to stimulate investment in Bayesian adaptive designs for future CER trials. Methods We will assess whether Bayesian adaptive designs offer potential efficiencies in CER by simulating a re-execution of the Antihypertensive and Lipid Lowering Treatment to Prevent Heart Attack Trial (ALLHAT) study using actual data from ALLHAT. Results We prospectively define seven alternate designs consisting of various combinations of arm dropping, adaptive randomization, and early stopping and describe how these designs will be compared to the original ALLHAT design. We identify the one particular design that would have been executed, which incorporates early stopping and information-based adaptive randomization. Limitations While the simulation realistically emulates patient enrollment, interim analyses, and adaptive changes to design, it cannot incorporate key features like the involvement of data monitoring committee in making decisions about adaptive changes. Conclusion This article describes our analytic approach for RE-ADAPT. The next stage of the project is to conduct the re-execution analyses using the seven prespecified designs and the original ALLHAT data. PMID:23983160

  13. Bayesian analysis of a mastitis control plan to investigate the influence of veterinary prior beliefs on clinical interpretation.

    PubMed

    Green, M J; Browne, W J; Green, L E; Bradley, A J; Leach, K A; Breen, J E; Medley, G F

    2009-10-01

    The fundamental objective for health research is to determine whether changes should be made to clinical decisions. Decisions made by veterinary surgeons in the light of new research evidence are known to be influenced by their prior beliefs, especially their initial opinions about the plausibility of possible results. In this paper, clinical trial results for a bovine mastitis control plan were evaluated within a Bayesian context, to incorporate a community of prior distributions that represented a spectrum of clinical prior beliefs. The aim was to quantify the effect of veterinary surgeons' initial viewpoints on the interpretation of the trial results. A Bayesian analysis was conducted using Markov chain Monte Carlo procedures. Stochastic models included a financial cost attributed to a change in clinical mastitis following implementation of the control plan. Prior distributions were incorporated that covered a realistic range of possible clinical viewpoints, including scepticism, enthusiasm and uncertainty. Posterior distributions revealed important differences in the financial gain that clinicians with different starting viewpoints would anticipate from the mastitis control plan, given the actual research results. For example, a severe sceptic would ascribe a probability of 0.50 for a return of < 5 UK pounds per cow in an average herd that implemented the plan, whereas an enthusiast would ascribe this probability for a return of > 20 UK pounds per cow. Simulations using increased trial sizes indicated that if the original study had been four times as large, an initial sceptic would be more convinced about the efficacy of the control plan but would still anticipate less financial return than an initial enthusiast would anticipate after the original study. In conclusion, it is possible to estimate how clinicians' prior beliefs influence their interpretation of research evidence. Further research on the extent to which different interpretations of evidence result in changes to clinical practice would be worthwhile.
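
    The idea of a community of priors can be illustrated with a simple conjugate normal-normal update. In the Python sketch below, a sceptical, an uncertain and an enthusiastic prior for the financial benefit per cow are each combined with the same invented trial estimate; the numbers are illustrative and do not reproduce the stochastic cost models of the paper.

    import numpy as np
    from scipy.stats import norm

    # Trial summary (illustrative): estimated financial benefit per cow and its SE.
    estimate, se = 15.0, 6.0

    # A community of priors for the true benefit: sceptical, uncertain, enthusiastic.
    priors = {"sceptic": (0.0, 5.0), "uncertain": (10.0, 20.0), "enthusiast": (25.0, 5.0)}

    for name, (m0, s0) in priors.items():
        # Conjugate normal-normal update (precision-weighted average).
        post_prec = 1.0 / s0**2 + 1.0 / se**2
        post_mean = (m0 / s0**2 + estimate / se**2) / post_prec
        post_sd = np.sqrt(1.0 / post_prec)
        p_gain = 1.0 - norm.cdf(5.0, post_mean, post_sd)   # Pr(benefit > 5 per cow)
        print(f"{name:10s} posterior mean {post_mean:5.1f}, Pr(benefit > 5) = {p_gain:.2f}")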

  14. Cervical disc arthroplasty for symptomatic cervical disc disease: Traditional and Bayesian meta-analysis with trial sequential analysis.

    PubMed

    Kan, Shun-Li; Yuan, Zhi-Fang; Ning, Guang-Zhi; Liu, Fei-Fei; Sun, Jing-Cheng; Feng, Shi-Qing

    2016-11-01

    Cervical disc arthroplasty (CDA) has been designed as a substitute for anterior cervical discectomy and fusion (ACDF) in the treatment of symptomatic cervical disc disease (CDD). Several researchers have compared CDA with ACDF for the treatment of symptomatic CDD; however, the findings of these studies are inconclusive. Using recently published evidence, this meta-analysis was conducted to further verify the benefits and harms of using CDA for treatment of symptomatic CDD. Relevant trials were identified by searching the PubMed, EMBASE, and Cochrane Library databases. Outcomes were reported as odds ratio or standardized mean difference. Both traditional frequentist and Bayesian approaches were used to synthesize evidence within random-effects models. Trial sequential analysis (TSA) was applied to test the robustness of our findings and obtain more conservative estimates. Nineteen trials were included. The findings of this meta-analysis demonstrated better overall, neck disability index (NDI), and neurological success; lower NDI and neck and arm pain scores; higher 36-Item Short Form Health Survey (SF-36) Physical Component Summary (PCS) and Mental Component Summary (MCS) scores; more patient satisfaction; greater range of motion (ROM) at the operative level; and fewer secondary surgical procedures (all P < 0.05) in the CDA group compared with the ACDF group. CDA was not significantly different from ACDF in the rate of adverse events (P > 0.05). TSA of overall success suggested that the cumulative z-curve crossed both the conventional boundary and the trial sequential monitoring boundary for benefit, indicating sufficient and conclusive evidence had been ascertained. For treating symptomatic CDD, CDA was superior to ACDF in terms of overall, NDI, and neurological success; NDI and neck and arm pain scores; SF-36 PCS and MCS scores; patient satisfaction; ROM at the operative level; and secondary surgical procedures rate. Additionally, there was no significant difference between CDA and ACDF in the rate of adverse events. However, as the CDA procedure is a relatively newer operative technique, long-term results and evaluation are necessary before CDA is routinely used in clinical practice. Copyright © 2016 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  15. Bayesian statistics in medicine: a 25 year review.

    PubMed

    Ashby, Deborah

    2006-11-15

    This review examines the state of Bayesian thinking as Statistics in Medicine was launched in 1982, reflecting particularly on its applicability and uses in medical research. It then looks at each subsequent five-year epoch, with a focus on papers appearing in Statistics in Medicine, putting these in the context of major developments in Bayesian thinking and computation with reference to important books, landmark meetings and seminal papers. It charts the growth of Bayesian statistics as it is applied to medicine and makes predictions for the future. From sparse beginnings, where Bayesian statistics was barely mentioned, Bayesian statistics has now permeated all the major areas of medical statistics, including clinical trials, epidemiology, meta-analyses and evidence synthesis, spatial modelling, longitudinal modelling, survival modelling, molecular genetics and decision-making in respect of new technologies.

  16. Bayesian methods for the design and interpretation of clinical trials in very rare diseases

    PubMed Central

    Hampson, Lisa V; Whitehead, John; Eleftheriou, Despina; Brogan, Paul

    2014-01-01

    This paper considers the design and interpretation of clinical trials comparing treatments for conditions so rare that worldwide recruitment efforts are likely to yield total sample sizes of 50 or fewer, even when patients are recruited over several years. For such studies, the sample size needed to meet a conventional frequentist power requirement is clearly infeasible. Rather, the expectation of any such trial has to be limited to the generation of an improved understanding of treatment options. We propose a Bayesian approach for the conduct of rare-disease trials comparing an experimental treatment with a control where patient responses are classified as a success or failure. A systematic elicitation from clinicians of their beliefs concerning treatment efficacy is used to establish Bayesian priors for unknown model parameters. The process of determining the prior is described, including the possibility of formally considering results from related trials. As sample sizes are small, it is possible to compute all possible posterior distributions of the two success rates. A number of allocation ratios between the two treatment groups can be considered with a view to maximising the prior probability that the trial concludes recommending the new treatment when in fact it is non-inferior to control. Consideration of the extent to which opinion can be changed, even by data from the best feasible design, can help to determine whether such a trial is worthwhile. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:24957522

  17. Hierarchical Commensurate and Power Prior Models for Adaptive Incorporation of Historical Information in Clinical Trials

    PubMed Central

    Hobbs, Brian P.; Carlin, Bradley P.; Mandrekar, Sumithra J.; Sargent, Daniel J.

    2011-01-01

    Bayesian clinical trial designs offer the possibility of a substantially reduced sample size, increased statistical power, and reductions in cost and ethical hazard. However, when prior and current information conflict, Bayesian methods can lead to higher than expected Type I error, as well as the possibility of a costlier and lengthier trial. This motivates an investigation of the feasibility of hierarchical Bayesian methods for incorporating historical data that are adaptively robust to prior information that reveals itself to be inconsistent with the accumulating experimental data. In this paper, we present several models that allow the commensurability of the information in the historical and current data to determine how much historical information is used. A primary tool is elaborating the traditional power prior approach based upon a measure of commensurability for Gaussian data. We compare the frequentist performance of several methods using simulations, and close with an example of a colon cancer trial that illustrates a linear models extension of our adaptive borrowing approach. Our proposed methods produce more precise estimates of the model parameters, in particular conferring statistical significance to the observed reduction in tumor size for the experimental regimen as compared to the control regimen. PMID:21361892
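
    The borrowing mechanism that the commensurate and power prior models generalize can be sketched for a Gaussian mean with known variance: the historical likelihood is raised to a power a0 between 0 and 1, so a0 = 0 discards the historical controls and a0 = 1 pools them fully. The fixed a0 values and the summary statistics below are illustrative assumptions only; the point of the paper is to let a commensurability measure, rather than a fixed a0, govern the amount of borrowing.

```python
import numpy as np

def power_prior_posterior(ybar_hist, n_hist, ybar_curr, n_curr, sigma, a0):
    """Posterior mean and sd for a Gaussian mean (known sigma) when the
    historical likelihood is down-weighted by the power-prior exponent a0."""
    w_hist = a0 * n_hist          # historical data count as a0 * n_hist patients
    mean = (w_hist * ybar_hist + n_curr * ybar_curr) / (w_hist + n_curr)
    sd = sigma / np.sqrt(w_hist + n_curr)
    return mean, sd

# Illustrative numbers (not from the paper): historical vs current control means.
for a0 in (0.0, 0.25, 1.0):
    m, s = power_prior_posterior(ybar_hist=1.8, n_hist=200,
                                 ybar_curr=2.4, n_curr=50, sigma=1.5, a0=a0)
    print(f"a0={a0:4.2f}: posterior mean={m:5.3f}, sd={s:5.3f}")
```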

  18. Minimally Invasive Extracorporeal Circulation Circuit Is Not Inferior to Off-Pump Coronary Artery Bypass Grafting: Meta-Analysis Using the Bayesian Method.

    PubMed

    Winkler, Bernhard; Heinisch, Paul P; Gahl, Brigitta; Aghlmandi, Soheila; Jenni, Hans Jörg; Carrel, Thierry P

    2017-01-01

    The pathophysiologic side effects of cardiopulmonary bypass have already been identified. Minimally invasive extracorporeal circulation technologies (MiECT) and off-pump coronary artery bypass graft surgery (OPCABG) aim to reduce these problems. This meta-analysis provides a comparison of MiECT and OPCABG in randomized and observational studies. A fully probabilistic, Bayesian analysis of primary and secondary endpoints was conducted. MiECT does not give inferior results when compared with OPCABG. However, there is a trend toward borderline significantly higher blood loss in this group in randomized controlled trials. The question of whether MiECT is equivalent to OPCABG can be answered in the affirmative, but long-term follow-up data are needed to detect any advantage over time. Copyright © 2017 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  19. Accounting for correlation in network meta-analysis with multi-arm trials.

    PubMed

    Franchini, A J; Dias, S; Ades, A E; Jansen, J P; Welton, N J

    2012-06-01

    Multi-arm trials (trials with more than two arms) are particularly valuable forms of evidence for network meta-analysis (NMA). Trial results are available either as arm-level summaries, where effect measures are reported for each arm, or as contrast-level summaries, where differences in effect between each arm and a control arm chosen for the trial are reported. We show that likelihood-based inference in both contrast-level and arm-level formats is identical if there are only two-arm trials, but that if there are multi-arm trials, results from the contrast-level format will be incorrect unless correlations are accounted for in the likelihood. We review Bayesian and frequentist software for NMA with multi-arm trials that can account for this correlation and give an illustrative example of the difference in estimates that can be introduced if the correlations are not incorporated. We discuss methods of imputing correlations when they cannot be derived from the reported results and urge trialists to report the standard error for the control arm even if only contrast-level summaries are reported. Copyright © 2012 John Wiley & Sons, Ltd.
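
    The correlation at issue is easy to state for treatment-versus-control contrasts: every contrast from the same multi-arm trial shares the control arm, so each pair of contrasts has covariance equal to the variance of the control-arm estimate. A small sketch, with invented arm-level variances (not taken from the paper or its software review), of the within-trial covariance matrix that a contrast-level likelihood needs:

```python
import numpy as np

def contrast_covariance(var_control, var_treatments):
    """Within-trial covariance of treatment-vs-control contrasts in a multi-arm
    trial: Var(d_k) = var_control + var_treatments[k], and every pair of
    contrasts shares the control arm, so Cov(d_j, d_k) = var_control."""
    var_treatments = np.asarray(var_treatments, dtype=float)
    k = var_treatments.size
    cov = np.full((k, k), var_control)
    cov[np.diag_indices(k)] = var_control + var_treatments
    return cov

# Illustrative arm-level variances for a 3-arm trial (control + 2 treatments).
print(contrast_covariance(var_control=0.04, var_treatments=[0.05, 0.06]))
# Treating the two contrasts as independent (off-diagonal = 0) is exactly
# the error the paper warns about for contrast-level data.
```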

  20. Viral Linkage in HIV-1 Seroconverters and Their Partners in an HIV-1 Prevention Clinical Trial

    PubMed Central

    Campbell, Mary S.; Mullins, James I.; Hughes, James P.; Celum, Connie; Wong, Kim G.; Raugi, Dana N.; Sorensen, Stefanie; Stoddard, Julia N.; Zhao, Hong; Deng, Wenjie; Kahle, Erin; Panteleeff, Dana; Baeten, Jared M.; McCutchan, Francine E.; Albert, Jan; Leitner, Thomas; Wald, Anna; Corey, Lawrence; Lingappa, Jairam R.

    2011-01-01

    Background Characterization of viruses in HIV-1 transmission pairs will help identify biological determinants of infectiousness and evaluate candidate interventions to reduce transmission. Although HIV-1 sequencing is frequently used to substantiate linkage between newly HIV-1 infected individuals and their sexual partners in epidemiologic and forensic studies, viral sequencing is seldom applied in HIV-1 prevention trials. The Partners in Prevention HSV/HIV Transmission Study (ClinicalTrials.gov #NCT00194519) was a prospective randomized placebo-controlled trial that enrolled serodiscordant heterosexual couples to determine the efficacy of genital herpes suppression in reducing HIV-1 transmission; as part of the study analysis, HIV-1 sequences were examined for genetic linkage between seroconverters and their enrolled partners. Methodology/Principal Findings We obtained partial consensus HIV-1 env and gag sequences from blood plasma for 151 transmission pairs and performed deep sequencing of env in some cases. We analyzed sequences with phylogenetic techniques and developed a Bayesian algorithm to evaluate the probability of linkage. For linkage, we required monophyletic clustering between enrolled partners' sequences and a Bayesian posterior probability of ≥50%. Adjudicators classified each seroconversion, finding 108 (71.5%) linked, 40 (26.5%) unlinked, and 3 (2.0%) indeterminate transmissions, with linkage determined by consensus env sequencing in 91 (84%). Male seroconverters had a higher frequency of unlinked transmissions than female seroconverters. The likelihood of transmission from the enrolled partner was related to time on study, with increasing numbers of unlinked transmissions occurring after longer observation periods. Finally, baseline viral load was found to be significantly higher among linked transmitters. Conclusions/Significance In this first use of HIV-1 sequencing to establish endpoints in a large clinical trial, more than one-fourth of transmissions were unlinked to the enrolled partner, illustrating the relevance of these methods in the design of future HIV-1 prevention trials in serodiscordant couples. A hierarchy of sequencing techniques, analysis methods, and expert adjudication contributed to the linkage determination process. PMID:21399681

  1. Bayes to the Rescue: Continuous Positive Airway Pressure Has Less Mortality Than High-Flow Oxygen.

    PubMed

    Modesto I Alapont, Vicent; Khemani, Robinder G; Medina, Alberto; Del Villar Guerra, Pablo; Molina Cambra, Alfred

    2017-02-01

    The merits of high-flow nasal cannula oxygen versus bubble continuous positive airway pressure are debated in children with pneumonia, with suggestions that randomized controlled trials are needed. In light of a previous randomized controlled trial showing a trend for lower mortality with bubble continuous positive airway pressure, we sought to determine the probability that a new randomized controlled trial would find high-flow nasal cannula oxygen superior to bubble continuous positive airway pressure through a "robust" Bayesian analysis. Sample data were extracted from the trial by Chisti et al and, as required for a "robust" Bayesian analysis, we specified three prior distributions to represent clinically meaningful assumptions. These priors (reference, pessimistic, and optimistic) were used to generate three scenarios to represent the range of possible hypotheses. 1) "Reference": we believe bubble continuous positive airway pressure and high-flow nasal cannula oxygen are equally effective with the same uninformative reference priors; 2) "Sceptic on high-flow nasal cannula oxygen": we believe that bubble continuous positive airway pressure is better than high-flow nasal cannula oxygen (bubble continuous positive airway pressure has an optimistic prior and high-flow nasal cannula oxygen has a pessimistic prior); and 3) "Enthusiastic on high-flow nasal cannula oxygen": we believe that high-flow nasal cannula oxygen is better than bubble continuous positive airway pressure (high-flow nasal cannula oxygen has an optimistic prior and bubble continuous positive airway pressure has a pessimistic prior). Finally, posterior empiric Bayesian distributions were obtained through 100,000 Markov Chain Monte Carlo simulations. In all three scenarios, there was a high probability of greater mortality with high-flow nasal cannula oxygen compared with bubble continuous positive airway pressure (reference, 0.98; sceptic on high-flow nasal cannula oxygen, 0.982; enthusiastic on high-flow nasal cannula oxygen, 0.742). The posterior 95% credible interval for the difference in mortality indicated that a future randomized controlled trial would be extremely unlikely to find a mortality benefit for high-flow nasal cannula oxygen over bubble continuous positive airway pressure, regardless of the scenario. Interpreting these findings using the "range of practical equivalence" framework would recommend rejecting the hypothesis that high-flow nasal cannula oxygen is superior to bubble continuous positive airway pressure for these children. For children younger than 5 years with pneumonia, high-flow nasal cannula oxygen has higher mortality than bubble continuous positive airway pressure. A future randomized controlled trial in this population is unlikely to find high-flow nasal cannula oxygen superior to bubble continuous positive airway pressure.
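
    A stripped-down version of this kind of prior sensitivity analysis can be run with conjugate Beta-Binomial updating instead of MCMC. The death counts and the three pairs of Beta priors below are placeholders chosen only to mimic the reference/sceptic/enthusiast structure; they are not the Chisti et al data or the priors used in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Illustrative deaths / n per arm (placeholders, not the trial data).
deaths_cpap, n_cpap = 3, 80
deaths_hfnc, n_hfnc = 9, 80

# Three prior scenarios as Beta priors on mortality: (prior for CPAP, prior for HFNC).
scenarios = {
    "reference":    ((1, 1), (1, 1)),     # uninformative for both arms
    "sceptic_hfnc": ((2, 18), (6, 14)),   # optimistic on CPAP, pessimistic on HFNC
    "enthusiast":   ((6, 14), (2, 18)),   # the reverse
}

for name, (pr_cpap, pr_hfnc) in scenarios.items():
    post_cpap = stats.beta(pr_cpap[0] + deaths_cpap, pr_cpap[1] + n_cpap - deaths_cpap)
    post_hfnc = stats.beta(pr_hfnc[0] + deaths_hfnc, pr_hfnc[1] + n_hfnc - deaths_hfnc)
    diff = post_hfnc.rvs(100_000, random_state=rng) - post_cpap.rvs(100_000, random_state=rng)
    lo, hi = np.percentile(diff, [2.5, 97.5])
    print(f"{name:13s} P(HFNC mortality > CPAP) = {np.mean(diff > 0):.3f}  "
          f"95% CrI for the difference: [{lo:+.3f}, {hi:+.3f}]")
```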

  2. Multivariable and Bayesian Network Analysis of Outcome Predictors in Acute Aneurysmal Subarachnoid Hemorrhage: Review of a Pure Surgical Series in the Post-International Subarachnoid Aneurysm Trial Era.

    PubMed

    Zador, Zsolt; Huang, Wendy; Sperrin, Matthew; Lawton, Michael T

    2018-06-01

    Following the International Subarachnoid Aneurysm Trial (ISAT), evolving treatment modalities for acute aneurysmal subarachnoid hemorrhage (aSAH) have changed the case mix of patients undergoing urgent surgical clipping. Our aim was to update knowledge of outcome predictors by analyzing admission parameters in a pure surgical series using variable importance ranking and machine learning. We reviewed a single surgeon's case series of 226 patients suffering from aSAH treated with urgent surgical clipping. Predictions were made using logistic regression models, and predictive performance was assessed using areas under the receiver operating characteristic curve (AUC). We established variable importance ranking using partial Nagelkerke R2 scores. Probabilistic associations between variables were depicted using Bayesian networks, a method of machine learning. Importance ranking showed that World Federation of Neurosurgical Societies (WFNS) grade and age were the most influential outcome prognosticators. Inclusion of only these 2 predictors was sufficient to maintain model performance compared to when all variables were considered (AUC = 0.8222, 95% confidence interval (CI): 0.7646-0.88 vs 0.8218, 95% CI: 0.7616-0.8821, respectively, DeLong's P = .992). Bayesian networks showed that age and WFNS grade were associated with several variables such as laboratory results and cardiorespiratory parameters. Our study is the first to report early outcomes and formal predictor importance ranking following aSAH in a post-ISAT surgical case series. Models showed good predictive power with fewer relevant predictors than in similar size series. Bayesian networks proved to be a powerful tool in visualizing the widespread association of the 2 key predictors with admission variables, explaining their importance and demonstrating the potential for hypothesis generation.

  3. Bayesian probability of success for clinical trials using historical data

    PubMed Central

    Ibrahim, Joseph G.; Chen, Ming-Hui; Lakshminarayanan, Mani; Liu, Guanghan F.; Heyse, Joseph F.

    2015-01-01

    Developing sophisticated statistical methods for go/no-go decisions is crucial for clinical trials, as planning phase III or phase IV trials is costly and time consuming. In this paper, we develop a novel Bayesian methodology for determining the probability of success of a treatment regimen on the basis of the current data of a given trial. We introduce a new criterion for calculating the probability of success that allows for inclusion of covariates as well as allowing for historical data based on the treatment regimen, and patient characteristics. A new class of prior distributions and covariate distributions is developed to achieve this goal. The methodology is quite general and can be used with univariate or multivariate continuous or discrete data, and it generalizes Chuang-Stein’s work. This methodology will be invaluable for informing the scientist on the likelihood of success of the compound, while including the information of covariates for patient characteristics in the trial population for planning future pre-market or post-market trials. PMID:25339499

  4. Bayesian probability of success for clinical trials using historical data.

    PubMed

    Ibrahim, Joseph G; Chen, Ming-Hui; Lakshminarayanan, Mani; Liu, Guanghan F; Heyse, Joseph F

    2015-01-30

    Developing sophisticated statistical methods for go/no-go decisions is crucial for clinical trials, as planning phase III or phase IV trials is costly and time consuming. In this paper, we develop a novel Bayesian methodology for determining the probability of success of a treatment regimen on the basis of the current data of a given trial. We introduce a new criterion for calculating the probability of success that allows for inclusion of covariates as well as allowing for historical data based on the treatment regimen, and patient characteristics. A new class of prior distributions and covariate distributions is developed to achieve this goal. The methodology is quite general and can be used with univariate or multivariate continuous or discrete data, and it generalizes Chuang-Stein's work. This methodology will be invaluable for informing the scientist on the likelihood of success of the compound, while including the information of covariates for patient characteristics in the trial population for planning future pre-market or post-market trials. Copyright © 2014 John Wiley & Sons, Ltd.
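
    A common building block for this kind of probability-of-success calculation is assurance: averaging the power of the planned trial over the current posterior for the treatment effect. The sketch below shows only that ingredient for a normal endpoint, with an assumed posterior and planned sample size; the criterion developed in the paper is more general, accommodating covariates and historical data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Posterior for the treatment effect from the current data (assumed normal here;
# the mean and sd are illustrative, not taken from any particular trial).
post_mean, post_sd = 0.30, 0.15

# Planned phase III trial: per-arm size and known outcome sd (assumptions).
n_per_arm, sigma = 250, 1.0
se_future = sigma * np.sqrt(2.0 / n_per_arm)
z_crit = stats.norm.ppf(0.975)

# Probability of success = average, over the posterior of the true effect,
# of the chance that the future trial's two-sided test is significant.
effects = rng.normal(post_mean, post_sd, size=200_000)
power_given_effect = stats.norm.sf(z_crit - effects / se_future)
print("Bayesian probability of success ~", power_given_effect.mean().round(3))
```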

  5. Long term outcomes of new generation drug eluting stents versus coronary artery bypass grafting for multivessel and/or left main coronary artery disease. A Bayesian network meta-analysis of randomized controlled trials.

    PubMed

    Mina, George S; Watti, Hussam; Soliman, Demiana; Shewale, Anand; Atkins, Jessica; Reddy, Pratap; Dominic, Paari

    2018-01-05

    Most data guiding revascularization of multivessel disease (MVD) and/or left main disease (LMD) favor coronary artery bypass grafting (CABG) over percutaneous coronary intervention (PCI). However, those data are based on trials comparing CABG to bare metal stents (BMS) or old generation drug eluting stents (OG-DES). Hence, it is essential to compare outcomes of CABG with those of new generation drug eluting stents (NG-DES). We searched the PubMed and Cochrane databases for trials evaluating revascularization of MVD and/or LMD with CABG and/or PCI. A Bayesian network meta-analysis was performed to calculate odds ratios (OR) and 95% credible intervals (CrI). The primary outcome was major adverse cardiovascular events (MACE) at 3-5 years. Secondary outcomes were mortality, cerebrovascular accidents (CVA), myocardial infarction (MI) and repeat revascularization. We included 10 trials with a total of 9287 patients. CABG was associated with lower MACE when compared to BMS or OG-DES. However, MACE was not significantly different between CABG and NG-DES (OR 0.79, CrI 0.45-1.40). Moreover, there were no significant differences between CABG and NG-DES in mortality (OR 0.78, CrI 0.45-1.37), CVA (OR 0.93, CrI 0.35-2.2) or MI (OR 0.6, CrI 0.17-2.0). On the other hand, CABG was associated with lower repeat revascularization (OR 0.55, CrI 0.36-0.84). Our study suggests that NG-DES is an acceptable alternative to CABG in patients with MVD and/or LMD. However, repeat revascularization remains lower with CABG than with PCI. Copyright © 2018. Published by Elsevier Inc.

  6. Combined N-of-1 trials to investigate mexiletine in non-dystrophic myotonia using a Bayesian approach; study rationale and protocol.

    PubMed

    Stunnenberg, Bas C; Woertman, Willem; Raaphorst, Joost; Statland, Jeffrey M; Griggs, Robert C; Timmermans, Janneke; Saris, Christiaan G; Schouwenberg, Bas J; Groenewoud, Hans M; Stegeman, Dick F; van Engelen, Baziel G M; Drost, Gea; van der Wilt, Gert Jan

    2015-03-25

    Obtaining evidence of the clinical and cost-effectiveness of treatments for patients with rare diseases is a challenge. Non-dystrophic myotonia (NDM) is a group of inherited, rare muscle diseases characterized by muscle stiffness. The reimbursement of mexiletine, the expert opinion drug for NDM, has been discontinued in some countries due to a lack of independent randomized controlled trials (RCTs). It remains unclear, however, which concessions towards the level 1 evidence needed for coverage decisions can be accepted in rare diseases. Considering the large number of rare diseases with a lack of treatment evidence, more experience with innovative trial designs is needed. Both NDM and mexiletine are well suited for an N-of-1 trial design. A Bayesian approach allows for the combination of N-of-1 trials, which enables the assessment of outcomes on the patient and group level simultaneously. We will combine 30 individual, double-blind, randomized, placebo-controlled N-of-1 trials of mexiletine (600 mg daily) vs. placebo in genetically confirmed NDM patients using hierarchical Bayesian modeling. Our results will be compared and combined with the main results of an international cross-over RCT (mexiletine vs. placebo in NDM) published in 2012 that will be used as an informative prior. Eligibility criteria, treatment regimen, end-points and measurement instruments are similar to those used in the international cross-over RCT. The treatment of patients with NDM with mexiletine offers a unique opportunity to compare the outcomes and efficiency of novel N-of-1 trial-based designs and conventional approaches in producing evidence of the clinical and cost-effectiveness of treatments for patients with rare diseases. ClinicalTrials.gov Identifier: NCT02045667.

  7. Bayesian hierarchical modeling for detecting safety signals in clinical trials.

    PubMed

    Xia, H Amy; Ma, Haijun; Carlin, Bradley P

    2011-09-01

    Detection of safety signals from clinical trial adverse event data is critical in drug development, but carries a challenging statistical multiplicity problem. Bayesian hierarchical mixture modeling is appealing for its ability to borrow strength across subgroups in the data, as well as to moderate extreme findings most likely due merely to chance. We implement such a model for subject incidence (Berry and Berry, 2004) using a binomial likelihood, and extend it to subject-year adjusted incidence rate estimation under a Poisson likelihood. We use simulation to choose a signal detection threshold, and illustrate some effective graphics for displaying the flagged signals.

  8. An experimental validation of genomic selection in octoploid strawberry

    PubMed Central

    Gezan, Salvador A; Osorio, Luis F; Verma, Sujeet; Whitaker, Vance M

    2017-01-01

    The primary goal of genomic selection is to increase genetic gains for complex traits by predicting the performance of individuals for which phenotypic data are not available. The objective of this study was to experimentally evaluate the potential of genomic selection in strawberry breeding and to define a strategy for its implementation. Four clonally replicated field trials, two in each of 2 years and comprising a total of 1628 individuals, were established in 2013–2014 and 2014–2015. Five complex yield and fruit quality traits with moderate to low heritability were assessed in each trial. High-density genotyping was performed with the Affymetrix Axiom IStraw90 single-nucleotide polymorphism array, and 17 479 polymorphic markers were chosen for analysis. Several methods were compared, including Genomic BLUP, Bayes B, Bayes C, Bayesian LASSO Regression, Bayesian Ridge Regression and Reproducing Kernel Hilbert Spaces. Cross-validation within training populations resulted in higher values than for true validations across trials. For true validations, Bayes B gave the highest predictive abilities on average and also the highest selection efficiencies, particularly for the yield traits, which had the lowest heritabilities. Selection efficiencies using Bayes B for parent selection ranged from 74% for average fruit weight to 34% for early marketable yield. A breeding strategy is proposed in which advanced selection trials are utilized as training populations and in which genomic selection can reduce the breeding cycle from 3 to 2 years for a subset of untested parents based on their predicted genomic breeding values. PMID:28090334

  9. QUEST - A Bayesian adaptive psychometric method

    NASA Technical Reports Server (NTRS)

    Watson, A. B.; Pelli, D. G.

    1983-01-01

    An adaptive psychometric procedure that places each trial at the current most probable Bayesian estimate of threshold is described. The procedure takes advantage of the common finding that the human psychometric function is invariant in form when expressed as a function of log intensity. The procedure is simple, fast, and efficient, and may be easily implemented on any computer.
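
    The core of the procedure can be sketched with a discretized posterior over the (log) threshold that is updated by Bayes' rule after every response, with the next trial placed at the current posterior mode. The psychometric function, guess and lapse rates, and the simulated observer below are assumptions for illustration, not the exact parameterization of QUEST.

```python
import numpy as np

rng = np.random.default_rng(3)

# Grid of candidate thresholds (log10 intensity) and a Gaussian prior over them.
grid = np.linspace(-4.0, 0.0, 401)
log_prior = -0.5 * ((grid - (-2.0)) / 1.0) ** 2        # prior mode at -2.0

def p_correct(log_int, threshold, slope=3.5, guess=0.5, lapse=0.01):
    """Weibull-style psychometric function of log intensity (assumed form)."""
    p = 1.0 - np.exp(-10.0 ** (slope * (log_int - threshold)))
    return guess + (1.0 - guess - lapse) * p

true_threshold = -2.6                                   # simulated observer
log_post = log_prior.copy()

for trial in range(64):
    test_level = grid[np.argmax(log_post)]              # place trial at posterior mode
    correct = rng.random() < p_correct(test_level, true_threshold)
    likelihood = p_correct(test_level, grid)             # likelihood over all thresholds
    log_post += np.log(likelihood if correct else 1.0 - likelihood)

post = np.exp(log_post - log_post.max())
post /= post.sum()
print("posterior-mode threshold estimate:", grid[np.argmax(post)])
```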

  10. 75 FR 6209 - Guidance for Industry and Food and Drug Administration; Guidance for the Use of Bayesian...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-08

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration [Docket No. FDA-2006-D-0410] (formerly Docket No. 2006D-0191) Guidance for Industry and Food and Drug Administration; Guidance for the Use of Bayesian Statistics in Medical Device Clinical Trials; Availability AGENCY: Food and Drug...

  11. Effect of Industry Sponsorship on Dental Restorative Trials.

    PubMed

    Schwendicke, F; Tu, Y-K; Blunck, U; Paris, S; Göstemeyer, G

    2016-01-01

    Industry sponsorship was found to potentially introduce bias into clinical trials. We assessed the effects of industry sponsorship on the design, comparator choice, and findings of randomized controlled trials on dental restorative materials. A systematic review was performed via MEDLINE, CENTRAL, and EMBASE. Randomized trials on dental restorative and adhesive materials published 2005 to 2015 were included. The design of sponsored and nonsponsored trials was compared statistically (risk of bias, treatment indication, setting, transferability, sample size). Comparator choice and network geometry of sponsored and nonsponsored trials were assessed via network analysis. Material performance rankings in different trial types were estimated via Bayesian network meta-analysis. Overall, 114 studies were included (15,321 restorations in 5,232 patients). We found 21 and 41 (18% and 36%) trials being clearly or possibly industry sponsored, respectively. Trial design of sponsored and nonsponsored trials did not significantly differ for most assessed items. Sponsored trials evaluated restorations of load-bearing cavities significantly more often than nonsponsored trials, had longer follow-up periods, and showed significantly increased risk of detection bias. Regardless of sponsorship status, comparisons were mainly performed within material classes. The proportion of trials comparing against gold standard restorative or adhesive materials did not differ between trial types. If ranked for performance according to the need to re-treat (best: least re-treatments), most material combinations were ranked similarly in sponsored and nonsponsored trials. The effect of industry sponsorship on dental restorative trials seems limited. © International & American Associations for Dental Research 2015.

  12. Design of a Bayesian adaptive phase 2 proof-of-concept trial for BAN2401, a putative disease-modifying monoclonal antibody for the treatment of Alzheimer's disease.

    PubMed

    Satlin, Andrew; Wang, Jinping; Logovinsky, Veronika; Berry, Scott; Swanson, Chad; Dhadda, Shobha; Berry, Donald A

    2016-01-01

    Recent failures in phase 3 clinical trials in Alzheimer's disease (AD) suggest that novel approaches to drug development are urgently needed. Phase 3 risk can be mitigated by ensuring that clinical efficacy is established before initiating confirmatory trials, but traditional phase 2 trials in AD can be lengthy and costly. We designed a Bayesian adaptive phase 2, proof-of-concept trial with a clinical endpoint to evaluate BAN2401, a monoclonal antibody targeting amyloid protofibrils. The study design used dose response and longitudinal modeling. Simulations were used to refine study design features to achieve optimal operating characteristics. The study design includes five active treatment arms plus placebo, a clinical outcome, 12-month primary endpoint, and a maximum sample size of 800. The average overall probability of success is ≥80% when at least one dose shows a treatment effect that would be considered clinically meaningful. Using frequent interim analyses, the randomization ratios are adapted based on the clinical endpoint, and the trial can be stopped for success or futility before full enrollment. Bayesian statistics can enhance the efficiency of analyzing the study data. The adaptive randomization generates more data on doses that appear to be more efficacious, which can improve dose selection for phase 3. The interim analyses permit stopping as soon as a predefined signal is detected, which can accelerate decision making. Both features can reduce the size and duration of the trial. This study design can mitigate some of the risks associated with advancing to phase 3 in the absence of data demonstrating clinical efficacy. Limitations to the approach are discussed.
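
    One ingredient of such designs, response-adaptive randomization, can be sketched in a few lines for a binary outcome: Beta-Binomial posteriors per arm, and allocation probabilities tilted toward the arms most likely to be best. The response rates, arm count, and placebo floor below are invented for illustration and are far simpler than the dose-response and longitudinal models used in the actual BAN2401 design.

```python
import numpy as np

rng = np.random.default_rng(5)

true_resp = np.array([0.25, 0.25, 0.30, 0.35, 0.45, 0.50])   # placebo + 5 doses (assumed)
successes = np.zeros(6)
failures = np.zeros(6)

for patient in range(400):
    # Posterior probability that each arm is best, via Monte Carlo from Beta(1+s, 1+f).
    draws = rng.beta(1 + successes, 1 + failures, size=(4000, 6))
    p_best = np.bincount(draws.argmax(axis=1), minlength=6) / 4000
    # Randomize the next patient with probabilities proportional to p_best,
    # protecting a minimum allocation to placebo (arm 0) for comparison.
    probs = p_best.copy()
    probs[0] = max(probs[0], 0.15)
    probs /= probs.sum()
    arm = rng.choice(6, p=probs)
    outcome = rng.random() < true_resp[arm]
    successes[arm] += outcome
    failures[arm] += 1 - outcome

print("patients per arm:", (successes + failures).astype(int))
print("posterior means :", np.round((1 + successes) / (2 + successes + failures), 2))
```

    In a design like the one described above, the same posterior draws would also feed interim success and futility rules, which is what allows the trial to stop before full enrollment.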

  13. Bayesian Dose-Response Modeling in Sparse Data

    NASA Astrophysics Data System (ADS)

    Kim, Steven B.

    This book discusses Bayesian dose-response modeling in small samples applied to two different settings. The first setting is early phase clinical trials, and the second setting is toxicology studies in cancer risk assessment. In early phase clinical trials, experimental units are humans who are actual patients. Prior to a clinical trial, opinions from multiple subject area experts are generally more informative than the opinion of a single expert, but we may face a dilemma when they have disagreeing prior opinions. In this regard, we consider compromising the disagreement and compare two different approaches for making a decision. In addition to combining multiple opinions, we also address balancing two levels of ethics in early phase clinical trials. The first level is individual-level ethics which reflects the perspective of trial participants. The second level is population-level ethics which reflects the perspective of future patients. We extensively compare two existing statistical methods which focus on each perspective and propose a new method which balances the two conflicting perspectives. In toxicology studies, experimental units are living animals. Here we focus on a potential non-monotonic dose-response relationship which is known as hormesis. Briefly, hormesis is a phenomenon which can be characterized by a beneficial effect at low doses and a harmful effect at high doses. In cancer risk assessments, the estimation of a parameter, which is known as a benchmark dose, can be highly sensitive to a class of assumptions, monotonicity or hormesis. In this regard, we propose a robust approach which considers both monotonicity and hormesis as possibilities. In addition, we discuss statistical hypothesis testing for hormesis and consider various experimental designs for detecting hormesis based on Bayesian decision theory. Past experiments have not been optimally designed for testing for hormesis, and some Bayesian optimal designs may not be optimal under a wrong parametric assumption. In this regard, we consider a robust experimental design which does not require any parametric assumption.

  14. Effect of supersaturated oxygen delivery on infarct size after percutaneous coronary intervention in acute myocardial infarction.

    PubMed

    Stone, Gregg W; Martin, Jack L; de Boer, Menko-Jan; Margheri, Massimo; Bramucci, Ezio; Blankenship, James C; Metzger, D Christopher; Gibbons, Raymond J; Lindsay, Barbara S; Weiner, Bonnie H; Lansky, Alexandra J; Krucoff, Mitchell W; Fahy, Martin; Boscardin, W John

    2009-10-01

    Myocardial salvage is often suboptimal after percutaneous coronary intervention in ST-segment elevation myocardial infarction. Posthoc subgroup analysis from a previous trial (AMIHOT I) suggested that intracoronary delivery of supersaturated oxygen (SSO(2)) may reduce infarct size in patients with large ST-segment elevation myocardial infarction treated early. A prospective, multicenter trial was performed in which 301 patients with anterior ST-segment elevation myocardial infarction undergoing percutaneous coronary intervention within 6 hours of symptom onset were randomized to a 90-minute intracoronary SSO(2) infusion in the left anterior descending artery infarct territory (n=222) or control (n=79). The primary efficacy measure was infarct size in the intention-to-treat population (powered for superiority), and the primary safety measure was composite major adverse cardiovascular events at 30 days in the intention-to-treat and per-protocol populations (powered for noninferiority), with Bayesian hierarchical modeling used to allow partial pooling of evidence from AMIHOT I. Among 281 randomized patients with tc-99m-sestamibi single-photon emission computed tomography data in AMIHOT II, median (interquartile range) infarct size was 26.5% (8.5%, 44%) with control compared with 20% (6%, 37%) after SSO(2). The pooled adjusted infarct size was 25% (7%, 42%) with control compared with 18.5% (3.5%, 34.5%) after SSO(2) (P(Wilcoxon)=0.02; Bayesian posterior probability of superiority, 96.9%). The Bayesian pooled 30-day mean (+/-SE) rates of major adverse cardiovascular events were 5.0+/-1.4% for control and 5.9+/-1.4% for SSO(2) by intention-to-treat, and 5.1+/-1.5% for control and 4.7+/-1.5% for SSO(2) by per-protocol analysis (posterior probability of noninferiority, 99.5% and 99.9%, respectively). Among patients with anterior ST-segment elevation myocardial infarction undergoing percutaneous coronary intervention within 6 hours of symptom onset, infusion of SSO(2) into the left anterior descending artery infarct territory results in a significant reduction in infarct size with noninferior rates of major adverse cardiovascular events at 30 days. Clinical Trial Registration- clinicaltrials.gov Identifier: NCT00175058.

  15. Bayesian Approach for Flexible Modeling of Semicompeting Risks Data

    PubMed Central

    Han, Baoguang; Yu, Menggang; Dignam, James J.; Rathouz, Paul J.

    2016-01-01

    Semicompeting risks data arise when two types of events, non-terminal and terminal, are observed. When the terminal event occurs first, it censors the non-terminal event, but not vice versa. To account for possible dependent censoring of the non-terminal event by the terminal event and to improve prediction of the terminal event using the non-terminal event information, it is crucial to model their association properly. Motivated by a breast cancer clinical trial data analysis, we extend the well-known illness-death models to allow flexible random effects to capture heterogeneous association structures in the data. Our extension also represents a generalization of the popular shared frailty models that usually assume that the non-terminal event does not affect the hazards of the terminal event beyond a frailty term. We propose a unified Bayesian modeling approach that can utilize existing software packages for both model fitting and individual-specific event prediction. The approach is demonstrated via both simulation studies and a breast cancer data set analysis. PMID:25274445

  16. A practical Bayesian stepped wedge design for community-based cluster-randomized clinical trials: The British Columbia Telehealth Trial.

    PubMed

    Cunanan, Kristen M; Carlin, Bradley P; Peterson, Kevin A

    2016-12-01

    Many clinical trial designs are impractical for community-based clinical intervention trials. Stepped wedge trial designs provide practical advantages, but few descriptions exist of their clinical implementational features, statistical design efficiencies, and limitations. Enhance efficiency of stepped wedge trial designs by evaluating the impact of design characteristics on statistical power for the British Columbia Telehealth Trial. The British Columbia Telehealth Trial is a community-based, cluster-randomized, controlled clinical trial in rural and urban British Columbia. To determine the effect of an Internet-based telehealth intervention on healthcare utilization, 1000 subjects with an existing diagnosis of congestive heart failure or type 2 diabetes will be enrolled from 50 clinical practices. Hospital utilization is measured using a composite of disease-specific hospital admissions and emergency visits. The intervention comprises online telehealth data collection and counseling provided to support a disease-specific action plan developed by the primary care provider. The planned intervention is sequentially introduced across all participating practices. We adopt a fully Bayesian, Markov chain Monte Carlo-driven statistical approach, wherein we use simulation to determine the effect of cluster size, sample size, and crossover interval choice on type I error and power to evaluate differences in hospital utilization. For our Bayesian stepped wedge trial design, simulations suggest moderate decreases in power when crossover intervals from control to intervention are reduced from every 3 to 2 weeks, and dramatic decreases in power as the numbers of clusters decrease. Power and type I error performance were not notably affected by the addition of nonzero cluster effects or a temporal trend in hospitalization intensity. Stepped wedge trial designs that intervene in small clusters across longer periods can provide enhanced power to evaluate comparative effectiveness, while offering practical implementation advantages in geographic stratification, temporal change, use of existing data, and resource distribution. Current population estimates were used; however, models may not reflect actual event rates during the trial. In addition, temporal or spatial heterogeneity can bias treatment effect estimates. © The Author(s) 2016.

  17. Value-based decision-making battery: A Bayesian adaptive approach to assess impulsive and risky behavior.

    PubMed

    Pooseh, Shakoor; Bernhardt, Nadine; Guevara, Alvaro; Huys, Quentin J M; Smolka, Michael N

    2018-02-01

    Using simple mathematical models of choice behavior, we present a Bayesian adaptive algorithm to assess measures of impulsive and risky decision making. Practically, these measures are characterized by discounting rates and are used to classify individuals or population groups, to distinguish unhealthy behavior, and to predict developmental courses. However, a constant demand for improved tools to assess these constructs remains unanswered. The algorithm is based on trial-by-trial observations. At each step, a choice is made between immediate (certain) and delayed (risky) options. Then the current parameter estimates are updated by the likelihood of observing the choice, and the next offers are provided from the indifference point, so that they will acquire the most informative data based on the current parameter estimates. The procedure continues for a certain number of trials in order to reach a stable estimation. The algorithm is discussed in detail for the delay discounting case, and results from decision making under risk for gains, losses, and mixed prospects are also provided. Simulated experiments using prescribed parameter values were performed to justify the algorithm in terms of the reproducibility of its parameters for individual assessments, and to test the reliability of the estimation procedure in a group-level analysis. The algorithm was implemented as an experimental battery to measure temporal and probability discounting rates together with loss aversion, and was tested on a healthy participant sample.
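
    The trial-by-trial logic can be sketched for the delay-discounting module with a grid posterior over a hyperbolic discounting rate k and a choice-noise parameter: after each choice the grid is reweighted by the likelihood of that choice, and the next immediate offer is placed at the currently estimated indifference point. The offer amounts, delay, softmax choice rule, and simulated participant below are assumptions for illustration, not the battery's actual settings.

```python
import numpy as np

rng = np.random.default_rng(9)

# Grid over the hyperbolic discounting rate k (log-spaced) and softmax noise beta.
ks = np.logspace(-3, 0, 60)
betas = np.linspace(0.5, 10.0, 30)
K, B = np.meshgrid(ks, betas, indexing="ij")
log_post = np.zeros_like(K)                      # flat prior on the grid

A_delayed, delay = 100.0, 30.0                   # fixed delayed offer (assumed)
true_k, true_beta = 0.05, 4.0                    # simulated participant

for trial in range(40):
    # Point estimate of k -> next immediate offer at the estimated indifference point.
    k_hat = K.ravel()[np.argmax(log_post)]
    immediate = A_delayed / (1.0 + k_hat * delay)

    # Simulated choice: softmax over subjective values (True = chooses immediate).
    v_del_true = A_delayed / (1.0 + true_k * delay)
    p_imm_true = 1.0 / (1.0 + np.exp(-true_beta * (immediate - v_del_true) / A_delayed))
    choice_imm = rng.random() < p_imm_true

    # Bayesian update of the whole grid by the likelihood of the observed choice.
    v_del_grid = A_delayed / (1.0 + K * delay)
    p_imm = 1.0 / (1.0 + np.exp(-B * (immediate - v_del_grid) / A_delayed))
    log_post += np.log(p_imm if choice_imm else 1.0 - p_imm)

print("estimated k:", K.ravel()[np.argmax(log_post)], " true k:", true_k)
```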

  18. Valence-Dependent Belief Updating: Computational Validation

    PubMed Central

    Kuzmanovic, Bojana; Rigoux, Lionel

    2017-01-01

    People tend to update beliefs about their future outcomes in a valence-dependent way: they are likely to incorporate good news and to neglect bad news. However, belief formation is a complex process which depends not only on motivational factors such as the desire for favorable conclusions, but also on multiple cognitive variables such as prior beliefs, knowledge about personal vulnerabilities and resources, and the size of the probabilities and estimation errors. Thus, we applied computational modeling in order to test for valence-induced biases in updating while formally controlling for relevant cognitive factors. We compared biased and unbiased Bayesian models of belief updating, and specified alternative models based on reinforcement learning. The experiment consisted of 80 trials with 80 different adverse future life events. In each trial, participants estimated the base rate of one of these events and estimated their own risk of experiencing the event before and after being confronted with the actual base rate. Belief updates corresponded to the difference between the two self-risk estimates. Valence-dependent updating was assessed by comparing trials with good news (better-than-expected base rates) with trials with bad news (worse-than-expected base rates). After receiving bad relative to good news, participants' updates were smaller and deviated more strongly from rational Bayesian predictions, indicating a valence-induced bias. Model comparison revealed that the biased (i.e., optimistic) Bayesian model of belief updating better accounted for data than the unbiased (i.e., rational) Bayesian model, confirming that the valence of the new information influenced the amount of updating. Moreover, alternative computational modeling based on reinforcement learning demonstrated higher learning rates for good than for bad news, as well as a moderating role of personal knowledge. Finally, in this specific experimental context, the approach based on reinforcement learning was superior to the Bayesian approach. The computational validation of valence-dependent belief updating represents a novel support for a genuine optimism bias in human belief formation. Moreover, the precise control of relevant cognitive variables justifies the conclusion that the motivation to adopt the most favorable self-referential conclusions biases human judgments. PMID:28706499

  19. Valence-Dependent Belief Updating: Computational Validation.

    PubMed

    Kuzmanovic, Bojana; Rigoux, Lionel

    2017-01-01

    People tend to update beliefs about their future outcomes in a valence-dependent way: they are likely to incorporate good news and to neglect bad news. However, belief formation is a complex process which depends not only on motivational factors such as the desire for favorable conclusions, but also on multiple cognitive variables such as prior beliefs, knowledge about personal vulnerabilities and resources, and the size of the probabilities and estimation errors. Thus, we applied computational modeling in order to test for valence-induced biases in updating while formally controlling for relevant cognitive factors. We compared biased and unbiased Bayesian models of belief updating, and specified alternative models based on reinforcement learning. The experiment consisted of 80 trials with 80 different adverse future life events. In each trial, participants estimated the base rate of one of these events and estimated their own risk of experiencing the event before and after being confronted with the actual base rate. Belief updates corresponded to the difference between the two self-risk estimates. Valence-dependent updating was assessed by comparing trials with good news (better-than-expected base rates) with trials with bad news (worse-than-expected base rates). After receiving bad relative to good news, participants' updates were smaller and deviated more strongly from rational Bayesian predictions, indicating a valence-induced bias. Model comparison revealed that the biased (i.e., optimistic) Bayesian model of belief updating better accounted for data than the unbiased (i.e., rational) Bayesian model, confirming that the valence of the new information influenced the amount of updating. Moreover, alternative computational modeling based on reinforcement learning demonstrated higher learning rates for good than for bad news, as well as a moderating role of personal knowledge. Finally, in this specific experimental context, the approach based on reinforcement learning was superior to the Bayesian approach. The computational validation of valence-dependent belief updating represents a novel support for a genuine optimism bias in human belief formation. Moreover, the precise control of relevant cognitive variables justifies the conclusion that the motivation to adopt the most favorable self-referential conclusions biases human judgments.

  20. Using historical control information for the design and analysis of clinical trials with overdispersed count data.

    PubMed

    Gsteiger, Sandro; Neuenschwander, Beat; Mercier, Francois; Schmidli, Heinz

    2013-09-20

    Results from clinical trials are never interpreted in isolation. Previous studies in a similar setting provide valuable information for designing a new trial. For the analysis, however, the use of trial-external information is challenging and therefore controversial, although it seems attractive from an ethical or efficiency perspective. Here, we consider the formal use of historical control data on lesion counts in a multiple sclerosis trial. The approach to incorporating historical data is Bayesian, in that historical information is captured in a prior that accounts for between-trial variability and hence leads to discounting of historical data. We extend the meta-analytic-predictive approach, a random-effects meta-analysis of historical data combined with the prediction of the parameter in the new trial, from normal to overdispersed count data of individual-patient or aggregate-trial format. We discuss the prior derivation for the lesion mean count in the control group of the new trial for two populations. For the general population (without baseline enrichment), with 1936 control patients from nine historical trials, between-trial variability was moderate to substantial, leading to a prior effective sample size of about 45 control patients. For the more homogenous population (with enrichment), with 412 control patients from five historical trials, the prior effective sample size was approximately 63 patients. Although these numbers are small relative to the historical data, they are fairly typical in settings where between-trial heterogeneity is moderate. For phase II, reducing the number of control patients by 45 or by 63 may be an attractive option in many multiple sclerosis trials. Copyright © 2013 John Wiley & Sons, Ltd.
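
    The meta-analytic-predictive idea can be sketched on a normal approximation: the historical control estimates inform a hierarchical model with a between-trial standard deviation, the prior for the new trial's control parameter is the predictive distribution for a new study, and its variance translates into a prior effective sample size. The historical estimates, the half-normal heterogeneity prior, and the per-patient variance below are invented placeholders; the paper works with overdispersed count data rather than this normal shortcut.

```python
import numpy as np

rng = np.random.default_rng(13)

# Historical control groups: estimated log lesion-count means and standard errors
# (invented placeholders; the paper models overdispersed counts directly).
y = np.array([0.95, 1.10, 0.80, 1.25, 1.05])
se = np.array([0.10, 0.08, 0.12, 0.09, 0.11])

# Posterior over (mu, tau) on a grid: y_i ~ N(mu, se_i^2 + tau^2),
# flat prior on mu, half-normal(scale = 0.5) prior on the between-trial sd tau.
mus = np.linspace(0.4, 1.7, 261)
taus = np.linspace(0.0, 1.0, 201)
M, T = np.meshgrid(mus, taus, indexing="ij")
var = se[None, None, :] ** 2 + T[..., None] ** 2
loglik = -0.5 * np.sum(np.log(var) + (y - M[..., None]) ** 2 / var, axis=-1)
logpost = loglik - 0.5 * (T / 0.5) ** 2
post = np.exp(logpost - logpost.max())
post /= post.sum()

# Meta-analytic-predictive distribution for the control parameter of a NEW trial:
# draw (mu, tau) from the grid posterior, then theta_new ~ N(mu, tau^2).
idx = rng.choice(post.size, size=50_000, p=post.ravel())
mu_s, tau_s = M.ravel()[idx], T.ravel()[idx]
theta_new = rng.normal(mu_s, tau_s)
pred_sd = theta_new.std()

# Crude effective sample size: assumed per-patient variance over predictive variance.
sigma_patient = 0.9
print("predictive sd:", round(pred_sd, 3),
      " prior effective sample size ~", round(sigma_patient ** 2 / pred_sd ** 2, 1))
```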

  1. Meta-analysis of two studies in the presence of heterogeneity with applications in rare diseases.

    PubMed

    Friede, Tim; Röver, Christian; Wandel, Simon; Neuenschwander, Beat

    2017-07-01

    Random-effects meta-analyses are used to combine evidence of treatment effects from multiple studies. Since treatment effects may vary across trials due to differences in study characteristics, heterogeneity in treatment effects between studies must be accounted for to achieve valid inference. The standard model for random-effects meta-analysis assumes approximately normal effect estimates and a normal random-effects model. However, standard methods based on this model ignore the uncertainty in estimating the between-trial heterogeneity. In the special setting of only two studies and in the presence of heterogeneity, we investigate here alternatives such as the Hartung-Knapp-Sidik-Jonkman method (HKSJ), the modified Knapp-Hartung method (mKH, a variation of the HKSJ method) and Bayesian random-effects meta-analyses with priors covering plausible heterogeneity values; R code to reproduce the examples is presented in an appendix. The properties of these methods are assessed by applying them to five examples from various rare diseases and by a simulation study. Whereas the standard method based on normal quantiles has poor coverage, the HKSJ and mKH generally lead to very long, and therefore inconclusive, confidence intervals. The Bayesian intervals on the whole show satisfying properties and offer a reasonable compromise between these two extremes. © 2016 The Authors. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
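
    Why the Hartung-Knapp-Sidik-Jonkman interval becomes so long with two studies is visible directly in the formulas: the variance rescaling rests on a single degree of freedom and the interval uses a t quantile with one degree of freedom. The sketch below, with invented log odds ratios, contrasts the standard normal-quantile interval with the HKSJ interval; a Bayesian counterpart would instead average over a prior on the between-trial standard deviation restricted to plausible values, as the paper recommends.

```python
import numpy as np
from scipy import stats

# Two studies only (illustrative log odds ratios and standard errors).
y = np.array([-0.60, -0.10])
se = np.array([0.25, 0.20])

# DerSimonian-Laird between-trial variance and the standard random-effects CI.
w = 1.0 / se**2
q = np.sum(w * (y - np.sum(w * y) / w.sum())**2)
tau2 = max(0.0, (q - 1) / (w.sum() - np.sum(w**2) / w.sum()))   # k - 1 = 1
w_re = 1.0 / (se**2 + tau2)
mu = np.sum(w_re * y) / w_re.sum()
se_std = np.sqrt(1.0 / w_re.sum())
ci_std = mu + np.array([-1, 1]) * stats.norm.ppf(0.975) * se_std

# Hartung-Knapp-Sidik-Jonkman: rescaled variance and a t quantile with k - 1 = 1 df,
# which is why the interval explodes with only two studies.
q_hksj = np.sum(w_re * (y - mu)**2) / (len(y) - 1)
se_hksj = np.sqrt(q_hksj / w_re.sum())
ci_hksj = mu + np.array([-1, 1]) * stats.t.ppf(0.975, df=len(y) - 1) * se_hksj

print("pooled log OR      :", round(mu, 3))
print("normal-quantile CI :", np.round(ci_std, 2))
print("HKSJ CI            :", np.round(ci_hksj, 2))
```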

  2. GAD vaccine reduces insulin loss in recently diagnosed type 1 diabetes: findings from a Bayesian meta-analysis.

    PubMed

    Beam, Craig A; MacCallum, Colleen; Herold, Kevan C; Wherrett, Diane K; Palmer, Jerry; Ludvigsson, Johnny

    2017-01-01

    GAD is a major target of the autoimmune response that occurs in type 1 diabetes mellitus. Randomised controlled clinical trials of a GAD + alum vaccine in human participants have so far given conflicting results. In this study, we sought to see whether a clearer answer to the question of whether GAD65 has an effect on C-peptide could be reached by combining individual-level data from the randomised controlled trials using Bayesian meta-analysis to estimate the probability of a positive biological effect (a reduction in C-peptide loss compared with placebo approximately 1 year after the GAD vaccine). We estimate that there is a 98% probability that 20 μg GAD with alum administered twice yields a positive biological effect. The effect is probably a 15-20% reduction in the loss of C-peptide at approximately 1 year after treatment. This translates to an annual expected loss of between -0.250 and -0.235 pmol/ml in treated patients compared with an expected 2 h AUC loss of -0.294 pmol/ml at 1 year for untreated newly diagnosed patients. The biological effect of this vaccination should be developed further in order to reach clinically desirable reductions in insulin loss in patients recently diagnosed with type 1 diabetes.

  3. Estimating effectiveness in HIV prevention trials with a Bayesian hierarchical compound Poisson frailty model

    PubMed Central

    Coley, Rebecca Yates; Brown, Elizabeth R.

    2016-01-01

    Inconsistent results in recent HIV prevention trials of pre-exposure prophylactic interventions may be due to heterogeneity in risk among study participants. Intervention effectiveness is most commonly estimated with the Cox model, which compares event times between populations. When heterogeneity is present, this population-level measure underestimates intervention effectiveness for individuals who are at risk. We propose a likelihood-based Bayesian hierarchical model that estimates the individual-level effectiveness of candidate interventions by accounting for heterogeneity in risk with a compound Poisson-distributed frailty term. This model reflects the mechanisms of HIV risk and allows that some participants are not exposed to HIV and, therefore, have no risk of seroconversion during the study. We assess model performance via simulation and apply the model to data from an HIV prevention trial. PMID:26869051

  4. A Gentle Introduction to Bayesian Analysis: Applications to Developmental Research

    ERIC Educational Resources Information Center

    van de Schoot, Rens; Kaplan, David; Denissen, Jaap; Asendorpf, Jens B.; Neyer, Franz J.; van Aken, Marcel A. G.

    2014-01-01

    Bayesian statistical methods are becoming ever more popular in applied and fundamental research. In this study a gentle introduction to Bayesian analysis is provided. It is shown under what circumstances it is attractive to use Bayesian estimation, and how to interpret properly the results. First, the ingredients underlying Bayesian methods are…

  5. qPR: An adaptive partial-report procedure based on Bayesian inference.

    PubMed

    Baek, Jongsoo; Lesmes, Luis Andres; Lu, Zhong-Lin

    2016-08-01

    Iconic memory is best assessed with the partial report procedure in which an array of letters appears briefly on the screen and a poststimulus cue directs the observer to report the identity of the cued letter(s). Typically, 6-8 cue delays or 600-800 trials are tested to measure the iconic memory decay function. Here we develop a quick partial report, or qPR, procedure based on a Bayesian adaptive framework to estimate the iconic memory decay function with much reduced testing time. The iconic memory decay function is characterized by an exponential function and a joint probability distribution of its three parameters. Starting with a prior of the parameters, the method selects the stimulus to maximize the expected information gain in the next test trial. It then updates the posterior probability distribution of the parameters based on the observer's response using Bayesian inference. The procedure is reiterated until either the total number of trials or the precision of the parameter estimates reaches a certain criterion. Simulation studies showed that only 100 trials were necessary to reach an average absolute bias of 0.026 and a precision of 0.070 (both in terms of probability correct). A psychophysical validation experiment showed that estimates of the iconic memory decay function obtained with 100 qPR trials exhibited good precision (the half width of the 68.2% credible interval = 0.055) and excellent agreement with those obtained with 1,600 trials of the conventional method of constant stimuli procedure (RMSE = 0.063). Quick partial-report relieves the data collection burden in characterizing iconic memory and makes it possible to assess iconic memory in clinical populations.

  6. qPR: An adaptive partial-report procedure based on Bayesian inference

    PubMed Central

    Baek, Jongsoo; Lesmes, Luis Andres; Lu, Zhong-Lin

    2016-01-01

    Iconic memory is best assessed with the partial report procedure in which an array of letters appears briefly on the screen and a poststimulus cue directs the observer to report the identity of the cued letter(s). Typically, 6–8 cue delays or 600–800 trials are tested to measure the iconic memory decay function. Here we develop a quick partial report, or qPR, procedure based on a Bayesian adaptive framework to estimate the iconic memory decay function with much reduced testing time. The iconic memory decay function is characterized by an exponential function and a joint probability distribution of its three parameters. Starting with a prior of the parameters, the method selects the stimulus to maximize the expected information gain in the next test trial. It then updates the posterior probability distribution of the parameters based on the observer's response using Bayesian inference. The procedure is reiterated until either the total number of trials or the precision of the parameter estimates reaches a certain criterion. Simulation studies showed that only 100 trials were necessary to reach an average absolute bias of 0.026 and a precision of 0.070 (both in terms of probability correct). A psychophysical validation experiment showed that estimates of the iconic memory decay function obtained with 100 qPR trials exhibited good precision (the half width of the 68.2% credible interval = 0.055) and excellent agreement with those obtained with 1,600 trials of the conventional method of constant stimuli procedure (RMSE = 0.063). Quick partial-report relieves the data collection burden in characterizing iconic memory and makes it possible to assess iconic memory in clinical populations. PMID:27580045
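
    The selection rule that distinguishes qPR from fixed-delay testing is the expected information gain: the cue delay for the next trial is the one that maximizes the mutual information between the binary response and the decay-function parameters under the current posterior. The decay parameterization, candidate delays, and simulated observer below are illustrative assumptions rather than the published qPR settings.

```python
import numpy as np

rng = np.random.default_rng(17)

def p_correct(delay, a, tau, floor=0.3):
    """Assumed iconic-memory decay: exponential fall from a toward a fixed floor."""
    return floor + (a - floor) * np.exp(-delay / tau)

# Parameter grid (initial availability a, decay constant tau in ms) with a flat prior.
a_grid = np.linspace(0.35, 1.0, 40)
tau_grid = np.linspace(20.0, 600.0, 60)
A, TAU = np.meshgrid(a_grid, tau_grid, indexing="ij")
post = np.ones_like(A) / A.size

delays = np.linspace(0.0, 1000.0, 41)            # candidate cue delays (ms)
true_a, true_tau = 0.9, 180.0                    # simulated observer

def entropy(p):
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

for trial in range(100):
    # Expected information gain of each candidate delay = mutual information
    # between the binary response and the parameters under the current posterior.
    p_theta = p_correct(delays[:, None, None], A, TAU)          # (delays, a, tau)
    p_bar = np.sum(post * p_theta, axis=(1, 2))
    eig = entropy(p_bar) - np.sum(post * entropy(p_theta), axis=(1, 2))
    d = delays[np.argmax(eig)]

    # Run the trial on the simulated observer and update the posterior by Bayes' rule.
    correct = rng.random() < p_correct(d, true_a, true_tau)
    like = p_correct(d, A, TAU)
    post *= like if correct else 1.0 - like
    post /= post.sum()

print("posterior mean a  :", np.sum(post * A).round(3))
print("posterior mean tau:", np.sum(post * TAU).round(1), "ms")
```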

  7. Causal assessment of surrogacy in a meta-analysis of colorectal cancer trials

    PubMed Central

    Li, Yun; Taylor, Jeremy M.G.; Elliott, Michael R.; Sargent, Daniel J.

    2011-01-01

    When the true end points (T) are difficult or costly to measure, surrogate markers (S) are often collected in clinical trials to help predict the effect of the treatment (Z). There is great interest in understanding the relationship among S, T, and Z. A principal stratification (PS) framework has been proposed by Frangakis and Rubin (2002) to study their causal associations. In this paper, we extend the framework to a multiple trial setting and propose a Bayesian hierarchical PS model to assess surrogacy. We apply the method to data from a large collection of colon cancer trials in which S and T are binary. We obtain the trial-specific causal measures among S, T, and Z, as well as their overall population-level counterparts that are invariant across trials. The method allows for information sharing across trials and reduces the nonidentifiability problem. We examine the frequentist properties of our model estimates and the impact of the monotonicity assumption using simulations. We also illustrate the challenges in evaluating surrogacy in the counterfactual framework that result from nonidentifiability. PMID:21252079

  8. Bayesian data analysis in population ecology: motivations, methods, and benefits

    USGS Publications Warehouse

    Dorazio, Robert

    2016-01-01

    During the 20th century ecologists largely relied on the frequentist system of inference for the analysis of their data. However, in the past few decades ecologists have become increasingly interested in the use of Bayesian methods of data analysis. In this article I provide guidance to ecologists who would like to decide whether Bayesian methods can be used to improve their conclusions and predictions. I begin by providing a concise summary of Bayesian methods of analysis, including a comparison of differences between Bayesian and frequentist approaches to inference when using hierarchical models. Next I provide a list of problems where Bayesian methods of analysis may arguably be preferred over frequentist methods. These problems are usually encountered in analyses based on hierarchical models of data. I describe the essentials required for applying modern methods of Bayesian computation, and I use real-world examples to illustrate these methods. I conclude by summarizing what I perceive to be the main strengths and weaknesses of using Bayesian methods to solve ecological inference problems.

  9. Hyperbaric oxygen for carbon monoxide poisoning: a systematic review and critical analysis of the evidence.

    PubMed

    Buckley, Nicholas A; Isbister, Geoffrey K; Stokes, Barrie; Juurlink, David N

    2005-01-01

    Poisoning with carbon monoxide (CO) is an important cause of unintentional and intentional injury worldwide. Hyperbaric oxygen (HBO) enhances CO elimination and has been postulated to reduce the incidence of neurological sequelae. These observations have led some clinicians to use HBO for selected patients with CO poisoning, although there is considerable variability in clinical practice. This article assesses the effectiveness of HBO compared with normobaric oxygen (NBO) for the prevention of neurological sequelae in patients with acute CO poisoning. The following databases were searched: MEDLINE (1966 to present), EMBASE (1980 to present), and the Controlled Trials Register of the Cochrane Collaboration, supplemented by a manual review of bibliographies of identified articles and discussion with recognised content experts. All randomised controlled trials involving people acutely poisoned with CO, regardless of severity, were examined. The primary analysis included all trials from which data could be extracted. Sensitivity analysis examined trials with better validity (defined using the validated instrument of Jadad) and those enrolling more severely poisoned patients. Two reviewers independently extracted from each trial, including information on the number of randomised patients, types of participants, the dose and duration of the intervention, and the prevalence of neurological sequelae at follow-up. A pooled odds ratio (OR) for the presence of neurological symptoms at 1-month follow-up was calculated using a random effects model. Bayesian models were also investigated to illustrate the degree of certainty about clinical effectiveness. Eight randomised controlled trials were identified. Two had no evaluable data and were excluded. The remaining trials were of varying quality and two have been published only as abstracts. The severity of CO poisoning varied among trials. At 1-month follow-up after treatment, sequelae possibly related to CO poisoning were present in 242 of 761 patients (36.1%) treated with NBO, compared with 259 of 718 patients (31.8%) treated with HBO. Restricting the analysis to the trials with the highest quality scores or those that enrolled all patients regardless of severity did not change the lack of statistical significance in the outcome of the pooled analysis. We found empiric evidence of multiple biases that operated to inflate the benefit of HBO in two positive trials. In contrast, the interpretation of negative trials was hampered by low rates of follow-up, unusual interventions for control patients and inclusion of less severely poisoned patients. Collectively, these limitations may have led negative trials to overlook a real and substantial benefit of HBO (type II error). There is conflicting evidence regarding the efficacy of HBO treatment for patients with CO poisoning. Methodological shortcomings are evident in all published trials, with empiric evidence of bias in some, particularly those that suggest a benefit of HBO. Bayesian analysis further illustrates the uncertainty about a meaningful clinical benefit. Consequently, firm guidelines regarding the use of HBO for patients with CO poisoning cannot be established. Further research is needed to better define the role of HBO, if any, in the treatment of CO poisoning. Such research should not exclude patients with severe poisoning, have a primary outcome that is clinically meaningful and have oversight from an independent data monitoring and ethics committee.

  10. Bayesian inference on risk differences: an application to multivariate meta-analysis of adverse events in clinical trials.

    PubMed

    Chen, Yong; Luo, Sheng; Chu, Haitao; Wei, Peng

    2013-05-01

    Multivariate meta-analysis is useful in combining evidence from independent studies which involve several comparisons among groups based on a single outcome. For binary outcomes, the commonly used statistical models for multivariate meta-analysis are multivariate generalized linear mixed effects models which assume risks, after some transformation, follow a multivariate normal distribution with possible correlations. In this article, we consider an alternative model for multivariate meta-analysis where the risks are modeled by the multivariate beta distribution proposed by Sarmanov (1966). This model has several attractive features compared to the conventional multivariate generalized linear mixed effects models, including a simple likelihood function, no need to specify a link function, and a closed-form expression of the distribution functions for study-specific risk differences. We investigate the finite sample performance of this model by simulation studies and illustrate its use with an application to multivariate meta-analysis of adverse events of tricyclic antidepressant treatment in clinical trials.

  11. Private Manufacturers’ Thresholds to Invest in Comparative Effectiveness Trials

    PubMed Central

    Basu, Anirban; Meltzer, David

    2015-01-01

    The recent rush of enthusiasm for public investment in comparative effectiveness research (CER) in the United States has focused attention on these public investments. However, little attention has been given to how changing public investment in CER may affect private manufacturers’ incentives for CER, which has long been a major source of CER. In this work, based on a simple revenue maximizing economic framework, we generate predictions on thresholds to invest in CER for a private manufacturer that compares its own product to its competitor’s in head to head trials. Our analysis shows that private incentives to invest in CER are determined by how the results of CER may affect the price and quantity of the product sold and the duration over which resulting changes in revenue would accrue given the time required to complete CER and the time from the completion of CER to the time of patent expiration. We highlight the result that private incentives may often be less than public incentives to invest in CER and may even be negative if the likelihood of adverse findings is sufficient. We find that these incentives imply a number of predictions about patterns of CER and how they will be affected by changes in public financing of CER and CER methods. For example, these incentives imply that incumbent patent holders may be less likely to invest in CER than entrants and that public investments in CER may crowd out similar private investments. In contrast, newer designs and methods for CER, such as Bayesian adaptive trials, which can reduce ex-post risk of unfavorable results and shorten the time for the production of CER, may increase the expected benefits of CER and may tend to increase private investment in CER as long as the costs of such innovative designs are not excessive. Bayesian approaches to design also naturally highlight the dynamic aspects of CER, allowing less expensive initial studies to guide decisions about future investments and thereby encouraging greater initial investments in CER. However, whether the potential effects we highlight of public funding of CER and of Bayesian approaches to trial design actually produce changes in private investment in CER remains an empirical question. PMID:22901018

  12. Private manufacturers' thresholds to invest in comparative effectiveness trials.

    PubMed

    Basu, Anirban; Meltzer, David

    2012-10-01

    The recent rush of enthusiasm for public investment in comparative effectiveness research (CER) in the US has focussed attention on these public investments. However, little attention has been given to how changing public investment in CER may affect private manufacturers' incentives for CER, which has long been a major source of CER. In this work, based on a simple revenue maximizing economic framework, we generate predictions on thresholds to invest in CER for a private manufacturer that compares its own product to a competitor's product in head-to-head trials. Our analysis shows that private incentives to invest in CER are determined by how the results of CER may affect the price and quantity of the product sold and the duration over which resulting changes in revenue would accrue, given the time required to complete CER and the time from the completion of CER to the time of patent expiration. We highlight the result that private incentives may often be less than public incentives to invest in CER and may even be negative if the likelihood of adverse findings is sufficient. We find that these incentives imply a number of predictions about patterns of CER and how they will be affected by changes in public financing of CER and CER methods. For example, these incentives imply that incumbent patent holders may be less likely to invest in CER than entrants and that public investments in CER may crowd out similar private investments. In contrast, newer designs and methods for CER, such as Bayesian adaptive trials, which can reduce ex post risk of unfavourable results and shorten the time for the production of CER, may increase the expected benefits of CER and may tend to increase private investment in CER as long as the costs of such innovative designs are not excessive. Bayesian approaches to design also naturally highlight the dynamic aspects of CER, allowing less expensive initial studies to guide decisions about future investments and thereby encouraging greater initial investments in CER. However, whether the potential effects we highlight of public funding of CER and of Bayesian approaches to trial design actually produce changes in private investment in CER remains an empirical question.

  13. Adaptive power priors with empirical Bayes for clinical trials.

    PubMed

    Gravestock, Isaac; Held, Leonhard

    2017-09-01

    Incorporating historical information into the design and analysis of a new clinical trial has been the subject of much discussion as a way to increase the feasibility of trials in situations where patients are difficult to recruit. The best method to include this data is not yet clear, especially in the case when few historical studies are available. This paper looks at the power prior technique afresh in a binomial setting and examines some previously unexamined properties, such as Box P values, bias, and coverage. Additionally, it proposes an empirical Bayes-type approach to estimating the prior weight parameter by marginal likelihood. This estimate has advantages over previously criticised methods in that it varies commensurably with differences in the historical and current data and can choose weights near 1 when the data are similar enough. Fully Bayesian approaches are also considered. An analysis of the operating characteristics shows that the adaptive methods work well and that the various approaches have different strengths and weaknesses. Copyright © 2017 John Wiley & Sons, Ltd.
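
    To make the power prior idea above concrete, here is a minimal sketch in Python for a single binomial endpoint: the historical likelihood is raised to a weight delta, giving a Beta power prior, and delta is chosen by maximising the marginal likelihood of the current data, in the spirit of the empirical Bayes approach the abstract describes. The counts and the Beta(1, 1) initial prior are hypothetical, and the grid search is an illustrative choice, not the authors' implementation.

```python
import numpy as np
from scipy.special import betaln, comb

# Hypothetical data: historical study and current trial (responders / patients)
y0, n0 = 18, 50      # historical
y,  n  = 24, 55      # current
a, b   = 1.0, 1.0    # initial Beta(1, 1) prior before any data

def log_marginal(delta):
    """Log marginal likelihood of the current data when the historical
    likelihood is raised to the power delta (power prior -> Beta prior)."""
    a0 = a + delta * y0
    b0 = b + delta * (n0 - y0)
    return (np.log(comb(n, y))
            + betaln(a0 + y, b0 + n - y)
            - betaln(a0, b0))

# Empirical Bayes: choose the weight that maximises the marginal likelihood
grid = np.linspace(0.0, 1.0, 1001)
delta_hat = grid[np.argmax([log_marginal(d) for d in grid])]

# Posterior for the response probability with the estimated weight
a_post = a + delta_hat * y0 + y
b_post = b + delta_hat * (n0 - y0) + (n - y)
print(f"estimated weight delta = {delta_hat:.2f}")
print(f"posterior mean response rate = {a_post / (a_post + b_post):.3f}")
```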

  14. Patient-centered clinical trials.

    PubMed

    Chaudhuri, Shomesh E; Ho, Martin P; Irony, Telba; Sheldon, Murray; Lo, Andrew W

    2018-02-01

    We apply Bayesian decision analysis (BDA) to incorporate patient preferences in the regulatory approval process for new therapies. By assigning weights to type I and type II errors based on patient preferences, the significance level (α) and power (1-β) of a randomized clinical trial (RCT) for a new therapy can be optimized to maximize the value to current and future patients and, consequently, to public health. We find that for weight-loss devices, potentially effective low-risk treatments have optimal αs larger than the traditional one-sided significance level of 5%, whereas potentially less effective and riskier treatments have optimal αs below 5%. Moreover, the optimal RCT design, including trial size, varies with the risk aversion and time-to-access preferences and the medical need of the target population. Copyright © 2017 Elsevier Ltd. All rights reserved.
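
    The abstract describes choosing the significance level by weighing type I and type II errors according to patient preferences. A minimal sketch of that trade-off, assuming a normal-approximation power calculation, hypothetical loss weights, and a hypothetical prior probability that the therapy works (none of these values come from the paper):

```python
import numpy as np
from scipy.stats import norm

# Hypothetical inputs: all numbers below are illustrative, not the paper's values.
n_per_arm = 150          # planned sample size per arm
effect    = 0.30         # standardized treatment effect if the therapy works
p_works   = 0.50         # prior probability that the therapy is effective
loss_I    = 1.0          # patient-weighted harm of approving an ineffective therapy
loss_II   = 2.5          # patient-weighted harm of rejecting an effective therapy

def power(alpha):
    """One-sided two-sample z-test power under a normal approximation."""
    z_crit = norm.ppf(1 - alpha)
    return norm.cdf(effect * np.sqrt(n_per_arm / 2) - z_crit)

alphas = np.linspace(0.001, 0.20, 400)
expected_loss = (loss_I * alphas * (1 - p_works)
                 + loss_II * (1 - power(alphas)) * p_works)
best = alphas[np.argmin(expected_loss)]
print(f"loss-minimising one-sided alpha = {best:.3f}, power = {power(best):.2f}")
```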

  15. A Bayesian adaptive design for biomarker trials with linked treatments.

    PubMed

    Wason, James M S; Abraham, Jean E; Baird, Richard D; Gournaris, Ioannis; Vallier, Anne-Laure; Brenton, James D; Earl, Helena M; Mander, Adrian P

    2015-09-01

    Response to treatments is highly heterogeneous in cancer. Increased availability of biomarkers and targeted treatments has led to the need for trial designs that efficiently test new treatments in biomarker-stratified patient subgroups. We propose a novel Bayesian adaptive randomisation (BAR) design for use in multi-arm phase II trials where biomarkers exist that are potentially predictive of a linked treatment's effect. The design is motivated in part by two phase II trials that are currently in development. The design starts by randomising patients to the control treatment or to experimental treatments that the biomarker profile suggests should be active. At interim analyses, data from treated patients are used to update the allocation probabilities. If the linked treatments are effective, the allocation remains high; if ineffective, the allocation changes over the course of the trial to unlinked treatments that are more effective. Our proposed design has high power to detect treatment effects if the pairings of treatment with biomarker are correct, but also performs well when alternative pairings are true. The design is consistently more powerful than parallel-groups stratified trials. This BAR design is a powerful approach to use when there are pairings of biomarkers with treatments available for testing simultaneously.
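
    A minimal sketch of the interim step of a Bayesian adaptive randomisation design of this kind: Beta posteriors for each arm's response rate are sampled by Monte Carlo, the probability that each arm is best is computed, and allocation probabilities for the next stage are set proportional to a damped power of that probability. The interim counts and the damping exponent c = 0.5 are hypothetical choices for illustration, not the trial's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical interim data per arm: (responders, patients treated)
interim = {"control": (6, 30),
           "A (linked to biomarker 1)": (12, 28),
           "B (linked to biomarker 2)": (7, 27)}

# Beta(1, 1) priors updated to Beta posteriors, then Monte Carlo P(arm is best)
draws = {arm: rng.beta(1 + y, 1 + n - y, size=20_000) for arm, (y, n) in interim.items()}
samples = np.column_stack(list(draws.values()))
p_best = np.bincount(samples.argmax(axis=1), minlength=samples.shape[1]) / samples.shape[0]

# Common BAR tuning: raise P(best) to a power c < 1 to damp extreme allocations
c = 0.5
weights = p_best ** c
alloc = weights / weights.sum()
for arm, pb, al in zip(interim, p_best, alloc):
    print(f"{arm:28s} P(best) = {pb:.2f}  next-stage allocation = {al:.2f}")
```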

  16. Should Perioperative Supplemental Oxygen Be Routinely Recommended for Surgical Patients? A Bayesian Meta-analysis

    PubMed Central

    Kao, Lillian S.; Millas, Stefanos G.; Pedroza, Claudia; Tyson, Jon E.; Lally, Kevin P.

    2012-01-01

    Objective The purpose of this study is to use updated data and Bayesian methods to evaluate the effectiveness of hyperoxia to reduce surgical site infections (SSIs) and/or mortality in both colorectal and all surgical patients. Because few trials assessed potential harms of hyperoxia, hazards were not included. Background Use of hyperoxia to reduce SSIs is controversial. Three recent meta-analyses have had conflicting conclusions. Methods A systematic literature search and review were performed. Traditional fixed-effect and random-effects meta-analyses and Bayesian meta-analysis were performed to evaluate SSIs and mortality. Results Traditional meta-analysis yielded a relative risk of an SSI with hyperoxia among all surgery patients of 0.84 (95% confidence interval, CI, 0.73–0.97) and 0.84 (95% CI 0.61–1.16) for the fixed-effect and random effects models respectively. The probabilities of any risk reduction in SSIs among all surgery patients were 77%, 81%, and 83% for skeptical, neutral, and enthusiastic priors. Subset analysis of colorectal surgery patients increased the probabilities to 86%, 89%, and 92%. The probabilities of at least a 10% reduction were 57%, 62%, and 68% for all surgical patients and 71%, 75%, and 80% among the colorectal surgery subset. Conclusions There is a moderately high probability of a benefit to hyperoxia in reducing SSIs in colorectal surgery patients; however, the magnitude of benefit is relatively small and might not exceed treatment hazards. Further studies should focus on generalizability to other patient populations or on treatment hazards and other outcomes. PMID:23160100
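
    The skeptical/neutral/enthusiastic analysis described above can be illustrated with a conjugate normal model on the log relative risk: the pooled estimate is combined with each prior, and the posterior probability of any (or at least a 10%) risk reduction is read off. The pooled estimate, its standard error, and the prior settings below are hypothetical placeholders, not the values from this meta-analysis.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical pooled log relative risk and its standard error (illustrative only)
logrr_hat, se = np.log(0.84), 0.09

# Priors on the log relative risk: (mean, sd). The skeptical prior is tight around
# no effect; the enthusiastic prior is centred on a 25% risk reduction.
priors = {"skeptical":    (0.0,          0.10),
          "neutral":      (0.0,          1.00),
          "enthusiastic": (np.log(0.75), 0.10)}

for name, (m0, s0) in priors.items():
    # Normal-normal conjugate update (precision-weighted average)
    w0, w1 = 1 / s0**2, 1 / se**2
    post_mean = (w0 * m0 + w1 * logrr_hat) / (w0 + w1)
    post_sd   = np.sqrt(1 / (w0 + w1))
    p_any   = norm.cdf(0.0,         loc=post_mean, scale=post_sd)   # P(RR < 1)
    p_10pct = norm.cdf(np.log(0.9), loc=post_mean, scale=post_sd)   # P(RR < 0.9)
    print(f"{name:12s} P(any reduction) = {p_any:.2f}  P(>=10% reduction) = {p_10pct:.2f}")
```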

  17. Current treatment of ocular toxoplasmosis in immunocompetent patients: A network meta-analysis.

    PubMed

    Zhang, Yanxia; Lin, Xiao; Lu, Fangli

    2018-04-25

    Ocular toxoplasmosis (OT) is the most frequent form of infectious posterior uveitis caused by the protozoan parasite Toxoplasma gondii. To evaluate the available evidence in peer-reviewed publications about the most effective therapy for OT in immunocompetent patients, herein a systematic literature search was conducted using Embase, PubMed, Google Scholar, and the Cochrane Central Register of Controlled Trials (CENTRAL) database from January 1987 to October 2017, with search terms "OT", "retinochoroiditis", "treatment", and "immunocompetent"; search filters "controlled clinical trial", "randomized clinical trial", and "clinical trial". The included studies evaluated the various treatment modalities of OT. Different treatment regimens were compared with regard to the improvement of visual acuity, the resolution of vitreous inflammation, recurrence, and side-effects. We independently extracted data and assessed eligibility and risk of bias using the preferred reporting items for systematic reviews and meta-analysis, and resolved any disagreement through discussion. A Bayesian network meta-analysis model was used to evaluate the outcomes of interest across all the interventions. A total of 10 trials of treatments for OT met the inclusion criteria. Six trials compared treatments including clindamycin, azithromycin, and trimethoprim-sulfamethoxazole (TMP-SMX) with conventional therapy (the combination of pyrimethamine, sulfadiazine, and prednisone), evaluating the effect on visual acuity, vitreous inflammation, recurrence of OT, and side-effects. Two trials compared TMP-SMX with placebo, one trial compared azithromycin with TMP-SMX, and another compared clindamycin, P-S, TMP-SMX, and placebo. Based on our network meta-analysis, therapy with TMP-SMX seems to be an alternative treatment for OT in immunocompetent patients. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. Unconditional cash transfers for clinical and economic outcomes among HIV-affected Ugandan households: a Bayesian randomised trial.

    PubMed

    Mills, Edward J; Adhvaryu, Achyuta; Jakiela, Pamela; Birungi, Josephine; Okoboi, Stephen; Chimulwa, Teddy; Wangisi, Jonathan; Achilla, Tina; Popoff, Evan; Golchi, Shirin; Karlan, Dean

    2018-05-28

    HIV infection has profound clinical and economic costs at the household level. This is particularly important in low-income settings, where access to additional sources of income or loans may be limited. While several microfinance interventions have been proposed, unconditional cash grants, a strategy to allow participants to choose how to use finances that may improve household security and health, have not previously been evaluated. We examined the effect of an unconditional cash transfer to HIV-infected individuals using a 2 × 2 factorial randomised trial in two rural districts in Uganda. Our primary outcomes were changes in CD4 cell count, sexual behaviors, and adherence to ART. Secondary outcomes were changes in household food security and adult mental health. We applied a Bayesian approach for our primary analysis. We randomized 2170 patients as participants, with 1081 receiving a cash grant. We found no important intervention effects on CD4 T-cell counts between groups (mean difference [MD] 35.48, 95% credible interval [CrI] -59.9 to 1131.6), food security (odds ratio [OR] 1.22, 95% CrI: 0.47, 3.02), medication adherence (OR 3.15, 95% CrI: 0.58, 18.15), sexual behavior (OR 0.45, 95% CrI: 0.12, 1.55), or health expenditure in the previous 3 weeks (MD $2.65, 95% CrI: -9.30, 15.69). In secondary analysis, we detected an effect of mental planning on CD4 change between groups (104.2 cells, 95% CrI: 5.99, 202.16). We did not have data on viral load outcomes. Although all outcomes were associated with favorable point estimates, our trial did not demonstrate important effects of unconditional cash grants on health outcomes.

  19. Comparative Effectiveness of Mesalamine, Sulfasalazine, Corticosteroids, and Budesonide for the Induction of Remission in Crohn's Disease: A Bayesian Network Meta-analysis.

    PubMed

    Coward, Stephanie; Kuenzig, M Ellen; Hazlewood, Glen; Clement, Fiona; McBrien, Kerry; Holmes, Rebecca; Panaccione, Remo; Ghosh, Subrata; Seow, Cynthia H; Rezaie, Ali; Kaplan, Gilaad G

    2017-03-01

    Induction treatment of mild-to-moderate Crohn's disease is controversial. To compare the induction of remission between different doses of mesalamine, sulfasalazine, corticosteroids, and budesonide for active Crohn's disease. We identified randomized controlled trials from existing Cochrane reviews and an updated literature search in Medline, EMBASE, and CENTRAL to November 2015. We included randomized controlled trials (n = 22) in adult patients with Crohn's disease that compared budesonide, sulfasalazine, mesalamine, or corticosteroids with placebo or each other, for the induction of remission (8-17 wks). Mesalamine (above and below 2.4 g/d) and budesonide (above and below 6 mg/d) were stratified into low and high doses. Our primary outcome was remission, defined as a Crohn's Disease Activity Index score <150. A Bayesian random-effects network meta-analysis was performed on the proportion in remission. Corticosteroids (odds ratio [OR] = 3.80; 95% credible interval [CrI]: 2.48-5.66), high-dose budesonide (OR = 2.96; 95% CrI: 2.06-4.30), and high-dose mesalamine (OR = 2.29; 95% CrI: 1.58-3.33) were superior to placebo. Corticosteroids were similar to high-dose budesonide (OR = 1.21; 95% CrI: 0.84-1.76), but more effective than high-dose mesalamine (OR = 1.83; 95% CrI: 1.16-2.88). Sulfasalazine was not significantly superior to any therapy including placebo. Randomized controlled trials that use a strict definition of induction of remission and disease severity at enrollment to assess effectiveness in treating mild-to-moderate Crohn's disease are limited. Corticosteroids and high-dose budesonide were effective treatments for inducing remission in mild-to-moderate Crohn's disease. High-dose mesalamine is an option among patients preferring to avoid steroids.

  20. Is probabilistic bias analysis approximately Bayesian?

    PubMed Central

    MacLehose, Richard F.; Gustafson, Paul

    2011-01-01

    Case-control studies are particularly susceptible to differential exposure misclassification when exposure status is determined following incident case status. Probabilistic bias analysis methods have been developed as ways to adjust standard effect estimates based on the sensitivity and specificity of exposure misclassification. The iterative sampling method advocated in probabilistic bias analysis bears a distinct resemblance to a Bayesian adjustment; however, it is not identical. Furthermore, without a formal theoretical framework (Bayesian or frequentist), the results of a probabilistic bias analysis remain somewhat difficult to interpret. We describe, both theoretically and empirically, the extent to which probabilistic bias analysis can be viewed as approximately Bayesian. While the differences between probabilistic bias analysis and Bayesian approaches to misclassification can be substantial, these situations often involve unrealistic prior specifications and are relatively easy to detect. Outside of these special cases, probabilistic bias analysis and Bayesian approaches to exposure misclassification in case-control studies appear to perform equally well. PMID:22157311
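
    For readers unfamiliar with the method being compared, a minimal probabilistic bias analysis for differential exposure misclassification in a case-control study looks roughly like the sketch below: sensitivity and specificity are drawn from priors, the 2x2 table is back-corrected, and the adjusted odds ratio is recomputed. The counts and prior ranges are hypothetical, and the random-error resampling step used in full analyses is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed case-control counts (exposed / unexposed), illustrative only
a, b = 120, 380     # cases:    exposed, unexposed
c, d = 100, 400     # controls: exposed, unexposed
n_cases, n_controls = a + b, c + d

n_sims, ors = 50_000, []
for _ in range(n_sims):
    # Draw sensitivity and specificity of exposure classification from priors;
    # cases are allowed better recall than controls (differential misclassification)
    se_ca, sp_ca = rng.uniform(0.80, 0.95), rng.uniform(0.90, 0.99)
    se_co, sp_co = rng.uniform(0.70, 0.90), rng.uniform(0.90, 0.99)
    # Back-correct the expected true exposed counts in each group
    a_t = (a - (1 - sp_ca) * n_cases)    / (se_ca + sp_ca - 1)
    c_t = (c - (1 - sp_co) * n_controls) / (se_co + sp_co - 1)
    b_t, d_t = n_cases - a_t, n_controls - c_t
    if min(a_t, b_t, c_t, d_t) <= 0:      # impossible combination -> discard
        continue
    ors.append((a_t * d_t) / (b_t * c_t))

ors = np.array(ors)
print("observed OR:", round((a * d) / (b * c), 2))
print("bias-adjusted OR median and 95% interval:",
      np.round(np.percentile(ors, [50, 2.5, 97.5]), 2))
```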

  1. Bayesian Mediation Analysis

    ERIC Educational Resources Information Center

    Yuan, Ying; MacKinnon, David P.

    2009-01-01

    In this article, we propose Bayesian analysis of mediation effects. Compared with conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian…

  2. Prior approval: the growth of Bayesian methods in psychology.

    PubMed

    Andrews, Mark; Baguley, Thom

    2013-02-01

    Within the last few years, Bayesian methods of data analysis in psychology have proliferated. In this paper, we briefly review the history of the Bayesian approach to statistics, and consider the implications that Bayesian methods have for the theory and practice of data analysis in psychology.

  3. Effects of Green Tea Gargling on the Prevention of Influenza Infection: An Analysis Using Bayesian Approaches.

    PubMed

    Ide, Kazuki; Kawasaki, Yohei; Akutagawa, Maiko; Yamada, Hiroshi

    2017-02-01

    The aim of this study is to use Bayesian approaches to analyze the data obtained from a randomized trial on the prevention of influenza by gargling with green tea, which gave nonsignificant results based on frequentist approaches. The posterior proportion, with 95% credible interval (CrI), of influenza in each group was calculated. The Bayesian index θ is the probability that a hypothesis is true. In this case, θ is the probability that the hypothesis that green tea gargling reduced influenza compared with water gargling is true. Univariate and multivariate logistic regression analyses were also performed by using the Markov chain Monte Carlo method. The full analysis set included 747 participants. During the study period, influenza occurred in 44 participants (5.9%). The difference between the two independent binomial proportions was -0.019 (95% CrI, -0.054 to 0.015; θ = 0.87). The partial regression coefficients in the univariate analysis were -0.35 (95% CrI, -1.00 to 0.24) with use of a uniform prior and -0.34 (95% CrI, -0.96 to 0.27) with use of a Jeffreys prior. In the multivariate analysis, the values were -0.37 (95% CrI, -0.96 to 0.30) and -0.36 (95% CrI, -1.03 to 0.21), respectively. The difference between the two independent binomial proportions was less than 0, and θ was greater than 0.85. Therefore, green tea gargling may slightly reduce influenza compared with water gargling. This analysis suggests that green tea gargling can be an additional preventive measure for use with other pharmaceutical and nonpharmaceutical measures and indicates the need for additional studies to confirm the effect of green tea gargling.
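
    The Bayesian index θ reported above can be reproduced in outline with independent Beta posteriors for the two incidence proportions and Monte Carlo sampling. The per-arm counts below are hypothetical (the abstract reports only the overall 44/747), and the Jeffreys Beta(0.5, 0.5) prior is an illustrative choice rather than the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-arm counts (the split between arms is illustrative only)
flu_tea,   n_tea   = 18, 380
flu_water, n_water = 26, 367

# Jeffreys Beta(0.5, 0.5) priors -> Beta posteriors for each incidence proportion
p_tea   = rng.beta(0.5 + flu_tea,   0.5 + n_tea   - flu_tea,   200_000)
p_water = rng.beta(0.5 + flu_water, 0.5 + n_water - flu_water, 200_000)

diff  = p_tea - p_water
theta = np.mean(p_tea < p_water)      # Bayesian index: P(tea arm has lower incidence)
lo, hi = np.percentile(diff, [2.5, 97.5])
print(f"difference in proportions: {diff.mean():+.3f} (95% CrI {lo:+.3f} to {hi:+.3f})")
print(f"theta = P(green tea gargling reduced influenza) = {theta:.2f}")
```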

  4. Bayesian Model Averaging for Propensity Score Analysis

    ERIC Educational Resources Information Center

    Kaplan, David; Chen, Jianshen

    2013-01-01

    The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…

  5. Bayesian analyses of time-interval data for environmental radiation monitoring.

    PubMed

    Luo, Peng; Sharp, Julia L; DeVol, Timothy A

    2013-01-01

    Time-interval (time difference between two consecutive pulses) analysis based on the principles of Bayesian inference was investigated for online radiation monitoring. Using experimental and simulated data, Bayesian analysis of time-interval data [Bayesian (ti)] was compared with Bayesian and a conventional frequentist analysis of counts in a fixed count time [Bayesian (cnt) and single interval test (SIT), respectively]. The performances of the three methods were compared in terms of average run length (ARL) and detection probability for several simulated detection scenarios. Experimental data were acquired with a DGF-4C system in list mode. Simulated data were obtained using Monte Carlo techniques to obtain a random sampling of the Poisson distribution. All statistical algorithms were developed using the R Project for statistical computing. Bayesian analysis of time-interval information provided a similar detection probability as Bayesian analysis of count information, but the authors were able to make a decision with fewer pulses at relatively higher radiation levels. In addition, for the cases with very short presence of the source (< count time), time-interval information is more sensitive to detect a change than count information since the source data is averaged by the background data over the entire count time. The relationships of the source time, change points, and modifications to the Bayesian approach for increasing detection probability are presented.

  6. A Bayesian elicitation of veterinary beliefs regarding systemic dry cow therapy: variation and importance for clinical trial design.

    PubMed

    Higgins, H M; Dryden, I L; Green, M J

    2012-09-15

    The two key aims of this research were: (i) to conduct a probabilistic elicitation to quantify the variation in veterinarians' beliefs regarding the efficacy of systemic antibiotics when used as an adjunct to intra-mammary dry cow therapy and (ii) to investigate (in a Bayesian statistical framework) the strength of future research evidence required (in theory) to change the beliefs of practising veterinary surgeons regarding the efficacy of systemic antibiotics, given their current clinical beliefs. The beliefs of 24 veterinarians in 5 practices in England were quantified as probability density functions. Classic multidimensional scaling revealed major variations in beliefs both within and between veterinary practices which included: confident optimism, confident pessimism and considerable uncertainty. Of the 9 veterinarians interviewed holding further cattle qualifications, 6 shared a confidently pessimistic belief in the efficacy of systemic therapy and whilst 2 were more optimistic, they were also more uncertain. A Bayesian model based on a synthetic dataset from a randomised clinical trial (showing no benefit with systemic therapy) predicted how each of the 24 veterinarians' prior beliefs would alter as the size of the clinical trial increased, assuming that practitioners would update their beliefs rationally in accordance with Bayes' theorem. The study demonstrated the usefulness of probabilistic elicitation for evaluating the diversity and strength of practitioners' beliefs. The major variation in beliefs observed raises interest in the veterinary profession's approach to prescribing essential medicines. Results illustrate the importance of eliciting prior beliefs when designing clinical trials in order to increase the chance that trial data are of sufficient strength to alter the clinical beliefs of practitioners and do not merely serve to satisfy researchers. Copyright © 2012 Elsevier B.V. All rights reserved.
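
    The second aim, asking how much null trial evidence would be needed to shift an elicited prior, can be sketched with a simple normal-normal update on the risk difference: as the synthetic null trial grows, the posterior probability of a worthwhile benefit shrinks at a rate that depends on how confident the elicited prior was. The elicited priors, control cure rate, and benefit threshold below are hypothetical, not the study's elicited densities.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical elicited priors on the cure-rate benefit (risk difference) of adding
# systemic therapy, expressed as Normal(mean, sd); values are illustrative only.
priors = {"confident optimist":  (0.10, 0.04),
          "uncertain optimist":  (0.10, 0.12),
          "confident pessimist": (0.00, 0.03)}

p_ctrl, threshold = 0.70, 0.05   # assumed control cure rate; 'worthwhile' benefit

for name, (m0, s0) in priors.items():
    for n_per_arm in (50, 200, 500, 2000, 8000):
        # Synthetic null trial: observed risk difference 0 with its approximate SE
        se = np.sqrt(2 * p_ctrl * (1 - p_ctrl) / n_per_arm)
        w0, w1 = 1 / s0**2, 1 / se**2
        post_m = (w0 * m0 + w1 * 0.0) / (w0 + w1)
        post_s = np.sqrt(1 / (w0 + w1))
        p_worthwhile = 1 - norm.cdf(threshold, loc=post_m, scale=post_s)
        print(f"{name:20s} n/arm = {n_per_arm:5d}  P(benefit > 5 points) = {p_worthwhile:.2f}")
```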

  7. Continuous event monitoring via a Bayesian predictive approach.

    PubMed

    Di, Jianing; Wang, Daniel; Brashear, H Robert; Dragalin, Vladimir; Krams, Michael

    2016-01-01

    In clinical trials, continuous monitoring of event incidence rate plays a critical role in making timely decisions affecting trial outcome. For example, continuous monitoring of adverse events protects the safety of trial participants, while continuous monitoring of efficacy events helps identify early signals of efficacy or futility. Because the endpoint of interest is often the event incidence associated with a given length of treatment duration (e.g., incidence proportion of an adverse event with 2 years of dosing), assessing the event proportion before reaching the intended treatment duration becomes challenging, especially when the event onset profile evolves over time with accumulated exposure. In particular, in the earlier part of the study, ignoring censored subjects may result in significant bias in estimating the cumulative event incidence rate. Such a problem is addressed using a predictive approach in the Bayesian framework. In the proposed approach, experts' prior knowledge about both the frequency and timing of the event occurrence is combined with observed data. More specifically, during any interim look, each event-free subject will be counted with a probability that is derived using prior knowledge. The proposed approach is particularly useful in early stage studies for signal detection based on limited information. But it can also be used as a tool for safety monitoring (e.g., data monitoring committee) during later stage trials. Application of the approach is illustrated using a case study where the incidence rate of an adverse event is continuously monitored during an Alzheimer's disease clinical trial. The performance of the proposed approach is also assessed and compared with other Bayesian and frequentist methods via simulation. Copyright © 2015 John Wiley & Sons, Ltd.
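
    The core idea, counting each event-free subject with a probability derived from prior knowledge of event timing, can be sketched as follows. An exponential onset model calibrated to a prior guess of the 2-year incidence stands in for the elicited prior; the exposure times, event counts, and the 2-year horizon are all hypothetical, and the paper's actual method is a fuller Bayesian predictive model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical interim data: exposure time so far (months) for event-free subjects,
# plus the number of subjects who have already had the adverse event.
exposure_months = rng.uniform(2, 20, size=120)   # event-free subjects at the interim
events_so_far, horizon = 14, 24.0                # observed events; 2-year endpoint

# Prior knowledge about event timing, encoded here (as an illustrative assumption)
# as an exponential time-to-event model matching a prior guess of 2-year incidence.
prior_incidence_2yr = 0.15
lam = -np.log(1 - prior_incidence_2yr) / horizon

# Each event-free subject contributes the conditional probability of an event
# occurring between now (t) and the 2-year horizon, given none has occurred yet.
p_future = 1 - np.exp(-lam * (horizon - exposure_months))
expected_events = events_so_far + p_future.sum()
n_total = len(exposure_months) + events_so_far

print(f"naive interim incidence   : {events_so_far / n_total:.3f}")
print(f"predictive 2-year estimate: {expected_events / n_total:.3f}")
```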

  8. Preventive Strategies for Contrast-Induced Acute Kidney Injury in Patients Undergoing Percutaneous Coronary Procedures: Evidence From a Hierarchical Bayesian Network Meta-Analysis of 124 Trials and 28 240 Patients.

    PubMed

    Giacoppo, Daniele; Gargiulo, Giuseppe; Buccheri, Sergio; Aruta, Patrizia; Byrne, Robert A; Cassese, Salvatore; Dangas, George; Kastrati, Adnan; Mehran, Roxana; Tamburino, Corrado; Capodanno, Davide

    2017-05-01

    The effectiveness of currently available preventive strategies for contrast-induced acute kidney injury (CIAKI) is a matter of debate. We performed a Bayesian random-effects network meta-analysis of 124 trials (28 240 patients) comparing a total of 10 strategies: saline, statin, N-acetylcysteine (NAC), sodium bicarbonate (NaHCO3), NAC+NaHCO3, ascorbic acid, xanthine, dopaminergic agent, peripheral ischemic preconditioning, and natriuretic peptide. Compared with saline, the risk of CIAKI was reduced by using statin (odds ratio [OR], 0.42; 95% credible interval [CrI], 0.26-0.67), xanthine (OR, 0.32; 95% CrI, 0.17-0.57), ischemic preconditioning (OR, 0.48; 95% CrI, 0.26-0.87), NAC+NaHCO3 (OR, 0.50; 95% CrI, 0.33-0.76), NAC (OR, 0.68; 95% CrI, 0.55-0.84), and NaHCO3 (OR, 0.66; 95% CrI, 0.47-0.90). The benefit of statin therapy was consistent across multiple sensitivity analyses, whereas the efficacy of all the other strategies was questioned by restricting the analysis to high-quality trials. Overall, high heterogeneity was observed for comparisons involving xanthine and ischemic preconditioning, although the impact of NAC and xanthine was probably influenced by publication bias/small-study effect. Hydration alone was the least effective preventive strategy for CIAKI. Meta-regressions did not reveal significant associations with baseline creatinine and contrast volume. In patients with diabetes mellitus, no strategy was found to reduce the incidence of CIAKI. In patients undergoing percutaneous coronary procedures, statin administration is associated with a marked and consistent reduction in the risk of CIAKI compared with saline. Although xanthine, NAC, NaHCO3, NAC+NaHCO3, ischemic preconditioning, and natriuretic peptide may have nephroprotective effects, these results were not consistent across multiple sensitivity analyses. © 2017 American Heart Association, Inc.

  9. Cognitive-Behavioral Therapy for Insomnia to Reduce Chronic Migraine: A Sequential Bayesian Analysis.

    PubMed

    Smitherman, Todd A; Kuka, Alexander J; Calhoun, Anne H; Walters, A Brooke Pellegrino; Davis-Martin, Rachel E; Ambrose, Carrie E; Rains, Jeanetta C; Houle, Timothy T

    2018-05-06

    Insomnia is frequently comorbid with chronic migraine, and small trials suggest that cognitive-behavioral treatment of insomnia (CBTi) may reduce migraine frequency. This study endeavored to provide a quantitative synthesis of existing CBTi trials for adults with chronic migraine using Bayesian statistical methods, given their utility in combining prior knowledge with sequentially gathered data. Completer analyses of 2 randomized trials comparing CBTi to a sham control intervention (Calhoun and Ford, 2007; Smitherman et al., 2016) were used to quantify the effects of a brief course of treatment on headache frequency. Change in headache frequency from baseline to the primary endpoint (6-8 weeks posttreatment) was regressed on group status using a Gaussian linear model with each study specified in the order of completion. To estimate the combined effect, posterior distributions from the Calhoun and Ford study were used as informative priors for conditioning on the Smitherman et al. data. In a combined analysis of these prior studies, monthly headache frequency of the treatment group decreased by 6.2 days (95% CrI: -9.7 to -2.7) more than the control group, supporting an interpretation that there is a 97.5% chance that the treatment intervention is at least 2.7 days better than the control intervention. The analysis supports the hypothesis that at least for those who complete treatment, there is high probability that individuals who receive CBTi experience greater headache reduction than those who receive a control intervention equated for therapist time and out-of-session skills practice. Cognitive-behavioral interventions for comorbid insomnia hold promise for reducing headache frequency among those with chronic migraine. These findings add to a small but growing body of literature that migraineurs with comorbid conditions often respond well to behavioral interventions, and that targeting comorbidities may improve migraine itself. © 2018 American Headache Society.
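
    The sequential updating scheme described above, with the first trial's posterior serving as the prior for the second, reduces to a normal-normal conjugate update when each trial's effect is summarised by an estimate and standard error. The trial-level estimates below are hypothetical placeholders, not the published values; only the mechanics are illustrated.

```python
from math import sqrt
from scipy.stats import norm

def update(prior_mean, prior_sd, estimate, se):
    """Normal-normal conjugate update of a treatment-effect estimate."""
    w0, w1 = 1 / prior_sd**2, 1 / se**2
    mean = (w0 * prior_mean + w1 * estimate) / (w0 + w1)
    return mean, sqrt(1 / (w0 + w1))

# Hypothetical trial-level summaries of the group difference in monthly headache
# days (negative favours CBTi); these are placeholders, not the published values.
flat_prior = (0.0, 100.0)                                # effectively non-informative
post1 = update(*flat_prior, estimate=-7.5, se=2.6)       # after trial 1
post2 = update(*post1,      estimate=-5.4, se=2.3)       # trial 1 posterior as prior

mean, sd = post2
lo, hi = norm.ppf([0.025, 0.975], loc=mean, scale=sd)
print(f"combined effect: {mean:.1f} days (95% CrI {lo:.1f} to {hi:.1f})")
print(f"P(effect at least 2.7 days better than control) = {norm.cdf(-2.7, mean, sd):.3f}")
```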

  10. Modelling Trial-by-Trial Changes in the Mismatch Negativity

    PubMed Central

    Lieder, Falk; Daunizeau, Jean; Garrido, Marta I.; Friston, Karl J.; Stephan, Klaas E.

    2013-01-01

    The mismatch negativity (MMN) is a differential brain response to violations of learned regularities. It has been used to demonstrate that the brain learns the statistical structure of its environment and predicts future sensory inputs. However, the algorithmic nature of these computations and the underlying neurobiological implementation remain controversial. This article introduces a mathematical framework with which competing ideas about the computational quantities indexed by MMN responses can be formalized and tested against single-trial EEG data. This framework was applied to five major theories of the MMN, comparing their ability to explain trial-by-trial changes in MMN amplitude. Three of these theories (predictive coding, model adjustment, and novelty detection) were formalized by linking the MMN to different manifestations of the same computational mechanism: approximate Bayesian inference according to the free-energy principle. We thereby propose a unifying view on three distinct theories of the MMN. The relative plausibility of each theory was assessed against empirical single-trial MMN amplitudes acquired from eight healthy volunteers in a roving oddball experiment. Models based on the free-energy principle provided more plausible explanations of trial-by-trial changes in MMN amplitude than models representing the two more traditional theories (change detection and adaptation). Our results suggest that the MMN reflects approximate Bayesian learning of sensory regularities, and that the MMN-generating process adjusts a probabilistic model of the environment according to prediction errors. PMID:23436989

  11. Stochastic modeling of neurobiological time series: Power, coherence, Granger causality, and separation of evoked responses from ongoing activity

    NASA Astrophysics Data System (ADS)

    Chen, Yonghong; Bressler, Steven L.; Knuth, Kevin H.; Truccolo, Wilson A.; Ding, Mingzhou

    2006-06-01

    In this article we consider the stochastic modeling of neurobiological time series from cognitive experiments. Our starting point is the variable-signal-plus-ongoing-activity model. From this model a differentially variable component analysis strategy is developed from a Bayesian perspective to estimate event-related signals on a single trial basis. After subtracting out the event-related signal from recorded single trial time series, the residual ongoing activity is treated as a piecewise stationary stochastic process and analyzed by an adaptive multivariate autoregressive modeling strategy which yields power, coherence, and Granger causality spectra. Results from applying these methods to local field potential recordings from monkeys performing cognitive tasks are presented.

  12. Computer modeling with randomized-controlled trial data informs the development of person-centered aged care homes.

    PubMed

    Chenoweth, Lynn; Vickland, Victor; Stein-Parbury, Jane; Jeon, Yun-Hee; Kenny, Patricia; Brodaty, Henry

    2015-10-01

    To answer questions on the essential components (services, operations and resources) of a person-centered aged care home (iHome) using computer simulation. iHome was developed with AnyLogic software using extant study data obtained from 60 Australian aged care homes, 900+ clients and 700+ aged care staff. Bayesian analysis of simulated trial data will determine the influence of different iHome characteristics on care service quality and client outcomes. Interim results: A person-centered aged care home (socio-cultural context) and care/lifestyle services (interactional environment) can produce positive outcomes for aged care clients (subjective experiences) in the simulated environment. Further testing will define essential characteristics of a person-centered care home.

  13. A SAS Interface for Bayesian Analysis with WinBUGS

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; McArdle, John J.; Wang, Lijuan; Hamagami, Fumiaki

    2008-01-01

    Bayesian methods are becoming very popular despite some practical difficulties in implementation. To assist in the practical application of Bayesian methods, we show how to implement Bayesian analysis with WinBUGS as part of a standard set of SAS routines. This implementation procedure is first illustrated by fitting a multiple regression model…

  14. Bayesian methods to determine performance differences and to quantify variability among centers in multi-center trials: the IHAST trial.

    PubMed

    Bayman, Emine O; Chaloner, Kathryn M; Hindman, Bradley J; Todd, Michael M

    2013-01-16

    To quantify the variability among centers, to identify centers whose performance is potentially outside of normal variability in the primary outcome, and to propose a guideline for declaring them outliers. Novel statistical methodology using a Bayesian hierarchical model is used. Bayesian methods for estimation and outlier detection are applied assuming an additive random center effect on the log odds of response: centers are similar but different (exchangeable). The Intraoperative Hypothermia for Aneurysm Surgery Trial (IHAST) is used as an example. Analyses were adjusted for treatment, age, gender, aneurysm location, World Federation of Neurological Surgeons scale, Fisher score and baseline NIH stroke scale scores. Adjustments for differences in center characteristics were also examined. Graphical and numerical summaries of the between-center standard deviation (sd) and variability, as well as the identification of potential outliers, are implemented. In the IHAST, the center-to-center variation in the log odds of favorable outcome at each center is consistent with a normal distribution with posterior sd of 0.538 (95% credible interval: 0.397 to 0.726) after adjusting for the effects of important covariates. Outcome differences among centers show no outlying centers. Four potential outlying centers were identified but did not meet the proposed guideline for declaring them as outlying. Center characteristics (number of subjects enrolled from the center, geographical location, learning over time, nitrous oxide, and temporary clipping use) did not predict outcome, but subject and disease characteristics did. Bayesian hierarchical methods allow for determination of whether outcomes from a specific center differ from others and whether specific clinical practices predict outcome, even when some centers/subgroups have relatively small sample sizes. In the IHAST no outlying centers were found. The estimated variability between centers was moderately large.

  15. A Gentle Introduction to Bayesian Analysis: Applications to Developmental Research

    PubMed Central

    van de Schoot, Rens; Kaplan, David; Denissen, Jaap; Asendorpf, Jens B; Neyer, Franz J; van Aken, Marcel AG

    2014-01-01

    Bayesian statistical methods are becoming ever more popular in applied and fundamental research. In this study a gentle introduction to Bayesian analysis is provided. It is shown under what circumstances it is attractive to use Bayesian estimation, and how to interpret properly the results. First, the ingredients underlying Bayesian methods are introduced using a simplified example. Thereafter, the advantages and pitfalls of the specification of prior knowledge are discussed. To illustrate Bayesian methods explained in this study, in a second example a series of studies that examine the theoretical framework of dynamic interactionism are considered. In the Discussion the advantages and disadvantages of using Bayesian statistics are reviewed, and guidelines on how to report on Bayesian statistics are provided. PMID:24116396

  16. APPLICATION OF BAYESIAN MONTE CARLO ANALYSIS TO A LAGRANGIAN PHOTOCHEMICAL AIR QUALITY MODEL. (R824792)

    EPA Science Inventory

    Uncertainties in ozone concentrations predicted with a Lagrangian photochemical air quality model have been estimated using Bayesian Monte Carlo (BMC) analysis. Bayesian Monte Carlo analysis provides a means of combining subjective "prior" uncertainty estimates developed ...

  17. Bayesian Adaptive Trial Design for a Newly Validated Surrogate Endpoint

    PubMed Central

    Renfro, Lindsay A.; Carlin, Bradley P.; Sargent, Daniel J.

    2011-01-01

    Summary The evaluation of surrogate endpoints for primary use in future clinical trials is an increasingly important research area, due to demands for more efficient trials coupled with recent regulatory acceptance of some surrogates as ‘valid.’ However, little consideration has been given to how a trial which utilizes a newly-validated surrogate endpoint as its primary endpoint might be appropriately designed. We propose a novel Bayesian adaptive trial design that allows the new surrogate endpoint to play a dominant role in assessing the effect of an intervention, while remaining realistically cautious about its use. By incorporating multi-trial historical information on the validated relationship between the surrogate and clinical endpoints, then subsequently evaluating accumulating data against this relationship as the new trial progresses, we adaptively guard against an erroneous assessment of treatment based upon a truly invalid surrogate. When the joint outcomes in the new trial seem plausible given similar historical trials, we proceed with the surrogate endpoint as the primary endpoint, and do so adaptively–perhaps stopping the trial for early success or inferiority of the experimental treatment, or for futility. Otherwise, we discard the surrogate and switch adaptive determinations to the original primary endpoint. We use simulation to test the operating characteristics of this new design compared to a standard O’Brien-Fleming approach, as well as the ability of our design to discriminate trustworthy from untrustworthy surrogates in hypothetical future trials. Furthermore, we investigate possible benefits using patient-level data from 18 adjuvant therapy trials in colon cancer, where disease-free survival is considered a newly-validated surrogate endpoint for overall survival. PMID:21838811

  18. Incorporation of stochastic engineering models as prior information in Bayesian medical device trials.

    PubMed

    Haddad, Tarek; Himes, Adam; Thompson, Laura; Irony, Telba; Nair, Rajesh

    2017-01-01

    Evaluation of medical devices via clinical trial is often a necessary step in the process of bringing a new product to market. In recent years, device manufacturers are increasingly using stochastic engineering models during the product development process. These models have the capability to simulate virtual patient outcomes. This article presents a novel method based on the power prior for augmenting a clinical trial using virtual patient data. To properly inform clinical evaluation, the virtual patient model must simulate the clinical outcome of interest, incorporating patient variability, as well as the uncertainty in the engineering model and in its input parameters. The number of virtual patients is controlled by a discount function which uses the similarity between modeled and observed data. This method is illustrated by a case study of cardiac lead fracture. Different discount functions are used to cover a wide range of scenarios in which the type I error rates and power vary for the same number of enrolled patients. Incorporation of engineering models as prior knowledge in a Bayesian clinical trial design can provide benefits of decreased sample size and trial length while still controlling type I error rate and power.

  19. Estimating mono- and bi-phasic regression parameters using a mixture piecewise linear Bayesian hierarchical model

    PubMed Central

    Zhao, Rui; Catalano, Paul; DeGruttola, Victor G.; Michor, Franziska

    2017-01-01

    The dynamics of tumor burden, secreted proteins or other biomarkers over time, is often used to evaluate the effectiveness of therapy and to predict outcomes for patients. Many methods have been proposed to investigate longitudinal trends to better characterize patients and to understand disease progression. However, most approaches assume a homogeneous patient population and a uniform response trajectory over time and across patients. Here, we present a mixture piecewise linear Bayesian hierarchical model, which takes into account both population heterogeneity and nonlinear relationships between biomarkers and time. Simulation results show that our method was able to classify subjects according to their patterns of treatment response with greater than 80% accuracy in the three scenarios tested. We then applied our model to a large randomized controlled phase III clinical trial of multiple myeloma patients. Analysis results suggest that the longitudinal tumor burden trajectories in multiple myeloma patients are heterogeneous and nonlinear, even among patients assigned to the same treatment cohort. In addition, between cohorts, there are distinct differences in terms of the regression parameters and the distributions among categories in the mixture. Those results imply that longitudinal data from clinical trials may harbor unobserved subgroups and nonlinear relationships; accounting for both may be important for analyzing longitudinal data. PMID:28723910

  20. Exposure to isoflavone-containing soy products and endothelial function: a Bayesian meta-analysis of randomized controlled trials.

    PubMed

    Beavers, D P; Beavers, K M; Miller, M; Stamey, J; Messina, M J

    2012-03-01

    To determine whether and to what degree exposure to isoflavone-containing soy products affects endothelial function (EF). Endothelial dysfunction has been identified as an independent coronary heart disease risk factor and a strong predictor of long-term cardiovascular morbidity and mortality. Data on the effects of exposure to isoflavone-containing soy products on EF are conflicting. A comprehensive literature search was conducted using the PUBMED database (National Library of Medicine, Bethesda, MD) inclusively through August 21, 2009 on RCTs using the keywords: soy, isoflavone, phytoestrogen, EF, flow mediated vasodilation, and FMD. A Bayesian meta-analysis was conducted to provide a comprehensive account of the effect of isoflavone-containing soy products on EF, as measured by flow-mediated vasodilation (FMD). A total of 17 RCTs were selected as having sufficient data for study inclusion. The overall mean absolute change in FMD (95% Bayesian CI) for isoflavone-containing soy product interventions was 1.15% (-0.52, 2.75). When the effects of separate interventions were considered, the treatment effect for isolated isoflavones was 1.98% (0.07, 3.97) compared to 0.72% (-1.39, 2.90) for isoflavone-containing soy protein. The models were not improved when considering study-specific effects such as cuff measurement location, prescribed dietary modification, and impaired baseline FMD. Cumulative evidence from the RCTs included in this meta-analysis indicates that exposure to soy isoflavones can modestly, but significantly, improve EF as measured by FMD. Therefore, exposure to isoflavone supplements may beneficially influence vascular health. Copyright © 2010 Elsevier B.V. All rights reserved.

  1. A Bayesian adaptive design for biomarker trials with linked treatments

    PubMed Central

    Wason, James M S; Abraham, Jean E; Baird, Richard D; Gournaris, Ioannis; Vallier, Anne-Laure; Brenton, James D; Earl, Helena M; Mander, Adrian P

    2015-01-01

    Background: Response to treatments is highly heterogeneous in cancer. Increased availability of biomarkers and targeted treatments has led to the need for trial designs that efficiently test new treatments in biomarker-stratified patient subgroups. Methods: We propose a novel Bayesian adaptive randomisation (BAR) design for use in multi-arm phase II trials where biomarkers exist that are potentially predictive of a linked treatment's effect. The design is motivated in part by two phase II trials that are currently in development. The design starts by randomising patients to the control treatment or to experimental treatments that the biomarker profile suggests should be active. At interim analyses, data from treated patients are used to update the allocation probabilities. If the linked treatments are effective, the allocation remains high; if ineffective, the allocation changes over the course of the trial to unlinked treatments that are more effective. Results: Our proposed design has high power to detect treatment effects if the pairings of treatment with biomarker are correct, but also performs well when alternative pairings are true. The design is consistently more powerful than parallel-groups stratified trials. Conclusions: This BAR design is a powerful approach to use when there are pairings of biomarkers with treatments available for testing simultaneously. PMID:26263479

  2. Statistical innovations in the medical device world sparked by the FDA.

    PubMed

    Campbell, Gregory; Yue, Lilly Q

    2016-01-01

    The world of medical devices while highly diverse is extremely innovative, and this facilitates the adoption of innovative statistical techniques. Statisticians in the Center for Devices and Radiological Health (CDRH) at the Food and Drug Administration (FDA) have provided leadership in implementing statistical innovations. The innovations discussed include: the incorporation of Bayesian methods in clinical trials, adaptive designs, the use and development of propensity score methodology in the design and analysis of non-randomized observational studies, the use of tipping-point analysis for missing data, techniques for diagnostic test evaluation, bridging studies for companion diagnostic tests, quantitative benefit-risk decisions, and patient preference studies.

  3. Incorporating historical information in biosimilar trials: Challenges and a hybrid Bayesian-frequentist approach.

    PubMed

    Mielke, Johanna; Schmidli, Heinz; Jones, Byron

    2018-05-01

    For the approval of biosimilars, it is, in most cases, necessary to conduct large Phase III clinical trials in patients to convince the regulatory authorities that the product is comparable in terms of efficacy and safety to the originator product. As the originator product has already been studied in several trials beforehand, it seems natural to incorporate this historical information into the demonstration of equivalent efficacy. Since all studies for the regulatory approval of biosimilars are confirmatory studies, it is required that the statistical approach has reasonable frequentist properties, most importantly, that the Type I error rate is controlled, at least in all scenarios that are realistic in practice. However, it is well known that the incorporation of historical information can lead to an inflation of the Type I error rate in the case of a conflict between the distribution of the historical data and the distribution of the trial data. We illustrate this issue and confirm, using the Bayesian robustified meta-analytic-predictive (MAP) approach as an example, that simultaneously controlling the Type I error rate over the complete parameter space and gaining power in comparison to a standard frequentist approach that only considers the data in the new study is not possible. We propose a hybrid Bayesian-frequentist approach for binary endpoints that controls the Type I error rate in the neighborhood of the center of the prior distribution, while improving the power. We study the properties of this approach in an extensive simulation study and provide a real-world example. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
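
    The type I error inflation under prior-data conflict noted above is easy to reproduce by simulation: when an informative prior built from historical controls is borrowed unchanged and the true control rate drifts away from the historical rate, a one-sided Bayesian decision rule rejects too often. The sketch below uses hypothetical historical data, direct pooling rather than the robustified MAP prior, and a 0.975 posterior-probability decision threshold; it illustrates the phenomenon, not the authors' hybrid method.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical set-up (illustrative only): historical control data give an
# informative Beta prior for the control response rate, which is borrowed
# unchanged in the new two-arm trial.
hist_resp, hist_n = 120, 300            # historical responders / patients (rate 0.40)
n_per_arm, n_draws, n_trials = 100, 4000, 2000

def type1(true_rate):
    """Simulated probability of wrongly declaring superiority when both arms
    share the same true rate, under the rule P(p_t > p_c | data) > 0.975."""
    hits = 0
    for _ in range(n_trials):
        y_c = rng.binomial(n_per_arm, true_rate)
        y_t = rng.binomial(n_per_arm, true_rate)
        # Control posterior borrows the historical prior; treatment uses Beta(1, 1)
        p_c = rng.beta(1 + hist_resp + y_c,
                       1 + hist_n - hist_resp + n_per_arm - y_c, n_draws)
        p_t = rng.beta(1 + y_t, 1 + n_per_arm - y_t, n_draws)
        hits += np.mean(p_t > p_c) > 0.975
    return hits / n_trials

for rate in (0.40, 0.45, 0.50):          # increasing conflict with the historical 0.40
    print(f"true rate {rate:.2f}: simulated type I error = {type1(rate):.3f}")
```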

  4. Bayesian hierarchical models for cost-effectiveness analyses that use data from cluster randomized trials.

    PubMed

    Grieve, Richard; Nixon, Richard; Thompson, Simon G

    2010-01-01

    Cost-effectiveness analyses (CEA) may be undertaken alongside cluster randomized trials (CRTs) where randomization is at the level of the cluster (for example, the hospital or primary care provider) rather than the individual. Costs (and outcomes) within clusters may be correlated so that the assumption made by standard bivariate regression models, that observations are independent, is incorrect. This study develops a flexible modeling framework to acknowledge the clustering in CEA that use CRTs. The authors extend previous Bayesian bivariate models for CEA of multicenter trials to recognize the specific form of clustering in CRTs. They develop new Bayesian hierarchical models (BHMs) that allow mean costs and outcomes, and also variances, to differ across clusters. They illustrate how each model can be applied using data from a large (1732 cases, 70 primary care providers) CRT evaluating alternative interventions for reducing postnatal depression. The analyses compare cost-effectiveness estimates from BHMs with standard bivariate regression models that ignore the data hierarchy. The BHMs show high levels of cost heterogeneity across clusters (intracluster correlation coefficient, 0.17). Compared with standard regression models, the BHMs yield substantially increased uncertainty surrounding the cost-effectiveness estimates, and altered point estimates. The authors conclude that ignoring clustering can lead to incorrect inferences. The BHMs that they present offer a flexible modeling framework that can be applied more generally to CEA that use CRTs.

  5. Predictors of Outcome in Traumatic Brain Injury: New Insight Using Receiver Operating Curve Indices and Bayesian Network Analysis.

    PubMed

    Zador, Zsolt; Sperrin, Matthew; King, Andrew T

    2016-01-01

    Traumatic brain injury remains a global health problem. Understanding the relative importance of outcome predictors helps optimize our treatment strategies by informing assessment protocols, clinical decisions and trial designs. In this study we establish importance ranking for outcome predictors based on receiver operating indices to identify key predictors of outcome and create simple predictive models. We then explore the associations between key outcome predictors using Bayesian networks to gain further insight into predictor importance. We analyzed the corticosteroid randomization after significant head injury (CRASH) trial database of 10008 patients and included patients for whom demographics, injury characteristics, computed tomography (CT) findings and Glasgow Coma Scale (GCS) were recorded (a total of 13 predictors that would be available to clinicians within a few hours following the injury; 6945 patients). Predictions of clinical outcome (death or severe disability at 6 months) were performed using logistic regression models with 5-fold cross validation. Predictive performance was measured using standardized partial area (pAUC) under the receiver operating curve (ROC) and we used the DeLong test for comparisons. Variable importance ranking was based on pAUC targeted at specificity (pAUCSP) and sensitivity (pAUCSE) intervals of 90-100%. Probabilistic associations were depicted using Bayesian networks. Complete AUC analysis showed very good predictive power (AUC = 0.8237, 95% CI: 0.8138-0.8336) for the complete model. Specificity-focused importance ranking highlighted age, pupillary, motor responses, obliteration of basal cisterns/3rd ventricle and midline shift. Interestingly when targeting model sensitivity, the highest-ranking variables were age, severe extracranial injury, verbal response, hematoma on CT and motor response. Simplified models, which included only these key predictors, had similar performance (pAUCSP = 0.6523, 95% CI: 0.6402-0.6641 and pAUCSE = 0.6332, 95% CI: 0.62-0.6477) compared to the complete models (pAUCSP = 0.6664, 95% CI: 0.6543-0.679, pAUCSE = 0.6436, 95% CI: 0.6289-0.6585, DeLong p values 0.1165 and 0.3448, respectively). Bayesian networks showed the predictors that did not feature in the simplified models were associated with those that did. We demonstrate that importance-based variable selection allows simplified predictive models to be created while maintaining prediction accuracy. Variable selection targeting specificity confirmed key components of clinical assessment in TBI whereas sensitivity-based ranking suggested extracranial injury as one of the important predictors. These results help refine our approach to head injury assessment, decision-making and outcome prediction targeted at model sensitivity and specificity. Bayesian networks proved to be a comprehensive tool for depicting probabilistic associations for key predictors giving insight into why the simplified model has maintained accuracy.

  6. The Application of Bayesian Analysis to Issues in Developmental Research

    ERIC Educational Resources Information Center

    Walker, Lawrence J.; Gustafson, Paul; Frimer, Jeremy A.

    2007-01-01

    This article reviews the concepts and methods of Bayesian statistical analysis, which can offer innovative and powerful solutions to some challenging analytical problems that characterize developmental research. In this article, we demonstrate the utility of Bayesian analysis, explain its unique adeptness in some circumstances, address some…

  7. A default Bayesian hypothesis test for mediation.

    PubMed

    Nuijten, Michèle B; Wetzels, Ruud; Matzke, Dora; Dolan, Conor V; Wagenmakers, Eric-Jan

    2015-03-01

    In order to quantify the relationship between multiple variables, researchers often carry out a mediation analysis. In such an analysis, a mediator (e.g., knowledge of a healthy diet) transmits the effect from an independent variable (e.g., classroom instruction on a healthy diet) to a dependent variable (e.g., consumption of fruits and vegetables). Almost all mediation analyses in psychology use frequentist estimation and hypothesis-testing techniques. A recent exception is Yuan and MacKinnon (Psychological Methods, 14, 301-322, 2009), who outlined a Bayesian parameter estimation procedure for mediation analysis. Here we complete the Bayesian alternative to frequentist mediation analysis by specifying a default Bayesian hypothesis test based on the Jeffreys-Zellner-Siow approach. We further extend this default Bayesian test by allowing a comparison to directional or one-sided alternatives, using Markov chain Monte Carlo techniques implemented in JAGS. All Bayesian tests are implemented in the R package BayesMed (Nuijten, Wetzels, Matzke, Dolan, & Wagenmakers, 2014).

  8. A Tutorial in Bayesian Potential Outcomes Mediation Analysis.

    PubMed

    Miočević, Milica; Gonzalez, Oscar; Valente, Matthew J; MacKinnon, David P

    2018-01-01

    Statistical mediation analysis is used to investigate intermediate variables in the relation between independent and dependent variables. Causal interpretation of mediation analyses is challenging because randomization of subjects to levels of the independent variable does not rule out the possibility of unmeasured confounders of the mediator to outcome relation. Furthermore, commonly used frequentist methods for mediation analysis compute the probability of the data given the null hypothesis, which is not the probability of a hypothesis given the data as in Bayesian analysis. Under certain assumptions, applying the potential outcomes framework to mediation analysis allows for the computation of causal effects, and statistical mediation in the Bayesian framework gives indirect effects probabilistic interpretations. This tutorial combines causal inference and Bayesian methods for mediation analysis so the indirect and direct effects have both causal and probabilistic interpretations. Steps in Bayesian causal mediation analysis are shown in the application to an empirical example.
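
    A minimal sketch of the probabilistic interpretation of an indirect effect, assuming a single continuous mediator and outcome and using large-sample normal approximations to the path-coefficient posteriors rather than the full potential-outcomes machinery of the tutorial; all variable names and data are hypothetical.

```python
# Sketch: posterior of the indirect effect a*b under flat priors, using the
# large-sample normal approximation to each path's posterior (not full MCMC).
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)                       # randomized exposure
m = 0.5 * x + rng.normal(size=n)             # mediator
y = 0.4 * m + 0.2 * x + rng.normal(size=n)   # outcome

def ols(X, y):
    """Return coefficient estimates and their covariance matrix."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    return beta, sigma2 * XtX_inv

Xa = np.column_stack([np.ones(n), x])        # mediator model: m ~ x
Xb = np.column_stack([np.ones(n), m, x])     # outcome model: y ~ m + x
beta_a, cov_a = ols(Xa, m)
beta_b, cov_b = ols(Xb, y)

draws = 20000
a = rng.normal(beta_a[1], np.sqrt(cov_a[1, 1]), draws)   # posterior draws of the a-path
b = rng.normal(beta_b[1], np.sqrt(cov_b[1, 1]), draws)   # posterior draws of the b-path
indirect = a * b
print("posterior mean indirect effect:", indirect.mean())
print("95% credible interval:", np.percentile(indirect, [2.5, 97.5]))
```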

  9. Efficacy of nonvenous medications for acute convulsive seizures

    PubMed Central

    Kothari, Harsh; Zhang, Zongjun; Han, Baoguang; Horn, Paul S.; Glauser, Tracy A.

    2015-01-01

    Objective: This is a network meta-analysis of nonvenous drugs used in randomized controlled trials (RCTs) for treatment of acute convulsive seizures and convulsive status epilepticus. Methods: Literature was searched according to Preferred Reporting Items for Systematic reviews and Meta-Analyses guidelines for RCTs examining treatment of acute convulsive seizures or status epilepticus with at least one of the study arms being a nonvenous medication. After demographic and outcome data extraction, a Bayesian network meta-analysis was performed and efficacy results were summarized using treatment effects and their credible intervals (CrI). We also calculated the probability of each route–drug combination being the most clinically effective for a given outcome, and provided their Bayesian hierarchical ranking. Results: This meta-analysis of 16 studies found that intramuscular midazolam (IM-MDZ) is superior to other nonvenous medications regarding time to seizure termination after administration (2.145 minutes, 95% CrI 1.308–3.489), time to seizure cessation after arrival in the hospital (3.841 minutes, 95% CrI 2.697–5.416), and time to initiate treatment (0.779 minutes, 95% CrI 0.495–1.221). Additionally, intranasal midazolam (IN-MDZ) was adjudged most efficacious for seizure cessation within 10 minutes of administration (90.4% of participants, 95% CrI 79.4%–96.9%), and persistent seizure cessation for ≥1 hour (78.5% of participants, 95% CrI 59.5%–92.1%). Paucity of RCTs produced evidence gaps resulting in small networks, routes/drugs included in some networks but not others, and some trials not being connected to any network. Conclusions: Despite the evidence gaps, IM-MDZ and IN-MDZ exhibit the best efficacy data for the nonvenous treatment of acute convulsive seizures or status epilepticus. PMID:26511448

  10. Bayesian GGE biplot models applied to maize multi-environments trials.

    PubMed

    de Oliveira, L A; da Silva, C P; Nuvunga, J J; da Silva, A Q; Balestre, M

    2016-06-17

    The additive main effects and multiplicative interaction (AMMI) and the genotype main effects and genotype x environment interaction (GGE) models stand out among the linear-bilinear models used in genotype x environment interaction studies. Despite the advantages of their use to describe genotype x environment (AMMI) or genotype and genotype x environment (GGE) interactions, these methods have known limitations that are inherent to fixed effects models, including difficulty in treating variance heterogeneity and missing data. Traditional biplots include no measure of uncertainty regarding the principal components. The present study aimed to apply the Bayesian approach to GGE biplot models and assess the implications for selecting stable and adapted genotypes. Our results demonstrated that the Bayesian approach applied to GGE models with non-informative priors was consistent with the traditional GGE biplot analysis, although the credible region incorporated into the biplot enabled distinguishing, based on probability, the performance of genotypes, and their relationships with the environments in the biplot. Those regions also enabled the identification of groups of genotypes and environments with similar effects in terms of adaptability and stability. The relative position of genotypes and environments in biplots is highly affected by the experimental accuracy. Thus, incorporation of uncertainty in biplots is a key tool for breeders to make decisions regarding stability selection and adaptability and the definition of mega-environments.

  11. Conducting indirect-treatment-comparison and network-meta-analysis studies: report of the ISPOR Task Force on Indirect Treatment Comparisons Good Research Practices: part 2.

    PubMed

    Hoaglin, David C; Hawkins, Neil; Jansen, Jeroen P; Scott, David A; Itzler, Robbin; Cappelleri, Joseph C; Boersma, Cornelis; Thompson, David; Larholt, Kay M; Diaz, Mireya; Barrett, Annabel

    2011-06-01

    Evidence-based health care decision making requires comparison of all relevant competing interventions. In the absence of randomized controlled trials involving a direct comparison of all treatments of interest, indirect treatment comparisons and network meta-analysis provide useful evidence for judiciously selecting the best treatment(s). Mixed treatment comparisons, a special case of network meta-analysis, combine direct evidence and indirect evidence for particular pairwise comparisons, thereby synthesizing a greater share of the available evidence than traditional meta-analysis. This report from the International Society for Pharmacoeconomics and Outcomes Research Indirect Treatment Comparisons Good Research Practices Task Force provides guidance on technical aspects of conducting network meta-analyses (our use of this term includes most methods that involve meta-analysis in the context of a network of evidence). We start with a discussion of strategies for developing networks of evidence. Next we briefly review assumptions of network meta-analysis. Then we focus on the statistical analysis of the data: objectives, models (fixed-effects and random-effects), frequentist versus Bayesian approaches, and model validation. A checklist highlights key components of network meta-analysis, and substantial examples illustrate indirect treatment comparisons (both frequentist and Bayesian approaches) and network meta-analysis. A further section discusses eight key areas for future research. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
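
    The simplest building block behind such comparisons is the adjusted indirect comparison through a common comparator (Bucher's method); the sketch below uses invented log hazard ratios purely to illustrate the arithmetic.

```python
# Sketch: Bucher adjusted indirect comparison of A vs B through a common comparator C.
# Inputs are hypothetical log hazard ratios and standard errors from two trials.
import numpy as np
from scipy import stats

log_hr_AC, se_AC = np.log(0.70), 0.10   # trial 1: A vs C (hypothetical)
log_hr_BC, se_BC = np.log(0.85), 0.12   # trial 2: B vs C (hypothetical)

# The indirect estimate of A vs B preserves randomization within each trial
log_hr_AB = log_hr_AC - log_hr_BC
se_AB = np.sqrt(se_AC**2 + se_BC**2)

z = stats.norm.ppf(0.975)
lo, hi = np.exp(log_hr_AB - z * se_AB), np.exp(log_hr_AB + z * se_AB)
print(f"indirect HR A vs B: {np.exp(log_hr_AB):.2f} (95% CI {lo:.2f} to {hi:.2f})")
```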

  12. A Bayesian approach to meta-analysis of plant pathology studies.

    PubMed

    Mila, A L; Ngugi, H K

    2011-01-01

    Bayesian statistical methods are used for meta-analysis in many disciplines, including medicine, molecular biology, and engineering, but have not yet been applied for quantitative synthesis of plant pathology studies. In this paper, we illustrate the key concepts of Bayesian statistics and outline the differences between Bayesian and classical (frequentist) methods in the way parameters describing population attributes are considered. We then describe a Bayesian approach to meta-analysis and present a plant pathological example based on studies evaluating the efficacy of plant protection products that induce systemic acquired resistance for the management of fire blight of apple. In a simple random-effects model assuming a normal distribution of effect sizes and no prior information (i.e., a noninformative prior), the results of the Bayesian meta-analysis are similar to those obtained with classical methods. Implementing the same model with a Student's t distribution and a noninformative prior for the effect sizes, instead of a normal distribution, yields similar results for all but acibenzolar-S-methyl (Actigard) which was evaluated only in seven studies in this example. Whereas both the classical (P = 0.28) and the Bayesian analysis with a noninformative prior (95% credibility interval [CRI] for the log response ratio: -0.63 to 0.08) indicate a nonsignificant effect for Actigard, specifying a t distribution resulted in a significant, albeit variable, effect for this product (CRI: -0.73 to -0.10). These results confirm the sensitivity of the analytical outcome (i.e., the posterior distribution) to the choice of prior in Bayesian meta-analyses involving a limited number of studies. We review some pertinent literature on more advanced topics, including modeling of among-study heterogeneity, publication bias, analyses involving a limited number of studies, and methods for dealing with missing data, and show how these issues can be approached in a Bayesian framework. Bayesian meta-analysis can readily include information not easily incorporated in classical methods, and allow for a full evaluation of competing models. Given the power and flexibility of Bayesian methods, we expect them to become widely adopted for meta-analysis of plant pathology studies.
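
    A minimal sketch of the random-effects model described above (normal effect sizes, vague priors), evaluated by brute-force grid approximation rather than MCMC; the effect sizes and variances are invented and do not come from the fire blight studies.

```python
# Sketch: Bayesian random-effects meta-analysis, y_i ~ N(mu, v_i + tau^2),
# with flat priors on mu and tau, evaluated on a simple grid (illustrative data).
import numpy as np
from scipy import stats

y = np.array([-0.40, -0.15, -0.55, -0.20, -0.35, -0.10, -0.30])  # log response ratios (made up)
v = np.array([0.04, 0.06, 0.05, 0.03, 0.07, 0.05, 0.04])         # within-study variances (made up)

mu_grid = np.linspace(-1.0, 0.5, 301)
tau_grid = np.linspace(0.0, 1.0, 201)
M, T = np.meshgrid(mu_grid, tau_grid, indexing="ij")

# Joint log-posterior up to a constant: sum of study log-likelihoods (flat priors)
logpost = np.zeros_like(M)
for yi, vi in zip(y, v):
    logpost += stats.norm.logpdf(yi, loc=M, scale=np.sqrt(vi + T**2))

post = np.exp(logpost - logpost.max())
post /= post.sum()

mu_marg = post.sum(axis=1)                  # marginal posterior of mu on the grid
mu_mean = np.sum(mu_grid * mu_marg)
cdf = np.cumsum(mu_marg)
cri = (mu_grid[np.searchsorted(cdf, 0.025)], mu_grid[np.searchsorted(cdf, 0.975)])
print(f"posterior mean effect: {mu_mean:.3f}, 95% CrI: {cri[0]:.3f} to {cri[1]:.3f}")
```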

  13. Bayesian structural equation modeling in sport and exercise psychology.

    PubMed

    Stenling, Andreas; Ivarsson, Andreas; Johnson, Urban; Lindwall, Magnus

    2015-08-01

    Bayesian statistics is on the rise in mainstream psychology, but applications in sport and exercise psychology research are scarce. In this article, the foundations of Bayesian analysis are introduced, and we will illustrate how to apply Bayesian structural equation modeling in a sport and exercise psychology setting. More specifically, we contrasted a confirmatory factor analysis on the Sport Motivation Scale II estimated with the most commonly used estimator, maximum likelihood, and a Bayesian approach with weakly informative priors for cross-loadings and correlated residuals. The results indicated that the model with Bayesian estimation and weakly informative priors provided a good fit to the data, whereas the model estimated with a maximum likelihood estimator did not produce a well-fitting model. The reasons for this discrepancy between maximum likelihood and Bayesian estimation are discussed as well as potential advantages and caveats with the Bayesian approach.

  14. Characterizing Oscillatory Bursts in Single-Trial EEG Data

    NASA Technical Reports Server (NTRS)

    Knuth, K. H.; Shah, A. S.; Lakatos, P.; Schroeder, C. E.

    2004-01-01

    Oscillatory bursts in numerous bands ranging from low (theta) to high frequencies (e.g., gamma) undoubtedly play an important role in cortical dynamics. Largely because of the inadequacy of existing analytic techniques, however, oscillatory bursts and their role in cortical processing remain poorly understood. To study oscillatory bursts effectively, one must be able to isolate and characterize them in the single trial. We describe a series of straightforward analysis techniques that produce useful indices of burst characteristics. First, stimulus-evoked responses are estimated using Differentially Variable Component Analysis (dVCA) and are subtracted from the single trial. The single-trial characteristics of the evoked responses are stored to identify possible correlations with burst activity. Time-frequency (T-F), or wavelet, analyses are then applied to the single-trial residuals. While T-F plots have been used in recent studies to identify and isolate bursts, we go further by fitting each burst in the T-F plot with a two-dimensional Gaussian. This provides a set of burst characteristics, such as center time, burst duration, center frequency, frequency dispersion, and amplitude, all of which contribute to the accurate characterization of the individual burst. The burst phase can also be estimated. Burst characteristics can be quantified with several standard techniques (e.g., histogramming and clustering), as well as Bayesian techniques (e.g., blocking), to allow a more parametric description of the characteristics of oscillatory bursts and of the relationships of specific parameters to cortical excitability and stimulus integration.
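
    A hypothetical sketch of the burst-characterization step: fitting a two-dimensional Gaussian to a synthetic time-frequency patch with scipy.optimize.curve_fit to recover center time, center frequency, duration, frequency dispersion, and amplitude. The synthetic burst below stands in for real wavelet output.

```python
# Sketch: fit a 2D Gaussian to a time-frequency power patch to recover burst
# center time, center frequency, duration, frequency dispersion, and amplitude.
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, t0, f0, sigma_t, sigma_f):
    t, f = coords
    return amp * np.exp(-0.5 * ((t - t0) / sigma_t) ** 2
                        - 0.5 * ((f - f0) / sigma_f) ** 2)

# Synthetic single-trial T-F patch: one burst plus noise (stand-in for wavelet output)
t = np.linspace(0, 1, 80)            # seconds
f = np.linspace(20, 60, 60)          # Hz
T, F = np.meshgrid(t, f)
rng = np.random.default_rng(0)
power = gauss2d((T, F), 3.0, 0.45, 40.0, 0.05, 4.0) + 0.2 * rng.normal(size=T.shape)

# Rough initial guess from the location of the power maximum
p0 = [power.max(), t[np.argmax(power.max(axis=0))],
      f[np.argmax(power.max(axis=1))], 0.1, 5.0]
popt, _ = curve_fit(gauss2d, (T.ravel(), F.ravel()), power.ravel(), p0=p0)
amp, t0, f0, sigma_t, sigma_f = popt
print(f"center time {t0:.3f} s, center frequency {f0:.1f} Hz, "
      f"duration ~{sigma_t:.3f} s, dispersion ~{sigma_f:.1f} Hz, amplitude {amp:.2f}")
```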

  15. On Bayesian methods of exploring qualitative interactions for targeted treatment.

    PubMed

    Chen, Wei; Ghosh, Debashis; Raghunathan, Trivellore E; Norkin, Maxim; Sargent, Daniel J; Bepler, Gerold

    2012-12-10

    Providing personalized treatments designed to maximize benefits and minimize harms is of tremendous current medical interest. One problem in this area is the evaluation of the interaction between the treatment and other predictor variables. Treatment effects in subgroups having the same direction but different magnitudes are called quantitative interactions, whereas those having opposite directions in subgroups are called qualitative interactions (QIs). Identifying QIs is challenging because they are rare and usually unknown among many potential biomarkers. Meanwhile, subgroup analysis reduces the power of hypothesis testing and multiple subgroup analyses inflate the type I error rate. We propose a new Bayesian approach to search for QIs in a multiple regression setting with adaptive decision rules. We consider various regression models for the outcome. We illustrate this method in two examples of phase III clinical trials. The algorithm is straightforward and easy to implement using existing software packages. We provide sample code in Appendix A. Copyright © 2012 John Wiley & Sons, Ltd.

  16. MIDAS: a practical Bayesian design for platform trials with molecularly targeted agents.

    PubMed

    Yuan, Ying; Guo, Beibei; Munsell, Mark; Lu, Karen; Jazaeri, Amir

    2016-09-30

    Recent success of immunotherapy and other targeted therapies in cancer treatment has led to an unprecedented surge in the number of novel therapeutic agents that need to be evaluated in clinical trials. Traditional phase II clinical trial designs were developed for evaluating one candidate treatment at a time and thus not efficient for this task. We propose a Bayesian phase II platform design, the multi-candidate iterative design with adaptive selection (MIDAS), which allows investigators to continuously screen a large number of candidate agents in an efficient and seamless fashion. MIDAS consists of one control arm, which contains a standard therapy as the control, and several experimental arms, which contain the experimental agents. Patients are adaptively randomized to the control and experimental agents based on their estimated efficacy. During the trial, we adaptively drop inefficacious or overly toxic agents and 'graduate' the promising agents from the trial to the next stage of development. Whenever an experimental agent graduates or is dropped, the corresponding arm opens immediately for testing the next available new agent. Simulation studies show that MIDAS substantially outperforms the conventional approach. The proposed design yields a significantly higher probability for identifying the promising agents and dropping the futile agents. In addition, MIDAS requires only one master protocol, which streamlines trial conduct and substantially decreases the overhead burden. Copyright © 2016 John Wiley & Sons, Ltd.
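
    A generic sketch of Bayesian adaptive randomization with Beta posteriors for a binary response, in the spirit of (but much simpler than) the platform design described above; arm names, response rates, and the "graduation" threshold are all hypothetical.

```python
# Sketch: Bayesian adaptive randomization for a control arm and several
# experimental arms with binary response (a generic illustration, not MIDAS itself).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_rates = {"control": 0.20, "agent_A": 0.35, "agent_B": 0.20}   # hypothetical
successes = {arm: 0 for arm in true_rates}
trials = {arm: 0 for arm in true_rates}

def allocation_probs():
    """Randomize proportionally to each arm's posterior mean response rate (Beta(1,1) prior)."""
    means = np.array([(successes[a] + 1) / (trials[a] + 2) for a in true_rates])
    return means / means.sum()

for patient in range(120):
    arm = rng.choice(list(true_rates), p=allocation_probs())
    trials[arm] += 1
    successes[arm] += rng.random() < true_rates[arm]

for arm in true_rates:
    a, b = successes[arm] + 1, trials[arm] - successes[arm] + 1
    # Posterior probability the arm's response rate exceeds 30% (hypothetical graduation bar)
    print(arm, trials[arm], "patients;",
          f"P(rate > 0.30 | data) = {stats.beta.sf(0.30, a, b):.2f}")
```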

  17. MIDAS: A Practical Bayesian Design for Platform Trials with Molecularly Targeted Agents

    PubMed Central

    Yuan, Ying; Guo, Beibei; Munsell, Mark; Lu, Karen; Jazaeri, Amir

    2016-01-01

    Recent success of immunotherapy and other targeted therapies in cancer treatment has led to an unprecedented surge in the number of novel therapeutic agents that need to be evaluated in clinical trials. Traditional phase II clinical trial designs were developed for evaluating one candidate treatment at a time, and thus not efficient for this task. We propose a Bayesian phase II platform design, the Multi-candidate Iterative Design with Adaptive Selection (MIDAS), which allows investigators to continuously screen a large number of candidate agents in an efficient and seamless fashion. MIDAS consists of one control arm, which contains a standard therapy as the control, and several experimental arms, which contain the experimental agents. Patients are adaptively randomized to the control and experimental agents based on their estimated efficacy. During the trial, we adaptively drop inefficacious or overly toxic agents and “graduate” the promising agents from the trial to the next stage of development. Whenever an experimental agent graduates or is dropped, the corresponding arm opens immediately for testing the next available new agent. Simulation studies show that MIDAS substantially outperforms the conventional approach. The proposed design yields a significantly higher probability for identifying the promising agents and dropping the futile agents. In addition, MIDAS requires only one master protocol, which streamlines trial conduct and substantially decreases the overhead burden. PMID:27112322

  18. Bayesian Statistics for Biological Data: Pedigree Analysis

    ERIC Educational Resources Information Center

    Stanfield, William D.; Carlton, Matthew A.

    2004-01-01

    Bayes' formula is applied to the biological problem of pedigree analysis to show that Bayesian and non-Bayesian, or "classical", methods of probability calculation give different answers. First-year college biology students can thereby be introduced to Bayesian statistics.
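
    A hypothetical worked example of the kind of pedigree calculation the article describes: a woman with prior carrier probability 1/2 for an X-linked recessive allele, updated as unaffected sons are observed. A non-Bayesian calculation that ignores the offspring data would leave the probability at 1/2.

```python
# Worked example (hypothetical pedigree): a woman has prior probability 1/2 of
# carrying an X-linked recessive allele; each son of a carrier is affected with
# probability 1/2. Update the carrier probability after observing unaffected sons.
from fractions import Fraction

def carrier_posterior(prior, n_unaffected_sons):
    like_carrier = Fraction(1, 2) ** n_unaffected_sons   # P(all sons unaffected | carrier)
    like_noncarrier = Fraction(1)                         # P(all sons unaffected | not carrier)
    num = prior * like_carrier
    return num / (num + (1 - prior) * like_noncarrier)

prior = Fraction(1, 2)
for k in range(4):
    print(k, "unaffected sons ->", carrier_posterior(prior, k))
# 0 -> 1/2, 1 -> 1/3, 2 -> 1/5, 3 -> 1/9
```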

  19. Ockham's razor and Bayesian analysis. [statistical theory for systems evaluation]

    NASA Technical Reports Server (NTRS)

    Jefferys, William H.; Berger, James O.

    1992-01-01

    'Ockham's razor', the ad hoc principle enjoining the greatest possible simplicity in theoretical explanations, is presently shown to be justifiable as a consequence of Bayesian inference; Bayesian analysis can, moreover, clarify the nature of the 'simplest' hypothesis consistent with the given data. By choosing the prior probabilities of hypotheses, it becomes possible to quantify the scientific judgment that simpler hypotheses are more likely to be correct. Bayesian analysis also shows that a hypothesis with fewer adjustable parameters intrinsically possesses an enhanced posterior probability, due to the clarity of its predictions.
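
    A small numerical illustration of the automatic Ockham penalty, assuming normal data and comparing a point-null model with a one-parameter alternative under a deliberately vague prior; the data are simulated and the prior width is arbitrary, chosen only to make the effect visible.

```python
# Sketch: the automatic "Ockham penalty" in Bayesian model comparison.
# M0: y ~ N(0, 1) with no free parameter; M1: y ~ N(theta, 1), theta ~ N(0, 10^2).
# When the data are consistent with zero, the simpler M0 attains the higher
# marginal likelihood because M1 spreads its prior over many unused values of theta.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
data = rng.normal(loc=0.1, scale=1.0, size=20)      # simulated data near zero

def loglik(theta):
    return stats.norm.logpdf(data, loc=theta, scale=1.0).sum()

# Marginal likelihood of M1: average the likelihood over the prior (simple grid sum)
theta = np.linspace(-4.0, 4.0, 4001)
lik = np.exp([loglik(t) for t in theta])
m1 = np.sum(lik * stats.norm.pdf(theta, 0.0, 10.0)) * (theta[1] - theta[0])
m0 = np.exp(loglik(0.0))                             # M0 has nothing to integrate over

print(f"Bayes factor B01 = {m0 / m1:.1f}  (values > 1 favour the simpler model)")
```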

  20. Assessing the association between natural food folate intake and blood folate concentrations: a systematic review and Bayesian meta-analysis of trials and observational studies.

    PubMed

    Marchetta, Claire M; Devine, Owen J; Crider, Krista S; Tsang, Becky L; Cordero, Amy M; Qi, Yan Ping; Guo, Jing; Berry, Robert J; Rosenthal, Jorge; Mulinare, Joseph; Mersereau, Patricia; Hamner, Heather C

    2015-04-10

    Folate is found naturally in foods or as synthetic folic acid in dietary supplements and fortified foods. Adequate periconceptional folic acid intake can prevent neural tube defects. Folate intake impacts blood folate concentration; however, the dose-response between natural food folate and blood folate concentrations has not been well described. We estimated this association among healthy females. A systematic literature review identified studies (January 1992 to March 2014) with both natural food folate intake alone and blood folate concentration among females aged 12-49 years. Bayesian methods were used to estimate regression model parameters describing the association between natural food folate intake and subsequent blood folate concentration. Seven controlled trials and 29 observational studies met the inclusion criteria. For the six studies using microbiologic assay (MA) included in the meta-analysis, we estimate that a 6% (95% Credible Interval (CrI): 4%, 9%) increase in red blood cell (RBC) folate concentration and a 7% (95% CrI: 1%, 12%) increase in serum/plasma folate concentration can occur for every 10% increase in natural food folate intake. Using modeled results, we estimate that a natural food folate intake of ≥ 450 μg dietary folate equivalents (DFE)/day could achieve the lower bound of an RBC folate concentration (~ 1050 nmol/L) associated with the lowest risk of a neural tube defect. Natural food folate intake affects blood folate concentration and adequate intakes could help women achieve an RBC folate concentration associated with a risk of 6 neural tube defects/10,000 live births.
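
    Reading the reported dose-response as an elasticity on the log-log scale, the implied regression slope can be back-calculated (this arithmetic is an editorial illustration, not a figure from the paper): a slope $\beta$ satisfying $(1.10)^{\beta} = 1.06$ gives $\beta = \ln(1.06)/\ln(1.10) \approx 0.0583/0.0953 \approx 0.61$, i.e. roughly a 0.6% increase in RBC folate concentration per 1% increase in natural food folate intake.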

  1. Markov Chain Monte Carlo Methods for Bayesian Data Analysis in Astronomy

    NASA Astrophysics Data System (ADS)

    Sharma, Sanjib

    2017-08-01

    Markov Chain Monte Carlo based Bayesian data analysis has now become the method of choice for analyzing and interpreting data in almost all disciplines of science. In astronomy, over the last decade, we have also seen a steady increase in the number of papers that employ Monte Carlo based Bayesian analysis. New, efficient Monte Carlo based methods are continuously being developed and explored. In this review, we first explain the basics of Bayesian theory and discuss how to set up data analysis problems within this framework. Next, we provide an overview of various Monte Carlo based methods for performing Bayesian data analysis. Finally, we discuss advanced ideas that enable us to tackle complex problems and thus hold great promise for the future. We also distribute downloadable computer software (available at https://github.com/sanjibs/bmcmc/ ) that implements some of the algorithms and examples discussed here.
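
    A minimal random-walk Metropolis sampler for a toy posterior, to make the basic algorithm concrete; this is a generic sketch and does not use or reproduce the bmcmc package's interface.

```python
# Sketch: random-walk Metropolis sampling of a toy posterior (mean of normal data
# with a flat prior). Generic illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
data = rng.normal(loc=2.0, scale=1.0, size=50)

def log_post(mu):
    return stats.norm.logpdf(data, loc=mu, scale=1.0).sum()   # flat prior on mu

n_steps, step_size = 5000, 0.5
chain = np.empty(n_steps)
mu, lp = 0.0, log_post(0.0)
accepted = 0
for i in range(n_steps):
    prop = mu + step_size * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:        # Metropolis acceptance rule
        mu, lp = prop, lp_prop
        accepted += 1
    chain[i] = mu

burned = chain[1000:]                              # discard burn-in
print(f"acceptance rate {accepted / n_steps:.2f}, "
      f"posterior mean {burned.mean():.2f} ± {burned.std():.2f}")
```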

  2. Locally advanced squamous cell carcinoma of the head and neck: A systematic review and Bayesian network meta-analysis of the currently available treatment options.

    PubMed

    Iocca, Oreste; Farcomeni, Alessio; Di Rocco, Arianna; Di Maio, Pasquale; Golusinski, Paweł; Pardiñas López, Simón; Savo, Alfredo; Pellini, Raul; Spriano, Giuseppe

    2018-05-01

    There are still many unresolved questions in the management of locally advanced head and neck cancer (HNC). Many chemotherapeutic drugs and radiotherapy fractionation schemes are available and not all have been evaluated in head-to-head clinical trials. This systematic review and Bayesian network meta-analysis aims to compare the available treatment strategies and chemotherapeutic options for locally advanced HNC. We performed a search of bibliographic databases, trial registries and meeting proceedings for published and unpublished randomized trials from January 1st 2000 to December 1st 2017. Trials had to compare systemic interventions and radiotherapy (RT) approaches for locally advanced, non-metastatic HNC. Trials recruiting patients for whom surgery was the first treatment option, trials with fewer than 20 patients per arm, or trials that did not use randomization for treatment allocation were excluded from the analysis. Summary estimates of overall survival (OS), progression-free survival (PFS) and toxicity outcomes (grade 3-4 mucositis and neutropenia) were extracted from the included studies on a predefined database sheet. Bias was assessed through the Cochrane risk of bias assessment tool. We performed a set of pairwise meta-analyses using a random-effects model. We also performed a random-effects network meta-analysis under a Bayesian framework. From the 57 included trials, comprising 15,723 patients, it was possible to conduct analyses of 26 treatments for OS, 22 treatments for PFS and 10 treatments for toxicity. In terms of OS, concurrent chemoradiotherapy (CCRT) with cisplatin (HR 0.70, 95% CrI [credible interval] 0.62-0.78) and cetuximab on top of CCRT (HR 0.7, 95% CrI 0.5-0.97) are clearly superior to conventional RT alone. Induction chemotherapy (IC) with cisplatin and fluorouracil (HR 0.74, 95% CrI 0.52-0.95), IC with docetaxel, cisplatin, fluorouracil (HR 0.55, 95% CrI 0.54-0.89) and IC with paclitaxel, cisplatin, fluorouracil (HR 0.55, 95% CrI 0.34-0.89) before CCRT are all superior to conventional RT. CCRT with cisplatin is also superior to altered fractionation RT (HR 0.74, 95% CrI 0.64-0.84). Altered fractionation RT is not superior to conventional RT (HR 0.95, 95% CrI 0.85-1.06). Regarding PFS, CCRT with cisplatin (HR 0.72, 95% CrI 0.63-0.83), cisplatin and fluorouracil (HR 0.67, 95% CrI 0.5-0.88), carboplatin (HR 0.63, 95% CrI 0.46-0.87), carboplatin and fluorouracil (HR 0.75, 95% CrI 0.56-1), IC with cisplatin and fluorouracil (HR 0.59, 95% CrI 0.45-0.78), IC with docetaxel, cisplatin and fluorouracil (HR 0.53, 95% CrI 0.41-0.68) and IC with paclitaxel, cisplatin and fluorouracil (HR 0.59, 95% CrI 0.35-0.99) are superior to conventional RT and altered fractionation RT. IC with docetaxel, cisplatin and fluorouracil shows a significant superiority against CCRT with cisplatin (HR 0.73, 95% CrI 0.58-0.92). Altered fractionation RT is not superior to conventional RT (HR 0.91, 95% CrI 0.81-1.02). Altered fractionation increases the risk of developing grade 3-4 mucositis compared to conventional RT (OR 3.74, 95% CrI 1.64-8.67). Interpretation: CCRT with cisplatin remains the gold standard of treatment. Taxane-based IC regimens may have an impact on locally advanced disease. Altered fractionation RT is inferior to CCRT and also does not seem to be meaningfully better than conventionally fractionated RT alone. Its role in locally advanced disease should be reevaluated. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Power in Bayesian Mediation Analysis for Small Sample Research

    PubMed Central

    Miočević, Milica; MacKinnon, David P.; Levy, Roy

    2018-01-01

    It was suggested that Bayesian methods have potential for increasing power in mediation analysis (Koopman, Howe, Hollenbeck, & Sin, 2015; Yuan & MacKinnon, 2009). This paper compares the power of Bayesian credibility intervals for the mediated effect to the power of normal theory, distribution of the product, percentile, and bias-corrected bootstrap confidence intervals at N≤ 200. Bayesian methods with diffuse priors have power comparable to the distribution of the product and bootstrap methods, and Bayesian methods with informative priors had the most power. Varying degrees of precision of prior distributions were also examined. Increased precision led to greater power only when N≥ 100 and the effects were small, N < 60 and the effects were large, and N < 200 and the effects were medium. An empirical example from psychology illustrated a Bayesian analysis of the single mediator model from prior selection to interpreting results. PMID:29662296

  4. Power in Bayesian Mediation Analysis for Small Sample Research.

    PubMed

    Miočević, Milica; MacKinnon, David P; Levy, Roy

    2017-01-01

    It was suggested that Bayesian methods have potential for increasing power in mediation analysis (Koopman, Howe, Hollenbeck, & Sin, 2015; Yuan & MacKinnon, 2009). This paper compares the power of Bayesian credibility intervals for the mediated effect to the power of normal theory, distribution of the product, percentile, and bias-corrected bootstrap confidence intervals at N≤ 200. Bayesian methods with diffuse priors have power comparable to the distribution of the product and bootstrap methods, and Bayesian methods with informative priors had the most power. Varying degrees of precision of prior distributions were also examined. Increased precision led to greater power only when N≥ 100 and the effects were small, N < 60 and the effects were large, and N < 200 and the effects were medium. An empirical example from psychology illustrated a Bayesian analysis of the single mediator model from prior selection to interpreting results.

  5. Uncertainty in action-value estimation affects both action choice and learning rate of the choice behaviors of rats

    PubMed Central

    Funamizu, Akihiro; Ito, Makoto; Doya, Kenji; Kanzaki, Ryohei; Takahashi, Hirokazu

    2012-01-01

    The estimation of reward outcomes for action candidates is essential for decision making. In this study, we examined whether and how the uncertainty in reward outcome estimation affects the action choice and learning rate. We designed a choice task in which rats selected either the left-poking or right-poking hole and received a reward of a food pellet stochastically. The reward probabilities of the left and right holes were chosen from six settings (high, 100% vs. 66%; mid, 66% vs. 33%; low, 33% vs. 0% for the left vs. right holes, and the opposites) in every 20–549 trials. We used Bayesian Q-learning models to estimate the time course of the probability distribution of action values and tested if they better explain the behaviors of rats than standard Q-learning models that estimate only the mean of action values. Model comparison by cross-validation revealed that a Bayesian Q-learning model with an asymmetric update for reward and non-reward outcomes fit the choice time course of the rats best. In the action-choice equation of the Bayesian Q-learning model, the estimated coefficient for the variance of action value was positive, meaning that rats were uncertainty seeking. Further analysis of the Bayesian Q-learning model suggested that the uncertainty facilitated the effective learning rate. These results suggest that the rats consider uncertainty in action-value estimation and that they have an uncertainty-seeking action policy and uncertainty-dependent modulation of the effective learning rate. PMID:22487046
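
    A generic Kalman-filter-style sketch of Bayesian value learning with an uncertainty bonus in the choice rule and an uncertainty-dependent effective learning rate, in the spirit of the model described above but not the authors' exact specification; all constants and reward probabilities are invented.

```python
# Sketch: Kalman-style Bayesian value learning for a two-choice task. Each action
# keeps a Gaussian belief over its value; choice weights the mean plus an
# uncertainty bonus, and the effective learning rate is the Kalman gain
# (larger when the belief is more uncertain).
import numpy as np

rng = np.random.default_rng(5)
p_reward = np.array([0.66, 0.33])           # hypothetical reward probabilities (left, right)
mean = np.zeros(2)                           # belief means
var = np.full(2, 1.0)                        # belief variances
obs_noise, diffusion, bonus, beta = 0.25, 0.01, 1.0, 5.0

for t in range(500):
    score = mean + bonus * np.sqrt(var)                  # uncertainty-seeking value
    p_left = 1.0 / (1.0 + np.exp(-beta * (score[0] - score[1])))
    a = 0 if rng.random() < p_left else 1
    r = float(rng.random() < p_reward[a])

    gain = var[a] / (var[a] + obs_noise)                 # effective learning rate
    mean[a] += gain * (r - mean[a])
    var[a] = (1 - gain) * var[a] + diffusion             # shrink, then diffuse over time
    var[1 - a] += diffusion                              # unchosen action grows more uncertain

print("belief means:", np.round(mean, 2), "belief variances:", np.round(var, 3))
```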

  6. A predictive Bayesian approach to the design and analysis of bridging studies.

    PubMed

    Gould, A Lawrence; Jin, Tian; Zhang, Li Xin; Wang, William W B

    2012-09-01

    Pharmaceutical product development culminates in confirmatory trials whose evidence for the product's efficacy and safety supports regulatory approval for marketing. Regulatory agencies in countries whose patients were not included in the confirmatory trials often require confirmation of efficacy and safety in their patient populations, which may be accomplished by carrying out bridging studies to establish consistency for local patients of the effects demonstrated by the original trials. This article describes and illustrates an approach for designing and analyzing bridging studies that fully incorporates the information provided by the original trials. The approach determines probability contours or regions of joint predictive intervals for treatment effect and response variability, or endpoints of treatment effect confidence intervals, that are functions of the findings from the original trials, the sample sizes for the bridging studies, and possible deviations from complete consistency with the original trials. The bridging studies are judged consistent with the original trials if their findings fall within the probability contours or regions. Regulatory considerations determine the region definitions and appropriate probability levels. Producer and consumer risks provide a way to assess alternative region and probability choices. [Supplemental materials are available for this article. Go to the Publisher's online edition of the Journal of Biopharmaceutical Statistics for the following free supplemental resource: Appendix 2: R code for Calculations.].

  7. Speeded Reaching Movements around Invisible Obstacles

    PubMed Central

    Hudson, Todd E.; Wolfe, Uta; Maloney, Laurence T.

    2012-01-01

    We analyze the problem of obstacle avoidance from a Bayesian decision-theoretic perspective using an experimental task in which reaches around a virtual obstacle were made toward targets on an upright monitor. Subjects received monetary rewards for touching the target and incurred losses for accidentally touching the intervening obstacle. The locations of target-obstacle pairs within the workspace were varied from trial to trial. We compared human performance to that of a Bayesian ideal movement planner (who chooses motor strategies maximizing expected gain) using the Dominance Test employed in Hudson et al. (2007). The ideal movement planner suffers from the same sources of noise as the human, but selects movement plans that maximize expected gain in the presence of that noise. We find good agreement between the predictions of the model and actual performance in most but not all experimental conditions. PMID:23028276

  8. Bayesian enhancement two-stage design for single-arm phase II clinical trials with binary and time-to-event endpoints.

    PubMed

    Shi, Haolun; Yin, Guosheng

    2018-02-21

    Simon's two-stage design is one of the most commonly used methods in phase II clinical trials with binary endpoints. The design tests the null hypothesis that the response rate is less than an uninteresting level, versus the alternative hypothesis that the response rate is greater than a desirable target level. From a Bayesian perspective, we compute the posterior probabilities of the null and alternative hypotheses given that a promising result is declared in Simon's design. Our study reveals that because the frequentist hypothesis testing framework places its focus on the null hypothesis, a potentially efficacious treatment identified by rejecting the null under Simon's design could have only less than 10% posterior probability of attaining the desirable target level. Due to the indifference region between the null and alternative, rejecting the null does not necessarily mean that the drug achieves the desirable response level. To clarify such ambiguity, we propose a Bayesian enhancement two-stage (BET) design, which guarantees a high posterior probability of the response rate reaching the target level, while allowing for early termination and sample size saving in case that the drug's response rate is smaller than the clinically uninteresting level. Moreover, the BET design can be naturally adapted to accommodate survival endpoints. We conduct extensive simulation studies to examine the empirical performance of our design and present two trial examples as applications. © 2018, The International Biometric Society.
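
    The quantity at the heart of this design, the posterior probability that the response rate reaches a target level, is a Beta tail probability under a conjugate Beta-binomial model; the counts and thresholds below are hypothetical, chosen only to illustrate the ambiguity the abstract describes.

```python
# Sketch: posterior probability that the true response rate reaches a target level,
# under a Beta(1, 1) prior and observed binomial data (numbers are hypothetical).
from scipy import stats

responses, n = 11, 30            # e.g., 11 responders out of 30 patients
p_null, p_target = 0.20, 0.40    # clinically uninteresting and desirable levels

a, b = 1 + responses, 1 + n - responses
print(f"P(rate > {p_null} | data)   = {stats.beta.sf(p_null, a, b):.3f}")
print(f"P(rate > {p_target} | data) = {stats.beta.sf(p_target, a, b):.3f}")
# A design can reject the null (first probability high) while the posterior
# probability of reaching the desirable target level (second) remains modest.
```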

  9. Bayesian network meta-analysis comparing five contemporary treatment strategies for newly diagnosed acute promyelocytic leukaemia

    PubMed Central

    Ren, Yong; Duan, Chongyang; Chen, Shangwu; Xu, Anlong

    2016-01-01

    Acute promyelocytic leukemia (APL) is a curable subtype of acute myeloid leukemia. The optimum regimen for newly diagnosed APL remains inconclusive. In this Bayesian network meta-analysis, we compared the effectiveness of five regimens-arsenic trioxide (ATO) + all-trans retinoic acid (ATRA), realgar-indigo naturalis formula (RIF) which contains arsenic tetrasulfide + ATRA, ATRA + anthracycline-based chemotherapy (CT), ATO alone and ATRA alone, based on fourteen randomized controlled trials (RCTs), which included 1407 newly diagnosed APL patients. According to the results, the ranking efficacy of the treatment, including early death and complete remission in the induction stage, was the following: 1. ATO/RIF + ATRA; 2. ATRA + CT; 3. ATO, and 4. ATRA. For long-term benefit, ATO/RIF + ATRA significantly improved overall survival (OS) (hazard ratio = 0.35, 95%CI 0.15–0.82, p = 0.02) and event-free survival (EFS) (hazard ratio = 0.32, 95%CI 0.16–0.61, p = 0.001) over ATRA + CT regimen for the low-to-intermediate-risk patients. Thus, ATO + ATRA and RIF + ATRA might be considered the optimum treatments for the newly diagnosed APL and should be recommended as the standard care for frontline therapy. PMID:27322078

  10. Bayesian network meta-analysis comparing five contemporary treatment strategies for newly diagnosed acute promyelocytic leukaemia.

    PubMed

    Wu, Fenfang; Wu, Di; Ren, Yong; Duan, Chongyang; Chen, Shangwu; Xu, Anlong

    2016-07-26

    Acute promyelocytic leukemia (APL) is a curable subtype of acute myeloid leukemia. The optimum regimen for newly diagnosed APL remains inconclusive. In this Bayesian network meta-analysis, we compared the effectiveness of five regimens-arsenic trioxide (ATO) + all-trans retinoic acid (ATRA), realgar-indigo naturalis formula (RIF) which contains arsenic tetrasulfide + ATRA, ATRA + anthracycline-based chemotherapy (CT), ATO alone and ATRA alone, based on fourteen randomized controlled trials (RCTs), which included 1407 newly diagnosed APL patients. According to the results, the ranking efficacy of the treatment, including early death and complete remission in the induction stage, was the following: 1. ATO/RIF + ATRA; 2. ATRA + CT; 3. ATO, and 4. ATRA. For long-term benefit, ATO/RIF + ATRA significantly improved overall survival (OS) (hazard ratio = 0.35, 95%CI 0.15-0.82, p = 0.02) and event-free survival (EFS) (hazard ratio = 0.32, 95%CI 0.16-0.61, p = 0.001) over ATRA + CT regimen for the low-to-intermediate-risk patients. Thus, ATO + ATRA and RIF + ATRA might be considered the optimum treatments for the newly diagnosed APL and should be recommended as the standard care for frontline therapy.

  11. Induction Chemotherapy plus Concurrent Chemoradiotherapy in Endemic Nasopharyngeal Carcinoma: Individual Patient Data Pooled Analysis of Four Randomized Trials.

    PubMed

    Chen, Yu-Pei; Tang, Ling-Long; Yang, Qi; Poh, Sharon-Shuxian; Hui, Edwin P; Chan, Anthony T C; Ong, Whee-Sze; Tan, Terence; Wee, Joseph; Li, Wen-Fei; Chen, Lei; Ma, Brigette B Y; Tong, Macy; Tan, Sze-Huey; Cheah, Shie-Lee; Fong, Kam-Weng; Sommat, Kiattisa; Soong, Yoke Lim; Guo, Ying; Lin, Ai-Hua; Sun, Ying; Hong, Ming-Huang; Cao, Su-Mei; Chen, Ming-Yuan; Ma, Jun

    2018-04-15

    Purpose: Because of the uneven geographic distribution and small number of randomized trials available, the value of additional induction chemotherapy (IC) to concurrent chemoradiotherapy (CCRT) in nasopharyngeal carcinoma (NPC) remains controversial. This study performed an individual patient data (IPD) pooled analysis to better assess the precise role of IC + CCRT in locoregionally advanced NPC. Experimental Design: Four randomized trials in endemic areas were identified, representing 1,193 patients; updated IPD were obtained. Progression-free survival (PFS) and overall survival (OS) were the primary and secondary endpoints, respectively. Results: Median follow-up was 5.0 years. The HR for PFS was 0.70 [95% confidence interval (CI), 0.56-0.86; P = 0.0009; 9.3% absolute benefit at 5 years] in favor of IC + CCRT versus CCRT alone. IC + CCRT also improved OS (HR = 0.75; 95% CI, 0.57-0.99; P = 0.04) and reduced distant failure (HR = 0.68; 95% CI, 0.51-0.90; P = 0.008). IC + CCRT had a tendency to improve locoregional control compared with CCRT alone (HR = 0.70; 95% CI, 0.48-1.01; P = 0.06). There was no heterogeneity between trials in any analysis. No interactions between patient characteristics and treatment effects on PFS or OS were found. After adding two supplementary trials to provide a more comprehensive overview, the conclusions remained valid and were strengthened. In a supplementary Bayesian network analysis, no statistically significant differences in survival between different IC regimens were detected. Conclusions: This IPD pooled analysis demonstrates the superiority of additional IC over CCRT alone in locoregionally advanced NPC, with the survival benefit mainly associated with improved distant control. Clin Cancer Res; 24(8); 1824-33. ©2018 AACR . ©2018 American Association for Cancer Research.

  12. Refining cost-effectiveness analyses using the net benefit approach and econometric methods: an example from a trial of anti-depressant treatment.

    PubMed

    Sabes-Figuera, Ramon; McCrone, Paul; Kendricks, Antony

    2013-04-01

    Economic evaluation analyses can be enhanced by employing regression methods, which allow the identification of important subgroups and adjustment for imperfect randomisation in clinical trials, or the analysis of non-randomised data. The aim was to explore the benefits of combining regression techniques with the standard Bayesian approach to refine cost-effectiveness analyses using data from randomised clinical trials. Data from a randomised trial of antidepressant treatment were analysed, and a regression model was used to explore the factors that affect the net benefit (NB) statistic, with the aim of using these findings to adjust the cost-effectiveness acceptability curves. Exploratory subsample analyses were carried out to explore possible differences in cost-effectiveness. The analysis found that having suffered a previous similar depression is strongly correlated with a lower NB, independent of the outcome measure or follow-up point. In patients with a previous similar depression, adding a selective serotonin reuptake inhibitor (SSRI) to supportive care for mild-to-moderate depression is probably cost-effective at the threshold used by the English National Institute for Health and Clinical Excellence to make recommendations. This analysis highlights the need to incorporate econometric methods into cost-effectiveness analyses that use the NB approach.
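
    A hypothetical sketch of net-benefit regression of the kind the record describes: each patient's net benefit is computed as willingness-to-pay times effect minus cost and regressed on treatment and a covariate; all data, variable names, and the threshold are invented.

```python
# Sketch: net-benefit regression for a trial, NB_i = lambda * effect_i - cost_i,
# regressed on treatment and a covariate (all values are hypothetical).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 300
treat = rng.integers(0, 2, n)
prev_depression = rng.integers(0, 2, n)                  # previous similar episode
effect = 0.60 + 0.05 * treat - 0.04 * prev_depression + rng.normal(0, 0.1, n)  # e.g. QALYs
cost = 400 + 150 * treat + 200 * prev_depression + rng.normal(0, 100, n)       # e.g. GBP

wtp = 20000                                              # willingness to pay per unit of effect
nb = wtp * effect - cost

X = sm.add_constant(np.column_stack([treat, prev_depression]))
fit = sm.OLS(nb, X).fit()
print(fit.params)        # coefficient on treat = adjusted incremental net benefit
print(fit.conf_int())    # a positive interval suggests cost-effectiveness at this threshold
```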

  13. A Bayesian model for time-to-event data with informative censoring

    PubMed Central

    Kaciroti, Niko A.; Raghunathan, Trivellore E.; Taylor, Jeremy M. G.; Julius, Stevo

    2012-01-01

    Randomized trials with dropouts or censored data and discrete time-to-event type outcomes are frequently analyzed using the Kaplan–Meier or product limit (PL) estimation method. However, the PL method assumes that the censoring mechanism is noninformative and when this assumption is violated, the inferences may not be valid. We propose an expanded PL method using a Bayesian framework to incorporate informative censoring mechanism and perform sensitivity analysis on estimates of the cumulative incidence curves. The expanded method uses a model, which can be viewed as a pattern mixture model, where odds for having an event during the follow-up interval (t_{k-1}, t_k], conditional on being at risk at t_{k-1}, differ across the patterns of missing data. The sensitivity parameters relate the odds of an event, between subjects from a missing-data pattern with the observed subjects for each interval. The large number of the sensitivity parameters is reduced by considering them as random and assumed to follow a log-normal distribution with prespecified mean and variance. Then we vary the mean and variance to explore sensitivity of inferences. The missing at random (MAR) mechanism is a special case of the expanded model, thus allowing exploration of the sensitivity to inferences as departures from the inferences under the MAR assumption. The proposed approach is applied to data from the TRial Of Preventing HYpertension. PMID:22223746

  14. A Model-Based Approach to Trial-By-Trial P300 Amplitude Fluctuations

    PubMed Central

    Kolossa, Antonio; Fingscheidt, Tim; Wessel, Karl; Kopp, Bruno

    2013-01-01

    It has long been recognized that the amplitude of the P300 component of event-related brain potentials is sensitive to the degree to which eliciting stimuli are surprising to the observers (Donchin, 1981). While Squires et al. (1976) showed and modeled dependencies of P300 amplitudes from observed stimuli on various time scales, Mars et al. (2008) proposed a computational model keeping track of stimulus probabilities on a long-term time scale. We suggest here a computational model which integrates prior information with short-term, long-term, and alternation-based experiential influences on P300 amplitude fluctuations. To evaluate the new model, we measured trial-by-trial P300 amplitude fluctuations in a simple two-choice response time task, and tested the computational models of trial-by-trial P300 amplitudes using Bayesian model evaluation. The results reveal that the new digital filtering (DIF) model provides a superior account of the trial-by-trial P300 amplitudes when compared to both Squires et al.’s (1976) model, and Mars et al.’s (2008) model. We show that the P300-generating system can be described as two parallel first-order infinite impulse response (IIR) low-pass filters and an additional fourth-order finite impulse response (FIR) high-pass filter. Implications of the acquired data are discussed with regard to the neurobiological distinction between short-term, long-term, and working memory as well as from the point of view of predictive coding models and Bayesian learning theories of cortical function. PMID:23404628
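
    A sketch of a single first-order IIR low-pass filter of the kind the digital filtering model builds on, applied to a binary stimulus sequence to produce a running expectation and a surprise signal; the smoothing constant and the surprise definition are illustrative, not taken from the paper.

```python
# Sketch: a first-order IIR low-pass filter applied to a binary stimulus sequence,
# giving a leaky running estimate of stimulus probability and a surprise signal.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(7)
stimuli = rng.integers(0, 2, 30).astype(float)     # 0/1 stimulus sequence (hypothetical)

alpha = 0.3                                        # smoothing constant (illustrative)
b, a = [alpha], [1.0, -(1.0 - alpha)]              # y[n] = (1 - alpha) * y[n-1] + alpha * x[n]
estimate = lfilter(b, a, stimuli)                  # leaky running estimate of P(stimulus = 1)

prediction = np.concatenate([[0.5], estimate[:-1]])    # expectation held before each stimulus
surprise = -np.log(np.clip(np.where(stimuli == 1, prediction, 1 - prediction), 1e-3, None))
print(np.round(surprise, 2))                       # larger for unexpected stimuli, cf. P300 amplitude
```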

  15. New-generation stents compared with coronary bypass surgery for unprotected left main disease: A word of caution.

    PubMed

    Benedetto, Umberto; Taggart, David P; Sousa-Uva, Miguel; Biondi-Zoccai, Giuseppe; Di Franco, Antonino; Ohmes, Lucas B; Rahouma, Mohamed; Kamel, Mohamed; Caputo, Massimo; Girardi, Leonard N; Angelini, Gianni D; Gaudino, Mario

    2018-05-01

    With the advent of bare metal stents and drug-eluting stents, percutaneous coronary intervention has emerged as an alternative to coronary artery bypass grafting surgery for unprotected left main disease. However, whether the evolution of stents technology has translated into better results after percutaneous coronary intervention remains unclear. We aimed to compare coronary artery bypass grafting with stents of different generations for left main disease by performing a Bayesian network meta-analysis of available randomized controlled trials. All randomized controlled trials with at least 1 arm randomized to percutaneous coronary intervention with stents or coronary artery bypass grafting for left main disease were included. Bare metal stents and drug-eluting stents of first- and second-generation were compared with coronary artery bypass grafting. Poisson methods and Bayesian framework were used to compute the head-to-head incidence rate ratio and 95% credible intervals. Primary end points were the composite of death/myocardial infarction/stroke and repeat revascularization. Nine randomized controlled trials were included in the final analysis. Six trials compared percutaneous coronary intervention with coronary artery bypass grafting (n = 4654), and 3 trials compared different types of stents (n = 1360). Follow-up ranged from 6 months to 5 years. Second-generation drug-eluting stents (incidence rate ratio, 1.3; 95% credible interval, 1.1-1.6), but not bare metal stents (incidence rate ratio, 0.63; 95% credible interval, 0.27-1.4), and first-generation drug-eluting stents (incidence rate ratio, 0.85; 95% credible interval, 0.65-1.1) were associated with a significantly increased risk of death/myocardial infarction/stroke when compared with coronary artery bypass grafting. When compared with coronary artery bypass grafting, the highest risk of repeat revascularization was observed for bare metal stents (hazard ratio, 5.1; 95% confidence interval, 2.1-14), whereas first-generation drug-eluting stents (incidence rate ratio, 1.8; 95% confidence interval, 1.4-2.4) and second-generation drug-eluting stents (incidence rate ratio, 1.8; 95% confidence interval, 1.4-2.4) were comparable. The introduction of new-generation drug-eluting stents did not translate into better outcomes for percutaneous coronary intervention when compared with coronary artery bypass grafting. Copyright © 2017 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.

  16. Treatment strategies for coronary in-stent restenosis: systematic review and hierarchical Bayesian network meta-analysis of 24 randomised trials and 4880 patients

    PubMed Central

    Giacoppo, Daniele; Gargiulo, Giuseppe; Aruta, Patrizia; Capranzano, Piera; Tamburino, Corrado

    2015-01-01

    Study question What is the most safe and effective interventional treatment for coronary in-stent restenosis? Methods In a hierarchical Bayesian network meta-analysis, PubMed, Embase, Scopus, Cochrane Library, Web of Science, ScienceDirect, and major scientific websites were screened up to 10 August 2015. Randomised controlled trials of patients with any type of coronary in-stent restenosis (either of bare metal stents or drug eluting stents; and either first or recurrent instances) were included. Trials including multiple treatments at the same time in the same group or comparing variants of the same intervention were excluded. Primary endpoints were target lesion revascularisation and late lumen loss, both at six to 12 months. The main analysis was complemented by network subanalyses, standard pairwise comparisons, and subgroup and sensitivity analyses. Study answer and limitations Twenty four trials (4880 patients), including seven interventional treatments, were identified. Compared with plain balloons, bare metal stents, brachytherapy, rotational atherectomy, and cutting balloons, drug coated balloons and drug eluting stents were associated with a reduced risk of target lesion revascularisation and major adverse cardiac events, and with reduced late lumen loss. Treatment ranking indicated that drug eluting stents had the highest probability (61.4%) of being the most effective for target lesion vascularisation; drug coated balloons were similarly indicated as the most effective treatment for late lumen loss (probability 70.3%). The comparative efficacy of drug coated balloons and drug eluting stents was similar for target lesion revascularisation (summary odds ratio 1.10, 95% credible interval 0.59 to 2.01) and late lumen loss reduction (mean difference in minimum lumen diameter 0.04 mm, 95% credible interval −0.20 to 0.10). Risks of death, myocardial infarction, and stent thrombosis were comparable across all treatments, but these analyses were limited by a low number of events. Trials had heterogeneity regarding investigation periods, baseline characteristics, and endpoint reporting, with a lack of information at long term follow-up. Direct and indirect evidence was also inconsistent for the comparison between drug eluting stents and drug coated balloons. What this study adds Compared with other currently available interventional treatments for coronary in-stent restenosis, drug coated balloons and drug eluting stents are associated with superior clinical and angiographic outcomes, with a similar comparative efficacy. Funding, competing interests, data sharing This study received no external funding. The authors declare no competing interests. No additional data available. PMID:26537292

  17. Moving beyond qualitative evaluations of Bayesian models of cognition.

    PubMed

    Hemmer, Pernille; Tauber, Sean; Steyvers, Mark

    2015-06-01

    Bayesian models of cognition provide a powerful way to understand the behavior and goals of individuals from a computational point of view. Much of the focus in the Bayesian cognitive modeling approach has been on qualitative model evaluations, where predictions from the models are compared to data that is often averaged over individuals. In many cognitive tasks, however, there are pervasive individual differences. We introduce an approach to directly infer individual differences related to subjective mental representations within the framework of Bayesian models of cognition. In this approach, Bayesian data analysis methods are used to estimate cognitive parameters and motivate the inference process within a Bayesian cognitive model. We illustrate this integrative Bayesian approach on a model of memory. We apply the model to behavioral data from a memory experiment involving the recall of heights of people. A cross-validation analysis shows that the Bayesian memory model with inferred subjective priors predicts withheld data better than a Bayesian model where the priors are based on environmental statistics. In addition, the model with inferred priors at the individual subject level led to the best overall generalization performance, suggesting that individual differences are important to consider in Bayesian models of cognition.

  18. A Preliminary Bayesian Analysis of Incomplete Longitudinal Data from a Small Sample: Methodological Advances in an International Comparative Study of Educational Inequality

    ERIC Educational Resources Information Center

    Hsieh, Chueh-An; Maier, Kimberly S.

    2009-01-01

    The capacity of Bayesian methods in estimating complex statistical models is undeniable. Bayesian data analysis is seen as having a range of advantages, such as an intuitive probabilistic interpretation of the parameters of interest, the efficient incorporation of prior information to empirical data analysis, model averaging and model selection.…

  19. A two-stage patient enrichment adaptive design in phase II oncology trials.

    PubMed

    Song, James X

    2014-01-01

    Illustrated is the use of a patient enrichment adaptive design in a randomized phase II trial which allows the evaluation of treatment benefits by the biomarker expression level and makes interim adjustment according to the pre-specified rules. The design was applied to an actual phase II metastatic hepatocellular carcinoma (HCC) trial in which progression-free survival (PFS) in two biomarker-defined populations is evaluated at both interim and final analyses. As an extension, a short-term biomarker is used to predict the long-term PFS in a Bayesian model in order to improve the precision of hazard ratio (HR) estimate at the interim analysis. The characteristics of the extended design are examined in a number of scenarios via simulations. The recommended adaptive design is shown to be useful in a phase II setting. When a short-term maker which correlates with the long-term PFS is available, the design can be applied in smaller early phase trials in which PFS requires longer follow-up. In summary, the adaptive design offers flexibility in randomized phase II patient enrichment trials and should be considered in an overall personalized healthcare (PHC) strategy. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. On Models for Binomial Data with Random Numbers of Trials

    PubMed Central

    Comulada, W. Scott; Weiss, Robert E.

    2010-01-01

    Summary A binomial outcome is a count s of the number of successes out of the total number of independent trials n = s + f, where f is a count of the failures. The n are random variables not fixed by design in many studies. Joint modeling of (s, f) can provide additional insight into the science and into the probability π of success that cannot be directly incorporated by the logistic regression model. Observations where n = 0 are excluded from the binomial analysis yet may be important to understanding how π is influenced by covariates. Correlation between s and f may exist and be of direct interest. We propose Bayesian multivariate Poisson models for the bivariate response (s, f), correlated through random effects. We extend our models to the analysis of longitudinal and multivariate longitudinal binomial outcomes. Our methodology was motivated by two disparate examples, one from teratology and one from an HIV tertiary intervention study. PMID:17688514

  1. Using indirect comparisons to compare interventions within a Cochrane review: a tool for comparative effectiveness research.

    PubMed

    Agapova, Maria; Devine, Emily B; Nguyen, Hiep; Wolf, Fredric M; Inoue, Lurdes Y T

    2014-07-01

    Assessing relative performance among competing interventions is an important part of comparative effectiveness research. Bayesian indirect comparisons add information to existing Cochrane reviews, such as which intervention is likely to perform best. However, heterogeneity variance priors may influence results and, potentially, clinical guidance. We highlight the features of Bayesian indirect comparisons using a case study of a Cochrane review update in asthma care. The probability that one self-management educational intervention outperforms others is estimated. Simulation studies investigate the effect of heterogeneity variance prior distributions. Results suggest a 55% probability that individual education is best, followed by combination (39%) and group (6%). The intervention with few trials was sensitive to prior distributions. Bayesian indirect comparison updates of Cochrane reviews are valuable comparative effectiveness research tools.
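
    For intuition, the core of an adjusted indirect comparison (the Bucher method, a simpler non-Bayesian cousin of the approach described above) contrasts two interventions through their common comparator; the numbers below are invented:

      import numpy as np

      # log-odds-ratio estimates versus a common comparator (usual care); hypothetical values
      d_AC, se_AC = -0.40, 0.15      # individual education vs. usual care
      d_BC, se_BC = -0.10, 0.20      # group education vs. usual care

      d_AB = d_AC - d_BC                          # indirect estimate of A vs. B
      se_AB = np.sqrt(se_AC**2 + se_BC**2)        # variances add for independent trials
      lo, hi = d_AB - 1.96 * se_AB, d_AB + 1.96 * se_AB
      print(f"indirect logOR A vs B = {d_AB:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")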

  2. Real medical benefit assessed by indirect comparison.

    PubMed

    Falissard, Bruno; Zylberman, Myriam; Cucherat, Michel; Izard, Valérie; Meyer, François

    2009-01-01

    Frequently, in data packages submitted for Marketing Approval to the CHMP, there is a lack of relevant head-to-head comparisons of medicinal products that could enable national authorities responsible for the approval of reimbursement to assess the Added Therapeutic Value (ASMR) of new clinical entities or line extensions of existing therapies. Indirect or mixed treatment comparisons (MTC) are methods stemming from the field of meta-analysis that have been designed to tackle this problem. Adjusted indirect comparisons, meta-regressions, mixed models, and Bayesian network analyses pool results of randomised controlled trials (RCTs), enabling a quantitative synthesis. The REAL procedure, recently developed by the HAS (French National Authority for Health), is a mixture of an MTC and an effect model based on expert opinions. It is intended to translate the efficacy observed in the trials into the effectiveness expected in day-to-day clinical practice in France.

  3. Nicotinamide and skin cancer chemoprevention: The jury is still out.

    PubMed

    Gilmore, Stephen J

    2018-02-01

    Following the publication of the results of a Phase III trial, the administration of oral nicotinamide has been widely advocated as effective in non-melanoma skin cancer chemoprevention in high-risk individuals. However, I performed a Bayesian analysis of the reported findings and show there is insufficient evidence to demonstrate its efficacy, highlighting the significant probability that the positive conclusions drawn will not be reproducible. Given the potential widespread use of oral nicotinamide, future position statements regarding its efficacy are likely to require higher standards of evidence. © 2017 The Australasian College of Dermatologists.

  4. Informative Bayesian Type A uncertainty evaluation, especially applicable to a small number of observations

    NASA Astrophysics Data System (ADS)

    Cox, M.; Shirono, K.

    2017-10-01

    A criticism levelled at the Guide to the Expression of Uncertainty in Measurement (GUM) is that it is based on a mixture of frequentist and Bayesian thinking. In particular, the GUM’s Type A (statistical) uncertainty evaluations are frequentist, whereas the Type B evaluations, using state-of-knowledge distributions, are Bayesian. In contrast, making the GUM fully Bayesian implies, among other things, that a conventional objective Bayesian approach to Type A uncertainty evaluation for a number n of observations leads to the impractical consequence that n must be at least equal to 4, thus presenting a difficulty for many metrologists. This paper presents a Bayesian analysis of Type A uncertainty evaluation that applies for all n ≥ 2, as in the frequentist analysis in the current GUM. The analysis is based on assuming that the observations are drawn from a normal distribution (as in the conventional objective Bayesian analysis), but uses an informative prior based on lower and upper bounds for the standard deviation of the sampling distribution for the quantity under consideration. The main outcome of the analysis is a closed-form mathematical expression for the factor by which the standard deviation of the mean observation should be multiplied to calculate the required standard uncertainty. Metrological examples are used to illustrate the approach, which is straightforward to apply using a formula or look-up table.
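
    The n ≥ 2 versus n ≥ 4 distinction above comes from a standard result for the conventional objective (Jeffreys-prior) analysis; a brief sketch of that result, shown here for context rather than as the paper's informative-prior formula, is:

      With the prior p(\mu, \sigma) \propto 1/\sigma and observations x_1, \dots, x_n drawn from N(\mu, \sigma^2),
      \[
        \frac{\mu - \bar{x}}{s/\sqrt{n}} \,\Big|\, \text{data} \;\sim\; t_{n-1},
        \qquad
        u(\bar{x}) \;=\; \operatorname{SD}(\mu \mid \text{data}) \;=\; \frac{s}{\sqrt{n}} \sqrt{\frac{n-1}{n-3}},
      \]
      which is finite only when n \geq 4; the informative prior on \sigma proposed in the paper is what relaxes this requirement to n \geq 2.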

  5. Double versus single stenting for coronary bifurcation lesions: a meta-analysis.

    PubMed

    Katritsis, Demosthenes G; Siontis, George C M; Ioannidis, John P A

    2009-10-01

    Several trials have addressed whether bifurcation lesions require stenting of both the main vessel and side branch, but uncertainty remains on the benefits of such double versus single stenting of the main vessel only. We have conducted a meta-analysis of randomized trials including patients with coronary bifurcation lesions who were randomly selected to undergo percutaneous coronary intervention by either double or single stenting. Six studies (n=1642 patients) were eligible. There was increased risk of myocardial infarction with double stenting (risk ratio, 1.78; P=0.001 by fixed effects; risk ratio, 1.49 with Bayesian meta-analysis). The summary point estimate suggested also an increased risk of stent thrombosis with double stenting, but the difference was not nominally significant given the sparse data (risk ratio, 1.85; P=0.19). No obvious difference was seen for death (risk ratio, 0.81; P=0.66) and target lesion revascularization (risk ratio, 1.09; P=0.67). Stenting of both the main vessel and side branch in bifurcation lesions may increase myocardial infarction and stent thrombosis risk compared with stenting of the main vessel only.

  6. Semiparametric Bayesian commensurate survival model for post-market medical device surveillance with non-exchangeable historical data.

    PubMed

    Murray, Thomas A; Hobbs, Brian P; Lystig, Theodore C; Carlin, Bradley P

    2014-03-01

    Trial investigators often have a primary interest in the estimation of the survival curve in a population for which there exists acceptable historical information from which to borrow strength. However, borrowing strength from a historical trial that is non-exchangeable with the current trial can result in biased conclusions. In this article we propose a fully Bayesian semiparametric method for the purpose of attenuating bias and increasing efficiency when jointly modeling time-to-event data from two possibly non-exchangeable sources of information. We illustrate the mechanics of our methods by applying them to a pair of post-market surveillance datasets regarding adverse events in persons on dialysis that had either a bare metal or drug-eluting stent implanted during a cardiac revascularization surgery. We finish with a discussion of the advantages and limitations of this approach to evidence synthesis, as well as directions for future work in this area. The article's Supplementary Materials offer simulations to show our procedure's bias, mean squared error, and coverage probability properties in a variety of settings. © 2013, The International Biometric Society.

  7. Mechanisms of motivational interviewing in health promotion: a Bayesian mediation analysis

    PubMed Central

    2012-01-01

    Background Counselor behaviors that mediate the efficacy of motivational interviewing (MI) are not well understood, especially when applied to health behavior promotion. We hypothesized that client change talk mediates the relationship between counselor variables and subsequent client behavior change. Methods Purposeful sampling identified individuals from a prospective randomized worksite trial using an MI intervention to promote firefighters’ healthy diet and regular exercise that increased dietary intake of fruits and vegetables (n = 21) or did not increase intake of fruits and vegetables (n = 22). MI interactions were coded using the Motivational Interviewing Skill Code (MISC 2.1) to categorize counselor and firefighter verbal utterances. Both Bayesian and frequentist mediation analyses were used to investigate whether client change talk mediated the relationship between counselor skills and behavior change. Results Counselors’ global spirit, empathy, and direction and MI-consistent behavioral counts (e.g., reflections, open questions, affirmations, emphasize control) significantly correlated with firefighters’ total client change talk utterances (rs = 0.42, 0.40, 0.30, and 0.61, respectively), which correlated significantly with their fruit and vegetable intake increase (r = 0.33). Both Bayesian and frequentist mediation analyses demonstrated that findings were consistent with hypotheses, such that total client change talk mediated the relationship between counselor’s skills—MI-consistent behaviors [Bayesian mediated effect: αβ = .06 (.03), 95% CI = .02, .12] and MI spirit [Bayesian mediated effect: αβ = .06 (.03), 95% CI = .01, .13]—and increased fruit and vegetable consumption. Conclusion Motivational interviewing is a resource- and time-intensive intervention, and is currently being applied in many arenas. Previous research has identified the importance of counselor behaviors and client change talk in the treatment of substance use disorders. Our results indicate that similar mechanisms may underlie the effects of MI for dietary change. These results inform MI training and application by identifying those processes critical for MI success in health promotion domains. PMID:22681874
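
    A minimal sketch of how the Bayesian mediated effect αβ and its credible interval can be summarized from posterior draws; the draws are simulated here purely for illustration and do not reproduce the trial's model or numbers:

      import numpy as np

      rng = np.random.default_rng(0)
      # stand-ins for MCMC draws of the a-path (counselor skill -> change talk)
      # and b-path (change talk -> fruit and vegetable intake) coefficients
      alpha_draws = rng.normal(0.30, 0.10, size=4000)
      beta_draws = rng.normal(0.20, 0.08, size=4000)

      ab = alpha_draws * beta_draws                    # posterior of the mediated effect
      lo, hi = np.percentile(ab, [2.5, 97.5])
      print(f"mediated effect = {ab.mean():.3f}, 95% CrI ({lo:.3f}, {hi:.3f})")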

  8. Implementation of a Bayesian design in a dose-escalation study of an experimental agent in healthy volunteers.

    PubMed

    Zhou, Yinghui; Whitehead, John; Korhonen, Pasi; Mustonen, Mika

    2008-03-01

    Bayesian decision procedures have recently been developed for dose escalation in phase I clinical trials concerning pharmacokinetic responses observed in healthy volunteers. This article describes how that general methodology was extended and evaluated for implementation in a specific phase I trial of a novel compound. At the time of writing, the study is ongoing, and it will be some time before the sponsor will wish to put the results into the public domain. This article is an account of how the study was designed in a way that should prove to be safe, accurate, and efficient whatever the true nature of the compound. The study involves the observation of two pharmacokinetic endpoints relating to the plasma concentration of the compound itself and of a metabolite as well as a safety endpoint relating to the occurrence of adverse events. Construction of the design and its evaluation via simulation are presented.

  9. An Exploratory Study Examining the Feasibility of Using Bayesian Networks to Predict Circuit Analysis Understanding

    ERIC Educational Resources Information Center

    Chung, Gregory K. W. K.; Dionne, Gary B.; Kaiser, William J.

    2006-01-01

    Our research question was whether we could develop a feasible technique, using Bayesian networks, to diagnose gaps in student knowledge. Thirty-four college-age participants completed tasks designed to measure conceptual knowledge, procedural knowledge, and problem-solving skills related to circuit analysis. A Bayesian network was used to model…

  10. A comprehensive probabilistic analysis model of oil pipelines network based on Bayesian network

    NASA Astrophysics Data System (ADS)

    Zhang, C.; Qin, T. X.; Jiang, B.; Huang, C.

    2018-02-01

    Oil pipelines network is one of the most important facilities of energy transportation. But oil pipelines network accident may result in serious disasters. Some analysis models for these accidents have been established mainly based on three methods, including event-tree, accident simulation and Bayesian network. Among these methods, Bayesian network is suitable for probabilistic analysis. But not all the important influencing factors are considered and the deployment rule of the factors has not been established. This paper proposed a probabilistic analysis model of oil pipelines network based on Bayesian network. Most of the important influencing factors, including the key environment condition and emergency response are considered in this model. Moreover, the paper also introduces a deployment rule for these factors. The model can be used in probabilistic analysis and sensitive analysis of oil pipelines network accident.
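
    A toy sketch of the kind of discrete Bayesian network such a model is built on, written with the pgmpy library; the node names, structure and conditional probabilities below are invented purely for illustration:

      from pgmpy.models import BayesianNetwork
      from pgmpy.factors.discrete import TabularCPD
      from pgmpy.inference import VariableElimination

      # Corrosion and third-party damage influence Leak; Leak influences Fire (hypothetical structure)
      model = BayesianNetwork([("Corrosion", "Leak"), ("ThirdParty", "Leak"), ("Leak", "Fire")])
      model.add_cpds(
          TabularCPD("Corrosion", 2, [[0.9], [0.1]]),
          TabularCPD("ThirdParty", 2, [[0.95], [0.05]]),
          TabularCPD("Leak", 2,
                     [[0.99, 0.7, 0.8, 0.4],      # P(Leak = no | Corrosion, ThirdParty)
                      [0.01, 0.3, 0.2, 0.6]],
                     evidence=["Corrosion", "ThirdParty"], evidence_card=[2, 2]),
          TabularCPD("Fire", 2, [[0.999, 0.8], [0.001, 0.2]],
                     evidence=["Leak"], evidence_card=[2]),
      )
      infer = VariableElimination(model)
      print(infer.query(["Fire"], evidence={"Corrosion": 1}))   # sensitivity to one influencing factor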

  11. Using SPM 12’s Second-Level Bayesian Inference Procedure for fMRI Analysis: Practical Guidelines for End Users

    PubMed Central

    Han, Hyemin; Park, Joonsuk

    2018-01-01

    Recent debates about the conventional threshold used in the fields of neuroscience and psychology, namely P < 0.05, have spurred researchers to consider alternative ways to analyze fMRI data. A group of methodologists and statisticians have considered Bayesian inference as a candidate methodology. However, few previous studies have attempted to provide end users of fMRI analysis tools, such as SPM 12, with practical guidelines about how to conduct Bayesian inference. In the present study, we aim to demonstrate how to utilize Bayesian inference, Bayesian second-level inference in particular, implemented in SPM 12 by analyzing fMRI data available to the public via NeuroVault. In addition, to help end users understand how Bayesian inference actually works in SPM 12, we examine outcomes from Bayesian second-level inference implemented in SPM 12 by comparing them with those from classical second-level inference. Finally, we provide practical guidelines about how to set the parameters for Bayesian inference and how to interpret the results, such as Bayes factors, from the inference. We also discuss the practical and philosophical benefits of Bayesian inference and directions for future research. PMID:29456498

  12. An introduction to Bayesian statistics in health psychology.

    PubMed

    Depaoli, Sarah; Rus, Holly M; Clifton, James P; van de Schoot, Rens; Tiemensma, Jitske

    2017-09-01

    The aim of the current article is to provide a brief introduction to Bayesian statistics within the field of health psychology. Bayesian methods are increasing in prevalence in applied fields, and they have been shown in simulation research to improve the estimation accuracy of structural equation models, latent growth curve (and mixture) models, and hierarchical linear models. Likewise, Bayesian methods can be used with small sample sizes since they do not rely on large sample theory. In this article, we discuss several important components of Bayesian statistics as they relate to health-based inquiries. We discuss the incorporation and impact of prior knowledge into the estimation process and the different components of the analysis that should be reported in an article. We present an example implementing Bayesian estimation in the context of blood pressure changes after participants experienced an acute stressor. We conclude with final thoughts on the implementation of Bayesian statistics in health psychology, including suggestions for reviewing Bayesian manuscripts and grant proposals. We have also included an extensive amount of online supplementary material to complement the content presented here, including Bayesian examples using many different software programmes and an extensive sensitivity analysis examining the impact of priors.

  13. Revealing Neurocomputational Mechanisms of Reinforcement Learning and Decision-Making With the hBayesDM Package

    PubMed Central

    Ahn, Woo-Young; Haines, Nathaniel; Zhang, Lei

    2017-01-01

    Reinforcement learning and decision-making (RLDM) provide a quantitative framework and computational theories with which we can disentangle psychiatric conditions into the basic dimensions of neurocognitive functioning. RLDM offer a novel approach to assessing and potentially diagnosing psychiatric patients, and there is growing enthusiasm for both RLDM and computational psychiatry among clinical researchers. Such a framework can also provide insights into the brain substrates of particular RLDM processes, as exemplified by model-based analysis of data from functional magnetic resonance imaging (fMRI) or electroencephalography (EEG). However, researchers often find the approach too technical and have difficulty adopting it for their research. Thus, a critical need remains to develop a user-friendly tool for the wide dissemination of computational psychiatric methods. We introduce an R package called hBayesDM (hierarchical Bayesian modeling of Decision-Making tasks), which offers computational modeling of an array of RLDM tasks and social exchange games. The hBayesDM package offers state-of-the-art hierarchical Bayesian modeling, in which both individual and group parameters (i.e., posterior distributions) are estimated simultaneously in a mutually constraining fashion. At the same time, the package is extremely user-friendly: users can perform computational modeling, output visualization, and Bayesian model comparisons, each with a single line of coding. Users can also extract the trial-by-trial latent variables (e.g., prediction errors) required for model-based fMRI/EEG. With the hBayesDM package, we anticipate that anyone with minimal knowledge of programming can take advantage of cutting-edge computational-modeling approaches to investigate the underlying processes of and interactions between multiple decision-making (e.g., goal-directed, habitual, and Pavlovian) systems. In this way, we expect that the hBayesDM package will contribute to the dissemination of advanced modeling approaches and enable a wide range of researchers to easily perform computational psychiatric research within different populations. PMID:29601060

  14. Uncertainty aggregation and reduction in structure-material performance prediction

    NASA Astrophysics Data System (ADS)

    Hu, Zhen; Mahadevan, Sankaran; Ao, Dan

    2018-02-01

    An uncertainty aggregation and reduction framework is presented for structure-material performance prediction. Different types of uncertainty sources, structural analysis model, and material performance prediction model are connected through a Bayesian network for systematic uncertainty aggregation analysis. To reduce the uncertainty in the computational structure-material performance prediction model, Bayesian updating using experimental observation data is investigated based on the Bayesian network. It is observed that the Bayesian updating results will have large error if the model cannot accurately represent the actual physics, and that this error will be propagated to the predicted performance distribution. To address this issue, this paper proposes a novel uncertainty reduction method by integrating Bayesian calibration with model validation adaptively. The observation domain of the quantity of interest is first discretized into multiple segments. An adaptive algorithm is then developed to perform model validation and Bayesian updating over these observation segments sequentially. Only information from observation segments where the model prediction is highly reliable is used for Bayesian updating; this is found to increase the effectiveness and efficiency of uncertainty reduction. A composite rotorcraft hub component fatigue life prediction model, which combines a finite element structural analysis model and a material damage model, is used to demonstrate the proposed method.

  15. A Two-Step Bayesian Approach for Propensity Score Analysis: Simulations and Case Study

    ERIC Educational Resources Information Center

    Kaplan, David; Chen, Jianshen

    2012-01-01

    A two-step Bayesian propensity score approach is introduced that incorporates prior information in the propensity score equation and outcome equation without the problems associated with simultaneous Bayesian propensity score approaches. The corresponding variance estimators are also provided. The two-step Bayesian propensity score is provided for…

  16. The Dopaminergic Midbrain Encodes the Expected Certainty about Desired Outcomes.

    PubMed

    Schwartenbeck, Philipp; FitzGerald, Thomas H B; Mathys, Christoph; Dolan, Ray; Friston, Karl

    2015-10-01

    Dopamine plays a key role in learning; however, its exact function in decision making and choice remains unclear. Recently, we proposed a generic model based on active (Bayesian) inference wherein dopamine encodes the precision of beliefs about optimal policies. Put simply, dopamine discharges reflect the confidence that a chosen policy will lead to desired outcomes. We designed a novel task to test this hypothesis, where subjects played a "limited offer" game in a functional magnetic resonance imaging experiment. Subjects had to decide how long to wait for a high offer before accepting a low offer, with the risk of losing everything if they waited too long. Bayesian model comparison showed that behavior strongly supported active inference, based on surprise minimization, over classical utility maximization schemes. Furthermore, midbrain activity, encompassing dopamine projection neurons, was accurately predicted by trial-by-trial variations in model-based estimates of precision. Our findings demonstrate that human subjects infer both optimal policies and the precision of those inferences, and thus support the notion that humans perform hierarchical probabilistic Bayesian inference. In other words, subjects have to infer both what they should do as well as how confident they are in their choices, where confidence may be encoded by dopaminergic firing. © The Author 2014. Published by Oxford University Press.

  17. The Dopaminergic Midbrain Encodes the Expected Certainty about Desired Outcomes

    PubMed Central

    Schwartenbeck, Philipp; FitzGerald, Thomas H. B.; Mathys, Christoph; Dolan, Ray; Friston, Karl

    2015-01-01

    Dopamine plays a key role in learning; however, its exact function in decision making and choice remains unclear. Recently, we proposed a generic model based on active (Bayesian) inference wherein dopamine encodes the precision of beliefs about optimal policies. Put simply, dopamine discharges reflect the confidence that a chosen policy will lead to desired outcomes. We designed a novel task to test this hypothesis, where subjects played a “limited offer” game in a functional magnetic resonance imaging experiment. Subjects had to decide how long to wait for a high offer before accepting a low offer, with the risk of losing everything if they waited too long. Bayesian model comparison showed that behavior strongly supported active inference, based on surprise minimization, over classical utility maximization schemes. Furthermore, midbrain activity, encompassing dopamine projection neurons, was accurately predicted by trial-by-trial variations in model-based estimates of precision. Our findings demonstrate that human subjects infer both optimal policies and the precision of those inferences, and thus support the notion that humans perform hierarchical probabilistic Bayesian inference. In other words, subjects have to infer both what they should do as well as how confident they are in their choices, where confidence may be encoded by dopaminergic firing. PMID:25056572

  18. Comparative efficacy and safety of tocilizumab, rituximab, abatacept and tofacitinib in patients with active rheumatoid arthritis that inadequately responds to tumor necrosis factor inhibitors: a Bayesian network meta-analysis of randomized controlled trials.

    PubMed

    Lee, Young Ho; Bae, Sang-Cheol

    2016-11-01

    This study aimed to assess the relative efficacy and safety of biologics and tofacitinib in patients with rheumatoid arthritis (RA) showing an inadequate response to tumor necrosis factor (TNF) inhibitors. We performed a Bayesian network meta-analysis to combine the direct and indirect evidence from randomized controlled trials (RCTs) examining the efficacy and safety of tocilizumab, rituximab, abatacept and tofacitinib in patients with RA that inadequately responds to TNF inhibitors. Four RCTs including 1796 patients met the inclusion criteria. The tocilizumab 8 mg group showed a significantly higher American College of Rheumatology 20% (ACR20) response rate than the abatacept and tofacitinib groups. Ranking probability based on surface under the cumulative ranking curve (SUCRA) indicated that tocilizumab 8 mg had the highest probability of being the best treatment for achieving the ACR20 response rate (SUCRA = 0.9863), followed by rituximab (SUCRA = 0.6623), abatacept (SUCRA = 0.5428), tocilizumab 4 mg (SUCRA = 0.4956), tofacitinib 10 mg (SUCRA = 0.4715), tofacitinib 5 mg (SUCRA = 0.3415) and placebo (SUCRA = 0). In contrast, the safety based on the number of withdrawals due to adverse events did not differ significantly among the treatment options. Tocilizumab 8 mg was the second-line non-TNF biologic with the highest performance regarding an early good response based on ACR20 response rate and acceptable safety profile, followed by rituximab, abatacept and tofacitinib in patients with RA and an inadequate response to anti-TNF therapy, and none of these treatments were associated with a significant risk of withdrawal due to adverse events. © 2015 Asia Pacific League of Associations for Rheumatology and Wiley Publishing Asia Pty Ltd.
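
    For reference, SUCRA is a simple transform of the posterior rank probabilities; a small sketch with made-up rank probabilities (each row sums to 1 across ranks) rather than the values reported above:

      import numpy as np

      # P(treatment has rank r); rows = treatments, columns = rank 1 (best) .. rank K (worst)
      rank_probs = np.array([
          [0.70, 0.20, 0.10],     # hypothetical treatment A
          [0.25, 0.50, 0.25],     # hypothetical treatment B
          [0.05, 0.30, 0.65],     # hypothetical treatment C
      ])
      K = rank_probs.shape[1]
      cumulative = np.cumsum(rank_probs, axis=1)[:, :-1]   # P(rank <= r) for r = 1 .. K-1
      sucra = cumulative.sum(axis=1) / (K - 1)
      print(sucra)   # 1 = certainly the best treatment, 0 = certainly the worst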

  19. BAYESIAN META-ANALYSIS ON MEDICAL DEVICES: APPLICATION TO IMPLANTABLE CARDIOVERTER DEFIBRILLATORS

    PubMed Central

    Youn, Ji-Hee; Lord, Joanne; Hemming, Karla; Girling, Alan; Buxton, Martin

    2012-01-01

    Objectives: The aim of this study is to describe and illustrate a method to obtain early estimates of the effectiveness of a new version of a medical device. Methods: In the absence of empirical data, expert opinion may be elicited on the expected difference between the conventional and modified devices. Bayesian Mixed Treatment Comparison (MTC) meta-analysis can then be used to combine this expert opinion with existing trial data on earlier versions of the device. We illustrate this approach for a new four-pole implantable cardioverter defibrillator (ICD) compared with conventional ICDs, Class III anti-arrhythmic drugs, and conventional drug therapy for the prevention of sudden cardiac death in high risk patients. Existing RCTs were identified from a published systematic review, and we elicited opinion on the difference between four-pole and conventional ICDs from experts recruited at a cardiology conference. Results: Twelve randomized controlled trials were identified. Seven experts provided valid probability distributions for the new ICDs compared with current devices. The MTC model resulted in estimated relative risks of mortality of 0.74 (0.60–0.89) (predictive relative risk [RR] = 0.77 [0.41–1.26]) and 0.83 (0.70–0.97) (predictive RR = 0.84 [0.55–1.22]) with the new ICD therapy compared to Class III anti-arrhythmic drug therapy and conventional drug therapy, respectively. These results showed negligible differences from the preliminary results for the existing ICDs. Conclusions: The proposed method incorporating expert opinion to adjust for a modification made to an existing device may play a useful role in assisting decision makers to make early informed judgments on the effectiveness of frequently modified healthcare technologies. PMID:22559753

  20. Computer-aided diagnosis of lung nodule using gradient tree boosting and Bayesian optimization.

    PubMed

    Nishio, Mizuho; Nishizawa, Mitsuo; Sugiyama, Osamu; Kojima, Ryosuke; Yakami, Masahiro; Kuroda, Tomohiro; Togashi, Kaori

    2018-01-01

    We aimed to evaluate a computer-aided diagnosis (CADx) system for lung nodule classification focussing on (i) usefulness of the conventional CADx system (hand-crafted imaging feature + machine learning algorithm), (ii) comparison between support vector machine (SVM) and gradient tree boosting (XGBoost) as machine learning algorithms, and (iii) effectiveness of parameter optimization using Bayesian optimization and random search. Data on 99 lung nodules (62 lung cancers and 37 benign lung nodules) were included from public databases of CT images. A variant of the local binary pattern was used for calculating a feature vector. SVM or XGBoost was trained using the feature vector and its corresponding label. Tree Parzen Estimator (TPE) was used as Bayesian optimization for parameters of SVM and XGBoost. Random search was done for comparison with TPE. Leave-one-out cross-validation was used for optimizing and evaluating the performance of our CADx system. Performance was evaluated using area under the curve (AUC) of receiver operating characteristic analysis. AUC was calculated 10 times, and its average was obtained. The best averaged AUC of SVM and XGBoost was 0.850 and 0.896, respectively; both were obtained using TPE. XGBoost was generally superior to SVM. Optimal parameters for achieving high AUC were obtained with fewer numbers of trials when using TPE, compared with random search. Bayesian optimization of SVM and XGBoost parameters was more efficient than random search. Based on observer study, AUC values of two board-certified radiologists were 0.898 and 0.822. The results show that diagnostic accuracy of our CADx system was comparable to that of radiologists with respect to classifying lung nodules.
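
    A compact sketch of TPE-based hyperparameter search for an SVM with the hyperopt library and scikit-learn, in the spirit of the Bayesian optimization described above; the dataset is a stand-in, not the lung-nodule data:

      import numpy as np
      from hyperopt import fmin, tpe, hp
      from sklearn.datasets import load_breast_cancer
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      X, y = load_breast_cancer(return_X_y=True)      # stand-in for the hand-crafted nodule features

      def objective(params):
          clf = SVC(C=params["C"], gamma=params["gamma"])
          auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
          return -auc                                  # hyperopt minimizes the objective

      space = {"C": hp.loguniform("C", np.log(1e-2), np.log(1e3)),
               "gamma": hp.loguniform("gamma", np.log(1e-4), np.log(1e1))}
      best = fmin(objective, space, algo=tpe.suggest, max_evals=50)
      print(best)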

  1. Uncertainty in action-value estimation affects both action choice and learning rate of the choice behaviors of rats.

    PubMed

    Funamizu, Akihiro; Ito, Makoto; Doya, Kenji; Kanzaki, Ryohei; Takahashi, Hirokazu

    2012-04-01

    The estimation of reward outcomes for action candidates is essential for decision making. In this study, we examined whether and how the uncertainty in reward outcome estimation affects the action choice and learning rate. We designed a choice task in which rats selected either the left-poking or right-poking hole and received a reward of a food pellet stochastically. The reward probabilities of the left and right holes were chosen from six settings (high, 100% vs. 66%; mid, 66% vs. 33%; low, 33% vs. 0% for the left vs. right holes, and the opposites) in every 20-549 trials. We used Bayesian Q-learning models to estimate the time course of the probability distribution of action values and tested if they better explain the behaviors of rats than standard Q-learning models that estimate only the mean of action values. Model comparison by cross-validation revealed that a Bayesian Q-learning model with an asymmetric update for reward and non-reward outcomes fit the choice time course of the rats best. In the action-choice equation of the Bayesian Q-learning model, the estimated coefficient for the variance of action value was positive, meaning that rats were uncertainty seeking. Further analysis of the Bayesian Q-learning model suggested that the uncertainty facilitated the effective learning rate. These results suggest that the rats consider uncertainty in action-value estimation and that they have an uncertainty-seeking action policy and uncertainty-dependent modulation of the effective learning rate. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
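
    A schematic sketch (not the authors' model) of the two ingredients described above: a posterior over each action value whose variance shrinks with experience, and an uncertainty-seeking choice rule that adds a bonus proportional to that variance:

      import numpy as np

      rng = np.random.default_rng(1)
      mu = np.zeros(2)                     # posterior means of the two action values
      var = np.ones(2)                     # posterior variances of the two action values
      obs_noise = 0.5                      # assumed reward-observation noise variance
      phi, beta = 1.0, 3.0                 # uncertainty-bonus weight, softmax inverse temperature
      p_reward = np.array([0.66, 0.33])    # hypothetical reward probabilities (left, right)

      for t in range(200):
          score = beta * (mu + phi * var)                  # uncertainty-seeking action values
          p_left = 1.0 / (1.0 + np.exp(score[1] - score[0]))
          a = 0 if rng.random() < p_left else 1
          r = float(rng.random() < p_reward[a])
          gain = var[a] / (var[a] + obs_noise)             # Kalman-style gain: larger when uncertain
          mu[a] += gain * (r - mu[a])                      # effective learning rate tracks uncertainty
          var[a] *= (1.0 - gain)
      print(mu, var)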

  2. Careful with Those Priors: A Note on Bayesian Estimation in Two-Parameter Logistic Item Response Theory Models

    ERIC Educational Resources Information Center

    Marcoulides, Katerina M.

    2018-01-01

    This study examined the use of Bayesian analysis methods for the estimation of item parameters in a two-parameter logistic item response theory model. Using simulated data under various design conditions with both informative and non-informative priors, the parameter recovery of Bayesian analysis methods were examined. Overall results showed that…

  3. A bayesian approach to classification criteria for spectacled eiders

    USGS Publications Warehouse

    Taylor, B.L.; Wade, P.R.; Stehn, R.A.; Cochrane, J.F.

    1996-01-01

    To facilitate decisions to classify species according to risk of extinction, we used Bayesian methods to analyze trend data for the Spectacled Eider, an arctic sea duck. Trend data from three independent surveys of the Yukon-Kuskokwim Delta were analyzed individually and in combination to yield posterior distributions for population growth rates. We used classification criteria developed by the recovery team for Spectacled Eiders that seek to equalize errors of under- or overprotecting the species. We conducted both a Bayesian decision analysis and a frequentist (classical statistical inference) decision analysis. Bayesian decision analyses are computationally easier, yield basically the same results, and yield results that are easier to explain to nonscientists. With the exception of the aerial survey analysis of the 10 most recent years, both Bayesian and frequentist methods indicated that an endangered classification is warranted. The discrepancy between surveys warrants further research. Although the trend data are abundance indices, we used a preliminary estimate of absolute abundance to demonstrate how to calculate extinction distributions using the joint probability distributions for population growth rate and variance in growth rate generated by the Bayesian analysis. Recent apparent increases in abundance highlight the need for models that apply to declining and then recovering species.

  4. Neoadjuvant treatments for locally advanced, resectable esophageal cancer: A network meta-analysis.

    PubMed

    Chan, Kelvin K W; Saluja, Ronak; Delos Santos, Keemo; Lien, Kelly; Shah, Keya; Cramarossa, Gemma; Zhu, Xiaofu; Wong, Rebecca K S

    2018-02-14

    The relative survival benefits and postoperative mortality among the different types of neoadjuvant treatments (such as chemotherapy only, radiotherapy only or chemoradiotherapy) for esophageal cancer patients are not well established. To evaluate the relative efficacy and safety of neoadjuvant therapies in resectable esophageal cancer, a Bayesian network meta-analysis was performed. MEDLINE, EMBASE and the Cochrane Central Register of Controlled Trials were searched for publications up to May 2016. ASCO and ASTRO annual meeting abstracts were also searched up to the 2015 conferences. Randomized controlled trials that compared at least two of the following treatments for resectable esophageal cancer were included: surgery alone, surgery preceded by neoadjuvant chemotherapy, neoadjuvant radiotherapy or neoadjuvant chemoradiotherapy. The primary outcome assessed from the trials was overall survival. Thirty-one randomized controlled trials involving 5496 patients were included in the quantitative analysis. The network meta-analysis showed that neoadjuvant chemoradiotherapy improved overall survival when compared to all other treatments including surgery alone (HR 0.75, 95% CR 0.67-0.85), neoadjuvant chemotherapy (HR 0.83, 95% CR 0.70-0.96) and neoadjuvant radiotherapy (HR 0.82, 95% CR 0.67-0.99). However, the risk of postoperative mortality increased when comparing neoadjuvant chemoradiotherapy to either surgery alone (RR 1.46, 95% CR 1.00-2.14) or to neoadjuvant chemotherapy (RR 1.58, 95% CR 1.00-2.49). In conclusion, neoadjuvant chemoradiotherapy improves overall survival but may also increase the risk of postoperative mortality in patients with locally advanced, resectable esophageal carcinoma. © 2018 UICC.

  5. CytoBayesJ: software tools for Bayesian analysis of cytogenetic radiation dosimetry data.

    PubMed

    Ainsbury, Elizabeth A; Vinnikov, Volodymyr; Puig, Pedro; Maznyk, Nataliya; Rothkamm, Kai; Lloyd, David C

    2013-08-30

    A number of authors have suggested that a Bayesian approach may be most appropriate for analysis of cytogenetic radiation dosimetry data. In the Bayesian framework, probability of an event is described in terms of previous expectations and uncertainty. Previously existing, or prior, information is used in combination with experimental results to infer probabilities or the likelihood that a hypothesis is true. It has been shown that the Bayesian approach increases both the accuracy and quality assurance of radiation dose estimates. New software entitled CytoBayesJ has been developed with the aim of bringing Bayesian analysis to cytogenetic biodosimetry laboratory practice. CytoBayesJ takes a number of Bayesian or 'Bayesian like' methods that have been proposed in the literature and presents them to the user in the form of simple user-friendly tools, including testing for the most appropriate model for distribution of chromosome aberrations and calculations of posterior probability distributions. The individual tools are described in detail and relevant examples of the use of the methods and the corresponding CytoBayesJ software tools are given. In this way, the suitability of the Bayesian approach to biological radiation dosimetry is highlighted and its wider application encouraged by providing a user-friendly software interface and manual in English and Russian. Copyright © 2013 Elsevier B.V. All rights reserved.
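
    To give a flavour of the posterior-distribution tools described above, a small grid-based sketch of Bayesian dose estimation from a dicentric count, assuming the usual linear-quadratic yield curve with hypothetical calibration coefficients:

      import numpy as np
      from scipy.stats import poisson

      y, n = 25, 500                              # observed dicentrics in n scored cells (hypothetical)
      c, alpha, beta = 0.001, 0.03, 0.06          # yield per cell = c + alpha*D + beta*D**2 (D in Gy)

      dose = np.linspace(0.0, 5.0, 501)           # dose grid
      prior = np.ones_like(dose)                  # flat prior over the grid
      yield_per_cell = c + alpha * dose + beta * dose**2
      likelihood = poisson.pmf(y, n * yield_per_cell)
      posterior = prior * likelihood
      posterior /= np.trapz(posterior, dose)      # normalize to a density over dose

      mean_dose = np.trapz(dose * posterior, dose)
      print(f"posterior mean dose ~ {mean_dose:.2f} Gy")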

  6. Using Bayesian analysis in repeated preclinical in vivo studies for a more effective use of animals.

    PubMed

    Walley, Rosalind; Sherington, John; Rastrick, Joe; Detrait, Eric; Hanon, Etienne; Watt, Gillian

    2016-05-01

    Whilst innovative Bayesian approaches are increasingly used in clinical studies, in the preclinical area Bayesian methods appear to be rarely used in the reporting of pharmacology data. This is particularly surprising in the context of regularly repeated in vivo studies where there is a considerable amount of data from historical control groups, which has potential value. This paper describes our experience with introducing Bayesian analysis for such studies using a Bayesian meta-analytic predictive approach. This leads naturally either to an informative prior for a control group as part of a full Bayesian analysis of the next study or to using a predictive distribution to replace a control group entirely. We use quality control charts to illustrate study-to-study variation to the scientists and describe informative priors in terms of their approximate effective numbers of animals. We describe two case studies of animal models: the lipopolysaccharide-induced cytokine release model used in inflammation and the novel object recognition model used to screen cognitive enhancers, both of which show the advantage of a Bayesian approach over the standard frequentist analysis. We conclude that using Bayesian methods in stable repeated in vivo studies can result in a more effective use of animals, either by reducing the total number of animals used or by increasing the precision of key treatment differences. This will lead to clearer results and supports the "3Rs initiative" to Refine, Reduce and Replace animals in research. Copyright © 2016 John Wiley & Sons, Ltd.
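
    A minimal sketch of a meta-analytic predictive (MAP) prior for a control group in PyMC, in the spirit of the approach described above but with invented numbers; the predictive draw theta_new is what would serve as the informative prior (or replace the control arm) in the next study:

      import numpy as np
      import pymc as pm

      # hypothetical historical control-group means and their standard errors, one per past study
      y = np.array([1.2, 0.9, 1.4, 1.1])
      se = np.array([0.20, 0.25, 0.15, 0.20])

      with pm.Model() as map_prior:
          mu = pm.Normal("mu", 0.0, 10.0)                    # population mean of the control response
          tau = pm.HalfNormal("tau", 1.0)                    # between-study heterogeneity
          theta = pm.Normal("theta", mu, tau, shape=len(y))  # true control means of past studies
          pm.Normal("y_obs", theta, se, observed=y)
          theta_new = pm.Normal("theta_new", mu, tau)        # MAP prior: prediction for the next study
          idata = pm.sample(2000, tune=1000, random_seed=0)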

  7. Bayesian linkage and segregation analysis: factoring the problem.

    PubMed

    Matthysse, S

    2000-01-01

    Complex segregation analysis and linkage methods are mathematical techniques for the genetic dissection of complex diseases. They are used to delineate complex modes of familial transmission and to localize putative disease susceptibility loci to specific chromosomal locations. The computational problem of Bayesian linkage and segregation analysis is one of integration in high-dimensional spaces. In this paper, three available techniques for Bayesian linkage and segregation analysis are discussed: Markov Chain Monte Carlo (MCMC), importance sampling, and exact calculation. The contribution of each to the overall integration will be explicitly discussed.

  8. Development and comparison of Bayesian modularization method in uncertainty assessment of hydrological models

    NASA Astrophysics Data System (ADS)

    Li, L.; Xu, C.-Y.; Engeland, K.

    2012-04-01

    With respect to model calibration, parameter estimation and analysis of uncertainty sources, different approaches have been used in hydrological models. The Bayesian method is one of the most widely used approaches for uncertainty assessment of hydrological models, as it incorporates different sources of information into a single analysis through Bayes' theorem. However, none of these applications can adequately treat the uncertainty in the extreme flows of hydrological models' simulations. This study proposes a Bayesian modularization approach for uncertainty assessment of conceptual hydrological models that considers the extreme flows. It includes a comprehensive comparison and evaluation of uncertainty assessments by the new Bayesian modularization approach and traditional Bayesian models using the Metropolis-Hastings (MH) algorithm with the daily hydrological model WASMOD. Three likelihood functions are used in combination with the traditional Bayesian approach: the AR(1) plus Normal and time period independent model (Model 1), the AR(1) plus Normal and time period dependent model (Model 2) and the AR(1) plus multi-normal model (Model 3). The results reveal that (1) the simulations derived from the Bayesian modularization method are more accurate, with the highest Nash-Sutcliffe efficiency value, and (2) the Bayesian modularization method performs best in uncertainty estimates of entire flows and in terms of application and computational efficiency. The study thus introduces a new approach for reducing the effect of extreme flows on the discharge uncertainty assessment of hydrological models via Bayesian methods. Keywords: extreme flow, uncertainty assessment, Bayesian modularization, hydrological model, WASMOD
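
    Since the comparison above hinges on the Metropolis-Hastings (MH) algorithm, a bare-bones random-walk MH sampler for a generic log-posterior is sketched below; it is purely illustrative and is not the WASMOD calibration code:

      import numpy as np

      def random_walk_mh(log_post, theta0, n_iter=5000, step=0.1, seed=0):
          """Random-walk Metropolis-Hastings for a small parameter vector."""
          rng = np.random.default_rng(seed)
          theta = np.atleast_1d(np.asarray(theta0, dtype=float))
          lp = log_post(theta)
          chain = np.empty((n_iter, theta.size))
          for i in range(n_iter):
              proposal = theta + step * rng.standard_normal(theta.size)
              lp_prop = log_post(proposal)
              if np.log(rng.random()) < lp_prop - lp:    # accept with probability min(1, ratio)
                  theta, lp = proposal, lp_prop
              chain[i] = theta
          return chain

      # toy target: a standard normal log-density
      chain = random_walk_mh(lambda th: -0.5 * np.sum(th**2), theta0=[3.0])
      print(chain[1000:].mean(), chain[1000:].std())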

  9. A Primer on Bayesian Analysis for Experimental Psychopathologists

    PubMed Central

    Krypotos, Angelos-Miltiadis; Blanken, Tessa F.; Arnaudova, Inna; Matzke, Dora; Beckers, Tom

    2016-01-01

    The principal goals of experimental psychopathology (EPP) research are to offer insights into the pathogenic mechanisms of mental disorders and to provide a stable ground for the development of clinical interventions. The main message of the present article is that those goals are better served by the adoption of Bayesian statistics than by the continued use of null-hypothesis significance testing (NHST). In the first part of the article we list the main disadvantages of NHST and explain why those disadvantages limit the conclusions that can be drawn from EPP research. Next, we highlight the advantages of Bayesian statistics. To illustrate, we then pit NHST and Bayesian analysis against each other using an experimental data set from our lab. Finally, we discuss some challenges when adopting Bayesian statistics. We hope that the present article will encourage experimental psychopathologists to embrace Bayesian statistics, which could strengthen the conclusions drawn from EPP research. PMID:28748068

  10. Testing students' e-learning via Facebook through Bayesian structural equation modeling.

    PubMed

    Salarzadeh Jenatabadi, Hashem; Moghavvemi, Sedigheh; Wan Mohamed Radzi, Che Wan Jasimah Bt; Babashamsi, Parastoo; Arashi, Mohammad

    2017-01-01

    Learning is an intentional activity, with several factors affecting students' intention to use new learning technology. Researchers have investigated technology acceptance in different contexts by developing various theories/models and testing them by a number of means. Although most theories/models developed have been examined through regression or structural equation modeling, Bayesian analysis offers more accurate data analysis results. To address this gap, the unified theory of acceptance and technology use in the context of e-learning via Facebook are re-examined in this study using Bayesian analysis. The data (S1 Data) were collected from 170 students enrolled in a business statistics course at University of Malaya, Malaysia, and tested with the maximum likelihood and Bayesian approaches. The difference between the two methods' results indicates that performance expectancy and hedonic motivation are the strongest factors influencing the intention to use e-learning via Facebook. The Bayesian estimation model exhibited better data fit than the maximum likelihood estimator model. The results of the Bayesian and maximum likelihood estimator approaches are compared and the reasons for the result discrepancy are deliberated.

  11. Testing students’ e-learning via Facebook through Bayesian structural equation modeling

    PubMed Central

    Moghavvemi, Sedigheh; Wan Mohamed Radzi, Che Wan Jasimah Bt; Babashamsi, Parastoo; Arashi, Mohammad

    2017-01-01

    Learning is an intentional activity, with several factors affecting students’ intention to use new learning technology. Researchers have investigated technology acceptance in different contexts by developing various theories/models and testing them by a number of means. Although most theories/models developed have been examined through regression or structural equation modeling, Bayesian analysis offers more accurate data analysis results. To address this gap, the unified theory of acceptance and technology use in the context of e-learning via Facebook are re-examined in this study using Bayesian analysis. The data (S1 Data) were collected from 170 students enrolled in a business statistics course at University of Malaya, Malaysia, and tested with the maximum likelihood and Bayesian approaches. The difference between the two methods’ results indicates that performance expectancy and hedonic motivation are the strongest factors influencing the intention to use e-learning via Facebook. The Bayesian estimation model exhibited better data fit than the maximum likelihood estimator model. The results of the Bayesian and maximum likelihood estimator approaches are compared and the reasons for the result discrepancy are deliberated. PMID:28886019

  12. Multivariate longitudinal data analysis with mixed effects hidden Markov models.

    PubMed

    Raffa, Jesse D; Dubin, Joel A

    2015-09-01

    Multiple longitudinal responses are often collected as a means to capture relevant features of the true outcome of interest, which is often hidden and not directly measurable. We outline an approach which models these multivariate longitudinal responses as generated from a hidden disease process. We propose a class of models which uses a hidden Markov model with separate but correlated random effects between multiple longitudinal responses. This approach was motivated by a smoking cessation clinical trial, where a bivariate longitudinal response involving both a continuous and a binomial response was collected for each participant to monitor smoking behavior. A Bayesian method using Markov chain Monte Carlo is used. Comparison of separate univariate response models to the bivariate response models was undertaken. Our methods are demonstrated on the smoking cessation clinical trial dataset, and properties of our approach are examined through extensive simulation studies. © 2015, The International Biometric Society.

  13. Bayesian learning for spatial filtering in an EEG-based brain-computer interface.

    PubMed

    Zhang, Haihong; Yang, Huijuan; Guan, Cuntai

    2013-07-01

    Spatial filtering for EEG feature extraction and classification is an important tool in brain-computer interface. However, there is generally no established theory that links spatial filtering directly to Bayes classification error. To address this issue, this paper proposes and studies a Bayesian analysis theory for spatial filtering in relation to Bayes error. Following the maximum entropy principle, we introduce a gamma probability model for describing single-trial EEG power features. We then formulate and analyze the theoretical relationship between Bayes classification error and the so-called Rayleigh quotient, which is a function of spatial filters and basically measures the ratio in power features between two classes. This paper also reports our extensive study that examines the theory and its use in classification, using three publicly available EEG data sets and state-of-the-art spatial filtering techniques and various classifiers. Specifically, we validate the positive relationship between Bayes error and Rayleigh quotient in real EEG power features. Finally, we demonstrate that the Bayes error can be practically reduced by applying a new spatial filter with lower Rayleigh quotient.
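
    The Rayleigh quotient referred to above measures, for a spatial filter w, the ratio of band power between the two classes; a small sketch of computing it, together with the generalized-eigenvector filters that extremize it (CSP-style), using random stand-in covariance matrices:

      import numpy as np
      from scipy.linalg import eigh

      rng = np.random.default_rng(0)
      def random_cov(d):
          a = rng.standard_normal((d, d))
          return a @ a.T + d * np.eye(d)

      S1, S2 = random_cov(8), random_cov(8)         # class-wise spatial covariances (stand-ins for EEG)

      def rayleigh_quotient(w, S1, S2):
          return float(w @ S1 @ w) / float(w @ S2 @ w)

      # Filters extremizing the quotient solve the generalized eigenproblem S1 w = lambda (S1 + S2) w
      vals, vecs = eigh(S1, S1 + S2)
      w_best = vecs[:, -1]                          # largest share of class-1 power
      print(rayleigh_quotient(w_best, S1, S2))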

  14. The comparative evidence basis for the efficacy of second-generation antidepressants in the treatment of depression in the US: A Bayesian meta-analysis of Food and Drug Administration reviews.

    PubMed

    Monden, Rei; Roest, Annelieke M; van Ravenzwaaij, Don; Wagenmakers, Eric-Jan; Morey, Richard; Wardenaar, Klaas J; de Jonge, Peter

    2018-08-01

    Studies have shown similar efficacy of different antidepressants in the treatment of depression. Data of phase-2 and -3 clinical-trials for 16 antidepressants (levomilnacipran, desvenlafaxine, duloxetine, venlafaxine, paroxetine, escitalopram, vortioxetine, mirtazapine, venlafaxine XR, sertraline, fluoxetine, citalopram, paroxetine CR, nefazodone, bupropion, vilazodone), approved by the FDA for the treatment of depression between 1987 and 2016, were extracted from the FDA reviews that were used to evaluate efficacy prior to marketing approval, which are less liable to reporting biases. Meta-analytic Bayes factors, which quantify the strength of evidence for efficacy, were calculated. In addition, posterior pooled effect-sizes were calculated and compared with classical estimations. The resulted Bayes factors showed that the evidence load for efficacy varied strongly across antidepressants. However, all tested drugs except for bupropion and vilazodone showed strong evidence for their efficacy. The posterior effect-size distributions showed variation across antidepressants, with the highest pooled estimated effect size for venlafaxine followed by paroxetine, and the lowest for bupropion and vilazodone. Not all published trials were included in the study. The results illustrate the importance of considering both the effect size and the evidence-load when judging the efficacy of a treatment. In doing so, the currently employed Bayesian approach provided clear insights on top of those gained with traditional approaches. Copyright © 2018 Elsevier B.V. All rights reserved.

  15. Bayesian network meta-analysis of root coverage procedures: ranking efficacy and identification of best treatment.

    PubMed

    Buti, Jacopo; Baccini, Michela; Nieri, Michele; La Marca, Michele; Pini-Prato, Giovan P

    2013-04-01

    The aim of this work was to conduct a Bayesian network meta-analysis (NM) of randomized controlled trials (RCTs) to establish a ranking in efficacy and the best technique for coronally advanced flap (CAF)-based root coverage procedures. A literature search on PubMed, Cochrane libraries, EMBASE, and hand-searched journals until June 2012 was conducted to identify RCTs on treatments of Miller Class I and II gingival recessions with at least 6 months of follow-up. The treatment outcomes were recession reduction (RecRed), clinical attachment gain (CALgain), keratinized tissue gain (KTgain), and complete root coverage (CRC). Twenty-nine studies met the inclusion criteria, 20 of which were classified as at high risk of bias. The CAF+connective tissue graft (CTG) combination ranked highest in effectiveness for RecRed (Probability of being the best = 40%) and CALgain (Pr = 33%); CAF+enamel matrix derivative (EMD) was slightly better for CRC; CAF+Collagen Matrix (CM) appeared effective for KTgain (Pr = 69%). Network inconsistency was low for all outcomes excluding CALgain. CAF+CTG might be considered the gold standard in root coverage procedures. The low amount of inconsistency gives support to the reliability of the present findings. © 2012 John Wiley & Sons A/S.

  16. [Bayesian statistics in medicine -- part II: main applications and inference].

    PubMed

    Montomoli, C; Nichelatti, M

    2008-01-01

    Bayesian statistics is not only used when one is dealing with 2-way tables, but it can be used for inferential purposes. Using the basic concepts presented in the first part, this paper aims to give a simple overview of Bayesian methods by introducing its foundation (Bayes' theorem) and then applying this rule to a very simple practical example; whenever possible, the elementary processes at the basis of analysis are compared to those of frequentist (classical) statistical analysis. The Bayesian reasoning is naturally connected to medical activity, since it appears to be quite similar to a diagnostic process.
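
    The kind of 'very simple practical example' referred to above is the textbook diagnostic-test calculation; a short illustration with hypothetical sensitivity, specificity and prevalence:

      # Posterior probability of disease given a positive test, by Bayes' theorem
      prevalence, sensitivity, specificity = 0.01, 0.90, 0.95     # hypothetical values

      p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
      p_disease_given_positive = sensitivity * prevalence / p_positive
      print(round(p_disease_given_positive, 3))   # about 0.154: most positives are false positives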

  17. A hierarchical Bayesian approach to adaptive vision testing: A case study with the contrast sensitivity function.

    PubMed

    Gu, Hairong; Kim, Woojae; Hou, Fang; Lesmes, Luis Andres; Pitt, Mark A; Lu, Zhong-Lin; Myung, Jay I

    2016-01-01

    Measurement efficiency is of concern when a large number of observations are required to obtain reliable estimates for parametric models of vision. The standard entropy-based Bayesian adaptive testing procedures addressed the issue by selecting the most informative stimulus in sequential experimental trials. Noninformative, diffuse priors were commonly used in those tests. Hierarchical adaptive design optimization (HADO; Kim, Pitt, Lu, Steyvers, & Myung, 2014) further improves the efficiency of the standard Bayesian adaptive testing procedures by constructing an informative prior using data from observers who have already participated in the experiment. The present study represents an empirical validation of HADO in estimating the human contrast sensitivity function. The results show that HADO significantly improves the accuracy and precision of parameter estimates, and therefore requires many fewer observations to obtain reliable inference about contrast sensitivity, compared to the method of quick contrast sensitivity function (Lesmes, Lu, Baek, & Albright, 2010), which uses the standard Bayesian procedure. The improvement with HADO was maintained even when the prior was constructed from heterogeneous populations or a relatively small number of observers. These results of this case study support the conclusion that HADO can be used in Bayesian adaptive testing by replacing noninformative, diffuse priors with statistically justified informative priors without introducing unwanted bias.

  18. Statistical modeling for Bayesian extrapolation of adult clinical trial information in pediatric drug evaluation.

    PubMed

    Gamalo-Siebers, Margaret; Savic, Jasmina; Basu, Cynthia; Zhao, Xin; Gopalakrishnan, Mathangi; Gao, Aijun; Song, Guochen; Baygani, Simin; Thompson, Laura; Xia, H Amy; Price, Karen; Tiwari, Ram; Carlin, Bradley P

    2017-07-01

    Children represent a large underserved population of "therapeutic orphans," as an estimated 80% of children are treated off-label. However, pediatric drug development often faces substantial challenges, including economic, logistical, technical, and ethical barriers, among others. Among many efforts trying to remove these barriers, increased recent attention has been paid to extrapolation; that is, the leveraging of available data from adults or older age groups to draw conclusions for the pediatric population. The Bayesian statistical paradigm is natural in this setting, as it permits the combining (or "borrowing") of information across disparate sources, such as the adult and pediatric data. In this paper, authored by the pediatric subteam of the Drug Information Association Bayesian Scientific Working Group and Adaptive Design Working Group, we develop, illustrate, and provide suggestions on Bayesian statistical methods that could be used to design improved pediatric development programs that use all available information in the most efficient manner. A variety of relevant Bayesian approaches are described, several of which are illustrated through 2 case studies: extrapolating adult efficacy data to expand the labeling for Remicade to include pediatric ulcerative colitis and extrapolating adult exposure-response information for antiepileptic drugs to pediatrics. Copyright © 2017 John Wiley & Sons, Ltd.

  19. Clinical Outcome Prediction in Aneurysmal Subarachnoid Hemorrhage Using Bayesian Neural Networks with Fuzzy Logic Inferences

    PubMed Central

    Lo, Benjamin W. Y.; Macdonald, R. Loch; Baker, Andrew; Levine, Mitchell A. H.

    2013-01-01

    Objective. The novel clinical prediction approach of Bayesian neural networks with fuzzy logic inferences is created and applied to derive prognostic decision rules in cerebral aneurysmal subarachnoid hemorrhage (aSAH). Methods. The approach of Bayesian neural networks with fuzzy logic inferences was applied to data from five trials of Tirilazad for aneurysmal subarachnoid hemorrhage (3551 patients). Results. Bayesian meta-analyses of observational studies on aSAH prognostic factors gave generalizable posterior distributions of population mean log odds ratios (ORs). Similar trends were noted in Bayesian and linear regression ORs. Significant outcome predictors include normal motor response, cerebral infarction, history of myocardial infarction, cerebral edema, history of diabetes mellitus, fever on day 8, prior subarachnoid hemorrhage, admission angiographic vasospasm, neurological grade, intraventricular hemorrhage, ruptured aneurysm size, history of hypertension, vasospasm day, age and mean arterial pressure. Heteroscedasticity was present in the nontransformed dataset. Artificial neural networks found nonlinear relationships with 11 hidden variables in 1 layer, using the multilayer perceptron model. Fuzzy logic decision rules (centroid defuzzification technique) denoted cut-off points for poor prognosis at greater than 2.5 clusters. Discussion. This aSAH prognostic system makes use of existing knowledge, recognizes unknown areas, incorporates one's clinical reasoning, and compensates for uncertainty in prognostication. PMID:23690884

  20. A hierarchical Bayesian approach to adaptive vision testing: A case study with the contrast sensitivity function

    PubMed Central

    Gu, Hairong; Kim, Woojae; Hou, Fang; Lesmes, Luis Andres; Pitt, Mark A.; Lu, Zhong-Lin; Myung, Jay I.

    2016-01-01

    Measurement efficiency is of concern when a large number of observations are required to obtain reliable estimates for parametric models of vision. The standard entropy-based Bayesian adaptive testing procedures addressed the issue by selecting the most informative stimulus in sequential experimental trials. Noninformative, diffuse priors were commonly used in those tests. Hierarchical adaptive design optimization (HADO; Kim, Pitt, Lu, Steyvers, & Myung, 2014) further improves the efficiency of the standard Bayesian adaptive testing procedures by constructing an informative prior using data from observers who have already participated in the experiment. The present study represents an empirical validation of HADO in estimating the human contrast sensitivity function. The results show that HADO significantly improves the accuracy and precision of parameter estimates, and therefore requires many fewer observations to obtain reliable inference about contrast sensitivity, compared to the method of quick contrast sensitivity function (Lesmes, Lu, Baek, & Albright, 2010), which uses the standard Bayesian procedure. The improvement with HADO was maintained even when the prior was constructed from heterogeneous populations or a relatively small number of observers. These results of this case study support the conclusion that HADO can be used in Bayesian adaptive testing by replacing noninformative, diffuse priors with statistically justified informative priors without introducing unwanted bias. PMID:27105061
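
    The entropy-based stimulus selection underlying both the standard procedure and HADO can be illustrated with a deliberately simplified one-parameter psychometric model on a grid: pick the stimulus that minimizes the expected posterior entropy of the parameter. The sketch below starts from a diffuse prior (HADO would replace this with an informative, hierarchically constructed one); the actual contrast-sensitivity models have several parameters, so all names and values here are illustrative.

    ```python
    import numpy as np

    thresholds = np.linspace(-2, 2, 81)                    # candidate threshold parameters
    prior = np.ones_like(thresholds) / thresholds.size     # diffuse prior over the threshold
    stimuli = np.linspace(-2, 2, 41)                       # candidate stimulus intensities

    def p_correct(stim, thr, slope=2.0):
        """Logistic psychometric function: probability of a correct response."""
        return 1.0 / (1.0 + np.exp(-slope * (stim - thr)))

    def entropy(p):
        p = np.clip(p, 1e-12, 1.0)
        return -np.sum(p * np.log(p))

    def next_stimulus(prior):
        """Pick the stimulus minimizing expected posterior entropy (max information gain)."""
        best, best_h = None, np.inf
        for s in stimuli:
            like1 = p_correct(s, thresholds)     # P(correct | threshold, stimulus)
            p1 = np.sum(prior * like1)           # predictive probability of a correct response
            post1 = prior * like1 / p1           # posterior if the response is correct
            post0 = prior * (1 - like1) / (1 - p1)
            h = p1 * entropy(post1) + (1 - p1) * entropy(post0)
            if h < best_h:
                best, best_h = s, h
        return best

    print("First adaptive stimulus:", next_stimulus(prior))
    ```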

  1. Adaptive Randomization of Neratinib in Early Breast Cancer.

    PubMed

    Park, John W; Liu, Minetta C; Yee, Douglas; Yau, Christina; van 't Veer, Laura J; Symmans, W Fraser; Paoloni, Melissa; Perlmutter, Jane; Hylton, Nola M; Hogarth, Michael; DeMichele, Angela; Buxton, Meredith B; Chien, A Jo; Wallace, Anne M; Boughey, Judy C; Haddad, Tufia C; Chui, Stephen Y; Kemmer, Kathleen A; Kaplan, Henry G; Isaacs, Claudine; Nanda, Rita; Tripathy, Debasish; Albain, Kathy S; Edmiston, Kirsten K; Elias, Anthony D; Northfelt, Donald W; Pusztai, Lajos; Moulder, Stacy L; Lang, Julie E; Viscusi, Rebecca K; Euhus, David M; Haley, Barbara B; Khan, Qamar J; Wood, William C; Melisko, Michelle; Schwab, Richard; Helsten, Teresa; Lyandres, Julia; Davis, Sarah E; Hirst, Gillian L; Sanil, Ashish; Esserman, Laura J; Berry, Donald A

    2016-07-07

    The heterogeneity of breast cancer makes identifying effective therapies challenging. The I-SPY 2 trial, a multicenter, adaptive phase 2 trial of neoadjuvant therapy for high-risk clinical stage II or III breast cancer, evaluated multiple new agents added to standard chemotherapy to assess the effects on rates of pathological complete response (i.e., absence of residual cancer in the breast or lymph nodes at the time of surgery). We used adaptive randomization to compare standard neoadjuvant chemotherapy plus the tyrosine kinase inhibitor neratinib with control. Eligible women were categorized according to eight biomarker subtypes on the basis of human epidermal growth factor receptor 2 (HER2) status, hormone-receptor status, and risk according to a 70-gene profile. Neratinib was evaluated against control with regard to 10 biomarker signatures (prospectively defined combinations of subtypes). The primary end point was pathological complete response. Volume changes on serial magnetic resonance imaging were used to assess the likelihood of such a response in each patient. Adaptive assignment to experimental groups within each disease subtype was based on Bayesian probabilities of the superiority of the treatment over control. Enrollment in the experimental group was stopped when the 85% Bayesian predictive probability of success in a confirmatory phase 3 trial of neoadjuvant therapy reached a prespecified threshold for any biomarker signature ("graduation"). Enrollment was stopped for futility if the probability fell to below 10% for every biomarker signature. Neratinib reached the prespecified efficacy threshold with regard to the HER2-positive, hormone-receptor-negative signature. Among patients with HER2-positive, hormone-receptor-negative cancer, the mean estimated rate of pathological complete response was 56% (95% Bayesian probability interval [PI], 37 to 73%) among 115 patients in the neratinib group, as compared with 33% among 78 controls (95% PI, 11 to 54%). The final predictive probability of success in phase 3 testing was 79%. Neratinib added to standard therapy was highly likely to result in higher rates of pathological complete response than standard chemotherapy with trastuzumab among patients with HER2-positive, hormone-receptor-negative breast cancer. (Funded by QuantumLeap Healthcare Collaborative and others; I-SPY 2 TRIAL ClinicalTrials.gov number, NCT01042379.).
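
    The "graduation" rule described here rests on a Bayesian predictive probability of success in a future trial. A stripped-down version of that calculation, using beta-binomial simulation with hypothetical interim counts and a simple z-test as the future trial's success criterion, is sketched below; the actual I-SPY 2 model is far richer (longitudinal MRI information, multiple signatures), so this is only the logical skeleton.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Hypothetical interim counts (illustrative only, not I-SPY 2 data).
    n_trt, x_trt = 60, 30       # experimental arm: patients, pathological complete responses
    n_ctl, x_ctl = 40, 13       # control arm
    future_n = 150              # per-arm size of the hypothetical confirmatory phase 3 trial
    n_sims = 20000

    # Beta posteriors for the pCR rates (uniform priors).
    p_trt = rng.beta(1 + x_trt, 1 + n_trt - x_trt, n_sims)
    p_ctl = rng.beta(1 + x_ctl, 1 + n_ctl - x_ctl, n_sims)

    # Simulate the future trial and count how often a one-sided z-test would succeed.
    y_trt = rng.binomial(future_n, p_trt)
    y_ctl = rng.binomial(future_n, p_ctl)
    phat_t, phat_c = y_trt / future_n, y_ctl / future_n
    pooled = (y_trt + y_ctl) / (2 * future_n)
    se = np.sqrt(pooled * (1 - pooled) * 2 / future_n)
    z = (phat_t - phat_c) / np.where(se > 0, se, np.inf)
    pred_prob = np.mean(z > stats.norm.ppf(0.975))

    print(f"Predictive probability of phase 3 success: {pred_prob:.2f}")
    ```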

  2. Comparative efficacy and safety of tofacitinib, with or without methotrexate, in patients with active rheumatoid arthritis: a Bayesian network meta-analysis of randomized controlled trials.

    PubMed

    Lee, Young Ho; Bae, Sang-Cheol; Song, Gwan Gyu

    2015-12-01

    This study aimed to assess the relative efficacy and safety of tofacitinib 5 and 10 mg twice daily, or in combination with methotrexate (MTX), in patients with active RA. Randomized controlled trials (RCTs) examining the efficacy and safety of tofacitinib in patients with active RA were included in this network meta-analysis. We performed a Bayesian network meta-analysis to combine the direct and indirect evidence from the RCTs. Ten RCTs including 4867 patients met the inclusion criteria. There were 21 pairwise comparisons including 11 direct comparisons of seven interventions. The ACR20 response rate was significantly higher in the tofacitinib 10 mg + MTX group than in the placebo and MTX groups (OR 7.56, 95 % credible interval (CrI) 3.07-21.16; OR 3.67, 95 % CrI 2.60-5.71, respectively). Ranking probabilities based on the surface under the cumulative ranking curve (SUCRA) indicated that tofacitinib 10 mg + MTX had the highest probability of being the best treatment for achieving the ACR20 response rate (SUCRA = 0.9254), followed by tofacitinib 5 mg + MTX (SUCRA = 0.7156), adalimumab 40 mg + MTX (SUCRA = 0.6097), tofacitinib 10 mg (SUCRA = 0.5984), tofacitinib 5 mg (SUCRA = 0.4749), MTX (SUCRA = 0.1674), and placebo (SUCRA = 0.0086). In contrast, the safety based on the number of withdrawals due to adverse events did not differ significantly among the seven interventions. Tofacitinib, at dosages 5 and 10 mg twice daily, in combination with MTX, was the most efficacious intervention for active RA and was not associated with a significant risk for withdrawals due to adverse events.
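
    The SUCRA values quoted above are a deterministic function of the posterior rank probabilities of each treatment. The sketch below shows that computation on simulated posterior draws (higher values taken as better); with real network meta-analysis output, the simulated matrix would be replaced by the MCMC samples of each treatment's effect.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated posterior draws of an efficacy measure for 4 hypothetical treatments
    # (higher = better); in a real network meta-analysis these come from the MCMC output.
    draws = rng.normal(loc=[0.0, 0.5, 0.8, 0.3], scale=0.3, size=(10000, 4))
    n_draws, n_trt = draws.shape

    # Rank each draw (rank 1 = best).
    order = np.argsort(-draws, axis=1)
    ranks = np.empty_like(order)
    rows = np.arange(n_draws)[:, None]
    ranks[rows, order] = np.arange(1, n_trt + 1)

    # rank_prob[t, r-1] = P(treatment t has rank r)
    rank_prob = np.array([[np.mean(ranks[:, t] == r) for r in range(1, n_trt + 1)]
                          for t in range(n_trt)])

    # SUCRA = average of the cumulative rank probabilities over the first n_trt-1 ranks.
    sucra = rank_prob[:, :-1].cumsum(axis=1).sum(axis=1) / (n_trt - 1)
    print("SUCRA:", np.round(sucra, 3))
    ```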

  3. Single-Trial Event-Related Potential Correlates of Belief Updating

    PubMed Central

    Murawski, Carsten; Bode, Stefan

    2015-01-01

    Belief updating—the process by which an agent alters an internal model of its environment—is a core function of the CNS. Recent theory has proposed broad principles by which belief updating might operate, but more precise details of its implementation in the human brain remain unclear. In order to address this question, we studied how two components of the human event-related potential encoded different aspects of belief updating. Participants completed a novel perceptual learning task while electroencephalography was recorded. Participants learned the mapping between the contrast of a dynamic visual stimulus and a monetary reward and updated their beliefs about a target contrast on each trial. A Bayesian computational model was formulated to estimate belief states at each trial and was used to quantify the following two variables: belief update size and belief uncertainty. Robust single-trial regression was used to assess how these model-derived variables were related to the amplitudes of the P3 and the stimulus-preceding negativity (SPN), respectively. Results showed a positive relationship between belief update size and P3 amplitude at one fronto-central electrode, and a negative relationship between SPN amplitude and belief uncertainty at a left central and a right parietal electrode. These results provide evidence that belief update size and belief uncertainty have distinct neural signatures that can be tracked in single trials in specific ERP components. This, in turn, provides evidence that the cognitive mechanisms underlying belief updating in humans can be described well within a Bayesian framework. PMID:26473170

  4. A Gibbs sampler for Bayesian analysis of site-occupancy data

    USGS Publications Warehouse

    Dorazio, Robert M.; Rodriguez, Daniel Taylor

    2012-01-01

    1. A Bayesian analysis of site-occupancy data containing covariates of species occurrence and species detection probabilities is usually completed using Markov chain Monte Carlo methods in conjunction with software programs that can implement those methods for any statistical model, not just site-occupancy models. Although these software programs are quite flexible, considerable experience is often required to specify a model and to initialize the Markov chain so that summaries of the posterior distribution can be estimated efficiently and accurately. 2. As an alternative to these programs, we develop a Gibbs sampler for Bayesian analysis of site-occupancy data that include covariates of species occurrence and species detection probabilities. This Gibbs sampler is based on a class of site-occupancy models in which probabilities of species occurrence and detection are specified as probit-regression functions of site- and survey-specific covariate measurements. 3. To illustrate the Gibbs sampler, we analyse site-occupancy data of the blue hawker, Aeshna cyanea (Odonata, Aeshnidae), a common dragonfly species in Switzerland. Our analysis includes a comparison of results based on Bayesian and classical (non-Bayesian) methods of inference. We also provide code (based on the R software program) for conducting Bayesian and classical analyses of site-occupancy data.
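
    The probit-regression structure exploited here admits the classic Albert–Chib data-augmentation Gibbs sampler. The sketch below implements that core update for an ordinary probit regression on simulated data; the full site-occupancy sampler described in the paper adds a second (detection) layer and its own covariates, so this is only the building block, with every number simulated.

    ```python
    import numpy as np
    from scipy.stats import truncnorm

    rng = np.random.default_rng(42)

    # Simulate probit-regression data (a stand-in for the occurrence layer).
    n, p = 300, 2
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    beta_true = np.array([-0.3, 1.0])
    y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

    # Albert–Chib Gibbs sampler: augment with latent z ~ N(X beta, 1) truncated by y.
    B0_inv = np.eye(p) / 100.0            # vague N(0, 100 I) prior on beta
    V = np.linalg.inv(B0_inv + X.T @ X)   # posterior covariance (fixed, since var(z) = 1)
    beta = np.zeros(p)
    draws = []

    for it in range(2000):
        mu = X @ beta
        # z_i ~ N(mu_i, 1) truncated to (0, inf) if y_i = 1, (-inf, 0) if y_i = 0.
        lower = np.where(y == 1, -mu, -np.inf)
        upper = np.where(y == 1, np.inf, -mu)
        z = mu + truncnorm.rvs(lower, upper, size=n, random_state=rng)
        # beta | z is multivariate normal with mean V X'z.
        m = V @ (X.T @ z)
        beta = rng.multivariate_normal(m, V)
        if it >= 500:                     # discard burn-in
            draws.append(beta)

    print("Posterior means:", np.mean(draws, axis=0))
    print("True values:   ", beta_true)
    ```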

  5. Utility-based designs for randomized comparative trials with categorical outcomes

    PubMed Central

    Murray, Thomas A.; Thall, Peter F.; Yuan, Ying

    2016-01-01

    A general utility-based testing methodology for design and conduct of randomized comparative clinical trials with categorical outcomes is presented. Numerical utilities of all elementary events are elicited to quantify their desirabilities. These numerical values are used to map the categorical outcome probability vector of each treatment to a mean utility, which is used as a one-dimensional criterion for constructing comparative tests. Bayesian tests are presented, including fixed sample and group sequential procedures, assuming Dirichlet-multinomial models for the priors and likelihoods. Guidelines are provided for establishing priors, eliciting utilities, and specifying hypotheses. Efficient posterior computation is discussed, and algorithms are provided for jointly calibrating test cutoffs and sample size to control overall type I error and achieve specified power. Asymptotic approximations for the power curve are used to initialize the algorithms. The methodology is applied to re-design a completed trial that compared two chemotherapy regimens for chronic lymphocytic leukemia, in which an ordinal efficacy outcome was dichotomized and toxicity was ignored to construct the trial’s design. The Bayesian tests also are illustrated by several types of categorical outcomes arising in common clinical settings. Freely available computer software for implementation is provided. PMID:27189672
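
    The central step of this methodology, mapping a Dirichlet posterior over category probabilities to a posterior on mean utility, takes only a few lines. The sketch below compares two arms with invented utilities and counts; the paper's actual tests add elicited priors, calibrated decision cutoffs, and group sequential monitoring on top of this.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical elicited utilities for four ordered outcome categories
    # (e.g., response without toxicity ... no response with toxicity); illustrative only.
    utilities = np.array([100.0, 70.0, 40.0, 0.0])

    # Hypothetical category counts on each randomized arm.
    counts_A = np.array([18, 10, 6, 6])
    counts_B = np.array([12, 9, 9, 10])

    # Dirichlet(1,...,1) prior -> Dirichlet posterior; compare posterior mean utilities.
    n_sims = 50000
    pA = rng.dirichlet(1 + counts_A, n_sims)
    pB = rng.dirichlet(1 + counts_B, n_sims)
    uA = pA @ utilities
    uB = pB @ utilities

    print("P(mean utility A > mean utility B) =", np.mean(uA > uB))
    ```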

  6. A critique of statistical hypothesis testing in clinical research

    PubMed Central

    Raha, Somik

    2011-01-01

    Many have documented the difficulty of using the current paradigm of Randomized Controlled Trials (RCTs) to test and validate the effectiveness of alternative medical systems such as Ayurveda. This paper critiques the applicability of RCTs for all clinical knowledge-seeking endeavors, of which Ayurveda research is a part. This is done by examining statistical hypothesis testing, the underlying foundation of RCTs, from a practical and philosophical perspective. In the philosophical critique, the two main worldviews of probability are that of the Bayesian and the frequentist. The frequentist worldview is a special case of the Bayesian worldview requiring the unrealistic assumptions of knowing nothing about the universe and believing that all observations are unrelated to each other. Many have claimed that the first belief is necessary for science, and this claim is debunked by comparing variations in learning with different prior beliefs. Moving beyond the Bayesian and frequentist worldviews, the notion of hypothesis testing itself is challenged on the grounds that a hypothesis is an unclear distinction, and assigning a probability on an unclear distinction is an exercise that does not lead to clarity of action. This critique is of the theory itself and not any particular application of statistical hypothesis testing. A decision-making frame is proposed as a way of both addressing this critique and transcending ideological debates on probability. An example of a Bayesian decision-making approach is shown as an alternative to statistical hypothesis testing, utilizing data from a past clinical trial that studied the effect of Aspirin on heart attacks in a sample population of doctors. As a big reason for the prevalence of RCTs in academia is legislation requiring it, the ethics of legislating the use of statistical methods for clinical research is also examined. PMID:22022152

  7. RECONSTRUCTING EXPOSURE SCENARIOS USING DOSE BIOMARKERS - AN APPLICATION OF BAYESIAN UNCERTAINTY ANALYSIS

    EPA Science Inventory

    We use Bayesian uncertainty analysis to explore how to estimate pollutant exposures from biomarker concentrations. The growing number of national databases with exposure data makes such an analysis possible. They contain datasets of pharmacokinetic biomarkers for many polluta...

  8. Comparative Effectiveness of 12 Treatment Strategies for Preventing Contrast-Induced Acute Kidney Injury: A Systematic Review and Bayesian Network Meta-analysis.

    PubMed

    Su, Xiaole; Xie, Xinfang; Liu, Lijun; Lv, Jicheng; Song, Fujian; Perkovic, Vlado; Zhang, Hong

    2017-01-01

    To simultaneously evaluate the relative efficacy of multiple pharmacologic strategies for preventing contrast-induced acute kidney injury (AKI). Systematic review containing a Bayesian network meta-analysis of randomized controlled trials. Participants undergoing diagnostic and/or interventional procedures with contrast media. Randomized controlled trials comparing the active drug treatments with each other or with hydration alone. Any of the following drugs in combination with hydration: N-acetylcysteine (NAC), theophylline (aminophylline), fenoldopam, iloprost, alprostadil, prostaglandin E1, statins, statins plus NAC, sodium bicarbonate, sodium bicarbonate plus NAC, ascorbic acid (vitamin C), tocopherol (vitamin E), α-lipoic acid, atrial natriuretic peptide, B-type natriuretic peptide, and carperitide. The occurrence of contrast-induced AKI. The trial network included 150 trials with 31,631 participants and 4,182 contrast-induced AKI events assessing 12 different interventions. Compared to hydration, ORs (95% credible intervals) for contrast-induced AKI were 0.31 (0.14-0.60) for high-dose statin plus NAC, 0.37 (0.19-0.64) for high-dose statin alone, 0.37 (0.17-0.72) for prostaglandins, 0.48 (0.26-0.82) for theophylline, 0.62 (0.40-0.88) for sodium bicarbonate plus NAC, 0.67 (0.54-0.81) for NAC alone, 0.64 (0.41-0.95) for vitamins and analogues, 0.70 (0.29-1.37) for natriuretic peptides, 0.69 (0.31-1.37) for fenoldopam, 0.78 (0.59-1.01) for sodium bicarbonate, and 0.98 (0.41-2.07) for low-dose statin. High-dose statin plus NAC or high-dose statin alone were likely to be ranked the best or the second best for preventing contrast-induced AKI. The overall results were not materially changed in metaregressions or subgroup and sensitivity analyses. Patient-level data were unavailable; unable to include some treatment agents; low event rates; imbalanced distribution of participants among treatment strategies. High-dose statins plus hydration with or without NAC might be the preferred treatment strategy to prevent contrast-induced AKI in patients undergoing diagnostic and/or interventional procedures requiring contrast media. Copyright © 2016 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.

  9. Transcranial laser therapy in acute stroke treatment: results of neurothera effectiveness and safety trial 3, a phase III clinical end point device trial.

    PubMed

    Hacke, Werner; Schellinger, Peter D; Albers, Gregory W; Bornstein, Natan M; Dahlof, Bjorn L; Fulton, Rachael; Kasner, Scott E; Shuaib, Ashfaq; Richieri, Steven P; Dilly, Stephen G; Zivin, Justin; Lees, Kennedy R

    2014-11-01

    On the basis of phase II trials, we considered that transcranial laser therapy could have neuroprotective effects in patients with acute ischemic stroke. We studied transcranial laser therapy in a double-blind, sham-controlled randomized clinical trial intended to enroll 1000 patients with acute ischemic stroke treated ≤24 hours after stroke onset and who did not undergo thrombolytic therapy. The primary efficacy measure was the 90-day functional outcome as assessed by the modified Rankin Scale, with hierarchical Bayesian analysis incorporating relevant previous data. Interim analyses were planned after 300 and 600 patients had been included. The study was terminated on recommendation by the Data Monitoring Committee after a futility analysis of 566 completed patients found no difference in the primary end point (transcranial laser therapy 140/282 [49.6%] versus sham 140/284 [49.3%] for good functional outcome; modified Rankin Scale, 0-2). The results remained stable after inclusion of all 630 randomized patients (adjusted odds ratio, 1.024; 95% confidence interval, 0.705-1.488). Once the results of the interim futility analysis became available, all study support was immediately withdrawn by the capital firms behind PhotoThera, and the company was dissolved. Proper termination of the trial was difficult but was finally achieved through special efforts by former employees of PhotoThera, the CRO Parexel and members of the steering and the safety committees. We conclude that transcranial laser therapy does not have a measurable neuroprotective effect in patients with acute ischemic stroke when applied within 24 hours after stroke onset. http://www.clinicaltrials.gov. Unique identifier: NCT01120301. © 2014 American Heart Association, Inc.

  10. Trial-by-Trial Changes in a Priori Informational Value of External Cues and Subjective Expectancies in Human Auditory Attention

    PubMed Central

    Arjona, Antonio; Gómez, Carlos M.

    2011-01-01

    Background Preparatory activity based on a priori probabilities generated in previous trials and subjective expectancies would produce an attentional bias. However, preparation can be correct (valid) or incorrect (invalid) depending on the actual target stimulus. The alternation effect refers to the subjective expectancy that a target will not be repeated in the same position, causing reaction times (RTs) to increase if the target location is repeated. The present experiment, using Posner's central cue paradigm, tries to demonstrate that not only the credibility of the cue, but also the expectancy about the next position of the target, are changed on a trial-by-trial basis. Sequences of trials were analyzed. Results The results indicated an increase in RT benefits when sequences of two and three valid trials occurred. The analysis of errors indicated an increase in anticipatory behavior which grows as the number of valid trials is increased. On the other hand, there was also an RT benefit when a trial was preceded by trials in which the position of the target changed with respect to the current trial (alternation effect). Sequences of two alternations or two repetitions were faster than sequences of trials in which a pattern of repetition or alternation is broken. Conclusions Taken together, these results suggest that in Posner's central cue paradigm, and with regard to the anticipatory activity, the credibility of the external cue and of the endogenously anticipated patterns of target location are constantly updated. The results suggest that Bayesian rules are operating in the generation of anticipatory activity as a function of the previous trial's outcome, but also on biases or prior beliefs like the “gambler's fallacy”. PMID:21698164
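
    One simple way to formalize the trial-by-trial updating of cue credibility that the authors infer from reaction times is a beta-Bernoulli model in which each valid or invalid trial updates a Beta posterior over the cue's validity. This is an illustrative reconstruction with made-up outcomes, not the analysis reported in the paper.

    ```python
    # Trial-by-trial Bayesian update of cue credibility: treat each trial's validity
    # (1 = the cue correctly predicted the target location) as a Bernoulli observation
    # and keep a Beta posterior over the cue's validity.
    alpha, beta = 1.0, 1.0            # uniform prior on cue validity
    outcomes = [1, 1, 0, 1, 1, 1, 0]  # hypothetical valid/invalid sequence

    for t, valid in enumerate(outcomes, start=1):
        alpha += valid
        beta += 1 - valid
        print(f"trial {t}: posterior mean validity = {alpha / (alpha + beta):.2f}")
    ```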

  11. Bayesian Factor Analysis as a Variable Selection Problem: Alternative Priors and Consequences

    PubMed Central

    Lu, Zhao-Hua; Chow, Sy-Miin; Loken, Eric

    2016-01-01

    Factor analysis is a popular statistical technique for multivariate data analysis. Developments in the structural equation modeling framework have enabled the use of hybrid confirmatory/exploratory approaches in which factor loading structures can be explored relatively flexibly within a confirmatory factor analysis (CFA) framework. Recently, a Bayesian structural equation modeling (BSEM) approach (Muthén & Asparouhov, 2012) has been proposed as a way to explore the presence of cross-loadings in CFA models. We show that the issue of determining factor loading patterns may be formulated as a Bayesian variable selection problem in which Muthén and Asparouhov’s approach can be regarded as a BSEM approach with ridge regression prior (BSEM-RP). We propose another Bayesian approach, denoted herein as the Bayesian structural equation modeling with spike and slab prior (BSEM-SSP), which serves as a one-stage alternative to the BSEM-RP. We review the theoretical advantages and disadvantages of both approaches and compare their empirical performance relative to two modification indices-based approaches and exploratory factor analysis with target rotation. A teacher stress scale data set (Byrne, 2012; Pettegrew & Wolf, 1982) is used to demonstrate our approach. PMID:27314566

  12. Application of a data-mining method based on Bayesian networks to lesion-deficit analysis

    NASA Technical Reports Server (NTRS)

    Herskovits, Edward H.; Gerring, Joan P.

    2003-01-01

    Although lesion-deficit analysis (LDA) has provided extensive information about structure-function associations in the human brain, LDA has suffered from the difficulties inherent to the analysis of spatial data, i.e., there are many more variables than subjects, and data may be difficult to model using standard distributions, such as the normal distribution. We herein describe a Bayesian method for LDA; this method is based on data-mining techniques that employ Bayesian networks to represent structure-function associations. These methods are computationally tractable, and can represent complex, nonlinear structure-function associations. When applied to the evaluation of data obtained from a study of the psychiatric sequelae of traumatic brain injury in children, this method generates a Bayesian network that demonstrates complex, nonlinear associations among lesions in the left caudate, right globus pallidus, right side of the corpus callosum, right caudate, and left thalamus, and subsequent development of attention-deficit hyperactivity disorder, confirming and extending our previous statistical analysis of these data. Furthermore, analysis of simulated data indicates that methods based on Bayesian networks may be more sensitive and specific for detecting associations among categorical variables than methods based on chi-square and Fisher exact statistics.

  13. Skin antiseptics in venous puncture site disinfection for preventing blood culture contamination: A Bayesian network meta-analysis of randomized controlled trials.

    PubMed

    Liu, Wenjie; Duan, Yuchen; Cui, Wenyao; Li, Li; Wang, Xia; Dai, Heling; You, Chao; Chen, Maojun

    2016-07-01

    To compare the efficacy of several antiseptics in decreasing the blood culture contamination rate. Network meta-analysis. Electronic searches of PubMed and Embase were conducted up to November 2015. Only randomized controlled trials or quasi-randomized controlled trials were eligible. We applied no language restriction. A comprehensive review of articles in the reference lists was also accomplished for possible relevant studies. Relevant studies evaluating the efficacy of different antiseptics at the venous puncture site for decreasing the blood culture contamination rate were included. The data were extracted from the included randomized controlled trials by two authors independently. The risk of bias was evaluated using the Detsky scale by two authors independently. We used WinBUGS 1.43 software and the statistical model described by Chaimani to perform this network meta-analysis. Graphs of the statistical results from WinBUGS 1.43 were then generated using the 'networkplot', 'ifplot', 'netfunnel' and 'sucra' procedures in STATA 13.0. Odds ratios and 95% confidence intervals were assessed for dichotomous data. A probability of p less than 0.05 was considered to be statistically significant. Compared with ordinary meta-analyses, this network meta-analysis offered hierarchies for the efficacy of different antiseptics in decreasing the blood culture contamination rate. Seven randomized controlled trials involving 34,408 blood samples were eligible for the meta-analysis. No significant difference was found in blood culture contamination rate among different antiseptics. No significant difference was found between non-alcoholic antiseptics and alcoholic antiseptics, alcoholic chlorhexidine and povidone iodine, chlorhexidine and iodine compounds, povidone iodine and iodine tincture in this aspect, respectively. Different antiseptics may not affect the blood culture contamination rate. Different intervals between the skin disinfection and the venous puncture, the different settings (emergency room, medical wards, and intensive care units) and the performance of the phlebotomy may affect the blood culture contamination rate. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Incorporating external evidence in trial-based cost-effectiveness analyses: the use of resampling methods

    PubMed Central

    2014-01-01

    Background Cost-effectiveness analyses (CEAs) that use patient-specific data from a randomized controlled trial (RCT) are popular, yet such CEAs are criticized because they neglect to incorporate evidence external to the trial. A popular method for quantifying uncertainty in a RCT-based CEA is the bootstrap. The objective of the present study was to further expand the bootstrap method of RCT-based CEA for the incorporation of external evidence. Methods We utilize the Bayesian interpretation of the bootstrap and derive the distribution for the cost and effectiveness outcomes after observing the current RCT data and the external evidence. We propose simple modifications of the bootstrap for sampling from such posterior distributions. Results In a proof-of-concept case study, we use data from a clinical trial and incorporate external evidence on the effect size of treatments to illustrate the method in action. Compared to the parametric models of evidence synthesis, the proposed approach requires fewer distributional assumptions, does not require explicit modeling of the relation between external evidence and outcomes of interest, and is generally easier to implement. A drawback of this approach is potential computational inefficiency compared to the parametric Bayesian methods. Conclusions The bootstrap method of RCT-based CEA can be extended to incorporate external evidence, while preserving its appealing features such as no requirement for parametric modeling of cost and effectiveness outcomes. PMID:24888356

  15. Incorporating external evidence in trial-based cost-effectiveness analyses: the use of resampling methods.

    PubMed

    Sadatsafavi, Mohsen; Marra, Carlo; Aaron, Shawn; Bryan, Stirling

    2014-06-03

    Cost-effectiveness analyses (CEAs) that use patient-specific data from a randomized controlled trial (RCT) are popular, yet such CEAs are criticized because they neglect to incorporate evidence external to the trial. A popular method for quantifying uncertainty in a RCT-based CEA is the bootstrap. The objective of the present study was to further expand the bootstrap method of RCT-based CEA for the incorporation of external evidence. We utilize the Bayesian interpretation of the bootstrap and derive the distribution for the cost and effectiveness outcomes after observing the current RCT data and the external evidence. We propose simple modifications of the bootstrap for sampling from such posterior distributions. In a proof-of-concept case study, we use data from a clinical trial and incorporate external evidence on the effect size of treatments to illustrate the method in action. Compared to the parametric models of evidence synthesis, the proposed approach requires fewer distributional assumptions, does not require explicit modeling of the relation between external evidence and outcomes of interest, and is generally easier to implement. A drawback of this approach is potential computational inefficiency compared to the parametric Bayesian methods. The bootstrap method of RCT-based CEA can be extended to incorporate external evidence, while preserving its appealing features such as no requirement for parametric modeling of cost and effectiveness outcomes.
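
    The Bayesian bootstrap at the heart of this method replaces multinomial resampling with Dirichlet weights on the observed patients. The sketch below draws Bayesian-bootstrap replicates of an incremental effectiveness estimate from simulated arm-level data and then folds in external evidence by importance-reweighting the replicates against an external-evidence density; that reweighting is a simplification in the same spirit as, but not necessarily identical to, the modifications proposed in the paper, and all numbers are invented.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # Hypothetical patient-level effectiveness data from the two trial arms.
    eff_trt = rng.normal(0.60, 0.8, 120)
    eff_ctl = rng.normal(0.45, 0.8, 115)

    # Bayesian bootstrap: Dirichlet(1,...,1) weights instead of multinomial resampling.
    n_draws = 5000
    d_eff = np.empty(n_draws)
    for i in range(n_draws):
        w_t = rng.dirichlet(np.ones(eff_trt.size))
        w_c = rng.dirichlet(np.ones(eff_ctl.size))
        d_eff[i] = w_t @ eff_trt - w_c @ eff_ctl

    # External evidence on the incremental effect, summarized as N(0.10, 0.05^2).
    external = stats.norm(0.10, 0.05)

    # Importance-reweight each bootstrap replicate by the external-evidence density.
    w = external.pdf(d_eff)
    w /= w.sum()
    post_mean = np.sum(w * d_eff)

    print("Trial-only mean incremental effect:", round(d_eff.mean(), 3))
    print("Mean effect after incorporating external evidence:", round(post_mean, 3))
    ```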

  16. Bayesian accrual prediction for interim review of clinical studies: open source R package and smartphone application.

    PubMed

    Jiang, Yu; Guarino, Peter; Ma, Shuangge; Simon, Steve; Mayo, Matthew S; Raghavan, Rama; Gajewski, Byron J

    2016-07-22

    Subject recruitment for medical research is challenging. Slow patient accrual leads to increased costs and delays in treatment advances. Researchers need reliable tools to manage and predict the accrual rate. The previously developed Bayesian method integrates researchers' experience on former trials and data from an ongoing study, providing a reliable prediction of accrual rate for clinical studies. In this paper, we present a user-friendly graphical user interface program developed in R. A closed-form solution for the total number of subjects that can be recruited within a fixed time is derived. We also present a built-in Android system using Java for web browsers and mobile devices. Using the accrual software, we re-evaluated the Veterans Affairs Cooperative Studies Program 558-ROBOTICS study. The application of the software in monitoring and management of recruitment is illustrated for different stages of the trial. This developed accrual software provides a more convenient platform for estimation and prediction of the accrual process.
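
    The accrual model behind such software can be sketched as a gamma-Poisson update: a gamma prior on the accrual rate, parameterized through the protocol's planned rate and a weight expressing how much that plan is trusted, is updated with the enrollment observed so far and then projected forward. The numbers below are hypothetical and the parameterization is a simplification of the published method.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    target_rate = 2.0        # subjects/day the investigators planned on
    prior_weight = 30.0      # prior "worth", expressed as days of hypothetical data
    a0, b0 = target_rate * prior_weight, prior_weight   # Gamma(shape, rate) prior

    days_elapsed, enrolled_so_far = 120, 180             # interim accrual data
    days_remaining, target_n = 240, 600

    # Gamma posterior for the accrual rate, then Poisson predictive for the remainder.
    a1, b1 = a0 + enrolled_so_far, b0 + days_elapsed
    rate_draws = rng.gamma(a1, 1.0 / b1, 20000)
    future = rng.poisson(rate_draws * days_remaining)
    total = enrolled_so_far + future

    print("Predicted total accrual (mean):", round(total.mean(), 1))
    print("P(reaching target of %d subjects): %.3f" % (target_n, np.mean(total >= target_n)))
    ```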

  17. Single-Case Time Series with Bayesian Analysis: A Practitioner's Guide.

    ERIC Educational Resources Information Center

    Jones, W. Paul

    2003-01-01

    This article illustrates a simplified time series analysis for use by the counseling researcher practitioner in single-case baseline plus intervention studies with a Bayesian probability analysis to integrate findings from replications. The C statistic is recommended as a primary analysis tool with particular relevance in the context of actual…

  18. Daniel Goodman’s empirical approach to Bayesian statistics

    USGS Publications Warehouse

    Gerrodette, Tim; Ward, Eric; Taylor, Rebecca L.; Schwarz, Lisa K.; Eguchi, Tomoharu; Wade, Paul; Himes Boor, Gina

    2016-01-01

    Bayesian statistics, in contrast to classical statistics, uses probability to represent uncertainty about the state of knowledge. Bayesian statistics has often been associated with the idea that knowledge is subjective and that a probability distribution represents a personal degree of belief. Dr. Daniel Goodman considered this viewpoint problematic for issues of public policy. He sought to ground his Bayesian approach in data, and advocated the construction of a prior as an empirical histogram of “similar” cases. In this way, the posterior distribution that results from a Bayesian analysis combines comparable previous data with case-specific current data, using Bayes’ formula. Goodman championed such a data-based approach, but he acknowledged that it was difficult in practice. Goodman argued that, if based on a true representation of our knowledge and uncertainty, risk assessment and decision-making could be an exact science, despite the uncertainties. In his view, Bayesian statistics is a critical component of this science because a Bayesian analysis produces the probabilities of future outcomes. Indeed, Goodman maintained that the Bayesian machinery, following the rules of conditional probability, offered the best legitimate inference from available data. We give an example of an informative prior in a recent study of Steller sea lion spatial use patterns in Alaska.

  19. Robust Bayesian Factor Analysis

    ERIC Educational Resources Information Center

    Hayashi, Kentaro; Yuan, Ke-Hai

    2003-01-01

    Bayesian factor analysis (BFA) assumes the normal distribution of the current sample conditional on the parameters. Practical data in social and behavioral sciences typically have significant skewness and kurtosis. If the normality assumption is not attainable, the posterior analysis will be inaccurate, although the BFA depends less on the current…

  20. Bayesian Meta-Analysis of Coefficient Alpha

    ERIC Educational Resources Information Center

    Brannick, Michael T.; Zhang, Nanhua

    2013-01-01

    The current paper describes and illustrates a Bayesian approach to the meta-analysis of coefficient alpha. Alpha is the most commonly used estimate of the reliability or consistency (freedom from measurement error) for educational and psychological measures. The conventional approach to meta-analysis uses inverse variance weights to combine…

  1. Ultrathin, bioresorbable polymer sirolimus-eluting stents versus thin, durable polymer everolimus-eluting stents in patients undergoing coronary revascularisation (BIOFLOW V): a randomised trial.

    PubMed

    Kandzari, David E; Mauri, Laura; Koolen, Jacques J; Massaro, Joseph M; Doros, Gheorghe; Garcia-Garcia, Hector M; Bennett, Johan; Roguin, Ariel; Gharib, Elie G; Cutlip, Donald E; Waksman, Ron

    2017-10-21

    The development of coronary drug-eluting stents has included use of new metal alloys, changes in stent architecture, and use of bioresorbable polymers. Whether these advancements improve clinical safety and efficacy has not been shown in previous randomised trials. We aimed to examine the clinical outcomes of a bioresorbable polymer sirolimus-eluting stent compared with a durable polymer everolimus-eluting stent in a broad patient population undergoing percutaneous coronary intervention. BIOFLOW V was an international, randomised trial done in patients undergoing elective and urgent percutaneous coronary intervention in 90 hospitals in 13 countries (Australia, Belgium, Canada, Denmark, Germany, Hungary, Israel, the Netherlands, New Zealand, South Korea, Spain, Switzerland, and the USA). Eligible patients were those aged 18 years or older with ischaemic heart disease undergoing planned stent implantation in de-novo, native coronary lesions. Patients were randomly assigned (2:1) to either an ultrathin strut (60 μm) bioresorbable polymer sirolimus-eluting stent or to a durable polymer everolimus-eluting stent. Randomisation was via a central web-based data capture system (mixed blocks of 3 and 6), and stratified by study site. The primary endpoint was 12-month target lesion failure. The primary non-inferiority comparison combined these data from two additional randomised trials of bioresorbable polymer sirolimus-eluting stent and durable polymer everolimus-eluting stent with Bayesian methods. Analysis was by intention to treat. The trial is registered with ClinicalTrials.gov, number NCT02389946. Between May 8, 2015, and March 31, 2016, 4772 patients were recruited into the study. 1334 patients met inclusion criteria and were randomly assigned to treatment with bioresorbable polymer sirolimus-eluting stents (n=884) or durable polymer everolimus-eluting stents (n=450). 52 (6%) of 883 patients in the bioresorbable polymer sirolimus-eluting stent group and 41 (10%) of 427 patients in the durable polymer everolimus-eluting stent group met the 12-month primary endpoint of target lesion failure (95% CI -6·84 to -0·29, p=0·0399), with differences in target vessel myocardial infarction (39 [5%] of 831 patients vs 35 [8%] of 424 patients, p=0·0155). The posterior probability that the bioresorbable polymer sirolimus-eluting stent is non-inferior to the durable polymer everolimus-eluting stent was 100% (Bayesian analysis, difference in target lesion failure frequency -2·6% [95% credible interval -5·5 to 0·1], non-inferiority margin 3·85%, n=2208). The outperformance of the ultrathin, bioresorbable polymer sirolimus-eluting stent over the durable polymer everolimus-eluting stent in a complex patient population undergoing percutaneous coronary intervention suggests a new direction in improving next generation drug-eluting stent technology. BIOTRONIK. Copyright © 2017 Elsevier Ltd. All rights reserved.
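
    The Bayesian non-inferiority statement can be reproduced in simplified form directly from the counts quoted in this abstract (52/883 versus 41/427), using independent beta posteriors and the 3.85% margin; note that the trial's actual primary analysis pooled two additional randomized trials (n=2208), so the sketch below is a single-trial approximation, not the reported result.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Counts quoted in the abstract.
    x_test, n_test = 52, 883       # bioresorbable polymer sirolimus-eluting stent
    x_ctrl, n_ctrl = 41, 427       # durable polymer everolimus-eluting stent
    margin = 0.0385                # non-inferiority margin on the risk difference

    n_sims = 200000
    p_test = rng.beta(1 + x_test, 1 + n_test - x_test, n_sims)
    p_ctrl = rng.beta(1 + x_ctrl, 1 + n_ctrl - x_ctrl, n_sims)
    diff = p_test - p_ctrl         # difference in 12-month target lesion failure rates

    print("Posterior median difference: %.3f" % np.median(diff))
    print("P(non-inferior), i.e. P(diff < margin): %.4f" % np.mean(diff < margin))
    ```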

  2. Logistic random effects regression models: a comparison of statistical packages for binary and ordinal outcomes.

    PubMed

    Li, Baoyue; Lingsma, Hester F; Steyerberg, Ewout W; Lesaffre, Emmanuel

    2011-05-23

    Logistic random effects models are a popular tool to analyze multilevel (also called hierarchical) data with a binary or ordinal outcome. Here, we aim to compare different statistical software implementations of these models. We used individual patient data from 8509 patients in 231 centers with moderate and severe Traumatic Brain Injury (TBI) enrolled in eight Randomized Controlled Trials (RCTs) and three observational studies. We fitted logistic random effects regression models with the 5-point Glasgow Outcome Scale (GOS) as outcome, both dichotomized as well as ordinal, with center and/or trial as random effects, and as covariates age, motor score, pupil reactivity or trial. We then compared the implementations of frequentist and Bayesian methods to estimate the fixed and random effects. Frequentist approaches included R (lme4), Stata (GLLAMM), SAS (GLIMMIX and NLMIXED), MLwiN ([R]IGLS) and MIXOR; Bayesian approaches included WinBUGS, MLwiN (MCMC), R package MCMCglmm and SAS experimental procedure MCMC. Three data sets (the full data set and two sub-datasets) were analysed using basically two logistic random effects models with either one random effect for the center or two random effects for center and trial. For the ordinal outcome in the full data set, a proportional odds model with a random center effect was also fitted. The packages gave similar parameter estimates for both the fixed and random effects and for the binary (and ordinal) models for the main study and when based on a relatively large number of level-1 (patient level) data compared to the number of level-2 (hospital level) data. However, when based on a relatively sparse data set, i.e. when the numbers of level-1 and level-2 data units were about the same, the frequentist and Bayesian approaches showed somewhat different results. The software implementations differ considerably in flexibility, computation time, and usability. There are also differences in the availability of additional tools for model evaluation, such as diagnostic plots. The experimental SAS (version 9.2) procedure MCMC appeared to be inefficient. On relatively large data sets, the different software implementations of logistic random effects regression models produced similar results. Thus, for a large data set there seems to be no explicit preference (of course if there is no preference from a philosophical point of view) for either a frequentist or Bayesian approach (if based on vague priors). The choice of a particular implementation may largely depend on the desired flexibility, and the usability of the package. For small data sets the random effects variances are difficult to estimate. In the frequentist approaches the MLE of this variance was often estimated as zero with a standard error that is either zero or could not be determined, while for Bayesian methods the estimates could depend on the chosen "non-informative" prior of the variance parameter. The starting value for the variance parameter may also be critical for the convergence of the Markov chain.

  3. Decision-theoretic designs for a series of trials with correlated treatment effects using the Sarmanov multivariate beta-binomial distribution.

    PubMed

    Hee, Siew Wan; Parsons, Nicholas; Stallard, Nigel

    2018-03-01

    The motivation for the work in this article is the setting in which a number of treatments are available for evaluation in phase II clinical trials and where it may be infeasible to try them concurrently because the intended population is small. This paper introduces an extension of previous work on decision-theoretic designs for a series of phase II trials. The program encompasses a series of sequential phase II trials with interim decision making and a single two-arm phase III trial. The design is based on a hybrid approach where the final analysis of the phase III data is based on a classical frequentist hypothesis test, whereas the trials are designed using a Bayesian decision-theoretic approach in which the unknown treatment effect is assumed to follow a known prior distribution. In addition, as treatments are intended for the same population it is not unrealistic to consider treatment effects to be correlated. Thus, the prior distribution will reflect this. Data from a randomized trial of severe arthritis of the hip are used to test the application of the design. We show that the design on average requires fewer patients in phase II than when the correlation is ignored. Correspondingly, the time required to recommend an efficacious treatment for phase III is quicker. © 2017 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Analyzing small data sets using Bayesian estimation: the case of posttraumatic stress symptoms following mechanical ventilation in burn survivors

    PubMed Central

    van de Schoot, Rens; Broere, Joris J.; Perryck, Koen H.; Zondervan-Zwijnenburg, Mariëlle; van Loey, Nancy E.

    2015-01-01

    Background The analysis of small data sets in longitudinal studies can lead to power issues and often suffers from biased parameter values. These issues can be solved by using Bayesian estimation in conjunction with informative prior distributions. By means of a simulation study and an empirical example concerning posttraumatic stress symptoms (PTSS) following mechanical ventilation in burn survivors, we demonstrate the advantages and potential pitfalls of using Bayesian estimation. Methods First, we show how to specify prior distributions and, by means of a sensitivity analysis, we demonstrate how to check the exact influence of the prior (mis-) specification. Thereafter, we show by means of a simulation the situations in which the Bayesian approach outperforms the default maximum likelihood approach. Finally, we re-analyze empirical data on burn survivors which provided preliminary evidence of an aversive influence of a period of mechanical ventilation on the course of PTSS following burns. Results Not surprisingly, maximum likelihood estimation showed insufficient coverage as well as power with very small samples. Only when Bayesian analysis was used in conjunction with informative priors did power increase to acceptable levels. As expected, we showed that the smaller the sample size, the more the results rely on the prior specification. Conclusion We show that two issues often encountered during analysis of small samples, power and biased parameters, can be solved by including prior information into Bayesian analysis. We argue that the use of informative priors should always be reported together with a sensitivity analysis. PMID:25765534

  5. Analyzing small data sets using Bayesian estimation: the case of posttraumatic stress symptoms following mechanical ventilation in burn survivors.

    PubMed

    van de Schoot, Rens; Broere, Joris J; Perryck, Koen H; Zondervan-Zwijnenburg, Mariëlle; van Loey, Nancy E

    2015-01-01

    Background: The analysis of small data sets in longitudinal studies can lead to power issues and often suffers from biased parameter values. These issues can be solved by using Bayesian estimation in conjunction with informative prior distributions. By means of a simulation study and an empirical example concerning posttraumatic stress symptoms (PTSS) following mechanical ventilation in burn survivors, we demonstrate the advantages and potential pitfalls of using Bayesian estimation. Methods: First, we show how to specify prior distributions and, by means of a sensitivity analysis, we demonstrate how to check the exact influence of the prior (mis-) specification. Thereafter, we show by means of a simulation the situations in which the Bayesian approach outperforms the default maximum likelihood approach. Finally, we re-analyze empirical data on burn survivors which provided preliminary evidence of an aversive influence of a period of mechanical ventilation on the course of PTSS following burns. Results: Not surprisingly, maximum likelihood estimation showed insufficient coverage as well as power with very small samples. Only when Bayesian analysis was used in conjunction with informative priors did power increase to acceptable levels. As expected, we showed that the smaller the sample size, the more the results rely on the prior specification. Conclusion: We show that two issues often encountered during analysis of small samples, power and biased parameters, can be solved by including prior information into Bayesian analysis. We argue that the use of informative priors should always be reported together with a sensitivity analysis.
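
    The prior sensitivity check recommended here can be illustrated with a conjugate normal-normal model: fit the same small sample under a diffuse prior, a plausible informative prior, and a deliberately misspecified prior, and compare the posteriors. All numbers below are invented; the paper's own analyses use latent growth models rather than this toy example.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Small hypothetical sample; normal-normal conjugate model with known variance.
    y = rng.normal(loc=3.0, scale=2.0, size=12)
    sigma2 = 4.0                                   # assumed known sampling variance

    priors = {"diffuse N(0, 100)": (0.0, 100.0),
              "informative N(2.5, 0.5)": (2.5, 0.5),
              "misspecified N(6, 0.5)": (6.0, 0.5)}

    n = y.size
    for label, (m0, v0) in priors.items():
        post_var = 1.0 / (1.0 / v0 + n / sigma2)
        post_mean = post_var * (m0 / v0 + y.sum() / sigma2)
        print(f"{label:25s} posterior mean = {post_mean:.2f}, sd = {post_var ** 0.5:.2f}")
    ```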

  6. Bayesian dose-response analysis for epidemiological studies with complex uncertainty in dose estimation.

    PubMed

    Kwon, Deukwoo; Hoffman, F Owen; Moroz, Brian E; Simon, Steven L

    2016-02-10

    Most conventional risk analysis methods rely on a single best estimate of exposure per person, which does not allow for adjustment for exposure-related uncertainty. Here, we propose a Bayesian model averaging method to properly quantify the relationship between radiation dose and disease outcomes by accounting for shared and unshared uncertainty in estimated dose. Our Bayesian risk analysis method utilizes multiple realizations of sets (vectors) of doses generated by a two-dimensional Monte Carlo simulation method that properly separates shared and unshared errors in dose estimation. The exposure model used in this work is taken from a study of the risk of thyroid nodules among a cohort of 2376 subjects who were exposed to fallout from nuclear testing in Kazakhstan. We assessed the performance of our method through an extensive series of simulations and comparisons against conventional regression risk analysis methods. When the estimated doses contain relatively small amounts of uncertainty, the Bayesian method using multiple a priori plausible draws of dose vectors gave similar results to the conventional regression-based methods of dose-response analysis. However, when large and complex mixtures of shared and unshared uncertainties are present, the Bayesian method using multiple dose vectors had significantly lower relative bias than conventional regression-based risk analysis methods and better coverage, that is, a markedly increased capability to include the true risk coefficient within the 95% credible interval of the Bayesian-based risk estimate. An evaluation of the dose-response using our method is presented for an epidemiological study of thyroid disease following radiation exposure. Copyright © 2015 John Wiley & Sons, Ltd.

  7. The Importance of Proving the Null

    ERIC Educational Resources Information Center

    Gallistel, C. R.

    2009-01-01

    Null hypotheses are simple, precise, and theoretically important. Conventional statistical analysis cannot support them; Bayesian analysis can. The challenge in a Bayesian analysis is to formulate a suitably vague alternative, because the vaguer the alternative is (the more it spreads out the unit mass of prior probability), the more the null is…

  8. Bayesian selection of misspecified models is overconfident and may cause spurious posterior probabilities for phylogenetic trees.

    PubMed

    Yang, Ziheng; Zhu, Tianqi

    2018-02-20

    The Bayesian method is noted to produce spuriously high posterior probabilities for phylogenetic trees in analysis of large datasets, but the precise reasons for this overconfidence are unknown. In general, the performance of Bayesian selection of misspecified models is poorly understood, even though this is of great scientific interest since models are never true in real data analysis. Here we characterize the asymptotic behavior of Bayesian model selection and show that when the competing models are equally wrong, Bayesian model selection exhibits surprising and polarized behaviors in large datasets, supporting one model with full force while rejecting the others. If one model is slightly less wrong than the other, the less wrong model will eventually win when the amount of data increases, but the method may become overconfident before it becomes reliable. We suggest that this extreme behavior may be a major factor for the spuriously high posterior probabilities for evolutionary trees. The philosophical implications of our results to the application of Bayesian model selection to evaluate opposing scientific hypotheses are yet to be explored, as are the behaviors of non-Bayesian methods in similar situations.
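
    The polarization described here is easy to reproduce in a toy setting: generate data from N(0, 1) and compare two equally wrong models, N(+0.2, 1) and N(-0.2, 1), under equal prior odds. As the sample size grows, each replicate ends up endorsing one of the two models almost with certainty, even though neither fits better on average. A minimal sketch:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Data-generating model: N(0, 1). Candidate models: N(+0.2, 1) and N(-0.2, 1).
    for n in (10, 100, 1000, 10000):
        probs = []
        for rep in range(200):
            x = rng.normal(0.0, 1.0, n)
            # Log Bayes factor of model 1 (mean +0.2) versus model 2 (mean -0.2).
            log_bf = np.sum(-0.5 * (x - 0.2) ** 2) - np.sum(-0.5 * (x + 0.2) ** 2)
            probs.append(1.0 / (1.0 + np.exp(-log_bf)))   # P(model 1 | data), equal priors
        probs = np.array(probs)
        extreme = np.mean((probs < 0.01) | (probs > 0.99))
        print(f"n={n:6d}: fraction of replicates with near-certain model choice = {extreme:.2f}")
    ```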

  9. Bayesian Analysis of Nonlinear Structural Equation Models with Nonignorable Missing Data

    ERIC Educational Resources Information Center

    Lee, Sik-Yum

    2006-01-01

    A Bayesian approach is developed for analyzing nonlinear structural equation models with nonignorable missing data. The nonignorable missingness mechanism is specified by a logistic regression model. A hybrid algorithm that combines the Gibbs sampler and the Metropolis-Hastings algorithm is used to produce the joint Bayesian estimates of…

  10. Bayesian Data-Model Fit Assessment for Structural Equation Modeling

    ERIC Educational Resources Information Center

    Levy, Roy

    2011-01-01

    Bayesian approaches to modeling are receiving an increasing amount of attention in the areas of model construction and estimation in factor analysis, structural equation modeling (SEM), and related latent variable models. However, model diagnostics and model criticism remain relatively understudied aspects of Bayesian SEM. This article describes…

  11. Bayesian Posterior Odds Ratios: Statistical Tools for Collaborative Evaluations

    ERIC Educational Resources Information Center

    Hicks, Tyler; Rodríguez-Campos, Liliana; Choi, Jeong Hoon

    2018-01-01

    To begin statistical analysis, Bayesians quantify their confidence in modeling hypotheses with priors. A prior describes the probability of a certain modeling hypothesis apart from the data. Bayesians should be able to defend their choice of prior to a skeptical audience. Collaboration between evaluators and stakeholders could make their choices…

  12. Bayesian statistics in radionuclide metrology: measurement of a decaying source

    NASA Astrophysics Data System (ADS)

    Bochud, François O.; Bailat, Claude J.; Laedermann, Jean-Pascal

    2007-08-01

    The most intuitive way of defining a probability is perhaps through the frequency at which it appears when a large number of trials are realized in identical conditions. The probability derived from the obtained histogram characterizes the so-called frequentist or conventional statistical approach. In this sense, probability is defined as a physical property of the observed system. By contrast, in Bayesian statistics, a probability is not a physical property or a directly observable quantity, but a degree of belief or an element of inference. The goal of this paper is to show how Bayesian statistics can be used in radionuclide metrology and what its advantages and disadvantages are compared with conventional statistics. This is performed through the example of an yttrium-90 source typically encountered in environmental surveillance measurement. Because of the very low activity of this kind of source and the small half-life of the radionuclide, this measurement takes several days, during which the source decays significantly. Several methods are proposed to compute simultaneously the number of unstable nuclei at a given reference time, the decay constant and the background. Asymptotically, all approaches give the same result. However, Bayesian statistics produces coherent estimates and confidence intervals in a much smaller number of measurements. Apart from the conceptual understanding of statistics, the main difficulty that could deter radionuclide metrologists from using Bayesian statistics is the complexity of the computation.
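
    A grid-based version of the joint estimation described here is sketched below for a Poisson counting model with a decaying source plus constant background. For brevity the decay constant is fixed at the known yttrium-90 value (the paper also infers it), the mean count in each interval is approximated by the rate at the interval's start, and all counts are simulated.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    half_life_h = 64.0
    lam = np.log(2) / half_life_h               # decay constant, held fixed here
    T = 1.0                                     # counting interval length (h)
    t = np.arange(0, 120, T)                    # interval start times over five days
    r0_true, b_true = 5.0, 1.0                  # true initial source rate and background (counts/h)
    counts = rng.poisson((b_true + r0_true * np.exp(-lam * t)) * T)

    # Flat priors on a grid for (r0, b); Poisson likelihood over all intervals.
    r0_grid = np.linspace(0.1, 12, 200)
    b_grid = np.linspace(0.1, 4, 150)
    R0, B = np.meshgrid(r0_grid, b_grid, indexing="ij")
    mu = (B[..., None] + R0[..., None] * np.exp(-lam * t)) * T
    loglik = np.sum(counts * np.log(mu) - mu, axis=-1)
    post = np.exp(loglik - loglik.max())
    post /= post.sum()

    print(f"Posterior mean source rate at t=0: {np.sum(post * R0):.2f} counts/h (true {r0_true})")
    print(f"Posterior mean background: {np.sum(post * B):.2f} counts/h (true {b_true})")
    ```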

  13. Development and comparison in uncertainty assessment based Bayesian modularization method in hydrological modeling

    NASA Astrophysics Data System (ADS)

    Li, Lu; Xu, Chong-Yu; Engeland, Kolbjørn

    2013-04-01

    With respect to model calibration, parameter estimation and analysis of uncertainty sources, various regression and probabilistic approaches are used in hydrological modeling. A family of Bayesian methods, which incorporates different sources of information into a single analysis through Bayes' theorem, is widely used for uncertainty assessment. However, none of these approaches can adequately treat the impact of high flows in hydrological modeling. This study proposes a Bayesian modularization uncertainty assessment approach in which the highest streamflow observations are treated as suspect information that should not influence the inference of the main bulk of the model parameters. This study includes a comprehensive comparison and evaluation of uncertainty assessments by our new Bayesian modularization method and standard Bayesian methods using the Metropolis-Hastings (MH) algorithm with the daily hydrological model WASMOD. Three likelihood functions were used in combination with the standard Bayesian method: the AR(1) plus Normal model independent of time (Model 1), the AR(1) plus Normal model dependent on time (Model 2) and the AR(1) plus Multi-normal model (Model 3). The results reveal that the Bayesian modularization method provides the most accurate streamflow estimates, as measured by the Nash-Sutcliffe efficiency, and the best uncertainty estimates for low, medium and entire flows, compared to standard Bayesian methods. The study thus provides a new approach for reducing the impact of high flows on the discharge uncertainty assessment of hydrological models via Bayesian methods.

  14. Bayesian correction for covariate measurement error: A frequentist evaluation and comparison with regression calibration.

    PubMed

    Bartlett, Jonathan W; Keogh, Ruth H

    2018-06-01

    Bayesian approaches for handling covariate measurement error are well established and yet arguably are still relatively little used by researchers. For some this is likely due to unfamiliarity or disagreement with the Bayesian inferential paradigm. For others a contributory factor is the inability of standard statistical packages to perform such Bayesian analyses. In this paper, we first give an overview of the Bayesian approach to handling covariate measurement error, and contrast it with regression calibration, arguably the most commonly adopted approach. We then argue why the Bayesian approach has a number of statistical advantages compared to regression calibration and demonstrate that implementing the Bayesian approach is usually quite feasible for the analyst. Next, we describe the closely related maximum likelihood and multiple imputation approaches and explain why we believe the Bayesian approach to generally be preferable. We then empirically compare the frequentist properties of regression calibration and the Bayesian approach through simulation studies. The flexibility of the Bayesian approach to handle both measurement error and missing data is then illustrated through an analysis of data from the Third National Health and Nutrition Examination Survey.

  15. Enhancing the Modeling of PFOA Pharmacokinetics with Bayesian Analysis

    EPA Science Inventory

    The detail sufficient to describe the pharmacokinetics (PK) for perfluorooctanoic acid (PFOA) and the methods necessary to combine information from multiple data sets are both subjects of ongoing investigation. Bayesian analysis provides tools to accommodate these goals. We exa...

  16. A Bayesian statistical analysis of mouse dermal tumor promotion assay data for evaluating cigarette smoke condensate.

    PubMed

    Kathman, Steven J; Potts, Ryan J; Ayres, Paul H; Harp, Paul R; Wilson, Cody L; Garner, Charles D

    2010-10-01

    The mouse dermal assay has long been used to assess the dermal tumorigenicity of cigarette smoke condensate (CSC). This mouse skin model has been developed for use in carcinogenicity testing utilizing the SENCAR mouse as the standard strain. Though the model has limitations, it remains the most relevant method available to study the dermal tumor-promoting potential of mainstream cigarette smoke. In the typical SENCAR mouse CSC bioassay, CSC is applied for 29 weeks following the application of a tumor initiator such as 7,12-dimethylbenz[a]anthracene (DMBA). Several endpoints are considered for analysis, including the percentage of animals with at least one mass, latency, and number of masses per animal. In this paper, a relatively straightforward analytic model and procedure are presented for analyzing the time course of the incidence of masses. The procedure considered here takes advantage of Bayesian statistical techniques, which provide powerful methods for model fitting and simulation. Two datasets are analyzed to illustrate how the model fits the data, how well the model may perform in predicting data from such trials, and how the model may be used as a decision tool when comparing the dermal tumorigenicity of cigarette smoke condensate from multiple cigarette types. The analysis presented here was developed as a statistical decision tool for differentiating between two or more prototype products based on dermal tumorigenicity. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  17. Geographic analysis of shigellosis in Vietnam.

    PubMed

    Kim, Deok Ryun; Ali, Mohammad; Thiem, Vu Dinh; Park, Jin-Kyung; von Seidlein, Lorenz; Clemens, John

    2008-12-01

    Geographic and ecological analysis may provide investigators with useful ecological information for the control of shigellosis. This paper describes the spatial distribution of individual Shigella species and ecological covariates for shigellosis in Nha Trang, Vietnam. Data on shigellosis in neighborhoods were used to identify ecological covariates. A Bayesian hierarchical model was used to obtain the joint posterior distribution of model parameters and to construct smoothed risk maps for shigellosis. Neighborhoods with a high proportion of worshippers of traditional religion, close proximity to the hospital, or close proximity to the river had increased risk for shigellosis. The ecological covariates associated with Shigella flexneri differed from the covariates for Shigella sonnei. In contrast, the spatial distribution of the two species was similar. The disease maps can help identify high-risk areas of shigellosis that can be targeted for interventions. This approach may be useful for the selection of populations and the analysis of vaccine trials.

  18. Bayesian statistics: estimating plant demographic parameters

    Treesearch

    James S. Clark; Michael Lavine

    2001-01-01

    There are times when external information should be brought to bear on an ecological analysis. Experiments are never conducted in a knowledge-free context. The inference we draw from an observation may depend on everything else we know about the process. Bayesian analysis is a method that brings outside evidence into the analysis of experimental and observational data...

  19. Spatial Dependence and Heterogeneity in Bayesian Factor Analysis: A Cross-National Investigation of Schwartz Values

    ERIC Educational Resources Information Center

    Stakhovych, Stanislav; Bijmolt, Tammo H. A.; Wedel, Michel

    2012-01-01

    In this article, we present a Bayesian spatial factor analysis model. We extend previous work on confirmatory factor analysis by including geographically distributed latent variables and accounting for heterogeneity and spatial autocorrelation. The simulation study shows excellent recovery of the model parameters and demonstrates the consequences…

  20. Bayesian Structural Equation Modeling: A More Flexible Representation of Substantive Theory

    ERIC Educational Resources Information Center

    Muthen, Bengt; Asparouhov, Tihomir

    2012-01-01

    This article proposes a new approach to factor analysis and structural equation modeling using Bayesian analysis. The new approach replaces parameter specifications of exact zeros with approximate zeros based on informative, small-variance priors. It is argued that this produces an analysis that better reflects substantive theories. The proposed…

  1. BCM: toolkit for Bayesian analysis of Computational Models using samplers.

    PubMed

    Thijssen, Bram; Dijkstra, Tjeerd M H; Heskes, Tom; Wessels, Lodewyk F A

    2016-10-21

    Computational models in biology are characterized by a large degree of uncertainty. This uncertainty can be analyzed with Bayesian statistics, however, the sampling algorithms that are frequently used for calculating Bayesian statistical estimates are computationally demanding, and each algorithm has unique advantages and disadvantages. It is typically unclear, before starting an analysis, which algorithm will perform well on a given computational model. We present BCM, a toolkit for the Bayesian analysis of Computational Models using samplers. It provides efficient, multithreaded implementations of eleven algorithms for sampling from posterior probability distributions and for calculating marginal likelihoods. BCM includes tools to simplify the process of model specification and scripts for visualizing the results. The flexible architecture allows it to be used on diverse types of biological computational models. In an example inference task using a model of the cell cycle based on ordinary differential equations, BCM is significantly more efficient than existing software packages, allowing more challenging inference problems to be solved. BCM represents an efficient one-stop-shop for computational modelers wishing to use sampler-based Bayesian statistics.

  2. Exploration of computational methods for classification of movement intention during human voluntary movement from single trial EEG.

    PubMed

    Bai, Ou; Lin, Peter; Vorbach, Sherry; Li, Jiang; Furlani, Steve; Hallett, Mark

    2007-12-01

    To explore effective combinations of computational methods for the prediction of movement intention preceding the production of self-paced right and left hand movements from single trial scalp electroencephalogram (EEG). Twelve naïve subjects performed self-paced movements consisting of three key strokes with either hand. EEG was recorded from 128 channels. The exploration was performed offline on single trial EEG data. We proposed that a successful computational procedure for classification would consist of spatial filtering, temporal filtering, feature selection, and pattern classification. A systematic investigation was performed with combinations of spatial filtering using principal component analysis (PCA), independent component analysis (ICA), common spatial patterns analysis (CSP), and surface Laplacian derivation (SLD); temporal filtering using power spectral density estimation (PSD) and discrete wavelet transform (DWT); pattern classification using linear Mahalanobis distance classifier (LMD), quadratic Mahalanobis distance classifier (QMD), Bayesian classifier (BSC), multi-layer perceptron neural network (MLP), probabilistic neural network (PNN), and support vector machine (SVM). A robust multivariate feature selection strategy using a genetic algorithm was employed. The combinations of spatial filtering using ICA and SLD, temporal filtering using PSD and DWT, and classification methods using LMD, QMD, BSC and SVM provided higher performance than those of other combinations. Utilizing one of the better combinations of ICA, PSD and SVM, the discrimination accuracy was as high as 75%. Further feature analysis showed that beta band EEG activity of the channels over right sensorimotor cortex was most appropriate for discrimination of right and left hand movement intention. Effective combinations of computational methods provide possible classification of human movement intention from single trial EEG. Such a method could be the basis for a potential brain-computer interface based on human natural movement, which might reduce the requirement of long-term training. Effective combinations of computational methods can classify human movement intention from single trial EEG with reasonable accuracy.
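
    As a concrete illustration of one of the combinations evaluated above (power spectral density features followed by an SVM), the sketch below builds a toy single-trial classification pipeline on synthetic data with scikit-learn and SciPy. The channel count, frequency band and injected class effect are assumptions for illustration, not the study's recordings or exact pipeline.

```python
# Minimal sketch of one combination from the abstract (PSD features + SVM),
# applied to synthetic "single-trial EEG" arrays. Channel counts, band limits
# and data are illustrative, not the authors' pipeline.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
fs = 256                                  # sampling rate (Hz)
n_trials, n_channels, n_samples = 120, 8, 2 * fs
X_raw = rng.standard_normal((n_trials, n_channels, n_samples))
y = rng.integers(0, 2, n_trials)          # 0 = left hand, 1 = right hand (toy labels)
# Inject a weak class-dependent beta-band (~20 Hz) component into one channel
tgrid = np.arange(n_samples) / fs
X_raw[y == 1, 3, :] += 0.5 * np.sin(2 * np.pi * 20 * tgrid)

def psd_features(trials, fs, fmin=8.0, fmax=30.0):
    """Average log band power per channel in the mu/beta range."""
    f, pxx = welch(trials, fs=fs, nperseg=fs, axis=-1)
    band = (f >= fmin) & (f <= fmax)
    return np.log(pxx[..., band].mean(axis=-1))   # shape (n_trials, n_channels)

X = psd_features(X_raw, fs)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```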

  3. Bayesian Analysis of Longitudinal Data Using Growth Curve Models

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Hamagami, Fumiaki; Wang, Lijuan Lijuan; Nesselroade, John R.; Grimm, Kevin J.

    2007-01-01

    Bayesian methods for analyzing longitudinal data in social and behavioral research are recommended for their ability to incorporate prior information in estimating simple and complex models. We first summarize the basics of Bayesian methods before presenting an empirical example in which we fit a latent basis growth curve model to achievement data…

  4. Introduction to Bayesian statistical approaches to compositional analyses of transgenic crops 1. Model validation and setting the stage.

    PubMed

    Harrison, Jay M; Breeze, Matthew L; Harrigan, George G

    2011-08-01

    Statistical comparisons of compositional data generated on genetically modified (GM) crops and their near-isogenic conventional (non-GM) counterparts typically rely on classical significance testing. This manuscript presents an introduction to Bayesian methods for compositional analysis along with recommendations for model validation. The approach is illustrated using protein and fat data from two herbicide tolerant GM soybeans (MON87708 and MON87708×MON89788) and a conventional comparator grown in the US in 2008 and 2009. Guidelines recommended by the US Food and Drug Administration (FDA) in conducting Bayesian analyses of clinical studies on medical devices were followed. This study is the first Bayesian approach to GM and non-GM compositional comparisons. The evaluation presented here supports a conclusion that a Bayesian approach to analyzing compositional data can provide meaningful and interpretable results. We further describe the importance of method validation and approaches to model checking if Bayesian approaches to compositional data analysis are to be considered viable by scientists involved in GM research and regulation. Copyright © 2011 Elsevier Inc. All rights reserved.

  5. Bayesian analysis of rare events

    NASA Astrophysics Data System (ADS)

    Straub, Daniel; Papaioannou, Iason; Betz, Wolfgang

    2016-06-01

    In many areas of engineering and science there is an interest in predicting the probability of rare events, in particular in applications related to safety and security. Increasingly, such predictions are made through computer models of physical systems in an uncertainty quantification framework. Additionally, with advances in IT, monitoring and sensor technology, an increasing amount of data on the performance of the systems is collected. This data can be used to reduce uncertainty, improve the probability estimates and consequently enhance the management of rare events and associated risks. Bayesian analysis is the ideal method to include the data into the probabilistic model. It ensures a consistent probabilistic treatment of uncertainty, which is central in the prediction of rare events, where extrapolation from the domain of observation is common. We present a framework for performing Bayesian updating of rare event probabilities, termed BUS. It is based on a reinterpretation of the classical rejection-sampling approach to Bayesian analysis, which enables the use of established methods for estimating probabilities of rare events. By drawing upon these methods, the framework makes use of their computational efficiency. These methods include the First-Order Reliability Method (FORM), tailored importance sampling (IS) methods and Subset Simulation (SuS). In this contribution, we briefly review these methods in the context of the BUS framework and investigate their applicability to Bayesian analysis of rare events in different settings. We find that, for some applications, FORM can be highly efficient and is surprisingly accurate, enabling Bayesian analysis of rare events with just a few model evaluations. In a general setting, BUS implemented through IS and SuS is more robust and flexible.
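
    The rejection-sampling reinterpretation behind BUS can be illustrated on a toy reliability problem: draw parameters from the prior, accept each draw with probability L(theta)/c for a constant c bounding the likelihood, and estimate the event probability from the accepted (posterior) samples. The sketch below uses plain Monte Carlo in place of FORM, importance sampling or Subset Simulation, and the prior, observation and failure model are invented for illustration.

```python
# Minimal sketch of the rejection-sampling view behind BUS: draw from the prior,
# accept each sample with probability L(theta)/c (c >= max L), and use the
# accepted (posterior) samples to estimate an event probability. Plain Monte
# Carlo is used here in place of FORM/IS/SuS; the toy model is illustrative.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

# Prior on a capacity parameter theta, and one noisy observation of it
prior_mu, prior_sigma = 5.0, 1.0
obs, obs_sigma = 5.5, 0.5

def likelihood(theta):
    return norm.pdf(obs, loc=theta, scale=obs_sigma)

c = norm.pdf(0.0, scale=obs_sigma)        # upper bound on the likelihood

# Rejection sampling from the posterior
n = 200_000
theta = rng.normal(prior_mu, prior_sigma, n)
u = rng.uniform(0.0, 1.0, n)
posterior = theta[u <= likelihood(theta) / c]

# Event of interest: demand D exceeds capacity theta (failure), D ~ N(2, 0.8)
demand_post = rng.normal(2.0, 0.8, posterior.size)
p_fail_posterior = np.mean(demand_post > posterior)

demand_prior = rng.normal(2.0, 0.8, n)
p_fail_prior = np.mean(demand_prior > theta)
print("accepted samples:", posterior.size)
print("prior P(failure)     ~ %.2e" % p_fail_prior)
print("posterior P(failure) ~ %.2e" % p_fail_posterior)
```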

  6. A guide to Bayesian model selection for ecologists

    USGS Publications Warehouse

    Hooten, Mevin B.; Hobbs, N.T.

    2015-01-01

    The steady upward trend in the use of model selection and Bayesian methods in ecological research has made it clear that both approaches to inference are important for modern analysis of models and data. However, in teaching Bayesian methods and in working with our research colleagues, we have noticed a general dissatisfaction with the available literature on Bayesian model selection and multimodel inference. Students and researchers new to Bayesian methods quickly find that the published advice on model selection is often preferential in its treatment of options for analysis, frequently advocating one particular method above others. The recent appearance of many articles and textbooks on Bayesian modeling has provided welcome background on relevant approaches to model selection in the Bayesian framework, but most of these are either very narrowly focused in scope or inaccessible to ecologists. Moreover, the methodological details of Bayesian model selection approaches are spread thinly throughout the literature, appearing in journals from many different fields. Our aim with this guide is to condense the large body of literature on Bayesian approaches to model selection and multimodel inference and present it specifically for quantitative ecologists as neutrally as possible. We also bring to light a few important and fundamental concepts relating directly to model selection that seem to have gone unnoticed in the ecological literature. Throughout, we provide only a minimal discussion of philosophy, preferring instead to examine the breadth of approaches as well as their practical advantages and disadvantages. This guide serves as a reference for ecologists using Bayesian methods, so that they can better understand their options and can make an informed choice that is best aligned with their goals for inference.

  7. Bayesian Optimal Interval Design: A Simple and Well-Performing Design for Phase I Oncology Trials

    PubMed Central

    Yuan, Ying; Hess, Kenneth R.; Hilsenbeck, Susan G.; Gilbert, Mark R.

    2016-01-01

    Despite more than two decades of publications that offer more innovative model-based designs, the classical 3+3 design remains the most dominant phase I trial design in practice. In this article, we introduce a new trial design, the Bayesian optimal interval (BOIN) design. The BOIN design is easy to implement in a way similar to the 3+3 design, but is more flexible for choosing the target toxicity rate and cohort size and yields a substantially better performance that is comparable to that of more complex model-based designs. The BOIN design contains the 3+3 design and the accelerated titration design as special cases, thus linking it to established phase I approaches. A numerical study shows that the BOIN design generally outperforms the 3+3 design and the modified toxicity probability interval (mTPI) design. The BOIN design is more likely than the 3+3 design to correctly select the maximum tolerated dose (MTD) and allocate more patients to the MTD. Compared to the mTPI design, the BOIN design has a substantially lower risk of overdosing patients and generally a higher probability of correctly selecting the MTD. User-friendly software is freely available to facilitate the application of the BOIN design. PMID:27407096
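
    A minimal sketch of the BOIN decision rule is given below, assuming the boundary formulas and the default interval bounds (phi1 = 0.6*phi, phi2 = 1.4*phi) commonly quoted in the BOIN literature; for a target toxicity rate of 0.30 these give the familiar escalation/de-escalation cut-offs of roughly 0.236 and 0.358. It is an illustration only, not a validated trial-conduct tool.

```python
# Minimal sketch of the BOIN dose-escalation rule, assuming the boundary formulas
# and default interval bounds (phi1 = 0.6*phi, phi2 = 1.4*phi) given in the BOIN
# literature. Treat this as illustrative, not a validated trial tool.
import numpy as np

def boin_boundaries(phi, phi1=None, phi2=None):
    """Return (lambda_e, lambda_d): escalate if the observed toxicity rate <= lambda_e,
    de-escalate if >= lambda_d, otherwise stay at the current dose."""
    phi1 = 0.6 * phi if phi1 is None else phi1
    phi2 = 1.4 * phi if phi2 is None else phi2
    lam_e = np.log((1 - phi1) / (1 - phi)) / np.log(phi * (1 - phi1) / (phi1 * (1 - phi)))
    lam_d = np.log((1 - phi) / (1 - phi2)) / np.log(phi2 * (1 - phi) / (phi * (1 - phi2)))
    return lam_e, lam_d

def boin_decision(n_tox, n_treated, phi):
    lam_e, lam_d = boin_boundaries(phi)
    p_hat = n_tox / n_treated
    if p_hat <= lam_e:
        return "escalate"
    if p_hat >= lam_d:
        return "de-escalate"
    return "stay"

lam_e, lam_d = boin_boundaries(0.30)
print("target 0.30 -> lambda_e=%.3f, lambda_d=%.3f" % (lam_e, lam_d))
print("1/6 toxicities:", boin_decision(1, 6, 0.30))
print("3/6 toxicities:", boin_decision(3, 6, 0.30))
```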

  8. BATSE gamma-ray burst line search. 2: Bayesian consistency methodology

    NASA Technical Reports Server (NTRS)

    Band, D. L.; Ford, L. A.; Matteson, J. L.; Briggs, M.; Paciesas, W.; Pendleton, G.; Preece, R.; Palmer, D.; Teegarden, B.; Schaefer, B.

    1994-01-01

    We describe a Bayesian methodology to evaluate the consistency between the reported Ginga and Burst and Transient Source Experiment (BATSE) detections of absorption features in gamma-ray burst spectra. Currently no features have been detected by BATSE, but this methodology will still be applicable if and when such features are discovered. The Bayesian methodology permits the comparison of hypotheses regarding the two detectors' observations and makes explicit the subjective aspects of our analysis (e.g., the quantification of our confidence in detector performance). We also present non-Bayesian consistency statistics. Based on preliminary calculations of line detectability, we find that both the Bayesian and non-Bayesian techniques show that the BATSE and Ginga observations are consistent given our understanding of these detectors.

  9. Fusion with Language Models Improves Spelling Accuracy for ERP-based Brain Computer Interface Spellers

    PubMed Central

    Orhan, Umut; Erdogmus, Deniz; Roark, Brian; Purwar, Shalini; Hild, Kenneth E.; Oken, Barry; Nezamfar, Hooman; Fried-Oken, Melanie

    2013-01-01

    Event related potentials (ERP) corresponding to a stimulus in electroencephalography (EEG) can be used to detect the intent of a person for brain computer interfaces (BCI). This paradigm is widely utilized to build letter-by-letter text input systems using BCI. Nevertheless, a BCI typewriter that depends only on EEG responses will generally not be sufficiently accurate for single-trial operation, and existing systems utilize many-trial schemes to achieve accuracy at the cost of speed. Hence, incorporation of a language-model-based prior or additional evidence is vital to improve accuracy and speed. In this paper, we study the effects of Bayesian fusion of an n-gram language model with a regularized discriminant analysis ERP detector for EEG-based BCIs. The letter classification accuracies are rigorously evaluated for varying language model orders as well as number of ERP-inducing trials. The results demonstrate that the language models contribute significantly to letter classification accuracy. Specifically, we find that a BCI-speller supported by a 4-gram language model may achieve the same performance using 3-trial ERP classification for the initial letters of the words and using single trial ERP classification for the subsequent ones. Overall, fusion of evidence from EEG and language models yields a significant opportunity to increase the word rate of a BCI based typing system. PMID:22255652
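
    The fusion step described above amounts to multiplying the EEG evidence for each candidate letter by a language-model prior (equivalently, adding log probabilities) and renormalising. The sketch below does this for a toy 27-symbol alphabet; the likelihoods and the n-gram prior are made-up numbers, not the paper's regularized discriminant analysis detector or language model.

```python
# Minimal sketch of Bayesian fusion for a BCI speller: the posterior over the next
# letter is proportional to (EEG evidence likelihood) x (language-model prior).
# All probabilities below are made-up toy numbers, not the paper's models.
import numpy as np

letters = np.array(list("ABCDEFGHIJKLMNOPQRSTUVWXYZ_"))

rng = np.random.default_rng(7)
# Toy "ERP detector" likelihoods P(EEG | letter) from a single trial (unnormalised)
eeg_likelihood = rng.uniform(0.2, 1.0, size=letters.size)
eeg_likelihood[letters == "E"] *= 3.0          # EEG weakly favours 'E'

# Toy n-gram prior P(letter | previous text "TH"), heavily favouring 'E'
lm_prior = np.full(letters.size, 1.0 / letters.size)
lm_prior[letters == "E"] = 0.5
lm_prior /= lm_prior.sum()

log_posterior = np.log(eeg_likelihood) + np.log(lm_prior)
posterior = np.exp(log_posterior - log_posterior.max())
posterior /= posterior.sum()

top = np.argsort(posterior)[::-1][:3]
for i in top:
    print(letters[i], round(float(posterior[i]), 3))
```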

  10. Development of a clinical decision model for thyroid nodules.

    PubMed

    Stojadinovic, Alexander; Peoples, George E; Libutti, Steven K; Henry, Leonard R; Eberhardt, John; Howard, Robin S; Gur, David; Elster, Eric A; Nissan, Aviram

    2009-08-10

    Thyroid nodules represent a common problem brought to medical attention. Four to seven percent of the United States adult population (10-18 million people) has a palpable thyroid nodule; however, the majority (>95%) of thyroid nodules are benign. While fine needle aspiration (FNA) remains the most cost-effective and accurate diagnostic tool for thyroid nodules in current practice, over 20% of patients undergoing FNA of a thyroid nodule have indeterminate cytology (follicular neoplasm), with an associated malignancy risk prevalence of 20-30%. These patients require thyroid lobectomy/isthmusectomy purely for the purpose of attaining a definitive diagnosis. Given that the majority (70-80%) of these patients have benign surgical pathology, thyroidectomy in these patients is conducted principally with diagnostic intent. Clinical models predictive of malignancy risk are needed to support treatment decisions in patients with thyroid nodules in order to reduce morbidity associated with unnecessary diagnostic surgery. Data were analyzed from a completed prospective cohort trial conducted over a 4-year period involving 216 patients with thyroid nodules undergoing ultrasound (US), electrical impedance scanning (EIS) and fine needle aspiration cytology (FNA) prior to thyroidectomy. A Bayesian model was designed to predict malignancy in thyroid nodules based on multivariate dependence relationships between independent covariates. Ten-fold cross-validation was performed to estimate classifier error wherein the data set was randomized into ten separate and unique train and test sets consisting of a training set (90% of records) and a test set (10% of records). A receiver operating characteristic (ROC) curve of these predictions and the area under the curve (AUC) were calculated to determine model robustness for predicting malignancy in thyroid nodules. Thyroid nodule size, FNA cytology, US and EIS characteristics were highly predictive of malignancy. Cross-validation of the model created with Bayesian Network Analysis effectively predicted malignancy [AUC = 0.88 (95%CI: 0.82-0.94)] in thyroid nodules. The positive and negative predictive values of the model are 83% (95%CI: 76%-91%) and 79% (95%CI: 72%-86%), respectively. An integrated predictive decision model using Bayesian inference incorporating readily obtainable thyroid nodule measures is clinically relevant, as it effectively predicts malignancy in thyroid nodules. This model warrants further validation testing in prospective clinical trials.
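
    The evaluation loop described above (ten-fold cross-validation with a ROC curve and AUC) can be sketched as follows, with Gaussian naive Bayes standing in for the paper's Bayesian network and a synthetic 216-record dataset standing in for the clinical covariates (size, cytology, US, EIS); the AUC obtained here says nothing about the published model.

```python
# Minimal sketch of the evaluation step (10-fold cross-validation and ROC AUC),
# with Gaussian naive Bayes standing in for the paper's Bayesian network and a
# synthetic dataset standing in for the clinical covariates.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict, StratifiedKFold
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=216, n_features=4, n_informative=3,
                           n_redundant=0, weights=[0.7, 0.3], random_state=0)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
proba = cross_val_predict(GaussianNB(), X, y, cv=cv, method="predict_proba")[:, 1]
print("cross-validated AUC: %.2f" % roc_auc_score(y, proba))
```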

  11. Choosing the Optimal Number of B-spline Control Points (Part 1: Methodology and Approximation of Curves)

    NASA Astrophysics Data System (ADS)

    Harmening, Corinna; Neuner, Hans

    2016-09-01

    Due to the establishment of the terrestrial laser scanner, analysis strategies in engineering geodesy are changing from pointwise to areal approaches. These areal analysis strategies are commonly built on the modelling of the acquired point clouds. Freeform curves and surfaces like B-spline curves/surfaces are one possible approach to obtain space-continuous information. A variety of parameters determines the B-spline's appearance; the B-spline's complexity is mostly determined by the number of control points. Usually, this number of control points is chosen quite arbitrarily by intuitive trial-and-error procedures. In this paper, the Akaike Information Criterion and the Bayesian Information Criterion are investigated with regard to a justified and reproducible choice of the optimal number of control points of B-spline curves. Additionally, we develop a method which is based on the structural risk minimization of statistical learning theory. Unlike the Akaike and Bayesian Information Criteria, this method does not use the number of parameters as the complexity measure of the approximating functions but rather their Vapnik-Chervonenkis dimension. Furthermore, it is also valid for non-linear models. Thus, the three methods differ in their target function to be minimized and consequently in their definition of optimality. The present paper will be continued by a second paper dealing with the choice of the optimal number of control points of B-spline surfaces.
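
    For the information-criterion part of the comparison, the sketch below scores cubic B-spline fits with different numbers of interior knots (and hence control points) by AIC and BIC under a Gaussian error assumption, using SciPy's least-squares spline. The VC-dimension-based criterion developed in the paper is not reproduced, and the data are synthetic.

```python
# Minimal sketch: choose the number of B-spline control points by AIC/BIC.
# A cubic LSQUnivariateSpline with m interior knots has m + 4 control points;
# AIC/BIC assume Gaussian errors. The VC-dimension-based criterion from the
# paper is not reproduced here. Data are synthetic.
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 400)
y = np.sin(2 * np.pi * x) + 0.3 * np.sin(9 * np.pi * x) + rng.normal(0, 0.1, x.size)

def information_criteria(n_interior_knots, degree=3):
    knots = np.linspace(0, 1, n_interior_knots + 2)[1:-1]   # interior knots only
    spline = LSQUnivariateSpline(x, y, knots, k=degree)
    rss = float(np.sum((y - spline(x)) ** 2))
    n = x.size
    n_params = n_interior_knots + degree + 1                # number of control points
    aic = n * np.log(rss / n) + 2 * n_params
    bic = n * np.log(rss / n) + np.log(n) * n_params
    return aic, bic

for m in (2, 5, 10, 20, 40):
    aic, bic = information_criteria(m)
    print("interior knots=%2d  AIC=%8.1f  BIC=%8.1f" % (m, aic, bic))
```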

  12. A Bayesian model averaging approach with non-informative priors for cost-effectiveness analyses.

    PubMed

    Conigliani, Caterina

    2010-07-20

    We consider the problem of assessing new and existing technologies for their cost-effectiveness in the case where data on both costs and effects are available from a clinical trial, and we address it by means of the cost-effectiveness acceptability curve. The main difficulty in these analyses is that cost data usually exhibit highly skewed and heavy-tailed distributions, so that it can be extremely difficult to produce realistic probabilistic models for the underlying population distribution. Here, in order to integrate the uncertainty about the model into the analysis of cost data and into cost-effectiveness analyses, we consider an approach based on Bayesian model averaging (BMA) in the particular case of weak prior information about the unknown parameters of the different models involved in the procedure. The main consequence of this assumption is that the marginal densities required by BMA are undetermined. However, in accordance with the theory of partial Bayes factors and in particular of fractional Bayes factors, we suggest replacing each marginal density with a ratio of integrals that can be efficiently computed via path sampling. Copyright (c) 2010 John Wiley & Sons, Ltd.
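
    Once posterior draws of the incremental cost and incremental effect are available (from whatever model or model average), the cost-effectiveness acceptability curve is simply the posterior probability of a positive incremental net benefit at each willingness-to-pay value. The sketch below shows that final step on synthetic draws; the BMA and fractional Bayes factor machinery of the paper is not reproduced.

```python
# Minimal sketch of a cost-effectiveness acceptability curve (CEAC): for each
# willingness-to-pay value k, the CEAC is the posterior probability that the
# incremental net benefit k*dE - dC is positive. Posterior draws are synthetic;
# the model-averaging (BMA / fractional Bayes factor) step is not reproduced.
import numpy as np

rng = np.random.default_rng(11)
n_draws = 20_000
delta_effect = rng.normal(0.05, 0.04, n_draws)      # incremental effect (toy)
delta_cost = rng.normal(300.0, 400.0, n_draws)      # incremental cost (toy; skewness ignored)

for k in (0, 5_000, 10_000, 20_000, 50_000):
    ceac = np.mean(k * delta_effect - delta_cost > 0)
    print("willingness to pay %6d: P(cost-effective) = %.2f" % (k, ceac))
```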

  13. Why Current Statistics of Complementary Alternative Medicine Clinical Trials is Invalid.

    PubMed

    Pandolfi, Maurizio; Carreras, Giulia

    2018-06-07

    It is not sufficiently known that frequentist statistics cannot provide direct information on the probability that the research hypothesis tested is correct. The error resulting from this misunderstanding is compounded when the hypotheses under scrutiny have precarious scientific bases, as those of complementary alternative medicine (CAM) generally do. In such cases, it is mandatory to use inferential methods that take the prior probability of the hypothesis into account, such as Bayesian statistics. The authors show that, under such circumstances, no real statistical significance can be achieved in CAM clinical trials. In this respect, CAM trials involving human material are also hardly defensible from an ethical viewpoint.
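
    The underlying point is a Bayes-theorem calculation: the probability that a hypothesis is true after a nominally significant result depends strongly on its prior probability. The sketch below works through that arithmetic for a conventional alpha and power; the numbers are illustrative and the calculation is generic, not taken from the paper.

```python
# Minimal sketch of the Bayes-theorem point made in the abstract: with a low prior
# probability that the hypothesis is true, a "significant" result carries little
# evidential weight. Numbers are illustrative; this is not the authors' calculation.
def posterior_prob_true(prior, alpha=0.05, power=0.80):
    """P(hypothesis true | significant result) via Bayes' theorem."""
    p_sig = prior * power + (1 - prior) * alpha
    return prior * power / p_sig

for prior in (0.50, 0.10, 0.01, 0.001):
    print("prior %.3f -> posterior given p<0.05: %.3f" % (prior, posterior_prob_true(prior)))
```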

  14. How to interpret the results of medical time series data analysis: Classical statistical approaches versus dynamic Bayesian network modeling.

    PubMed

    Onisko, Agnieszka; Druzdzel, Marek J; Austin, R Marshall

    2016-01-01

    Classical statistics is a well-established approach in the analysis of medical data. While the medical community seems to be familiar with the concept of a statistical analysis and its interpretation, the Bayesian approach, argued by many of its proponents to be superior to the classical frequentist approach, is still not well-recognized in the analysis of medical data. The goal of this study is to encourage data analysts to use the Bayesian approach, such as modeling with graphical probabilistic networks, as an insightful alternative to classical statistical analysis of medical data. This paper offers a comparison of two approaches to analysis of medical time series data: (1) the classical statistical approach, such as the Kaplan-Meier estimator and the Cox proportional hazards regression model, and (2) dynamic Bayesian network modeling. Our comparison is based on time series cervical cancer screening data collected at Magee-Womens Hospital, University of Pittsburgh Medical Center over 10 years. The main outcomes of our comparison are cervical cancer risk assessments produced by the three approaches. However, our analysis also discusses several aspects of the comparison, such as modeling assumptions, model building, dealing with incomplete data, individualized risk assessment, results interpretation, and model validation. Our study shows that the Bayesian approach (1) is much more flexible in terms of modeling effort and (2) offers an individualized risk assessment, which is more cumbersome to obtain with classical statistical approaches.

  15. Spatiotemporal Bayesian analysis of Lyme disease in New York state, 1990-2000.

    PubMed

    Chen, Haiyan; Stratton, Howard H; Caraco, Thomas B; White, Dennis J

    2006-07-01

    Mapping ordinarily increases our understanding of nontrivial spatial and temporal heterogeneities in disease rates. However, the large number of parameters required by the corresponding statistical models often complicates detailed analysis. This study investigates the feasibility of a fully Bayesian hierarchical regression approach to the problem and identifies how it outperforms two more popular methods: crude rate estimates (CRE) and empirical Bayes standardization (EBS). In particular, we apply a fully Bayesian approach to the spatiotemporal analysis of Lyme disease incidence in New York state for the period 1990-2000. These results are compared with those obtained by CRE and EBS in Chen et al. (2005). We show that the fully Bayesian regression model not only gives more reliable estimates of disease rates than the other two approaches but also allows for tractable models that can accommodate more numerous sources of variation and unknown parameters.
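
    The contrast between crude rate estimates and Bayesian smoothing can be illustrated with the simplest conjugate model: a gamma prior on the area-level rate combined with Poisson counts, which shrinks unstable small-area rates toward the prior mean. The sketch below uses toy counts and a prior centred on the pooled rate; the paper's fully Bayesian hierarchical spatiotemporal model (with spatial random effects) is not reproduced.

```python
# Minimal sketch of Bayesian smoothing of area-level rates with a conjugate
# gamma-Poisson model: posterior rate ~ Gamma(a + cases, b + person-years).
# Small areas are shrunk toward the prior mean a/b. The paper's hierarchical
# spatiotemporal model is not reproduced. Data are toy.
import numpy as np

cases        = np.array([0,   2,   5,    40,    1])
person_years = np.array([500, 800, 900, 6000, 15000], dtype=float)

# Prior centred on the pooled rate, weakly informative (illustrative choice)
pooled_rate = cases.sum() / person_years.sum()
a, b = 1.0, 1.0 / pooled_rate            # prior mean a/b equals the pooled rate

crude = cases / person_years
post_mean = (a + cases) / (b + person_years)
for i, (c, p) in enumerate(zip(crude, post_mean)):
    print("area %d: crude %.5f  posterior mean %.5f" % (i, c, p))
```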

  16. Bayesian Inference for Functional Dynamics Exploring in fMRI Data.

    PubMed

    Guo, Xuan; Liu, Bing; Chen, Le; Chen, Guantao; Pan, Yi; Zhang, Jing

    2016-01-01

    This paper aims to review state-of-the-art Bayesian-inference-based methods applied to functional magnetic resonance imaging (fMRI) data. Particularly, we focus on one specific long-standing challenge in the computational modeling of fMRI datasets: how to effectively explore typical functional interactions from fMRI time series and the corresponding boundaries of temporal segments. Bayesian inference is a method of statistical inference which has been shown to be a powerful tool to encode dependence relationships among the variables with uncertainty. Here we provide an introduction to a group of Bayesian-inference-based methods for fMRI data analysis, which were designed to detect magnitude or functional connectivity change points and to infer their functional interaction patterns based on corresponding temporal boundaries. We also provide a comparison of three popular Bayesian models, that is, Bayesian Magnitude Change Point Model (BMCPM), Bayesian Connectivity Change Point Model (BCCPM), and Dynamic Bayesian Variable Partition Model (DBVPM), and give a summary of their applications. We envision that more delicate Bayesian inference models will be emerging and play increasingly important roles in modeling brain functions in the years to come.

  17. Bayesian Factor Analysis When Only a Sample Covariance Matrix Is Available

    ERIC Educational Resources Information Center

    Hayashi, Kentaro; Arav, Marina

    2006-01-01

    In traditional factor analysis, the variance-covariance matrix or the correlation matrix has often been a form of inputting data. In contrast, in Bayesian factor analysis, the entire data set is typically required to compute the posterior estimates, such as Bayes factor loadings and Bayes unique variances. We propose a simple method for computing…

  18. Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons

    PubMed Central

    Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang

    2011-01-01

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons. PMID:22219717

  19. Organism-level models: When mechanisms and statistics fail us

    NASA Astrophysics Data System (ADS)

    Phillips, M. H.; Meyer, J.; Smith, W. P.; Rockhill, J. K.

    2014-03-01

    Purpose: To describe the unique characteristics of models that represent the entire course of radiation therapy at the organism level and to highlight the uses to which such models can be put. Methods: At the level of an organism, traditional model-building runs into severe difficulties. We do not have sufficient knowledge to devise a complete biochemistry-based model. Statistical model-building fails due to the vast number of variables and the inability to control many of them in any meaningful way. Finally, building surrogate models, such as animal-based models, can result in excluding some of the most critical variables. Bayesian probabilistic models (Bayesian networks) provide a useful alternative that has the advantages of being mathematically rigorous, incorporating the knowledge that we do have, and being practical. Results: Bayesian networks representing radiation therapy pathways for prostate cancer and head & neck cancer were used to highlight the important aspects of such models and some techniques of model-building. A more specific model representing the treatment of occult lymph nodes in head & neck cancer was provided as an example of how such a model can inform clinical decisions. A model of the possible role of PET imaging in brain cancer was used to illustrate the means by which clinical trials can be modelled in order to arrive at a trial design that will have meaningful outcomes. Conclusions: Probabilistic models are currently the most useful approach to representing the entire therapy outcome process.

  20. Cortical Coupling Reflects Bayesian Belief Updating in the Deployment of Spatial Attention.

    PubMed

    Vossel, Simone; Mathys, Christoph; Stephan, Klaas E; Friston, Karl J

    2015-08-19

    The deployment of visuospatial attention and the programming of saccades are governed by the inferred likelihood of events. In the present study, we combined computational modeling of psychophysical data with fMRI to characterize the computational and neural mechanisms underlying this flexible attentional control. Sixteen healthy human subjects performed a modified version of Posner's location-cueing paradigm in which the percentage of cue validity varied in time and the targets required saccadic responses. Trialwise estimates of the certainty (precision) of the prediction that the target would appear at the cued location were derived from a hierarchical Bayesian model fitted to individual trialwise saccadic response speeds. Trial-specific model parameters then entered analyses of fMRI data as parametric regressors. Moreover, dynamic causal modeling (DCM) was performed to identify the most likely functional architecture of the attentional reorienting network and its modulation by (Bayes-optimal) precision-dependent attention. While the frontal eye fields (FEFs), intraparietal sulcus, and temporoparietal junction (TPJ) of both hemispheres showed higher activity on invalid relative to valid trials, reorienting responses in right FEF, TPJ, and the putamen were significantly modulated by precision-dependent attention. Our DCM results suggested that the precision of predictability underlies the attentional modulation of the coupling of TPJ with FEF and the putamen. Our results shed new light on the computational architecture and neuronal network dynamics underlying the context-sensitive deployment of visuospatial attention. Spatial attention and its neural correlates in the human brain have been studied extensively with the help of fMRI and cueing paradigms in which the location of targets is pre-cued on a trial-by-trial basis. One aspect that has so far been neglected concerns the question of how the brain forms attentional expectancies when no a priori probability information is available but needs to be inferred from observations. This study elucidates the computational and neural mechanisms under which probabilistic inference governs attentional deployment. Our results show that Bayesian belief updating explains changes in cortical connectivity; in that directional influences from the temporoparietal junction on the frontal eye fields and the putamen were modulated by (Bayes-optimal) updates. Copyright © 2015 Vossel et al.

  1. Bayesian dose selection design for a binary outcome using restricted response adaptive randomization.

    PubMed

    Meinzer, Caitlyn; Martin, Renee; Suarez, Jose I

    2017-09-08

    In phase II trials, the most efficacious dose is usually not known. Moreover, given limited resources, it is difficult to robustly identify a dose while also testing for a signal of efficacy that would support a phase III trial. Recent designs have sought to be more efficient by exploring multiple doses through the use of adaptive strategies. However, the added flexibility may potentially increase the risk of making incorrect assumptions and reduce the total amount of information available across the dose range as a function of imbalanced sample size. To balance these challenges, a novel placebo-controlled design is presented in which a restricted Bayesian response adaptive randomization (RAR) is used to allocate a majority of subjects to the optimal dose of active drug, defined as the dose with the lowest probability of poor outcome. However, the allocation between subjects who receive active drug or placebo is held constant to retain the maximum possible power for a hypothesis test of overall efficacy comparing the optimal dose to placebo. The design properties and optimization of the design are presented in the context of a phase II trial for subarachnoid hemorrhage. For a fixed total sample size, a trade-off exists between the ability to select the optimal dose and the probability of rejecting the null hypothesis. This relationship is modified by the allocation ratio between active and control subjects, the choice of RAR algorithm, and the number of subjects allocated to an initial fixed allocation period. While a responsive RAR algorithm improves the ability to select the correct dose, there is an increased risk of assigning more subjects to a worse arm as a function of ephemeral trends in the data. A subarachnoid treatment trial is used to illustrate how this design can be customized for specific objectives and available data. Bayesian adaptive designs are a flexible approach to addressing multiple questions surrounding the optimal dose for treatment efficacy within the context of limited resources. While the design is general enough to apply to many situations, future work is needed to address interim analyses and the incorporation of models for dose response.
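
    The core allocation step of such a restricted response-adaptive design can be sketched as follows: Beta-Binomial posteriors for the probability of poor outcome on each active dose, Monte Carlo estimation of the probability that each dose is best, and an active-arm allocation proportional to those probabilities while the placebo fraction is held fixed. The interim counts, prior and placebo fraction below are assumptions for illustration, not the trial's algorithm.

```python
# Minimal sketch of the restricted RAR idea described in the abstract: keep the
# placebo allocation fixed, and split the active-drug allocation across doses in
# proportion to the posterior probability (Beta-Binomial) that each dose has the
# lowest probability of poor outcome. Counts and the fixed placebo fraction are toy.
import numpy as np

rng = np.random.default_rng(5)

# Observed so far: poor outcomes / treated, per active dose (toy interim data)
poor    = np.array([6, 4, 2])
treated = np.array([20, 20, 20])
placebo_fraction = 0.25                       # held constant by design

# Beta(1,1) priors -> posterior draws of the poor-outcome probability per dose
draws = rng.beta(1 + poor, 1 + treated - poor, size=(50_000, poor.size))
p_best = np.bincount(np.argmin(draws, axis=1), minlength=poor.size) / draws.shape[0]

active_alloc = (1 - placebo_fraction) * p_best / p_best.sum()
print("P(dose is best):", np.round(p_best, 3))
print("allocation (placebo first):", np.round(np.r_[placebo_fraction, active_alloc], 3))
```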

  2. A menu-driven software package of Bayesian nonparametric (and parametric) mixed models for regression analysis and density estimation.

    PubMed

    Karabatsos, George

    2017-02-01

    Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected functionals and values of covariates. The software is illustrated through the BNP regression analysis of real data.

  3. Fully Bayesian inference for structural MRI: application to segmentation and statistical analysis of T2-hypointensities.

    PubMed

    Schmidt, Paul; Schmid, Volker J; Gaser, Christian; Buck, Dorothea; Bührlen, Susanne; Förschler, Annette; Mühlau, Mark

    2013-01-01

    Aiming at iron-related T2-hypointensity, which is related to normal aging and neurodegenerative processes, we here present two practicable approaches, based on Bayesian inference, for preprocessing and statistical analysis of a complex set of structural MRI data. In particular, Markov Chain Monte Carlo methods were used to simulate posterior distributions. First, we rendered a segmentation algorithm that uses outlier detection based on model checking techniques within a Bayesian mixture model. Second, we rendered an analytical tool comprising a Bayesian regression model with smoothness priors (in the form of Gaussian Markov random fields) mitigating the necessity to smooth data prior to statistical analysis. For validation, we used simulated data and MRI data of 27 healthy controls (age: [Formula: see text]; range, [Formula: see text]). We first observed robust segmentation of both simulated T2-hypointensities and gray-matter regions known to be T2-hypointense. Second, simulated data and images of segmented T2-hypointensity were analyzed. We found not only robust identification of simulated effects but also a biologically plausible age-related increase of T2-hypointensity primarily within the dentate nucleus but also within the globus pallidus, substantia nigra, and red nucleus. Our results indicate that fully Bayesian inference can successfully be applied for preprocessing and statistical analysis of structural MRI data.

  4. Bayesian Exploratory Factor Analysis

    PubMed Central

    Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.; Piatek, Rémi

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates from a high dimensional set of psychological measurements. PMID:25431517

  5. A Bayesian cost-effectiveness analysis of a telemedicine-based strategy for the management of sleep apnoea: a multicentre randomised controlled trial.

    PubMed

    Isetta, Valentina; Negrín, Miguel A; Monasterio, Carmen; Masa, Juan F; Feu, Nuria; Álvarez, Ainhoa; Campos-Rodriguez, Francisco; Ruiz, Concepción; Abad, Jorge; Vázquez-Polo, Francisco J; Farré, Ramon; Galdeano, Marina; Lloberes, Patricia; Embid, Cristina; de la Peña, Mónica; Puertas, Javier; Dalmases, Mireia; Salord, Neus; Corral, Jaime; Jurado, Bernabé; León, Carmen; Egea, Carlos; Muñoz, Aida; Parra, Olga; Cambrodi, Roser; Martel-Escobar, María; Arqué, Meritxell; Montserrat, Josep M

    2015-11-01

    Compliance with continuous positive airway pressure (CPAP) therapy is essential in patients with obstructive sleep apnoea (OSA), but adequate control is not always possible. This is clinically important because CPAP can reverse the morbidity and mortality associated with OSA. Telemedicine, with support provided via a web platform and video conferences, could represent a cost-effective alternative to standard care management. The aim was to assess the impact of telemedicine on treatment compliance, cost-effectiveness and improvement in quality of life (QoL) compared with traditional face-to-face follow-up. A randomised controlled trial was performed to compare a telemedicine-based CPAP follow-up strategy with standard face-to-face management. Consecutive OSA patients requiring CPAP treatment, with sufficient internet skills and who agreed to participate, were enrolled. They were followed up at 1, 3 and 6 months and answered surveys about sleep, CPAP side effects and lifestyle. We compared CPAP compliance, cost-effectiveness and QoL between the beginning and the end of the study. A Bayesian cost-effectiveness analysis with non-informative priors was performed. We randomised 139 patients. At 6 months, we found similar levels of CPAP compliance, and improved daytime sleepiness, QoL, side effects and degree of satisfaction in both groups. Despite requiring more visits, the telemedicine group was more cost-effective: costs were lower and differences in effectiveness were not relevant. A telemedicine-based strategy for the follow-up of CPAP treatment in patients with OSA was as effective as standard hospital-based care in terms of CPAP compliance and symptom improvement, with comparable side effects and satisfaction rates. The telemedicine-based strategy had lower total costs due to savings on transport and less lost productivity (indirect costs). NCT01716676. Published by the BMJ Publishing Group Limited.

  6. 2D Bayesian automated tilted-ring fitting of disc galaxies in large H I galaxy surveys: 2DBAT

    NASA Astrophysics Data System (ADS)

    Oh, Se-Heon; Staveley-Smith, Lister; Spekkens, Kristine; Kamphuis, Peter; Koribalski, Bärbel S.

    2018-01-01

    We present a novel algorithm based on a Bayesian method for 2D tilted-ring analysis of disc galaxy velocity fields. Compared to the conventional algorithms based on a chi-squared minimization procedure, this new Bayesian-based algorithm suffers less from local minima of the model parameters even with highly multimodal posterior distributions. Moreover, the Bayesian analysis, implemented via Markov Chain Monte Carlo sampling, only requires broad ranges of posterior distributions of the parameters, which makes the fitting procedure fully automated. This feature will be essential when performing kinematic analysis on the large number of resolved galaxies expected to be detected in neutral hydrogen (H I) surveys with the Square Kilometre Array and its pathfinders. The so-called 2D Bayesian Automated Tilted-ring fitter (2DBAT) implements Bayesian fits of 2D tilted-ring models in order to derive rotation curves of galaxies. We explore 2DBAT performance on (a) artificial H I data cubes built based on representative rotation curves of intermediate-mass and massive spiral galaxies, and (b) Australia Telescope Compact Array H I data from the Local Volume H I Survey. We find that 2DBAT works best for well-resolved galaxies with intermediate inclinations (20° < i < 70°), complementing 3D techniques better suited to modelling inclined galaxies.

  7. Dietary supplements and risk of cause-specific death, cardiovascular disease, and cancer: a protocol for a systematic review and network meta-analysis of primary prevention trials.

    PubMed

    Schwingshackl, Lukas; Hoffmann, Georg; Buijsse, Brian; Mittag, Tamara; Stelmach-Mardas, Marta; Boeing, Heiner; Gottschald, Marion; Dietrich, Stefan; Arregui, Maria; Dias, Sofia

    2015-03-26

    In the Western world, dietary supplements are commonly used to prevent chronic diseases, mainly cardiovascular disease and cancer. However, there is inconsistent evidence on which dietary supplements actually lower risk of chronic disease, and some may even increase risk. We aim to evaluate the comparative safety and/or effectiveness of dietary supplements for the prevention of mortality (all-cause, cardiovascular, and cancer) and cardiovascular and cancer incidence in primary prevention trials. We will search PubMed, EMBASE, Cochrane Database of Systematic Reviews, the Database of Abstracts of Reviews of Effects, the Cochrane Central Register of Controlled Trials, ClinicalTrials.gov, and the World Health Organization International Trial Registry Platform. Randomized controlled trials will be included if they meet the following criteria: (1) minimum intervention period of 12 months; (2) primary prevention of chronic disease (i.e., concerned with preventing the onset of diseases and conditions); (3) minimum mean age ≥18 years (maximum mean age 70 years); (4) intervention(s) include vitamins (beta-carotene, vitamin A, B vitamins, Vitamin C, Vitamin D, Vitamin E, and multivitamin supplements); fatty acids (omega-3 fatty acids, omega-6 fatty acids, monounsaturated fat); minerals (magnesium, calcium, selenium, potassium, iron, zinc, copper, iodine; multiminerals); supplements containing combinations of both vitamins and minerals; protein (amino acids); fiber; prebiotics; probiotics; synbiotics; (5) supplements are orally administered as liquids, pills, capsules, tablets, drops, ampoules, or powder; (6) report results on all-cause mortality (primary outcome) and/or mortality from cardiovascular disease or cancer, cardiovascular and/or cancer incidence (secondary outcomes). Pooled effects across studies will be calculated using Bayesian random effects network meta-analysis. Sensitivity analysis will be performed for trials lasting ≥5 years, trials with low risk of bias, trials in elderly people (≥65 years), ethnicity, geographical region, and trials in men and women. The results of the corresponding fixed effects models will also be compared in sensitivity analyses. This is a presentation of the study protocol only. Results and conclusions are pending completion of this study. Our systematic review will be of great value to consumers of supplements, healthcare providers, and policy-makers, regarding the use of dietary supplements. CRD42014014801.

  8. The Validity of Left Ventricular Mass as a Surrogate End Point for All-Cause and Cardiovascular Mortality Outcomes in People With CKD: A Systematic Review and Meta-analysis.

    PubMed

    Badve, Sunil V; Palmer, Suetonia C; Strippoli, Giovanni F M; Roberts, Matthew A; Teixeira-Pinto, Armando; Boudville, Neil; Cass, Alan; Hawley, Carmel M; Hiremath, Swapnil S; Pascoe, Elaine M; Perkovic, Vlado; Whalley, Gillian A; Craig, Jonathan C; Johnson, David W

    2016-10-01

    Left ventricular mass (LVM) is a widely used surrogate end point in randomized trials involving people with chronic kidney disease (CKD) because treatment-induced LVM reductions are assumed to lower cardiovascular risk. The aim of this study was to assess the validity of LVM as a surrogate end point for all-cause and cardiovascular mortality in CKD. Systematic review and meta-analysis. Participants with any stage of CKD. Randomized controlled trials with 3 or more months' follow-up that reported LVM data. Any pharmacologic or nonpharmacologic intervention. The surrogate outcome of interest was LVM change from baseline to last measurement, and clinical outcomes of interest were all-cause and cardiovascular mortality. Standardized mean differences (SMDs) of LVM change and relative risk for mortality were estimated using pairwise random-effects meta-analysis. Correlations between surrogate and clinical outcomes were summarized across all interventions combined using bivariate random-effects Bayesian models, and 95% credible intervals were computed. 73 trials (6,732 participants) covering 25 intervention classes were included in the meta-analysis. Overall, risk of bias was uncertain or high. Only 3 interventions reduced LVM: erythropoiesis-stimulating agents (9 trials; SMD, -0.13; 95% CI, -0.23 to -0.03), renin-angiotensin-aldosterone system inhibitors (13 trials; SMD, -0.28; 95% CI, -0.45 to -0.12), and isosorbide mononitrate (2 trials; SMD, -0.43; 95% CI, -0.72 to -0.14). All interventions had uncertain effects on all-cause and cardiovascular mortality. There were weak and imprecise associations between the effects of interventions on LVM change and all-cause (32 trials; 5,044 participants; correlation coefficient, 0.28; 95% credible interval, -0.13 to 0.59) and cardiovascular mortality (13 trials; 2,327 participants; correlation coefficient, 0.30; 95% credible interval, -0.54 to 0.76). Limited long-term data, suboptimal quality of included studies. There was no clear and consistent association between intervention-induced LVM change and mortality. Evidence for LVM as a valid surrogate end point in CKD is currently lacking. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.

  9. Applying Bayesian Modeling and Receiver Operating Characteristic Methodologies for Test Utility Analysis

    ERIC Educational Resources Information Center

    Wang, Qiu; Diemer, Matthew A.; Maier, Kimberly S.

    2013-01-01

    This study integrated Bayesian hierarchical modeling and receiver operating characteristic analysis (BROCA) to evaluate how interest strength (IS) and interest differentiation (ID) predicted low–socioeconomic status (SES) youth's interest-major congruence (IMC). Using large-scale Kuder Career Search online-assessment data, this study fit three…

  10. Metrics for evaluating performance and uncertainty of Bayesian network models

    Treesearch

    Bruce G. Marcot

    2012-01-01

    This paper presents a selected set of existing and new metrics for gauging Bayesian network model performance and uncertainty. Selected existing and new metrics are discussed for conducting model sensitivity analysis (variance reduction, entropy reduction, case file simulation); evaluating scenarios (influence analysis); depicting model complexity (numbers of model...

  11. Monte Carlo Algorithms for a Bayesian Analysis of the Cosmic Microwave Background

    NASA Technical Reports Server (NTRS)

    Jewell, Jeffrey B.; Eriksen, H. K.; ODwyer, I. J.; Wandelt, B. D.; Gorski, K.; Knox, L.; Chu, M.

    2006-01-01

    A viewgraph presentation is given reviewing the Bayesian approach to Cosmic Microwave Background (CMB) analysis, its numerical implementation with Gibbs sampling, a summary of its application to WMAP I, and work in progress on generalizations to polarization, foregrounds, asymmetric beams, and 1/f noise.

  12. Bayesian analysis of rare events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Straub, Daniel, E-mail: straub@tum.de; Papaioannou, Iason; Betz, Wolfgang

    2016-06-01

    In many areas of engineering and science there is an interest in predicting the probability of rare events, in particular in applications related to safety and security. Increasingly, such predictions are made through computer models of physical systems in an uncertainty quantification framework. Additionally, with advances in IT, monitoring and sensor technology, an increasing amount of data on the performance of the systems is collected. This data can be used to reduce uncertainty, improve the probability estimates and consequently enhance the management of rare events and associated risks. Bayesian analysis is the ideal method to include the data into the probabilistic model. It ensures a consistent probabilistic treatment of uncertainty, which is central in the prediction of rare events, where extrapolation from the domain of observation is common. We present a framework for performing Bayesian updating of rare event probabilities, termed BUS. It is based on a reinterpretation of the classical rejection-sampling approach to Bayesian analysis, which enables the use of established methods for estimating probabilities of rare events. By drawing upon these methods, the framework makes use of their computational efficiency. These methods include the First-Order Reliability Method (FORM), tailored importance sampling (IS) methods and Subset Simulation (SuS). In this contribution, we briefly review these methods in the context of the BUS framework and investigate their applicability to Bayesian analysis of rare events in different settings. We find that, for some applications, FORM can be highly efficient and is surprisingly accurate, enabling Bayesian analysis of rare events with just a few model evaluations. In a general setting, BUS implemented through IS and SuS is more robust and flexible.
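
    As a minimal illustration of the rejection-sampling view of Bayesian updating described above (not the authors' BUS implementation; the prior, measurement model, and demand distribution are invented for the example), the following Python sketch accepts prior samples in proportion to their likelihood and then re-estimates a failure probability under the posterior:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Prior on a resistance parameter: theta ~ Normal(10, 1.5) (assumed)
    theta_prior = rng.normal(10.0, 1.5, size=100_000)

    # Observed proof-load measurement y = theta + eps, eps ~ Normal(0, 0.5) (assumed)
    y_obs = 7.5
    likelihood = np.exp(-0.5 * ((y_obs - theta_prior) / 0.5) ** 2)

    # Rejection step: accept each prior sample with probability L(theta) / max L
    accept = rng.uniform(size=theta_prior.size) < likelihood / likelihood.max()
    theta_post = theta_prior[accept]

    # Rare(ish) event: failure when demand S ~ Normal(6, 1) exceeds the resistance
    def failure_prob(theta_samples, n_demand=20):
        S = rng.normal(6.0, 1.0, size=(n_demand, theta_samples.size))
        return np.mean(S > theta_samples)

    print("prior P(failure):    ", failure_prob(theta_prior))
    print("posterior P(failure):", failure_prob(theta_post))
    ```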

  13. Implementing informative priors for heterogeneity in meta-analysis using meta-regression and pseudo data.

    PubMed

    Rhodes, Kirsty M; Turner, Rebecca M; White, Ian R; Jackson, Dan; Spiegelhalter, David J; Higgins, Julian P T

    2016-12-20

    Many meta-analyses combine results from only a small number of studies, a situation in which the between-study variance is imprecisely estimated when standard methods are applied. Bayesian meta-analysis allows incorporation of external evidence on heterogeneity, providing the potential for more robust inference on the effect size of interest. We present a method for performing Bayesian meta-analysis using data augmentation, in which we represent an informative conjugate prior for between-study variance by pseudo data and use meta-regression for estimation. To assist in this, we derive predictive inverse-gamma distributions for the between-study variance expected in future meta-analyses. These may serve as priors for heterogeneity in new meta-analyses. In a simulation study, we compare approximate Bayesian methods using meta-regression and pseudo data against fully Bayesian approaches based on importance sampling techniques and Markov chain Monte Carlo (MCMC). We compare the frequentist properties of these Bayesian methods with those of the commonly used frequentist DerSimonian and Laird procedure. The method is implemented in standard statistical software and provides a less complex alternative to standard MCMC approaches. An importance sampling approach produces almost identical results to standard MCMC approaches, and results obtained through meta-regression and pseudo data are very similar. On average, data augmentation provides closer results to MCMC, if implemented using restricted maximum likelihood estimation rather than DerSimonian and Laird or maximum likelihood estimation. The methods are applied to real datasets, and an extension to network meta-analysis is described. The proposed method facilitates Bayesian meta-analysis in a way that is accessible to applied researchers. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
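
    The sketch below illustrates the underlying idea of placing an informative inverse-gamma prior on the between-study variance in a random-effects meta-analysis, using a direct grid computation rather than the pseudo-data/meta-regression implementation described in the article; the study estimates and prior parameters are assumed for illustration:

    ```python
    import numpy as np
    from scipy import stats

    y = np.array([0.30, -0.10, 0.25, 0.41, 0.05])  # study effect estimates (assumed)
    v = np.array([0.04, 0.09, 0.06, 0.12, 0.05])   # within-study variances (assumed)
    a, b = 2.0, 0.08                               # inverse-gamma(shape, scale) prior on tau^2

    tau2 = np.linspace(1e-4, 1.0, 2000)            # grid for the between-study variance

    def log_marglik(t2):
        # marginal likelihood of the data given tau^2, with a flat prior on the mean
        w = 1.0 / (v + t2)
        mu_hat = np.sum(w * y) / np.sum(w)
        return (-0.5 * np.log(np.sum(w)) - 0.5 * np.sum(np.log(v + t2))
                - 0.5 * np.sum(w * (y - mu_hat) ** 2))

    logpost = np.array([log_marglik(t2) for t2 in tau2]) + stats.invgamma(a, scale=b).logpdf(tau2)
    post = np.exp(logpost - logpost.max())
    post /= post.sum()                             # normalize over the (equally spaced) grid

    mu_hat = np.array([np.sum(y / (v + t2)) / np.sum(1 / (v + t2)) for t2 in tau2])
    print("posterior mean tau^2 :", np.sum(post * tau2))
    print("posterior mean effect:", np.sum(post * mu_hat))
    ```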

  14. Adaptive Prior Variance Calibration in the Bayesian Continual Reassessment Method

    PubMed Central

    Zhang, Jin; Braun, Thomas M.; Taylor, Jeremy M.G.

    2012-01-01

    Use of the Continual Reassessment Method (CRM) and other model-based approaches to design in Phase I clinical trials has increased due to the ability of the CRM to identify the maximum tolerated dose (MTD) better than the 3+3 method. However, the CRM can be sensitive to the variance selected for the prior distribution of the model parameter, especially when a small number of patients are enrolled. While methods have emerged to adaptively select skeletons and to calibrate the prior variance only at the beginning of a trial, there has not been any approach developed to adaptively calibrate the prior variance throughout a trial. We propose three systematic approaches to adaptively calibrate the prior variance during a trial and compare them via simulation to methods proposed to calibrate the variance at the beginning of a trial. PMID:22987660
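
    A minimal sketch of a one-parameter power-model CRM posterior is given below, showing how the choice of prior variance for the model parameter can change the recommended dose; the skeleton, accumulated data, and variances are illustrative assumptions rather than values from the article:

    ```python
    import numpy as np

    skeleton = np.array([0.05, 0.12, 0.20, 0.30, 0.45])  # prior DLT guesses per dose
    target = 0.20
    data = [(0, 3, 0), (1, 3, 0), (2, 3, 2)]             # (dose index, n treated, n DLTs) so far

    beta = np.linspace(-4, 4, 4001)                      # grid for the model parameter

    def recommend(prior_sd):
        prior = np.exp(-0.5 * (beta / prior_sd) ** 2)    # Normal(0, prior_sd^2) prior
        loglik = np.zeros_like(beta)
        for d, n, y in data:
            p = skeleton[d] ** np.exp(beta)              # power model: p_d(beta) = skeleton_d^exp(beta)
            loglik += y * np.log(p) + (n - y) * np.log(1 - p)
        post = prior * np.exp(loglik - loglik.max())
        post /= post.sum()
        # posterior mean DLT probability at each dose; recommend the dose closest to target
        p_mean = np.array([np.sum(post * skeleton[d] ** np.exp(beta)) for d in range(len(skeleton))])
        return int(np.argmin(np.abs(p_mean - target))), p_mean

    for sd in (0.5, 1.5):
        dose, p_mean = recommend(sd)
        print(f"prior sd={sd}: recommend dose index {dose}, estimated DLT rates {np.round(p_mean, 3)}")
    ```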

  15. Improved inference in Bayesian segmentation using Monte Carlo sampling: application to hippocampal subfield volumetry.

    PubMed

    Iglesias, Juan Eugenio; Sabuncu, Mert Rory; Van Leemput, Koen

    2013-10-01

    Many segmentation algorithms in medical image analysis use Bayesian modeling to augment local image appearance with prior anatomical knowledge. Such methods often contain a large number of free parameters that are first estimated and then kept fixed during the actual segmentation process. However, a faithful Bayesian analysis would marginalize over such parameters, accounting for their uncertainty by considering all possible values they may take. Here we propose to incorporate this uncertainty into Bayesian segmentation methods in order to improve the inference process. In particular, we approximate the required marginalization over model parameters using computationally efficient Markov chain Monte Carlo techniques. We illustrate the proposed approach using a recently developed Bayesian method for the segmentation of hippocampal subfields in brain MRI scans, showing a significant improvement in an Alzheimer's disease classification task. As an additional benefit, the technique also allows one to compute informative "error bars" on the volume estimates of individual structures. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Improved Inference in Bayesian Segmentation Using Monte Carlo Sampling: Application to Hippocampal Subfield Volumetry

    PubMed Central

    Iglesias, Juan Eugenio; Sabuncu, Mert Rory; Leemput, Koen Van

    2013-01-01

    Many segmentation algorithms in medical image analysis use Bayesian modeling to augment local image appearance with prior anatomical knowledge. Such methods often contain a large number of free parameters that are first estimated and then kept fixed during the actual segmentation process. However, a faithful Bayesian analysis would marginalize over such parameters, accounting for their uncertainty by considering all possible values they may take. Here we propose to incorporate this uncertainty into Bayesian segmentation methods in order to improve the inference process. In particular, we approximate the required marginalization over model parameters using computationally efficient Markov chain Monte Carlo techniques. We illustrate the proposed approach using a recently developed Bayesian method for the segmentation of hippocampal subfields in brain MRI scans, showing a significant improvement in an Alzheimer’s disease classification task. As an additional benefit, the technique also allows one to compute informative “error bars” on the volume estimates of individual structures. PMID:23773521

  17. Age estimation by assessment of pulp chamber volume: a Bayesian network for the evaluation of dental evidence.

    PubMed

    Sironi, Emanuele; Taroni, Franco; Baldinotti, Claudio; Nardi, Cosimo; Norelli, Gian-Aristide; Gallidabino, Matteo; Pinchi, Vilma

    2017-11-14

    The present study aimed to investigate the performance of a Bayesian method in the evaluation of dental age-related evidence collected by means of a geometrical approximation procedure of the pulp chamber volume. Measurement of this volume was based on three-dimensional cone beam computed tomography images. The Bayesian method was applied by means of a probabilistic graphical model, namely a Bayesian network. Performance of that method was investigated in terms of accuracy and bias of the decisional outcomes. Influence of an informed elicitation of the prior belief of chronological age was also studied by means of a sensitivity analysis. Outcomes in terms of accuracy were consistent with standard requirements for forensic adult age estimation. Findings also indicated that the Bayesian method does not show a particular tendency towards under- or overestimation of the age variable. Outcomes of the sensitivity analysis showed that results on estimation are improved with a rational elicitation of the prior probabilities of age.

  18. Logistic random effects regression models: a comparison of statistical packages for binary and ordinal outcomes

    PubMed Central

    2011-01-01

    Background Logistic random effects models are a popular tool to analyze multilevel (also called hierarchical) data with a binary or ordinal outcome. Here, we aim to compare different statistical software implementations of these models. Methods We used individual patient data from 8509 patients in 231 centers with moderate and severe Traumatic Brain Injury (TBI) enrolled in eight Randomized Controlled Trials (RCTs) and three observational studies. We fitted logistic random effects regression models with the 5-point Glasgow Outcome Scale (GOS) as outcome, both dichotomized as well as ordinal, with center and/or trial as random effects, and as covariates age, motor score, pupil reactivity or trial. We then compared the implementations of frequentist and Bayesian methods to estimate the fixed and random effects. Frequentist approaches included R (lme4), Stata (GLLAMM), SAS (GLIMMIX and NLMIXED), MLwiN ([R]IGLS) and MIXOR; Bayesian approaches included WinBUGS, MLwiN (MCMC), the R package MCMCglmm and the SAS experimental procedure MCMC. Three data sets (the full data set and two sub-datasets) were analysed using two logistic random effects models with either one random effect for the center or two random effects for center and trial. For the ordinal outcome in the full data set, a proportional odds model with a random center effect was also fitted. Results The packages gave similar parameter estimates for both the fixed and random effects and for the binary (and ordinal) models in the main study, which was based on a relatively large number of level-1 (patient-level) data units compared to the number of level-2 (hospital-level) data units. However, when based on a relatively sparse data set, i.e. when the numbers of level-1 and level-2 data units were about the same, the frequentist and Bayesian approaches showed somewhat different results. The software implementations differ considerably in flexibility, computation time, and usability. There are also differences in the availability of additional tools for model evaluation, such as diagnostic plots. The experimental SAS (version 9.2) procedure MCMC appeared to be inefficient. Conclusions On relatively large data sets, the different software implementations of logistic random effects regression models produced similar results. Thus, for a large data set there seems to be no explicit preference (provided there is no preference from a philosophical point of view) for either a frequentist or a Bayesian approach (if based on vague priors). The choice of a particular implementation may largely depend on the desired flexibility and the usability of the package. For small data sets the random effects variances are difficult to estimate. In the frequentist approaches, the MLE of this variance was often estimated as zero, with a standard error that was either zero or could not be determined, while for Bayesian methods the estimates could depend on the chosen "non-informative" prior for the variance parameter. The starting value for the variance parameter may also be critical for the convergence of the Markov chain. PMID:21605357

  19. Bayesian additive decision trees of biomarker by treatment interactions for predictive biomarker detection and subgroup identification.

    PubMed

    Zhao, Yang; Zheng, Wei; Zhuo, Daisy Y; Lu, Yuefeng; Ma, Xiwen; Liu, Hengchang; Zeng, Zhen; Laird, Glen

    2017-10-11

    Personalized medicine, or tailored therapy, has been an active and important topic in recent medical research. Many methods have been proposed in the literature for predictive biomarker detection and subgroup identification. In this article, we propose a novel decision tree-based approach applicable in randomized clinical trials. We model the prognostic effects of the biomarkers using additive regression trees and the biomarker-by-treatment effect using a single regression tree. A Bayesian approach is utilized to periodically revise the split variables and the split rules of the decision trees, which provides a better overall fit. A Gibbs sampler is implemented in the MCMC procedure, which updates the prognostic trees and the interaction tree separately. We use the posterior distribution of the interaction tree to construct the predictive scores of the biomarkers and to identify the subgroup where the treatment is superior to the control. Numerical simulations show that our proposed method performs well under various settings compared to existing methods. We also demonstrate an application of our method in a real clinical trial.

  20. Next Steps in Bayesian Structural Equation Models: Comments on, Variations of, and Extensions to Muthen and Asparouhov (2012)

    ERIC Educational Resources Information Center

    Rindskopf, David

    2012-01-01

    Muthen and Asparouhov (2012) made a strong case for the advantages of Bayesian methodology in factor analysis and structural equation models. I show additional extensions and adaptations of their methods and show how non-Bayesians can take advantage of many (though not all) of these advantages by using interval restrictions on parameters. By…

  1. A Bayesian Approach to Person Fit Analysis in Item Response Theory Models. Research Report.

    ERIC Educational Resources Information Center

    Glas, Cees A. W.; Meijer, Rob R.

    A Bayesian approach to the evaluation of person fit in item response theory (IRT) models is presented. In a posterior predictive check, the observed value on a discrepancy variable is positioned in its posterior distribution. In a Bayesian framework, a Markov Chain Monte Carlo procedure can be used to generate samples of the posterior distribution…

  2. Bayesian Latent Class Analysis Tutorial.

    PubMed

    Li, Yuelin; Lord-Bessen, Jennifer; Shiyko, Mariya; Loeb, Rebecca

    2018-01-01

    This article is a how-to guide on Bayesian computation using Gibbs sampling, demonstrated in the context of Latent Class Analysis (LCA). It is written for students in quantitative psychology or related fields who have a working knowledge of Bayes Theorem and conditional probability and have experience in writing computer programs in the statistical language R . The overall goals are to provide an accessible and self-contained tutorial, along with a practical computation tool. We begin with how Bayesian computation is typically described in academic articles. Technical difficulties are addressed by a hypothetical, worked-out example. We show how Bayesian computation can be broken down into a series of simpler calculations, which can then be assembled together to complete a computationally more complex model. The details are described much more explicitly than what is typically available in elementary introductions to Bayesian modeling so that readers are not overwhelmed by the mathematics. Moreover, the provided computer program shows how Bayesian LCA can be implemented with relative ease. The computer program is then applied in a large, real-world data set and explained line-by-line. We outline the general steps in how to extend these considerations to other methodological applications. We conclude with suggestions for further readings.
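
    The following sketch, in the spirit of the tutorial (but not its accompanying program), runs a small Gibbs sampler for a two-class latent class model with binary items and conjugate Beta updates on simulated data:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulate N subjects answering J binary items from 2 latent classes
    N, J = 500, 4
    true_pi = 0.4                                          # P(class 0)
    true_theta = np.array([[0.8, 0.7, 0.9, 0.6],           # P(item = 1 | class 0)
                           [0.2, 0.3, 0.1, 0.4]])          # P(item = 1 | class 1)
    z_true = (rng.uniform(size=N) > true_pi).astype(int)
    X = (rng.uniform(size=(N, J)) < true_theta[z_true]).astype(int)

    # Gibbs sampler with Beta(1, 1) priors on pi and on each item probability
    pi, theta = 0.5, np.full((2, J), 0.5)
    draws = []
    for it in range(2000):
        # 1. sample class memberships z | pi, theta
        log0 = np.log(pi) + (X * np.log(theta[0]) + (1 - X) * np.log(1 - theta[0])).sum(1)
        log1 = np.log(1 - pi) + (X * np.log(theta[1]) + (1 - X) * np.log(1 - theta[1])).sum(1)
        prob0 = 1.0 / (1.0 + np.exp(log1 - log0))
        z = (rng.uniform(size=N) > prob0).astype(int)
        # 2. sample pi | z and theta | z, X from their conjugate Beta full conditionals
        pi = rng.beta(1 + np.sum(z == 0), 1 + np.sum(z == 1))
        for k in (0, 1):
            Xk = X[z == k]
            theta[k] = rng.beta(1 + Xk.sum(0), 1 + (1 - Xk).sum(0))
        if it >= 500:                                      # discard burn-in
            draws.append(pi)

    # Up to label switching, this should sit near 0.4 (or 0.6 if the labels flip)
    print("posterior mean class-0 proportion:", np.mean(draws))
    ```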

  3. Perceptual learning through optimization of attentional weighting: human versus optimal Bayesian learner

    NASA Technical Reports Server (NTRS)

    Eckstein, Miguel P.; Abbey, Craig K.; Pham, Binh T.; Shimozaki, Steven S.

    2004-01-01

    Human performance in visual detection, discrimination, identification, and search tasks typically improves with practice. Psychophysical studies suggest that perceptual learning is mediated by an enhancement in the coding of the signal, and physiological studies suggest that it might be related to the plasticity in the weighting or selection of sensory units coding task relevant information (learning through attention optimization). We propose an experimental paradigm (optimal perceptual learning paradigm) to systematically study the dynamics of perceptual learning in humans by allowing comparisons to that of an optimal Bayesian algorithm and a number of suboptimal learning models. We measured improvement in human localization (eight-alternative forced-choice with feedback) performance of a target randomly sampled from four elongated Gaussian targets with different orientations and polarities and kept as a target for a block of four trials. The results suggest that the human perceptual learning can occur within a lapse of four trials (<1 min) but that human learning is slower and incomplete with respect to the optimal algorithm (23.3% reduction in human efficiency from the 1st-to-4th learning trials). The greatest improvement in human performance, occurring from the 1st-to-2nd learning trial, was also present in the optimal observer, and, thus reflects a property inherent to the visual task and not a property particular to the human perceptual learning mechanism. One notable source of human inefficiency is that, unlike the ideal observer, human learning relies more heavily on previous decisions than on the provided feedback, resulting in no human learning on trials following a previous incorrect localization decision. Finally, the proposed theory and paradigm provide a flexible framework for future studies to evaluate the optimality of human learning of other visual cues and/or sensory modalities.

  4. Multivariate meta-analysis using individual participant data

    PubMed Central

    Riley, R. D.; Price, M. J.; Jackson, D.; Wardle, M.; Gueyffier, F.; Wang, J.; Staessen, J. A.; White, I. R.

    2016-01-01

    When combining results across related studies, a multivariate meta-analysis allows the joint synthesis of correlated effect estimates from multiple outcomes. Joint synthesis can improve efficiency over separate univariate syntheses, may reduce selective outcome reporting biases, and enables joint inferences across the outcomes. A common issue is that within-study correlations needed to fit the multivariate model are unknown from published reports. However, provision of individual participant data (IPD) allows them to be calculated directly. Here, we illustrate how to use IPD to estimate within-study correlations, using a joint linear regression for multiple continuous outcomes and bootstrapping methods for binary, survival and mixed outcomes. In a meta-analysis of 10 hypertension trials, we then show how these methods enable multivariate meta-analysis to address novel clinical questions about continuous, survival and binary outcomes; treatment–covariate interactions; adjusted risk/prognostic factor effects; longitudinal data; prognostic and multiparameter models; and multiple treatment comparisons. Both frequentist and Bayesian approaches are applied, with example software code provided to derive within-study correlations and to fit the models. PMID:26099484
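
    The bootstrap idea for recovering a within-study correlation from IPD can be sketched as follows; the single simulated trial with two correlated blood-pressure outcomes is an assumption for illustration, not data from the hypertension trials:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # One trial's IPD: treatment indicator and two correlated continuous outcomes
    n = 300
    trt = rng.integers(0, 2, size=n)
    noise = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=n)
    sbp = -5.0 * trt + 10.0 * noise[:, 0]   # outcome 1: change in systolic BP
    dbp = -3.0 * trt + 6.0 * noise[:, 1]    # outcome 2: change in diastolic BP

    def effects(idx):
        t, s, d = trt[idx], sbp[idx], dbp[idx]
        return (s[t == 1].mean() - s[t == 0].mean(),
                d[t == 1].mean() - d[t == 0].mean())

    # Bootstrap the patients, recompute both effect estimates, and correlate them
    boot = np.array([effects(rng.integers(0, n, size=n)) for _ in range(2000)])
    print("within-study correlation of the two effect estimates:",
          np.corrcoef(boot[:, 0], boot[:, 1])[0, 1])
    ```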

  5. Bayesian multimodel inference for dose-response studies

    USGS Publications Warehouse

    Link, W.A.; Albers, P.H.

    2007-01-01

    Statistical inference in dose-response studies is model-based: The analyst posits a mathematical model of the relation between exposure and response, estimates parameters of the model, and reports conclusions conditional on the model. Such analyses rarely include any accounting for the uncertainties associated with model selection. The Bayesian inferential system provides a convenient framework for model selection and multimodel inference. In this paper we briefly describe the Bayesian paradigm and Bayesian multimodel inference. We then present a family of models for multinomial dose-response data and apply Bayesian multimodel inferential methods to the analysis of data on the reproductive success of American kestrels (Falco sparverius) exposed to various sublethal dietary concentrations of methylmercury.

  6. Development of a practical approach to expert elicitation for randomised controlled trials with missing health outcomes: Application to the IMPROVE trial.

    PubMed

    Mason, Alexina J; Gomes, Manuel; Grieve, Richard; Ulug, Pinar; Powell, Janet T; Carpenter, James

    2017-08-01

    The analyses of randomised controlled trials with missing data typically assume that, after conditioning on the observed data, the probability of missing data does not depend on the patient's outcome, and so the data are 'missing at random' . This assumption is usually implausible, for example, because patients in relatively poor health may be more likely to drop out. Methodological guidelines recommend that trials require sensitivity analysis, which is best informed by elicited expert opinion, to assess whether conclusions are robust to alternative assumptions about the missing data. A major barrier to implementing these methods in practice is the lack of relevant practical tools for eliciting expert opinion. We develop a new practical tool for eliciting expert opinion and demonstrate its use for randomised controlled trials with missing data. We develop and illustrate our approach for eliciting expert opinion with the IMPROVE trial (ISRCTN 48334791), an ongoing multi-centre randomised controlled trial which compares an emergency endovascular strategy versus open repair for patients with ruptured abdominal aortic aneurysm. In the IMPROVE trial at 3 months post-randomisation, 21% of surviving patients did not complete health-related quality of life questionnaires (assessed by EQ-5D-3L). We address this problem by developing a web-based tool that provides a practical approach for eliciting expert opinion about quality of life differences between patients with missing versus complete data. We show how this expert opinion can define informative priors within a fully Bayesian framework to perform sensitivity analyses that allow the missing data to depend upon unobserved patient characteristics. A total of 26 experts, of 46 asked to participate, completed the elicitation exercise. The elicited quality of life scores were lower on average for the patients with missing versus complete data, but there was considerable uncertainty in these elicited values. The missing at random analysis found that patients randomised to the emergency endovascular strategy versus open repair had higher average (95% credible interval) quality of life scores of 0.062 (-0.005 to 0.130). Our sensitivity analysis that used the elicited expert information as pooled priors found that the gain in average quality of life for the emergency endovascular strategy versus open repair was 0.076 (-0.054 to 0.198). We provide and exemplify a practical tool for eliciting the expert opinion required by recommended approaches to the sensitivity analyses of randomised controlled trials. We show how this approach allows the trial analysis to fully recognise the uncertainty that arises from making alternative, plausible assumptions about the reasons for missing data. This tool can be widely used in the design, analysis and interpretation of future trials, and to facilitate this, materials are available for download.
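
    A highly simplified sketch of how an elicited prior on the quality-of-life difference between patients with missing and complete data can drive such a sensitivity analysis is shown below; the arm summaries, missingness proportion, and elicited distribution are assumed numbers, not IMPROVE trial results:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Complete-case summaries per arm: mean EQ-5D, its SE, and the missing fraction (assumed)
    arms = {
        "endovascular": {"mean": 0.74, "se": 0.02, "p_miss": 0.21},
        "open_repair":  {"mean": 0.68, "se": 0.02, "p_miss": 0.21},
    }
    # Elicited prior: missing patients' mean QoL is delta lower than completers',
    # with delta ~ Normal(0.10, 0.08) (an assumed pooled elicitation)
    delta_mean, delta_sd = 0.10, 0.08

    def arm_mean_draws(arm, n=100_000):
        obs = rng.normal(arm["mean"], arm["se"], n)      # sampling uncertainty in completers' mean
        delta = rng.normal(delta_mean, delta_sd, n)      # elicited departure from missing-at-random
        return (1 - arm["p_miss"]) * obs + arm["p_miss"] * (obs - delta)

    diff = arm_mean_draws(arms["endovascular"]) - arm_mean_draws(arms["open_repair"])
    print("adjusted difference (mean, 95% interval):",
          round(float(diff.mean()), 3), np.round(np.percentile(diff, [2.5, 97.5]), 3))
    ```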

  7. Bayesian B-spline mapping for dynamic quantitative traits.

    PubMed

    Xing, Jun; Li, Jiahan; Yang, Runqing; Zhou, Xiaojing; Xu, Shizhong

    2012-04-01

    Owing to their ability and flexibility to describe individual gene expression at different time points, random regression (RR) analyses have become a popular procedure for the genetic analysis of dynamic traits whose phenotypes are collected over time. Specifically, when modelling the dynamic patterns of gene expressions in the RR framework, B-splines have been proved successful as an alternative to orthogonal polynomials. In the so-called Bayesian B-spline quantitative trait locus (QTL) mapping, B-splines are used to characterize the patterns of QTL effects and individual-specific time-dependent environmental errors over time, and the Bayesian shrinkage estimation method is employed to estimate model parameters. Extensive simulations demonstrate that (1) in terms of statistical power, Bayesian B-spline mapping outperforms the interval mapping based on the maximum likelihood; (2) for the simulated dataset with complicated growth curve simulated by B-splines, Legendre polynomial-based Bayesian mapping is not capable of identifying the designed QTLs accurately, even when higher-order Legendre polynomials are considered and (3) for the simulated dataset using Legendre polynomials, the Bayesian B-spline mapping can find the same QTLs as those identified by Legendre polynomial analysis. All simulation results support the necessity and flexibility of B-spline in Bayesian mapping of dynamic traits. The proposed method is also applied to a real dataset, where QTLs controlling the growth trajectory of stem diameters in Populus are located.

  8. Bayesian inference for psychology. Part II: Example applications with JASP.

    PubMed

    Wagenmakers, Eric-Jan; Love, Jonathon; Marsman, Maarten; Jamil, Tahira; Ly, Alexander; Verhagen, Josine; Selker, Ravi; Gronau, Quentin F; Dropmann, Damian; Boutin, Bruno; Meerhoff, Frans; Knight, Patrick; Raj, Akash; van Kesteren, Erik-Jan; van Doorn, Johnny; Šmíra, Martin; Epskamp, Sacha; Etz, Alexander; Matzke, Dora; de Jong, Tim; van den Bergh, Don; Sarafoglou, Alexandra; Steingroever, Helen; Derks, Koen; Rouder, Jeffrey N; Morey, Richard D

    2018-02-01

    Bayesian hypothesis testing presents an attractive alternative to p value hypothesis testing. Part I of this series outlined several advantages of Bayesian hypothesis testing, including the ability to quantify evidence and the ability to monitor and update this evidence as data come in, without the need to know the intention with which the data were collected. Despite these and other practical advantages, Bayesian hypothesis tests are still reported relatively rarely. An important impediment to the widespread adoption of Bayesian tests is arguably the lack of user-friendly software for the run-of-the-mill statistical problems that confront psychologists for the analysis of almost every experiment: the t-test, ANOVA, correlation, regression, and contingency tables. In Part II of this series we introduce JASP ( http://www.jasp-stats.org ), an open-source, cross-platform, user-friendly graphical software package that allows users to carry out Bayesian hypothesis tests for standard statistical problems. JASP is based in part on the Bayesian analyses implemented in Morey and Rouder's BayesFactor package for R. Armed with JASP, the practical advantages of Bayesian hypothesis testing are only a mouse click away.
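
    One common way to compute the default Cauchy-prior (JZS) Bayes factor for a one-sample t-test, of the kind JASP reports, is to integrate the noncentral-t likelihood of the observed t statistic over the prior on the standardized effect size; the sketch below is our own illustration of that formulation, not JASP or BayesFactor code:

    ```python
    import numpy as np
    from scipy import stats

    def bf10_one_sample(t, n, r=0.707):
        """Default Bayes factor BF10 for H1: delta ~ Cauchy(0, r) vs H0: delta = 0."""
        df = n - 1
        delta = np.linspace(-6, 6, 4001)                        # grid over the effect size
        # likelihood of the observed t under each delta: noncentral t with nc = delta*sqrt(n)
        lik = stats.nct(df, delta * np.sqrt(n)).pdf(t)
        prior = stats.cauchy(0, r).pdf(delta)
        m1 = np.sum(lik * prior) * (delta[1] - delta[0])        # marginal likelihood under H1
        m0 = stats.t(df).pdf(t)                                 # likelihood under H0 (delta = 0)
        return m1 / m0

    print(bf10_one_sample(t=2.5, n=30))   # > 1: data favour H1
    print(bf10_one_sample(t=0.3, n=30))   # < 1: data favour H0
    ```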

  9. Applying Bayesian statistics to the study of psychological trauma: A suggestion for future research.

    PubMed

    Yalch, Matthew M

    2016-03-01

    Several contemporary researchers have noted the virtues of Bayesian methods of data analysis. Although debates continue about whether conventional or Bayesian statistics is the "better" approach for researchers in general, there are reasons why Bayesian methods may be well suited to the study of psychological trauma in particular. This article describes how Bayesian statistics offers practical solutions to the problems of data non-normality, small sample size, and missing data common in research on psychological trauma. After a discussion of these problems and the effects they have on trauma research, this article explains the basic philosophical and statistical foundations of Bayesian statistics and how it provides solutions to these problems using an applied example. Results of the literature review and the accompanying example indicate the utility of Bayesian statistics in addressing problems common in trauma research. Bayesian statistics provides a set of methodological tools and a broader philosophical framework that is useful for trauma researchers. Methodological resources are also provided so that interested readers can learn more. (c) 2016 APA, all rights reserved.

  10. Bayesian Network Meta-Analysis for Unordered Categorical Outcomes with Incomplete Data

    ERIC Educational Resources Information Center

    Schmid, Christopher H.; Trikalinos, Thomas A.; Olkin, Ingram

    2014-01-01

    We develop a Bayesian multinomial network meta-analysis model for unordered (nominal) categorical outcomes that allows for partially observed data in which exact event counts may not be known for each category. This model properly accounts for correlations of counts in mutually exclusive categories and enables proper comparison and ranking of…

  11. A Comparison of Imputation Methods for Bayesian Factor Analysis Models

    ERIC Educational Resources Information Center

    Merkle, Edgar C.

    2011-01-01

    Imputation methods are popular for the handling of missing data in psychology. The methods generally consist of predicting missing data based on observed data, yielding a complete data set that is amenable to standard statistical analyses. In the context of Bayesian factor analysis, this article compares imputation under an unrestricted…

  12. Modeling Associations among Multivariate Longitudinal Categorical Variables in Survey Data: A Semiparametric Bayesian Approach

    ERIC Educational Resources Information Center

    Tchumtchoua, Sylvie; Dey, Dipak K.

    2012-01-01

    This paper proposes a semiparametric Bayesian framework for the analysis of associations among multivariate longitudinal categorical variables in high-dimensional data settings. This type of data is frequent, especially in the social and behavioral sciences. A semiparametric hierarchical factor analysis model is developed in which the…

  13. Bayesian Meta-Analysis of Cronbach's Coefficient Alpha to Evaluate Informative Hypotheses

    ERIC Educational Resources Information Center

    Okada, Kensuke

    2015-01-01

    This paper proposes a new method to evaluate informative hypotheses for meta-analysis of Cronbach's coefficient alpha using a Bayesian approach. The coefficient alpha is one of the most widely used reliability indices. In meta-analyses of reliability, researchers typically form specific informative hypotheses beforehand, such as "alpha of…

  14. Assessment of Matrix Multiplication Learning with a Rule-Based Analytical Model--"A Bayesian Network Representation"

    ERIC Educational Resources Information Center

    Zhang, Zhidong

    2016-01-01

    This study explored an alternative assessment procedure to examine learning trajectories of matrix multiplication. It took rule-based analytical and cognitive task analysis methods specifically to break down operation rules for a given matrix multiplication. Based on the analysis results, a hierarchical Bayesian network, an assessment model,…

  15. Using Discrete Loss Functions and Weighted Kappa for Classification: An Illustration Based on Bayesian Network Analysis

    ERIC Educational Resources Information Center

    Zwick, Rebecca; Lenaburg, Lubella

    2009-01-01

    In certain data analyses (e.g., multiple discriminant analysis and multinomial log-linear modeling), classification decisions are made based on the estimated posterior probabilities that individuals belong to each of several distinct categories. In the Bayesian network literature, this type of classification is often accomplished by assigning…

  16. Approximate string matching algorithms for limited-vocabulary OCR output correction

    NASA Astrophysics Data System (ADS)

    Lasko, Thomas A.; Hauser, Susan E.

    2000-12-01

    Five methods for matching words mistranslated by optical character recognition to their most likely match in a reference dictionary were tested on data from the archives of the National Library of Medicine. The methods, including an adaptation of the cross correlation algorithm, the generic edit distance algorithm, the edit distance algorithm with a probabilistic substitution matrix, Bayesian analysis, and Bayesian analysis on an actively thinned reference dictionary were implemented and their accuracy rates compared. Of the five, the Bayesian algorithm produced the most correct matches (87%), and had the advantage of producing scores that have a useful and practical interpretation.
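
    A toy version of the Bayesian matching idea, scoring dictionary words by a word prior times a character-level substitution likelihood, might look as follows; the dictionary, priors, and confusion probabilities are invented and the model is deliberately simplistic (same-length words only):

    ```python
    import numpy as np

    dictionary = {"clinical": 0.4, "chemical": 0.3, "critical": 0.3}  # prior P(word) (invented)
    confusion = {("m", "n"): 0.2, ("l", "1"): 0.3, ("i", "l"): 0.1}   # P(seen | true) for mismatches

    def char_likelihood(true_c, seen_c, p_correct=0.9):
        if true_c == seen_c:
            return p_correct
        return confusion.get((true_c, seen_c), 0.01)

    def posterior(ocr_word):
        scores = {}
        for word, prior in dictionary.items():
            if len(word) != len(ocr_word):          # toy model: compare same-length words only
                continue
            lik = np.prod([char_likelihood(a, b) for a, b in zip(word, ocr_word)])
            scores[word] = prior * lik
        total = sum(scores.values())
        return {w: s / total for w, s in scores.items()}

    print(posterior("clinica1"))   # '1' misread for 'l': 'clinical' should dominate
    ```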

  17. Bayesian conditional-independence modeling of the AIDS epidemic in England and Wales

    NASA Astrophysics Data System (ADS)

    Gilks, Walter R.; De Angelis, Daniela; Day, Nicholas E.

    We describe the use of conditional-independence modeling, Bayesian inference and Markov chain Monte Carlo, to model and project the HIV-AIDS epidemic in homosexual/bisexual males in England and Wales. Complexity in this analysis arises through selectively missing data, indirectly observed underlying processes, and measurement error. Our emphasis is on presentation and discussion of the concepts, not on the technicalities of this analysis, which can be found elsewhere [D. De Angelis, W.R. Gilks, N.E. Day, Bayesian projection of the acquired immune deficiency syndrome epidemic (with discussion), Applied Statistics, in press].

  18. The Topp-Leone generalized Rayleigh cure rate model and its application

    NASA Astrophysics Data System (ADS)

    Nanthaprut, Pimwarat; Bodhisuwan, Winai; Patummasut, Mena

    2017-11-01

    Cure rate models are survival analysis models that account for a proportion of censored individuals who are considered cured. In clinical trials, data representing the time to recurrence of an event or to death of patients are used to assess the efficiency of treatments. Each dataset can be separated into two groups: censored and uncensored data. In this work, a new mixture cure rate model based on the Topp-Leone generalized Rayleigh distribution is introduced. The Bayesian approach is employed to estimate its parameters. In addition, a breast cancer dataset is analyzed for model illustration purposes. According to the deviance information criterion, the Topp-Leone generalized Rayleigh cure rate model shows better results than the Weibull and exponential cure rate models.

  19. Time-varying nonstationary multivariate risk analysis using a dynamic Bayesian copula

    NASA Astrophysics Data System (ADS)

    Sarhadi, Ali; Burn, Donald H.; Concepción Ausín, María.; Wiper, Michael P.

    2016-03-01

    A time-varying risk analysis is proposed for an adaptive design framework in nonstationary conditions arising from climate change. A Bayesian, dynamic conditional copula is developed for modeling the time-varying dependence structure between mixed continuous and discrete multiattributes of multidimensional hydrometeorological phenomena. Joint Bayesian inference is carried out to fit the marginals and copula in an illustrative example using an adaptive, Gibbs Markov Chain Monte Carlo (MCMC) sampler. Posterior mean estimates and credible intervals are provided for the model parameters, and the Deviance Information Criterion (DIC) is used to select the model that best captures different forms of nonstationarity over time. This study also introduces a fully Bayesian, time-varying joint return period for multivariate time-dependent risk analysis in nonstationary environments. The results demonstrate that the nature and the risk of extreme-climate multidimensional processes change over time under the impact of climate change, and accordingly long-term decision-making strategies should be updated based on the anomalies of the nonstationary environment.

  20. Regression estimators for generic health-related quality of life and quality-adjusted life years.

    PubMed

    Basu, Anirban; Manca, Andrea

    2012-01-01

    To develop regression models for outcomes with truncated supports, such as health-related quality of life (HRQoL) data, and account for features typical of such data such as a skewed distribution, spikes at 1 or 0, and heteroskedasticity. Regression estimators based on features of the Beta distribution. First, both a single equation and a 2-part model are presented, along with estimation algorithms based on maximum-likelihood, quasi-likelihood, and Bayesian Markov-chain Monte Carlo methods. A novel Bayesian quasi-likelihood estimator is proposed. Second, a simulation exercise is presented to assess the performance of the proposed estimators against ordinary least squares (OLS) regression for a variety of HRQoL distributions that are encountered in practice. Finally, the performance of the proposed estimators is assessed by using them to quantify the treatment effect on QALYs in the EVALUATE hysterectomy trial. Overall model fit is studied using several goodness-of-fit tests such as Pearson's correlation test, link and reset tests, and a modified Hosmer-Lemeshow test. The simulation results indicate that the proposed methods are more robust in estimating covariate effects than OLS, especially when the effects are large or the HRQoL distribution has a large spike at 1. Quasi-likelihood techniques are more robust than maximum likelihood estimators. When applied to the EVALUATE trial, all but the maximum likelihood estimators produce unbiased estimates of the treatment effect. One and 2-part Beta regression models provide flexible approaches to regress the outcomes with truncated supports, such as HRQoL, on covariates, after accounting for many idiosyncratic features of the outcomes distribution. This work will provide applied researchers with a practical set of tools to model outcomes in cost-effectiveness analysis.
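
    As a rough illustration of the single-equation Beta regression component (fit here by maximum likelihood on simulated data; the article's two-part, quasi-likelihood, and Bayesian MCMC estimators are not shown), consider:

    ```python
    import numpy as np
    from scipy import optimize, special, stats

    rng = np.random.default_rng(5)

    # Simulate HRQoL-like outcomes in (0, 1): logit(mu) depends on treatment and age
    n = 400
    treat = rng.integers(0, 2, n)
    age_z = (rng.normal(50, 10, n) - 50) / 10
    X = np.column_stack([np.ones(n), treat, age_z])
    mu = special.expit(X @ np.array([0.5, 0.4, -0.2]))
    y = rng.beta(mu * 8, (1 - mu) * 8)                     # Beta outcomes with precision phi = 8

    def negloglik(params):
        beta, logphi = params[:-1], params[-1]
        m, phi = special.expit(X @ beta), np.exp(logphi)
        return -np.sum(stats.beta(m * phi, (1 - m) * phi).logpdf(y))

    fit = optimize.minimize(negloglik, x0=np.zeros(X.shape[1] + 1), method="BFGS")
    print("coefficients (logit scale):", np.round(fit.x[:-1], 3),
          " precision phi:", round(float(np.exp(fit.x[-1])), 2))
    ```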

  1. Bayesian model reduction and empirical Bayes for group (DCM) studies

    PubMed Central

    Friston, Karl J.; Litvak, Vladimir; Oswal, Ashwini; Razi, Adeel; Stephan, Klaas E.; van Wijk, Bernadette C.M.; Ziegler, Gabriel; Zeidman, Peter

    2016-01-01

    This technical note describes some Bayesian procedures for the analysis of group studies that use nonlinear models at the first (within-subject) level – e.g., dynamic causal models – and linear models at subsequent (between-subject) levels. Its focus is on using Bayesian model reduction to finesse the inversion of multiple models of a single dataset or a single (hierarchical or empirical Bayes) model of multiple datasets. These applications of Bayesian model reduction allow one to consider parametric random effects and make inferences about group effects very efficiently (in a few seconds). We provide the relatively straightforward theoretical background to these procedures and illustrate their application using a worked example. This example uses a simulated mismatch negativity study of schizophrenia. We illustrate the robustness of Bayesian model reduction to violations of the (commonly used) Laplace assumption in dynamic causal modelling and show how its recursive application can facilitate both classical and Bayesian inference about group differences. Finally, we consider the application of these empirical Bayesian procedures to classification and prediction. PMID:26569570

  2. QUEST+: A general multidimensional Bayesian adaptive psychometric method.

    PubMed

    Watson, Andrew B

    2017-03-01

    QUEST+ is a Bayesian adaptive psychometric testing method that allows an arbitrary number of stimulus dimensions, psychometric function parameters, and trial outcomes. It is a generalization and extension of the original QUEST procedure and incorporates many subsequent developments in the area of parametric adaptive testing. With a single procedure, it is possible to implement a wide variety of experimental designs, including conventional threshold measurement; measurement of psychometric function parameters, such as slope and lapse; estimation of the contrast sensitivity function; measurement of increment threshold functions; measurement of noise-masking functions; Thurstone scale estimation using pair comparisons; and categorical ratings on linear and circular stimulus dimensions. QUEST+ provides a general method to accelerate data collection in many areas of cognitive and perceptual science.
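
    The core Bayesian adaptive loop behind QUEST-style procedures can be sketched in a few lines: maintain a gridded posterior over threshold, update it after each trial, and place the next stimulus at the posterior mean. The one-parameter version below is illustrative only; QUEST+ generalizes it to multiple stimulus and parameter dimensions:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    true_threshold, slope, lapse = 0.3, 10.0, 0.02

    def p_correct(intensity, threshold):
        # Weibull-type psychometric function rising from a 0.5 guess rate, with a small lapse rate
        p = 1.0 - 0.5 * np.exp(-10.0 ** (slope * (intensity - threshold)))
        return lapse * 0.5 + (1 - lapse) * p

    grid = np.linspace(0.0, 1.0, 201)             # candidate thresholds
    posterior = np.ones_like(grid) / grid.size    # flat prior

    for trial in range(40):
        stim = np.sum(grid * posterior)           # place the next stimulus at the posterior mean
        correct = rng.uniform() < p_correct(stim, true_threshold)   # simulated observer
        lik = p_correct(stim, grid) if correct else 1 - p_correct(stim, grid)
        posterior = posterior * lik
        posterior /= posterior.sum()

    print("estimated threshold:", np.sum(grid * posterior), " true:", true_threshold)
    ```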

  3. A Pragmatic Bayesian Perspective on Correlation Analysis. The exoplanetary gravity - stellar activity case

    NASA Astrophysics Data System (ADS)

    Figueira, P.; Faria, J. P.; Adibekyan, V. Zh.; Oshagh, M.; Santos, N. C.

    2016-11-01

    We apply the Bayesian framework to assess the presence of a correlation between two quantities. To do so, we estimate the probability distribution of the parameter of interest, ρ, characterizing the strength of the correlation. We provide an implementation of these ideas and concepts using the Python programming language and the pyMC module in a very short (~130 lines of code, heavily commented) and user-friendly program. We used this tool to assess the presence and properties of the correlation between planetary surface gravity and stellar activity level as measured by the log(R'_HK) indicator. The results of the Bayesian analysis are qualitatively similar to those obtained via p-value analysis, and support the presence of a correlation in the data. The results are more robust in their derivation and more informative, revealing interesting features such as asymmetric posterior distributions or markedly different credible intervals, and allowing for a deeper exploration. We encourage the reader interested in this kind of problem to apply our code to his/her own scientific problems. The full understanding of what the Bayesian framework is can only be gained through the insight that comes by handling priors, assessing the convergence of Monte Carlo runs, and a multitude of other practical problems. We hope to contribute so that Bayesian analysis becomes a tool in the toolkit of researchers, and they understand by experience its advantages and limitations.
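
    The essential computation can also be sketched without pyMC by evaluating the posterior of ρ on a grid under a uniform prior for standardized bivariate normal data; this is our own simplified illustration (it ignores uncertainty in the means and variances), not the authors' ~130-line program:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    true_rho = 0.5
    data = rng.multivariate_normal([0, 0], [[1, true_rho], [true_rho, 1]], size=60)
    x = (data - data.mean(0)) / data.std(0)       # standardize both quantities

    rho = np.linspace(-0.99, 0.99, 1001)          # grid for the correlation coefficient
    loglik = np.array([stats.multivariate_normal([0, 0], [[1, r], [r, 1]]).logpdf(x).sum()
                       for r in rho])
    post = np.exp(loglik - loglik.max())          # uniform prior on (-1, 1)
    post /= post.sum()

    cdf = np.cumsum(post)
    print("posterior mean rho:", np.sum(rho * post))
    print("95% credible interval:",
          rho[np.searchsorted(cdf, 0.025)], rho[np.searchsorted(cdf, 0.975)])
    ```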

  4. Bayesian analysis of time-series data under case-crossover designs: posterior equivalence and inference.

    PubMed

    Li, Shi; Mukherjee, Bhramar; Batterman, Stuart; Ghosh, Malay

    2013-12-01

    Case-crossover designs are widely used to study short-term exposure effects on the risk of acute adverse health events. While the frequentist literature on this topic is vast, there is no Bayesian work in this general area. The contribution of this paper is twofold. First, the paper establishes Bayesian equivalence results that require characterization of the set of priors under which the posterior distributions of the risk ratio parameters based on a case-crossover and time-series analysis are identical. Second, the paper studies inferential issues under case-crossover designs in a Bayesian framework. Traditionally, a conditional logistic regression is used for inference on risk-ratio parameters in case-crossover studies. We consider instead a more general full likelihood-based approach which makes less restrictive assumptions on the risk functions. Formulation of a full likelihood leads to growth in the number of parameters proportional to the sample size. We propose a semi-parametric Bayesian approach using a Dirichlet process prior to handle the random nuisance parameters that appear in a full likelihood formulation. We carry out a simulation study to compare the Bayesian methods based on full and conditional likelihood with the standard frequentist approaches for case-crossover and time-series analysis. The proposed methods are illustrated through the Detroit Asthma Morbidity, Air Quality and Traffic study, which examines the association between acute asthma risk and ambient air pollutant concentrations. © 2013, The International Biometric Society.

  5. Bayesian Optimal Interval Design: A Simple and Well-Performing Design for Phase I Oncology Trials.

    PubMed

    Yuan, Ying; Hess, Kenneth R; Hilsenbeck, Susan G; Gilbert, Mark R

    2016-09-01

    Despite more than two decades of publications that offer more innovative model-based designs, the classical 3 + 3 design remains the most dominant phase I trial design in practice. In this article, we introduce a new trial design, the Bayesian optimal interval (BOIN) design. The BOIN design is easy to implement in a way similar to the 3 + 3 design, but is more flexible for choosing the target toxicity rate and cohort size and yields a substantially better performance that is comparable with that of more complex model-based designs. The BOIN design contains the 3 + 3 design and the accelerated titration design as special cases, thus linking it to established phase I approaches. A numerical study shows that the BOIN design generally outperforms the 3 + 3 design and the modified toxicity probability interval (mTPI) design. The BOIN design is more likely than the 3 + 3 design to correctly select the MTD and allocate more patients to the MTD. Compared with the mTPI design, the BOIN design has a substantially lower risk of overdosing patients and generally a higher probability of correctly selecting the MTD. User-friendly software is freely available to facilitate the application of the BOIN design. Clin Cancer Res; 22(17); 4291-301. ©2016 AACR. ©2016 American Association for Cancer Research.
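
    The BOIN decision rule itself reduces to comparing the observed toxicity rate at the current dose with two fixed boundaries. The sketch below computes those boundaries using the published default interval (phi1 = 0.6*phi, phi2 = 1.4*phi); it is meant as an illustration of the rule, and the freely available BOIN software should be used for actual trial conduct:

    ```python
    import numpy as np

    def boin_boundaries(phi, phi1=None, phi2=None):
        # Escalation/de-escalation boundaries for target DLT rate phi (default interval 0.6*phi, 1.4*phi)
        phi1 = 0.6 * phi if phi1 is None else phi1
        phi2 = 1.4 * phi if phi2 is None else phi2
        lam_e = np.log((1 - phi1) / (1 - phi)) / np.log(phi * (1 - phi1) / (phi1 * (1 - phi)))
        lam_d = np.log((1 - phi) / (1 - phi2)) / np.log(phi2 * (1 - phi) / (phi * (1 - phi2)))
        return lam_e, lam_d

    def boin_decision(n_dlt, n_treated, phi=0.30):
        lam_e, lam_d = boin_boundaries(phi)
        rate = n_dlt / n_treated
        if rate <= lam_e:
            return "escalate"
        if rate >= lam_d:
            return "de-escalate"
        return "stay"

    print(boin_boundaries(0.30))                   # roughly (0.236, 0.358) for a 30% target
    print(boin_decision(n_dlt=1, n_treated=3))     # 1/3 = 0.33 falls between the boundaries -> stay
    ```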

  6. Using short-term evidence to predict six-month outcomes in clinical trials of signs and symptoms in rheumatoid arthritis.

    PubMed

    Nixon, Richard M; Bansback, Nick; Stevens, John W; Brennan, Alan; Madan, Jason

    2009-01-01

    A model is presented to generate a distribution for the probability of an ACR response at six months for a new treatment for rheumatoid arthritis given evidence from a one- or three-month clinical trial. The model is based on published evidence from 11 randomized controlled trials on existing treatments. A hierarchical logistic regression model is used to find the relationship between the proportion of patients achieving ACR20 and ACR50 at one and three months and the proportion at six months. The model is assessed by Bayesian predictive P-values that demonstrate that the model fits the data well. The model can be used to predict the number of patients with an ACR response for proposed six-month clinical trials given data from clinical trials of one or three months duration. Copyright 2008 John Wiley & Sons, Ltd.

  7. The Bayesian approach to reporting GSR analysis results: some first-hand experiences

    NASA Astrophysics Data System (ADS)

    Charles, Sebastien; Nys, Bart

    2010-06-01

    The use of Bayesian principles in the reporting of forensic findings has been a matter of interest for some years. Recently, the GSR community has also begun gradually exploring the advantages of this method, or rather approach, for writing reports. Since last year, our GSR group has been adapting its reporting procedures to the use of Bayesian principles. The police and magistrates find the reports more directly accessible and useful in their part of the criminal investigation. In the lab we find that, through applying the Bayesian principles, unnecessary analyses can be eliminated and thus time can be freed on the instruments.

  8. Bayesian Analysis of Multidimensional Item Response Theory Models: A Discussion and Illustration of Three Response Style Models

    ERIC Educational Resources Information Center

    Leventhal, Brian C.; Stone, Clement A.

    2018-01-01

    Interest in Bayesian analysis of item response theory (IRT) models has grown tremendously due to the appeal of the paradigm among psychometricians, advantages of these methods when analyzing complex models, and availability of general-purpose software. Possible models include models which reflect multidimensionality due to designed test structure,…

  9. Symptoms of Depression and Challenging Behaviours in People with Intellectual Disability: A Bayesian Analysis. Brief Report

    ERIC Educational Resources Information Center

    Tsiouris, John; Mann, Rachel; Patti, Paul; Sturmey, Peter

    2004-01-01

    Clinicians need to know the likelihood of a condition given a positive or negative diagnostic test. In this study a Bayesian analysis of the Clinical Behavior Checklist for Persons with Intellectual Disabilities (CBCPID) to predict depression in people with intellectual disability was conducted. The CBCPID was administered to 92 adults with…

  10. Bayesian analysis of heterogeneous treatment effects for patient-centered outcomes research.

    PubMed

    Henderson, Nicholas C; Louis, Thomas A; Wang, Chenguang; Varadhan, Ravi

    2016-01-01

    Evaluation of heterogeneity of treatment effect (HTE) is an essential aspect of personalized medicine and patient-centered outcomes research. Our goal in this article is to promote the use of Bayesian methods for subgroup analysis and to lower the barriers to their implementation by describing the ways in which the companion software beanz can facilitate these types of analyses. To advance this goal, we describe several key Bayesian models for investigating HTE and outline the ways in which they are well-suited to address many of the commonly cited challenges in the study of HTE. Topics highlighted include shrinkage estimation, model choice, sensitivity analysis, and posterior predictive checking. A case study is presented in which we demonstrate the use of the methods discussed.
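
    The shrinkage estimation highlighted above can be illustrated with a simple normal-normal hierarchical model in which subgroup effects are partially pooled toward the overall effect; the empirical-Bayes sketch below uses assumed numbers and is not the beanz software or its models:

    ```python
    import numpy as np

    # Observed subgroup treatment effects and standard errors (assumed numbers)
    effects = np.array([0.42, 0.10, -0.05, 0.28, 0.55])
    se = np.array([0.20, 0.15, 0.25, 0.18, 0.30])

    # Crude moment estimates of the overall effect and the between-subgroup variance
    w = 1 / se ** 2
    mu = np.sum(w * effects) / np.sum(w)
    tau2 = max(np.var(effects, ddof=1) - np.mean(se ** 2), 0.0)

    # Shrunken (partially pooled) subgroup estimates: precision-weighted compromise
    shrink = tau2 / (tau2 + se ** 2)
    posterior_means = shrink * effects + (1 - shrink) * mu
    print("overall effect:", round(mu, 3), " tau^2:", round(tau2, 3))
    print("shrunken subgroup effects:", np.round(posterior_means, 3))
    ```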

  11. Enhancements of Bayesian Blocks; Application to Large Light Curve Databases

    NASA Technical Reports Server (NTRS)

    Scargle, Jeff

    2015-01-01

    Bayesian Blocks are optimal piecewise linear representations (step function fits) of light-curves. The simple algorithm implementing this idea, using dynamic programming, has been extended to include more data modes and fitness metrics, multivariate analysis, and data on the circle (Studies in Astronomical Time Series Analysis. VI. Bayesian Block Representations, Scargle, Norris, Jackson and Chiang 2013, ApJ, 764, 167), as well as new results on background subtraction and refinement of the procedure for precise timing of transient events in sparse data. Example demonstrations will include exploratory analysis of the Kepler light curve archive in a search for "star-tickling" signals from extraterrestrial civilizations. (The Cepheid Galactic Internet, Learned, Kudritzki, Pakvasa, and Zee, 2008, arXiv: 0809.0339; Walkowicz et al., in progress).
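
    For readers who want to try the basic (Scargle et al. 2013) algorithm rather than the enhancements described here, an implementation ships with astropy (assumed installed); a small example on simulated photon arrival times:

    ```python
    import numpy as np
    from astropy.stats import bayesian_blocks

    rng = np.random.default_rng(0)

    # Simulated arrival times: a steady background over 100 s plus a burst at 40-45 s
    background = rng.uniform(0, 100, size=100)
    burst = rng.uniform(40, 45, size=60)
    t = np.sort(np.concatenate([background, burst]))

    # Optimal change points for event data; p0 is the false-alarm probability per block
    edges = bayesian_blocks(t, fitness="events", p0=0.01)
    print("block edges (s):", np.round(edges, 1))
    # Consecutive edges delimit blocks of (approximately) constant event rate;
    # the burst interval near 40-45 s should appear as its own block.
    ```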

  12. A meta-analysis of comparative outcomes following cervical arthroplasty or anterior cervical fusion: results from 4 prospective multicenter randomized clinical trials and up to 1226 patients.

    PubMed

    McAfee, Paul C; Reah, Chris; Gilder, Kye; Eisermann, Lukas; Cunningham, Bryan

    2012-05-15

    Meta-analysis of 4 prospective randomized controlled Food and Drug Administration (FDA) Investigational Device Exemption (IDE) clinical trials. To maximize the information available from 4 IDE studies by analyzing the combined outcomes of cervical arthroplasty versus fusion at 24-month follow-up. To date, 4 randomized clinical trials have been completed in the United States under FDA IDE protocols to study cervical arthroplasty. Each trial reported arthroplasty to be at least as successful as fusion controls based on noninferiority trial designs. However, sample sizes in any given trial may not be sufficient to demonstrate superiority of treatment effect. Meta-analysis enables pooling of results from comparable trials, which may lead to more precise and statistically significant estimates of treatment effect. Four cervical arthroplasty randomized clinical trials with comparable enrollment criteria and outcome measures were conducted independently by 3 separate sponsors to study the following devices: Bryan, Prestige, ProDisc-C, and PCM cervical disc replacements. A total of 1608 patients were treated across 98 investigative sites. Data were available for 1352 treated patients, of which 1226 were evaluable at 24 months. Assessments included clinical success definitions based on neck disability index, maintenance or improvement of neurological status, subsequent surgery or intervention at the index level (survivorship), and a composite score comprising these as well as serious device-related adverse events. Trial endpoint comparisons were made at 24 months postoperatively. For each endpoint, a random-effects meta-analysis was performed to compare the success rates of cervical arthroplasty with anterior cervical discectomy and fusion (ACDF). Also, supportive frequentist and bayesian analyses were performed. The pooled primary overall success results indicated a statistically significant treatment effect favoring arthroplasty compared with ACDF. Overall success was achieved by 77.6% of the arthroplasty patients and by 70.8% of the ACDF patients (pooled odds ratio [OR]: 0.699, 95% confidence interval [CI]: 0.539-0.908, P = 0.007). The results of the individual subcomponent meta-analyses, all of which favored arthroplasty, were neck disability index success (OR: 0.786, 95% CI: 0.589-1.050, P = 0.103), neurological status (OR: 0.552, 95% CI: 0.364-0.835, P = 0.005), and survivorship (OR: 0.510, 95% CI: 0.275-0.946, P = 0.033). Only the survivorship endpoint suggested low heterogeneity. These findings suggest that cervical arthroplasty is superior to ACDF in overall success, neurological success, and survivorship outcomes at 24 months postoperatively.

  13. Estimating size and scope economies in the Portuguese water sector using the Bayesian stochastic frontier analysis.

    PubMed

    Carvalho, Pedro; Marques, Rui Cunha

    2016-02-15

    This study aims to search for economies of size and scope in the Portuguese water sector applying Bayesian and classical statistics to make inference in stochastic frontier analysis (SFA). This study proves the usefulness and advantages of the application of Bayesian statistics for making inference in SFA over traditional SFA, which uses only classical statistics. The resulting Bayesian methods make it possible to overcome some problems that arise in the application of traditional SFA, such as the bias in small samples and skewness of residuals. In the present case study of the water sector in Portugal, these Bayesian methods provide more plausible and acceptable results. Based on the results obtained, we found that there are important economies of output density, economies of size, economies of vertical integration and economies of scope in the Portuguese water sector, pointing to the huge advantages of undertaking mergers by joining the retail and wholesale components and by joining the drinking water and wastewater services. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Cost-effectiveness analysis using data from multinational trials: The use of bivariate hierarchical modelling

    PubMed Central

    Manca, Andrea; Lambert, Paul C; Sculpher, Mark; Rice, Nigel

    2008-01-01

    Healthcare cost-effectiveness analysis (CEA) often uses individual patient data (IPD) from multinational randomised controlled trials. Although designed to account for between-patient sampling variability in the clinical and economic data, standard analytical approaches to CEA ignore the presence of between-location variability in the study results. This is a restrictive limitation given that countries often differ in factors that could affect the results of CEAs, such as the availability of healthcare resources, their unit costs, clinical practice, and patient case-mix. We advocate the use of Bayesian bivariate hierarchical modelling to analyse multinational cost-effectiveness data. This analytical framework explicitly recognises that patient-level costs and outcomes are nested within countries. Using real life data, we illustrate how the proposed methods can be applied to obtain (a) more appropriate estimates of overall cost-effectiveness and associated measure of sampling uncertainty compared to standard CEA; and (b) country-specific cost-effectiveness estimates which can be used to assess the between-location variability of the study results, while controlling for differences in country-specific and patient-specific characteristics. It is demonstrated that results from standard CEA using IPD from multinational trials display a large degree of variability across the 17 countries included in the analysis, producing potentially misleading results. In contrast, ‘shrinkage estimates’ obtained from the modelling approach proposed here facilitate the appropriate quantification of country-specific cost-effectiveness estimates, while weighting the results based on the level of information available within each country. We suggest that the methods presented here represent a general framework for the analysis of economic data collected from different locations. PMID:17641141

  15. Are beta-blockers effective for preventing post-coronary artery bypass grafting atrial fibrillation? Direct and network meta-analyses.

    PubMed

    Ji, T; Feng, C; Sun, L; Ye, X; Bai, Y; Chen, Q; Qin, Y; Zhu, J; Zhao, X

    2016-05-01

    Atrial fibrillation (AF) is the most common arrhythmia in clinical practice and is a major contributor to mortality. Recently, several studies have reported different results for treatments aimed at reducing the risk of postoperative AF. The aim of this study was to evaluate the efficacy of beta-blockers (BBs) in preventing post-coronary artery bypass grafting (CABG) AF and to compare the efficacies of different BB treatments using a network meta-analytical approach. The PubMed, EMBASE and Cochrane Library databases were searched (Jan 1995 to May 2014) to identify randomized controlled trials. Two independent investigators separately extracted the data and used a seven-point scoring system to assess randomization, allocation concealment, blinding, withdrawals and dropouts. A direct meta-analysis of these randomized controlled trials was conducted. Then, six trials comparing different BB treatments for the prevention of postoperative AF were added to perform a Bayesian network meta-analysis with mixed treatment comparisons. Treatment with BBs was associated with a significant reduction in the postoperative incidence of AF compared with placebo/control [22.37 % compared with 34.45 %, relative risk (RR) = 0.53, 95 % confidence interval (CI): 0.37-0.75, p < 0.00001]. The network meta-analysis revealed no significant differences among eight types of BB treatments but did provide a ranking. BB treatments could significantly reduce the occurrence of post-CABG AF. Insufficient evidence was available to show that one BB treatment was more effective than the others. According to our network meta-analysis, bisoprolol and landiolol+bisoprolol are better alternatives compared with the other treatments.

  16. Efficient Bayesian hierarchical functional data analysis with basis function approximations using Gaussian-Wishart processes.

    PubMed

    Yang, Jingjing; Cox, Dennis D; Lee, Jong Soo; Ren, Peng; Choi, Taeryon

    2017-12-01

    Functional data are defined as realizations of random functions (mostly smooth functions) varying over a continuum, which are usually collected on discretized grids with measurement errors. In order to accurately smooth noisy functional observations and deal with the issue of high-dimensional observation grids, we propose a novel Bayesian method based on the Bayesian hierarchical model with a Gaussian-Wishart process prior and basis function representations. We first derive an induced model for the basis-function coefficients of the functional data, and then use this model to conduct posterior inference through Markov chain Monte Carlo methods. Compared to the standard Bayesian inference that suffers serious computational burden and instability in analyzing high-dimensional functional data, our method greatly improves the computational scalability and stability, while inheriting the advantage of simultaneously smoothing raw observations and estimating the mean-covariance functions in a nonparametric way. In addition, our method can naturally handle functional data observed on random or uncommon grids. Simulation and real studies demonstrate that our method produces similar results to those obtainable by the standard Bayesian inference with low-dimensional common grids, while efficiently smoothing and estimating functional data with random and high-dimensional observation grids when the standard Bayesian inference fails. In conclusion, our method can efficiently smooth and estimate high-dimensional functional data, providing one way to resolve the curse of dimensionality for Bayesian functional data analysis with Gaussian-Wishart processes. © 2017, The International Biometric Society.

  17. Bayesian evaluation of clinical diagnostic test characteristics of visual observations and remote monitoring to diagnose bovine respiratory disease in beef calves.

    PubMed

    White, Brad J; Goehl, Dan R; Amrine, David E; Booker, Calvin; Wildman, Brian; Perrett, Tye

    2016-04-01

    Accurate diagnosis of bovine respiratory disease (BRD) in beef cattle is a critical facet of therapeutic programs through promotion of prompt treatment of diseased calves in concert with judicious use of antimicrobials. Despite the known inaccuracies, visual observation (VO) of clinical signs is the conventional diagnostic modality for BRD diagnosis. Objective methods of remotely monitoring cattle wellness could improve diagnostic accuracy; however, little information exists describing the accuracy of this method compared to traditional techniques. The objective of this research is to employ Bayesian methodology to elicit diagnostic characteristics of conventional VO compared to remote early disease identification (REDI) to diagnose BRD. Data from previous literature on the accuracy of VO were combined with trial data consisting of direct comparison between VO and REDI for BRD in two populations. No true gold standard diagnostic test exists for BRD; therefore, estimates of diagnostic characteristics of each test were generated using Bayesian latent class analysis. Results indicate a 90.0% probability that the sensitivity of REDI (median 81.3%; 95% probability interval [PI]: 55.5, 95.8) was higher than VO sensitivity (64.5%; PI: 57.9, 70.8). The specificity of REDI (median 92.9%; PI: 88.2, 96.9) was also higher compared to VO (median 69.1%; PI: 66.3, 71.8). The differences in sensitivity and specificity resulted in REDI exhibiting higher positive and negative predictive values in both high (41.3%) and low (2.6%) prevalence situations. This research illustrates the potential of remote cattle monitoring to augment conventional methods of BRD diagnosis resulting in more accurate identification of diseased cattle. Copyright © 2016 Elsevier B.V. All rights reserved.
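
    The predictive values quoted above follow directly from Bayes' rule once sensitivity, specificity and prevalence are fixed. A minimal sketch using the median REDI estimates reported in the abstract as point values (a full Bayesian treatment would instead propagate the posterior draws of sensitivity and specificity):

        def predictive_values(sens, spec, prev):
            """Positive and negative predictive values from Bayes' rule."""
            ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
            npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
            return ppv, npv

        sens, spec = 0.813, 0.929       # median REDI sensitivity and specificity from the abstract
        for prev in (0.413, 0.026):     # high- and low-prevalence scenarios from the abstract
            ppv, npv = predictive_values(sens, spec, prev)
            print(f"prevalence {prev:.1%}: PPV = {ppv:.1%}, NPV = {npv:.1%}")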

  18. Design of the exhale airway stents for emphysema (EASE) trial: an endoscopic procedure for reducing hyperinflation

    PubMed Central

    2011-01-01

    Background Airway Bypass is a catheter-based, bronchoscopic procedure in which new passageways are created that bypass the collapsed airways, enabling trapped air to exit the lungs. The Exhale Airway Stents for Emphysema (EASE) Trial was designed to investigate whether Exhale® Drug-Eluting Stents, placed in new passageways in the lungs, can improve pulmonary function and reduce breathlessness in severely hyperinflated, homogeneous emphysema patients (NCT00391612). Methods/Design The multi-center, randomized, double-blind, sham-controlled trial design was posted on http://www.clinicaltrials.gov in October 2006. Because Bayesian statistics are used for the analysis, the proposed enrollment ranged from 225 up to 450 subjects at up to 45 institutions. Inclusion criteria are: high resolution CT scan with evidence of homogeneous emphysema, post-bronchodilator pulmonary function tests showing: a ratio of FEV1/FVC < 70%, FEV1≤50% of predicted or FEV1 < 1 liter, RV/TLC≥0.65 at screening, marked dyspnea score ≥2 on the modified Medical Research Council scale of 0-4, a smoking history of at least 20 pack years and stopped smoking for at least 8 weeks prior to enrollment. Following 16 to 20 supervised pulmonary rehabilitation sessions, subjects were randomized 2:1 to receive either a treatment (Exhale® Drug-Eluting Stent) or a sham bronchoscopy. A responder analysis will evaluate the co-primary endpoints of an FVC improvement ≥12% of the patient baseline value and modified Medical Research Council dyspnea scale improvement (reduction) ≥1 point at the 6-month follow-up visit. Discussion If through the EASE Trial, Airway Bypass is shown to improve pulmonary function and reduce dyspnea while demonstrating an acceptable safety profile, then homogeneous patients will have a minimally invasive treatment option with meaningful clinical benefit. Trial Registration ClinicalTrials.gov: NCT00391612 PMID:21214899

  19. HIV-1 transmission linkage in an HIV-1 prevention clinical trial

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leitner, Thomas; Campbell, Mary S; Mullins, James I

    2009-01-01

    HIV-1 sequencing has been used extensively in epidemiologic and forensic studies to investigate patterns of HIV-1 transmission. However, the criteria for establishing genetic linkage between HIV-1 strains in HIV-1 prevention trials have not been formalized. The Partners in Prevention HSV/HIV Transmission Study (ClinicalTrials.gov NCT00194519) enrolled 3408 HIV-1 serodiscordant heterosexual African couples to determine the efficacy of genital herpes suppression with acyclovir in reducing HIV-1 transmission. The trial analysis required laboratory confirmation of HIV-1 linkage between enrolled partners in couples in which seroconversion occurred. Here we describe the process and results from HIV-1 sequencing studies used to perform transmission linkage determination in this clinical trial. Consensus Sanger sequencing of env (C2-V3-C3) and gag (p17-p24) genes was performed on plasma HIV-1 RNA from both partners within 3 months of seroconversion; env single molecule or pyrosequencing was also performed in some cases. For linkage, we required monophyletic clustering between HIV-1 sequences in the transmitting and seroconverting partners, and developed a Bayesian algorithm using genetic distances to evaluate the posterior probability of linkage of participants' sequences. Adjudicators classified transmissions as linked, unlinked, or indeterminate. Among 151 seroconversion events, we found 108 (71.5%) linked, 40 (26.5%) unlinked, and 3 (2.0%) to have indeterminate transmissions. Nine (8.3%) were linked by consensus gag sequencing only and 8 (7.4%) required deep sequencing of env. In this first use of HIV-1 sequencing to establish endpoints in a large clinical trial, more than one-fourth of transmissions were unlinked to the enrolled partner, illustrating the relevance of these methods in the design of future HIV-1 prevention trials in serodiscordant couples. A hierarchy of sequencing techniques, analysis methods, and expert adjudication contributed to the linkage determination process.
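
    The study-specific likelihoods behind the linkage algorithm are not given in the abstract, but the Bayes-rule structure of a posterior probability of linkage computed from a pairwise genetic distance can be sketched generically; the distance distributions and the prior below are hypothetical placeholders, not the fitted models from the trial.

        import numpy as np
        from scipy import stats

        # Hypothetical distance distributions for truly linked and unlinked transmission pairs
        linked_dist = stats.gamma(a=2.0, scale=0.005)     # linked pairs: small distances
        unlinked_dist = stats.gamma(a=6.0, scale=0.015)   # unlinked pairs: larger distances
        prior_linked = 0.5                                # hypothetical prior probability of linkage

        def posterior_linked(distance):
            """Posterior probability that a pair is linked, given a genetic distance."""
            like_l = linked_dist.pdf(distance)
            like_u = unlinked_dist.pdf(distance)
            return like_l * prior_linked / (like_l * prior_linked + like_u * (1 - prior_linked))

        for d in (0.005, 0.03, 0.08):
            print(f"distance {d:.3f}: P(linked | distance) = {posterior_linked(d):.3f}")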

  20. A flexible cure rate model for spatially correlated survival data based on generalized extreme value distribution and Gaussian process priors.

    PubMed

    Li, Dan; Wang, Xia; Dey, Dipak K

    2016-09-01

    Our present work proposes a new survival model in a Bayesian context to analyze right-censored survival data for populations with a surviving fraction, assuming that the log failure time follows a generalized extreme value distribution. Many applications require a more flexible modeling of covariate information than a simple linear or parametric form for all covariate effects. It is also necessary to include the spatial variation in the model, since it is sometimes unexplained by the covariates considered in the analysis. Therefore, the nonlinear covariate effects and the spatial effects are incorporated into the systematic component of our model. Gaussian processes (GPs) provide a natural framework for modeling potentially nonlinear relationships and have recently become extremely powerful in nonlinear regression. Our proposed model adopts a semiparametric Bayesian approach by imposing a GP prior on the nonlinear structure of the continuous covariates. With the consideration of data availability and computational complexity, the conditionally autoregressive distribution is placed on the region-specific frailties to handle spatial correlation. The flexibility and gains of our proposed model are illustrated through analyses of simulated data examples as well as a dataset involving a colon cancer clinical trial from the state of Iowa. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Bayesian Group Bridge for Bi-level Variable Selection.

    PubMed

    Mallick, Himel; Yi, Nengjun

    2017-06-01

    A Bayesian bi-level variable selection method (BAGB: Bayesian Analysis of Group Bridge) is developed for regularized regression and classification. This new development is motivated by grouped data, where generic variables can be divided into multiple groups, with variables in the same group being mechanistically related or statistically correlated. As an alternative to frequentist group variable selection methods, BAGB incorporates structural information among predictors through a group-wise shrinkage prior. Posterior computation proceeds via an efficient MCMC algorithm. In addition to the usual ease-of-interpretation of hierarchical linear models, the Bayesian formulation produces valid standard errors, a feature that is notably absent in the frequentist framework. Empirical evidence of the attractiveness of the method is illustrated by extensive Monte Carlo simulations and real data analysis. Finally, several extensions of this new approach are presented, providing a unified framework for bi-level variable selection in general models with flexible penalties.

  2. Bayesian analysis of CCDM models

    NASA Astrophysics Data System (ADS)

    Jesus, J. F.; Valentim, R.; Andrade-Oliveira, F.

    2017-09-01

    Creation of Cold Dark Matter (CCDM), in the context of Einstein Field Equations, produces a negative pressure term which can be used to explain the accelerated expansion of the Universe. In this work we tested six different spatially flat models for matter creation using statistical criteria, in light of SNe Ia data: the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the Bayesian Evidence (BE). These criteria allow models to be compared in terms of goodness of fit and number of free parameters, penalizing excess complexity. We find that the JO model is slightly favoured over the LJO/ΛCDM model; however, neither of these, nor the Γ = 3αH0 model, can be discarded from the current analysis. Three other scenarios are discarded either because of poor fits or because of an excess of free parameters. A method of increasing Bayesian evidence through reparameterization, in order to reduce parameter degeneracy, is also developed.
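
    For readers unfamiliar with the information criteria named above, the sketch below shows how AIC and BIC are computed and compared for two toy models fitted by maximum likelihood; the data and models are synthetic illustrations, not the SNe Ia fits of the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(0, 1, 40)
        sigma = 0.1
        y = 0.3 + 0.8 * x + rng.normal(0, sigma, x.size)   # synthetic data with known noise level

        def gauss_loglike(resid, sigma):
            return np.sum(-0.5 * (resid / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi)))

        # Model 1: constant mean (1 free parameter); Model 2: straight line (2 free parameters)
        resid_const = y - y.mean()
        coef = np.polyfit(x, y, 1)
        resid_line = y - np.polyval(coef, x)

        for name, resid, k in (("constant", resid_const, 1), ("linear", resid_line, 2)):
            ll = gauss_loglike(resid, sigma)
            aic = 2 * k - 2 * ll                 # lower AIC is preferred
            bic = k * np.log(x.size) - 2 * ll    # BIC penalizes extra parameters more strongly
            print(f"{name}: AIC = {aic:.1f}, BIC = {bic:.1f}")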

  3. A Bayesian network meta-analysis for binary outcome: how to do it.

    PubMed

    Greco, Teresa; Landoni, Giovanni; Biondi-Zoccai, Giuseppe; D'Ascenzo, Fabrizio; Zangrillo, Alberto

    2016-10-01

    This study presents an overview of conceptual and practical issues of a network meta-analysis (NMA), particularly focusing on its application to randomised controlled trials with a binary outcome of interest. We start from general considerations on NMA to specifically appraise how to collect study data, structure the analytical network and specify the requirements for different models and parameter interpretations, with the ultimate goal of providing physicians and clinician-investigators a practical tool to understand pros and cons of NMA. Specifically, we outline the key steps, from the literature search to sensitivity analysis, necessary to perform a valid NMA of binomial data, exploiting Markov Chain Monte Carlo approaches. We also apply this analytical approach to a case study on the beneficial effects of volatile agents compared to total intravenous anaesthetics for surgery to further clarify the statistical details of the models, diagnostics and computations. Finally, datasets and models for the freeware WinBUGS package are presented for the anaesthetic agent example. © The Author(s) 2013.

  4. Bayesian Correlation Analysis for Sequence Count Data

    PubMed Central

    Lau, Nelson; Perkins, Theodore J.

    2016-01-01

    Evaluating the similarity of different measured variables is a fundamental task of statistics, and a key part of many bioinformatics algorithms. Here we propose a Bayesian scheme for estimating the correlation between different entities’ measurements based on high-throughput sequencing data. These entities could be different genes or miRNAs whose expression is measured by RNA-seq, different transcription factors or histone marks whose expression is measured by ChIP-seq, or even combinations of different types of entities. Our Bayesian formulation accounts for both measured signal levels and uncertainty in those levels, due to varying sequencing depth in different experiments and to varying absolute levels of individual entities, both of which affect the precision of the measurements. In comparison with a traditional Pearson correlation analysis, we show that our Bayesian correlation analysis retains high correlations when measurement confidence is high, but suppresses correlations when measurement confidence is low—especially for entities with low signal levels. In addition, we consider the influence of priors on the Bayesian correlation estimate. Perhaps surprisingly, we show that naive, uniform priors on entities’ signal levels can lead to highly biased correlation estimates, particularly when different experiments have widely varying sequencing depths. However, we propose two alternative priors that provably mitigate this problem. We also prove that, like traditional Pearson correlation, our Bayesian correlation calculation constitutes a kernel in the machine learning sense, and thus can be used as a similarity measure in any kernel-based machine learning algorithm. We demonstrate our approach on two RNA-seq datasets and one miRNA-seq dataset. PMID:27701449
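
    The paper's exact priors and likelihood are not reproduced here, but the general idea of propagating count uncertainty into a correlation estimate can be sketched with a conjugate Gamma-Poisson model: sample posterior expression rates for each experiment, compute the Pearson correlation for every posterior draw, and average. The counts and depths below are hypothetical placeholders.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical read counts for two genes across six experiments with different depths
        depth = np.array([1e6, 5e6, 2e6, 8e5, 3e6, 1.5e6])
        counts_a = np.array([12, 70, 30, 9, 45, 20])
        counts_b = np.array([8, 55, 22, 6, 35, 14])

        def posterior_rates(counts, depth, a0=0.5, b0=1.0, n_draws=2000):
            """Draw per-experiment rates from the Gamma(a0 + count, b0 + depth) posterior
            implied by a Poisson(rate * depth) likelihood with a Gamma(a0, b0) prior."""
            return rng.gamma(a0 + counts, 1.0 / (b0 + depth), size=(n_draws, counts.size))

        ra = posterior_rates(counts_a, depth)
        rb = posterior_rates(counts_b, depth)

        # Posterior distribution of the Pearson correlation between the two rate vectors
        corrs = np.array([np.corrcoef(ra[i], rb[i])[0, 1] for i in range(ra.shape[0])])
        naive = np.corrcoef(counts_a, counts_b)[0, 1]
        print(f"posterior mean correlation = {corrs.mean():.3f} (naive Pearson on counts = {naive:.3f})")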

  5. Online Variational Bayesian Filtering-Based Mobile Target Tracking in Wireless Sensor Networks

    PubMed Central

    Zhou, Bingpeng; Chen, Qingchun; Li, Tiffany Jing; Xiao, Pei

    2014-01-01

    The received signal strength (RSS)-based online tracking for a mobile node in wireless sensor networks (WSNs) is investigated in this paper. Firstly, a multi-layer dynamic Bayesian network (MDBN) is introduced to characterize the target mobility with either directional or undirected movement. In particular, it is proposed to employ the Wishart distribution to approximate the time-varying RSS measurement precision's randomness due to the target movement. It is shown that the proposed MDBN offers a more general analysis model via incorporating the underlying statistical information of both the target movement and observations, which can be utilized to improve the online tracking capability by exploiting the Bayesian statistics. Secondly, based on the MDBN model, a mean-field variational Bayesian filtering (VBF) algorithm is developed to realize the online tracking of a mobile target in the presence of nonlinear observations and time-varying RSS precision, wherein the traditional Bayesian filtering scheme cannot be directly employed. Thirdly, a joint optimization between the real-time velocity and its prior expectation is proposed to enable online velocity tracking in the proposed online tracking scheme. Finally, the associated Bayesian Cramer–Rao Lower Bound (BCRLB) analysis and numerical simulations are conducted. Our analysis unveils that, by exploiting the potential state information via the general MDBN model, the proposed VBF algorithm provides a promising solution to the online tracking of a mobile node in WSNs. In addition, it is shown that the final tracking accuracy linearly scales with its expectation when the RSS measurement precision is time-varying. PMID:25393784

  6. Model-based Bayesian inference for ROC data analysis

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Bae, K. Ty

    2013-03-01

    This paper presents a study of model-based Bayesian inference for Receiver Operating Characteristic (ROC) data. The model is a simple version of a general non-linear regression model. Different from the Dorfman model, it uses a probit link function with a binary (zero-one) covariate to express the binormal distributions in a single formula. The model also includes a scale parameter. Bayesian inference is implemented by the Markov Chain Monte Carlo (MCMC) method carried out with Bayesian analysis Using Gibbs Sampling (BUGS). In contrast to classical statistical theory, the Bayesian approach treats model parameters as random variables characterized by prior distributions. With a substantial number of simulated samples generated by the sampling algorithm, the posterior distributions of the parameters, as well as the parameters themselves, can be accurately estimated. MCMC-based BUGS adopts the Adaptive Rejection Sampling (ARS) protocol, which requires that the probability density function (pdf) from which samples are drawn be log concave with respect to the targeted parameters. Our study corrects a common misconception and proves that the pdf of this regression model is log concave with respect to its scale parameter. Therefore, the requirement of ARS is satisfied, and a Gaussian prior, which is conjugate and possesses many analytic and computational advantages, is assigned to the scale parameter. A cohort of 20 simulated data sets and 20 simulations from each data set are used in our study. Output analysis and convergence diagnostics for the MCMC method are assessed with the CODA package. Models and methods using a continuous Gaussian prior and a discrete categorical prior are compared. Intensive simulations and performance measures are given to illustrate our practice in the framework of model-based Bayesian inference using the MCMC method.
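
    The binormal ROC model implied by the probit link can be written compactly: if FPF is the false-positive fraction, the true-positive fraction is TPF = Φ(a + b·Φ⁻¹(FPF)), where Φ is the standard normal CDF. A short numerical sketch with illustrative parameter values (not estimates from the paper):

        import numpy as np
        from scipy.stats import norm

        # Illustrative binormal parameters: a is the standardized separation of the two
        # score distributions, b is their standard-deviation ratio.
        a, b = 1.2, 0.9

        fpf = np.linspace(1e-4, 1 - 1e-4, 200)
        tpf = norm.cdf(a + b * norm.ppf(fpf))          # binormal ROC curve via the probit link

        # Area under the curve: trapezoidal rule vs. the closed-form binormal expression
        auc_numeric = float(np.sum((tpf[1:] + tpf[:-1]) * np.diff(fpf)) / 2)
        auc_closed = norm.cdf(a / np.sqrt(1 + b ** 2))
        print(f"AUC (numerical) = {auc_numeric:.4f}, AUC (closed form) = {auc_closed:.4f}")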

  7. Bayesian model reduction and empirical Bayes for group (DCM) studies.

    PubMed

    Friston, Karl J; Litvak, Vladimir; Oswal, Ashwini; Razi, Adeel; Stephan, Klaas E; van Wijk, Bernadette C M; Ziegler, Gabriel; Zeidman, Peter

    2016-03-01

    This technical note describes some Bayesian procedures for the analysis of group studies that use nonlinear models at the first (within-subject) level - e.g., dynamic causal models - and linear models at subsequent (between-subject) levels. Its focus is on using Bayesian model reduction to finesse the inversion of multiple models of a single dataset or a single (hierarchical or empirical Bayes) model of multiple datasets. These applications of Bayesian model reduction allow one to consider parametric random effects and make inferences about group effects very efficiently (in a few seconds). We provide the relatively straightforward theoretical background to these procedures and illustrate their application using a worked example. This example uses a simulated mismatch negativity study of schizophrenia. We illustrate the robustness of Bayesian model reduction to violations of the (commonly used) Laplace assumption in dynamic causal modelling and show how its recursive application can facilitate both classical and Bayesian inference about group differences. Finally, we consider the application of these empirical Bayesian procedures to classification and prediction. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  8. Maritime Transportation Risk Assessment of Tianjin Port with Bayesian Belief Networks.

    PubMed

    Zhang, Jinfen; Teixeira, Ângelo P; Guedes Soares, C; Yan, Xinping; Liu, Kezhong

    2016-06-01

    This article develops a Bayesian belief network model for the prediction of accident consequences in the Tianjin port. The study starts with a statistical analysis of six years of historical accident data from 2008 to 2013. Then a Bayesian belief network is constructed to express the dependencies between the indicator variables and accident consequences. The statistics and expert knowledge are synthesized in the Bayesian belief network model to obtain the probability distribution of the consequences. By a sensitivity analysis, several indicator variables that influence the consequences are identified, including navigational area, ship type and time of day. The results indicate that the consequences are most sensitive to the position where the accidents occurred, followed by time of day and ship length. The results also reflect that the navigational risk of the Tianjin port is at an acceptable level, although there is still room for improvement. These results can be used by the Maritime Safety Administration to take effective measures to enhance maritime safety in the Tianjin port. © 2016 Society for Risk Analysis.

  9. Risk Assessment for Mobile Systems Through a Multilayered Hierarchical Bayesian Network.

    PubMed

    Li, Shancang; Tryfonas, Theo; Russell, Gordon; Andriotis, Panagiotis

    2016-08-01

    Mobile systems are facing a number of application vulnerabilities that can be combined and utilized to penetrate systems with devastating impact. When assessing the overall security of a mobile system, it is important to assess the security risks posed by each mobile application (app), thus gaining a stronger understanding of any vulnerabilities present. This paper aims at developing a three-layer framework that assesses the potential risks which apps introduce within Android mobile systems. A Bayesian risk graphical model is proposed to evaluate risk propagation in a layered risk architecture. By integrating static analysis, dynamic analysis, and behavior analysis in a hierarchical framework, the risks and their propagation through each layer are well modeled by the Bayesian risk graph, which can quantitatively analyze risks faced by both apps and mobile systems. The proposed hierarchical Bayesian risk graph model offers a novel way to investigate the security risks in the mobile environment and enables users and administrators to evaluate the potential risks. This strategy makes it possible to strengthen both app security and the security of the entire system.

  10. Embedding the results of focussed Bayesian fusion into a global context

    NASA Astrophysics Data System (ADS)

    Sander, Jennifer; Heizmann, Michael

    2014-05-01

    Bayesian statistics offers a well-founded and powerful fusion methodology, including for the fusion of heterogeneous information sources. However, except in special cases, the needed posterior distribution is not analytically derivable. As a consequence, Bayesian fusion may cause unacceptably high computational and storage costs in practice. Local Bayesian fusion approaches aim at reducing the complexity of the Bayesian fusion methodology significantly. This is done by concentrating the actual Bayesian fusion on the potentially most task-relevant parts of the domain of the Properties of Interest. Our research on these approaches is motivated by an analogy to criminal investigations, where investigators likewise pursue clues only locally. This publication follows previous publications on a special local Bayesian fusion technique called focussed Bayesian fusion, in which the actual calculation of the posterior distribution is completely restricted to a suitably chosen local context. As a result, the global posterior distribution is not completely determined, and strategies for using the results of a focussed Bayesian analysis appropriately are needed. In this publication, we primarily contrast different ways of embedding the results of focussed Bayesian fusion explicitly into a global context. To obtain a unique global posterior distribution, we analyze the application of the Maximum Entropy Principle, which has been shown to be successfully applicable in metrology and in various other areas. To address the special need for making further decisions subsequent to the actual fusion task, we further analyze criteria for decision making under partial information.

  11. Development of a clinical decision model for thyroid nodules

    PubMed Central

    Stojadinovic, Alexander; Peoples, George E; Libutti, Steven K; Henry, Leonard R; Eberhardt, John; Howard, Robin S; Gur, David; Elster, Eric A; Nissan, Aviram

    2009-01-01

    Background Thyroid nodules represent a common problem brought to medical attention. Four to seven percent of the United States adult population (10–18 million people) has a palpable thyroid nodule; however, the majority (>95%) of thyroid nodules are benign. While fine needle aspiration (FNA) remains the most cost-effective and accurate diagnostic tool for thyroid nodules in current practice, over 20% of patients undergoing FNA of a thyroid nodule have indeterminate cytology (follicular neoplasm) with associated malignancy risk prevalence of 20–30%. These patients require thyroid lobectomy/isthmusectomy purely for the purpose of attaining a definitive diagnosis. Given that the majority (70–80%) of these patients have benign surgical pathology, thyroidectomy in these patients is conducted principally with diagnostic intent. Clinical models predictive of malignancy risk are needed to support treatment decisions in patients with thyroid nodules in order to reduce morbidity associated with unnecessary diagnostic surgery. Methods Data were analyzed from a completed prospective cohort trial conducted over a 4-year period involving 216 patients with thyroid nodules undergoing ultrasound (US), electrical impedance scanning (EIS) and fine needle aspiration cytology (FNA) prior to thyroidectomy. A Bayesian model was designed to predict malignancy in thyroid nodules based on multivariate dependence relationships between independent covariates. Ten-fold cross-validation was performed to estimate classifier error wherein the data set was randomized into ten separate and unique train and test sets consisting of a training set (90% of records) and a test set (10% of records). A receiver-operating-characteristics (ROC) curve of these predictions and area under the curve (AUC) were calculated to determine model robustness for predicting malignancy in thyroid nodules. Results Thyroid nodule size, FNA cytology, US and EIS characteristics were highly predictive of malignancy. Cross validation of the model created with Bayesian Network Analysis effectively predicted malignancy [AUC = 0.88 (95%CI: 0.82–0.94)] in thyroid nodules. The positive and negative predictive values of the model are 83% (95%CI: 76%–91%) and 79% (95%CI: 72%–86%), respectively. Conclusion An integrated predictive decision model using Bayesian inference incorporating readily obtainable thyroid nodule measures is clinically relevant, as it effectively predicts malignancy in thyroid nodules. This model warrants further validation testing in prospective clinical trials. PMID:19664278
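
    The validation procedure described (ten train/test partitions with held-out AUC) is close to standard stratified 10-fold cross-validation. A minimal scikit-learn sketch of that procedure, using synthetic features and a logistic-regression stand-in rather than the study's Bayesian network model:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import StratifiedKFold, cross_val_score

        # Synthetic stand-in for nodule size, cytology, US and EIS features (216 records)
        X, y = make_classification(n_samples=216, n_features=4, n_informative=3,
                                   n_redundant=0, weights=[0.7, 0.3], random_state=0)

        model = LogisticRegression(max_iter=1000)      # stand-in classifier, not a Bayesian network
        cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
        auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
        print(f"cross-validated AUC = {auc.mean():.2f} +/- {auc.std():.2f}")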

  12. Phylogeny of sipunculan worms: A combined analysis of four gene regions and morphology.

    PubMed

    Schulze, Anja; Cutler, Edward B; Giribet, Gonzalo

    2007-01-01

    The intra-phyletic relationships of sipunculan worms were analyzed based on DNA sequence data from four gene regions and 58 morphological characters. Initially we analyzed the data under direct optimization using parsimony as optimality criterion. An implied alignment resulting from the direct optimization analysis was subsequently utilized to perform a Bayesian analysis with mixed models for the different data partitions. For this we applied a doublet model for the stem regions of the 18S rRNA. Both analyses support monophyly of Sipuncula and most of the same clades within the phylum. The analyses differ with respect to the relationships among the major groups but whereas the deep nodes in the direct optimization analysis generally show low jackknife support, they are supported by 100% posterior probability in the Bayesian analysis. Direct optimization has been useful for handling sequences of unequal length and generating conservative phylogenetic hypotheses whereas the Bayesian analysis under mixed models provided high resolution in the basal nodes of the tree.

  13. Facial expression movement enhances the measurement of temporal dynamics of attentional bias in the dot-probe task.

    PubMed

    Caudek, Corrado; Ceccarini, Francesco; Sica, Claudio

    2017-08-01

    The facial dot-probe task is one of the most common experimental paradigms used to assess attentional bias toward emotional information. In recent years, however, the psychometric properties of this paradigm have been questioned. In the present study, attentional bias to emotional face stimuli was measured with dynamic and static images of realistic human faces in 97 college students (63 women) who underwent either a positive or a negative mood-induction prior to the experiment. We controlled the bottom-up salience of the stimuli in order to dissociate the top-down orienting of attention from the effects of the bottom-up physical properties of the stimuli. A Bayesian analysis of our results indicates that 1) the traditional global attentional bias index shows a low reliability, 2) reliability increases dramatically when biased attention is analyzed by extracting a series of bias estimations from trial-to-trial (Zvielli, Bernstein, & Koster, 2015), 3) dynamic expression of emotions strengthens biased attention to emotional information, and 4) mood-congruency facilitates the measurement of biased attention to emotional stimuli. These results highlight the importance of using ecologically valid stimuli in attentional bias research, together with the importance of estimating biased attention at the trial level. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Functional mechanisms of probabilistic inference in feature- and space-based attentional systems.

    PubMed

    Dombert, Pascasie L; Kuhns, Anna; Mengotti, Paola; Fink, Gereon R; Vossel, Simone

    2016-11-15

    Humans flexibly attend to features or locations and these processes are influenced by the probability of sensory events. We combined computational modeling of response times with fMRI to compare the functional correlates of (re-)orienting, and the modulation by probabilistic inference in spatial and feature-based attention systems. Twenty-four volunteers performed two task versions with spatial or color cues. Percentage of cue validity changed unpredictably. A hierarchical Bayesian model was used to derive trial-wise estimates of probability-dependent attention, entering the fMRI analysis as parametric regressors. Attentional orienting activated a dorsal frontoparietal network in both tasks, without significant parametric modulation. Spatially invalid trials activated a bilateral frontoparietal network and the precuneus, while invalid feature trials activated the left intraparietal sulcus (IPS). Probability-dependent attention modulated activity in the precuneus, left posterior IPS, middle occipital gyrus, and right temporoparietal junction for spatial attention, and in the left anterior IPS for feature-based and spatial attention. These findings provide novel insights into the generality and specificity of the functional basis of attentional control. They suggest that probabilistic inference can distinctively affect each attentional subsystem, but that there is an overlap in the left IPS, which responds to both spatial and feature-based expectancy violations. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Predicting analysis time in events-driven clinical trials using accumulating time-to-event surrogate information.

    PubMed

    Wang, Jianming; Ke, Chunlei; Yu, Zhinuan; Fu, Lei; Dornseif, Bruce

    2016-05-01

    For clinical trials with time-to-event endpoints, predicting the accrual of the events of interest with precision is critical in determining the timing of interim and final analyses. For example, overall survival (OS) is often chosen as the primary efficacy endpoint in oncology studies, with planned interim and final analyses at a pre-specified number of deaths. Often, correlated surrogate information, such as time-to-progression (TTP) and progression-free survival, are also collected as secondary efficacy endpoints. It would be appealing to borrow strength from the surrogate information to improve the precision of the analysis time prediction. Currently available methods in the literature for predicting analysis timings do not consider utilizing the surrogate information. In this article, using OS and TTP as an example, a general parametric model for OS and TTP is proposed, with the assumption that disease progression could change the course of the overall survival. Progression-free survival, related both to OS and TTP, will be handled separately, as it can be derived from OS and TTP. The authors seek to develop a prediction procedure using a Bayesian method and provide detailed implementation strategies under certain assumptions. Simulations are performed to evaluate the performance of the proposed method. An application to a real study is also provided. Copyright © 2015 John Wiley & Sons, Ltd.
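
    Setting aside the surrogate-information model, the underlying prediction problem (when will a prespecified number of deaths have accrued) can be illustrated with a simple parametric simulation. The accrual pattern, hazard and event target below are hypothetical, and the sketch uses an exponential survival assumption rather than the OS/TTP model proposed in the paper.

        import numpy as np

        rng = np.random.default_rng(2)

        n_patients, target_events = 600, 350        # hypothetical trial size and event target
        median_os_months = 14.0
        hazard = np.log(2) / median_os_months       # exponential overall-survival assumption

        n_sims, analysis_times = 5000, []
        for _ in range(n_sims):
            entry = rng.uniform(0, 24, n_patients)                     # uniform accrual over 24 months
            death = entry + rng.exponential(1 / hazard, n_patients)    # calendar time of each death
            analysis_times.append(np.sort(death)[target_events - 1])   # time the target-th event occurs

        lo, med, hi = np.percentile(analysis_times, [2.5, 50, 97.5])
        print(f"predicted analysis time: {med:.1f} months (95% interval {lo:.1f}-{hi:.1f})")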

  16. Bayesian models: A statistical primer for ecologists

    USGS Publications Warehouse

    Hobbs, N. Thompson; Hooten, Mevin B.

    2015-01-01

    Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods—in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probability and develops a step-by-step sequence of connected ideas, including basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and inference from single and multiple models. This unique book places less emphasis on computer coding, favoring instead a concise presentation of the mathematical statistics needed to understand how and why Bayesian analysis works. It also explains how to write out properly formulated hierarchical Bayesian models and use them in computing, research papers, and proposals. This primer enables ecologists to understand the statistical principles behind Bayesian modeling and apply them to research, teaching, policy, and management. Presents the mathematical and statistical foundations of Bayesian modeling in language accessible to non-statisticians. Covers basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and more. Deemphasizes computer coding in favor of basic principles. Explains how to write out properly factored statistical expressions representing Bayesian models.

  17. Semiparametric Thurstonian Models for Recurrent Choices: A Bayesian Analysis

    ERIC Educational Resources Information Center

    Ansari, Asim; Iyengar, Raghuram

    2006-01-01

    We develop semiparametric Bayesian Thurstonian models for analyzing repeated choice decisions involving multinomial, multivariate binary or multivariate ordinal data. Our modeling framework has multiple components that together yield considerable flexibility in modeling preference utilities, cross-sectional heterogeneity and parameter-driven…

  18. Impact of Viral Status on Survival in Patients Receiving Sorafenib for Advanced Hepatocellular Cancer: A Meta-Analysis of Randomized Phase III Trials.

    PubMed

    Jackson, Richard; Psarelli, Eftychia-Eirini; Berhane, Sarah; Khan, Harun; Johnson, Philip

    2017-02-20

    Purpose Following the Sorafenib Hepatocellular Carcinoma Assessment Randomized Protocol (SHARP) trial, sorafenib has become the standard of care for patients with advanced unresectable hepatocellular carcinoma, but the relation between survival advantage and disease etiology remains unclear. To address this, we undertook an individual patient data meta-analysis of three large prospective randomized trials in which sorafenib was the control arm. Methods Of a total of 3,256 patients, 1,643 (50%) who received sorafenib were available. The primary end point was overall survival (OS). A Bayesian hierarchical approach for individual patient data meta-analyses was applied using a piecewise exponential model. Results are presented in terms of hazard ratios comparing sorafenib with alternative therapies according to hepatitis C virus (HCV) or hepatitis B virus (HBV) status. Results Hazard ratios show improved OS for sorafenib in patients who are both HBV negative and HCV positive (log [hazard ratio], -0.27; 95% CI, -0.46 to -0.06). Median unadjusted survival is 12.6 (11.15 to 13.8) months for sorafenib and 10.2 (8.88 to 12.2) months for "other" treatments in this subgroup. There was no evidence of improvement in OS for any other patient subgroups defined by HBV and HCV. Results were consistent across all trials with heterogeneity assessed using Cochran's Q statistic. Conclusion There is consistent evidence that the effect of sorafenib on OS is dependent on patients' hepatitis status. There is an improved OS for patients negative for HBV and positive for HCV when treated with sorafenib. There was no evidence of any improvement in OS attributable to sorafenib for patients positive for HBV and negative for HCV.

  19. Development of a practical approach to expert elicitation for randomised controlled trials with missing health outcomes: Application to the IMPROVE trial

    PubMed Central

    Mason, Alexina J; Gomes, Manuel; Grieve, Richard; Ulug, Pinar; Powell, Janet T; Carpenter, James

    2017-01-01

    Background/aims: The analyses of randomised controlled trials with missing data typically assume that, after conditioning on the observed data, the probability of missing data does not depend on the patient’s outcome, and so the data are ‘missing at random’ . This assumption is usually implausible, for example, because patients in relatively poor health may be more likely to drop out. Methodological guidelines recommend that trials require sensitivity analysis, which is best informed by elicited expert opinion, to assess whether conclusions are robust to alternative assumptions about the missing data. A major barrier to implementing these methods in practice is the lack of relevant practical tools for eliciting expert opinion. We develop a new practical tool for eliciting expert opinion and demonstrate its use for randomised controlled trials with missing data. Methods: We develop and illustrate our approach for eliciting expert opinion with the IMPROVE trial (ISRCTN 48334791), an ongoing multi-centre randomised controlled trial which compares an emergency endovascular strategy versus open repair for patients with ruptured abdominal aortic aneurysm. In the IMPROVE trial at 3 months post-randomisation, 21% of surviving patients did not complete health-related quality of life questionnaires (assessed by EQ-5D-3L). We address this problem by developing a web-based tool that provides a practical approach for eliciting expert opinion about quality of life differences between patients with missing versus complete data. We show how this expert opinion can define informative priors within a fully Bayesian framework to perform sensitivity analyses that allow the missing data to depend upon unobserved patient characteristics. Results: A total of 26 experts, of 46 asked to participate, completed the elicitation exercise. The elicited quality of life scores were lower on average for the patients with missing versus complete data, but there was considerable uncertainty in these elicited values. The missing at random analysis found that patients randomised to the emergency endovascular strategy versus open repair had higher average (95% credible interval) quality of life scores of 0.062 (−0.005 to 0.130). Our sensitivity analysis that used the elicited expert information as pooled priors found that the gain in average quality of life for the emergency endovascular strategy versus open repair was 0.076 (−0.054 to 0.198). Conclusion: We provide and exemplify a practical tool for eliciting the expert opinion required by recommended approaches to the sensitivity analyses of randomised controlled trials. We show how this approach allows the trial analysis to fully recognise the uncertainty that arises from making alternative, plausible assumptions about the reasons for missing data. This tool can be widely used in the design, analysis and interpretation of future trials, and to facilitate this, materials are available for download. PMID:28675302

  20. Deep Learning Neural Networks and Bayesian Neural Networks in Data Analysis

    NASA Astrophysics Data System (ADS)

    Chernoded, Andrey; Dudko, Lev; Myagkov, Igor; Volkov, Petr

    2017-10-01

    Most modern analyses in high energy physics use signal-versus-background classification techniques from machine learning, and neural networks in particular. Deep learning neural networks are the most promising modern technique for separating signal from background, and nowadays they can be widely and successfully implemented as part of a physics analysis. In this article we compare the application of deep learning and Bayesian neural networks as classifiers in an instance of top quark analysis.

  1. A Bayesian test for Hardy–Weinberg equilibrium of biallelic X-chromosomal markers

    PubMed Central

    Puig, X; Ginebra, J; Graffelman, J

    2017-01-01

    The X chromosome is a relatively large chromosome, harboring a lot of genetic information. Much of the statistical analysis of X-chromosomal information is complicated by the fact that males only have one copy. Recently, frequentist statistical tests for Hardy–Weinberg equilibrium have been proposed specifically for dealing with markers on the X chromosome. Bayesian test procedures for Hardy–Weinberg equilibrium for the autosomes have been described, but Bayesian work on the X chromosome in this context is lacking. This paper gives the first Bayesian approach for testing Hardy–Weinberg equilibrium with biallelic markers at the X chromosome. Marginal and joint posterior distributions for the inbreeding coefficient in females and the male to female allele frequency ratio are computed, and used for statistical inference. The paper gives a detailed account of the proposed Bayesian test, and illustrates it with data from the 1000 Genomes project. In that implementation, a novel approach to tackle multiple testing from a Bayesian perspective through posterior predictive checks is used. PMID:28900292
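
    The quantities tested (the female inbreeding coefficient and the male-to-female allele frequency ratio) have simple conjugate posteriors under flat priors, which makes a generic sampling illustration straightforward; the counts below are hypothetical placeholders, and the sketch is not the authors' exact test procedure.

        import numpy as np

        rng = np.random.default_rng(3)

        # Hypothetical X-chromosomal marker counts: females genotyped AA/AB/BB, males hemizygous A/B
        female_counts = np.array([45, 40, 15])   # AA, AB, BB
        male_counts = np.array([60, 40])         # A, B

        n_draws = 20000
        geno = rng.dirichlet(female_counts + 1, size=n_draws)             # female genotype frequencies
        p_male = rng.beta(male_counts[0] + 1, male_counts[1] + 1, n_draws)

        p_female = geno[:, 0] + 0.5 * geno[:, 1]                          # female A allele frequency
        f = 1 - geno[:, 1] / (2 * p_female * (1 - p_female))              # inbreeding coefficient
        ratio = p_male / p_female                                         # male/female allele frequency ratio

        for name, draws in (("inbreeding coefficient f", f), ("male/female frequency ratio", ratio)):
            lo, hi = np.percentile(draws, [2.5, 97.5])
            print(f"{name}: posterior mean {draws.mean():.3f}, 95% interval ({lo:.3f}, {hi:.3f})")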

  2. Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models

    NASA Astrophysics Data System (ADS)

    Ardani, S.; Kaihatu, J. M.

    2012-12-01

    Numerical models are deterministic approaches to representing the relevant physical processes in the nearshore. The complexity of the model physics and the uncertainty in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry and off-shore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of the outputs is performed by random sampling from the input probability distribution functions and running the model as required until convergence to consistent results is achieved. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA, in the fall of 1994. The joint probability of model parameters relevant for the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we can obtain more consistent results than by using the prior information for the input data, which means that the variation of the uncertain parameters will be decreased and the probability of the observed data will improve as well. Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques, MCMC

  3. Automated Bayesian model development for frequency detection in biological time series.

    PubMed

    Granqvist, Emma; Oldroyd, Giles E D; Morris, Richard J

    2011-06-24

    A first step in building a mathematical model of a biological system is often the analysis of the temporal behaviour of key quantities. Mathematical relationships between the time and frequency domain, such as Fourier Transforms and wavelets, are commonly used to extract information about the underlying signal from a given time series. This one-to-one mapping from time points to frequencies inherently assumes that both domains contain the complete knowledge of the system. However, for truncated, noisy time series with background trends this unique mapping breaks down and the question reduces to an inference problem of identifying the most probable frequencies. In this paper we build on the method of Bayesian Spectrum Analysis and demonstrate its advantages over conventional methods by applying it to a number of test cases, including two types of biological time series. Firstly, oscillations of calcium in plant root cells in response to microbial symbionts are non-stationary and noisy, posing challenges to data analysis. Secondly, circadian rhythms in gene expression measured over only two cycles highlights the problem of time series with limited length. The results show that the Bayesian frequency detection approach can provide useful results in specific areas where Fourier analysis can be uninformative or misleading. We demonstrate further benefits of the Bayesian approach for time series analysis, such as direct comparison of different hypotheses, inherent estimation of noise levels and parameter precision, and a flexible framework for modelling the data without pre-processing. Modelling in systems biology often builds on the study of time-dependent phenomena. Fourier Transforms are a convenient tool for analysing the frequency domain of time series. However, there are well-known limitations of this method, such as the introduction of spurious frequencies when handling short and noisy time series, and the requirement for uniformly sampled data. Biological time series often deviate significantly from the requirements of optimality for Fourier transformation. In this paper we present an alternative approach based on Bayesian inference. We show the value of placing spectral analysis in the framework of Bayesian inference and demonstrate how model comparison can automate this procedure.
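
    The core of Bayesian frequency detection for a single sinusoid can be sketched directly: lay a grid over candidate frequencies, fit the two quadrature amplitudes by least squares at each grid point, and convert the residuals into a marginal posterior using the Student-t form obtained after integrating out the amplitudes and the noise level with uninformative priors (the nearly constant design-matrix factor is ignored here). This is a simplified illustration of the idea, not the paper's full model-comparison machinery.

        import numpy as np

        rng = np.random.default_rng(4)

        # Synthetic noisy, truncated time series with one oscillation at 0.7 cycles per time unit
        t = np.linspace(0, 10, 120)
        y = 1.5 * np.cos(2 * np.pi * 0.7 * t + 0.4) + rng.normal(0, 1.0, t.size)

        freqs = np.linspace(0.05, 2.0, 800)
        log_post = np.empty_like(freqs)
        n = t.size
        for i, f in enumerate(freqs):
            # Least-squares fit of the cosine and sine amplitudes at this trial frequency
            X = np.column_stack([np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            rss = np.sum((y - X @ beta) ** 2)
            log_post[i] = -0.5 * (n - 2) * np.log(rss)   # marginal log-posterior up to a constant

        post = np.exp(log_post - log_post.max())
        post /= np.sum(post) * (freqs[1] - freqs[0])     # normalize on the frequency grid
        print(f"posterior mode at f = {freqs[np.argmax(post)]:.3f} cycles per time unit")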

  4. Automated Bayesian model development for frequency detection in biological time series

    PubMed Central

    2011-01-01

    Background A first step in building a mathematical model of a biological system is often the analysis of the temporal behaviour of key quantities. Mathematical relationships between the time and frequency domain, such as Fourier Transforms and wavelets, are commonly used to extract information about the underlying signal from a given time series. This one-to-one mapping from time points to frequencies inherently assumes that both domains contain the complete knowledge of the system. However, for truncated, noisy time series with background trends this unique mapping breaks down and the question reduces to an inference problem of identifying the most probable frequencies. Results In this paper we build on the method of Bayesian Spectrum Analysis and demonstrate its advantages over conventional methods by applying it to a number of test cases, including two types of biological time series. Firstly, oscillations of calcium in plant root cells in response to microbial symbionts are non-stationary and noisy, posing challenges to data analysis. Secondly, circadian rhythms in gene expression measured over only two cycles highlights the problem of time series with limited length. The results show that the Bayesian frequency detection approach can provide useful results in specific areas where Fourier analysis can be uninformative or misleading. We demonstrate further benefits of the Bayesian approach for time series analysis, such as direct comparison of different hypotheses, inherent estimation of noise levels and parameter precision, and a flexible framework for modelling the data without pre-processing. Conclusions Modelling in systems biology often builds on the study of time-dependent phenomena. Fourier Transforms are a convenient tool for analysing the frequency domain of time series. However, there are well-known limitations of this method, such as the introduction of spurious frequencies when handling short and noisy time series, and the requirement for uniformly sampled data. Biological time series often deviate significantly from the requirements of optimality for Fourier transformation. In this paper we present an alternative approach based on Bayesian inference. We show the value of placing spectral analysis in the framework of Bayesian inference and demonstrate how model comparison can automate this procedure. PMID:21702910

  5. Adaptive Randomization of Neratinib in Early Breast Cancer

    PubMed Central

    Park, John W.; Liu, Minetta C.; Yee, Douglas; Yau, Christina; van 't Veer, Laura J.; Symmans, W. Fraser; Paoloni, Melissa; Perlmutter, Jane; Hylton, Nola M.; Hogarth, Michael; DeMichele, Angela; Buxton, Meredith B.; Chien, A. Jo; Wallace, Anne M.; Boughey, Judy C.; Haddad, Tufia C.; Chui, Stephen Y.; Kemmer, Kathleen A.; Kaplan, Henry G.; Liu, Minetta C.; Isaacs, Claudine; Nanda, Rita; Tripathy, Debasish; Albain, Kathy S.; Edmiston, Kirsten K.; Elias, Anthony D.; Northfelt, Donald W.; Pusztai, Lajos; Moulder, Stacy L.; Lang, Julie E.; Viscusi, Rebecca K.; Euhus, David M.; Haley, Barbara B.; Khan, Qamar J.; Wood, William C.; Melisko, Michelle; Schwab, Richard; Lyandres, Julia; Davis, Sarah E.; Hirst, Gillian L.; Sanil, Ashish; Esserman, Laura J.; Berry, Donald A.

    2017-01-01

    Background I-SPY2, a standing, multicenter, adaptive phase 2 neoadjuvant trial ongoing in high-risk clinical stage II/III breast cancer, is designed to evaluate multiple, novel experimental agents added to standard chemotherapy for their ability to improve the rate of pathologic complete response (pCR). Experimental therapies are compared against a common control arm. We report efficacy for the tyrosine kinase inhibitor neratinib. Methods Eligible women had ≥2.5 cm stage II/III breast cancer, categorized into 8 biomarker subtypes based on HER2, hormone-receptor status (HR), and MammaPrint. Neratinib was evaluated for 10 signatures (prospectively defined subtype combinations), with primary endpoint pCR. MR volume changes inform likelihood of pCR for each patient prior to surgery. Adaptive assignment to experimental arms within disease subtype was based on current Bayesian probabilities of superiority over control. Accrual to an experimental arm can stop at any time for futility, or for graduation within a particular signature, based on the Bayesian predictive probability of success in a confirmatory trial. The maximum sample size in any experimental arm is 120 patients. Results With 115 patients and 78 concurrently randomized controls, neratinib graduated in the HER2+/HR− signature, with mean pCR rate 56% (95% PI: 37 to 73%) vs 33% for controls (11 to 54%). Final predictive probability of success, updated when all pathology data were available, was 79%. Conclusion Adaptive, multi-armed trials can efficiently identify responding tumor subtypes. Neratinib added to standard therapy is highly likely to improve pCR rates in HER2+/HR− breast cancer. Confirmation in I-SPY 3, a phase 3 neoadjuvant registration trial, is planned. PMID:27406346
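
    The adaptive assignment and graduation rules rest on posterior probabilities that an experimental arm beats control on pCR. With Beta priors and binomial pCR counts, the probability of superiority has a simple Monte Carlo form. The sketch below uses the reported arm sizes with responder counts roughly implied by the quoted rates, purely to illustrate the calculation; it is not the trial's actual algorithm, which also uses longitudinal MR-volume modelling and predictive probabilities for a future confirmatory trial.

        import numpy as np

        rng = np.random.default_rng(5)

        # Approximate counts implied by the abstract's rates (illustrative only):
        # about 56% pCR among 115 neratinib patients, about 33% among 78 concurrent controls.
        n_trt, x_trt = 115, 64
        n_ctl, x_ctl = 78, 26

        draws = 200_000
        p_trt = rng.beta(1 + x_trt, 1 + n_trt - x_trt, draws)   # Beta(1,1) prior updated by the data
        p_ctl = rng.beta(1 + x_ctl, 1 + n_ctl - x_ctl, draws)

        prob_superior = np.mean(p_trt > p_ctl)
        print(f"posterior P(pCR_neratinib > pCR_control) = {prob_superior:.3f}")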

  6. Anhedonia and anxiety underlying depressive symptomatology have distinct effects on reward-based decision-making.

    PubMed

    Harlé, Katia M; Guo, Dalin; Zhang, Shunan; Paulus, Martin P; Yu, Angela J

    2017-01-01

    Depressive pathology, which includes both heightened negative affect (e.g., anxiety) and reduced positive affect (e.g., anhedonia), is known to be associated with sub-optimal decision-making, particularly in uncertain environments. Here, we use a computational approach to quantify and disambiguate how individual differences in these affective measures specifically relate to different aspects of learning and decision-making in reward-based choice behavior. Fifty-three individuals with a range of depressed mood completed a two-armed bandit task, in which they choose between two arms with fixed but unknown reward rates. The decision-making component, which chooses among options based on current expectations about reward rates, is modeled by two different decision policies: a learning-independent Win-stay/Lose-shift (WSLS) policy that ignores all previous experiences except the last trial, and Softmax, which prefers the arm with the higher expected reward. To model the learning component for the Softmax choice policy, we use a Bayesian inference model, which updates estimated reward rates based on the observed history of trial outcomes. Softmax with Bayesian learning better fits the behavior of 55% of the participants, while the others are better fit by a learning-independent WSLS strategy. Among Softmax "users", those with higher anhedonia are less likely to choose the option estimated to be most rewarding. Moreover, the Softmax parameter mediates the inverse relationship between anhedonia and overall monetary gains. On the other hand, among WSLS "users", higher state anxiety correlates with increasingly better ability of WSLS, relative to Softmax, to explain subjects' trial-by-trial choices. In summary, there is significant variability among individuals in their reward-based, exploratory decision-making, and this variability is at least partly mediated in a very specific manner by affective attributes, such as hedonic tone and state anxiety.
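
    The two decision policies contrasted above are easy to state concretely: win-stay/lose-shift repeats the last arm after a reward and switches after a loss, whereas Softmax chooses in proportion to current reward-rate estimates, here maintained by Bayesian (Beta-posterior) updating. A minimal simulation sketch, with reward rates, prior and inverse temperature chosen as arbitrary illustrative values:

        import numpy as np

        rng = np.random.default_rng(6)

        true_rates = np.array([0.35, 0.65])   # hypothetical fixed but unknown reward rates
        n_trials, inv_temp = 200, 5.0

        # Softmax policy with Bayesian (Beta) learning of each arm's reward rate
        a = np.ones(2)                         # Beta posterior "successes + 1"
        b = np.ones(2)                         # Beta posterior "failures + 1"
        softmax_reward = 0
        for _ in range(n_trials):
            est = a / (a + b)                  # posterior mean reward rates
            p = np.exp(inv_temp * est)
            p /= p.sum()
            arm = rng.choice(2, p=p)
            r = rng.random() < true_rates[arm]
            a[arm] += r
            b[arm] += 1 - r
            softmax_reward += r

        # Win-stay/lose-shift policy, which ignores all history except the last trial
        arm, wsls_reward = 0, 0
        for _ in range(n_trials):
            r = rng.random() < true_rates[arm]
            wsls_reward += r
            arm = arm if r else 1 - arm

        print(f"Softmax+Bayes total reward: {softmax_reward}, WSLS total reward: {wsls_reward}")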

  7. Bayesian ensemble refinement by replica simulations and reweighting.

    PubMed

    Hummer, Gerhard; Köfinger, Jürgen

    2015-12-28

    We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.
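
    A hedged LaTeX sketch of the replica-restraint idea summarized above, using generic placeholder symbols (observables s_j, uncertainties sigma_j) rather than the paper's exact notation; the key point is the restraint prefactor growing linearly with the number of replicas N:

      % N replicas x_1..x_N, M ensemble-averaged observables with targets \hat{s}_j
      % (symbols here are generic placeholders, not the paper's notation)
      \begin{equation}
        E_N(x_1,\dots,x_N) \;=\; \sum_{i=1}^{N} E_0(x_i)
        \;+\; \frac{N\,k_{\mathrm{B}}T}{2}\sum_{j=1}^{M}
              \frac{1}{\sigma_j^{2}}
              \Bigl(\tfrac{1}{N}\textstyle\sum_{i=1}^{N} s_j(x_i) - \hat{s}_j\Bigr)^{2}
      \end{equation}
      % The prefactor scales linearly with N so that, as N grows, the coupled-replica
      % ensemble converges toward the optimal Bayesian/maximum-entropy ensemble.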

  8. Bayesian ensemble refinement by replica simulations and reweighting

    NASA Astrophysics Data System (ADS)

    Hummer, Gerhard; Köfinger, Jürgen

    2015-12-01

    We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.

  9. Effects of intranasal oxytocin on symptoms of schizophrenia: A multivariate Bayesian meta-analysis.

    PubMed

    Williams, Donald R; Bürkner, Paul-Christian

    2017-01-01

    Schizophrenia is a heterogeneous disorder in which psychiatric symptoms are classified into two general subgroups: positive and negative symptoms. Current antipsychotic drugs are effective for treating positive symptoms, whereas negative symptoms are less responsive. Since the neuropeptide oxytocin (OT) has been shown to mediate social behavior in animals and humans, it has been used as an experimental therapeutic for treating schizophrenia, and in particular negative symptoms, which include social deficits. Across eight randomized controlled trials (RCTs) and three meta-analyses, evidence for an effect of intranasal OT (IN-OT) has been inconsistent. We therefore conducted an updated meta-analysis that offers several advantages when compared to those done previously: (1) We used a multivariate analysis which allows for comparisons between symptoms and accounts for correlations between symptoms; (2) We controlled for baseline scores; (3) We used a fully Bayesian framework that allows for assessment of evidence in favor of the null hypothesis using Bayes factors; and (4) We addressed inconsistencies in the primary studies and previous meta-analyses. Eight RCTs (n=238) were included in the present study, and we found that oxytocin did not improve any aspect of symptomatology in schizophrenic patients; there was moderate evidence in favor of the null (no effect of oxytocin) for negative symptoms. Multivariate comparisons between symptom types revealed that oxytocin was not especially beneficial for treating negative symptoms. The effect size estimates were not moderated, publication bias was absent, and our estimates were robust to sensitivity analyses. These results suggest that IN-OT is not an effective therapeutic for schizophrenia. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Benefits of varenicline vs. bupropion for smoking cessation: a Bayesian analysis of the interaction of reward sensitivity and treatment

    PubMed Central

    Green, Charles E.; Robinson, Jason D.; Karam-Hage, Maher; Engelmann, Jeffrey M.; Minnix, Jennifer A.; Wetter, David W.; Versace, Francesco

    2018-01-01

    Rationale We have shown that differences in the level of neural activation to stimuli associated with smoking vs. natural rewards, a biomarker related to reward sensitivity, predict treatment outcome. Objectives This paper examined whether this biomarker moderates the impact of bupropion or varenicline on smoking cessation. Methods Prior to treatment randomization, smokers (N = 180) in a placebo-controlled trial using bupropion and varenicline completed event-related potential recording (late positive potential, LPP) while viewing pleasant (P), cigarette (C)-related, and other pictures. We used Bayesian models to estimate the probability of interaction between treatment and the LPP for both efficacy and comparative effectiveness analyses. Results Efficacy analysis showed that smokers with more neural activation to pleasant vs. cigarette-related stimuli (P > C) had a 98–99% chance of achieving greater abstinence than placebo (OR >1.00), using either medication from the end of treatment (EOT, primary outcome) through the 3-month follow-up. Relative to placebo, smokers with higher activation to cigarette-related vs. pleasant stimuli (C > P) had a 99% chance of increased benefit from varenicline at both time points (OR >1), but only 67 and 43% with bupropion at the EOT and 3-month follow-up, respectively. Comparative effectiveness analysis found that smokers with the C > P activation pattern had a 95–98% chance of benefit from varenicline vs. bupropion, while P > C smokers had a 50–58% chance of similar improvement with varenicline at the EOT and 3 months. Conclusions Varenicline appears to be the treatment of choice for smokers with the C > P pattern of neural activation, while for those showing P > C, varenicline and bupropion have similar efficacy. PMID:28275830
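
    The probabilities of benefit reported above are posterior quantities of the form P(effect > threshold | data). A simplified sketch of such a calculation for one biomarker-defined subgroup, assuming independent Beta posteriors on abstinence rates and purely hypothetical counts (the paper's analysis is regression-based with a treatment-by-LPP interaction, not reproduced here):

      import numpy as np

      rng = np.random.default_rng(2)

      def prob_benefit(x_med, n_med, x_pbo, n_pbo, n_draws=100000):
          """P(abstinence rate on medication > rate on placebo | data),
          with independent Beta(1, 1) priors on each rate."""
          p_med = rng.beta(1 + x_med, 1 + n_med - x_med, n_draws)
          p_pbo = rng.beta(1 + x_pbo, 1 + n_pbo - x_pbo, n_draws)
          return np.mean(p_med > p_pbo)

      # Hypothetical end-of-treatment abstinence counts within a C > P subgroup
      print("varenicline vs placebo:", prob_benefit(14, 30, 5, 28))
      print("bupropion vs placebo:  ", prob_benefit(8, 29, 5, 28))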

  11. Modeling and validating Bayesian accrual models on clinical data and simulations using adaptive priors.

    PubMed

    Jiang, Yu; Simon, Steve; Mayo, Matthew S; Gajewski, Byron J

    2015-02-20

    Slow recruitment in clinical trials leads to increased costs and resource utilization, which includes both the clinic staff and patient volunteers. Careful planning and monitoring of the accrual process can prevent the unnecessary loss of these resources. We propose two hierarchical extensions to the existing Bayesian constant accrual model: the accelerated prior and the hedging prior. The new proposed priors are able to adaptively utilize the researcher's previous experience and current accrual data to produce estimates of the trial completion time. The performance of these models, including prediction precision, coverage probability, and correct decision-making ability, is evaluated using actual studies from our cancer center and simulation. The results showed that a constant accrual model with strongly informative priors is very accurate when accrual is on target or slightly off, producing smaller mean squared error, a high coverage percentage, and a high number of correct decisions as to whether or not to continue the trial, but it is strongly biased when off target. Flat or weakly informative priors provide protection against an off-target prior but are less efficient when the accrual is on target. The accelerated prior performs similarly to a strong prior. The hedging prior performs much like the weak priors when the accrual is extremely off target but closer to the strong priors when the accrual is on target or only slightly off target. We suggest improvements in these models and propose new models for future research. Copyright © 2014 John Wiley & Sons, Ltd.
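
    A minimal sketch of the kind of constant-accrual prediction underlying these models, assuming exponential waiting times with a conjugate Gamma prior on the accrual rate and invented numbers; the accelerated and hedging priors of the paper are hierarchical extensions not shown here:

      import numpy as np

      rng = np.random.default_rng(3)

      def predict_completion(n_observed, t_observed, n_target,
                             prior_shape, prior_rate, n_draws=50000):
          """Posterior predictive summary (2.5%, 50%, 97.5%) of total trial duration
          under a constant (exponential waiting-time) accrual model.

          Accrual rate lambda ~ Gamma(prior_shape, prior_rate); after observing
          n_observed enrolments in t_observed months the posterior is
          Gamma(prior_shape + n_observed, prior_rate + t_observed)."""
          lam = rng.gamma(prior_shape + n_observed,
                          1.0 / (prior_rate + t_observed), n_draws)
          # Time to accrue the remaining patients is a Gamma-distributed waiting time
          remaining = rng.gamma(n_target - n_observed, 1.0 / lam)
          return np.percentile(t_observed + remaining, [2.5, 50, 97.5])

      # Hypothetical numbers: informative prior centred on 10 patients/month,
      # 18 patients observed in the first 3 months, target of 120 patients
      print(predict_completion(n_observed=18, t_observed=3.0, n_target=120,
                               prior_shape=50.0, prior_rate=5.0))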

  12. Systematic review with meta-analysis: comparative efficacy of biologics for induction and maintenance of mucosal healing in Crohn's disease and ulcerative colitis controlled trials.

    PubMed

    Cholapranee, A; Hazlewood, G S; Kaplan, G G; Peyrin-Biroulet, L; Ananthakrishnan, A N

    2017-05-01

    Mucosal healing is an important therapeutic endpoint in the management of Crohn's disease (CD) and ulcerative colitis (UC). Limited data exist regarding the comparative efficacy of various therapies in achieving this outcome. To perform a systematic review and meta-analysis of biologics for induction and maintenance of mucosal healing in Crohn's disease and ulcerative colitis. We performed a systematic review and meta-analysis of randomised controlled trials (RCT) examining mucosal healing as an endpoint of immunosuppressives, anti-tumour necrosis factor α (anti-TNF) or anti-integrin monoclonal antibody therapy for moderate-to-severe CD or UC. Pooled effect sizes for induction and maintenance of mucosal healing were calculated and pairwise treatment comparisons evaluated using a Bayesian network meta-analysis. A total of 12 RCTs were included in the meta-analysis (CD - 2 induction, 4 maintenance; UC - 8 induction, 5 maintenance). Duration of follow-up was 6-12 weeks for induction and 32-54 weeks for maintenance trials. In CD, anti-TNFs were more effective than placebo for maintaining mucosal healing [28% vs. 1%, Odds ratio (OR) 19.71, 95% confidence interval (CI) 3.51-110.84]. In UC, anti-TNFs and anti-integrins were more effective than placebo for inducing (45% vs. 30%) and maintaining mucosal healing (33% vs. 18%). In network analysis, adalimumab therapy was inferior to infliximab [OR 0.45, 95% credible interval (CrI) 0.25-0.82] and combination infliximab-azathioprine (OR 0.32, 95% CrI 0.12-0.84) for inducing mucosal healing in UC. There was no statistically significant pairwise difference between vedolizumab and anti-TNF agents in UC. Anti-TNF and anti-integrin biological agents are effective in inducing mucosal healing in UC, with adalimumab being inferior to infliximab or combination therapy. Infliximab and adalimumab were similar in CD. © 2017 John Wiley & Sons Ltd.

  13. Inferring on the Intentions of Others by Hierarchical Bayesian Learning

    PubMed Central

    Diaconescu, Andreea O.; Mathys, Christoph; Weber, Lilian A. E.; Daunizeau, Jean; Kasper, Lars; Lomakina, Ekaterina I.; Fehr, Ernst; Stephan, Klaas E.

    2014-01-01

    Inferring on others' (potentially time-varying) intentions is a fundamental problem during many social transactions. To investigate the underlying mechanisms, we applied computational modeling to behavioral data from an economic game in which 16 pairs of volunteers (randomly assigned to “player” or “adviser” roles) interacted. The player performed a probabilistic reinforcement learning task, receiving information about a binary lottery from a visual pie chart. The adviser, who received more predictive information, issued an additional recommendation. Critically, the game was structured such that the adviser's incentives to provide helpful or misleading information varied in time. Using a meta-Bayesian modeling framework, we found that the players' behavior was best explained by the deployment of hierarchical learning: they inferred upon the volatility of the advisers' intentions in order to optimize their predictions about the validity of their advice. Beyond learning, volatility estimates also affected the trial-by-trial variability of decisions: participants were more likely to rely on their estimates of advice accuracy for making choices when they believed that the adviser's intentions were presently stable. Finally, our model of the players' inference predicted the players' interpersonal reactivity index (IRI) scores, explicit ratings of the advisers' helpfulness and the advisers' self-reports on their chosen strategy. Overall, our results suggest that humans (i) employ hierarchical generative models to infer on the changing intentions of others, (ii) use volatility estimates to inform decision-making in social interactions, and (iii) integrate estimates of advice accuracy with non-social sources of information. The Bayesian framework presented here can quantify individual differences in these mechanisms from simple behavioral readouts and may prove useful in future clinical studies of maladaptive social cognition. PMID:25187943

  14. Assessment of parametric uncertainty for groundwater reactive transport modeling.

    USGS Publications Warehouse

    Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun

    2014-01-01

    The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, predictive performance of formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(zs)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.

  15. Spatiotemporal dynamics of random stimuli account for trial-to-trial variability in perceptual decision making

    PubMed Central

    Park, Hame; Lueckmann, Jan-Matthis; von Kriegstein, Katharina; Bitzer, Sebastian; Kiebel, Stefan J.

    2016-01-01

    Decisions in everyday life are prone to error. Standard models typically assume that errors during perceptual decisions are due to noise. However, it is unclear how noise in the sensory input affects the decision. Here we show that there are experimental tasks for which one can analyse the exact spatio-temporal details of a dynamic sensory noise and better understand variability in human perceptual decisions. Using a new experimental visual tracking task and a novel Bayesian decision making model, we found that the spatio-temporal noise fluctuations in the input of single trials explain a significant part of the observed responses. Our results show that modelling the precise internal representations of human participants helps predict when perceptual decisions go wrong. Furthermore, by modelling precisely the stimuli at the single-trial level, we were able to identify the underlying mechanism of perceptual decision making in more detail than standard models. PMID:26752272

  16. Multivariate meta-analysis using individual participant data.

    PubMed

    Riley, R D; Price, M J; Jackson, D; Wardle, M; Gueyffier, F; Wang, J; Staessen, J A; White, I R

    2015-06-01

    When combining results across related studies, a multivariate meta-analysis allows the joint synthesis of correlated effect estimates from multiple outcomes. Joint synthesis can improve efficiency over separate univariate syntheses, may reduce selective outcome reporting biases, and enables joint inferences across the outcomes. A common issue is that within-study correlations needed to fit the multivariate model are unknown from published reports. However, provision of individual participant data (IPD) allows them to be calculated directly. Here, we illustrate how to use IPD to estimate within-study correlations, using a joint linear regression for multiple continuous outcomes and bootstrapping methods for binary, survival and mixed outcomes. In a meta-analysis of 10 hypertension trials, we then show how these methods enable multivariate meta-analysis to address novel clinical questions about continuous, survival and binary outcomes; treatment-covariate interactions; adjusted risk/prognostic factor effects; longitudinal data; prognostic and multiparameter models; and multiple treatment comparisons. Both frequentist and Bayesian approaches are applied, with example software code provided to derive within-study correlations and to fit the models. © 2014 The Authors. Research Synthesis Methods published by John Wiley & Sons, Ltd.
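
    A simplified sketch of the bootstrap idea for obtaining a within-study correlation from IPD, here for two correlated continuous outcomes in a single synthetic trial (the paper also covers joint regression and binary, survival and mixed outcomes, which are not shown):

      import numpy as np

      rng = np.random.default_rng(4)

      def within_study_correlation(ipd, n_boot=2000):
          """Bootstrap the within-study correlation between two treatment-effect
          estimates (here: mean differences for two continuous outcomes).

          ipd: dict with arrays 'treat' (0/1), 'y1', 'y2' for one trial."""
          treat, y1, y2 = ipd["treat"], ipd["y1"], ipd["y2"]
          n = len(treat)
          eff = np.empty((n_boot, 2))
          for b in range(n_boot):
              idx = rng.integers(0, n, n)                      # resample patients
              t, a, c = treat[idx], y1[idx], y2[idx]
              eff[b, 0] = a[t == 1].mean() - a[t == 0].mean()  # outcome-1 effect
              eff[b, 1] = c[t == 1].mean() - c[t == 0].mean()  # outcome-2 effect
          return np.corrcoef(eff.T)[0, 1]

      # Synthetic IPD: correlated systolic/diastolic blood-pressure reductions
      n = 400
      treat = rng.integers(0, 2, n)
      base = rng.normal(0, 1, n)
      ipd = {"treat": treat,
             "y1": -5.0 * treat + 8 * base + rng.normal(0, 4, n),
             "y2": -3.0 * treat + 5 * base + rng.normal(0, 3, n)}
      print("estimated within-study correlation:", within_study_correlation(ipd))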

  17. A Bayesian Multinomial Probit Model for the Analysis of Panel Choice Data.

    PubMed

    Fong, Duncan K H; Kim, Sunghoon; Chen, Zhe; DeSarbo, Wayne S

    2016-03-01

    A new Bayesian multinomial probit model is proposed for the analysis of panel choice data. Using a parameter expansion technique, we are able to devise a Markov Chain Monte Carlo algorithm to compute our Bayesian estimates efficiently. We also show that the proposed procedure enables the estimation of individual level coefficients for the single-period multinomial probit model even when the available prior information is vague. We apply our new procedure to consumer purchase data and reanalyze a well-known scanner panel dataset that reveals new substantive insights. In addition, we delineate a number of advantageous features of our proposed procedure over several benchmark models. Finally, through a simulation analysis employing a fractional factorial design, we demonstrate that the results from our proposed model are quite robust with respect to differing factors across various conditions.

  18. Predictive distributions for between-study heterogeneity and simple methods for their application in Bayesian meta-analysis

    PubMed Central

    Turner, Rebecca M; Jackson, Dan; Wei, Yinghui; Thompson, Simon G; Higgins, Julian P T

    2015-01-01

    Numerous meta-analyses in healthcare research combine results from only a small number of studies, for which the variance representing between-study heterogeneity is estimated imprecisely. A Bayesian approach to estimation allows external evidence on the expected magnitude of heterogeneity to be incorporated. The aim of this paper is to provide tools that improve the accessibility of Bayesian meta-analysis. We present two methods for implementing Bayesian meta-analysis, using numerical integration and importance sampling techniques. Based on 14 886 binary outcome meta-analyses in the Cochrane Database of Systematic Reviews, we derive a novel set of predictive distributions for the degree of heterogeneity expected in 80 settings depending on the outcomes assessed and comparisons made. These can be used as prior distributions for heterogeneity in future meta-analyses. The two methods are implemented in R, for which code is provided. Both methods produce equivalent results to standard but more complex Markov chain Monte Carlo approaches. The priors are derived as log-normal distributions for the between-study variance, applicable to meta-analyses of binary outcomes on the log odds-ratio scale. The methods are applied to two example meta-analyses, incorporating the relevant predictive distributions as prior distributions for between-study heterogeneity. We have provided resources to facilitate Bayesian meta-analysis, in a form accessible to applied researchers, which allow relevant prior information on the degree of heterogeneity to be incorporated. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:25475839
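
    A minimal sketch of a Bayesian random-effects meta-analysis by numerical integration, with a log-normal prior on the between-study variance as in the predictive distributions described above; the grid, the vague prior on the pooled effect, and the heterogeneity-prior parameters used below are illustrative placeholders, not values quoted from the paper:

      import numpy as np
      from scipy.stats import norm, lognorm

      def bayes_meta(y, v, tau2_logmean, tau2_logsd,
                     mu_grid=np.linspace(-3, 3, 601),
                     tau_grid=np.linspace(1e-3, 2.0, 400)):
          """Random-effects Bayesian meta-analysis of log odds ratios by numerical
          integration on a (mu, tau) grid.

          y, v        : study log-ORs and within-study variances
          tau^2 prior : log-normal(tau2_logmean, tau2_logsd) on the between-study variance
          mu prior    : vague normal(0, 10^2).  Returns posterior mean and sd of mu."""
          M, T = np.meshgrid(mu_grid, tau_grid, indexing="ij")
          loglik = np.zeros_like(M)
          for yi, vi in zip(y, v):
              loglik += norm.logpdf(yi, loc=M, scale=np.sqrt(vi + T**2))
          logprior = (norm.logpdf(M, 0, 10)
                      + lognorm.logpdf(T**2, s=tau2_logsd, scale=np.exp(tau2_logmean))
                      + np.log(2 * T))           # Jacobian: d(tau^2) = 2 tau d(tau)
          post = np.exp(loglik + logprior - (loglik + logprior).max())
          post /= post.sum()
          mu_post = post.sum(axis=1)             # marginal over tau
          mean = (mu_grid * mu_post).sum()
          sd = np.sqrt(((mu_grid - mean) ** 2 * mu_post).sum())
          return mean, sd

      # Toy data: 4 small trials plus an illustrative (not paper-derived) heterogeneity prior
      y = np.array([-0.4, -0.1, -0.6, 0.2]); v = np.array([0.09, 0.12, 0.20, 0.15])
      print(bayes_meta(y, v, tau2_logmean=-2.56, tau2_logsd=1.74))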

  19. A fully Bayesian before-after analysis of permeable friction course (PFC) pavement wet weather safety.

    PubMed

    Buddhavarapu, Prasad; Smit, Andre F; Prozzi, Jorge A

    2015-07-01

    Permeable friction course (PFC), a porous hot-mix asphalt, is typically applied to improve wet weather safety on high-speed roadways in Texas. In order to warrant expensive PFC construction, a statistical evaluation of its safety benefits is essential. Generally, the literature on the effectiveness of porous mixes in reducing wet-weather crashes is limited and often inconclusive. In this study, the safety effectiveness of PFC was evaluated using a fully Bayesian before-after safety analysis. First, two groups of road segments overlaid with PFC and non-PFC material were identified across Texas; the non-PFC or reference road segments selected were similar to their PFC counterparts in terms of site specific features. Second, a negative binomial data generating process was assumed to model the underlying distribution of crash counts of PFC and reference road segments to perform Bayesian inference on the safety effectiveness. A data-augmentation based computationally efficient algorithm was employed for a fully Bayesian estimation. The statistical analysis shows that PFC is not effective in reducing wet weather crashes. It should be noted that the findings of this study are in agreement with the existing literature, although these studies were not based on a fully Bayesian statistical analysis. Our study suggests that the safety effectiveness of PFC road surfaces, or any other safety infrastructure, largely relies on its interrelationship with the road user. The results suggest that the safety infrastructure must be properly used to reap the benefits of the substantial investments. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. A fully Bayesian method for jointly fitting instrumental calibration and X-ray spectral models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Jin; Yu, Yaming; Van Dyk, David A.

    2014-10-20

    Owing to a lack of robust principled methods, systematic instrumental uncertainties have generally been ignored in astrophysical data analysis despite wide recognition of the importance of including them. Ignoring calibration uncertainty can cause bias in the estimation of source model parameters and can lead to underestimation of the variance of these estimates. We previously introduced a pragmatic Bayesian method to address this problem. The method is 'pragmatic' in that it introduced an ad hoc technique that simplified computation by neglecting the potential information in the data for narrowing the uncertainty for the calibration product. Following that work, we use a principal component analysis to efficiently represent the uncertainty of the effective area of an X-ray (or γ-ray) telescope. Here, however, we leverage this representation to enable a principled, fully Bayesian method that coherently accounts for the calibration uncertainty in high-energy spectral analysis. In this setting, the method is compared with standard analysis techniques and the pragmatic Bayesian method. The advantage of the fully Bayesian method is that it allows the data to provide information not only for estimation of the source parameters but also for the calibration product—here the effective area, conditional on the adopted spectral model. In this way, it can yield more accurate and efficient estimates of the source parameters along with valid estimates of their uncertainty. Provided that the source spectrum can be accurately described by a parameterized model, this method allows rigorous inference about the effective area by quantifying which possible curves are most consistent with the data.

  1. Bayesian Techniques for Plasma Theory to Bridge the Gap Between Space and Lab Plasmas

    NASA Astrophysics Data System (ADS)

    Crabtree, Chris; Ganguli, Gurudas; Tejero, Erik

    2017-10-01

    We will show how Bayesian techniques provide a general data analysis methodology that is better suited to investigate phenomena that require a nonlinear theory for an explanation. We will provide short examples of how Bayesian techniques have been successfully used in the radiation belts to provide precise nonlinear spectral estimates of whistler mode chorus and how these techniques have been verified in laboratory plasmas. We will demonstrate how Bayesian techniques allow for the direct competition of different physical theories with data acting as the necessary arbitrator. This work is supported by the Naval Research Laboratory base program and by the National Aeronautics and Space Administration under Grant No. NNH15AZ90I.

  2. Bayesian just-so stories in psychology and neuroscience.

    PubMed

    Bowers, Jeffrey S; Davis, Colin J

    2012-05-01

    According to Bayesian theories in psychology and neuroscience, minds and brains are (near) optimal in solving a wide range of tasks. We challenge this view and argue that more traditional, non-Bayesian approaches are more promising. We make 3 main arguments. First, we show that the empirical evidence for Bayesian theories in psychology is weak. This weakness relates to the many arbitrary ways that priors, likelihoods, and utility functions can be altered in order to account for the data that are obtained, making the models unfalsifiable. It further relates to the fact that Bayesian theories are rarely better at predicting data compared with alternative (and simpler) non-Bayesian theories. Second, we show that the empirical evidence for Bayesian theories in neuroscience is weaker still. There are impressive mathematical analyses showing how populations of neurons could compute in a Bayesian manner but little or no evidence that they do. Third, we challenge the general scientific approach that characterizes Bayesian theorizing in cognitive science. A common premise is that theories in psychology should largely be constrained by a rational analysis of what the mind ought to do. We question this claim and argue that many of the important constraints come from biological, evolutionary, and processing (algorithmic) considerations that have no adaptive relevance to the problem per se. In our view, these factors have contributed to the development of many Bayesian "just so" stories in psychology and neuroscience; that is, mathematical analyses of cognition that can be used to explain almost any behavior as optimal. 2012 APA, all rights reserved.

  3. Evaluation of a Partial Genome Screening of Two Asthma Susceptibility Regions Using Bayesian Network Based Bayesian Multilevel Analysis of Relevance

    PubMed Central

    Antal, Péter; Kiszel, Petra Sz.; Gézsi, András; Hadadi, Éva; Virág, Viktor; Hajós, Gergely; Millinghoffer, András; Nagy, Adrienne; Kiss, András; Semsei, Ágnes F.; Temesi, Gergely; Melegh, Béla; Kisfali, Péter; Széll, Márta; Bikov, András; Gálffy, Gabriella; Tamási, Lilla; Falus, András; Szalai, Csaba

    2012-01-01

    Genetic studies indicate high number of potential factors related to asthma. Based on earlier linkage analyses we selected the 11q13 and 14q22 asthma susceptibility regions, for which we designed a partial genome screening study using 145 SNPs in 1201 individuals (436 asthmatic children and 765 controls). The results were evaluated with traditional frequentist methods and we applied a new statistical method, called Bayesian network based Bayesian multilevel analysis of relevance (BN-BMLA). This method uses Bayesian network representation to provide detailed characterization of the relevance of factors, such as joint significance, the type of dependency, and multi-target aspects. We estimated posteriors for these relations within the Bayesian statistical framework, in order to estimate the posteriors whether a variable is directly relevant or its association is only mediated. With frequentist methods one SNP (rs3751464 in the FRMD6 gene) provided evidence for an association with asthma (OR = 1.43(1.2–1.8); p = 3×10−4). The possible role of the FRMD6 gene in asthma was also confirmed in an animal model and human asthmatics. In the BN-BMLA analysis altogether 5 SNPs in 4 genes were found relevant in connection with asthma phenotype: PRPF19 on chromosome 11, and FRMD6, PTGER2 and PTGDR on chromosome 14. In a subsequent step a partial dataset containing rhinitis and further clinical parameters was used, which allowed the analysis of relevance of SNPs for asthma and multiple targets. These analyses suggested that SNPs in the AHNAK and MS4A2 genes were indirectly associated with asthma. This paper indicates that BN-BMLA explores the relevant factors more comprehensively than traditional statistical methods and extends the scope of strong relevance based methods to include partial relevance, global characterization of relevance and multi-target relevance. PMID:22432035

  4. A Two-Step Bayesian Approach for Propensity Score Analysis: Simulations and Case Study.

    PubMed

    Kaplan, David; Chen, Jianshen

    2012-07-01

    A two-step Bayesian propensity score approach is introduced that incorporates prior information in the propensity score equation and outcome equation without the problems associated with simultaneous Bayesian propensity score approaches. The corresponding variance estimators are also provided. The two-step Bayesian propensity score is provided for three methods of implementation: propensity score stratification, weighting, and optimal full matching. Three simulation studies and one case study are presented to elaborate the proposed two-step Bayesian propensity score approach. Results of the simulation studies reveal that greater precision in the propensity score equation yields better recovery of the frequentist-based treatment effect. A slight advantage is shown for the Bayesian approach in small samples. Results also reveal that greater precision around the wrong treatment effect can lead to seriously distorted results. However, greater precision around the correct treatment effect parameter yields quite good results, with slight improvement seen with greater precision in the propensity score equation. A comparison of coverage rates for the conventional frequentist approach and proposed Bayesian approach is also provided. The case study reveals that credible intervals are wider than frequentist confidence intervals when priors are non-informative.

  5. Robust Bayesian Analysis of Heavy-tailed Stochastic Volatility Models using Scale Mixtures of Normal Distributions

    PubMed Central

    Abanto-Valle, C. A.; Bandyopadhyay, D.; Lachos, V. H.; Enriquez, I.

    2009-01-01

    A Bayesian analysis of stochastic volatility (SV) models using the class of symmetric scale mixtures of normal (SMN) distributions is considered. In the face of non-normality, this provides an appealing robust alternative to the routine use of the normal distribution. Specific distributions examined include the normal, Student-t, slash and the variance gamma distributions. Using a Bayesian paradigm, an efficient Markov chain Monte Carlo (MCMC) algorithm is introduced for parameter estimation. Moreover, the mixing parameters obtained as a by-product of the scale mixture representation can be used to identify outliers. The methods developed are applied to analyze daily stock return data on the S&P500 index. Bayesian model selection criteria as well as out-of-sample forecasting results reveal that the SV models based on heavy-tailed SMN distributions provide significant improvement in model fit as well as prediction to the S&P500 index data over the usual normal model. PMID:20730043

  6. Bayesian Analysis of the Association between Family-Level Factors and Siblings' Dental Caries.

    PubMed

    Wen, A; Weyant, R J; McNeil, D W; Crout, R J; Neiswanger, K; Marazita, M L; Foxman, B

    2017-07-01

    We conducted a Bayesian analysis of the association between family-level socioeconomic status and smoking and the prevalence of dental caries among siblings (children from infant to 14 y) among children living in rural and urban Northern Appalachia using data from the Center for Oral Health Research in Appalachia (COHRA). The observed proportion of siblings sharing caries was significantly different from predicted assuming siblings' caries status was independent. Using a Bayesian hierarchical model, we found the inclusion of a household factor significantly improved the goodness of fit. Other findings showed an inverse association between parental education and siblings' caries and a positive association between households with smokers and siblings' caries. Our study strengthens existing evidence suggesting that increased parental education and decreased parental cigarette smoking are associated with reduced childhood caries in the household. Our results also demonstrate the value of a Bayesian approach, which allows us to include household as a random effect, thereby providing more accurate estimates than obtained using generalized linear mixed models.

  7. Bayesian generalized least squares regression with application to log Pearson type 3 regional skew estimation

    NASA Astrophysics Data System (ADS)

    Reis, D. S.; Stedinger, J. R.; Martins, E. S.

    2005-10-01

    This paper develops a Bayesian approach to analysis of a generalized least squares (GLS) regression model for regional analyses of hydrologic data. The new approach allows computation of the posterior distributions of the parameters and the model error variance using a quasi-analytic approach. Two regional skew estimation studies illustrate the value of the Bayesian GLS approach for regional statistical analysis of a shape parameter and demonstrate that regional skew models can be relatively precise with effective record lengths in excess of 60 years. With Bayesian GLS the marginal posterior distribution of the model error variance and the corresponding mean and variance of the parameters can be computed directly, thereby providing a simple but important extension of the regional GLS regression procedures popularized by Tasker and Stedinger (1989), which is sensitive to the likely values of the model error variance when it is small relative to the sampling error in the at-site estimator.

  8. Bayesian Estimation of the True Prevalence and of the Diagnostic Test Sensitivity and Specificity of Enteropathogenic Yersinia in Finnish Pig Serum Samples.

    PubMed

    Vilar, M J; Ranta, J; Virtanen, S; Korkeala, H

    2015-01-01

    Bayesian analysis was used to estimate the pig's and herd's true prevalence of enteropathogenic Yersinia in serum samples collected from Finnish pig farms. The sensitivity and specificity of the diagnostic test were also estimated for the commercially available ELISA which is used for antibody detection against enteropathogenic Yersinia. The Bayesian analysis was performed in two steps; the first step estimated the prior true prevalence of enteropathogenic Yersinia with data obtained from a systematic review of the literature. In the second step, data of the apparent prevalence (cross-sectional study data), prior true prevalence (first step), and estimated sensitivity and specificity of the diagnostic methods were used for building the Bayesian model. The true prevalence of Yersinia in slaughter-age pigs was 67.5% (95% PI 63.2-70.9). The true prevalence of Yersinia in sows was 74.0% (95% PI 57.3-82.4). The estimates of sensitivity and specificity values of the ELISA were 79.5% and 96.9%.
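
    The core relationship in such true-prevalence models is that the apparent (test-positive) probability equals p*Se + (1-p)*(1-Sp). A simplified one-step sketch with a grid posterior for the prevalence and Monte Carlo integration over Beta-distributed sensitivity and specificity; all counts and Beta parameters are hypothetical, and the paper's actual two-step hierarchical model is more elaborate:

      import numpy as np

      def true_prevalence_posterior(x_pos, n, se_ab, sp_ab, grid_size=2001):
          """Grid posterior for the true prevalence p when test accuracy is
          described by Beta(se_ab) sensitivity and Beta(sp_ab) specificity.

          Apparent (test-positive) probability: q = p*Se + (1-p)*(1-Sp)."""
          rng = np.random.default_rng(5)
          p = np.linspace(0, 1, grid_size)
          se = rng.beta(*se_ab, 3000)            # Monte Carlo draws for Se
          sp = rng.beta(*sp_ab, 3000)            # and Sp uncertainty
          q = p[:, None] * se + (1 - p[:, None]) * (1 - sp)   # grid x draws
          loglik = x_pos * np.log(q) + (n - x_pos) * np.log(1 - q)
          post = np.exp(loglik - loglik.max()).mean(axis=1)    # flat prior on p
          post /= post.sum()
          mean = (p * post).sum()
          lo, hi = p[np.searchsorted(post.cumsum(), [0.025, 0.975])]
          return mean, (lo, hi)

      # Hypothetical serum survey: 140 ELISA-positive of 200 slaughter-age pigs,
      # Se ~ Beta(80, 20) (mean 0.80), Sp ~ Beta(97, 3) (mean 0.97)
      print(true_prevalence_posterior(140, 200, (80, 20), (97, 3)))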

  9. A Bayesian Approach to a Multiple-Group Latent Class-Profile Analysis: The Timing of Drinking Onset and Subsequent Drinking Behaviors among U.S. Adolescents

    ERIC Educational Resources Information Center

    Chung, Hwan; Anthony, James C.

    2013-01-01

    This article presents a multiple-group latent class-profile analysis (LCPA) by taking a Bayesian approach in which a Markov chain Monte Carlo simulation is employed to achieve more robust estimates for latent growth patterns. This article describes and addresses a label-switching problem that involves the LCPA likelihood function, which has…

  10. Bayesian Logic Programs for Plan Recognition and Machine Reading

    DTIC Science & Technology

    2012-12-01

    …models is that they can handle both uncertainty and structured/relational data. As a result, they are widely used in domains like social network analysis, biological data analysis, and natural language processing. …the Story Understanding data set. (b) The logical representation of the observations. (c) The set of ground rules obtained from logical abduction.

  11. The ABC of non-inferiority margin setting from indirect comparisons.

    PubMed

    Julious, Steven A

    2011-01-01

    In a non-inferiority trial to assess a new investigative treatment, there may need to be consideration of an indirect comparison with placebo using the active control in the current trial. We can, therefore, use the fact that there is a common active control in the comparisons of the investigative treatment and placebo. In analysing a non-inferiority trial, the ABC of: Assay sensitivity, Bias minimisation and Constancy assumption needs to be considered. It is highlighted how the ABC assumptions can potentially fail when there is placebo creep or a patient population shift. In this situation, the belief about the placebo response expressed in terms of a prior probability in Bayesian formulation could be used with the observed treatment effects to set the non-inferiority limit. Copyright © 2011 John Wiley & Sons, Ltd.

  12. Bayesian Models for Astrophysical Data Using R, JAGS, Python, and Stan

    NASA Astrophysics Data System (ADS)

    Hilbe, Joseph M.; de Souza, Rafael S.; Ishida, Emille E. O.

    2017-05-01

    This comprehensive guide to Bayesian methods in astronomy enables hands-on work by supplying complete R, JAGS, Python, and Stan code, to use directly or to adapt. It begins by examining the normal model from both frequentist and Bayesian perspectives and then progresses to a full range of Bayesian generalized linear and mixed or hierarchical models, as well as additional types of models such as ABC and INLA. The book provides code that is largely unavailable elsewhere and includes details on interpreting and evaluating Bayesian models. Initial discussions offer models in synthetic form so that readers can easily adapt them to their own data; later the models are applied to real astronomical data. The consistent focus is on hands-on modeling, analysis of data, and interpretations that address scientific questions. A must-have for astronomers, its concrete approach will also be attractive to researchers in the sciences more generally.

  13. Toward an ecological analysis of Bayesian inferences: how task characteristics influence responses

    PubMed Central

    Hafenbrädl, Sebastian; Hoffrage, Ulrich

    2015-01-01

    In research on Bayesian inferences, the specific tasks, with their narratives and characteristics, are typically seen as exchangeable vehicles that merely transport the structure of the problem to research participants. In the present paper, we explore whether, and possibly how, task characteristics that are usually ignored influence participants’ responses in these tasks. We focus on both quantitative dimensions of the tasks, such as their base rates, hit rates, and false-alarm rates, as well as qualitative characteristics, such as whether the task involves a norm violation or not, whether the stakes are high or low, and whether the focus is on the individual case or on the numbers. Using a data set of 19 different tasks presented to 500 different participants who provided a total of 1,773 responses, we analyze these responses in two ways: first, on the level of the numerical estimates themselves, and second, on the level of various response strategies, Bayesian and non-Bayesian, that might have produced the estimates. We identified various contingencies, and most of the task characteristics had an influence on participants’ responses. Typically, this influence has been stronger when the numerical information in the tasks was presented in terms of probabilities or percentages, compared to natural frequencies – and this effect cannot be fully explained by a higher proportion of Bayesian responses when natural frequencies were used. One characteristic that did not seem to influence participants’ response strategy was the numerical value of the Bayesian solution itself. Our exploratory study is a first step toward an ecological analysis of Bayesian inferences, and highlights new avenues for future research. PMID:26300791
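
    A worked example of the task characteristics mentioned above (base rate, hit rate, false-alarm rate), showing the same Bayesian solution computed in probability format and in natural-frequency format; the numbers are generic textbook-style values, not taken from the 19 tasks analysed in the paper:

      # Made-up task: base rate 1%, hit rate 80%, false-alarm rate 9.6%
      base_rate, hit_rate, false_alarm = 0.01, 0.80, 0.096

      # Probability format (Bayes' rule)
      posterior = (hit_rate * base_rate) / (
          hit_rate * base_rate + false_alarm * (1 - base_rate))

      # Natural-frequency format: imagine 1000 cases
      n = 1000
      true_pos = base_rate * n * hit_rate              # 8 of the 10 affected
      false_pos = (1 - base_rate) * n * false_alarm    # about 95 of the 990 unaffected
      posterior_nf = true_pos / (true_pos + false_pos)

      print(round(posterior, 3), round(posterior_nf, 3))  # both ~0.078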

  14. Hip fracture in the elderly: a re-analysis of the EPIDOS study with causal Bayesian networks.

    PubMed

    Caillet, Pascal; Klemm, Sarah; Ducher, Michel; Aussem, Alexandre; Schott, Anne-Marie

    2015-01-01

    Hip fractures commonly result in permanent disability, institutionalization or death in elderly. Existing hip-fracture predicting tools are underused in clinical practice, partly due to their lack of intuitive interpretation. By use of a graphical layer, Bayesian network models could increase the attractiveness of fracture prediction tools. Our aim was to study the potential contribution of a causal Bayesian network in this clinical setting. A logistic regression was performed as a standard control approach to check the robustness of the causal Bayesian network approach. EPIDOS is a multicenter study, conducted in an ambulatory care setting in five French cities between 1992 and 1996 and updated in 2010. The study included 7598 women aged 75 years or older, in which fractures were assessed quarterly during 4 years. A causal Bayesian network and a logistic regression were performed on EPIDOS data to describe major variables involved in hip fractures occurrences. Both models had similar association estimations and predictive performances. They detected gait speed and mineral bone density as variables the most involved in the fracture process. The causal Bayesian network showed that gait speed and bone mineral density were directly connected to fracture and seem to mediate the influence of all the other variables included in our model. The logistic regression approach detected multiple interactions involving psychotropic drug use, age and bone mineral density. Both approaches retrieved similar variables as predictors of hip fractures. However, Bayesian network highlighted the whole web of relation between the variables involved in the analysis, suggesting a possible mechanism leading to hip fracture. According to the latter results, intervention focusing concomitantly on gait speed and bone mineral density may be necessary for an optimal prevention of hip fracture occurrence in elderly people.

  15. A comparison of discontinuation rates of tofacitinib and biologic disease-modifying anti-rheumatic drugs in rheumatoid arthritis: a systematic review and Bayesian network meta-analysis.

    PubMed

    Park, Sun-Kyeong; Lee, Min-Young; Jang, Eun-Jin; Kim, Hye-Lin; Ha, Dong-Mun; Lee, Eui-Kyung

    2017-01-01

    The purpose of this study was to compare the discontinuation rates of tofacitinib and biologics (tumour necrosis factor inhibitors (TNFi), abatacept, rituximab, and tocilizumab) in rheumatoid arthritis (RA) patients considering inadequate responses (IRs) to previous treatment(s). Randomised controlled trials of tofacitinib and biologics - reporting at least one total discontinuation, discontinuation due to lack of efficacy (LOE), and discontinuation due to adverse events (AEs) - were identified through systematic review. The analyses were conducted for patients with IRs to conventional synthetic disease-modifying anti-rheumatic drugs (cDMARDs) and for patients with biologics-IR, separately. Bayesian network meta-analysis was used to estimate rate ratio (RR) of a biologic relative to tofacitinib with 95% credible interval (CrI), and probability of RR being <1 (P[RR<1]). The analyses of 34 studies showed no significant differences in discontinuation rates between tofacitinib and biologics in the cDMARDs-IR group. In the biologics-IR group, however, TNFi (RR 0.17, 95% CrI 0.01-3.61, P[RR<1] 92.0%) and rituximab (RR 0.20, 95% CrI 0.01-2.91, P[RR<1] 92.3%) showed significantly lower total discontinuation rates than tofacitinib did. Despite the difference, discontinuation cases owing to LOE and AEs revealed that tofacitinib was comparable to the biologics. The comparability of discontinuation rate between tofacitinib and biologics was different based on previous treatments and discontinuation reasons: LOE, AEs, and total (due to other reasons). Therefore, those factors need to be considered to decide the optimal treatment strategy.

  16. Valuing Trial Designs from a Pharmaceutical Perspective Using Value-Based Pricing.

    PubMed

    Breeze, Penny; Brennan, Alan

    2015-11-01

    Our aim was to adapt the traditional framework for expected net benefit of sampling (ENBS) to be more compatible with drug development trials from the pharmaceutical perspective. We modify the traditional framework for conducting ENBS and assume that the price of the drug is conditional on the trial outcomes. We use a value-based pricing (VBP) criterion to determine price conditional on trial data using Bayesian updating of cost-effectiveness (CE) model parameters. We assume that there is a threshold price below which the company would not market the new intervention. We present a case study in which a phase III trial sample size and trial duration are varied. For each trial design, we sampled 10,000 trial outcomes and estimated VBP using a CE model. The expected commercial net benefit is calculated as the expected profits minus the trial costs. A clinical trial with shorter follow-up, and larger sample size, generated the greatest expected commercial net benefit. Increasing the duration of follow-up had a modest impact on profit forecasts. Expected net benefit of sampling can be adapted to value clinical trials in the pharmaceutical industry to optimise the expected commercial net benefit. However, the analyses can be very time consuming for complex CE models. © 2014 The Authors. Health Economics published by John Wiley & Sons Ltd.

  17. Bayesian flood forecasting methods: A review

    NASA Astrophysics Data System (ADS)

    Han, Shasha; Coulibaly, Paulin

    2017-08-01

    Over the past few decades, floods have been seen as one of the most common and widely distributed natural disasters in the world. If floods could be accurately forecasted in advance, then their negative impacts could be greatly minimized. It is widely recognized that quantification and reduction of the uncertainty associated with the hydrologic forecast is of great importance for flood estimation and rational decision making. The Bayesian forecasting system (BFS) offers an ideal theoretical framework for uncertainty quantification that can be developed for probabilistic flood forecasting via any deterministic hydrologic model. It provides a suitable theoretical structure, empirically validated models and reasonable analytic-numerical computation methods, and can be developed into various Bayesian forecasting approaches. This paper presents a comprehensive review of Bayesian forecasting approaches applied in flood forecasting from 1999 to the present. The review starts with an overview of the fundamentals of BFS and recent advances in BFS, followed by BFS applications in river stage forecasting and real-time flood forecasting; it then moves to a critical analysis evaluating the advantages and limitations of Bayesian forecasting methods and other predictive uncertainty assessment approaches in flood forecasting, and finally discusses future research directions in Bayesian flood forecasting. Results show that the Bayesian flood forecasting approach is an effective and advanced way to estimate floods: it considers all sources of uncertainty and produces a predictive distribution of the river stage, river discharge or runoff, and thus gives more accurate and reliable flood forecasts. Some emerging Bayesian forecasting methods (e.g. the ensemble Bayesian forecasting system and Bayesian multi-model combination) were shown to overcome the limitations of a single model or fixed model weights and to effectively reduce predictive uncertainty. In recent years, various Bayesian flood forecasting approaches have been developed and widely applied, but there is still room for improvement. Future research in Bayesian flood forecasting should focus on the assimilation of various sources of newly available information and on the improvement of predictive performance assessment methods.

  18. Applications of Bayesian Procrustes shape analysis to ensemble radar reflectivity nowcast verification

    NASA Astrophysics Data System (ADS)

    Fox, Neil I.; Micheas, Athanasios C.; Peng, Yuqiang

    2016-07-01

    This paper introduces the use of Bayesian full Procrustes shape analysis in object-oriented meteorological applications. In particular, the Procrustes methodology is used to generate mean forecast precipitation fields from a set of ensemble forecasts. This approach has advantages over other ensemble averaging techniques in that it can produce a forecast that retains the morphological features of the precipitation structures and present the range of forecast outcomes represented by the ensemble. The production of the ensemble mean avoids the problems of smoothing that result from simple pixel or cell averaging, while producing credible sets that retain information on ensemble spread. Also in this paper, the full Bayesian Procrustes scheme is used as an object verification tool for precipitation forecasts. This is an extension of a previously presented Procrustes shape analysis based verification approach into a full Bayesian format designed to handle the verification of precipitation forecasts that match objects from an ensemble of forecast fields to a single truth image. The methodology is tested on radar reflectivity nowcasts produced in the Warning Decision Support System - Integrated Information (WDSS-II) by varying parameters in the K-means cluster tracking scheme.

  19. Bayesian analysis of non-homogeneous Markov chains: application to mental health data.

    PubMed

    Sung, Minje; Soyer, Refik; Nhan, Nguyen

    2007-07-10

    In this paper we present a formal treatment of non-homogeneous Markov chains by introducing a hierarchical Bayesian framework. Our work is motivated by the analysis of correlated categorical data which arise in assessment of psychiatric treatment programs. In our development, we introduce a Markovian structure to describe the non-homogeneity of transition patterns. In doing so, we introduce a logistic regression set-up for Markov chains and incorporate covariates in our model. We present a Bayesian model using Markov chain Monte Carlo methods and develop inference procedures to address issues encountered in the analyses of data from psychiatric treatment programs. Our model and inference procedures are implemented to some real data from a psychiatric treatment study. Copyright 2006 John Wiley & Sons, Ltd.

  20. A FAST BAYESIAN METHOD FOR UPDATING AND FORECASTING HOURLY OZONE LEVELS

    EPA Science Inventory

    A Bayesian hierarchical space-time model is proposed by combining information from real-time ambient AIRNow air monitoring data, and output from a computer simulation model known as the Community Multi-scale Air Quality (Eta-CMAQ) forecast model. A model validation analysis shows...

  1. Uncertainty estimation of a complex water quality model: The influence of Box-Cox transformation on Bayesian approaches and comparison with a non-Bayesian method

    NASA Astrophysics Data System (ADS)

    Freni, Gabriele; Mannina, Giorgio

    In urban drainage modelling, uncertainty analysis is of undoubted necessity. However, uncertainty analysis in urban water-quality modelling is still in its infancy and only a few studies have been carried out. Therefore, several methodological aspects still need to be explored and clarified, especially regarding water quality modelling. The use of the Bayesian approach for uncertainty analysis has been stimulated by its rigorous theoretical framework and by the possibility of evaluating the impact of new knowledge on the modelling predictions. Nevertheless, the Bayesian approach relies on some restrictive hypotheses that are not present in less formal methods like the Generalised Likelihood Uncertainty Estimation (GLUE). One crucial point in the application of the Bayesian method is the formulation of a likelihood function that is conditioned by the hypotheses made regarding the model residuals. Statistical transformations, such as the Box-Cox equation, are generally used to ensure the homoscedasticity of residuals. However, this practice may affect the reliability of the analysis, leading to incorrect uncertainty estimates. The present paper explores the influence of the Box-Cox equation on uncertainty analysis for environmental water quality models. To this end, five cases were considered, one of which used the “real” residual distribution (i.e. drawn from available data). The analysis was applied to the Nocella experimental catchment (Italy), an agricultural and semi-urbanised basin where two sewer systems, two wastewater treatment plants and a river reach were monitored during both dry and wet weather periods. The results show that the uncertainty estimation is greatly affected by the residual transformation, and a wrong assumption may also affect the evaluation of model uncertainty. The less formal methods always provide an overestimation of modelling uncertainty with respect to the Bayesian method, but this effect is reduced if a wrong assumption is made regarding the residual distribution. If residuals are not normally distributed, the uncertainty is over-estimated when the Box-Cox transformation is not applied or when a non-calibrated parameter is used.
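
    A minimal sketch of how a Box-Cox transformation enters a Gaussian likelihood over model residuals, the practice whose influence the paper examines; the transformation parameter, error standard deviation and synthetic data below are arbitrary, and the paper's actual likelihood formulations are not reproduced:

      import numpy as np

      def box_cox(x, lam):
          """Box-Cox transform; x must be positive."""
          x = np.asarray(x, dtype=float)
          if abs(lam) < 1e-12:
              return np.log(x)
          return (x**lam - 1.0) / lam

      def gaussian_loglik_boxcox(obs, sim, lam, sigma):
          """Gaussian log-likelihood on Box-Cox-transformed observations and
          simulations, including the Jacobian of the transformation, so that
          heteroscedastic raw residuals become (approximately) homoscedastic."""
          resid = box_cox(obs, lam) - box_cox(sim, lam)
          jacobian = (lam - 1.0) * np.sum(np.log(obs))
          n = len(resid)
          return (-0.5 * n * np.log(2 * np.pi * sigma**2)
                  - 0.5 * np.sum(resid**2) / sigma**2 + jacobian)

      # Synthetic, skewed, heteroscedastic pollutant-concentration "data"
      rng = np.random.default_rng(6)
      sim = rng.uniform(5, 50, 200)                 # modelled concentrations
      obs = sim * rng.lognormal(0.0, 0.3, 200)      # multiplicative observation error
      print(gaussian_loglik_boxcox(obs, sim, lam=0.2, sigma=0.8))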

  2. Evaluation of a multi-arm multi-stage Bayesian design for phase II drug selection trials - an example in hemato-oncology.

    PubMed

    Jacob, Louis; Uvarova, Maria; Boulet, Sandrine; Begaj, Inva; Chevret, Sylvie

    2016-06-02

    Multi-arm multi-stage designs compare several new treatments to a common reference, in order to select treatment arms to move forward, or to drop them, as soon as sufficient evidence accumulates at interim analyses. We redesigned a Bayesian adaptive design initially proposed for dose-finding, focusing on the comparison of multiple experimental drugs to a control on a binary criterion measure. We redesigned a phase II clinical trial that randomly allocates patients across three (one control and two experimental) treatment arms to assess dropping decision rules. We were interested in dropping any arm due to futility, based either on the historical control rate (first rule) or on comparison across arms (second rule), and in stopping an experimental arm once it was able to reach a sufficient response rate (third rule), using the difference of response probabilities between the treated and control arms in Bayes binomial trials as a measure of treatment benefit. Simulations were then conducted to investigate the decision operating characteristics under a variety of plausible scenarios, as a function of the decision thresholds. Our findings suggest that one experimental treatment was less effective than the control and could have been dropped from the trial based on a sample of approximately 20 instead of 40 patients. In the simulation study, stopping decisions were reached sooner for the first rule than for the second rule, with close mean estimates of response rates and small bias. Depending on the decision threshold, the mean sample size to detect the required 0.15 absolute benefit ranged from 63 to 70 (rule 3), with false negative rates of less than 2% (rule 1) up to 6% (rule 2). In contrast, detecting a 0.15 inferiority in response rates required a sample size ranging on average from 23 to 35 (rules 1 and 2, respectively), with a false positive rate ranging from 3.6 to 0.6% (rule 3). Adaptive trial designs are a good way to improve clinical trials: they allow ineffective drugs to be removed and the trial sample size to be reduced, while maintaining unbiased estimates. Decision thresholds can be set according to predefined fixed error decision rates. ClinicalTrials.gov Identifier: NCT01342692.
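
    The dropping and stopping rules described above are driven by posterior probabilities of the form P(p_experimental - p_control > delta | data) in Bayes binomial trials. A minimal sketch with Beta(1, 1) priors, Monte Carlo evaluation, and invented interim counts and thresholds:

      import numpy as np

      rng = np.random.default_rng(7)

      def prob_benefit_exceeds(x_e, n_e, x_c, n_c, delta=0.0, n_draws=100000):
          """P(p_experimental - p_control > delta | data) with Beta(1, 1) priors,
          the Bayes-binomial quantity used in the dropping/stopping rules."""
          p_e = rng.beta(1 + x_e, 1 + n_e - x_e, n_draws)
          p_c = rng.beta(1 + x_c, 1 + n_c - x_c, n_draws)
          return np.mean(p_e - p_c > delta)

      # Hypothetical interim data: control 12/40 responders, arm A 10/20, arm B 3/20
      for arm, (x, n) in {"A": (10, 20), "B": (3, 20)}.items():
          p_sup = prob_benefit_exceeds(x, n, 12, 40, delta=0.0)
          p_target = prob_benefit_exceeds(x, n, 12, 40, delta=0.15)
          print(f"arm {arm}: P(better than control) = {p_sup:.2f}, "
                f"P(benefit > 0.15) = {p_target:.2f}")
          # e.g. drop the arm if P(better than control) < 0.10 (futility), stop it
          # for efficacy if P(benefit > 0.15) > 0.90 -- thresholds here are
          # arbitrary illustrations, not the trial's calibrated values.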

  3. Propagation of population pharmacokinetic information using a Bayesian approach: comparison with meta-analysis.

    PubMed

    Dokoumetzidis, Aristides; Aarons, Leon

    2005-08-01

    We investigated the propagation of population pharmacokinetic information across clinical studies by applying Bayesian techniques. The aim was to summarize the population pharmacokinetic estimates of a study in appropriate statistical distributions in order to use them as Bayesian priors in subsequent population pharmacokinetic analyses. Various data sets of simulated and real clinical data were fitted with WinBUGS, with and without informative priors. The posterior estimates of fittings with non-informative priors were used to build parametric informative priors, and the whole procedure was carried out sequentially. The posterior distributions of the fittings with informative priors were compared to those of the meta-analysis fittings of the respective combinations of data sets. Good agreement was found for the simulated and experimental datasets when the populations were exchangeable, with the posterior distributions from the fittings with the prior being nearly identical to the ones estimated with meta-analysis. However, when populations were not exchangeable, an alternative parametric form for the prior, the natural conjugate prior, had to be used in order to obtain consistent results. In conclusion, the results of a population pharmacokinetic analysis may be summarized in Bayesian prior distributions that can be used sequentially in subsequent analyses. The procedure is an alternative to meta-analysis and gives comparable results. It has the advantage of being faster than meta-analysis, which must handle the larger combined datasets, and it can be performed when the data included in the prior are not actually available.
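
    The idea of carrying a posterior forward as a prior can be sketched with a toy conjugate normal model, an illustrative stand-in for the population pharmacokinetic models actually fitted in WinBUGS. For exchangeable data the sequential analysis reproduces the pooled ("meta-analysis") result exactly.

    ```python
    import numpy as np

    def normal_update(prior_mean, prior_var, data, obs_var):
        """Conjugate update of a normal mean with known observation variance."""
        n = len(data)
        post_var = 1.0 / (1.0 / prior_var + n / obs_var)
        post_mean = post_var * (prior_mean / prior_var + np.sum(data) / obs_var)
        return post_mean, post_var

    rng = np.random.default_rng(42)
    obs_var = 1.0
    study1 = rng.normal(2.0, np.sqrt(obs_var), 50)   # two exchangeable studies
    study2 = rng.normal(2.0, np.sqrt(obs_var), 80)

    # Sequential analysis: the posterior of study 1 becomes the prior for study 2
    m1, v1 = normal_update(0.0, 100.0, study1, obs_var)   # vague initial prior
    m_seq, v_seq = normal_update(m1, v1, study2, obs_var)

    # "Meta-analysis": both datasets analysed together with the same vague prior
    m_pool, v_pool = normal_update(0.0, 100.0, np.concatenate([study1, study2]),
                                   obs_var)

    print(f"sequential: mean={m_seq:.4f} var={v_seq:.5f}")
    print(f"pooled    : mean={m_pool:.4f} var={v_pool:.5f}")   # identical
    ```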

  4. A Development of Nonstationary Regional Frequency Analysis Model with Large-scale Climate Information: Its Application to Korean Watershed

    NASA Astrophysics Data System (ADS)

    Kim, Jin-Young; Kwon, Hyun-Han; Kim, Hung-Soo

    2015-04-01

    Existing regional frequency analysis has the disadvantage that it is difficult to account for geographical characteristics when estimating areal rainfall. In this regard, this study aims to develop a nonstationary regional frequency analysis based on a hierarchical Bayesian model, in which spatial patterns of the design rainfall are explicitly linked to geographical information (e.g. latitude, longitude and altitude). This study assumes that the parameters of the Gumbel (or GEV) distribution are a function of geographical characteristics within a general linear regression framework. Posterior distributions of the regression parameters are estimated by the Bayesian Markov Chain Monte Carlo (MCMC) method, and the identified functional relationship is used to spatially interpolate the parameters of the distributions by using digital elevation models (DEM) as inputs. The proposed model is applied to derive design rainfalls over the entire Han-river watershed. It was found that the proposed Bayesian regional frequency analysis model showed similar results compared to L-moment based regional frequency analysis. In addition, the model showed an advantage in terms of quantifying uncertainty of the design rainfall and estimating areal rainfall while accounting for geographical information. Finally, a comprehensive discussion on design rainfall in the context of nonstationarity will be presented. KEYWORDS: Regional frequency analysis, Nonstationary, Spatial information, Bayesian. Acknowledgement: This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
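
    A minimal sketch of the idea is shown below, assuming a Gumbel location parameter that is a linear function of latitude, longitude, and altitude and a simple random-walk Metropolis sampler. It is not the study's hierarchical model, and the site covariates and annual maxima are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic sites: standardised latitude/longitude/altitude and annual maxima
    n_sites, n_years = 30, 40
    X = np.column_stack([np.ones(n_sites), rng.normal(size=(n_sites, 3))])
    beta_true, sigma_true = np.array([50.0, 5.0, -3.0, 8.0]), 10.0
    maxima = rng.gumbel(loc=(X @ beta_true)[:, None], scale=sigma_true,
                        size=(n_sites, n_years))

    def log_post(theta):
        """Log-posterior: Gumbel likelihood, mu = X @ beta, weak normal priors."""
        beta, log_sigma = theta[:4], theta[4]
        sigma = np.exp(log_sigma)
        z = (maxima - (X @ beta)[:, None]) / sigma
        loglik = np.sum(-np.log(sigma) - z - np.exp(-z))
        logprior = -0.5 * np.sum(beta**2) / 100.0**2 - 0.5 * log_sigma**2 / 10.0**2
        return loglik + logprior

    # Random-walk Metropolis over (beta0..beta3, log sigma)
    theta = np.array([40.0, 0.0, 0.0, 0.0, np.log(5.0)])
    step = np.array([0.3, 0.2, 0.2, 0.2, 0.02])
    samples, lp = [], log_post(theta)
    for it in range(20_000):
        prop = theta + step * rng.normal(size=theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta.copy())

    post = np.array(samples[5_000:])
    print("posterior mean beta :", post[:, :4].mean(axis=0).round(2))
    print("posterior mean sigma:", np.exp(post[:, 4]).mean().round(2))
    ```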

  5. Bayesian prediction of placebo analgesia in an instrumental learning model

    PubMed Central

    Jung, Won-Mo; Lee, Ye-Seul; Wallraven, Christian; Chae, Younbyoung

    2017-01-01

    Placebo analgesia can be primarily explained by the Pavlovian conditioning paradigm in which a passively applied cue becomes associated with less pain. In contrast, instrumental conditioning employs an active paradigm that might be more similar to clinical settings. In the present study, an instrumental conditioning paradigm involving a modified trust game in a simulated clinical situation was used to induce placebo analgesia. Additionally, Bayesian modeling was applied to predict the placebo responses of individuals based on their choices. Twenty-four participants engaged in a medical trust game in which decisions to receive treatment from either a doctor (more effective with high cost) or a pharmacy (less effective with low cost) were made after receiving a reference pain stimulus. In the conditioning session, the participants received lower levels of pain following both choices, while high pain stimuli were administered in the test session even after making the decision. The choice-dependent pain in the conditioning session was modulated in terms of both intensity and uncertainty. Participants reported significantly less pain when they chose the doctor or the pharmacy for treatment compared to the control trials. The predicted pain ratings based on Bayesian modeling showed significant correlations with the actual reports from participants for both of the choice categories. The instrumental conditioning paradigm allowed for the active choice of optional cues and was able to induce the placebo analgesia effect. Additionally, Bayesian modeling successfully predicted pain ratings in a simulated clinical situation that fits well with placebo analgesia induced by instrumental conditioning. PMID:28225816

  6. Comparative Effectiveness of Pharmacological Interventions for Severe Alcoholic Hepatitis: A Systematic Review and Network Meta-analysis.

    PubMed

    Singh, Siddharth; Murad, Mohammad Hassan; Chandar, Apoorva K; Bongiorno, Connie M; Singal, Ashwani K; Atkinson, Stephen R; Thursz, Mark R; Loomba, Rohit; Shah, Vijay H

    2015-10-01

    Severe alcoholic hepatitis (AH) has high mortality. We assessed the comparative effectiveness of pharmacological interventions for severe AH, through a network meta-analysis combining direct and indirect treatment comparisons. We conducted a systematic literature review, through February 2015, for randomized controlled trials of adults with severe AH (discriminant function ≥32 and/or hepatic encephalopathy) that compared the efficacy of active pharmacologic interventions (corticosteroids, pentoxifylline, and N-acetylcysteine [NAC], alone or in combination) with each other or placebo, in reducing short-term mortality (primary outcome) and medium-term mortality, acute kidney injury, and/or infections (secondary outcomes). We performed direct and Bayesian network meta-analysis for all treatments, and used Grading of Recommendations Assessment, Development and Evaluation criteria to appraise quality of evidence. We included 22 randomized controlled trials (2621 patients) comparing 5 different interventions. In a direct meta-analysis, only corticosteroids decreased risk of short-term mortality. In a network meta-analysis, moderate quality evidence supported the use of corticosteroids alone (relative risk [RR], 0.54; 95% credible interval [CrI], 0.39-0.73) or in combination with pentoxifylline (RR, 0.53; 95% CrI, 0.36-0.78) or NAC (RR, 0.15; 95% CI, 0.05-0.39), to reduce short-term mortality; low quality evidence showed that pentoxifylline also decreased short-term mortality (RR, 0.70; 95% CrI, 0.50-0.97). The addition of NAC, but not pentoxifylline, to corticosteroids may be superior to corticosteroids alone for reducing short-term mortality. No treatment was effective in reducing medium-term mortality. Imprecise estimates and the small number of direct trials lowered the confidence in several comparisons. In patients with severe AH, pentoxifylline and corticosteroids (alone and in combination with pentoxifylline or NAC) can reduce short-term mortality. No treatment decreases risk of medium-term mortality. Copyright © 2015 AGA Institute. Published by Elsevier Inc. All rights reserved.

  7. Systematic review with meta-analysis: Comparative efficacy of biologics for induction and maintenance of mucosal healing in Crohn’s disease and ulcerative colitis controlled trials

    PubMed Central

    Cholapranee, Aurada; Hazlewood, Glen S; Kaplan, Gilaad G.; Peyrin-Biroulet, Laurent; Ananthakrishnan, Ashwin N

    2017-01-01

    Background Mucosal healing is an important therapeutic endpoint in the management of Crohn’s disease (CD) and ulcerative colitis (UC). Limited data exists regarding the comparative efficacy of various therapies in achieving this outcome. Methods We performed a systematic review and meta-analysis of randomized controlled trials (RCT) examining mucosal healing as an endpoint of immunosuppressives, anti-tumor necrosis factor α (anti-TNF) or anti-integrin monoclonal antibody therapy for moderate-to-severe CD or UC. Pooled effect sizes for induction and maintenance of mucosal healing were calculated and pair-wise treatment comparisons evaluated using a Bayesian network meta-analysis. Results A total of 12 RCTs were included in the meta-analysis (CD – 2 induction, 4 maintenance; UC – 8 induction, 5 maintenance). Duration of follow-up was 6–12 weeks for induction and 32–54 weeks for maintenance trials. In CD, anti-TNFs were more effective than placebo for maintaining mucosal healing (28% vs. 1%, Odds ratio (OR) 19.71, 95% confidence interval (CI) 3.51 – 110.84). In UC, anti-TNFs and anti-integrins were more effective than placebo for inducing (45% vs. 30%) and maintaining mucosal healing (33% vs. 18%). In network analysis, adalimumab therapy was inferior to infliximab (OR 0.45, 95% credible interval (CrI) 0.25 – 0.82) and combination infliximab-azathioprine (OR 0.32, 95% CrI 0.12 – 0.84) for inducing mucosal healing in UC. There was no statistically significant pairwise difference between vedolizumab and anti-TNF agents in UC. Conclusion Anti-TNF and anti-integrin biologic agents are effective in inducing mucosal healing in UC with adalimumab being inferior to infliximab or combination therapy. Infliximab and adalimumab were similar in CD. PMID:28326566

  8. Systematic review with network meta-analysis: the efficacy of anti-tumour necrosis factor-alpha agents for the treatment of ulcerative colitis.

    PubMed

    Stidham, R W; Lee, T C H; Higgins, P D R; Deshpande, A R; Sussman, D A; Singal, A G; Elmunzer, B J; Saini, S D; Vijan, S; Waljee, A K

    2014-04-01

    Antibodies against tumour necrosis factor-alpha (anti-TNF) are effective therapies in the treatment of ulcerative colitis (UC), but their comparative efficacy is unknown. To perform a network meta-analysis comparing the efficacy of anti-TNF agents in UC. After screening 506 studies, reviewers extracted information on seven studies. Traditional meta-analysis (TMA) was used to compare each anti-TNF agent to placebo. Bayesian network meta-analysis (NMA) was performed to compare the effects of anti-TNF agents to placebo. In addition, sample sizes for comparative efficacy trials were calculated. Compared to placebo, TMA revealed that anti-TNF agents result in a higher likelihood of induction of remission and response (RR: 2.45, 95% CI: 1.72-3.47 and RR: 1.65, 95% CI: 1.37-1.99 respectively) as well as maintenance of remission and response (RR: 2.00, 95% CI: 1.52-2.62 and RR: 1.76, 95% CI: 1.46-2.14 respectively). Individually, infliximab, adalimumab and golimumab resulted in a higher likelihood of induction and maintenance for both remission and response. NMA found nonsignificant trends in comparisons of the individual agents. The required sample sizes for direct head-to-head trials between infliximab and adalimumab for induction and maintenance are 174 and 204 subjects respectively. This study demonstrates that, compared to placebo, infliximab, adalimumab and golimumab are all effective for the induction and maintenance of remission in ulcerative colitis. However, network meta-analysis demonstrates that no single agent is clinically superior to the others and therefore, other factors such as cost, safety, route of administration and patient preference should dictate our choice of anti-TNF agents. A randomised comparative efficacy trial between infliximab and adalimumab in UC is of practical size and should be performed. © 2014 John Wiley & Sons Ltd.

  9. Pathway analysis of high-throughput biological data within a Bayesian network framework.

    PubMed

    Isci, Senol; Ozturk, Cengizhan; Jones, Jon; Otu, Hasan H

    2011-06-15

    Most current approaches to high-throughput biological data (HTBD) analysis perform either individual gene/protein analysis or gene/protein set enrichment analysis for a list of biologically relevant molecules. Bayesian Networks (BNs) capture linear and non-linear interactions, handle stochastic events accounting for noise, and focus on local interactions, which can be related to causal inference. Here, we describe for the first time an algorithm that models biological pathways as BNs and identifies pathways that best explain given HTBD by scoring the fitness of each network. The proposed method takes into account the connectivity and relatedness between nodes of the pathway by factoring pathway topology into its model. Our simulations using synthetic data demonstrated the robustness of our approach. We tested the proposed method, Bayesian Pathway Analysis (BPA), on human microarray data regarding renal cell carcinoma (RCC) and compared our results with gene set enrichment analysis. BPA was able to find broader and more specific pathways related to RCC. The accompanying BPA software (BPAS) package is freely available for academic use at http://bumil.boun.edu.tr/bpa.
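
    The "score the fitness of each network" step can be illustrated with a Bayesian-Dirichlet (BDeu-style) marginal-likelihood score for a small discrete network. This is a generic sketch rather than BPA's actual scoring algorithm, and the binary data below are synthetic.

    ```python
    import numpy as np
    from itertools import product
    from math import lgamma

    def bdeu_score(data, cards, parents, ess=1.0):
        """Log marginal likelihood of a discrete Bayesian network (BDeu prior).

        data    : (n_samples, n_vars) integer array of category indices
        cards   : number of categories of each variable
        parents : dict {node: list of parent nodes}
        """
        score = 0.0
        for i, pa in parents.items():
            r = cards[i]
            q = int(np.prod([cards[p] for p in pa])) if pa else 1
            a_ij, a_ijk = ess / q, ess / (q * r)
            for pa_cfg in product(*[range(cards[p]) for p in pa]):
                rows = (np.all(data[:, pa] == pa_cfg, axis=1)
                        if pa else np.ones(len(data), bool))
                n_ij = rows.sum()
                score += lgamma(a_ij) - lgamma(a_ij + n_ij)
                for k in range(r):
                    n_ijk = np.sum(data[rows, i] == k)
                    score += lgamma(a_ijk + n_ijk) - lgamma(a_ijk)
        return score

    # Synthetic data generated from the chain 0 -> 1 -> 2 (binary variables)
    rng = np.random.default_rng(3)
    x0 = rng.integers(0, 2, 2000)
    x1 = x0 ^ (rng.random(2000) < 0.2).astype(int)
    x2 = x1 ^ (rng.random(2000) < 0.2).astype(int)
    data, cards = np.column_stack([x0, x1, x2]), [2, 2, 2]

    chain = {0: [], 1: [0], 2: [1]}          # matches the generating pathway
    empty = {0: [], 1: [], 2: []}            # ignores all interactions
    print("chain score:", round(bdeu_score(data, cards, chain), 1))
    print("empty score:", round(bdeu_score(data, cards, empty), 1))  # lower
    ```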

  10. A Bayesian method for using simulator data to enhance human error probabilities assigned by existing HRA methods

    DOE PAGES

    Groth, Katrina M.; Smith, Curtis L.; Swiler, Laura P.

    2014-04-05

    In the past several years, several international agencies have begun to collect data on human performance in nuclear power plant simulators [1]. These data provide a valuable opportunity to improve human reliability analysis (HRA), but these improvements will not be realized without implementation of Bayesian methods. Bayesian methods are widely used to incorporate sparse data into models in many parts of probabilistic risk assessment (PRA), but Bayesian methods have not been adopted by the HRA community. In this article, we provide a Bayesian methodology to formally use simulator data to refine the human error probabilities (HEPs) assigned by existing HRA methods. We demonstrate the methodology with a case study, wherein we use simulator data from the Halden Reactor Project to update the probability assignments from the SPAR-H method. The case study demonstrates the ability to use performance data, even sparse data, to improve existing HRA methods. Furthermore, this paper also serves as a demonstration of the value of Bayesian methods to improve the technical basis of HRA.
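
    The core conjugate update behind such a methodology can be sketched as a Beta-binomial calculation: a Beta prior centred on the HEP assigned by an existing HRA method is updated with simulator error counts. The prior strength and the counts below are hypothetical, not Halden or SPAR-H values.

    ```python
    from scipy.stats import beta

    # Prior: centre a Beta distribution on the HEP assigned by an existing HRA
    # method, with an "effective sample size" expressing confidence in it.
    hep_prior, n0 = 0.01, 50                     # hypothetical SPAR-H-style HEP
    a0, b0 = hep_prior * n0, (1 - hep_prior) * n0

    # Hypothetical simulator evidence: 2 errors observed in 120 crew trials
    errors, trials = 2, 120
    a1, b1 = a0 + errors, b0 + trials - errors   # conjugate Beta-binomial update

    post = beta(a1, b1)
    print(f"prior mean HEP       : {a0 / (a0 + b0):.4f}")
    print(f"posterior mean HEP   : {post.mean():.4f}")
    print(f"90% credible interval: ({post.ppf(0.05):.4f}, {post.ppf(0.95):.4f})")
    ```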

  11. Relative efficacy and tolerability of etoricoxib, celecoxib, and naproxen in the treatment of osteoarthritis : A Bayesian network meta-analysis of randomized controlled trials based on patient withdrawal.

    PubMed

    Song, Gwan Gyu; Seo, Young Ho; Kim, Jae-Hoon; Choi, Sung Jae; Ji, Jong Dae; Lee, Young Ho

    2016-06-01

    This study aimed to assess the relative efficacy and tolerability of etoricoxib, celecoxib, and naproxen at recommended dosages in patients with osteoarthritis (OA). Randomized controlled trials (RCTs) examining the efficacy and tolerability of etoricoxib 30-60 mg, celecoxib 200-400 mg, and naproxen 1000 mg, based on the number of patient withdrawals among those with OA, were included in this network meta-analysis. We performed a Bayesian random-effects network meta-analysis to combine direct and indirect evidence from the RCTs. Eight RCTs, including 5,942 patients, met the inclusion criteria. The proportion of patient withdrawals due to lack of efficacy was significantly lower in the etoricoxib 30-60 mg (OR 0.21, 95 % CrI 0.12-0.38), celecoxib 200-400 mg (OR 0.29, 95 % CrI 0.18-0.47), and naproxen 1000 mg (OR 0.31, 95 % CrI 0.18-0.51) groups than in the placebo group. The number of patient withdrawals due to lack of efficacy tended to be lower in the etoricoxib 30-60 mg group than in the naproxen 1000 mg and celecoxib 200-400 mg groups, although the differences did not reach statistical significance (OR 0.68, 95 % CrI 0.36-1.33 and OR 0.70, 95 % CrI 0.38-1.37, respectively). Ranking probabilities based on the surface under the cumulative ranking curve (SUCRA) indicated that etoricoxib 30-60 mg had the highest probability of being the best treatment based on the number of withdrawals due to lack of efficacy (SUCRA = 0.9168) followed by celecoxib 200-400 mg (SUCRA = 0.5659), naproxen 1000 mg (SUCRA = 0.5171), and placebo (SUCRA = 0.000189). With respect to tolerability, the number of withdrawals due to adverse events was not significantly different among etoricoxib, celecoxib, naproxen, and placebo, although it tended to be lower with etoricoxib and placebo. Etoricoxib 30-60 mg, celecoxib 200-400 mg, and naproxen 1000 mg were more efficacious than placebo. However, there was no significant difference in efficacy and tolerability between the medications.
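
    The SUCRA values quoted above are computed from posterior rank probabilities. A small sketch of the calculation is shown below, using made-up posterior draws rather than the trial data.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    treatments = ["etoricoxib", "celecoxib", "naproxen", "placebo"]

    # Hypothetical posterior draws of log-odds of withdrawal (lower is better)
    draws = np.column_stack([
        rng.normal(-1.55, 0.30, 20_000),   # etoricoxib
        rng.normal(-1.25, 0.25, 20_000),   # celecoxib
        rng.normal(-1.18, 0.26, 20_000),   # naproxen
        rng.normal(0.00, 0.01, 20_000),    # placebo (reference)
    ])

    k = draws.shape[1]
    ranks = draws.argsort(axis=1).argsort(axis=1)          # 0 = best per draw
    rank_prob = np.array([(ranks == r).mean(axis=0) for r in range(k)])

    # SUCRA = mean of the cumulative rank probabilities over the first k-1 ranks
    sucra = rank_prob.cumsum(axis=0)[:-1].mean(axis=0)
    for name, s in zip(treatments, sucra):
        print(f"{name:12s} SUCRA = {s:.3f}")
    ```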

  12. Data Envelopment Analysis in the Presence of Measurement Error: Case Study from the National Database of Nursing Quality Indicators® (NDNQI®)

    PubMed Central

    Gajewski, Byron J.; Lee, Robert; Dunton, Nancy

    2012-01-01

    Data Envelopment Analysis (DEA) is the most commonly used approach for evaluating healthcare efficiency (Hollingsworth, 2008), but a long-standing concern is that DEA assumes that data are measured without error. This is quite unlikely, and DEA and other efficiency analysis techniques may yield biased efficiency estimates if this assumption does not hold (Gajewski, Lee, Bott, Piamjariyakul and Taunton, 2009; Ruggiero, 2004). We propose to address measurement error systematically using a Bayesian method (Bayesian DEA). We will apply Bayesian DEA to data from the National Database of Nursing Quality Indicators® (NDNQI®) to estimate nursing units’ efficiency. Several external reliability studies inform the posterior distribution of the measurement error on the DEA variables. We will discuss the case of generalizing the approach to situations where an external reliability study is not feasible. PMID:23328796

  13. Predicting the effect of maternal docosahexaenoic acid (DHA) supplementation to reduce early preterm birth in Australia and the United States using results of within country randomized controlled trials

    PubMed Central

    Yelland, LN; Gajewski, BJ; Colombo, J; Gibson, RA; Makrides, M; Carlson, SE

    2016-01-01

    SUMMARY The DHA to Optimize Mother Infant Outcome (DOMInO) and Kansas DHA Outcomes Study (KUDOS) were randomized controlled trials that supplemented mothers with 800 and 600 mg DHA/day, respectively, or a placebo during pregnancy. DOMInO was conducted in Australia and KUDOS in the United States. Both trials found an unanticipated and statistically significant reduction in early preterm birth (ePTB; i.e., birth before 34 weeks gestation). However, in each trial, the number of ePTBs was small. We used a novel Bayesian approach and an arbitrary sample of 120,000 pregnancies to estimate statistically derived low, moderate or high risk for ePTB, and to test for differences between the DHA and placebo groups. In both trials, the model predicted DHA would significantly reduce the expected proportion of deliveries in the high risk group under the trial conditions of the parent studies. From these proportions we estimated the number of ePTB that could be prevented. PMID:27637340

  14. Comparing interval estimates for small sample ordinal CFA models

    PubMed Central

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis models (CFA) for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively biased than negatively biased, that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002

  15. Comparing interval estimates for small sample ordinal CFA models.

    PubMed

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis models (CFA) for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively biased than negatively biased, that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research.

  16. Multi-Scale Validation of a Nanodiamond Drug Delivery System and Multi-Scale Engineering Education

    ERIC Educational Resources Information Center

    Schwalbe, Michelle Kristin

    2010-01-01

    This dissertation has two primary concerns: (i) evaluating the uncertainty and prediction capabilities of a nanodiamond drug delivery model using Bayesian calibration and bias correction, and (ii) determining conceptual difficulties of multi-scale analysis from an engineering education perspective. A Bayesian uncertainty quantification scheme…

  17. Bayesian Adaptive Lasso for Ordinal Regression with Latent Variables

    ERIC Educational Resources Information Center

    Feng, Xiang-Nan; Wu, Hao-Tian; Song, Xin-Yuan

    2017-01-01

    We consider an ordinal regression model with latent variables to investigate the effects of observable and latent explanatory variables on the ordinal responses of interest. Each latent variable is characterized by correlated observed variables through a confirmatory factor analysis model. We develop a Bayesian adaptive lasso procedure to conduct…

  18. IMPLICATIONS OF USING ROBUST BAYESIAN ANALYSIS TO REPRESENT DIVERSE SOURCES OF UNCERTAINTY IN INTEGRATED ASSESSMENT

    EPA Science Inventory

    In our previous research, we showed that robust Bayesian methods can be used in environmental modeling to define a set of probability distributions for key parameters that captures the effects of expert disagreement, ambiguity, or ignorance. This entire set can then be update...

  19. Pig Data and Bayesian Inference on Multinomial Probabilities

    ERIC Educational Resources Information Center

    Kern, John C.

    2006-01-01

    Bayesian inference on multinomial probabilities is conducted based on data collected from the game Pass the Pigs[R]. Prior information on these probabilities is readily available from the instruction manual, and is easily incorporated in a Dirichlet prior. Posterior analysis of the scoring probabilities quantifies the discrepancy between empirical…
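
    A minimal sketch of such a Dirichlet-multinomial update is shown below; the manual-based prior probabilities, the prior strength, and the observed counts are made-up values used only to show the mechanics.

    ```python
    import numpy as np

    outcomes = ["dot up", "dot down", "trotter", "razorback", "snouter",
                "leaning jowler"]

    # Prior: probabilities suggested by the instruction manual, expressed as
    # pseudo-counts (hypothetical values; strength chosen as 100 prior "rolls")
    manual_probs = np.array([0.35, 0.30, 0.09, 0.22, 0.03, 0.01])
    alpha_prior = 100 * manual_probs

    # Hypothetical observed counts from a classroom data-collection exercise
    counts = np.array([310, 260, 95, 250, 70, 15])

    alpha_post = alpha_prior + counts                 # conjugate Dirichlet update
    post_mean = alpha_post / alpha_post.sum()
    for name, prior_p, post_p in zip(outcomes, manual_probs, post_mean):
        print(f"{name:15s} prior {prior_p:.3f} -> posterior mean {post_p:.3f}")
    ```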

  20. Discontinuation, Efficacy, and Safety of Cholinesterase Inhibitors for Alzheimer’s Disease: a Meta-Analysis and Meta-Regression of 43 Randomized Clinical Trials Enrolling 16 106 Patients

    PubMed Central

    Blanco-Silvente, Lídia; Saez, Marc; Barceló, Maria Antònia; Garre-Olmo, Josep; Vilalta-Franch, Joan; Capellà, Dolors

    2017-01-01

    Abstract Background: We investigated the effect of cholinesterase inhibitors on all-cause discontinuation, efficacy and safety, and the effects of study design-, intervention-, and patient-related covariates on the risk-benefit of cholinesterase inhibitors for Alzheimer’s disease. Methods: A systematic review and meta-analysis of randomized placebo-controlled clinical trials comparing cholinesterase inhibitors and placebo was performed. The effect of covariates on study outcomes was analysed by means of meta-regression using a Bayesian framework. Results: Forty-three randomized placebo-controlled clinical trials involving 16106 patients were included. All-cause discontinuation was higher with cholinesterase inhibitors (OR = 1.66), as was discontinuation due to adverse events (OR=1.75). Cholinesterase inhibitors improved cognitive function (standardized mean difference = 0.38), global symptomatology (standardized mean difference=0.28) and functional capacity (standardized mean difference=0.16) but not neuropsychiatric symptoms. Rivastigmine was associated with a poorer outcome on all-cause discontinuation (Diff OR = 1.66) and donepezil with a higher efficacy on global change (Diff standardized mean difference = 0.41). The proportion of patients with serious adverse events decreased with age (Diff OR = -0.09). Mortality was lower with cholinesterase inhibitors than with placebo (OR = 0.65). Conclusion: While cholinesterase inhibitors show a poor risk-benefit relationship as indicated by mild symptom improvement and a higher than placebo all-cause discontinuation, a reduction of mortality was suggested. Intervention- and patient-related factors modify the effect of cholinesterase inhibitors in patients with Alzheimer’s disease. PMID:28201726

  1. Discontinuation, Efficacy, and Safety of Cholinesterase Inhibitors for Alzheimer's Disease: a Meta-Analysis and Meta-Regression of 43 Randomized Clinical Trials Enrolling 16 106 Patients.

    PubMed

    Blanco-Silvente, Lídia; Castells, Xavier; Saez, Marc; Barceló, Maria Antònia; Garre-Olmo, Josep; Vilalta-Franch, Joan; Capellà, Dolors

    2017-07-01

    We investigated the effect of cholinesterase inhibitors on all-cause discontinuation, efficacy and safety, and the effects of study design-, intervention-, and patient-related covariates on the risk-benefit of cholinesterase inhibitors for Alzheimer's disease. A systematic review and meta-analysis of randomized placebo-controlled clinical trials comparing cholinesterase inhibitors and placebo was performed. The effect of covariates on study outcomes was analysed by means of meta-regression using a Bayesian framework. Forty-three randomized placebo-controlled clinical trials involving 16106 patients were included. All-cause discontinuation was higher with cholinesterase inhibitors (OR = 1.66), as was discontinuation due to adverse events (OR=1.75). Cholinesterase inhibitors improved cognitive function (standardized mean difference = 0.38), global symptomatology (standardized mean difference=0.28) and functional capacity (standardized mean difference=0.16) but not neuropsychiatric symptoms. Rivastigmine was associated with a poorer outcome on all-cause discontinuation (Diff OR = 1.66) and donepezil with a higher efficacy on global change (Diff standardized mean difference = 0.41). The proportion of patients with serious adverse events decreased with age (Diff OR = -0.09). Mortality was lower with cholinesterase inhibitors than with placebo (OR = 0.65). While cholinesterase inhibitors show a poor risk-benefit relationship as indicated by mild symptom improvement and a higher than placebo all-cause discontinuation, a reduction of mortality was suggested. Intervention- and patient-related factors modify the effect of cholinesterase inhibitors in patients with Alzheimer's disease. © The Author 2017. Published by Oxford University Press on behalf of CINP.

  2. The Importance of Considering Differences in Study Design in Network Meta-analysis: An Application Using Anti-Tumor Necrosis Factor Drugs for Ulcerative Colitis.

    PubMed

    Cameron, Chris; Ewara, Emmanuel; Wilson, Florence R; Varu, Abhishek; Dyrda, Peter; Hutton, Brian; Ingham, Michael

    2017-11-01

    Adaptive trial designs present a methodological challenge when performing network meta-analysis (NMA), as data from such adaptive trial designs differ from conventional parallel design randomized controlled trials (RCTs). We aim to illustrate the importance of considering study design when conducting an NMA. Three NMAs comparing anti-tumor necrosis factor drugs for ulcerative colitis were compared and the analyses replicated using Bayesian NMA. The NMA comprised 3 RCTs comparing 4 treatments (adalimumab 40 mg, golimumab 50 mg, golimumab 100 mg, infliximab 5 mg/kg) and placebo. We investigated the impact of incorporating differences in the study design among the 3 RCTs and presented 3 alternative methods for converting outcome data derived from one form of adaptive design to more conventional parallel RCTs. Combining RCT results without considering variations in study design resulted in effect estimates that were biased against golimumab. In contrast, using the 3 alternative methods to convert outcome data from one form of adaptive design to a format more consistent with conventional parallel RCTs facilitated more transparent consideration of differences in study design. This approach is more likely to yield appropriate estimates of comparative efficacy when conducting an NMA that includes treatments evaluated with an alternative study design. RCTs based on adaptive study designs should not be combined with traditional parallel RCT designs in NMA. We have presented potential approaches to convert data from one form of adaptive design to more conventional parallel RCTs to facilitate transparent and less-biased comparisons.

  3. Targeted agents for patients with advanced/metastatic pancreatic cancer: A protocol for systematic review and network meta-analysis.

    PubMed

    Di, Baoshan; Pan, Bei; Ge, Long; Ma, Jichun; Wu, Yiting; Guo, Tiankang

    2018-03-01

    Pancreatic cancer (PC) is a devastating malignant tumor. Although surgical resection may offer a good prognosis and prolong survival, approximately 80% of patients with PC are diagnosed with an unresectable tumor. The National Comprehensive Cancer Network (NCCN) recommends gemcitabine-based chemotherapy as an efficient treatment. However, according to recent studies, targeted agents might be a better available option for advanced or metastatic pancreatic cancer patients. The aim of this systematic review and network meta-analysis will be to examine the differences between targeted interventions for advanced/metastatic PC patients. We will conduct this systematic review and network meta-analysis using Bayesian methods and according to the Preferred Reporting Items for Systematic review and Meta-Analysis Protocols (PRISMA-P) statement. To identify relevant studies, 6 electronic databases including PubMed, EMBASE, the Cochrane Central Register of Controlled Trials (CENTRAL), Web of Science, CNKI (Chinese National Knowledge Infrastructure), and CBM (Chinese Biological Medical Database) will be searched. The risk of bias in included randomized controlled trials (RCTs) will be assessed using the Cochrane Handbook version 5.1.0, and we will use the GRADE approach to assess the quality of evidence from the network meta-analysis. Data will be analyzed using R 3.4.1 software. To the best of our knowledge, this systematic review and network meta-analysis will be the first to use both direct and indirect evidence to compare different targeted agents and targeted agents plus chemotherapy for advanced/metastatic pancreatic cancer patients. This is a protocol for a systematic review and meta-analysis, so ethical approval and patient consent are not required. We will disseminate the results of this review by submitting to a peer-reviewed journal.

  4. Accounting for treatment by center interaction in sample size determinations and the use of surrogate outcomes in the pessary for the prevention of preterm birth trial: a simulation study.

    PubMed

    Willan, Andrew R

    2016-07-05

    The Pessary for the Prevention of Preterm Birth Study (PS3) is an international, multicenter, randomized clinical trial designed to examine the effectiveness of the Arabin pessary in preventing preterm birth in pregnant women with a short cervix. During the design of the study two methodological issues regarding power and sample size were raised. Since treatment in the Standard Arm will vary between centers, it is anticipated that so too will the probability of preterm birth in that arm. This will likely result in a treatment by center interaction, and the issue of how this will affect the sample size requirements was raised. The sample size requirements to examine the effect of the pessary on the baby's clinical outcome were prohibitively high, so the second issue is how best to examine the effect on clinical outcome. The approaches taken to address these issues are presented. Simulation and sensitivity analysis were used to address the sample size issue. The probability of preterm birth in the Standard Arm was assumed to vary between centers following a Beta distribution with a mean of 0.3 and a coefficient of variation of 0.3. To address the second issue a Bayesian decision model is proposed that combines the information regarding the between-treatment difference in the probability of preterm birth from PS3 with the data from the Multiple Courses of Antenatal Corticosteroids for Preterm Birth Study that relate preterm birth and perinatal mortality/morbidity. The approach provides a between-treatment comparison with respect to the probability of a bad clinical outcome. The performance of the approach was assessed using simulation and sensitivity analysis. Accounting for a possible treatment by center interaction increased the sample size from 540 to 700 patients per arm for the base case. The sample size requirements increase with the coefficient of variation and decrease with the number of centers. Under the same assumptions used for determining the sample size requirements, the simulated mean probability that pessary reduces the risk of perinatal mortality/morbidity is 0.98. The simulated mean decreased with coefficient of variation and increased with the number of clinical sites. Employing simulation and sensitivity analysis is a useful approach for determining sample size requirements while accounting for the additional uncertainty due to a treatment by center interaction. Using a surrogate outcome in conjunction with a Bayesian decision model is an efficient way to compare important clinical outcomes in a randomized clinical trial in situations where the direct approach requires a prohibitively high sample size.
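
    A rough sketch of this kind of simulation is shown below: the control-arm risk varies across centers as a Beta distribution with mean 0.3 and coefficient of variation 0.3, and power is estimated with a simple pooled two-proportion z-test. The number of centers, the treatment effect, and the test itself are illustrative assumptions, not the trial's actual analysis plan.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(2024)

    def beta_params(mean, cv):
        """Beta(a, b) parameters matching a given mean and coefficient of variation."""
        var = (cv * mean) ** 2
        s = mean * (1 - mean) / var - 1          # a + b
        return mean * s, (1 - mean) * s

    def simulate_power(n_per_arm, n_centers=20, rr=0.7, n_sims=2000, alpha=0.05):
        """Power of a pooled two-proportion z-test when the control-arm risk
        varies across centers as Beta(mean=0.3, CV=0.3)."""
        a, b = beta_params(0.3, 0.3)
        m = n_per_arm // n_centers               # patients per arm per center
        n = m * n_centers
        rejections = 0
        for _ in range(n_sims):
            p_ctl = rng.beta(a, b, n_centers)
            x_ctl = rng.binomial(m, p_ctl).sum()
            x_trt = rng.binomial(m, np.clip(rr * p_ctl, 0, 1)).sum()
            p_pool = (x_ctl + x_trt) / (2 * n)
            se = np.sqrt(2 * p_pool * (1 - p_pool) / n)
            z = (x_ctl / n - x_trt / n) / se
            rejections += abs(z) > norm.ppf(1 - alpha / 2)
        return rejections / n_sims

    for n in (540, 700):
        print(f"n per arm = {n}: simulated power = {simulate_power(n):.2f}")
    ```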

  5. Bayesian inference of the number of factors in gene-expression analysis: application to human virus challenge studies.

    PubMed

    Chen, Bo; Chen, Minhua; Paisley, John; Zaas, Aimee; Woods, Christopher; Ginsburg, Geoffrey S; Hero, Alfred; Lucas, Joseph; Dunson, David; Carin, Lawrence

    2010-11-09

    Nonparametric Bayesian techniques have been developed recently to extend the sophistication of factor models, allowing one to infer the number of appropriate factors from the observed data. We consider such techniques for sparse factor analysis, with application to gene-expression data from three virus challenge studies. Particular attention is placed on employing the Beta Process (BP), the Indian Buffet Process (IBP), and related sparseness-promoting techniques to infer a proper number of factors. The posterior density function on the model parameters is computed using Gibbs sampling and variational Bayesian (VB) analysis. Time-evolving gene-expression data are considered for respiratory syncytial virus (RSV), Rhino virus, and influenza, using blood samples from healthy human subjects. These data were acquired in three challenge studies, each executed after receiving institutional review board (IRB) approval from Duke University. Comparisons are made between several alternative means of performing nonparametric factor analysis on these data, with comparisons as well to sparse-PCA and Penalized Matrix Decomposition (PMD), closely related non-Bayesian approaches. Applying the Beta Process to the factor scores, or to the singular values of a pseudo-SVD construction, the proposed algorithms infer the number of factors in gene-expression data. For real data the "true" number of factors is unknown; in our simulations we consider a range of noise variances, and the proposed Bayesian models inferred the number of factors accurately relative to other methods in the literature, such as sparse-PCA and PMD. We have also identified a "pan-viral" factor of importance for each of the three viruses considered in this study. We have identified a set of genes associated with this pan-viral factor, of interest for early detection of such viruses based upon the host response, as quantified via gene-expression data.
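
    The Indian Buffet Process mentioned above can be sketched with its "restaurant" generative scheme, which makes the open-ended number of latent factors concrete; this is a generic illustration, not the paper's sparse factor model.

    ```python
    import numpy as np

    def sample_ibp(n_subjects, alpha, rng):
        """Draw a binary factor-assignment matrix Z from the Indian Buffet Process.

        Row i: which latent factors subject i uses. The number of columns
        (factors) is not fixed in advance; it grows as new 'dishes' are sampled.
        """
        dish_counts = []                       # how many subjects use each factor
        rows = []
        for i in range(1, n_subjects + 1):
            # existing factors: take factor k with probability m_k / i
            row = [int(rng.random() < m / i) for m in dish_counts]
            # new factors: Poisson(alpha / i) of them
            n_new = rng.poisson(alpha / i)
            row += [1] * n_new
            dish_counts = [m + z for m, z in zip(dish_counts, row)] + [1] * n_new
            rows.append(row)
        Z = np.zeros((n_subjects, len(dish_counts)), dtype=int)
        for i, row in enumerate(rows):
            Z[i, :len(row)] = row
        return Z

    rng = np.random.default_rng(5)
    Z = sample_ibp(n_subjects=20, alpha=3.0, rng=rng)
    print("number of latent factors drawn:", Z.shape[1])
    print("factors used per subject      :", Z.sum(axis=1))
    ```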

  6. Percutaneous vesicoamniotic shunting versus conservative management for fetal lower urinary tract obstruction (PLUTO): a randomised trial.

    PubMed

    Morris, Rachel K; Malin, Gemma L; Quinlan-Jones, Elisabeth; Middleton, Lee J; Hemming, Karla; Burke, Danielle; Daniels, Jane P; Khan, Khalid S; Deeks, Jon; Kilby, Mark D

    2013-11-02

    Fetal lower urinary tract obstruction (LUTO) is associated with high perinatal and long-term childhood mortality and morbidity. We aimed to assess the effectiveness of vesicoamniotic shunting for treatment of LUTO. In a randomised trial in the UK, Ireland, and the Netherlands, women whose pregnancies with a male fetus were complicated by isolated LUTO were randomly assigned by a central telephone and web-based randomisation service to receive either the intervention (placement of vesicoamniotic shunt) or conservative management. Allocation could not be masked from clinicians or participants because of the invasive nature of the intervention. Diagnosis was by prenatal ultrasound. The primary outcome was survival of the baby to 28 days postnatally. All primary analyses were done on an intention-to-treat basis, but these results were compared with those of an as-treated analysis to investigate the effect of a fairly large proportion of crossovers. We used Bayesian methods to estimate the posterior probability distribution of the effectiveness of vesicoamniotic shunting at 28 days. The study is registered with the ISRCTN Register, number ISRCTN53328556. 31 women with singleton pregnancies complicated by LUTO were included in the trial and main analysis, with 16 allocated to the vesicoamniotic shunt group and 15 to the conservative management group. The study closed early because of poor recruitment. There were 12 livebirths in each group. In the vesicoamniotic shunt group one intrauterine death occurred and three pregnancies were terminated. In the conservative management group one intrauterine death occurred and two pregnancies were terminated. Of the 16 pregnancies randomly assigned to vesicoamniotic shunting, eight neonates survived to 28 days, compared with four from the 15 pregnancies assigned to conservative management (intention-to-treat relative risk [RR] 1·88, 95% CI 0·71-4·96; p=0·27). Analysis based on treatment received showed a larger effect (3·20, 1·06-9·62; p=0·03). All 12 deaths were caused by pulmonary hypoplasia in the early neonatal period. Sensitivity analysis in which non-treatment-related terminations of pregnancy were excluded made some slight changes to point estimates only. Bayesian analysis in which the trial data were combined with elicited priors from experts suggested an 86% probability that vesicoamniotic shunting increased survival at 28 days and a 25% probability that it had a large, clinically important effect (defined as a relative increase of 55% or more in the proportion of neonates who survived). There was substantial short-term and long-term morbidity in both groups, including poor renal function-only two babies (both in the shunt group) survived to 2 years with normal renal function. Seven complications occurred in six fetuses from the shunt group, including spontaneous ruptured membranes, shunt blockage, and dislodgement. These complications resulted in four pregnancy losses. Survival seemed to be higher in the fetuses receiving vesicoamniotic shunting, but the size and direction of the effect remained uncertain, such that benefit could not be conclusively proven. Our results suggest that the chance of newborn babies surviving with normal renal function is very low irrespective of whether or not vesicoamniotic shunting is done. UK National Institute of Health Research, Wellbeing of Women, Hannah Eliza Guy Charity (Birmingham Children's Hospital Charity). Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Percutaneous vesicoamniotic shunting versus conservative management for fetal lower urinary tract obstruction (PLUTO): a randomised trial

    PubMed Central

    Morris, Rachel K; Malin, Gemma L; Quinlan-Jones, Elisabeth; Middleton, Lee J; Hemming, Karla; Burke, Danielle; Daniels, Jane P; Khan, Khalid S; Deeks, Jon; Kilby, Mark D

    2013-01-01

    Summary Background Fetal lower urinary tract obstruction (LUTO) is associated with high perinatal and long-term childhood mortality and morbidity. We aimed to assess the effectiveness of vesicoamniotic shunting for treatment of LUTO. Methods In a randomised trial in the UK, Ireland, and the Netherlands, women whose pregnancies with a male fetus were complicated by isolated LUTO were randomly assigned by a central telephone and web-based randomisation service to receive either the intervention (placement of vesicoamniotic shunt) or conservative management. Allocation could not be masked from clinicians or participants because of the invasive nature of the intervention. Diagnosis was by prenatal ultrasound. The primary outcome was survival of the baby to 28 days postnatally. All primary analyses were done on an intention-to-treat basis, but these results were compared with those of an as-treated analysis to investigate the effect of a fairly large proportion of crossovers. We used Bayesian methods to estimate the posterior probability distribution of the effectiveness of vesicoamniotic shunting at 28 days. The study is registered with the ISRCTN Register, number ISRCTN53328556. Findings 31 women with singleton pregnancies complicated by LUTO were included in the trial and main analysis, with 16 allocated to the vesicoamniotic shunt group and 15 to the conservative management group. The study closed early because of poor recruitment. There were 12 livebirths in each group. In the vesicoamniotic shunt group one intrauterine death occurred and three pregnancies were terminated. In the conservative management group one intrauterine death occurred and two pregnancies were terminated. Of the 16 pregnancies randomly assigned to vesicoamniotic shunting, eight neonates survived to 28 days, compared with four from the 15 pregnancies assigned to conservative management (intention-to-treat relative risk [RR] 1·88, 95% CI 0·71–4·96; p=0·27). Analysis based on treatment received showed a larger effect (3·20, 1·06–9·62; p=0·03). All 12 deaths were caused by pulmonary hypoplasia in the early neonatal period. Sensitivity analysis in which non-treatment-related terminations of pregnancy were excluded made some slight changes to point estimates only. Bayesian analysis in which the trial data were combined with elicited priors from experts suggested an 86% probability that vesicoamniotic shunting increased survival at 28 days and a 25% probability that it had a large, clinically important effect (defined as a relative increase of 55% or more in the proportion of neonates who survived). There was substantial short-term and long-term morbidity in both groups, including poor renal function—only two babies (both in the shunt group) survived to 2 years with normal renal function. Seven complications occurred in six fetuses from the shunt group, including spontaneous ruptured membranes, shunt blockage, and dislodgement. These complications resulted in four pregnancy losses. Interpretation Survival seemed to be higher in the fetuses receiving vesicoamniotic shunting, but the size and direction of the effect remained uncertain, such that benefit could not be conclusively proven. Our results suggest that the chance of newborn babies surviving with normal renal function is very low irrespective of whether or not vesicoamniotic shunting is done. Funding UK National Institute of Health Research, Wellbeing of Women, Hannah Eliza Guy Charity (Birmingham Children's Hospital Charity). PMID:23953766

  8. Metis: A Pure Metropolis Markov Chain Monte Carlo Bayesian Inference Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bates, Cameron Russell; Mckigney, Edward Allen

    The use of Bayesian inference in data analysis has become the standard for large scientific experiments [1, 2]. The Monte Carlo Codes Group (XCP-3) at Los Alamos has developed a simple set of algorithms currently implemented in C++ and Python to easily perform flat-prior Markov Chain Monte Carlo Bayesian inference with pure Metropolis sampling. These implementations are designed to be user friendly and extensible for customization based on specific application requirements. This document describes the algorithmic choices made and presents two use cases.
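
    A minimal flat-prior, pure-Metropolis sketch of the kind of sampler described, written here against NumPy rather than the Metis API, is shown below for a toy Gaussian likelihood.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: unknown mean and standard deviation of a Gaussian
    data = rng.normal(loc=3.0, scale=1.5, size=200)

    def log_likelihood(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)
        return np.sum(-np.log(sigma) - 0.5 * ((data - mu) / sigma) ** 2)

    def metropolis(log_target, theta0, step, n_iter):
        """Pure random-walk Metropolis with a flat (improper) prior:
        the target is simply the likelihood."""
        theta, lp = np.array(theta0, float), log_target(theta0)
        chain = np.empty((n_iter, len(theta0)))
        accepted = 0
        for i in range(n_iter):
            proposal = theta + step * rng.normal(size=theta.size)
            lp_prop = log_target(proposal)
            if np.log(rng.uniform()) < lp_prop - lp:      # symmetric proposal
                theta, lp = proposal, lp_prop
                accepted += 1
            chain[i] = theta
        return chain, accepted / n_iter

    chain, acc = metropolis(log_likelihood, [0.0, 0.0],
                            step=np.array([0.15, 0.07]), n_iter=20_000)
    burned = chain[5_000:]
    print(f"acceptance rate     : {acc:.2f}")
    print(f"posterior mean mu   : {burned[:, 0].mean():.2f}")
    print(f"posterior mean sigma: {np.exp(burned[:, 1]).mean():.2f}")
    ```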

  9. CRAFT (complete reduction to amplitude frequency table)--robust and time-efficient Bayesian approach for quantitative mixture analysis by NMR.

    PubMed

    Krishnamurthy, Krish

    2013-12-01

    The intrinsic quantitative nature of NMR is increasingly exploited in areas ranging from complex mixture analysis (as in metabolomics and reaction monitoring) to quality assurance/control. Complex NMR spectra are more common than not, and therefore, extraction of quantitative information generally involves significant prior knowledge and/or operator interaction to characterize resonances of interest. Moreover, in most NMR-based metabolomic experiments, the signals from metabolites are normally present as a mixture of overlapping resonances, making quantification difficult. Time-domain Bayesian approaches have been reported to be better than conventional frequency-domain analysis at identifying subtle changes in signal amplitude. We discuss an approach that exploits Bayesian analysis to achieve a complete reduction to amplitude frequency table (CRAFT) in an automated and time-efficient fashion - thus converting the time-domain FID to a frequency-amplitude table. CRAFT uses a two-step approach to FID analysis. First, the FID is digitally filtered and downsampled into several sub-FIDs; second, these sub-FIDs are modeled as sums of decaying sinusoids using the Bayesian approach. CRAFT tables can be used for further data mining of quantitative information using fingerprint chemical shifts of compounds of interest and/or statistical analysis of modulation of chemical quantity in a biological study (metabolomics) or process study (reaction monitoring) or quality assurance/control. The basic principles behind this approach as well as results to evaluate the effectiveness of this approach in mixture analysis are presented. Copyright © 2013 John Wiley & Sons, Ltd.

  10. Fast and Accurate Learning When Making Discrete Numerical Estimates.

    PubMed

    Sanborn, Adam N; Beierholm, Ulrik R

    2016-04-01

    Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates.
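
    The two decision functions contrasted above, sampling from the posterior versus taking its maximum, can be sketched for a discrete estimate with a made-up bimodal prior and a single noisy observation.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    values = np.arange(1, 21)                 # possible discrete answers, 1..20

    # A discrete bimodal prior of the kind participants might learn
    prior = (np.exp(-0.5 * ((values - 6) / 1.5) ** 2)
             + np.exp(-0.5 * ((values - 15) / 1.5) ** 2))
    prior /= prior.sum()

    def posterior(observation, noise_sd=2.0):
        """Posterior over the true count given one noisy observation."""
        like = np.exp(-0.5 * ((observation - values) / noise_sd) ** 2)
        post = prior * like
        return post / post.sum()

    post = posterior(observation=11)

    # Decision function 1: sample an estimate from the posterior
    sampled = rng.choice(values, size=10, p=post)
    # Decision function 2: report the posterior maximum (MAP)
    map_est = values[np.argmax(post)]

    print("posterior-sampling estimates:", sampled)
    print("posterior-maximum estimate  :", map_est)
    ```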

  11. Bayesian hierarchical modeling for subject-level response classification in peptide microarray immunoassays

    PubMed Central

    Imholte, Gregory; Gottardo, Raphael

    2017-01-01

    Summary The peptide microarray immunoassay simultaneously screens sample serum against thousands of peptides, determining the presence of antibodies bound to array probes. Peptide microarrays tiling immunogenic regions of pathogens (e.g. envelope proteins of a virus) are an important high throughput tool for querying and mapping antibody binding. Because of the assay’s many steps, from probe synthesis to incubation, peptide microarray data can be noisy with extreme outliers. In addition, subjects may produce different antibody profiles in response to an identical vaccine stimulus or infection, due to variability among subjects’ immune systems. We present a robust Bayesian hierarchical model for peptide microarray experiments, pepBayes, to estimate the probability of antibody response for each subject/peptide combination. Heavy-tailed error distributions accommodate outliers and extreme responses, and tailored random effect terms automatically incorporate technical effects prevalent in the assay. We apply our model to two vaccine trial datasets to demonstrate model performance. Our approach enjoys high sensitivity and specificity when detecting vaccine induced antibody responses. A simulation study shows an adaptive thresholding classification method has appropriate false discovery rate control with high sensitivity, and receiver operating characteristics generated on vaccine trial data suggest that pepBayes clearly separates responses from non-responses. PMID:27061097

  12. Fast and Accurate Learning When Making Discrete Numerical Estimates

    PubMed Central

    Sanborn, Adam N.; Beierholm, Ulrik R.

    2016-01-01

    Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155

  13. Bayesian regression model for recurrent event data with event-varying covariate effects and event effect.

    PubMed

    Lin, Li-An; Luo, Sheng; Davis, Barry R

    2018-01-01

    In the course of hypertension, cardiovascular disease events (e.g., stroke, heart failure) occur frequently and recurrently. The scientific interest in such a study may lie in the estimation of treatment effect while accounting for the correlation among event times. The correlation among recurrent event times comes from two sources: subject-specific heterogeneity (e.g., varied lifestyles, genetic variations, and other unmeasurable effects) and event dependence (i.e., event incidences may change the risk of future recurrent events). Moreover, event incidences may change the disease progression so that there may exist event-varying covariate effects (the covariate effects may change after each event) and event effect (the effect of prior events on the future events). In this article, we propose a Bayesian regression model that not only accommodates correlation among recurrent events from both sources, but also explicitly characterizes the event-varying covariate effects and event effect. This model is especially useful in quantifying how the incidences of events change the effects of covariates and risk of future events. We compare the proposed model with several commonly used recurrent event models and apply our model to the motivating lipid-lowering trial (LLT) component of the Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT) (ALLHAT-LLT).

  14. Bayesian regression model for recurrent event data with event-varying covariate effects and event effect

    PubMed Central

    Lin, Li-An; Luo, Sheng; Davis, Barry R.

    2017-01-01

    In the course of hypertension, cardiovascular disease events (e.g., stroke, heart failure) occur frequently and recurrently. The scientific interest in such a study may lie in the estimation of the treatment effect while accounting for the correlation among event times. The correlation among recurrent event times comes from two sources: subject-specific heterogeneity (e.g., varied lifestyles, genetic variations, and other unmeasurable effects) and event dependence (i.e., event incidences may change the risk of future recurrent events). Moreover, event incidences may change the disease progression, so that there may exist event-varying covariate effects (the covariate effects may change after each event) and an event effect (the effect of prior events on future events). In this article, we propose a Bayesian regression model that not only accommodates correlation among recurrent events from both sources, but also explicitly characterizes the event-varying covariate effects and the event effect. This model is especially useful in quantifying how the incidences of events change the effects of covariates and the risk of future events. We compare the proposed model with several commonly used recurrent event models and apply our model to the motivating lipid-lowering trial (LLT) component of the Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT) (ALLHAT-LLT). PMID:29755162

  15. Bayesian coronal seismology

    NASA Astrophysics Data System (ADS)

    Arregui, Iñigo

    2018-01-01

    In contrast to the situation in a laboratory, the study of the solar atmosphere has to be pursued without direct access to the physical conditions of interest. Information is therefore incomplete and uncertain, and inference methods need to be employed to diagnose the physical conditions and processes. One such method, solar atmospheric seismology, makes use of observed and theoretically predicted properties of waves to infer plasma and magnetic field properties. A recent development in solar atmospheric seismology consists in the use of inversion and model comparison methods based on Bayesian analysis. In this paper, the philosophy and methodology of Bayesian analysis are first explained. Then, we provide an account of what has been achieved so far from the application of these techniques to solar atmospheric seismology and a prospect of possible future extensions.

  16. Bayesian evidence computation for model selection in non-linear geoacoustic inference problems.

    PubMed

    Dettmer, Jan; Dosso, Stan E; Osler, John C

    2010-12-01

    This paper applies a general Bayesian inference approach, based on Bayesian evidence computation, to geoacoustic inversion of interface-wave dispersion data. Quantitative model selection is carried out by computing the evidence (normalizing constants) for several model parameterizations using annealed importance sampling. The resulting posterior probability density estimate is compared to estimates obtained from Metropolis-Hastings sampling to ensure consistent results. The approach is applied to invert interface-wave dispersion data collected on the Scotian Shelf, off the east coast of Canada for the sediment shear-wave velocity profile. Results are consistent with previous work on these data but extend the analysis to a rigorous approach including model selection and uncertainty analysis. The results are also consistent with core samples and seismic reflection measurements carried out in the area.

  17. A Bayesian Approach for Evaluation of Determinants of Health System Efficiency Using Stochastic Frontier Analysis and Beta Regression.

    PubMed

    Şenel, Talat; Cengiz, Mehmet Ali

    2016-01-01

    In today's world, public expenditures on health are one of the most important issues for governments. These increased expenditures are putting pressure on public budgets. Therefore, health policy makers have focused on the performance of their health systems, and many countries have introduced reforms to improve that performance. This study investigates the most important determinants of healthcare efficiency for OECD countries using a second-stage approach based on Bayesian stochastic frontier analysis (BSFA). The study proceeds in two steps. First, we measure the healthcare efficiency of 29 OECD countries by BSFA using data from the OECD Health Database. Second, we examine the relationships between healthcare efficiency and the characteristics of healthcare systems across OECD countries using Bayesian beta regression.

  18. Aminoglycoside Therapy Manager: An Advanced Computer Program for Decision Support for Drug Dosing and Therapeutic Monitoring

    PubMed Central

    Lenert, Leslie; Lurie, Jon; Coleman, Robert; Klosterman, Heidrun; Blaschke, Terrence

    1990-01-01

    In this paper, we will describe an advanced drug dosing program, the Aminoglycoside Therapy Manager, that reasons using Bayesian pharmacokinetic modeling and symbolic modeling of patient status and drug response. Our design is similar to the design of the Digitalis Therapy Advisor program, but extends previous work by incorporating a Bayesian pharmacokinetic model and a “meta-level” analysis of drug concentrations to identify sampling errors and changes in pharmacokinetics, and by including the results of the “meta-level” analysis in the reasoning for dosing and therapeutic monitoring recommendations. The program is user-friendly and runs on low-cost general-purpose hardware. Validation studies show that the program is as accurate in predicting future drug concentrations as an expert using commercial Bayesian forecasting software.

  19. Applications of Bayesian spectrum representation in acoustics

    NASA Astrophysics Data System (ADS)

    Botts, Jonathan M.

    This dissertation utilizes a Bayesian inference framework to enhance the solution of inverse problems where the forward model maps to acoustic spectra. A Bayesian solution to filter design inverts acoustic spectra to the pole-zero locations of a discrete-time filter model. Spatial sound field analysis with a spherical microphone array is a data analysis problem that requires inversion of spatio-temporal spectra to directions of arrival. As with many inverse problems, a probabilistic analysis results in richer solutions than can be achieved with ad-hoc methods. In the filter design problem, the Bayesian inversion results in globally optimal coefficient estimates as well as an estimate of the most concise filter capable of representing the given spectrum, within a single framework. This approach is demonstrated on synthetic spectra, head-related transfer function spectra, and measured acoustic reflection spectra. The Bayesian model-based analysis of spatial room impulse responses is presented as an analogous problem with an equally rich solution. The model selection mechanism provides an estimate of the number of arrivals, which is necessary to properly infer the directions of simultaneous arrivals. Although spectrum inversion problems are fairly ubiquitous, the scope of this dissertation has been limited to these two and derivative problems. The Bayesian approach to filter design is demonstrated on an artificial spectrum to illustrate the model comparison mechanism and then on measured head-related transfer functions to show the potential range of application. Coupled with sampling methods, the Bayesian approach is shown to outperform least-squares filter design methods commonly used in commercial software, confirming the need for a global search of the parameter space. The resulting designs are shown to be comparable to those that result from global optimization methods, but the Bayesian approach has the added advantage of a filter length estimate within the same unified framework. The application to reflection data is useful for representing frequency-dependent impedance boundaries in finite difference acoustic simulations. Furthermore, since the filter transfer function is a parametric model, it can be modified to incorporate arbitrary frequency weighting and account for the band-limited nature of measured reflection spectra. Finally, within the filter design process, the model is modified to compensate for dispersive error in the finite difference simulation. Stemming from the filter boundary problem, the implementation of pressure sources in finite difference simulation is addressed in order to assure that schemes properly converge. A class of parameterized source functions is proposed and shown to offer straightforward control of residual error in the simulation. Guided by the notion that the solution to be approximated affects the approximation error, sources are designed which reduce residual dispersive error to the size of round-off errors. The early part of a room impulse response can be characterized by a series of isolated plane waves. Measured with an array of microphones, plane waves map to a directional response of the array or spatial intensity map. Probabilistic inversion of this response results in estimates of the number and directions of image source arrivals. The model-based inversion is shown to avoid ambiguities associated with peak-finding or inspection of the spatial intensity map. For this problem, determining the number of arrivals in a given frame is critical for properly inferring the state of the sound field. This analysis is effectively compression of the spatial room response, which is useful for analysis or encoding of the spatial sound field. Parametric, model-based formulations of these problems enhance the solution in all cases, and a Bayesian interpretation provides a principled approach to model comparison and parameter estimation.

  20. Bayesian Sensitivity Analysis of Statistical Models with Missing Data

    PubMed Central

    ZHU, HONGTU; IBRAHIM, JOSEPH G.; TANG, NIANSHENG

    2013-01-01

    Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable (not missing at random, NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures. PMID:24753718

  1. The Stochastic Early Reaction, Inhibition, and late Action (SERIA) model for antisaccades

    PubMed Central

    2017-01-01

    The antisaccade task is a classic paradigm used to study the voluntary control of eye movements. It requires participants to suppress a reactive eye movement to a visual target and to concurrently initiate a saccade in the opposite direction. Although several models have been proposed to explain error rates and reaction times in this task, no formal model comparison has yet been performed. Here, we describe a Bayesian modeling approach to the antisaccade task that allows us to formally compare different models on the basis of their evidence. First, we provide a formal likelihood function of actions (pro- and antisaccades) and reaction times based on previously published models. Second, we introduce the Stochastic Early Reaction, Inhibition, and late Action model (SERIA), a novel model postulating two different mechanisms that interact in the antisaccade task: an early GO/NO-GO race decision process and a late GO/GO decision process. Third, we apply these models to a data set from an experiment with three mixed blocks of pro- and antisaccade trials. Bayesian model comparison demonstrates that the SERIA model explains the data better than competing models that do not incorporate a late decision process. Moreover, we show that the early decision process postulated by the SERIA model is, to a large extent, insensitive to the cue presented in a single trial. Finally, we use parameter estimates to demonstrate that changes in reaction time and error rate due to the probability of a trial type (pro- or antisaccade) are best explained by faster or slower inhibition and the probability of generating late voluntary prosaccades. PMID:28767650

  2. An efficient Bayesian meta-analysis approach for studying cross-phenotype genetic associations

    PubMed Central

    Majumdar, Arunabha; Haldar, Tanushree; Bhattacharya, Sourabh; Witte, John S.

    2018-01-01

    Simultaneous analysis of genetic associations with multiple phenotypes may reveal shared genetic susceptibility across traits (pleiotropy). For a locus exhibiting overall pleiotropy, it is important to identify which specific traits underlie this association. We propose a Bayesian meta-analysis approach (termed CPBayes) that uses summary-level data across multiple phenotypes to simultaneously measure the evidence of aggregate-level pleiotropic association and estimate an optimal subset of traits associated with the risk locus. This method uses a unified Bayesian statistical framework based on a spike and slab prior. CPBayes performs a fully Bayesian analysis by employing the Markov Chain Monte Carlo (MCMC) technique Gibbs sampling. It takes into account heterogeneity in the size and direction of the genetic effects across traits. It can be applied to both cohort data and separate studies of multiple traits having overlapping or non-overlapping subjects. Simulations show that CPBayes can produce higher accuracy in the selection of associated traits underlying a pleiotropic signal than the subset-based meta-analysis ASSET. We used CPBayes to undertake a genome-wide pleiotropic association study of 22 traits in the large Kaiser GERA cohort and detected six independent pleiotropic loci associated with at least two phenotypes. This includes a locus at chromosomal region 1q24.2 which exhibits an association simultaneously with the risk of five different diseases: Dermatophytosis, Hemorrhoids, Iron Deficiency, Osteoporosis and Peripheral Vascular Disease. We provide an R-package ‘CPBayes’ implementing the proposed method. PMID:29432419
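
    CPBayes itself is fitted by Gibbs sampling under a spike-and-slab prior; the sketch below is only a stripped-down, closed-form version of that idea, computing a per-trait posterior inclusion probability from GWAS summary statistics under a point-mass spike and a normal slab. The slab standard deviation, prior inclusion probability, and the example summary data are all illustrative assumptions rather than CPBayes defaults or real GERA results.

```python
import numpy as np
from scipy.stats import norm

def trait_inclusion_prob(beta_hat, se, slab_sd=0.2, prior_incl=0.25):
    """Posterior probability that a trait is truly associated at the locus.

    Toy spike-and-slab on GWAS summary statistics: under the spike the true
    effect is exactly 0, so beta_hat ~ N(0, se^2); under the slab the effect
    is N(0, slab_sd^2), so marginally beta_hat ~ N(0, se^2 + slab_sd^2).
    slab_sd and prior_incl are illustrative choices, not CPBayes defaults.
    """
    m_spike = norm.pdf(beta_hat, 0.0, se)
    m_slab = norm.pdf(beta_hat, 0.0, np.sqrt(se**2 + slab_sd**2))
    num = prior_incl * m_slab
    return num / (num + (1 - prior_incl) * m_spike)

# Hypothetical summary data for five traits at one locus (log-odds scale).
beta_hats = np.array([0.12, -0.01, 0.30, 0.02, -0.15])
ses = np.array([0.04, 0.05, 0.06, 0.05, 0.05])
for b, s in zip(beta_hats, ses):
    print(f"beta_hat={b:+.2f}: P(associated) = {trait_inclusion_prob(b, s):.2f}")
```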

  3. Probabilistic Common Spatial Patterns for Multichannel EEG Analysis

    PubMed Central

    Chen, Zhe; Gao, Xiaorong; Li, Yuanqing; Brown, Emery N.; Gao, Shangkai

    2015-01-01

    Common spatial patterns (CSP) is a well-known spatial filtering algorithm for multichannel electroencephalogram (EEG) analysis. In this paper, we cast the CSP algorithm in a probabilistic modeling setting. Specifically, probabilistic CSP (P-CSP) is proposed as a generic EEG spatio-temporal modeling framework that subsumes the CSP and regularized CSP algorithms. The proposed framework enables us to resolve the overfitting issue of CSP in a principled manner. We derive statistical inference algorithms that can alleviate the issue of local optima. In particular, an efficient algorithm based on eigendecomposition is developed for maximum a posteriori (MAP) estimation in the case of isotropic noise. For more general cases, a variational algorithm is developed for group-wise sparse Bayesian learning for the P-CSP model and for automatically determining the model size. The two proposed algorithms are validated on a simulated data set. Their practical efficacy is also demonstrated by successful applications to single-trial classifications of three motor imagery EEG data sets and by the spatio-temporal pattern analysis of one EEG data set recorded in a Stroop color naming task. PMID:26005228

  4. The effects of various diets on glycemic outcomes during pregnancy: A systematic review and network meta-analysis.

    PubMed

    Ha, Vanessa; Bonner, Ashley J; Jadoo, Jaynendr K; Beyene, Joseph; Anand, Sonia S; de Souza, Russell J

    2017-01-01

    Evidence to support dietary modifications to improve glycemia during pregnancy is limited, and the benefits of diet beyond limiting gestational weight gain are unclear. Therefore, a systematic review and network meta-analysis of randomized trials was conducted to compare the effects of various common diets, stratified by the addition of gestational weight gain advice, on fasting glucose and insulin, hemoglobin A1c (HbA1c), and homeostatic model assessment for insulin resistance (HOMA-IR) in pregnant women. MEDLINE, EMBASE, Cochrane database, and reference lists of published studies were searched through April 2017. Randomized trials directly comparing two or more diets for ≥2 weeks were eligible. Bayesian network meta-analysis was performed for fasting glucose. Owing to a lack of similar dietary comparisons, a standard pairwise meta-analysis for the other glycemic outcomes was performed. The certainty of the pooled effect estimates was assessed using the GRADE tool. Twenty-one trials (1,865 participants) were included. In general, when given alongside gestational weight gain advice, fasting glucose improved in most diets compared to diets that gave gestational weight gain advice only. However, fasting glucose increased in high unsaturated or monounsaturated fatty acids diets. In the absence of gestational weight gain advice, fasting glucose improved in DASH-style diets compared to standard of care. Although most were non-significant, similar trends were observed for these same diets for the other glycemic outcomes. Dietary comparisons ranged from moderate to very low in quality of evidence. Alongside gestational weight gain advice, most diets, with the exception of a high unsaturated or a high monounsaturated fatty acid diet, demonstrated a fasting glucose improvement compared with gestational weight gain advice only. When gestational weight gain advice was not given, the DASH-style diet appeared optimal on fasting glucose. However, a small number of trials were identified and most dietary comparisons were underpowered to detect differences in glycemic outcomes. Further studies that are high in quality and adequately powered are needed to confirm these findings. PROSPERO CRD42015026008.

  5. The effects of various diets on glycemic outcomes during pregnancy: A systematic review and network meta-analysis

    PubMed Central

    Ha, Vanessa; Bonner, Ashley J.; Jadoo, Jaynendr K.; Beyene, Joseph; Anand, Sonia S.

    2017-01-01

    Aims Evidence to support dietary modifications to improve glycemia during pregnancy is limited, and the benefits of diet beyond limiting gestational weight gain are unclear. Therefore, a systematic review and network meta-analysis of randomized trials was conducted to compare the effects of various common diets, stratified by the addition of gestational weight gain advice, on fasting glucose and insulin, hemoglobin A1c (HbA1c), and homeostatic model assessment for insulin resistance (HOMA-IR) in pregnant women. Methods MEDLINE, EMBASE, Cochrane database, and reference lists of published studies were searched through April 2017. Randomized trials directly comparing two or more diets for ≥2 weeks were eligible. Bayesian network meta-analysis was performed for fasting glucose. Owing to a lack of similar dietary comparisons, a standard pairwise meta-analysis for the other glycemic outcomes was performed. The certainty of the pooled effect estimates was assessed using the GRADE tool. Results Twenty-one trials (1,865 participants) were included. In general, when given alongside gestational weight gain advice, fasting glucose improved in most diets compared to diets that gave gestational weight gain advice only. However, fasting glucose increased in high unsaturated or monounsaturated fatty acids diets. In the absence of gestational weight gain advice, fasting glucose improved in DASH-style diets compared to standard of care. Although most were non-significant, similar trends were observed for these same diets for the other glycemic outcomes. Dietary comparisons ranged from moderate to very low in quality of evidence. Conclusion/Interpretation Alongside gestational weight gain advice, most diets, with the exception of a high unsaturated or a high monounsaturated fatty acid diet, demonstrated a fasting glucose improvement compared with gestational weight gain advice only. When gestational weight gain advice was not given, the DASH-style diet appeared optimal on fasting glucose. However, a small number of trials were identified and most dietary comparisons were underpowered to detect differences in glycemic outcomes. Further studies that are high in quality and adequately powered are needed to confirm these findings. Registration PROSPERO CRD42015026008 PMID:28771519

  6. Surgical or Transcatheter Aortic-Valve Replacement in Intermediate-Risk Patients.

    PubMed

    Reardon, Michael J; Van Mieghem, Nicolas M; Popma, Jeffrey J; Kleiman, Neal S; Søndergaard, Lars; Mumtaz, Mubashir; Adams, David H; Deeb, G Michael; Maini, Brijeshwar; Gada, Hemal; Chetcuti, Stanley; Gleason, Thomas; Heiser, John; Lange, Rüdiger; Merhi, William; Oh, Jae K; Olsen, Peter S; Piazza, Nicolo; Williams, Mathew; Windecker, Stephan; Yakubov, Steven J; Grube, Eberhard; Makkar, Raj; Lee, Joon S; Conte, John; Vang, Eric; Nguyen, Hang; Chang, Yanping; Mugglin, Andrew S; Serruys, Patrick W J C; Kappetein, Arie P

    2017-04-06

    Although transcatheter aortic-valve replacement (TAVR) is an accepted alternative to surgery in patients with severe aortic stenosis who are at high surgical risk, less is known about comparative outcomes among patients with aortic stenosis who are at intermediate surgical risk. We evaluated the clinical outcomes in intermediate-risk patients with severe, symptomatic aortic stenosis in a randomized trial comparing TAVR (performed with the use of a self-expanding prosthesis) with surgical aortic-valve replacement. The primary end point was a composite of death from any cause or disabling stroke at 24 months in patients undergoing attempted aortic-valve replacement. We used Bayesian analytical methods (with a margin of 0.07) to evaluate the noninferiority of TAVR as compared with surgical valve replacement. A total of 1746 patients underwent randomization at 87 centers. Of these patients, 1660 underwent an attempted TAVR or surgical procedure. The mean (±SD) age of the patients was 79.8±6.2 years, and all were at intermediate risk for surgery (Society of Thoracic Surgeons Predicted Risk of Mortality, 4.5±1.6%). At 24 months, the estimated incidence of the primary end point was 12.6% in the TAVR group and 14.0% in the surgery group (95% credible interval [Bayesian analysis] for difference, -5.2 to 2.3%; posterior probability of noninferiority, >0.999). Surgery was associated with higher rates of acute kidney injury, atrial fibrillation, and transfusion requirements, whereas TAVR had higher rates of residual aortic regurgitation and need for pacemaker implantation. TAVR resulted in lower mean gradients and larger aortic-valve areas than surgery. Structural valve deterioration at 24 months did not occur in either group. TAVR was a noninferior alternative to surgery in patients with severe aortic stenosis at intermediate surgical risk, with a different pattern of adverse events associated with each procedure. (Funded by Medtronic; SURTAVI ClinicalTrials.gov number, NCT01586910 .).
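
    The SURTAVI primary analysis modeled time-to-event data with Bayesian methods; as a simplified sketch of the non-inferiority criterion it reports (posterior probability that the risk difference is below the 0.07 margin), the code below treats the 24-month event proportions as binomial with flat Beta priors. The event counts are hypothetical numbers chosen only to be roughly consistent with the reported 12.6% and 14.0% incidences, not the trial data.

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_noninferior(events_tavr, n_tavr, events_surg, n_surg,
                     margin=0.07, n_draws=200_000):
    """Posterior probability that TAVR is non-inferior to surgery.

    Simplified binomial/Beta(1,1) sketch of a Bayesian non-inferiority
    analysis on the 24-month event proportion; the SURTAVI analysis itself
    modeled time-to-event data, so this is only an illustration of the
    posterior-probability criterion, not a reconstruction of the trial model.
    """
    p_tavr = rng.beta(events_tavr + 1, n_tavr - events_tavr + 1, n_draws)
    p_surg = rng.beta(events_surg + 1, n_surg - events_surg + 1, n_draws)
    diff = p_tavr - p_surg            # risk difference (TAVR minus surgery)
    return np.mean(diff < margin)     # non-inferior if difference < margin

# Hypothetical counts roughly consistent with ~12.6% vs ~14.0% at 24 months.
print(prob_noninferior(events_tavr=104, n_tavr=830,
                       events_surg=116, n_surg=830))
```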

  7. A predictive approach to selecting the size of a clinical trial, based on subjective clinical opinion.

    PubMed

    Spiegelhalter, D J; Freedman, L S

    1986-01-01

    The 'textbook' approach to determining sample size in a clinical trial has some fundamental weaknesses which we discuss. We describe a new predictive method which takes account of prior clinical opinion about the treatment difference. The method adopts the point of clinical equivalence (determined by interviewing the clinical participants) as the null hypothesis. Decision rules at the end of the study are based on whether the interval estimate of the treatment difference (classical or Bayesian) includes the null hypothesis. The prior distribution is used to predict the probabilities of making the decisions to use one or other treatment or to reserve final judgement. It is recommended that sample size be chosen to control the predicted probability of the last of these decisions. An example is given from a multi-centre trial of superficial bladder cancer.
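
    A minimal simulation sketch of the predictive idea described here, under illustrative assumptions: a normal prior on the true treatment difference elicited from clinicians, a two-arm trial with known outcome standard deviation, and a decision rule based on whether the 95% interval for the difference lies entirely above, entirely below, or across the point of clinical equivalence. The prior parameters and sample size are invented; the original method also allows a Bayesian interval in place of the classical one.

```python
import numpy as np

rng = np.random.default_rng(1)

def predicted_decision_probs(n_per_arm, prior_mean, prior_sd, sigma,
                             equivalence_point=0.0, n_sim=50_000):
    """Predict end-of-trial decision probabilities for a candidate sample size.

    A sketch of the Spiegelhalter-Freedman idea under illustrative assumptions:
    a normal prior on the true difference, a two-arm trial with known outcome
    SD `sigma`, and a decision rule based on whether the 95% interval for the
    difference lies above, below, or across the point of clinical equivalence.
    """
    se = sigma * np.sqrt(2.0 / n_per_arm)
    delta = rng.normal(prior_mean, prior_sd, n_sim)      # draw true differences
    est = rng.normal(delta, se)                          # predicted estimates
    lo, hi = est - 1.96 * se, est + 1.96 * se
    use_new = np.mean(lo > equivalence_point)
    use_standard = np.mean(hi < equivalence_point)
    reserve = 1.0 - use_new - use_standard               # reserve final judgement
    return use_new, use_standard, reserve

print(predicted_decision_probs(n_per_arm=100, prior_mean=0.3,
                               prior_sd=0.4, sigma=1.0))
```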

  8. Predictive probability methods for interim monitoring in clinical trials with longitudinal outcomes.

    PubMed

    Zhou, Ming; Tang, Qi; Lang, Lixin; Xing, Jun; Tatsuoka, Kay

    2018-04-17

    In clinical research and development, interim monitoring is critical for better decision-making and minimizing the risk of exposing patients to possible ineffective therapies. For interim futility or efficacy monitoring, predictive probability methods are widely adopted in practice. Those methods have been well studied for univariate variables. However, for longitudinal studies, predictive probability methods using univariate information from only completers may not be most efficient, and data from on-going subjects can be utilized to improve efficiency. On the other hand, leveraging information from on-going subjects could allow an interim analysis to be potentially conducted once a sufficient number of subjects reach an earlier time point. For longitudinal outcomes, we derive closed-form formulas for predictive probabilities, including Bayesian predictive probability, predictive power, and conditional power and also give closed-form solutions for predictive probability of success in a future trial and the predictive probability of success of the best dose. When predictive probabilities are used for interim monitoring, we study their distributions and discuss their analytical cutoff values or stopping boundaries that have desired operating characteristics. We show that predictive probabilities utilizing all longitudinal information are more efficient for interim monitoring than that using information from completers only. To illustrate their practical application for longitudinal data, we analyze 2 real data examples from clinical trials. Copyright © 2018 John Wiley & Sons, Ltd.
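
    The paper derives closed-form predictive probabilities for longitudinal outcomes; those formulas are not reproduced here. Instead, the sketch below illustrates the underlying interim-monitoring quantity in the simplest univariate setting: the Bayesian predictive probability that a binary-endpoint trial will meet a posterior success criterion at the final analysis, given interim data and a Beta prior. The thresholds and counts are hypothetical.

```python
from scipy.stats import beta, betabinom

def predictive_prob_success(x, n, n_max, a=1.0, b=1.0,
                            posterior_threshold=0.6, prob_cut=0.95):
    """Bayesian predictive probability of final success at an interim look.

    Binary-endpoint illustration of interim predictive probability (the cited
    paper derives analogous closed forms for longitudinal outcomes, which are
    not reproduced here). `x` responders among `n` interim patients, `n_max`
    total planned; success = P(response rate > posterior_threshold) > prob_cut
    at the final analysis, under a Beta(a, b) prior.
    """
    m = n_max - n                     # patients still to be observed
    pp = 0.0
    for y in range(m + 1):            # possible future responder counts
        post_prob = 1.0 - beta.cdf(posterior_threshold,
                                   a + x + y, b + n_max - x - y)
        if post_prob > prob_cut:      # final analysis would declare success
            pp += betabinom.pmf(y, m, a + x, b + n - x)
    return pp

print(predictive_prob_success(x=14, n=20, n_max=40))
```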

  9. A Bayesian Approach for Nonlinear Structural Equation Models with Dichotomous Variables Using Logit and Probit Links

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Song, Xin-Yuan; Cai, Jing-Heng

    2010-01-01

    Analysis of ordered binary and unordered binary data has received considerable attention in social and psychological research. This article introduces a Bayesian approach, which has several nice features in practical applications, for analyzing nonlinear structural equation models with dichotomous data. We demonstrate how to use the software…

  10. Bayesian Statistics in Educational Research: A Look at the Current State of Affairs

    ERIC Educational Resources Information Center

    König, Christoph; van de Schoot, Rens

    2018-01-01

    The ability of a scientific discipline to build cumulative knowledge depends on its predominant method of data analysis. A steady accumulation of knowledge requires approaches which allow researchers to consider results from comparable prior research. Bayesian statistics is especially relevant for establishing a cumulative scientific discipline,…

  11. Bayesian Finite Mixtures for Nonlinear Modeling of Educational Data.

    ERIC Educational Resources Information Center

    Tirri, Henry; And Others

    A Bayesian approach for finding latent classes in data is discussed. The approach uses finite mixture models to describe the underlying structure in the data and demonstrates that the possibility of using full joint probability models raises interesting new prospects for exploratory data analysis. The concepts and methods discussed are illustrated…

  12. Bayesian Analysis of Structural Equation Models with Nonlinear Covariates and Latent Variables

    ERIC Educational Resources Information Center

    Song, Xin-Yuan; Lee, Sik-Yum

    2006-01-01

    In this article, we formulate a nonlinear structural equation model (SEM) that can accommodate covariates in the measurement equation and nonlinear terms of covariates and exogenous latent variables in the structural equation. The covariates can come from continuous or discrete distributions. A Bayesian approach is developed to analyze the…

  13. Evidence of major genes affecting stress response in rainbow trout using Bayesian methods of complex segregation analysis

    USDA-ARS?s Scientific Manuscript database

    As a first step towards the genetic mapping of quantitative trait loci (QTL) affecting stress response variation in rainbow trout, we performed complex segregation analyses (CSA) fitting mixed inheritance models of plasma cortisol using Bayesian methods in large full-sib families of rainbow trout. ...

  14. A Hierarchical Bayesian Procedure for Two-Mode Cluster Analysis

    ERIC Educational Resources Information Center

    DeSarbo, Wayne S.; Fong, Duncan K. H.; Liechty, John; Saxton, M. Kim

    2004-01-01

    This manuscript introduces a new Bayesian finite mixture methodology for the joint clustering of row and column stimuli/objects associated with two-mode asymmetric proximity, dominance, or profile data. That is, common clusters are derived which partition both the row and column stimuli/objects simultaneously into the same derived set of clusters.…

  15. Exact Bayesian p-values for a test of independence in a 2 × 2 contingency table with missing data.

    PubMed

    Lin, Yan; Lipsitz, Stuart R; Sinha, Debajyoti; Fitzmaurice, Garrett; Lipshultz, Steven

    2017-01-01

    Altham (Altham PME. Exact Bayesian analysis of a 2 × 2 contingency table, and Fisher's "exact" significance test. J R Stat Soc B 1969; 31: 261-269) showed that a one-sided p-value from Fisher's exact test of independence in a 2 × 2 contingency table is equal to the posterior probability of negative association in the 2 × 2 contingency table under a Bayesian analysis using an improper prior. We derive an extension of Fisher's exact test p-value in the presence of missing data, assuming the missing data mechanism is ignorable (i.e., missing at random or completely at random). Further, we propose Bayesian p-values for a test of independence in a 2 × 2 contingency table with missing data using alternative priors; we also present results from a simulation study exploring the Type I error rate and power of the proposed exact test p-values. An example, using data on the association between blood pressure and a cardiac enzyme, is presented to illustrate the methods.
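
    A small numerical illustration of the correspondence discussed here, under stated assumptions: Fisher's one-sided exact p-value for a hypothetical complete 2 × 2 table is compared with the Monte Carlo posterior probability of negative association under independent uniform Beta(1, 1) priors. Altham's exact equality holds for particular improper Beta priors, so with uniform priors the two quantities only approximately agree; the table entries are invented, and the missing-data extension developed in the paper is not attempted.

```python
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(2)

# Hypothetical complete 2x2 table: rows = groups, columns = event yes/no.
a, b = 7, 8      # group 1:  7 events,  8 non-events
c, d = 4, 11     # group 2:  4 events, 11 non-events

# One-sided Fisher exact p-value testing against positive association
# (odds ratio greater than 1).
_, p_fisher = fisher_exact([[a, b], [c, d]], alternative="greater")

# Posterior probability of negative association, P(p1 < p2 | data), under
# independent uniform Beta(1, 1) priors on the two event probabilities.
# (Altham's exact equality uses particular improper Beta priors, so the
# match here is only approximate.)
draws = 200_000
p1 = rng.beta(a + 1, b + 1, draws)
p2 = rng.beta(c + 1, d + 1, draws)
p_bayes = (p1 < p2).mean()

print(f"Fisher one-sided p-value (positive association): {p_fisher:.4f}")
print(f"P(negative association | data), uniform priors:  {p_bayes:.4f}")
```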

  16. Albendazole and Corticosteroids for the Treatment of Solitary Cysticercus Granuloma: A Network Meta-analysis.

    PubMed

    Zhao, Bing-Cheng; Jiang, Hong-Ye; Ma, Wei-Ying; Jin, Da-Di; Li, Hao-Miao; Lu, Hai; Nakajima, Hideaki; Huang, Tong-Yi; Sun, Kai-Yu; Chen, Shu-Ling; Chen, Ke-Bing

    2016-02-01

    Solitary cysticercus granuloma (SCG) is the commonest form of neurocysticercosis in the Indian subcontinent and in travelers. Several different treatment options exist for SCG. We conducted a Bayesian network meta-analysis of randomized clinical trials (RCTs) to identify the best treatment option to prevent seizure recurrence and promote lesion resolution for patients with SCG. PubMed, EMBASE and the Cochrane Library databases (up to June 1, 2015) were searched for RCTs that compared any anthelmintics or corticosteroids, alone or in combination, with placebo or head to head and reported on seizure recurrence and lesion resolution in patients with SCG. A total of 14 RCTs (1277 patients) were included in the quantitative analysis focusing on four different treatment options. A Bayesian network model computing odds ratios (OR) with 95% credible intervals (CrI) and probability of being best (Pbest) was used to compare all interventions simultaneously. Albendazole and corticosteroids combination therapy was the only regimen that significantly decreased the risk of seizure recurrence compared with conservative treatment (OR 0.32, 95% CrI 0.10-0.93, Pbest 73.3%). Albendazole and corticosteroids alone or in combination were all efficacious in hastening granuloma resolution, but the combined therapy remained the best option based on probability analysis (OR 3.05, 95% CrI 1.24-7.95, Pbest 53.9%). The superiority of the combination therapy changed little in RCTs with different follow-up durations and in sensitivity analyses. The limitations of this study include high risk of bias and short follow-up duration in most studies. Dual therapy of albendazole and corticosteroids was the most efficacious regimen that could prevent seizure recurrence and promote lesion resolution in a follow-up period of around one year. It should be recommended for the management of SCG until more high-quality evidence is available.

  17. Albendazole and Corticosteroids for the Treatment of Solitary Cysticercus Granuloma: A Network Meta-analysis

    PubMed Central

    Nakajima, Hideaki; Huang, Tong-Yi; Sun, Kai-Yu; Chen, Shu-Ling; Chen, Ke-Bing

    2016-01-01

    Background Solitary cysticercus granuloma (SCG) is the commonest form of neurocysticercosis in the Indian subcontinent and in travelers. Several different treatment options exist for SCG. We conducted a Bayesian network meta-analysis of randomized clinical trials (RCTs) to identify the best treatment option to prevent seizure recurrence and promote lesion resolution for patients with SCG. Methods and Principal Findings PubMed, EMBASE and the Cochrane Library databases (up to June 1, 2015) were searched for RCTs that compared any anthelmintics or corticosteroids, alone or in combination, with placebo or head to head and reported on seizure recurrence and lesion resolution in patients with SCG. A total of 14 RCTs (1277 patients) were included in the quantitative analysis focusing on four different treatment options. A Bayesian network model computing odds ratios (OR) with 95% credible intervals (CrI) and probability of being best (Pbest) was used to compare all interventions simultaneously. Albendazole and corticosteroids combination therapy was the only regimen that significantly decreased the risk of seizure recurrence compared with conservative treatment (OR 0.32, 95% CrI 0.10–0.93, Pbest 73.3%). Albendazole and corticosteroids alone or in combination were all efficacious in hastening granuloma resolution, but the combined therapy remained the best option based on probability analysis (OR 3.05, 95% CrI 1.24–7.95, Pbest 53.9%). The superiority of the combination therapy changed little in RCTs with different follow-up durations and in sensitivity analyses. The limitations of this study include high risk of bias and short follow-up duration in most studies. Conclusions Dual therapy of albendazole and corticosteroids was the most efficacious regimen that could prevent seizure recurrence and promote lesion resolution in a follow-up period of around one year. It should be recommended for the management of SCG until more high-quality evidence is available. PMID:26849048

  18. Using phase II data for the analysis of phase III studies: An application in rare diseases.

    PubMed

    Wandel, Simon; Neuenschwander, Beat; Röver, Christian; Friede, Tim

    2017-06-01

    Clinical research and drug development in orphan diseases are challenging, since large-scale randomized studies are difficult to conduct. Formally synthesizing the evidence is therefore of great value, yet this is rarely done in the drug-approval process. Phase III designs that make better use of phase II data can facilitate drug development in orphan diseases. A Bayesian meta-analytic approach is used to inform the phase III study with phase II data. It is particularly attractive, since uncertainty of between-trial heterogeneity can be dealt with probabilistically, which is critical if the number of studies is small. Furthermore, it allows quantifying and discounting the phase II data through the predictive distribution relevant for phase III. A phase III design is proposed which uses the phase II data and considers approval based on a phase III interim analysis. The design is illustrated with a non-inferiority case study from a Food and Drug Administration approval in herpetic keratitis (an orphan disease). Design operating characteristics are compared to those of a traditional design, which ignores the phase II data. An analysis of the phase II data reveals good but insufficient evidence for non-inferiority, highlighting the need for a phase III study. For the phase III study supported by phase II data, the interim analysis is based on half of the patients. For this design, the meta-analytic interim results are conclusive and would justify approval. In contrast, based on the phase III data only, interim results are inconclusive and require further evidence. To accelerate drug development for orphan diseases, innovative study designs and appropriate methodology are needed. Taking advantage of randomized phase II data when analyzing phase III studies looks promising because the evidence from phase II supports informed decision-making. The implementation of the Bayesian design is straightforward with public software such as R.

  19. F157. HIERARCHICAL PREDICTION ERRORS DURING AUDITORY MISMATCH UNDER PHARMACOLOGICAL MANIPULATIONS: A COMPUTATIONAL SINGLE-TRIAL EEG ANALYSIS

    PubMed Central

    Weber, Lilian; Diaconescu, Andreea; Tomiello, Sara; Schöbi, Dario; Iglesias, Sandra; Mathys, Christoph; Haker, Helene; Stefanics, Gabor; Schmidt, André; Kometer, Michael; Vollenweider, Franz X; Stephan, Klaas Enno

    2018-01-01

    Abstract Background A central theme of contemporary neuroscience is the notion that the brain embodies a generative model of its sensory inputs to infer on the underlying environmental causes, and that it uses hierarchical prediction errors (PEs) to continuously update this model. In two pharmacological EEG studies, we investigate trial-wise hierarchical PEs during the auditory mismatch negativity (MMN), an electrophysiological response to unexpected events, which depends on NMDA-receptor mediated plasticity and has repeatedly been shown to be reduced in schizophrenia. Methods Study1: Reanalysis of 64 channel EEG data from a previously published MMN study (Schmidt et al., 2012) using a placebo-controlled, within-subject design (N=19) to examine the effect of S-ketamine. Study2: 64 channel EEG data recorded during MMN (between subjects, double-blind, placebo-controlled design, N=73), to examine the effects of amisulpride and biperiden. Using the Hierarchical Gaussian Filter, a Bayesian learning model, we extracted trial-by-trial PE estimates on two hierarchical levels. These served as regressors in a GLM of trial-wise EEG signals at the sensor level. Results We find strong correlations of EEG with both PEs in both samples: lower-level PEs show effects early on (Study1: 133ms post-stimulus, Study2: 177ms), higher-level PEs later (Study1: 240ms, Study2: 450ms). The temporal order of these signatures thus mimics the hierarchical relationship of the PEs, as proposed by our computational model, where lower level beliefs need to be updated before learning can ensue on higher levels. Ketamine significantly reduced the representation of the higher-level PE in Study1. (Study2 has not been unblinded.) Discussion These studies present first evidence for hierarchical PEs during MMN and demonstrate that single-trial analyses guided by a computational model can distinguish different types (levels) of PEs, which are differentially linked to neuromodulators of demonstrated relevance for schizophrenia. Our analysis approach thus provides better mechanistic interpretability of pharmacological MMN studies, which will hopefully support the development of computational assays for diagnosis and treatment predictions in schizophrenia.

  20. Comparative efficacy and acceptability of antidepressants, psychological interventions, and their combination for depressive disorder in children and adolescents: protocol for a network meta-analysis

    PubMed Central

    Zhou, Xinyu; Cipriani, Andrea; Zhang, Yuqing; Cuijpers, Pim; Hetrick, Sarah E; Weisz, John R; Pu, Juncai; Giovane, Cinzia Del; Furukawa, Toshiaki A; Barth, Jürgen; Coghill, David; Leucht, Stefan; Yang, Lining; Ravindran, Arun V; Xie, Peng

    2017-01-01

    Introduction Depressive disorder is common in children and adolescents, with important consequences and serious impairments in terms of personal and social functioning. While both pharmacological and psychological interventions have been shown to be effective, there is still uncertainty about the balance between these and what treatment strategy should be preferred in clinical practice. Therefore, we aim to compare and rank in a network meta-analysis (NMA) the commonly used psychological, pharmacological and combined interventions for depressive disorder in children and adolescents. Methods and analysis We will update the literature search of two previous NMAs for the identification of trials of antidepressants and psychotherapy alone for depressive disorder in children and adolescents. For identification of trials of combination interventions, seven databases (PubMed, EMBASE, CENTRAL (Cochrane Central Register of Controlled Trials), Web of Science, PsycINFO, CINAHL, LiLACS) will be searched from date of inception. We will also search ClinicalTrials.gov, the WHO International Clinical Trials Registry Platform and check relevant reports on the US Food and Drug Administration website for unpublished data. Building on our previous findings in the field, we will include any commonly prescribed oral antidepressants and any manualised or structured psychotherapies, as well as their combinations. Randomised controlled trials assessing any active intervention against active comparator or pill placebo/psychological controls in acute treatment for depressive disorder in children and adolescents will be included. The primary outcomes will be efficacy (mean change in depressive symptoms), and acceptability of treatment (dropout rate due to any cause). The secondary outcomes will be remission rate, tolerability of treatment (dropouts for adverse events), as well as suicide-related outcomes (suicidal behaviour or ideation). We will perform Bayesian NMAs for all relative outcome measures. Subgroup analyses and sensitivity analyses will be conducted to assess the robustness of the findings. Dissemination This NMA will provide the most up-to-date and clinically useful information about the comparative efficacy and acceptability of antidepressants, psychological interventions and their combination in the acute treatment of children and adolescents with depressive disorder. As the most recent NMA in this area, it should provide an important evidence base for clinical decision-making. The results will be disseminated through peer-reviewed publication. Protocol registration PROSPERO CRD42015020841. PMID:28801423

  1. Comparative safety and efficacy of vasopressors for mortality in septic shock: A network meta-analysis.

    PubMed

    Nagendran, Myura; Maruthappu, Mahiben; Gordon, Anthony C; Gurusamy, Kurinchi S

    2016-05-01

    Septic shock is a life-threatening condition requiring vasopressor agents to support the circulatory system. Several agents exist with choice typically guided by the specific clinical scenario. We used a network meta-analysis approach to rate the comparative efficacy and safety of vasopressors for mortality and arrhythmia incidence in septic shock patients. We performed a comprehensive electronic database search including Medline, Embase, Science Citation Index Expanded and the Cochrane database. Randomised trials investigating vasopressor agents in septic shock patients and specifically assessing 28-day mortality or arrhythmia incidence were included. A Bayesian network meta-analysis was performed using Markov chain Monte Carlo methods. Thirteen trials of low to moderate risk of bias in which 3146 patients were randomised were included. There was no pairwise evidence to suggest one agent was superior over another for mortality. In the network meta-analysis, vasopressin was significantly superior to dopamine (OR 0.68 (95% CI 0.5 to 0.94)) for mortality. For arrhythmia incidence, standard pairwise meta-analyses confirmed that dopamine led to a higher incidence of arrhythmias than norepinephrine (OR 2.69 (95% CI 2.08 to 3.47)). In the network meta-analysis, there was no evidence of superiority of one agent over another. In this network meta-analysis, vasopressin was superior to dopamine for 28-day mortality in septic shock. Existing pairwise information supports the use of norepinephrine over dopamine. Our findings suggest that dopamine should be avoided in patients with septic shock and that other vasopressor agents should continue to be based on existing guidelines and clinical judgement of the specific presentation of the patient.

  2. Network meta-analysis of lorcaserin and oral hypoglycaemics for patients with type 2 diabetes mellitus and obesity.

    PubMed

    Neff, L M; Broder, M S; Beenhouwer, D; Chang, E; Papoyan, E; Wang, Z W

    2017-12-01

    In addition to weight loss, randomized controlled trials have shown improvement in glycaemic control in patients taking lorcaserin. The aim of this study was to compare adding lorcaserin or other glucose-lowering medications to metformin on weight and glycaemic control. A systematic review and network meta-analysis of randomized controlled trials were conducted. Included studies (published 1990-2014) were of lorcaserin or glucose-lowering medications in type 2 diabetic patients compared to placebo or different active treatments. Studies had to report ≥1 key outcome (change in weight or HbA1c, % HbA1c <7, hypoglycaemia). Direct meta-analysis was performed using DerSimonian and Laird random effects models, and network meta-analysis with Bayesian Markov-chain Monte Carlo random effects models; 6552 articles were screened and 41 included. Lorcaserin reduced weight significantly more than thiazolidinediones, glinides, sulphonylureas and dipeptidyl peptidase-4 inhibitors, some of which may have led to weight gain. There were no significant differences in weight change between lorcaserin and alpha-glucosidase inhibitors, glucagon-like peptide-1 agonists and sodium/glucose cotransporter 2 inhibitors. Network meta-analysis showed lorcaserin was non-inferior to all other agents on HbA1c reduction and % achieving HbA1c of <7%. The risk of hypoglycaemia was not significantly different among studied agents except that sulphonylureas were associated with higher risk of hypoglycaemia than lorcaserin. Although additional studies are needed, this analysis suggests that, in a population of patients with a body mass index of ≥27 who do not achieve glycaemic control on a single agent, lorcaserin may be added as an alternative to an add-on glucose-lowering medication. © 2017 World Obesity Federation.

  3. Exoplanet Biosignatures: Future Directions

    PubMed Central

    Bains, William; Cronin, Leroy; DasSarma, Shiladitya; Danielache, Sebastian; Domagal-Goldman, Shawn; Kacar, Betul; Kiang, Nancy Y.; Lenardic, Adrian; Reinhard, Christopher T.; Moore, William; Schwieterman, Edward W.; Shkolnik, Evgenya L.; Smith, Harrison B.

    2018-01-01

    Abstract We introduce a Bayesian method for guiding future directions for detection of life on exoplanets. We describe empirical and theoretical work necessary to place constraints on the relevant likelihoods, including those emerging from better understanding stellar environment, planetary climate and geophysics, geochemical cycling, the universalities of physics and chemistry, the contingencies of evolutionary history, the properties of life as an emergent complex system, and the mechanisms driving the emergence of life. We provide examples for how the Bayesian formalism could guide future search strategies, including determining observations to prioritize or deciding between targeted searches or larger lower resolution surveys to generate ensemble statistics and address how a Bayesian methodology could constrain the prior probability of life with or without a positive detection. Key Words: Exoplanets—Biosignatures—Life detection—Bayesian analysis. Astrobiology 18, 779–824. PMID:29938538

  4. Population forecasts for Bangladesh, using a Bayesian methodology.

    PubMed

    Mahsin, Md; Hossain, Syed Shahadat

    2012-12-01

    Population projection for many developing countries can be quite a challenging task for demographers, mostly due to the lack of sufficient reliable data. The objective of this paper is to present an overview of the existing methods for population forecasting and to propose an alternative based on Bayesian statistics, which retains the formality of statistical inference. The analysis has been carried out using the Markov chain Monte Carlo (MCMC) technique for Bayesian methodology available with the software WinBUGS. Convergence diagnostic techniques available with the WinBUGS software have been applied to ensure the convergence of the chains necessary for the implementation of MCMC. The Bayesian approach allows for the use of observed data and expert judgements by means of appropriate priors, and more realistic population forecasts, along with associated uncertainty, have been possible.

  5. Exoplanet Biosignatures: Future Directions.

    PubMed

    Walker, Sara I; Bains, William; Cronin, Leroy; DasSarma, Shiladitya; Danielache, Sebastian; Domagal-Goldman, Shawn; Kacar, Betul; Kiang, Nancy Y; Lenardic, Adrian; Reinhard, Christopher T; Moore, William; Schwieterman, Edward W; Shkolnik, Evgenya L; Smith, Harrison B

    2018-06-01

    We introduce a Bayesian method for guiding future directions for detection of life on exoplanets. We describe empirical and theoretical work necessary to place constraints on the relevant likelihoods, including those emerging from better understanding stellar environment, planetary climate and geophysics, geochemical cycling, the universalities of physics and chemistry, the contingencies of evolutionary history, the properties of life as an emergent complex system, and the mechanisms driving the emergence of life. We provide examples for how the Bayesian formalism could guide future search strategies, including determining observations to prioritize or deciding between targeted searches or larger lower resolution surveys to generate ensemble statistics and address how a Bayesian methodology could constrain the prior probability of life with or without a positive detection. Key Words: Exoplanets-Biosignatures-Life detection-Bayesian analysis. Astrobiology 18, 779-824.

  6. Bayesian Estimation of Small Effects in Exercise and Sports Science.

    PubMed

    Mengersen, Kerrie L; Drovandi, Christopher C; Robert, Christian P; Pyne, David B; Gore, Christopher J

    2016-01-01

    The aim of this paper is to provide a Bayesian formulation of the so-called magnitude-based inference approach to quantifying and interpreting effects, and, in a case study example, to provide accurate probabilistic statements that correspond to the intended magnitude-based inferences. The model is described in the context of a published small-scale athlete study which employed a magnitude-based inference approach to compare the effect of two altitude training regimens (live high-train low (LHTL), and intermittent hypoxic exposure (IHE)) on running performance and blood measurements of elite triathletes. The posterior distributions, and corresponding point and interval estimates, for the parameters and associated effects and comparisons of interest, were estimated using Markov chain Monte Carlo simulations. The Bayesian analysis was shown to provide more direct probabilistic comparisons of treatments and to be able to identify small effects of interest. The approach avoided asymptotic assumptions and overcame issues such as multiple testing. Bayesian analysis of unscaled effects showed a probability of 0.96 that LHTL yields a substantially greater increase in hemoglobin mass than IHE, a 0.93 probability of a substantially greater improvement in running economy and a greater than 0.96 probability that both IHE and LHTL yield a substantially greater improvement in maximum blood lactate concentration compared to a placebo. The conclusions are consistent with those obtained using a 'magnitude-based inference' approach that has been promoted in the field. The paper demonstrates that a fully Bayesian analysis is a simple and effective way of analysing small effects, providing a rich set of results that are straightforward to interpret in terms of probabilistic statements.
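
    A generic conjugate sketch of the kind of probabilistic statement the paper advocates — the posterior probability that a true effect exceeds a smallest worthwhile change — using a normal likelihood and a vague normal prior. The authors' actual models were fitted by MCMC and are more elaborate; the effect estimate, standard error, and threshold below are hypothetical.

```python
from scipy.stats import norm

def prob_substantial(effect_est, se, swc, prior_mean=0.0, prior_sd=10.0):
    """Posterior probability that a true effect exceeds the smallest
    worthwhile change (swc), under a normal likelihood and a vague normal
    prior -- a generic conjugate sketch of the kind of probabilistic
    statement described in the paper, not the authors' exact model.
    """
    prior_prec = 1.0 / prior_sd**2
    data_prec = 1.0 / se**2
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * effect_est)
    return 1.0 - norm.cdf(swc, post_mean, post_var**0.5)

# Hypothetical example: observed 3.1% gain in hemoglobin mass (SE 1.2%),
# with 1% taken as the smallest worthwhile change.
print(f"P(effect > 1%) = {prob_substantial(3.1, 1.2, 1.0):.3f}")
```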

  7. A Bayesian multi-stage cost-effectiveness design for animal studies in stroke research

    PubMed Central

    Cai, Chunyan; Ning, Jing; Huang, Xuelin

    2017-01-01

    Much progress has been made in the area of adaptive designs for clinical trials. However, little has been done regarding adaptive designs to identify optimal treatment strategies in animal studies. Motivated by an animal study of a novel strategy for treating strokes, we propose a Bayesian multi-stage cost-effectiveness design to simultaneously identify the optimal dose and determine the therapeutic treatment window for administrating the experimental agent. We consider a non-monotonic pattern for the dose-schedule-efficacy relationship and develop an adaptive shrinkage algorithm to assign more cohorts to admissible strategies. We conduct simulation studies to evaluate the performance of the proposed design by comparing it with two standard designs. These simulation studies show that the proposed design yields a significantly higher probability of selecting the optimal strategy, while it is generally more efficient and practical in terms of resource usage. PMID:27405325

  8. Prediction of Low-Thermal-Conductivity Compounds with First-Principles Anharmonic Lattice-Dynamics Calculations and Bayesian Optimization

    NASA Astrophysics Data System (ADS)

    Seko, Atsuto; Togo, Atsushi; Hayashi, Hiroyuki; Tsuda, Koji; Chaput, Laurent; Tanaka, Isao

    2015-11-01

    Compounds of low lattice thermal conductivity (LTC) are essential for seeking thermoelectric materials with high conversion efficiency. Some strategies have been used to decrease LTC. However, such trials have yielded successes only within a limited exploration space. Here, we report the virtual screening of a library containing 54 779 compounds. Our strategy is to search the library through Bayesian optimization using, as initial data, the LTC values obtained from first-principles anharmonic lattice-dynamics calculations for a set of 101 compounds. We discovered 221 materials with very low LTC. Two of them even have an electronic band gap <1 eV, which makes them exceptional candidates for thermoelectric applications. In addition to those newly discovered thermoelectric materials, the present strategy is believed to be powerful for many other applications in which the chemistry of materials is required to be optimized.
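
    A toy sketch of the screening strategy described here: Bayesian optimization over a candidate library, where a Gaussian-process surrogate trained on the compounds evaluated so far proposes, via expected improvement, the next compound whose "lattice thermal conductivity" should be computed. The two-dimensional descriptors and the synthetic LTC surface are invented stand-ins for the real descriptors and first-principles calculations.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(3)

# Toy stand-in for a compound library: 2-D descriptors and a hidden
# "lattice thermal conductivity" to minimize. Purely synthetic data --
# the real study used DFT-based anharmonic lattice-dynamics values.
X_lib = rng.uniform(0.0, 1.0, size=(500, 2))
ltc_true = 5.0 + 20.0 * ((X_lib[:, 0] - 0.7)**2 + (X_lib[:, 1] - 0.2)**2)

observed = list(rng.choice(len(X_lib), size=10, replace=False))  # initial data

for _ in range(20):                                   # 20 further "calculations"
    X_obs, y_obs = X_lib[observed], ltc_true[observed]
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X_obs, y_obs)
    mu, sd = gp.predict(X_lib, return_std=True)
    best = y_obs.min()
    sd = np.maximum(sd, 1e-9)
    z = (best - mu) / sd                              # expected improvement (minimization)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)
    ei[observed] = -np.inf                            # do not re-evaluate compounds
    observed.append(int(np.argmax(ei)))

print("Lowest LTC found:", ltc_true[observed].min(),
      "vs library optimum:", ltc_true.min())
```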

  9. Use of limited data to construct Bayesian networks for probabilistic risk assessment.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groth, Katrina M.; Swiler, Laura Painton

    2013-03-01

    Probabilistic Risk Assessment (PRA) is a fundamental part of safety/quality assurance for nuclear power and nuclear weapons. Traditional PRA very effectively models complex hardware system risks using binary probabilistic models. However, traditional PRA models are not flexible enough to accommodate non-binary soft-causal factors, such as digital instrumentation and control, passive components, aging, common cause failure, and human errors. Bayesian Networks offer the opportunity to incorporate these risks into the PRA framework. This report describes the results of an early career LDRD project titled “Use of Limited Data to Construct Bayesian Networks for Probabilistic Risk Assessment”. The goal of the work was to establish the capability to develop Bayesian Networks from sparse data, and to demonstrate this capability by producing a data-informed Bayesian Network for use in Human Reliability Analysis (HRA) as part of nuclear power plant Probabilistic Risk Assessment (PRA). This report summarizes the research goal and major products of the research.

  10. Applications of Bayesian Statistics to Problems in Gamma-Ray Bursts

    NASA Technical Reports Server (NTRS)

    Meegan, Charles A.

    1997-01-01

    This presentation will describe two applications of Bayesian statistics to Gamma-Ray Bursts (GRBs). The first attempts to quantify the evidence for a cosmological versus galactic origin of GRBs using only the observations of the dipole and quadrupole moments of the angular distribution of bursts. The cosmological hypothesis predicts isotropy, while the galactic hypothesis is assumed to produce a uniform probability distribution over positive values for these moments. The observed isotropic distribution indicates that the Bayes factor for the cosmological hypothesis over the galactic hypothesis is about 300. Another application of Bayesian statistics is in the estimation of chance associations of optical counterparts with galaxies. The Bayesian approach is preferred to frequentist techniques here because the Bayesian approach easily accounts for galaxy mass distributions and because one can incorporate three disjoint hypotheses: (1) bursts come from galactic centers, (2) bursts come from galaxies in proportion to luminosity, and (3) bursts do not come from external galaxies. This technique was used in the analysis of the optical counterpart to GRB970228.
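
    The presentation's actual calculation is not spelled out here, so the snippet below is only a schematic version of the dipole-moment argument: a Gaussian measurement likelihood for the observed moment is compared under a point hypothesis at the isotropic value and under a prior spread uniformly over positive values. The observed value, its uncertainty, and the prior range are all hypothetical, so the resulting Bayes factor will not match the quoted value of about 300.

        import numpy as np
        from scipy.stats import norm
        from scipy.integrate import quad

        d_obs, sigma = 0.01, 0.02      # hypothetical observed dipole moment and its uncertainty
        d_max = 0.5                    # hypothetical upper bound for galactic models

        # Marginal likelihood under the cosmological (isotropic) hypothesis: moment fixed at zero.
        m_cosmo = norm.pdf(d_obs, loc=0.0, scale=sigma)

        # Marginal likelihood under the galactic hypothesis: average over a uniform prior on [0, d_max].
        m_gal, _ = quad(lambda d: norm.pdf(d_obs, loc=d, scale=sigma) / d_max, 0.0, d_max)

        print("Bayes factor (cosmological / galactic) =", m_cosmo / m_gal)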

  11. The evolutionary relationships and age of Homo naledi: An assessment using dated Bayesian phylogenetic methods.

    PubMed

    Dembo, Mana; Radovčić, Davorka; Garvin, Heather M; Laird, Myra F; Schroeder, Lauren; Scott, Jill E; Brophy, Juliet; Ackermann, Rebecca R; Musiba, Charles M; de Ruiter, Darryl J; Mooers, Arne Ø; Collard, Mark

    2016-08-01

    Homo naledi is a recently discovered species of fossil hominin from South Africa. A considerable amount is already known about H. naledi but some important questions remain unanswered. Here we report a study that addressed two of them: "Where does H. naledi fit in the hominin evolutionary tree?" and "How old is it?" We used a large supermatrix of craniodental characters for both early and late hominin species and Bayesian phylogenetic techniques to carry out three analyses. First, we performed a dated Bayesian analysis to generate estimates of the evolutionary relationships of fossil hominins including H. naledi. Then we employed Bayes factor tests to compare the strength of support for hypotheses about the relationships of H. naledi suggested by the best-estimate trees. Lastly, we carried out a resampling analysis to assess the accuracy of the age estimate for H. naledi yielded by the dated Bayesian analysis. The analyses strongly supported the hypothesis that H. naledi forms a clade with the other Homo species and Australopithecus sediba. The analyses were more ambiguous regarding the position of H. naledi within the (Homo, Au. sediba) clade. A number of hypotheses were rejected, but several others were not. Based on the available craniodental data, Homo antecessor, Asian Homo erectus, Homo habilis, Homo floresiensis, Homo sapiens, and Au. sediba could all be the sister taxon of H. naledi. According to the dated Bayesian analysis, the most likely age for H. naledi is 912 ka. This age estimate was supported by the resampling analysis. Our findings have a number of implications. Most notably, they support the assignment of the new specimens to Homo, cast doubt on the claim that H. naledi is simply a variant of H. erectus, and suggest H. naledi is younger than has been previously proposed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Bayesian switching factor analysis for estimating time-varying functional connectivity in fMRI.

    PubMed

    Taghia, Jalil; Ryali, Srikanth; Chen, Tianwen; Supekar, Kaustubh; Cai, Weidong; Menon, Vinod

    2017-07-15

    There is growing interest in understanding the dynamical properties of functional interactions between distributed brain regions. However, robust estimation of temporal dynamics from functional magnetic resonance imaging (fMRI) data remains challenging due to limitations in extant multivariate methods for modeling time-varying functional interactions between multiple brain areas. Here, we develop a Bayesian generative model for fMRI time-series within the framework of hidden Markov models (HMMs). The model is a dynamic variant of the static factor analysis model (Ghahramani and Beal, 2000). We refer to this model as Bayesian switching factor analysis (BSFA) as it integrates factor analysis into a generative HMM in a unified Bayesian framework. In BSFA, brain dynamic functional networks are represented by latent states that are learned from the data. Crucially, BSFA is a generative model that estimates the temporal evolution of brain states and transition probabilities between states as a function of time. An attractive feature of BSFA is the automatic determination of the number of latent states via Bayesian model selection arising from penalization of excessively complex models. Key features of BSFA are validated using extensive simulations on carefully designed synthetic data. We further validate BSFA using fingerprint analysis of multisession resting-state fMRI data from the Human Connectome Project (HCP). Our results show that modeling temporal dependencies in the generative model of BSFA results in improved fingerprinting of individual participants. Finally, we apply BSFA to elucidate the dynamic functional organization of the salience, central-executive, and default mode networks, three core neurocognitive systems with a central role in cognitive and affective information processing (Menon, 2011). Across two HCP sessions, we demonstrate a high level of dynamic interactions between these networks and determine that the salience network has the highest temporal flexibility among the three networks. Our proposed methods provide a novel and powerful generative model for investigating dynamic brain connectivity. Copyright © 2017 Elsevier Inc. All rights reserved.
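
    BSFA itself (factor-analysis emissions, variational Bayes, automatic selection of the number of states) cannot be reduced to a few lines, so the snippet below is only a much-simplified stand-in for the state-switching idea: a maximum-likelihood Gaussian HMM fit to synthetic multi-region time series, from which a state sequence and transition matrix are read off. The hmmlearn dependency and the synthetic data are assumptions of the sketch.

        import numpy as np
        from hmmlearn.hmm import GaussianHMM

        rng = np.random.default_rng(2)

        # Synthetic "fMRI" data: two latent states with different regional means.
        n_t, n_regions = 600, 8
        states = (np.arange(n_t) // 150) % 2                 # slow alternation between states
        means = np.stack([np.zeros(n_regions), 0.5 * np.ones(n_regions)])
        X = means[states] + rng.normal(scale=1.0, size=(n_t, n_regions))

        model = GaussianHMM(n_components=2, covariance_type="full", n_iter=200, random_state=0)
        model.fit(X)
        decoded = model.predict(X)                           # time-resolved state labels

        print("estimated transition matrix:\n", model.transmat_.round(2))
        print("fraction of time in state 0:", (decoded == 0).mean())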

  13. Aripiprazole Lauroxil Compared with Paliperidone Palmitate in Patients with Schizophrenia: An Indirect Treatment Comparison.

    PubMed

    Cameron, Chris; Zummo, Jacqueline; Desai, Dharmik N; Drake, Christine; Hutton, Brian; Kotb, Ahmed; Weiden, Peter J

    Aripiprazole lauroxil (AL) is a long-acting injectable atypical antipsychotic recently approved for treatment of schizophrenia on the basis of a large-scale trial of two doses of AL versus placebo. There are no direct-comparison studies with paliperidone palmitate (PP; the long-acting antipsychotic used most often in acute settings) for the acute psychotic episode. The objective was to indirectly compare efficacy and safety between the pivotal AL study and all PP studies meeting indirect comparison criteria. Systematic searches of MEDLINE, Embase, Cochrane CENTRAL, PsycINFO, ClinicalTrials.gov, the International Clinical Trials Registry Platform, and gray literature were performed to identify randomized controlled trials of PP with designs similar to the AL trial. Bayesian network meta-analysis compared treatments with respect to symptom response and tolerability issues including weight gain, akathisia, parkinsonism, and likelihood of treatment-emergent adverse events. Three appropriate PP studies were identified for indirect comparison. Both doses of AL (441 mg and 882 mg monthly) were compared with two efficacious doses of PP (156 mg and 234 mg monthly). All four active-treatment conditions were associated with comparable reductions in acute symptoms (Positive and Negative Syndrome Scale) versus placebo, of similar magnitude (range of mean difference -8.12 to -12.01, with overlapping 95% credible intervals). Between-group comparisons of active-treatment arms yielded summary estimates of magnitude near 0. No clinically meaningful differences in the incidence of selected safety or tolerability parameters were found between active treatments. These results suggest that both AL and PP are effective for treatment of adults experiencing acute exacerbation of schizophrenia. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
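
    The study's network meta-analysis cannot be reproduced from the abstract, so the following is only a hypothetical sketch of the indirect-comparison logic on a common placebo anchor: treatment-versus-placebo effects are drawn from approximate normal posteriors and their difference is summarised. The means and standard deviations below are invented, not the study's estimates.

        import numpy as np

        rng = np.random.default_rng(3)
        n_draws = 100_000

        # Approximate posteriors for PANSS change versus placebo (mean, sd); hypothetical values.
        d_al = rng.normal(-10.0, 2.0, n_draws)    # aripiprazole lauroxil vs placebo
        d_pp = rng.normal(-9.0, 2.5, n_draws)     # paliperidone palmitate vs placebo

        diff = d_al - d_pp                        # indirect AL-versus-PP contrast
        lo, hi = np.percentile(diff, [2.5, 97.5])
        print(f"median difference {np.median(diff):.1f}, 95% CrI ({lo:.1f}, {hi:.1f})")
        print("P(AL more favourable than PP) =", (diff < 0).mean())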

  14. Clinical trials for drug registrations in Asian-Pacific countries: proposal for a new paradigm from a statistical perspective.

    PubMed

    Shih, W J

    2001-08-01

    The world has become more interdependent with the movement toward free trade and global markets. The regulations for approval of new drugs in Asian markets have always been an important issue in free trade negotiations between the U.S.- and E.U.-based international manufacturers and the Asian-Pacific countries, since pharmaceuticals represent substantial trade value for them. In 1998 the University of Hong Kong and the Singapore National Medical Research Council jointly hosted the first Asian Clinical Trials Conference. The Society for Clinical Trials was invited as a collaborator for the event, which signified a milestone for interaction between the East and West in the discussion of clinical trials. Many have participated in the discussion of drug approval and registration issues for the Asian region based on the drug development experience in the United States. However, there are many interesting differences between the two regions, which lead to different approval processes for new drugs developed by the U.S.- and E.U.-based international manufacturers. This article highlights some regulatory dilemmas and some key statistical concepts pertinent to these differences. The purpose of this paper is to resolve this regional regulatory and scientific dilemma. A new paradigm of sample size design and data analysis for drug approval for countries in the Asian-Pacific region is proposed. The central premise is that, by the time a drug manufacturer seeks marketing approval in an Asian country, substantial information from multicenter studies has already demonstrated efficacy in the United States or the European Union. This leads to the idea of a "consistency trial" using the method of Bayesian most plausible prediction. The method is illustrated with an example.
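
    The paper's Bayesian most plausible prediction method is not detailed in the abstract; the snippet below is only an illustration of the general consistency-trial idea under simple normal-normal assumptions: the effect estimated from the U.S./E.U. programme serves as the prior, and we compute the predictive probability that a small regional trial will reproduce at least half of that effect. All numbers are hypothetical.

        import numpy as np
        from scipy.stats import norm

        mu_prior, sd_prior = 0.40, 0.10    # hypothetical posterior for the effect from US/EU data
        n_regional = 60                    # hypothetical regional sample size per arm
        sigma = 1.0                        # assumed outcome standard deviation
        se_regional = sigma * np.sqrt(2.0 / n_regional)

        # Predictive sd of the regional estimate: prior uncertainty plus sampling error.
        pred_sd = np.sqrt(sd_prior**2 + se_regional**2)

        # "Consistency": the regional estimate retains at least half of the established effect.
        p_consistent = norm.sf(0.5 * mu_prior, loc=mu_prior, scale=pred_sd)
        print("predictive probability of a consistent regional result:", round(p_consistent, 3))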

  15. Accelerating clinical development of HIV vaccine strategies: methodological challenges and considerations in constructing an optimised multi-arm phase I/II trial design.

    PubMed

    Richert, Laura; Doussau, Adélaïde; Lelièvre, Jean-Daniel; Arnold, Vincent; Rieux, Véronique; Bouakane, Amel; Lévy, Yves; Chêne, Geneviève; Thiébaut, Rodolphe

    2014-02-26

    Many candidate vaccine strategies against human immunodeficiency virus (HIV) infection are under study, but their clinical development is lengthy and iterative. To accelerate HIV vaccine development, optimised trial designs are needed. We propose a randomised multi-arm phase I/II design for early stage development of several vaccine strategies, aiming at rapidly discarding those that are unsafe or non-immunogenic. We explored early stage designs to evaluate both the safety and the immunogenicity of four heterologous prime-boost HIV vaccine strategies in parallel. One of the vaccines used as a prime and boost in the different strategies (vaccine 1) has yet to be tested in humans, thus requiring a phase I safety evaluation. However, its toxicity risk is considered minimal based on data from similar vaccines. We adapted a randomised phase II trial design by integrating an early safety decision rule, emulating that of a phase I study. We evaluated the operating characteristics of the proposed design in simulation studies with either a fixed-sample frequentist or a continuous Bayesian safety decision rule, and projected timelines for the trial. We propose a randomised four-arm phase I/II design with two independent binary endpoints for safety and immunogenicity. Immunogenicity evaluation at trial end is based on a single-stage Fleming design per arm, comparing the observed proportion of responders in an immunogenicity screening assay to an unacceptably low proportion, without direct comparisons between arms. Randomisation limits heterogeneity in volunteer characteristics between arms. To avoid exposure of additional participants to an unsafe vaccine during the vaccine boost phase, an early safety decision rule is imposed on the arm starting with vaccine 1 injections. In simulations of the design with either decision rule, the risks of erroneous conclusions were controlled at below 15%. Flexibility in trial conduct is greater with the continuous Bayesian rule. A 12-month gain in timelines is expected with this optimised design. Other existing designs, such as bivariate or seamless phase I/II designs, did not offer a clear-cut alternative. By combining phase I and phase II evaluations in a multi-arm trial, the proposed optimised design allows for accelerating early stage clinical development of HIV vaccine strategies.
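
    The trial's actual decision thresholds are not given in the abstract, so the function below only sketches the general form of a continuous Bayesian safety rule: after each participant on the vaccine 1 arm, the arm stops if the posterior probability that the toxicity rate exceeds 20% rises above 0.90, under a Beta(1, 9) prior reflecting the low expected toxicity. The prior, cut-offs, and outcome sequence are hypothetical.

        from scipy.stats import beta

        def check_safety(outcomes, p_max=0.20, cutoff=0.90, a0=1.0, b0=9.0):
            """outcomes: 0/1 toxicity indicators in enrolment order."""
            tox = 0
            for i, y in enumerate(outcomes, start=1):
                tox += y
                p_exceed = beta.sf(p_max, a0 + tox, b0 + i - tox)   # P(p_tox > p_max | data so far)
                if p_exceed > cutoff:
                    return f"stop after participant {i} (posterior probability {p_exceed:.2f})"
            return "no stopping triggered"

        print(check_safety([0, 0, 1, 0, 1, 1, 1]))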

  16. Adaptive Randomization of Veliparib-Carboplatin Treatment in Breast Cancer.

    PubMed

    Rugo, Hope S; Olopade, Olufunmilayo I; DeMichele, Angela; Yau, Christina; van 't Veer, Laura J; Buxton, Meredith B; Hogarth, Michael; Hylton, Nola M; Paoloni, Melissa; Perlmutter, Jane; Symmans, W Fraser; Yee, Douglas; Chien, A Jo; Wallace, Anne M; Kaplan, Henry G; Boughey, Judy C; Haddad, Tufia C; Albain, Kathy S; Liu, Minetta C; Isaacs, Claudine; Khan, Qamar J; Lang, Julie E; Viscusi, Rebecca K; Pusztai, Lajos; Moulder, Stacy L; Chui, Stephen Y; Kemmer, Kathleen A; Elias, Anthony D; Edmiston, Kirsten K; Euhus, David M; Haley, Barbara B; Nanda, Rita; Northfelt, Donald W; Tripathy, Debasish; Wood, William C; Ewing, Cheryl; Schwab, Richard; Lyandres, Julia; Davis, Sarah E; Hirst, Gillian L; Sanil, Ashish; Berry, Donald A; Esserman, Laura J

    2016-07-07

    The genetic and clinical heterogeneity of breast cancer makes the identification of effective therapies challenging. We designed I-SPY 2, a phase 2, multicenter, adaptively randomized trial to screen multiple experimental regimens in combination with standard neoadjuvant chemotherapy for breast cancer. The goal is to match experimental regimens with responding cancer subtypes. We report results for veliparib, a poly(ADP-ribose) polymerase (PARP) inhibitor, combined with carboplatin. In this ongoing trial, women are eligible for participation if they have stage II or III breast cancer with a tumor 2.5 cm or larger in diameter; cancers are categorized into eight biomarker subtypes on the basis of status with regard to human epidermal growth factor receptor 2 (HER2), hormone receptors, and a 70-gene assay. Patients undergo adaptive randomization within each biomarker subtype to receive regimens that have better performance than the standard therapy. Regimens are evaluated within 10 biomarker signatures (i.e., prospectively defined combinations of biomarker subtypes). Veliparib-carboplatin plus standard therapy was considered for HER2-negative tumors and was therefore evaluated in 3 signatures. The primary end point is pathological complete response. Tumor volume changes measured by magnetic resonance imaging during treatment are used to predict whether a patient will have a pathological complete response. Regimens move on from phase 2 if and when they have a high Bayesian predictive probability of success in a subsequent phase 3 neoadjuvant trial within the biomarker signature in which they performed well. With regard to triple-negative breast cancer, veliparib-carboplatin had an 88% predicted probability of success in a phase 3 trial. A total of 72 patients were randomly assigned to receive veliparib-carboplatin, and 44 patients were concurrently assigned to receive control therapy; at the completion of chemotherapy, the estimated rates of pathological complete response in the triple-negative population were 51% (95% Bayesian probability interval [PI], 36 to 66%) in the veliparib-carboplatin group versus 26% (95% PI, 9 to 43%) in the control group. The toxicity of veliparib-carboplatin was greater than that of the control. The process used in our trial showed that veliparib-carboplatin added to standard therapy resulted in higher rates of pathological complete response than standard therapy alone specifically in triple-negative breast cancer. (Funded by the QuantumLeap Healthcare Collaborative and others; I-SPY 2 TRIAL ClinicalTrials.gov number, NCT01042379.).
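
    The trial's Bayesian machinery (longitudinal modelling of MRI tumour volume and predictive probabilities of phase 3 success) is far richer than can be shown here; the snippet below only illustrates the final-comparison step with a conjugate Beta-Binomial model: posterior pathological-complete-response rates for two arms, credible intervals, and the posterior probability that the experimental arm is better. The counts are invented, not the I-SPY 2 data.

        import numpy as np

        rng = np.random.default_rng(4)
        n_draws = 200_000

        # Hypothetical counts: responders / randomised patients per arm.
        y_exp, n_exp = 20, 40
        y_ctl, n_ctl = 8, 30

        p_exp = rng.beta(1 + y_exp, 1 + n_exp - y_exp, n_draws)
        p_ctl = rng.beta(1 + y_ctl, 1 + n_ctl - y_ctl, n_draws)

        for name, draws in (("experimental", p_exp), ("control", p_ctl)):
            lo, hi = np.percentile(draws, [2.5, 97.5])
            print(f"{name}: posterior mean {draws.mean():.2f}, 95% CrI ({lo:.2f}, {hi:.2f})")
        print("P(experimental pCR rate > control) =", (p_exp > p_ctl).mean())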

  17. The Power Prior: Theory and Applications

    PubMed Central

    Ibrahim, Joseph G.; Chen, Ming-Hui; Gwon, Yeongjin; Chen, Fang

    2015-01-01

    The power prior has been widely used in many applications covering a large number of disciplines. The power prior is intended to be an informative prior constructed from historical data. It has been used in clinical trials, genetics, health care, psychology, environmental health, engineering, economics, and business. It has also been applied for a wide variety of models and settings, both in the experimental design and analysis contexts. In this review article, we give an A to Z exposition of the power prior and its applications to date. We review its theoretical properties, variations in its formulation, statistical contexts for which it has been used, applications, and its advantages over other informative priors. We review models for which it has been used, including generalized linear models, survival models, and random effects models. Statistical areas where the power prior has been used include model selection, experimental design, hierarchical modeling, and conjugate priors. Frequentist properties of power priors in posterior inference are established, and a simulation study is conducted to further examine the empirical performance of posterior estimates with power priors. Real data analyses are given illustrating the power prior as well as the use of the power prior in the Bayesian design of clinical trials. PMID:26346180
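
    For a binomial endpoint with a Beta initial prior, the power prior is conjugate: the historical counts enter the posterior multiplied by the discounting parameter a0 in [0, 1], with a0 = 0 ignoring the historical trial and a0 = 1 pooling it fully. The counts below are hypothetical; the update rule is the standard conjugate one.

        from scipy.stats import beta

        y0, n0 = 30, 100        # historical data: responders / patients
        y, n = 12, 40           # current trial data
        a, b = 1.0, 1.0         # initial Beta(1, 1) prior

        for a0 in (0.0, 0.5, 1.0):
            post = beta(a + a0 * y0 + y, b + a0 * (n0 - y0) + (n - y))
            print(f"a0 = {a0}: posterior mean {post.mean():.3f}, "
                  f"95% CrI ({post.ppf(0.025):.3f}, {post.ppf(0.975):.3f})")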

  18. Comparative efficacy and acceptability of psychotherapies for acute anxiety disorders in children and adolescents: study protocol for a network meta-analysis

    PubMed Central

    Zhang, Yuqing; Zhou, Xinyu; James, Anthony C; Qin, Bin; Whittington, Craig J; Cuijpers, Pim; Del Giovane, Cinzia; Liu, Yiyun; Cohen, David; Weisz, John R; Xie, Peng

    2015-01-01

    Introduction Anxiety disorders are associated with significant public health burden in young individuals. Cognitive-behavioural therapy (CBT) is the most commonly used psychotherapy for anxiety disorders in children and adolescents, but previous reviews were hindered by the limited number of trials with direct comparisons between different psychotherapies and their delivery formats. Consequently, the main aim of this research was to investigate the comparative efficacy and acceptability of various types and deliveries of psychotherapies for anxiety disorders in children and adolescents. Methods and analysis We will systematically search PubMed, EMBASE, Cochrane, Web of Science, PsycINFO, CINAHL, ProQuest Dissertations and LiLACS for randomised controlled trials, regardless of whether participants were blinded, published from 1 January 1966 to 30 January 2015 (updated to 1 July 2015), that compared any psychotherapy with either a control condition or an active comparator with different types and/or different delivery formats for the acute treatment of anxiety disorders in children and adolescents. Data extraction, risk of bias and quality assessments will be performed independently by two reviewers. The primary outcome for efficacy will be mean overall change scores in anxiety symptoms (self-rated or assessor-rated) from baseline to post-treatment between two groups. The acceptability of treatment will be measured as the proportion of patients who discontinue treatment during the acute phase of treatment. We will assess efficacy, based on the standardised mean difference (SMD), and acceptability, based on the OR, using a random-effects network meta-analysis within a Bayesian framework. Subgroup and sensitivity analyses will be conducted to assess the robustness of the findings. Ethics and dissemination No ethical issues are foreseen. The results will be published in a peer-reviewed journal and will be disseminated electronically and in print. The meta-analysis may be updated to inform and guide management of anxiety in children and adolescents. Trial registration number PROSPERO CRD42015016283. PMID:26443658
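
    The protocol's full network meta-analysis cannot be anticipated here; the snippet below is only a minimal Bayesian random-effects meta-analysis for a single pairwise comparison on the SMD scale, with invented study-level estimates, to show the kind of hierarchical model involved. PyMC is assumed as the sampler; a network version would add consistency equations linking the different treatment contrasts.

        import numpy as np
        import pymc as pm

        smd = np.array([-0.45, -0.30, -0.60, -0.20])   # hypothetical study-level SMDs
        se = np.array([0.15, 0.20, 0.25, 0.18])        # their standard errors

        with pm.Model():
            mu = pm.Normal("mu", mu=0.0, sigma=1.0)            # pooled effect
            tau = pm.HalfNormal("tau", sigma=0.5)              # between-study SD
            theta = pm.Normal("theta", mu=mu, sigma=tau, shape=len(smd))
            pm.Normal("y", mu=theta, sigma=se, observed=smd)
            idata = pm.sample(1000, tune=1000, chains=2, random_seed=0, progressbar=False)

        post_mu = idata.posterior["mu"].values.ravel()
        lo, hi = np.percentile(post_mu, [2.5, 97.5])
        print(f"pooled SMD {post_mu.mean():.2f}, 95% CrI ({lo:.2f}, {hi:.2f})")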

  19. Bayesian cross-validation for model evaluation and selection, with application to the North American Breeding Bird Survey

    USGS Publications Warehouse

    Link, William; Sauer, John R.

    2016-01-01

    The analysis of ecological data has changed in two important ways over the last 15 years. The development and easy availability of Bayesian computational methods has allowed and encouraged the fitting of complex hierarchical models. At the same time, there has been increasing emphasis on acknowledging and accounting for model uncertainty. Unfortunately, the ability to fit complex models has outstripped the development of tools for model selection and model evaluation: familiar model selection tools such as Akaike's information criterion and the deviance information criterion are widely known to be inadequate for hierarchical models. In addition, little attention has been paid to the evaluation of model adequacy in the context of hierarchical modeling, i.e., to the evaluation of fit for a single model. In this paper, we describe Bayesian cross-validation, which provides tools for model selection and evaluation. We describe the Bayesian predictive information criterion and a Bayesian approximation to the BPIC known as the Watanabe-Akaike information criterion (WAIC). We illustrate the use of these tools for model selection, and the use of Bayesian cross-validation as a tool for model evaluation, using three large data sets from the North American Breeding Bird Survey.
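
    WAIC can be computed directly from a matrix of pointwise log-likelihoods evaluated at posterior draws; the matrix below is simulated purely to make the snippet self-contained, but the formula (log pointwise predictive density minus the posterior-variance penalty) is the standard one.

        import numpy as np
        from scipy.special import logsumexp

        rng = np.random.default_rng(5)
        log_lik = rng.normal(loc=-1.2, scale=0.3, size=(2000, 150))   # posterior draws x observations

        # Log pointwise predictive density: log of the posterior-mean likelihood at each observation.
        lppd = np.sum(logsumexp(log_lik, axis=0) - np.log(log_lik.shape[0]))
        # Effective number of parameters: posterior variance of the pointwise log-likelihoods.
        p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))

        waic = -2.0 * (lppd - p_waic)
        print(f"lppd = {lppd:.1f}, p_waic = {p_waic:.1f}, WAIC = {waic:.1f}")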

  20. Classifying emotion in Twitter using Bayesian network

    NASA Astrophysics Data System (ADS)

    Surya Asriadie, Muhammad; Syahrul Mubarok, Mohamad; Adiwijaya

    2018-03-01

    Language is used to express not only facts but also emotions. Emotions are noticeable in everything from a person's behavior to the social media statuses they write. Analysis of emotions in text is performed on a variety of media, such as Twitter. This paper studies the classification of emotions on Twitter using Bayesian networks because of their ability to model uncertainty and relationships between features. The result is two Bayesian-network-based models: the Full Bayesian Network (FBN) and the Bayesian Network with Mood Indicator (BNM). FBN is a massive Bayesian network in which each word is treated as a node. The study shows that the method used to train FBN is not very effective at producing the best model, and FBN performs worse than Naive Bayes: the F1-score for FBN is 53.71%, while for Naive Bayes it is 54.07%. BNM is proposed as an alternative method that builds on Multinomial Naive Bayes and has much lower computational complexity than FBN. Even though BNM does not outperform FBN, it successfully improves on the performance of Multinomial Naive Bayes: the F1-score for the Multinomial Naive Bayes model is 51.49%, while for BNM it is 52.14%.
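
    The paper's FBN and BNM models are not specified in enough detail to reproduce, so the snippet below only recreates the kind of Multinomial Naive Bayes reference that the reported F1-scores are compared against, using scikit-learn on a tiny invented corpus (the real study used Twitter data and a proper train/test split).

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.metrics import f1_score

        tweets = ["i am so happy today", "this is terrible news", "what a joyful surprise",
                  "i feel sad and alone", "absolutely furious right now", "so calm and content"]
        labels = ["joy", "sadness", "joy", "sadness", "anger", "joy"]

        vec = CountVectorizer()
        X = vec.fit_transform(tweets)

        clf = MultinomialNB().fit(X, labels)
        pred = clf.predict(X)                      # in-sample only, for illustration
        print("macro F1:", round(f1_score(labels, pred, average="macro"), 2))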
