Sample records for bootstrap confidence interval

  1. Bootstrapping Confidence Intervals for Robust Measures of Association.

    ERIC Educational Resources Information Center

    King, Jason E.

    A Monte Carlo simulation study was conducted to determine the bootstrap correction formula yielding the most accurate confidence intervals for robust measures of association. Confidence intervals were generated via the percentile, adjusted, BC, and BC(a) bootstrap procedures and applied to the Winsorized, percentage bend, and Pearson correlation…

  2. More accurate, calibrated bootstrap confidence intervals for correlating two autocorrelated climate time series

    NASA Astrophysics Data System (ADS)

    Olafsdottir, Kristin B.; Mudelsee, Manfred

    2013-04-01

    Estimating Pearson's correlation coefficient between two time series, in order to evaluate the influence of one time-dependent variable on another, is one of the most frequently used statistical methods in the climate sciences. Various methods are used to estimate a confidence interval to support the correlation point estimate. Many of them make strong mathematical assumptions regarding distributional shape and serial correlation, which are rarely met. More robust statistical methods are needed to increase the accuracy of the confidence intervals. Bootstrap confidence intervals are estimated in the Fortran 90 program PearsonT (Mudelsee, 2003), whose main intention was to obtain an accurate confidence interval for the correlation coefficient between two time series by taking into account the serial dependence of the process that generated the data. However, Monte Carlo experiments show that the coverage accuracy for smaller data sizes can be improved. Here we adapt the PearsonT program into a new version, PearsonT3, calibrating the confidence interval to increase the coverage accuracy. Calibration is a bootstrap resampling technique that essentially performs a second bootstrap loop, that is, it resamples from the bootstrap resamples. Like the non-calibrated bootstrap confidence intervals, it offers robustness against the data distribution. A pairwise moving block bootstrap is used to preserve the serial correlation of both time series. The calibration is applied to standard-error-based bootstrap Student's t confidence intervals. The performance of the calibrated confidence intervals is examined with Monte Carlo simulations and compared with that of the uncalibrated intervals, that is, PearsonT. Coverage accuracy is clearly better for the calibrated confidence intervals, with coverage error acceptably small (i.e., within a few percentage points) already for data sizes as small as 20.
    One form of climate time series is output from numerical models which simulate the climate system. The method is applied to model data from the high-resolution ocean model INALT01, where the relationship between the Agulhas Leakage and the North Brazil Current is evaluated. Preliminary results show significant correlation between the two variables at a 10-year lag, which is roughly the time it takes Agulhas Leakage water to reach the North Brazil Current. Mudelsee, M., 2003. Estimating Pearson's correlation coefficient with bootstrap confidence interval from serially dependent time series. Mathematical Geology 35, 651-665.
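
As an aside for readers who want to experiment: the pairwise moving block bootstrap described above is easy to prototype. The sketch below (Python rather than the paper's Fortran 90; function names and the block length are ours) draws blocks of consecutive (x, y) pairs so the serial correlation of both series is preserved, then reads off a plain percentile interval. PearsonT3's calibrated Student's t intervals add a second, nested bootstrap loop that is omitted here.

```python
import random
import statistics

def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def moving_block_bootstrap_ci(x, y, block_len=5, n_boot=2000, alpha=0.05, seed=1):
    """Percentile CI for r from a pairwise moving block bootstrap: blocks of
    consecutive (x, y) pairs are resampled together, preserving the serial
    correlation of both series within each block."""
    rng = random.Random(seed)
    n = len(x)
    starts = range(n - block_len + 1)
    reps = []
    for _ in range(n_boot):
        bx, by = [], []
        while len(bx) < n:
            s = rng.choice(starts)  # random block start, blocks overlap
            bx.extend(x[s:s + block_len])
            by.extend(y[s:s + block_len])
        reps.append(pearson_r(bx[:n], by[:n]))
    reps.sort()
    return reps[int(alpha / 2 * n_boot)], reps[int((1 - alpha / 2) * n_boot) - 1]
```

The block length should grow with the persistence of the series; Mudelsee (2003) derives a data-adaptive choice that this sketch does not implement.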

  3. Confidence Intervals for the Mean: To Bootstrap or Not to Bootstrap

    ERIC Educational Resources Information Center

    Calzada, Maria E.; Gardner, Holly

    2011-01-01

    The results of a simulation conducted by a research team involving undergraduate and high school students indicate that when data is symmetric the Student's "t" confidence interval for a mean is superior to the studied non-parametric bootstrap confidence intervals. When data is skewed and for sample sizes n greater than or equal to 10,…

  4. Coefficient Omega Bootstrap Confidence Intervals: Nonnormal Distributions

    ERIC Educational Resources Information Center

    Padilla, Miguel A.; Divers, Jasmin

    2013-01-01

    The performance of the normal theory bootstrap (NTB), the percentile bootstrap (PB), and the bias-corrected and accelerated (BCa) bootstrap confidence intervals (CIs) for coefficient omega was assessed through a Monte Carlo simulation under conditions not previously investigated. Of particular interest were nonnormal Likert-type and binary items.…

  5. Coefficient Alpha Bootstrap Confidence Interval under Nonnormality

    ERIC Educational Resources Information Center

    Padilla, Miguel A.; Divers, Jasmin; Newton, Matthew

    2012-01-01

    Three different bootstrap methods for estimating confidence intervals (CIs) for coefficient alpha were investigated. In addition, the bootstrap methods were compared with the most promising coefficient alpha CI estimation methods reported in the literature. The CI methods were assessed through a Monte Carlo simulation utilizing conditions…

  6. Bootstrap Confidence Intervals for Ordinary Least Squares Factor Loadings and Correlations in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.; Luo, Shanhong

    2010-01-01

    This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis. Coverage performances of "SE"-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile…

  7. Reliability of confidence intervals calculated by bootstrap and classical methods using the FIA 1-ha plot design

    Treesearch

    H. T. Schreuder; M. S. Williams

    2000-01-01

    In simulation sampling from forest populations using sample sizes of 20, 40, and 60 plots respectively, confidence intervals based on the bootstrap (accelerated, percentile, and t-distribution based) were calculated and compared with those based on the classical t confidence intervals for mapped populations and subdomains within those populations. A 68.1 ha mapped...

  8. Confidence limit calculation for antidotal potency ratio derived from lethal dose 50

    PubMed Central

    Manage, Ananda; Petrikovics, Ilona

    2013-01-01

    AIM: To describe confidence interval calculation for antidotal potency ratios using the bootstrap method. METHODS: The nonparametric bootstrap method, invented by Efron, can easily be adapted to construct confidence intervals in situations like this. The bootstrap method is a resampling method in which the bootstrap samples are obtained by resampling from the original sample. RESULTS: The described confidence interval calculation using the bootstrap method does not require the sampling distribution of the antidotal potency ratio. This can serve as a substantial help for toxicologists, who are directed to employ the Dixon up-and-down method, with its lower number of animals, to determine lethal dose 50 values for characterizing the investigated toxic molecules and, eventually, the antidotal protection offered by the test antidotal systems. CONCLUSION: The described method can serve as a useful tool in various other applications. The simplicity of the method makes the calculation easy to carry out in most statistical software packages. PMID:25237618

  9. Confidence intervals for distinguishing ordinal and disordinal interactions in multiple regression.

    PubMed

    Lee, Sunbok; Lei, Man-Kit; Brody, Gene H

    2015-06-01

    Distinguishing between ordinal and disordinal interactions in multiple regression is useful in testing many interesting theoretical hypotheses. Because the distinction is made based on the location of the crossover point of 2 simple regression lines, confidence intervals of the crossover point can be used to distinguish ordinal and disordinal interactions. This study examined 2 factors that need to be considered in constructing confidence intervals of the crossover point: (a) the assumption about the sampling distribution of the crossover point, and (b) the possibility of abnormally wide confidence intervals for the crossover point. A Monte Carlo simulation study was conducted to compare 6 different methods for constructing confidence intervals of the crossover point in terms of the coverage rate, the proportion of true values that fall to the left or right of the confidence intervals, and the average width of the confidence intervals. The methods include the reparameterization, delta, Fieller, basic bootstrap, percentile bootstrap, and bias-corrected accelerated bootstrap methods. The results of our Monte Carlo simulation study suggest that statistical inference using confidence intervals to distinguish ordinal and disordinal interactions requires sample sizes of more than 500 to provide confidence intervals narrow enough to identify the location of the crossover point. (c) 2015 APA, all rights reserved.
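
The crossover point itself is just the x-value where the two groups' fitted lines meet, and the percentile bootstrap variant compared in the study can be sketched in a few lines. This is an illustrative Python sketch (function names and the two-separate-regressions parameterization are ours, not the paper's):

```python
import random
import statistics

def fit_line(x, y):
    """Ordinary least squares fit, returning (intercept, slope)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    b = sum((a - mx) * (c - my) for a, c in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return my - b * mx, b

def crossover_ci(x1, y1, x2, y2, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for the x-value where the two groups'
    simple regression lines cross: x* = (a2 - a1) / (b1 - b2)."""
    rng = random.Random(seed)
    xs = []
    for _ in range(n_boot):
        i1 = [rng.randrange(len(x1)) for _ in x1]
        i2 = [rng.randrange(len(x2)) for _ in x2]
        try:
            a1, b1 = fit_line([x1[i] for i in i1], [y1[i] for i in i1])
            a2, b2 = fit_line([x2[i] for i in i2], [y2[i] for i in i2])
            xs.append((a2 - a1) / (b1 - b2))
        except ZeroDivisionError:  # degenerate resample: constant x or equal slopes
            continue
    xs.sort()
    return xs[int(alpha / 2 * len(xs))], xs[int((1 - alpha / 2) * len(xs)) - 1]
```

The basic and BCa bootstrap variants reuse the same replicates but transform the percentiles differently; the reparameterization, delta, and Fieller methods are analytic rather than resampling-based.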

  10. Four Bootstrap Confidence Intervals for the Binomial-Error Model.

    ERIC Educational Resources Information Center

    Lin, Miao-Hsiang; Hsiung, Chao A.

    1992-01-01

    Four bootstrap methods are identified for constructing confidence intervals for the binomial-error model. The extent to which similar results are obtained and the theoretical foundation of each method and its relevance and ranges of modeling the true score uncertainty are discussed. (SLD)

  11. Calculating Confidence Intervals for Regional Economic Impacts of Recreation by Bootstrapping Visitor Expenditures

    Treesearch

    Donald B.K. English

    2000-01-01

    In this paper I use bootstrap procedures to develop confidence intervals for estimates of total industrial output generated per thousand tourist visits. Mean expenditures from replicated visitor expenditure data included weights to correct for response bias. Impacts were estimated with IMPLAN. Ninety percent interval endpoints were 6 to 16 percent above or below the...

  12. Confidence Interval Coverage for Cohen's Effect Size Statistic

    ERIC Educational Resources Information Center

    Algina, James; Keselman, H. J.; Penfield, Randall D.

    2006-01-01

    Kelley compared three methods for setting a confidence interval (CI) around Cohen's standardized mean difference statistic: the noncentral-"t"-based, percentile (PERC) bootstrap, and bias-corrected and accelerated (BCA) bootstrap methods under three conditions of nonnormality, eight cases of sample size, and six cases of population…

  13. Accuracy assessment of percent canopy cover, cover type, and size class

    Treesearch

    H. T. Schreuder; S. Bain; R. C. Czaplewski

    2003-01-01

    Truth for vegetation cover percent and type is obtained from very large-scale photography (VLSP), stand structure as measured by size classes, and vegetation types from a combination of VLSP and ground sampling. We recommend using the Kappa statistic with bootstrap confidence intervals for overall accuracy, and similarly bootstrap confidence intervals for percent...

  14. Confidence intervals for correlations when data are not normal.

    PubMed

    Bishara, Anthony J; Hittner, James B

    2017-02-01

    With nonnormal data, the typical confidence interval of the correlation (Fisher z') may be inaccurate. The literature has been unclear as to which of several alternative methods should be used instead, and how extreme a violation of normality is needed to justify an alternative. Through Monte Carlo simulation, 11 confidence interval methods were compared, including Fisher z', two Spearman rank-order methods, the Box-Cox transformation, rank-based inverse normal (RIN) transformation, and various bootstrap methods. Nonnormality often distorted the Fisher z' confidence interval-for example, leading to a 95 % confidence interval that had actual coverage as low as 68 %. Increasing the sample size sometimes worsened this problem. Inaccurate Fisher z' intervals could be predicted by a sample kurtosis of at least 2, an absolute sample skewness of at least 1, or significant violations of normality hypothesis tests. Only the Spearman rank-order and RIN transformation methods were universally robust to nonnormality. Among the bootstrap methods, an observed imposed bootstrap came closest to accurate coverage, though it often resulted in an overly long interval. The results suggest that sample nonnormality can justify avoidance of the Fisher z' interval in favor of a more robust alternative. R code for the relevant methods is provided in supplementary materials.
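
For concreteness, the two baseline intervals discussed here, the Fisher z' interval and a pairs percentile bootstrap, can be sketched as follows. This is an illustrative Python sketch; the paper's best-performing "observed imposed" bootstrap and the Spearman/RIN alternatives are more involved and are not shown.

```python
import math
import random
import statistics

def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return sxy / (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5

def fisher_z_ci(x, y, z_crit=1.96):
    """Classical Fisher z' interval: transform r, step +/- z*SE, transform back."""
    r = pearson_r(x, y)
    z = math.atanh(r)
    se = 1.0 / math.sqrt(len(x) - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

def percentile_boot_ci(x, y, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap interval: resample (x, y) pairs, recompute r."""
    rng = random.Random(seed)
    n = len(x)
    reps = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        reps.append(pearson_r([x[i] for i in idx], [y[i] for i in idx]))
    reps.sort()
    return reps[int(alpha / 2 * n_boot)], reps[int((1 - alpha / 2) * n_boot) - 1]
```

With nonnormal data the two intervals can disagree noticeably, which is exactly the situation the study quantifies.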

  15. Efficient bootstrap estimates for tail statistics

    NASA Astrophysics Data System (ADS)

    Breivik, Øyvind; Aarnes, Ole Johan

    2017-03-01

    Bootstrap resamples can be used to investigate the tail of empirical distributions as well as return value estimates from the extremal behaviour of the sample. Specifically, the confidence intervals on return value estimates or bounds on in-sample tail statistics can be obtained using bootstrap techniques. However, non-parametric bootstrapping from the entire sample is expensive. It is shown here that it suffices to bootstrap from a small subset consisting of the highest entries in the sequence to make estimates that are essentially identical to bootstraps from the entire sample. Similarly, bootstrap estimates of confidence intervals of threshold return estimates are found to be well approximated by using a subset consisting of the highest entries. This has practical consequences in fields such as meteorology, oceanography and hydrology where return values are calculated from very large gridded model integrations spanning decades at high temporal resolution or from large ensembles of independent and identically distributed model fields. In such cases the computational savings are substantial.
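
One way to realize the subset idea in code, under our own assumption about the mechanics: each replicate first draws a Binomial(n, k/n) count for how many points of a full resample would fall in the tail, then resamples that many values from the k largest entries only. Names and defaults below are illustrative, not the authors'.

```python
import random

def tail_stat(values, m=10):
    """Example tail statistic: mean of the m largest entries."""
    return sum(sorted(values)[-m:]) / m

def subset_tail_bootstrap_ci(data, k=50, m=10, n_boot=2000, alpha=0.05, seed=0):
    """Bootstrap a tail statistic using only the k largest entries.
    Each replicate draws a Binomial(n, k/n) count (how many points of a
    full n-point resample would land in the tail) and resamples that many
    values from the top-k subset, so the full sample is never touched."""
    rng = random.Random(seed)
    n = len(data)
    top = sorted(data)[-k:]
    reps = []
    for _ in range(n_boot):
        hits = sum(rng.random() < k / n for _ in range(n))  # Binomial(n, k/n)
        sample = [top[rng.randrange(k)] for _ in range(hits)]
        if len(sample) >= m:
            reps.append(tail_stat(sample, m))
    reps.sort()
    return reps[int(alpha / 2 * len(reps))], reps[int((1 - alpha / 2) * len(reps)) - 1]
```

Because each replicate touches only about k values instead of n, the cost per replicate drops by roughly a factor of n/k, which is the practical point for the large gridded datasets mentioned above.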

  16. What Teachers Should Know About the Bootstrap: Resampling in the Undergraduate Statistics Curriculum

    PubMed Central

    Hesterberg, Tim C.

    2015-01-01

    Bootstrapping has enormous potential in statistics education and practice, but there are subtle issues and ways to go wrong. For example, the common combination of nonparametric bootstrapping and bootstrap percentile confidence intervals is less accurate than using t-intervals for small samples, though more accurate for larger samples. My goals in this article are to provide a deeper understanding of bootstrap methods—how they work, when they work or not, and which methods work better—and to highlight pedagogical issues. Supplementary materials for this article are available online. [Received December 2014. Revised August 2015] PMID:27019512

  17. Applying Bootstrap Resampling to Compute Confidence Intervals for Various Statistics with R

    ERIC Educational Resources Information Center

    Dogan, C. Deha

    2017-01-01

    Background: Most of the studies in academic journals use p values to represent statistical significance. However, this is not a good indicator of practical significance. Although confidence intervals provide information about the precision of point estimation, they are, unfortunately, rarely used. The infrequent use of confidence intervals might…

  18. Confidence Intervals for Effect Sizes: Applying Bootstrap Resampling

    ERIC Educational Resources Information Center

    Banjanovic, Erin S.; Osborne, Jason W.

    2016-01-01

    Confidence intervals for effect sizes (CIES) provide readers with an estimate of the strength of a reported statistic as well as the relative precision of the point estimate. These statistics offer more information and context than null hypothesis statistic testing. Although confidence intervals have been recommended by scholars for many years,…

  19. Abstract: Inference and Interval Estimation for Indirect Effects With Latent Variable Models.

    PubMed

    Falk, Carl F; Biesanz, Jeremy C

    2011-11-30

    Models specifying indirect effects (or mediation) and structural equation modeling are both popular in the social sciences. Yet relatively little research has compared methods that test for indirect effects among latent variables and provided precise estimates of the effectiveness of different methods. This simulation study provides an extensive comparison of methods for constructing confidence intervals and for making inferences about indirect effects with latent variables. We compared the percentile (PC) bootstrap, bias-corrected (BC) bootstrap, bias-corrected accelerated (BCa) bootstrap, likelihood-based confidence intervals (Neale & Miller, 1997), partial posterior predictive (Biesanz, Falk, and Savalei, 2010), and joint significance tests based on Wald tests or likelihood ratio tests. All models included three reflective latent variables representing the independent, dependent, and mediating variables. The design included the following fully crossed conditions: (a) sample size: 100, 200, and 500; (b) number of indicators per latent variable: 3 versus 5; (c) reliability per set of indicators: .7 versus .9; (d) and 16 different path combinations for the indirect effect (α = 0, .14, .39, or .59; and β = 0, .14, .39, or .59). Simulations were performed using a WestGrid cluster of 1680 3.06GHz Intel Xeon processors running R and OpenMx. Results based on 1,000 replications per cell and 2,000 resamples per bootstrap method indicated that the BC and BCa bootstrap methods have inflated Type I error rates. Likelihood-based confidence intervals and the PC bootstrap emerged as methods that adequately control Type I error and have good coverage rates.

  20. Empirical likelihood-based confidence intervals for mean medical cost with censored data.

    PubMed

    Jeyarajah, Jenny; Qin, Gengsheng

    2017-11-10

    In this paper, we propose empirical likelihood methods based on influence function and jackknife techniques for constructing confidence intervals for mean medical cost with censored data. We conduct a simulation study to compare the coverage probabilities and interval lengths of our proposed confidence intervals with that of the existing normal approximation-based confidence intervals and bootstrap confidence intervals. The proposed methods have better finite-sample performances than existing methods. Finally, we illustrate our proposed methods with a relevant example. Copyright © 2017 John Wiley & Sons, Ltd.

  1. Graphing within-subjects confidence intervals using SPSS and S-Plus.

    PubMed

    Wright, Daniel B

    2007-02-01

    Within-subjects confidence intervals are often appropriate to report and to display. Loftus and Masson (1994) have reported methods to calculate these, and their use is becoming common. In the present article, procedures for calculating within-subjects confidence intervals in SPSS and S-Plus are presented (an R version is on the accompanying Web site). The procedure in S-Plus allows the user to report the bias corrected and adjusted bootstrap confidence intervals as well as the standard confidence intervals based on traditional methods. The presented code can be easily altered to fit the individual user's needs.

  2. Generalized Bootstrap Method for Assessment of Uncertainty in Semivariogram Inference

    USGS Publications Warehouse

    Olea, R.A.; Pardo-Iguzquiza, E.

    2011-01-01

    The semivariogram and its related function, the covariance, play a central role in classical geostatistics for modeling the average continuity of spatially correlated attributes. Whereas all methods are formulated in terms of the true semivariogram, in practice what can be used are estimated semivariograms and models based on samples. A generalized form of the bootstrap method to properly model spatially correlated data is used to advance knowledge about the reliability of empirical semivariograms and semivariogram models based on a single sample. Among several methods available to generate spatially correlated resamples, we selected a method based on the LU decomposition and used several examples to illustrate the approach. The first is a synthetic, isotropic, exhaustive sample following a normal distribution; the second is also synthetic but follows a non-Gaussian random field; and the third is an empirical sample of actual raingauge measurements. Results show wider confidence intervals than those found previously by others with inadequate application of the bootstrap. Also, even for the Gaussian example, distributions for estimated semivariogram values and model parameters are positively skewed. In this sense, bootstrap percentile confidence intervals, which are not centered around the empirical semivariogram and do not require distributional assumptions for their construction, provide an achieved coverage similar to the nominal coverage. The latter cannot be achieved by symmetrical confidence intervals based on the standard error, regardless of whether the standard error is estimated from a parametric equation or from the bootstrap. © 2010 International Association for Mathematical Geosciences.
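
The resampling engine behind this approach can be sketched generically. The authors use an LU decomposition of the modeled covariance; the Cholesky factorization below is the symmetric special case (C = L L^T), and turning iid standard-normal draws w into z* = L w yields resamples with the modeled spatial covariance. This is a generic illustration with a hypothetical exponential covariance, not the authors' code.

```python
import math
import random

def cholesky(c):
    """Lower-triangular L with L L^T = c (c symmetric positive definite)."""
    n = len(c)
    l = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(l[i][k] * l[j][k] for k in range(j))
            if i == j:
                l[i][j] = math.sqrt(c[i][i] - s)
            else:
                l[i][j] = (c[i][j] - s) / l[j][j]
    return l

def correlated_resample(l_factor, rng):
    """One spatially correlated resample z* = L w, with w iid standard normal,
    so that Cov(z*) equals the modeled covariance L L^T."""
    n = len(l_factor)
    w = [rng.gauss(0.0, 1.0) for _ in range(n)]
    return [sum(l_factor[i][k] * w[k] for k in range(i + 1)) for i in range(n)]
```

Each correlated resample would then be fed to the empirical semivariogram estimator, and percentile intervals are read off the distribution of the resulting estimates.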

  3. Using a Nonparametric Bootstrap to Obtain a Confidence Interval for Pearson's "r" with Cluster Randomized Data: A Case Study

    ERIC Educational Resources Information Center

    Wagstaff, David A.; Elek, Elvira; Kulis, Stephen; Marsiglia, Flavio

    2009-01-01

    A nonparametric bootstrap was used to obtain an interval estimate of Pearson's "r," and test the null hypothesis that there was no association between 5th grade students' positive substance use expectancies and their intentions to not use substances. The students were participating in a substance use prevention program in which the unit of…

  4. Quantifying uncertainty on sediment loads using bootstrap confidence intervals

    NASA Astrophysics Data System (ADS)

    Slaets, Johanna I. F.; Piepho, Hans-Peter; Schmitter, Petra; Hilger, Thomas; Cadisch, Georg

    2017-01-01

    Load estimates are more informative than constituent concentrations alone, as they allow quantification of on- and off-site impacts of environmental processes concerning pollutants, nutrients and sediment, such as soil fertility loss, reservoir sedimentation and irrigation channel siltation. While statistical models used to predict constituent concentrations have been developed considerably over the last few years, measures of uncertainty on constituent loads are rarely reported. Loads are the product of two predictions, constituent concentration and discharge, integrated over a time period, which does not make it straightforward to produce a standard error or a confidence interval. In this paper, a linear mixed model is used to estimate sediment concentrations. A bootstrap method is then developed that accounts for the uncertainty in the concentration and discharge predictions, allowing temporal correlation in the constituent data, and can be used when data transformations are required. The method was tested for a small watershed in Northwest Vietnam for the period 2010-2011. The results showed that confidence intervals were asymmetric, with the highest uncertainty in the upper limit, and that a load of 6262 Mg year-1 had a 95 % confidence interval of (4331, 12 267) in 2010 and a load of 5543 Mg year-1 an interval of (3593, 8975) in 2011. Additionally, the approach demonstrated that direct estimates from the data were biased downwards compared to bootstrap median estimates. These results imply that constituent loads predicted from regression-type water quality models could frequently be underestimating sediment yields and their environmental impact.

  5. Bootstrap confidence intervals and bias correction in the estimation of HIV incidence from surveillance data with testing for recent infection.

    PubMed

    Carnegie, Nicole Bohme

    2011-04-15

    The incidence of new infections is a key measure of the status of the HIV epidemic, but accurate measurement of incidence is often constrained by limited data. Karon et al. (Statist. Med. 2008; 27:4617–4633) developed a model to estimate the incidence of HIV infection from surveillance data with biologic testing for recent infection for newly diagnosed cases. This method has been implemented by public health departments across the United States and is behind the new national incidence estimates, which are about 40 per cent higher than previous estimates. We show that the delta method approximation given for the variance of the estimator is incomplete, leading to an inflated variance estimate. This contributes to the generation of overly conservative confidence intervals, potentially obscuring important differences between populations. We demonstrate via simulation that an innovative model-based bootstrap method using the specified model for the infection and surveillance process improves confidence interval coverage and adjusts for the bias in the point estimate. Confidence interval coverage is about 94–97 per cent after correction, compared with 96–99 per cent before. The simulated bias in the estimate of incidence ranges from −6.3 to +14.6 per cent under the original model but is consistently under 1 per cent after correction by the model-based bootstrap. In an application to data from King County, Washington in 2007 we observe correction of 7.2 per cent relative bias in the incidence estimate and a 66 per cent reduction in the width of the 95 per cent confidence interval using this method. We provide open-source software to implement the method that can also be extended for alternate models.

  6. Bootstrap investigation of the stability of a Cox regression model.

    PubMed

    Altman, D G; Andersen, P K

    1989-07-01

    We describe a bootstrap investigation of the stability of a Cox proportional hazards regression model resulting from the analysis of a clinical trial of azathioprine versus placebo in patients with primary biliary cirrhosis. We have considered stability to refer both to the choice of variables included in the model and, more importantly, to the predictive ability of the model. In stepwise Cox regression analyses of 100 bootstrap samples using 17 candidate variables, the most frequently selected variables were those selected in the original analysis, and no other important variable was identified. Thus there was no reason to doubt the model obtained in the original analysis. For each patient in the trial, bootstrap confidence intervals were constructed for the estimated probability of surviving two years. It is shown graphically that these intervals are markedly wider than those obtained from the original model.

  7. A Comparison of Single Sample and Bootstrap Methods to Assess Mediation in Cluster Randomized Trials

    ERIC Educational Resources Information Center

    Pituch, Keenan A.; Stapleton, Laura M.; Kang, Joo Youn

    2006-01-01

    A Monte Carlo study examined the statistical performance of single sample and bootstrap methods that can be used to test and form confidence interval estimates of indirect effects in two cluster randomized experimental designs. The designs were similar in that they featured random assignment of clusters to one of two treatment conditions and…

  8. An inferential study of the phenotype for the chromosome 15q24 microdeletion syndrome: a bootstrap analysis

    PubMed Central

    Ramírez-Prado, Dolores; Cortés, Ernesto; Aguilar-Segura, María Soledad; Gil-Guillén, Vicente Francisco

    2016-01-01

    In January 2012, a review of the cases of chromosome 15q24 microdeletion syndrome was published. However, this study did not include inferential statistics. The aims of the present study were to update the literature search and calculate confidence intervals for the prevalence of each phenotype using bootstrap methodology. Published case reports of patients with the syndrome that included detailed information about breakpoints and phenotype were sought and 36 were included. Deletions in megabase (Mb) pairs were determined to calculate the size of the interstitial deletion of the phenotypes studied in 2012. To determine confidence intervals for the prevalence of the phenotype and the interstitial loss, we used bootstrap methodology. Using the bootstrap percentile method, we found wide variability in the prevalence of the different phenotypes (3–100%). The mean interstitial deletion size was 2.72 Mb (95% CI [2.35–3.10 Mb]). In comparison with the 2012 review, our work, which expanded the literature search by 45 months, found differences in the prevalence of 17% of the phenotypes, indicating that more studies are needed to analyze this rare disease. PMID:26925314
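
The bootstrap percentile interval for a phenotype prevalence reduces to resampling 0/1 case indicators and reading off the empirical quantiles. A minimal Python sketch with illustrative defaults (not the authors' code):

```python
import random

def prevalence_ci(cases, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for a prevalence, given 0/1 case indicators."""
    rng = random.Random(seed)
    n = len(cases)
    # Each replicate: resample n indicators with replacement, take the mean.
    reps = sorted(
        sum(cases[rng.randrange(n)] for _ in range(n)) / n
        for _ in range(n_boot)
    )
    return reps[int(alpha / 2 * n_boot)], reps[int((1 - alpha / 2) * n_boot) - 1]
```

For example, with a hypothetical 12 affected patients out of 36, prevalence_ci([1] * 12 + [0] * 24) returns an interval bracketing 0.33.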

  9. Publication Bias in Meta-Analysis: Confidence Intervals for Rosenthal's Fail-Safe Number.

    PubMed

    Fragkos, Konstantinos C; Tsagris, Michail; Frangos, Christos C

    2014-01-01

    The purpose of the present paper is to assess the efficacy of confidence intervals for Rosenthal's fail-safe number. Although Rosenthal's estimator is highly used by researchers, its statistical properties are largely unexplored. First of all, we developed statistical theory which allowed us to produce confidence intervals for Rosenthal's fail-safe number. This was produced by discerning whether the number of studies analysed in a meta-analysis is fixed or random. Each case produces different variance estimators. For a given number of studies and a given distribution, we provided five variance estimators. Confidence intervals are examined with a normal approximation and a nonparametric bootstrap. The accuracy of the different confidence interval estimates was then tested by methods of simulation under different distributional assumptions. The half normal distribution variance estimator has the best probability coverage. Finally, we provide a table of lower confidence intervals for Rosenthal's estimator.

  11. Small sample mediation testing: misplaced confidence in bootstrapped confidence intervals.

    PubMed

    Koopman, Joel; Howe, Michael; Hollenbeck, John R; Sin, Hock-Peng

    2015-01-01

    Bootstrapping is an analytical tool commonly used in psychology to test the statistical significance of the indirect effect in mediation models. Bootstrapping proponents have particularly advocated for its use for samples of 20-80 cases. This advocacy has been heeded, especially in the Journal of Applied Psychology, as researchers are increasingly utilizing bootstrapping to test mediation with samples in this range. We discuss reasons to be concerned with this escalation, and in a simulation study focused specifically on this range of sample sizes, we demonstrate not only that bootstrapping has insufficient statistical power to provide a rigorous hypothesis test in most conditions but also that bootstrapping has a tendency to exhibit an inflated Type I error rate. We then extend our simulations to investigate an alternative empirical resampling method as well as a Bayesian approach and demonstrate that they exhibit comparable statistical power to bootstrapping in small samples without the associated inflated Type I error. Implications for researchers testing mediation hypotheses in small samples are presented. For researchers wishing to use these methods in their own research, we have provided R syntax in the online supplemental materials. (c) 2015 APA, all rights reserved.
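
For reference, the procedure being stress-tested, a percentile bootstrap of the indirect effect a*b, looks like this in outline (illustrative Python, not the authors' R syntax; the partial slope of y on m controlling for x is obtained by residualizing m on x, per Frisch-Waugh):

```python
import random
import statistics

def slope(x, y):
    """OLS slope of y on x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

def indirect_ci(x, m, y, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the indirect effect a*b, where a is the
    slope of m on x and b is the partial slope of y on m controlling for x."""
    rng = random.Random(seed)
    n = len(x)
    reps = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        bx = [x[i] for i in idx]
        bm = [m[i] for i in idx]
        by = [y[i] for i in idx]
        a = slope(bx, bm)
        # Residualize m on x; regressing y on these residuals gives the
        # partial slope b (Frisch-Waugh).
        mmx, mmm = statistics.fmean(bx), statistics.fmean(bm)
        resid_m = [(mi - mmm) - a * (xi - mmx) for mi, xi in zip(bm, bx)]
        reps.append(a * slope(resid_m, by))
    reps.sort()
    return reps[int(alpha / 2 * len(reps))], reps[int((1 - alpha / 2) * len(reps)) - 1]
```

The paper's caution applies to exactly this procedure at n between 20 and 80: low power combined with an inflated Type I error rate.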

  12. Reanalysis of cancer mortality in Japanese A-bomb survivors exposed to low doses of radiation: bootstrap and simulation methods

    PubMed Central

    2009-01-01

Background The International Commission on Radiological Protection (ICRP) recommended annual occupational dose limit is 20 mSv. Cancer mortality in Japanese A-bomb survivors exposed to less than 20 mSv external radiation in 1945 was analysed previously, using a latency model with non-linear dose response. Questions were raised regarding statistical inference with this model. Methods Cancers with over 100 deaths in the 0-20 mSv subcohort of the 1950-1990 Life Span Study are analysed with Poisson regression models incorporating latency, allowing linear and non-linear dose response. Bootstrap percentile and bias-corrected accelerated (BCa) methods and simulation of the likelihood ratio test lead to confidence intervals for excess relative risk (ERR) and tests against the linear model. Results The linear model shows significant, large, positive values of ERR for liver and urinary cancers at latencies of 37-43 years. Dose response below 20 mSv is strongly non-linear at the optimal latencies for the stomach (11.89 years), liver (36.9), lung (13.6), leukaemia (23.66), and pancreas (11.86) and across broad latency ranges. Confidence intervals for ERR are comparable using bootstrap and likelihood ratio test methods, and BCa 95% confidence intervals are strictly positive across latency ranges for all 5 cancers. Similar risk estimates for 10 mSv (lagged dose) are obtained from the 0-20 mSv and 5-500 mSv data for the stomach, liver, lung and leukaemia. Dose response for the latter 3 cancers is significantly non-linear in the 5-500 mSv range. Conclusion Liver and urinary cancer mortality risk is significantly raised using a latency model with linear dose response. A non-linear model is strongly superior for the stomach, liver, lung, pancreas and leukaemia. Bootstrap and likelihood-based confidence intervals are broadly comparable, and ERR is strictly positive by bootstrap methods for all 5 cancers. Except for the pancreas, similar estimates of latency and risk from 10 mSv are obtained from the 0-20 mSv and 5-500 mSv subcohorts. Large and significant cancer risks for Japanese survivors exposed to less than 20 mSv external radiation from the atomic bombs in 1945 cast doubt on the ICRP recommended annual occupational dose limit. PMID:20003238

  13. Asymptotic confidence intervals for the Pearson correlation via skewness and kurtosis.

    PubMed

    Bishara, Anthony J; Li, Jiexiang; Nash, Thomas

    2018-02-01

    When bivariate normality is violated, the default confidence interval of the Pearson correlation can be inaccurate. Two new methods were developed based on the asymptotic sampling distribution of Fisher's z' under the general case where bivariate normality need not be assumed. In Monte Carlo simulations, the most successful of these methods relied on the (Vale & Maurelli, 1983, Psychometrika, 48, 465) family to approximate a distribution via the marginal skewness and kurtosis of the sample data. In Simulation 1, this method provided more accurate confidence intervals of the correlation in non-normal data, at least as compared to no adjustment of the Fisher z' interval, or to adjustment via the sample joint moments. In Simulation 2, this approximate distribution method performed favourably relative to common non-parametric bootstrap methods, but its performance was mixed relative to an observed imposed bootstrap and two other robust methods (PM1 and HC4). No method was completely satisfactory. An advantage of the approximate distribution method, though, is that it can be implemented even without access to raw data if sample skewness and kurtosis are reported, making the method particularly useful for meta-analysis. Supporting information includes R code. © 2017 The British Psychological Society.
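For reference, the "default" interval the abstract refers to is the Fisher z' interval, which assumes bivariate normality. A minimal sketch:

```python
import math
from statistics import NormalDist

def pearson_r(x, y):
    """Sample Pearson correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x)
                    * sum((b - my) ** 2 for b in y))
    return num / den

def fisher_ci(r, n, level=0.95):
    """Default Fisher z' interval: transform r, add a normal-theory margin,
    and back-transform.  Valid under bivariate normality, which is exactly
    the assumption the paper's adjustments relax."""
    z = math.atanh(r)
    se = 1 / math.sqrt(n - 3)
    zcrit = NormalDist().inv_cdf((1 + level) / 2)
    return math.tanh(z - zcrit * se), math.tanh(z + zcrit * se)

lo, hi = fisher_ci(0.5, 50)
```

Note that, like the approximate distribution method the authors recommend for meta-analysis, this interval needs only r and n, not the raw data; the difference is that their method additionally adjusts the standard error using sample skewness and kurtosis.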

  14. Parametric methods outperformed non-parametric methods in comparisons of discrete numerical variables.

    PubMed

    Fagerland, Morten W; Sandvik, Leiv; Mowinckel, Petter

    2011-04-13

The number of events per individual is a widely reported variable in medical research papers. Such variables are the most common representation of the general variable type called discrete numerical. There is currently no consensus on how to compare and present such variables, and recommendations are lacking. The objective of this paper is to present recommendations for analysis and presentation of results for discrete numerical variables. Two simulation studies were used to investigate the performance of hypothesis tests and confidence interval methods for variables with outcomes {0, 1, 2}, {0, 1, 2, 3}, {0, 1, 2, 3, 4}, and {0, 1, 2, 3, 4, 5}, using the difference between the means as an effect measure. The Welch U test (the T test with adjustment for unequal variances) and its associated confidence interval performed well in almost all situations considered. The Brunner-Munzel test also performed well, except for small sample sizes (10 in each group). The ordinary T test, the Wilcoxon-Mann-Whitney test, the percentile bootstrap interval, and the bootstrap-t interval did not perform satisfactorily. The difference between the means is an appropriate effect measure for comparing two independent discrete numerical variables that have both lower and upper bounds. To analyze this problem, we encourage more frequent use of parametric hypothesis tests and confidence intervals.
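The recommended Welch-type interval for a difference in means can be sketched as follows. For brevity this version uses a normal critical value rather than the Student-t quantile with Satterthwaite degrees of freedom used by the Welch U test proper, and the count data are invented for illustration.

```python
import math
from statistics import NormalDist, mean, variance

def welch_ci(x, y, level=0.95):
    """Welch-type interval for a difference in means with unequal variances.
    Simplified sketch: normal critical value instead of Student-t with
    Satterthwaite degrees of freedom."""
    d = mean(x) - mean(y)
    se = math.sqrt(variance(x) / len(x) + variance(y) / len(y))
    z = NormalDist().inv_cdf((1 + level) / 2)
    return d - z * se, d + z * se

# Invented counts-per-individual data with outcomes {0, 1, 2, 3}
x = [0, 1, 1, 2, 2, 2, 3, 1, 2, 0, 1, 2, 3, 2]
y = [0, 0, 1, 1, 2, 1, 0, 1, 2, 0, 1, 1, 0, 2]
lo, hi = welch_ci(x, y)
```

The key design choice the paper endorses is estimating each group's variance separately rather than pooling, which keeps the interval honest when the two groups have unequal spread.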

  15. Tablet potency of Tianeptine in coated tablets by near infrared spectroscopy: model optimisation, calibration transfer and confidence intervals.

    PubMed

    Boiret, Mathieu; Meunier, Loïc; Ginot, Yves-Michel

    2011-02-20

A near infrared (NIR) method was developed for determination of tablet potency of an active pharmaceutical ingredient (API) in a complex coated tablet matrix. The calibration set contained samples from laboratory and production scale batches. The reference values were obtained by high performance liquid chromatography (HPLC), and partial least squares (PLS) regression was used to establish a model. The model was challenged by calculating tablet potency of two external test sets. Root mean square errors of prediction were equal to 2.0% and 2.7%, respectively. To use this model with a second spectrometer in the production field, a calibration transfer method called piecewise direct standardisation (PDS) was used. After the transfer, the root mean square error of prediction of the first test set was 2.4%, compared to 4.0% without transferring the spectra. A statistical technique using a bootstrap of PLS residuals was used to estimate confidence intervals of tablet potency calculations. This method requires an optimised PLS model, selection of the bootstrap number, and determination of the risk. In the case of a chemical analysis, the tablet potency value will be included within the confidence interval calculated by the bootstrap method. An easy-to-use graphical interface was developed to easily determine if the predictions, surrounded by minimum and maximum values, are within the specifications defined by the regulatory organisation. Copyright © 2010 Elsevier B.V. All rights reserved.

  16. Statistical inference based on the nonparametric maximum likelihood estimator under double-truncation.

    PubMed

    Emura, Takeshi; Konno, Yoshihiko; Michimae, Hirofumi

    2015-07-01

Doubly truncated data consist of samples whose observed values fall between the left- and right-truncation limits. With such samples, the distribution function of interest is estimated using the nonparametric maximum likelihood estimator (NPMLE) that is obtained through a self-consistency algorithm. Owing to the complicated asymptotic distribution of the NPMLE, the bootstrap method has been suggested for statistical inference. This paper proposes a closed-form estimator for the asymptotic covariance function of the NPMLE, which is a computationally attractive alternative to bootstrapping. Furthermore, we develop various statistical inference procedures, such as confidence intervals, goodness-of-fit tests, and confidence bands, to demonstrate the usefulness of the proposed covariance estimator. Simulations are performed to compare the proposed method with both the bootstrap and jackknife methods. The methods are illustrated using a childhood cancer dataset.

  17. Assessing Mediational Models: Testing and Interval Estimation for Indirect Effects.

    PubMed

    Biesanz, Jeremy C; Falk, Carl F; Savalei, Victoria

    2010-08-06

Theoretical models specifying indirect or mediated effects are common in the social sciences. An indirect effect exists when an independent variable's influence on the dependent variable is mediated through an intervening variable. Classic approaches to assessing such mediational hypotheses (Baron & Kenny, 1986; Sobel, 1982) have in recent years been supplemented by computationally intensive methods such as bootstrapping, the distribution of the product methods, and hierarchical Bayesian Markov chain Monte Carlo (MCMC) methods. These different approaches for assessing mediation are illustrated using data from Dunn, Biesanz, Human, and Finn (2007). However, little is known about how these methods perform relative to each other, particularly in more challenging situations, such as with data that are incomplete and/or nonnormal. This article presents an extensive Monte Carlo simulation evaluating a host of approaches for assessing mediation. We examine Type I error rates, power, and coverage. We study normal and nonnormal data as well as complete and incomplete data. In addition, we adapt a method, recently proposed in the statistical literature, that does not rely on confidence intervals (CIs) to test the null hypothesis of no indirect effect. The results suggest that the new inferential method, the partial posterior p value, slightly outperforms existing ones in terms of maintaining Type I error rates while maximizing power, especially with incomplete data. Among confidence interval approaches, the bias-corrected accelerated (BCa) bootstrapping approach often has inflated Type I error rates and inconsistent coverage and is not recommended. In contrast, the bootstrapped percentile confidence interval and the hierarchical Bayesian MCMC method perform best overall, maintaining Type I error rates, exhibiting reasonable power, and producing stable and accurate coverage rates.

  18. Simplified Estimation and Testing in Unbalanced Repeated Measures Designs.

    PubMed

    Spiess, Martin; Jordan, Pascal; Wendt, Mike

    2018-05-07

    In this paper we propose a simple estimator for unbalanced repeated measures design models where each unit is observed at least once in each cell of the experimental design. The estimator does not require a model of the error covariance structure. Thus, circularity of the error covariance matrix and estimation of correlation parameters and variances are not necessary. Together with a weak assumption about the reason for the varying number of observations, the proposed estimator and its variance estimator are unbiased. As an alternative to confidence intervals based on the normality assumption, a bias-corrected and accelerated bootstrap technique is considered. We also propose the naive percentile bootstrap for Wald-type tests where the standard Wald test may break down when the number of observations is small relative to the number of parameters to be estimated. In a simulation study we illustrate the properties of the estimator and the bootstrap techniques to calculate confidence intervals and conduct hypothesis tests in small and large samples under normality and non-normality of the errors. The results imply that the simple estimator is only slightly less efficient than an estimator that correctly assumes a block structure of the error correlation matrix, a special case of which is an equi-correlation matrix. Application of the estimator and the bootstrap technique is illustrated using data from a task switch experiment based on an experimental within design with 32 cells and 33 participants.

  19. Facebook and Twitter vaccine sentiment in response to measles outbreaks.

    PubMed

    Deiner, Michael S; Fathy, Cherie; Kim, Jessica; Niemeyer, Katherine; Ramirez, David; Ackley, Sarah F; Liu, Fengchen; Lietman, Thomas M; Porco, Travis C

    2017-11-01

    Social media posts regarding measles vaccination were classified as pro-vaccination, expressing vaccine hesitancy, uncertain, or irrelevant. Spearman correlations with Centers for Disease Control and Prevention-reported measles cases and differenced smoothed cumulative case counts over this period were reported (using time series bootstrap confidence intervals). A total of 58,078 Facebook posts and 82,993 tweets were identified from 4 January 2009 to 27 August 2016. Pro-vaccination posts were correlated with the US weekly reported cases (Facebook: Spearman correlation 0.22 (95% confidence interval: 0.09 to 0.34), Twitter: 0.21 (95% confidence interval: 0.06 to 0.34)). Vaccine-hesitant posts, however, were uncorrelated with measles cases in the United States (Facebook: 0.01 (95% confidence interval: -0.13 to 0.14), Twitter: 0.0011 (95% confidence interval: -0.12 to 0.12)). These findings may result from more consistent social media engagement by individuals expressing vaccine hesitancy, contrasted with media- or event-driven episodic interest on the part of individuals favoring current policy.
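The time series bootstrap mentioned here can be illustrated with a pairwise moving-block bootstrap for a rank correlation: aligned blocks of both series are resampled together so that serial dependence within each series is preserved. The block length, simulated series, and tie handling below are illustrative assumptions, not the authors' choices.

```python
import random

def rank(v):
    """Ranks 0..n-1; ties broken by original order (fine for a sketch)."""
    order = sorted(range(len(v)), key=v.__getitem__)
    r = [0] * len(v)
    for rk, i in enumerate(order):
        r[i] = rk
    return r

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks.  Both rank
    vectors are permutations of 0..n-1, so a single variance term suffices."""
    rx, ry = rank(x), rank(y)
    c = (len(x) - 1) / 2
    num = sum((a - c) * (b - c) for a, b in zip(rx, ry))
    den = sum((a - c) ** 2 for a in rx)
    return num / den

def block_boot_ci(x, y, block=8, n_boot=1000, level=0.95, seed=3):
    """Pairwise moving-block bootstrap: resample aligned blocks of both
    series to preserve serial correlation; return percentile limits."""
    rng = random.Random(seed)
    n = len(x)
    starts = range(n - block + 1)
    stats = []
    for _ in range(n_boot):
        idx = []
        while len(idx) < n:
            s = rng.choice(starts)
            idx.extend(range(s, s + block))
        idx = idx[:n]
        stats.append(spearman([x[i] for i in idx], [y[i] for i in idx]))
    stats.sort()
    return (stats[int((1 - level) / 2 * n_boot)],
            stats[int((1 + level) / 2 * n_boot) - 1])

# Simulated AR(1) "weekly case" series and a noisy response tracking it
rng = random.Random(0)
x, prev = [], 0.0
for _ in range(120):
    prev = 0.6 * prev + rng.gauss(0, 1)
    x.append(prev)
y = [0.5 * xi + rng.gauss(0, 1) for xi in x]
r = spearman(x, y)
lo, hi = block_boot_ci(x, y)
```

An ordinary case-resampling bootstrap would treat the weeks as exchangeable and understate the interval width for autocorrelated series; resampling blocks is the standard remedy.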

  20. A comparison of confidence interval methods for the intraclass correlation coefficient in community-based cluster randomization trials with a binary outcome.

    PubMed

    Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan

    2016-04-01

    Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. 
However, confidence intervals constructed using the new approach combined with Smith's method provide nominal or close to nominal coverage when the intraclass correlation coefficient is small (<0.05), as is the case in most community intervention trials. This study concludes that when a binary outcome variable is measured in a small number of large clusters, confidence intervals for the intraclass correlation coefficient may be constructed by dividing existing clusters into sub-clusters (e.g. groups of 5) and using Smith's method. The resulting confidence intervals provide nominal or close to nominal coverage across a wide range of parameters when the intraclass correlation coefficient is small (<0.05). Application of this method should provide investigators with a better understanding of the uncertainty associated with a point estimator of the intraclass correlation coefficient used for determining the sample size needed for a newly designed community-based trial. © The Author(s) 2015.

  1. Four applications of permutation methods to testing a single-mediator model.

    PubMed

    Taylor, Aaron B; MacKinnon, David P

    2012-09-01

    Four applications of permutation tests to the single-mediator model are described and evaluated in this study. Permutation tests work by rearranging data in many possible ways in order to estimate the sampling distribution for the test statistic. The four applications to mediation evaluated here are the permutation test of ab, the permutation joint significance test, and the noniterative and iterative permutation confidence intervals for ab. A Monte Carlo simulation study was used to compare these four tests with the four best available tests for mediation found in previous research: the joint significance test, the distribution of the product test, and the percentile and bias-corrected bootstrap tests. We compared the different methods on Type I error, power, and confidence interval coverage. The noniterative permutation confidence interval for ab was the best performer among the new methods. It successfully controlled Type I error, had power nearly as good as the most powerful existing methods, and had better coverage than any existing method. The iterative permutation confidence interval for ab had lower power than do some existing methods, but it performed better than any other method in terms of coverage. The permutation confidence interval methods are recommended when estimating a confidence interval is a primary concern. SPSS and SAS macros that estimate these confidence intervals are provided.
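The first of the four applications can be sketched by permuting X, which breaks the X -> M path and hence the indirect effect under the null. The sketch below uses a simple (non-partial) slope for b and simulated data, so it only gestures at the permutation test of ab; the joint significance test and the iterative confidence interval constructions are not reproduced.

```python
import random

def slope(x, y):
    """OLS slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def perm_test_ab(x, m, y, n_perm=2000, seed=11):
    """Permutation test of ab: shuffling X breaks the X -> M path, so the
    permuted a*b values approximate the null distribution.  Here b is the
    simple slope of Y on M (a simplification of the mediation model)."""
    rng = random.Random(seed)
    b = slope(m, y)
    obs = slope(x, m) * b
    xs = list(x)
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(xs)
        if abs(slope(xs, m) * b) >= abs(obs):
            exceed += 1
    return obs, (exceed + 1) / (n_perm + 1)

rng = random.Random(0)
n = 80
x = [rng.gauss(0, 1) for _ in range(n)]
m = [0.6 * xi + rng.gauss(0, 1) for xi in x]
y = [0.6 * mi + rng.gauss(0, 1) for mi in m]
obs, p = perm_test_ab(x, m, y)
```

The noniterative permutation confidence interval the authors recommend inverts a test like this one; the add-one p-value correction above is the usual way to avoid reporting exactly zero.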

  2. Statistical inferences with jointly type-II censored samples from two Pareto distributions

    NASA Astrophysics Data System (ADS)

    Abu-Zinadah, Hanaa H.

    2017-08-01

In several industries a product may come from more than one production line, which calls for comparative life tests. Such tests require sampling from the different production lines, giving rise to joint censoring schemes. In this article we consider the Pareto lifetime distribution under a jointly type-II censoring scheme. The maximum likelihood estimators (MLEs) and the corresponding approximate confidence intervals, as well as bootstrap confidence intervals of the model parameters, are obtained. Bayesian point estimates and credible intervals of the model parameters are also presented. A lifetime data set is analyzed for illustrative purposes, and Monte Carlo simulation results are presented to assess the performance of the proposed methods.
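As a toy version of the bootstrap step, the sketch below computes the MLE of a Pareto shape parameter from a complete (uncensored) sample with known scale 1 and builds a parametric percentile bootstrap interval; the joint type-II censoring machinery of the paper is deliberately not reproduced.

```python
import math
import random

def pareto_mle(data):
    """MLE of the Pareto shape with known scale 1: n / sum(log x_i)."""
    return len(data) / sum(math.log(v) for v in data)

def parametric_boot_ci(data, n_boot=2000, level=0.95, seed=5):
    """Parametric percentile bootstrap CI for the shape parameter:
    refit the MLE on samples drawn from Pareto(alpha_hat), using the
    inverse-CDF draw x = u**(-1/alpha)."""
    rng = random.Random(seed)
    n, ahat = len(data), pareto_mle(data)
    stats = sorted(
        pareto_mle([rng.random() ** (-1 / ahat) for _ in range(n)])
        for _ in range(n_boot)
    )
    return (stats[int((1 - level) / 2 * n_boot)],
            stats[int((1 + level) / 2 * n_boot) - 1])

rng = random.Random(1)
sample = [rng.random() ** (-1 / 2.0) for _ in range(100)]  # true shape = 2
a_hat = pareto_mle(sample)
lo, hi = parametric_boot_ci(sample)
```

Under censoring, the likelihood and the resampling step both change (only the smallest order statistics are observed), which is what makes the joint scheme in the paper substantially harder than this complete-sample toy.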

  3. The Role of Simulation Approaches in Statistics

    ERIC Educational Resources Information Center

    Wood, Michael

    2005-01-01

This article explores the uses of a simulation model (the two bucket story)--implemented by a stand-alone computer program or an Excel workbook (both on the web)--that can be used for deriving bootstrap confidence intervals and simulating various probability distributions. The strengths of the model are its generality, the fact that it provides…

  4. Epistemic uncertainty in the location and magnitude of earthquakes in Italy from Macroseismic data

    USGS Publications Warehouse

    Bakun, W.H.; Gomez, Capera A.; Stucchi, M.

    2011-01-01

    Three independent techniques (Bakun and Wentworth, 1997; Boxer from Gasperini et al., 1999; and Macroseismic Estimation of Earthquake Parameters [MEEP; see Data and Resources section, deliverable D3] from R.M.W. Musson and M.J. Jimenez) have been proposed for estimating an earthquake location and magnitude from intensity data alone. The locations and magnitudes obtained for a given set of intensity data are almost always different, and no one technique is consistently best at matching instrumental locations and magnitudes of recent well-recorded earthquakes in Italy. Rather than attempting to select one of the three solutions as best, we use all three techniques to estimate the location and the magnitude and the epistemic uncertainties among them. The estimates are calculated using bootstrap resampled data sets with Monte Carlo sampling of a decision tree. The decision-tree branch weights are based on goodness-of-fit measures of location and magnitude for recent earthquakes. The location estimates are based on the spatial distribution of locations calculated from the bootstrap resampled data. The preferred source location is the locus of the maximum bootstrap location spatial density. The location uncertainty is obtained from contours of the bootstrap spatial density: 68% of the bootstrap locations are within the 68% confidence region, and so on. For large earthquakes, our preferred location is not associated with the epicenter but with a location on the extended rupture surface. For small earthquakes, the epicenters are generally consistent with the location uncertainties inferred from the intensity data if an epicenter inaccuracy of 2-3 km is allowed. The preferred magnitude is the median of the distribution of bootstrap magnitudes. As with location uncertainties, the uncertainties in magnitude are obtained from the distribution of bootstrap magnitudes: the bounds of the 68% uncertainty range enclose 68% of the bootstrap magnitudes, and so on. 
The instrumental magnitudes for large and small earthquakes are generally consistent with the confidence intervals inferred from the distribution of bootstrap resampled magnitudes.

  5. Non-parametric methods for cost-effectiveness analysis: the central limit theorem and the bootstrap compared.

    PubMed

    Nixon, Richard M; Wonderling, David; Grieve, Richard D

    2010-03-01

    Cost-effectiveness analyses (CEA) alongside randomised controlled trials commonly estimate incremental net benefits (INB), with 95% confidence intervals, and compute cost-effectiveness acceptability curves and confidence ellipses. Two alternative non-parametric methods for estimating INB are to apply the central limit theorem (CLT) or to use the non-parametric bootstrap method, although it is unclear which method is preferable. This paper describes the statistical rationale underlying each of these methods and illustrates their application with a trial-based CEA. It compares the sampling uncertainty from using either technique in a Monte Carlo simulation. The experiments are repeated varying the sample size and the skewness of costs in the population. The results showed that, even when data were highly skewed, both methods accurately estimated the true standard errors (SEs) when sample sizes were moderate to large (n>50), and also gave good estimates for small data sets with low skewness. However, when sample sizes were relatively small and the data highly skewed, using the CLT rather than the bootstrap led to slightly more accurate SEs. We conclude that while in general using either method is appropriate, the CLT is easier to implement, and provides SEs that are at least as accurate as the bootstrap. (c) 2009 John Wiley & Sons, Ltd.
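The two competing standard-error estimates can be compared directly on skewed cost data; in the sketch below they agree closely for n = 80, consistent with the paper's finding for moderate-to-large samples. The lognormal cost model is an assumption for illustration, not taken from the trial data.

```python
import math
import random

def clt_se(costs):
    """Standard error of the mean cost via the central limit theorem."""
    n = len(costs)
    m = sum(costs) / n
    var = sum((c - m) ** 2 for c in costs) / (n - 1)
    return math.sqrt(var / n)

def boot_se(costs, n_boot=2000, seed=4):
    """Non-parametric bootstrap standard error of the mean cost."""
    rng = random.Random(seed)
    n = len(costs)
    means = [sum(rng.choice(costs) for _ in range(n)) / n
             for _ in range(n_boot)]
    g = sum(means) / n_boot
    return math.sqrt(sum((m - g) ** 2 for m in means) / (n_boot - 1))

rng = random.Random(2)
costs = [rng.lognormvariate(6, 1.2) for _ in range(80)]  # right-skewed costs
se_clt = clt_se(costs)
se_boot = boot_se(costs)
```

The CLT estimate needs one pass over the data, while the bootstrap needs thousands of resamples to deliver essentially the same number, which is the paper's practical argument for the simpler method.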

  6. A combined approach of generalized additive model and bootstrap with small sample sets for fault diagnosis in fermentation process of glutamate.

    PubMed

    Liu, Chunbo; Pan, Feng; Li, Yun

    2016-07-29

Glutamate is of great importance in the food and pharmaceutical industries. There is still a lack of effective statistical approaches for fault diagnosis in the fermentation process of glutamate. To date, the statistical approach based on the generalized additive model (GAM) and bootstrap has not been used for fault diagnosis in fermentation processes, much less the fermentation process of glutamate with small sample sets. A combined approach of GAM and bootstrap was developed for online fault diagnosis in the fermentation process of glutamate with small sample sets. GAM was first used to model the relationship between glutamate production and different fermentation parameters using online data from four normal fermentation experiments of glutamate. The fitted GAM with fermentation time, dissolved oxygen, oxygen uptake rate and carbon dioxide evolution rate captured 99.6 % of the variance in glutamate production during the fermentation process. Bootstrap was then used to quantify the uncertainty of the estimated production of glutamate from the fitted GAM using a 95 % confidence interval. The proposed approach was then used for online fault diagnosis in abnormal fermentation processes of glutamate, and a fault was defined as the estimated production of glutamate falling outside the 95 % confidence interval. The online fault diagnosis based on the proposed approach identified not only the start of a fault in the fermentation process, but also its end when the fermentation conditions were back to normal. The proposed approach used only small sample sets from normal fermentation experiments to establish the model, and then required only online recorded data on fermentation parameters for fault diagnosis in the fermentation process of glutamate. The proposed approach based on GAM and bootstrap provides a new and effective way for fault diagnosis in the fermentation process of glutamate with small sample sets.

  7. Return on Investment of a Work-Family Intervention: Evidence From the Work, Family, and Health Network.

    PubMed

    Barbosa, Carolina; Bray, Jeremy W; Dowd, William N; Mills, Michael J; Moen, Phyllis; Wipfli, Brad; Olson, Ryan; Kelly, Erin L

    2015-09-01

    To estimate the return on investment (ROI) of a workplace initiative to reduce work-family conflict in a group-randomized 18-month field experiment in an information technology firm in the United States. Intervention resources were micro-costed; benefits included medical costs, productivity (presenteeism), and turnover. Regression models were used to estimate the ROI, and cluster-robust bootstrap was used to calculate its confidence interval. For each participant, model-adjusted costs of the intervention were $690 and company savings were $1850 (2011 prices). The ROI was 1.68 (95% confidence interval, -8.85 to 9.47) and was robust in sensitivity analyses. The positive ROI indicates that employers' investment in an intervention to reduce work-family conflict can enhance their business. Although this was the first study to present a confidence interval for the ROI, results are comparable with the literature.
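A cluster bootstrap for an ROI statistic can be sketched as follows. The per-participant cost and savings figures are hypothetical, loosely calibrated to the $690 / $1850 values above, and whole clusters are resampled with replacement so that within-cluster dependence is preserved (a simplification of the regression-based, cluster-robust bootstrap used in the paper).

```python
import random

def roi(clusters):
    """ROI = (savings - cost) / cost, pooled over all participants."""
    cost = sum(c for grp in clusters for c, s in grp)
    savings = sum(s for grp in clusters for c, s in grp)
    return (savings - cost) / cost

def cluster_boot_ci(clusters, n_boot=2000, level=0.95, seed=9):
    """Cluster bootstrap: resample whole clusters with replacement so
    within-cluster dependence is preserved; percentile limits for the ROI."""
    rng = random.Random(seed)
    k = len(clusters)
    stats = sorted(
        roi([clusters[rng.randrange(k)] for _ in range(k)])
        for _ in range(n_boot)
    )
    return (stats[int((1 - level) / 2 * n_boot)],
            stats[int((1 + level) / 2 * n_boot) - 1])

# Hypothetical per-participant (cost, savings) pairs in 12 clusters,
# loosely calibrated to the $690 cost / $1850 savings figures above
rng = random.Random(0)
clusters = [[(690.0, rng.gauss(1850, 900)) for _ in range(rng.randrange(5, 15))]
            for _ in range(12)]
est = roi(clusters)
lo, hi = cluster_boot_ci(clusters)
```

With few clusters, cluster-level resampling yields wide intervals, which is consistent with the paper reporting a positive ROI point estimate alongside a confidence interval that spans zero.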

  8. Landslide susceptibility near highways is increased by one order of magnitude in the Andes of southern Ecuador, Loja province

    NASA Astrophysics Data System (ADS)

    Brenning, A.; Schwinn, M.; Ruiz-Páez, A. P.; Muenchow, J.

    2014-03-01

    Mountain roads in developing countries are known to increase landslide occurrence due to often inadequate drainage systems and mechanical destabilization of hillslopes by undercutting and overloading. This study empirically investigates landslide initiation frequency along two paved interurban highways in the tropical Andes of southern Ecuador across different climatic regimes. Generalized additive models (GAM) and generalized linear models (GLM) were used to analyze the relationship between mapped landslide initiation points and distance to highway while accounting for topographic, climatic and geological predictors as possible confounders. A spatial block bootstrap was used to obtain non-parametric confidence intervals for the odds ratio of landslide occurrence near the highways (25 m distance) compared to a 200 m distance. The estimated odds ratio was 18-21 with lower 95% confidence bounds > 13 in all analyses. Spatial bootstrap estimation using the GAM supports the higher odds ratio estimate of 21.2 (95% confidence interval: 15.5-25.3). The highway-related effects were observed to fade at about 150 m distance. Road effects appear to be enhanced in geological units characterized by Holocene gravels and Laramide andesite/basalt. Overall, landslide susceptibility was found to be more than one order of magnitude higher in close proximity to paved interurban highways in the Andes of southern Ecuador.

  9. Landslide susceptibility near highways is increased by 1 order of magnitude in the Andes of southern Ecuador, Loja province

    NASA Astrophysics Data System (ADS)

    Brenning, A.; Schwinn, M.; Ruiz-Páez, A. P.; Muenchow, J.

    2015-01-01

    Mountain roads in developing countries are known to increase landslide occurrence due to often inadequate drainage systems and mechanical destabilization of hillslopes by undercutting and overloading. This study empirically investigates landslide initiation frequency along two paved interurban highways in the tropical Andes of southern Ecuador across different climatic regimes. Generalized additive models (GAM) and generalized linear models (GLM) were used to analyze the relationship between mapped landslide initiation points and distance to highway while accounting for topographic, climatic, and geological predictors as possible confounders. A spatial block bootstrap was used to obtain nonparametric confidence intervals for the odds ratio of landslide occurrence near the highways (25 m distance) compared to a 200 m distance. The estimated odds ratio was 18-21, with lower 95% confidence bounds >13 in all analyses. Spatial bootstrap estimation using the GAM supports the higher odds ratio estimate of 21.2 (95% confidence interval: 15.5-25.3). The highway-related effects were observed to fade at about 150 m distance. Road effects appear to be enhanced in geological units characterized by Holocene gravels and Laramide andesite/basalt. Overall, landslide susceptibility was found to be more than 1 order of magnitude higher in close proximity to paved interurban highways in the Andes of southern Ecuador.

  10. Impact of Sampling Density on the Extent of HIV Clustering

    PubMed Central

    Novitsky, Vlad; Moyo, Sikhulile; Lei, Quanhong; DeGruttola, Victor

    2014-01-01

    Abstract Identifying and monitoring HIV clusters could be useful in tracking the leading edge of HIV transmission in epidemics. Currently, greater specificity in the definition of HIV clusters is needed to reduce confusion in the interpretation of HIV clustering results. We address sampling density as one of the key aspects of HIV cluster analysis. The proportion of viral sequences in clusters was estimated at sampling densities from 1.0% to 70%. A set of 1,248 HIV-1C env gp120 V1C5 sequences from a single community in Botswana was utilized in simulation studies. Matching numbers of HIV-1C V1C5 sequences from the LANL HIV Database were used as comparators. HIV clusters were identified by phylogenetic inference under bootstrapped maximum likelihood and pairwise distance cut-offs. Sampling density below 10% was associated with stochastic HIV clustering with broad confidence intervals. HIV clustering increased linearly at sampling density >10%, and was accompanied by narrowing confidence intervals. Patterns of HIV clustering were similar at bootstrap thresholds 0.7 to 1.0, but the extent of HIV clustering decreased with higher bootstrap thresholds. The origin of sampling (local concentrated vs. scattered global) had a substantial impact on HIV clustering at sampling densities ≥10%. Pairwise distances at 10% were estimated as a threshold for cluster analysis of HIV-1 V1C5 sequences. The node bootstrap support distribution provided additional evidence for 10% sampling density as the threshold for HIV cluster analysis. The detectability of HIV clusters is substantially affected by sampling density. A minimal genotyping density of 10% and sampling density of 50–70% are suggested for HIV-1 V1C5 cluster analysis. PMID:25275430

  11. Bootstrap Prediction Intervals in Non-Parametric Regression with Applications to Anomaly Detection

    NASA Technical Reports Server (NTRS)

Kumar, Sricharan; Srivastava, Ashok N.

    2012-01-01

    Prediction intervals provide a measure of the probable interval in which the outputs of a regression model can be expected to occur. Subsequently, these prediction intervals can be used to determine if the observed output is anomalous or not, conditioned on the input. In this paper, a procedure for determining prediction intervals for outputs of nonparametric regression models using bootstrap methods is proposed. Bootstrap methods allow for a non-parametric approach to computing prediction intervals with no specific assumptions about the sampling distribution of the noise or the data. The asymptotic fidelity of the proposed prediction intervals is theoretically proved. Subsequently, the validity of the bootstrap based prediction intervals is illustrated via simulations. Finally, the bootstrap prediction intervals are applied to the problem of anomaly detection on aviation data.
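A minimal residual-bootstrap sketch of a nonparametric prediction interval, assuming a toy k-nearest-neighbour regressor and synthetic data (the paper's procedure and asymptotic theory are more general): refit the model on resampled pairs, predict at the query point, and add a resampled in-sample residual to capture the noise term.

```python
import random

random.seed(0)

# Synthetic data: y = x^2 plus noise; a toy k-nearest-neighbour regressor
# stands in for the nonparametric model.
xs = [random.uniform(0, 1) for _ in range(200)]
ys = [x * x + random.gauss(0, 0.05) for x in xs]

def knn_predict(x0, xv, yv, k=10):
    # Average the responses of the k nearest training points.
    nearest = sorted(range(len(xv)), key=lambda i: abs(xv[i] - x0))[:k]
    return sum(yv[i] for i in nearest) / k

# Residual bootstrap: refit on resampled pairs, predict at the query point,
# and add a resampled in-sample residual so the interval covers new outputs.
x0 = 0.5
residuals = [ys[i] - knn_predict(xs[i], xs, ys) for i in range(len(xs))]
preds = []
for _ in range(500):
    idx = [random.randrange(len(xs)) for _ in range(len(xs))]
    bx = [xs[i] for i in idx]
    by = [ys[i] for i in idx]
    preds.append(knn_predict(x0, bx, by) + random.choice(residuals))
preds.sort()
pi = (preds[12], preds[486])     # 2.5th and 97.5th percentiles of 500 draws
print(f"95% prediction interval at x=0.5: ({pi[0]:.3f}, {pi[1]:.3f})")
```

An observed output falling outside such an interval is then a candidate anomaly, conditioned on the input, which is how the paper applies the intervals to aviation data.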

  12. A new framework of statistical inferences based on the valid joint sampling distribution of the observed counts in an incomplete contingency table.

    PubMed

    Tian, Guo-Liang; Li, Hui-Qiong

    2017-08-01

Some existing confidence interval methods and hypothesis testing methods for the analysis of a contingency table with incomplete observations in both margins depend entirely on the underlying assumption that the sampling distribution of the observed counts is a product of independent multinomial/binomial distributions for complete and incomplete counts. However, it can be shown that this independence assumption is incorrect and can result in unreliable conclusions because of the under-estimation of the uncertainty. Therefore, the first objective of this paper is to derive the valid joint sampling distribution of the observed counts in a contingency table with incomplete observations in both margins. The second objective is to provide a new framework for analyzing incomplete contingency tables based on the derived joint sampling distribution of the observed counts, by developing a Fisher scoring algorithm to calculate maximum likelihood estimates of the parameters of interest, bootstrap confidence interval methods, and bootstrap hypothesis testing methods. We compare the differences between the valid sampling distribution and the sampling distribution under the independence assumption. Simulation studies showed that the average/expected confidence-interval widths of parameters based on the sampling distribution under the independence assumption are shorter than those based on the new sampling distribution, yielding unrealistically precise results. A real data set is analyzed to illustrate the application of the new sampling distribution for incomplete contingency tables, and the analysis results again confirm the conclusions obtained from the simulation studies.

  13. Examining Differential Item Functioning: IRT-Based Detection in the Framework of Confirmatory Factor Analysis

    ERIC Educational Resources Information Center

    Dimitrov, Dimiter M.

    2017-01-01

    This article offers an approach to examining differential item functioning (DIF) under its item response theory (IRT) treatment in the framework of confirmatory factor analysis (CFA). The approach is based on integrating IRT- and CFA-based testing of DIF and using bias-corrected bootstrap confidence intervals with a syntax code in Mplus.

  14. Correlation Attenuation Due to Measurement Error: A New Approach Using the Bootstrap Procedure

    ERIC Educational Resources Information Center

    Padilla, Miguel A.; Veprinsky, Anna

    2012-01-01

    Issues with correlation attenuation due to measurement error are well documented. More than a century ago, Spearman proposed a correction for attenuation. However, this correction has seen very little use since it can potentially inflate the true correlation beyond one. In addition, very little confidence interval (CI) research has been done for…

  15. Power in Bayesian Mediation Analysis for Small Sample Research

    PubMed Central

    Miočević, Milica; MacKinnon, David P.; Levy, Roy

    2018-01-01

It has been suggested that Bayesian methods have potential for increasing power in mediation analysis (Koopman, Howe, Hollenbeck, & Sin, 2015; Yuan & MacKinnon, 2009). This paper compares the power of Bayesian credibility intervals for the mediated effect to the power of normal theory, distribution of the product, percentile, and bias-corrected bootstrap confidence intervals at N ≤ 200. Bayesian methods with diffuse priors had power comparable to the distribution of the product and bootstrap methods, and Bayesian methods with informative priors had the most power. Varying degrees of precision of prior distributions were also examined. Increased precision led to greater power only when N ≥ 100 and the effects were small, N < 60 and the effects were large, and N < 200 and the effects were medium. An empirical example from psychology illustrated a Bayesian analysis of the single mediator model from prior selection to interpreting results. PMID:29662296

  16. Power in Bayesian Mediation Analysis for Small Sample Research.

    PubMed

    Miočević, Milica; MacKinnon, David P; Levy, Roy

    2017-01-01

It has been suggested that Bayesian methods have potential for increasing power in mediation analysis (Koopman, Howe, Hollenbeck, & Sin, 2015; Yuan & MacKinnon, 2009). This paper compares the power of Bayesian credibility intervals for the mediated effect to the power of normal theory, distribution of the product, percentile, and bias-corrected bootstrap confidence intervals at N ≤ 200. Bayesian methods with diffuse priors had power comparable to the distribution of the product and bootstrap methods, and Bayesian methods with informative priors had the most power. Varying degrees of precision of prior distributions were also examined. Increased precision led to greater power only when N ≥ 100 and the effects were small, N < 60 and the effects were large, and N < 200 and the effects were medium. An empirical example from psychology illustrated a Bayesian analysis of the single mediator model from prior selection to interpreting results.
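The percentile bootstrap interval for the mediated effect a*b compared in this record can be sketched as follows. The data, sample size, and effect sizes are synthetic, and the bias-correction step of the BC bootstrap is omitted for brevity; a is the slope of M on X, and b is the partial slope of Y on M controlling for X.

```python
import math
import random

random.seed(7)

def corr(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((a - mv) ** 2 for a in v))
    return sum((u[i] - mu) * (v[i] - mv) for i in range(n)) / (su * sv)

def sd(u):
    m = sum(u) / len(u)
    return math.sqrt(sum((a - m) ** 2 for a in u) / (len(u) - 1))

def mediated_effect(x, m, y):
    # a: slope of M on X;  b: partial slope of Y on M controlling for X.
    a = corr(x, m) * sd(m) / sd(x)
    rxm, rmy, rxy = corr(x, m), corr(m, y), corr(x, y)
    b = (rmy - rxy * rxm) / (1 - rxm ** 2) * sd(y) / sd(m)
    return a * b

# Hypothetical single-mediator data: X -> M -> Y with a = b = 0.5.
n = 150
x = [random.gauss(0, 1) for _ in range(n)]
m = [0.5 * xi + random.gauss(0, 1) for xi in x]
y = [0.5 * mi + random.gauss(0, 1) for mi in m]

# Percentile bootstrap: resample cases, recompute a*b, take percentiles.
boot = []
for _ in range(1000):
    idx = [random.randrange(n) for _ in range(n)]
    boot.append(mediated_effect([x[i] for i in idx],
                                [m[i] for i in idx],
                                [y[i] for i in idx]))
boot.sort()
ci = (boot[24], boot[974])       # 2.5th and 97.5th percentiles of 1000 draws
print(f"ab = {mediated_effect(x, m, y):.3f}, "
      f"95% percentile CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

Because the sampling distribution of a*b is skewed, interval methods like this one (and its bias-corrected variant) typically outperform the normal-theory interval at small N.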

  17. Standard Errors and Confidence Intervals from Bootstrapping for Ramsay-Curve Item Response Theory Model Item Parameters

    ERIC Educational Resources Information Center

    Gu, Fei; Skorupski, William P.; Hoyle, Larry; Kingston, Neal M.

    2011-01-01

    Ramsay-curve item response theory (RC-IRT) is a nonparametric procedure that estimates the latent trait using splines, and no distributional assumption about the latent trait is required. For item parameters of the two-parameter logistic (2-PL), three-parameter logistic (3-PL), and polytomous IRT models, RC-IRT can provide more accurate estimates…

  18. Standard errors and confidence intervals for variable importance in random forest regression, classification, and survival.

    PubMed

    Ishwaran, Hemant; Lu, Min

    2018-06-04

    Random forests are a popular nonparametric tree ensemble procedure with broad applications to data analysis. While its widespread popularity stems from its prediction performance, an equally important feature is that it provides a fully nonparametric measure of variable importance (VIMP). A current limitation of VIMP, however, is that no systematic method exists for estimating its variance. As a solution, we propose a subsampling approach that can be used to estimate the variance of VIMP and for constructing confidence intervals. The method is general enough that it can be applied to many useful settings, including regression, classification, and survival problems. Using extensive simulations, we demonstrate the effectiveness of the subsampling estimator and in particular find that the delete-d jackknife variance estimator, a close cousin, is especially effective under low subsampling rates due to its bias correction properties. These 2 estimators are highly competitive when compared with the .164 bootstrap estimator, a modified bootstrap procedure designed to deal with ties in out-of-sample data. Most importantly, subsampling is computationally fast, thus making it especially attractive for big data settings. Copyright © 2018 John Wiley & Sons, Ltd.
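The subsampling variance estimator proposed here applies beyond random forest importance measures. A minimal sketch for a generic statistic (the median, on synthetic data) shows the key move: compute the statistic on many size-b subsamples drawn without replacement, then rescale the subsample variance by b/n. The data, b, and replication count are illustrative choices, not the paper's settings.

```python
import math
import random

random.seed(3)

# Synthetic data; the statistic of interest is the median.
data = [random.expovariate(1.0) for _ in range(1000)]
n, b = len(data), 100

def median(v):
    s = sorted(v)
    k = len(s) // 2
    return s[k] if len(s) % 2 else 0.5 * (s[k - 1] + s[k])

# Subsampling: draw size-b subsamples WITHOUT replacement (unlike the
# bootstrap), then rescale the size-b variance down to size n.
subs = []
for _ in range(500):
    subs.append(median(random.sample(data, b)))
mean_sub = sum(subs) / len(subs)
var_b = sum((t - mean_sub) ** 2 for t in subs) / (len(subs) - 1)
se = math.sqrt((b / n) * var_b)   # Var(theta_n) ~ (b/n) * Var(theta_b)
est = median(data)
print(f"median = {est:.3f}, subsampling SE = {se:.3f}, "
      f"95% CI = ({est - 1.96 * se:.3f}, {est + 1.96 * se:.3f})")
```

The computational appeal noted in the abstract is visible here: each replicate touches only b of the n observations, which is what makes subsampling attractive for big data.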

  19. Effect-site concentration of propofol target-controlled infusion at loss of consciousness in intractable epilepsy patients receiving long-term antiepileptic drug therapy.

    PubMed

    Choi, Eun Mi; Choi, Seung Ho; Lee, Min Huiy; Ha, Sang Hee; Min, Kyeong Tae

    2011-07-01

The propofol dose requirement for loss of consciousness (LOC) in epilepsy patients is probably affected by increasing factors [development of tolerance, up-regulated γ-aminobutyric acid (GABAA) receptors, or antiepileptic activity of propofol] and reducing factors [synergistic interaction between propofol and antiepileptic drugs (AEDs) or reduced neuronal mass in the cortex] in complex and counteracting ways. Therefore, we determined the effect-site concentration (Ce) of propofol for LOC in intractable epilepsy patients receiving chronic AEDs in comparison with non-epilepsy patients. Nineteen epilepsy patients receiving long-term AED therapy and 20 non-epilepsy patients, aged 20 to 65 years, were enrolled. The epilepsy patients took their prescribed AEDs until the morning of the operation. Ce of propofol for LOC was determined with an isotonic regression method with a bootstrapping approach following Dixon's up-and-down allocation. The study was carried out before surgical stimulation. Isotonic regression showed that the estimated Ce50 and Ce95 of propofol for LOC were lower in the epilepsy group [2.88 μg/mL (83% confidence interval, 2.82-3.13 μg/mL) and 3.43 μg/mL (95% confidence interval, 3.28-3.47 μg/mL)] than in the non-epilepsy group [3.38 μg/mL (83% confidence interval, 3.17-3.63 μg/mL) and 3.92 μg/mL (95% confidence interval, 3.80-3.97 μg/mL)] with the bootstrapping approach. The mean Ce50 of propofol in the epilepsy group was also lower than that of the non-epilepsy group, without statistical significance (2.82±0.19 μg/mL vs 3.16±0.38 μg/mL, P=0.056). For anesthetic induction of epilepsy patients with propofol target-controlled infusion, Ce may need to be reduced by 10% to 15% compared with non-epilepsy patients.

  20. Using the Bootstrap Method to Evaluate the Critical Range of Misfit for Polytomous Rasch Fit Statistics.

    PubMed

    Seol, Hyunsoo

    2016-06-01

    The purpose of this study was to apply the bootstrap procedure to evaluate how the bootstrapped confidence intervals (CIs) for polytomous Rasch fit statistics might differ according to sample sizes and test lengths in comparison with the rule-of-thumb critical value of misfit. A total of 25 simulated data sets were generated to fit the Rasch measurement and then a total of 1,000 replications were conducted to compute the bootstrapped CIs under each of 25 testing conditions. The results showed that rule-of-thumb critical values for assessing the magnitude of misfit were not applicable because the infit and outfit mean square error statistics showed different magnitudes of variability over testing conditions and the standardized fit statistics did not exactly follow the standard normal distribution. Further, they also do not share the same critical range for the item and person misfit. Based on the results of the study, the bootstrapped CIs can be used to identify misfitting items or persons as they offer a reasonable alternative solution, especially when the distributions of the infit and outfit statistics are not well known and depend on sample size. © The Author(s) 2016.

  1. On the log-normality of historical magnetic-storm intensity statistics: implications for extreme-event probabilities

    USGS Publications Warehouse

    Love, Jeffrey J.; Rigler, E. Joshua; Pulkkinen, Antti; Riley, Pete

    2015-01-01

An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to −Dst storm-time maxima for years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both methods also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, −Dst≥850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42,2.41] times per century; a 100-yr magnetic storm is identified as having a −Dst≥880 nT (greater than Carrington) but with a wide 95% confidence interval of [490,1187] nT.
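The log-normal MLE fit plus bootstrap confidence limits can be sketched on synthetic yearly maxima; the data below are drawn log-normally for illustration and are not the real −Dst series, and the 850 nT threshold is taken from the abstract.

```python
import math
import random

random.seed(5)

# Synthetic yearly storm maxima (56 values, one per year 1957-2012),
# drawn log-normally purely for illustration.
data = [math.exp(random.gauss(4.5, 1.0)) for _ in range(56)]

def rate_per_century(sample, threshold=850.0):
    # Log-normal MLE: sample mean and SD of the logs; tail probability via
    # the normal CDF (erfc), scaled to events per 100 yearly maxima.
    logs = [math.log(v) for v in sample]
    mu = sum(logs) / len(logs)
    sigma = math.sqrt(sum((l - mu) ** 2 for l in logs) / len(logs))
    z = (math.log(threshold) - mu) / sigma
    return 100 * 0.5 * math.erfc(z / math.sqrt(2))

# Percentile bootstrap confidence limits on the forecast rate.
boot = sorted(
    rate_per_century([random.choice(data) for _ in data])
    for _ in range(2000)
)
print(f"rate = {rate_per_century(data):.2f}/century, "
      f"95% CI = [{boot[50]:.2f}, {boot[1949]:.2f}]")
```

With only a few dozen annual maxima, the bootstrap interval on an extrapolated tail probability is wide, which mirrors the wide per-century intervals reported in the abstract.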

  2. Bootstrap confidence levels for phylogenetic trees.

    PubMed

    Efron, B; Halloran, E; Holmes, S

    1996-07-09

    Evolutionary trees are often estimated from DNA or RNA sequence data. How much confidence should we have in the estimated trees? In 1985, Felsenstein [Felsenstein, J. (1985) Evolution 39, 783-791] suggested the use of the bootstrap to answer this question. Felsenstein's method, which in concept is a straightforward application of the bootstrap, is widely used, but has been criticized as biased in the genetics literature. This paper concerns the use of the bootstrap in the tree problem. We show that Felsenstein's method is not biased, but that it can be corrected to better agree with standard ideas of confidence levels and hypothesis testing. These corrections can be made by using the more elaborate bootstrap method presented here, at the expense of considerably more computation.
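Felsenstein's bootstrap can be sketched in a few lines: resample alignment columns with replacement and count how often a grouping of interest recurs. The toy three-taxon alignment and the nearest-pair "tree estimate" below are stand-ins for a real phylogenetic reconstruction, and this sketch implements the straightforward bootstrap, not the paper's more elaborate corrected version.

```python
import random

random.seed(2)

# Toy alignment: taxa A and B are close; C is more distant.
seqs = {
    "A": "ACGTACGTACGTACGTACGT",
    "B": "ACGTACGAACGTACGTACTT",
    "C": "TCGAACGATCGAACTTACTA",
}

def hamming(s, t, cols):
    # Count mismatches over the (possibly repeated) resampled columns.
    return sum(s[i] != t[i] for i in cols)

def closest_pair(cols):
    taxa = list(seqs)
    pairs = [(taxa[i], taxa[j]) for i in range(3) for j in range(i + 1, 3)]
    return min(pairs, key=lambda p: hamming(seqs[p[0]], seqs[p[1]], cols))

# Felsenstein bootstrap: resample alignment columns with replacement and
# count how often the (A, B) grouping recurs.
n = len(seqs["A"])
support = 0
for _ in range(1000):
    cols = [random.randrange(n) for _ in range(n)]
    support += closest_pair(cols) == ("A", "B")
print(f"bootstrap support for (A,B): {support / 1000:.2f}")
```

The resulting proportion is the familiar bootstrap support value placed on a clade; the paper's contribution is a correction that brings such proportions closer to conventional confidence levels.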

  3. Alternative methods to evaluate trial level surrogacy.

    PubMed

    Abrahantes, Josè Cortiñas; Shkedy, Ziv; Molenberghs, Geert

    2008-01-01

The evaluation and validation of surrogate endpoints have been extensively studied in the last decade. Prentice [1] and Freedman, Graubard and Schatzkin [2] laid the foundations for the evaluation of surrogate endpoints in randomized clinical trials. Later, Buyse et al. [5] proposed a meta-analytic methodology, producing different methods for different settings, which was further studied by Alonso and Molenberghs [9] in their unifying approach based on information theory. In this article, we focus on trial-level surrogacy and propose alternative procedures for evaluating this surrogacy measure that do not pre-specify the type of association. A promising correction based on cross-validation is investigated, as well as the construction of confidence intervals for this measure. To avoid making assumptions about the type of relationship between the treatment effects and its distribution, a collection of alternative methods based on regression trees, bagging, random forests, and support vector machines, combined with bootstrap-based confidence intervals and, should one wish, a cross-validation-based correction, is proposed and applied. We apply the various strategies to data from three clinical studies: in ophthalmology, in advanced colorectal cancer, and in schizophrenia. The results obtained for the three case studies are compared; they indicate that random forest and bagging models produce larger estimated values for the surrogacy measure, which are in general more stable and have narrower confidence intervals than linear regression and support vector regression. For the advanced colorectal cancer studies, we even found trial-level surrogacy considerably different from what has been reported. In general, the alternative methods are more computationally demanding; in particular, the calculation of the confidence intervals requires more computational time than the delta-method counterpart.
Several conclusions emerge. First, more flexible modeling techniques can be used, allowing for other types of association. Second, when no cross-validation-based correction is applied, overly optimistic trial-level surrogacy estimates will be found; thus cross-validation is highly recommended. Third, the use of the delta method to calculate confidence intervals is not recommended, since it makes assumptions valid only in very large samples and may produce range-violating limits. We therefore recommend alternatives, bootstrap methods in general. The information-theoretic approach produces results comparable to the bagging and random forest approaches when the cross-validation correction is applied. It is also important to observe that, even in cases where a linear model might be a good option, bagging methods perform well, and their confidence intervals were narrower.

  4. CREATION OF A MODEL TO PREDICT SURVIVAL IN PATIENTS WITH REFRACTORY COELIAC DISEASE USING A MULTINATIONAL REGISTRY

    PubMed Central

    Rubio-Tapia, Alberto; Malamut, Georgia; Verbeek, Wieke H.M.; van Wanrooij, Roy L.J.; Leffler, Daniel A.; Niveloni, Sonia I.; Arguelles-Grande, Carolina; Lahr, Brian D.; Zinsmeister, Alan R.; Murray, Joseph A.; Kelly, Ciaran P.; Bai, Julio C.; Green, Peter H.; Daum, Severin; Mulder, Chris J.J.; Cellier, Christophe

    2016-01-01

    Background Refractory coeliac disease is a severe complication of coeliac disease with heterogeneous outcome. Aim To create a prognostic model to estimate survival of patients with refractory coeliac disease. Methods We evaluated predictors of 5-year mortality using Cox proportional hazards regression on subjects from a multinational registry. Bootstrap re-sampling was used to internally validate the individual factors and overall model performance. The mean of the estimated regression coefficients from 400 bootstrap models was used to derive a risk score for 5-year mortality. Results The multinational cohort was composed of 232 patients diagnosed with refractory coeliac disease across 7 centers (range of 11–63 cases per center). The median age was 53 years and 150 (64%) were women. A total of 51 subjects died during 5-year follow-up (cumulative 5-year all-cause mortality = 30%). From a multiple variable Cox proportional hazards model, the following variables were significantly associated with 5-year mortality: age at refractory coeliac disease diagnosis (per 20 year increase, hazard ratio = 2.21; 95% confidence interval: 1.38, 3.55), abnormal intraepithelial lymphocytes (hazard ratio = 2.85; 95% confidence interval: 1.22, 6.62), and albumin (per 0.5 unit increase, hazard ratio = 0.72; 95% confidence interval: 0.61, 0.85). A simple weighted 3-factor risk score was created to estimate 5-year survival. Conclusions Using data from a multinational registry and previously-reported risk factors, we create a prognostic model to predict 5-year mortality among patients with refractory coeliac disease. This new model may help clinicians to guide treatment and follow-up. PMID:27485029

  5. The integrated model of sport confidence: a canonical correlation and mediational analysis.

    PubMed

    Koehn, Stefan; Pearce, Alan J; Morris, Tony

    2013-12-01

    The main purpose of the study was to examine crucial parts of Vealey's (2001) integrated framework hypothesizing that sport confidence is a mediating variable between sources of sport confidence (including achievement, self-regulation, and social climate) and athletes' affect in competition. The sample consisted of 386 athletes, who completed the Sources of Sport Confidence Questionnaire, Trait Sport Confidence Inventory, and Dispositional Flow Scale-2. Canonical correlation analysis revealed a confidence-achievement dimension underlying flow. Bias-corrected bootstrap confidence intervals in AMOS 20.0 were used in examining mediation effects between source domains and dispositional flow. Results showed that sport confidence partially mediated the relationship between achievement and self-regulation domains and flow, whereas no significant mediation was found for social climate. On a subscale level, full mediation models emerged for achievement and flow dimensions of challenge-skills balance, clear goals, and concentration on the task at hand.

  6. The Bootstrap, the Jackknife, and the Randomization Test: A Sampling Taxonomy.

    PubMed

    Rodgers, J L

    1999-10-01

    A simple sampling taxonomy is defined that shows the differences between and relationships among the bootstrap, the jackknife, and the randomization test. Each method has as its goal the creation of an empirical sampling distribution that can be used to test statistical hypotheses, estimate standard errors, and/or create confidence intervals. Distinctions between the methods can be made based on the sampling approach (with replacement versus without replacement) and the sample size (replacing the whole original sample versus replacing a subset of the original sample). The taxonomy is useful for teaching the goals and purposes of resampling schemes. An extension of the taxonomy implies other possible resampling approaches that have not previously been considered. Univariate and multivariate examples are presented.
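The taxonomy can be made concrete in one snippet: the bootstrap samples with replacement at full size, the jackknife deletes one observation at a time (a without-replacement subset of size n−1), and the randomization test reshuffles the pooled sample without replacement. The two-group data below are synthetic.

```python
import math
import random

random.seed(4)

x = [random.gauss(0.5, 1) for _ in range(30)]
y = [random.gauss(0.0, 1) for _ in range(30)]
diff = sum(x) / 30 - sum(y) / 30

# Bootstrap: resample each group WITH replacement at full size
# -> SE of the mean difference.
boot = []
for _ in range(2000):
    bx = [random.choice(x) for _ in x]
    by = [random.choice(y) for _ in y]
    boot.append(sum(bx) / 30 - sum(by) / 30)
mb = sum(boot) / len(boot)
se_boot = math.sqrt(sum((d - mb) ** 2 for d in boot) / (len(boot) - 1))

# Jackknife: delete one observation at a time (WITHOUT replacement,
# subsets of size n-1) -> SE of the mean of x.
jack = [(sum(x) - xi) / 29 for xi in x]
mj = sum(jack) / len(jack)
se_jack = math.sqrt(29 / 30 * sum((j - mj) ** 2 for j in jack))

# Randomization test: reshuffle the pooled labels (sampling WITHOUT
# replacement of the whole pooled sample) -> p-value for the difference.
pooled = x + y
hits = 0
for _ in range(2000):
    random.shuffle(pooled)
    d = sum(pooled[:30]) / 30 - sum(pooled[30:]) / 30
    hits += abs(d) >= abs(diff)
p = hits / 2000
print(f"diff={diff:.3f}  boot SE={se_boot:.3f}  "
      f"jack SE={se_jack:.3f}  randomization p={p:.3f}")
```

Each method builds an empirical sampling distribution; what differs is exactly the two axes of the taxonomy, replacement versus no replacement and whole-sample versus subset.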

  7. Acculturation, Income and Vegetable Consumption Behaviors Among Latino Adults in the U.S.: A Mediation Analysis with the Bootstrapping Technique.

    PubMed

    López, Erick B; Yamashita, Takashi

    2017-02-01

This study examined whether household income mediates the relationship between acculturation and vegetable consumption among Latino adults in the U.S. Data from the 2009 to 2010 National Health and Nutrition Examination Survey were analyzed. A vegetable consumption index was created based on the frequencies of intake of five kinds of vegetables. Acculturation was measured by the degree of English language use at home. A path model with the bootstrapping technique was employed for mediation analysis. A significant partial mediation relationship was identified. Greater acculturation [95% bias-corrected bootstrap confidence interval (BCBCI) = (0.02, 0.33)] was associated with higher income and, in turn, greater vegetable consumption. At the same time, greater acculturation was associated with lower vegetable consumption [95% BCBCI = (-0.88, -0.07)]. Findings regarding income as a mediator of the acculturation-dietary behavior relationship inform unique intervention programs and policy changes to address health disparities by race/ethnicity.

  8. Exploration of the factor structure of the Kirton Adaption-Innovation Inventory using bootstrapping estimation.

    PubMed

    Im, Subin; Min, Soonhong

    2013-04-01

    Exploratory factor analyses of the Kirton Adaption-Innovation Inventory (KAI), which serves to measure individual cognitive styles, generally indicate three factors: sufficiency of originality, efficiency, and rule/group conformity. In contrast, a 2005 study by Im and Hu using confirmatory factor analysis supported a four-factor structure, dividing the sufficiency of originality dimension into two subdimensions, idea generation and preference for change. This study extends Im and Hu's (2005) study of a derived version of the KAI by providing additional evidence of the four-factor structure. Specifically, the authors test the robustness of the parameter estimates to the violation of normality assumptions in the sample using bootstrap methods. A bias-corrected confidence interval bootstrapping procedure conducted among a sample of 356 participants--members of the Arkansas Household Research Panel, with middle SES and average age of 55.6 yr. (SD = 13.9)--showed that the four-factor model with two subdimensions of sufficiency of originality fits the data significantly better than the three-factor model in non-normality conditions.

  9. Zero- vs. one-dimensional, parametric vs. non-parametric, and confidence interval vs. hypothesis testing procedures in one-dimensional biomechanical trajectory analysis.

    PubMed

    Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A

    2015-05-01

Biomechanical processes are often manifested as one-dimensional (1D) trajectories. It has been shown that 1D confidence intervals (CIs) are biased when based on 0D statistical procedures, and the non-parametric 1D bootstrap CI has emerged in the Biomechanics literature as a viable solution. The primary purpose of this paper was to clarify that, for 1D biomechanics datasets, the distinction between 0D and 1D methods is much more important than the distinction between parametric and non-parametric procedures. A secondary purpose was to demonstrate that a parametric equivalent to the 1D bootstrap exists in the form of a random field theory (RFT) correction for multiple comparisons. To emphasize these points we analyzed six datasets consisting of force and kinematic trajectories in one-sample, paired, two-sample and regression designs. Results showed, first, that the 1D bootstrap and other 1D non-parametric CIs were qualitatively identical to RFT CIs, and all were very different from 0D CIs. Second, 1D parametric and 1D non-parametric hypothesis testing results were qualitatively identical for all six datasets. Last, we highlight the limitations of 1D CIs by demonstrating that they are complex, design-dependent, and thus non-generalizable. These results suggest that (i) analyses of 1D data based on 0D models of randomness are generally biased unless one explicitly identifies 0D variables before the experiment, and (ii) parametric and non-parametric 1D hypothesis testing provide an unambiguous framework for analysis when one's hypothesis explicitly or implicitly pertains to whole 1D trajectories. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Estimating uncertainty in respondent-driven sampling using a tree bootstrap method.

    PubMed

    Baraff, Aaron J; McCormick, Tyler H; Raftery, Adrian E

    2016-12-20

    Respondent-driven sampling (RDS) is a network-based form of chain-referral sampling used to estimate attributes of populations that are difficult to access using standard survey tools. Although it has grown quickly in popularity since its introduction, the statistical properties of RDS estimates remain elusive. In particular, the sampling variability of these estimates has been shown to be much higher than previously acknowledged, and even methods designed to account for RDS result in misleadingly narrow confidence intervals. In this paper, we introduce a tree bootstrap method for estimating uncertainty in RDS estimates based on resampling recruitment trees. We use simulations from known social networks to show that the tree bootstrap method not only outperforms existing methods but also captures the high variability of RDS, even in extreme cases with high design effects. We also apply the method to data from injecting drug users in Ukraine. Unlike other methods, the tree bootstrap depends only on the structure of the sampled recruitment trees, not on the attributes being measured on the respondents, so correlations between attributes can be estimated as well as variability. Our results suggest that it is possible to accurately assess the high level of uncertainty inherent in RDS.
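A simplified sketch of the tree bootstrap idea: resample the seeds with replacement, then recursively resample each node's recruits with replacement. The recruitment forest and binary attribute here are synthetic, and real RDS estimators additionally weight by respondent degree, which is omitted.

```python
import random

random.seed(6)

# Hypothetical RDS recruitment forest: each node is (attribute, children);
# each seed starts a tree; the attribute is binary (e.g., infection status).
def make_tree(depth, p):
    kids = [] if depth == 0 else [make_tree(depth - 1, p)
                                  for _ in range(random.randrange(3))]
    return (int(random.random() < p), kids)

forest = [make_tree(4, 0.3) for _ in range(10)]

def resample(node):
    # Tree bootstrap: keep the node, draw its children with replacement,
    # then recurse into each drawn child.
    attr, kids = node
    return (attr, [resample(random.choice(kids)) for _ in kids] if kids else [])

def prevalence(trees):
    # Unweighted sample proportion over all nodes in the forest.
    def walk(node):
        attr, kids = node
        tot, pos = 1, attr
        for child in kids:
            t, s = walk(child)
            tot, pos = tot + t, pos + s
        return tot, pos
    counts = [walk(t) for t in trees]
    return sum(c[1] for c in counts) / sum(c[0] for c in counts)

# Resample seeds with replacement, then resample within each chosen tree.
boot = sorted(prevalence([resample(random.choice(forest)) for _ in forest])
              for _ in range(1000))
print(f"prevalence = {prevalence(forest):.2f}, "
      f"95% CI = ({boot[24]:.2f}, {boot[974]:.2f})")
```

Because the resampling depends only on the recruitment structure, the same bootstrap replicates can serve any attribute measured on the respondents, which is the property the abstract highlights.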

  11. Determination of Time Dependent Virus Inactivation Rates

    NASA Astrophysics Data System (ADS)

    Chrysikopoulos, C. V.; Vogler, E. T.

    2003-12-01

    A methodology is developed for estimating temporally variable virus inactivation rate coefficients from experimental virus inactivation data. The methodology consists of a technique for slope estimation of normalized virus inactivation data in conjunction with a resampling parameter estimation procedure. The slope estimation technique is based on a relatively flexible geostatistical method known as universal kriging. Drift coefficients are obtained by nonlinear fitting of bootstrap samples and the corresponding confidence intervals are obtained by bootstrap percentiles. The proposed methodology yields more accurate time dependent virus inactivation rate coefficients than those estimated by fitting virus inactivation data to a first-order inactivation model. The methodology is successfully applied to a set of poliovirus batch inactivation data. Furthermore, the importance of accurate inactivation rate coefficient determination on virus transport in water saturated porous media is demonstrated with model simulations.
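The percentile-bootstrap portion of such a methodology can be sketched on synthetic first-order decay data (the universal-kriging slope estimation is omitted): resample (t, ln C/C0) pairs, refit the rate coefficient, and take percentile bounds. The rate, noise level, and sampling times below are invented.

```python
import random

random.seed(8)

# Hypothetical batch inactivation data following the first-order model
# ln(C/C0) = -k * t, with additive noise; slope-through-origin least
# squares recovers k.
times = [float(t) for t in range(1, 21)]
true_k = 0.15
logc = [-true_k * t + random.gauss(0, 0.1) for t in times]

def fit_k(ts, ls):
    # Least-squares slope through the origin: k = -sum(t*l) / sum(t^2).
    return -sum(t * l for t, l in zip(ts, ls)) / sum(t * t for t in ts)

# Percentile bootstrap: resample pairs, refit, take percentile bounds.
boot = []
for _ in range(2000):
    idx = [random.randrange(len(times)) for _ in times]
    boot.append(fit_k([times[i] for i in idx], [logc[i] for i in idx]))
boot.sort()
print(f"k = {fit_k(times, logc):.3f}, "
      f"95% percentile CI = ({boot[50]:.3f}, {boot[1949]:.3f})")
```

A time-dependent version, in the spirit of the paper, would repeat this refit-and-resample step on local slopes rather than on one global coefficient.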

  12. Understanding and Targeting Epigenetic Alterations in Acquired Bone Marrow Failure

    DTIC Science & Technology

    2015-05-01

[Figure residue: schematic of the Cre-activated P95H allele, showing the Neo cassette flanked by LoxP and Frt sites, long and short homology arms, probe and primer positions, and the P95H mutation (GGC => GTG); panels show the upstream exon (gray box), intron (black line), and downstream exon (gray box) on genomic coordinates, with the relative frequency of the indicated feature; shading indicates the 95% confidence interval by bootstrapping.]

  13. A fast Monte Carlo EM algorithm for estimation in latent class model analysis with an application to assess diagnostic accuracy for cervical neoplasia in women with AGC

    PubMed Central

    Kang, Le; Carter, Randy; Darcy, Kathleen; Kauderer, James; Liao, Shu-Yuan

    2013-01-01

In this article we use a latent class model (LCM) with prevalence modeled as a function of covariates to assess diagnostic test accuracy in situations where the true disease status is not observed, but observations on three or more conditionally independent diagnostic tests are available. A fast Monte Carlo EM (MCEM) algorithm with binary (disease) diagnostic data is implemented to estimate the parameters of interest, namely, the sensitivity, specificity, and prevalence of the disease as a function of covariates. To obtain standard errors for confidence interval construction of estimated parameters, the missing information principle is applied to adjust information matrix estimates. We compare the adjusted information matrix based standard error estimates with the bootstrap standard error estimates, both obtained using the fast MCEM algorithm, through an extensive Monte Carlo study. Simulation demonstrates that the adjusted information matrix approach estimates the standard error similarly to the bootstrap methods under certain scenarios. The bootstrap percentile intervals have satisfactory coverage probabilities. We then apply the LCM analysis to a real data set of 122 subjects from a Gynecologic Oncology Group (GOG) study of significant cervical lesion (S-CL) diagnosis in women with atypical glandular cells of undetermined significance (AGC) to compare the diagnostic accuracy of a histology-based evaluation, a CA-IX biomarker-based test and a human papillomavirus (HPV) DNA test. PMID:24163493

  14. On the Log-Normality of Historical Magnetic-Storm Intensity Statistics: Implications for Extreme-Event Probabilities

    NASA Astrophysics Data System (ADS)

    Love, J. J.; Rigler, E. J.; Pulkkinen, A. A.; Riley, P.

    2015-12-01

An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to -Dst storm-time maxima for years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both methods also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, -Dst > 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having a -Dst > 880 nT (greater than Carrington) but with a wide 95% confidence interval of [490, 1187] nT. This work is partially motivated by United States National Science and Technology Council and Committee on Space Research and International Living with a Star priorities and strategic plans for the assessment and mitigation of space-weather hazards.

  15. Using Replicates in Information Retrieval Evaluation.

    PubMed

    Voorhees, Ellen M; Samarov, Daniel; Soboroff, Ian

    2017-09-01

    This article explores a method for more accurately estimating the main effect of the system in a typical test-collection-based evaluation of information retrieval systems, thus increasing the sensitivity of system comparisons. Randomly partitioning the test document collection allows for multiple tests of a given system and topic (replicates). Bootstrap ANOVA can use these replicates to extract system-topic interactions (something not possible without replicates), yielding a more precise value for the system effect and a narrower confidence interval around that value. Experiments using multiple TREC collections demonstrate that removing the topic-system interactions substantially reduces the confidence intervals around the system effect and increases the number of significant pairwise differences found. Further, the method is robust against small changes in the number of partitions used, against variability in the documents that constitute the partitions, and against the choice of measure used to quantify system effectiveness.

  16. Using Replicates in Information Retrieval Evaluation

    PubMed Central

    VOORHEES, ELLEN M.; SAMAROV, DANIEL; SOBOROFF, IAN

    2018-01-01

    This article explores a method for more accurately estimating the main effect of the system in a typical test-collection-based evaluation of information retrieval systems, thus increasing the sensitivity of system comparisons. Randomly partitioning the test document collection allows for multiple tests of a given system and topic (replicates). Bootstrap ANOVA can use these replicates to extract system-topic interactions (something not possible without replicates), yielding a more precise value for the system effect and a narrower confidence interval around that value. Experiments using multiple TREC collections demonstrate that removing the topic-system interactions substantially reduces the confidence intervals around the system effect and increases the number of significant pairwise differences found. Further, the method is robust against small changes in the number of partitions used, against variability in the documents that constitute the partitions, and against the choice of measure used to quantify system effectiveness. PMID:29905334

  17. Quantification of variability and uncertainty for air toxic emission inventories with censored emission factor data.

    PubMed

    Frey, H Christopher; Zhao, Yuchao

    2004-11-15

    Probabilistic emission inventories were developed for urban air toxic emissions of benzene, formaldehyde, chromium, and arsenic for the example of Houston. Variability and uncertainty in emission factors were quantified for 71-97% of total emissions, depending upon the pollutant and data availability. Parametric distributions for interunit variability were fit using maximum likelihood estimation (MLE), and uncertainty in mean emission factors was estimated using parametric bootstrap simulation. For data sets containing one or more nondetected values, empirical bootstrap simulation was used to randomly sample detection limits for nondetected values and observations for sample values, and parametric distributions for variability were fit using MLE estimators for censored data. The goodness-of-fit for censored data was evaluated by comparison of cumulative distributions of bootstrap confidence intervals and empirical data. The emission inventory 95% uncertainty ranges vary from as small as -25% to +42% for chromium to as large as -75% to +224% for arsenic with correlated surrogates. Uncertainty was dominated by only a few source categories. Recommendations are made for future improvements to the analysis.
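
A sketch of the parametric bootstrap step for uncertainty in a mean emission factor; the log-normal data and sample size here are hypothetical, and the censored-data handling is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical emission-factor measurements (synthetic, not the study's data)
data = rng.lognormal(mean=0.0, sigma=1.0, size=40)

# Fit the interunit variability distribution by MLE (log-scale mean and sd)
mu_hat, sigma_hat = np.log(data).mean(), np.log(data).std()

# Parametric bootstrap: simulate from the fitted distribution, refit, and
# record the fitted mean each time to characterize uncertainty in the mean
B = 2000
means = np.empty(B)
for b in range(B):
    sim = rng.lognormal(mu_hat, sigma_hat, size=data.size)
    m, s = np.log(sim).mean(), np.log(sim).std()
    means[b] = np.exp(m + s ** 2 / 2)  # mean of a fitted log-normal

lo, hi = np.percentile(means, [2.5, 97.5])  # 95% uncertainty range for the mean
```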

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaffer, Richard, E-mail: rickyshaffer@yahoo.co.u; Department of Clinical Oncology, Imperial College London National Health Service Trust, London; Pickles, Tom

    Purpose: Prior studies have derived low values of the alpha-beta ratio (α/β) for prostate cancer of approximately 1-2 Gy. These studies used poorly matched groups, differing definitions of biochemical failure, and insufficient follow-up. Methods and Materials: National Comprehensive Cancer Network low- or low-intermediate risk prostate cancer patients, treated with external beam radiotherapy or permanent prostate brachytherapy, were matched for prostate-specific antigen, Gleason score, T-stage, percentage of positive cores, androgen deprivation therapy, and era, yielding 118 patient pairs. The Phoenix definition of biochemical failure was used. The best-fitting value for α/β was found for up to 90-month follow-up using maximum likelihood analysis, and the 95% confidence interval using the profile likelihood method. Linear quadratic formalism was applied with the radiobiological parameters of relative biological effectiveness = 1.0, potential doubling time = 45 days, and repair half-time = 1 hour. Bootstrap analysis was performed to estimate uncertainties in outcomes, and hence in α/β. Sensitivity analysis was performed by varying the values of the radiobiological parameters to extreme values. Results: The value of α/β best fitting the outcomes data was >30 Gy, with a lower 95% confidence limit of 5.2 Gy. This was confirmed on bootstrap analysis. Varying parameters to extreme values still yielded a best-fit α/β of >30 Gy, although the lower 95% confidence interval limit was reduced to 0.6 Gy. Conclusions: Using carefully matched groups, long follow-up, the Phoenix definition of biochemical failure, and well-established statistical methods, the best estimate of α/β for low and low-tier intermediate-risk prostate cancer is likely to be higher than that of normal tissues, although a low value cannot be excluded.

  19. Variable impact on mortality of AIDS-defining events diagnosed during combination antiretroviral therapy: not all AIDS-defining conditions are created equal.

    PubMed

    Mocroft, Amanda; Sterne, Jonathan A C; Egger, Matthias; May, Margaret; Grabar, Sophie; Furrer, Hansjakob; Sabin, Caroline; Fatkenheuer, Gerd; Justice, Amy; Reiss, Peter; d'Arminio Monforte, Antonella; Gill, John; Hogg, Robert; Bonnet, Fabrice; Kitahata, Mari; Staszewski, Schlomo; Casabona, Jordi; Harris, Ross; Saag, Michael

    2009-04-15

    The extent to which mortality differs following individual acquired immunodeficiency syndrome (AIDS)-defining events (ADEs) has not been assessed among patients initiating combination antiretroviral therapy. We analyzed data from 31,620 patients with no prior ADEs who started combination antiretroviral therapy. Cox proportional hazards models were used to estimate mortality hazard ratios for each ADE that occurred in >50 patients, after stratification by cohort and adjustment for sex, HIV transmission group, number of antiretroviral drugs initiated, regimen, age, date of starting combination antiretroviral therapy, and CD4+ cell count and HIV RNA load at initiation of combination antiretroviral therapy. ADEs that occurred in <50 patients were grouped together to form a "rare ADEs" category. During a median follow-up period of 43 months (interquartile range, 19-70 months), 2880 ADEs were diagnosed in 2262 patients; 1146 patients died. The most common ADEs were esophageal candidiasis (in 360 patients), Pneumocystis jiroveci pneumonia (320 patients), and Kaposi sarcoma (308 patients). The greatest mortality hazard ratio was associated with non-Hodgkin's lymphoma (hazard ratio, 17.59; 95% confidence interval, 13.84-22.35) and progressive multifocal leukoencephalopathy (hazard ratio, 10.0; 95% confidence interval, 6.70-14.92). Three groups of ADEs were identified on the basis of the ranked hazard ratios with bootstrapped confidence intervals: severe (non-Hodgkin's lymphoma and progressive multifocal leukoencephalopathy [hazard ratio, 7.26; 95% confidence interval, 5.55-9.48]), moderate (cryptococcosis, cerebral toxoplasmosis, AIDS dementia complex, disseminated Mycobacterium avium complex, and rare ADEs [hazard ratio, 2.35; 95% confidence interval, 1.76-3.13]), and mild (all other ADEs [hazard ratio, 1.47; 95% confidence interval, 1.08-2.00]). In the combination antiretroviral therapy era, mortality rates subsequent to an ADE depend on the specific diagnosis. 
The proposed classification of ADEs may be useful in clinical end point trials, prognostic studies, and patient management.

  20. Assessing uncertainties in superficial water provision by different bootstrap-based techniques

    NASA Astrophysics Data System (ADS)

    Rodrigues, Dulce B. B.; Gupta, Hoshin V.; Mendiondo, Eduardo Mario

    2014-05-01

    An assessment of water security can incorporate several water-related concepts, characterizing the interactions between societal needs, ecosystem functioning, and hydro-climatic conditions. The superficial freshwater provision level depends on the methods chosen for 'Environmental Flow Requirement' estimations, which integrate the sources of uncertainty in the understanding of how water-related threats to aquatic ecosystem security arise. Here, we develop an uncertainty assessment of superficial freshwater provision based on different bootstrap techniques (non-parametric resampling with replacement). To illustrate this approach, we use an agricultural basin (291 km2) within the Cantareira water supply system in Brazil, monitored by one daily streamflow gage (24-year period). The original streamflow time series has been randomly resampled a number of times at different sample sizes (N = 500; ...; 1000), then used with the conventional bootstrap approach and variations of this method, such as the 'nearest neighbor bootstrap' and the 'moving blocks bootstrap'. We have analyzed the impact of the sampling uncertainty on five Environmental Flow Requirement methods, based on: flow duration curves or probability of exceedance (Q90%, Q75% and Q50%); the 7-day 10-year low-flow statistic (Q7,10); and the presumptive standard (80% of the natural monthly mean flow). The bootstrap technique has also been used to compare those 'Environmental Flow Requirement' (EFR) methods among themselves, considering the difference between the bootstrap estimates and the "true" EFR characteristic, which has been computed by averaging the EFR values of the five methods and using the entire streamflow record at the monitoring station. This study evaluates the bootstrapping strategies and the representativeness of streamflow series for EFR estimates and their confidence intervals, in addition to an overview of the performance differences between the EFR methods. 
The uncertainties arising from the EFR assessment will be propagated through water security indicators referring to water scarcity and vulnerability, seeking to provide meaningful support to end-users and water managers facing the incorporation of uncertainties in the decision-making process.
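
A minimal sketch of the moving blocks bootstrap applied to one low-flow statistic; the synthetic streamflow series, block length, and choice of Q90 are illustrative assumptions, not the Cantareira data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, autocorrelated, strictly positive "daily streamflow" series
n = 2000
noise = rng.normal(size=n)
flow = np.exp(0.9 * np.convolve(noise, np.ones(10) / 10, mode="same") + 2.0)

def moving_blocks_resample(x, block_len, rng):
    """Concatenate randomly chosen overlapping blocks, preserving short-range
    serial correlation, then trim to the original length."""
    n_blocks = int(np.ceil(len(x) / block_len))
    starts = rng.integers(0, len(x) - block_len + 1, size=n_blocks)
    return np.concatenate([x[s:s + block_len] for s in starts])[:len(x)]

# Bootstrap CI for Q90 (the flow exceeded 90% of the time = 10th percentile)
q90_boot = np.array([np.percentile(moving_blocks_resample(flow, 30, rng), 10)
                     for _ in range(1000)])
lo, hi = np.percentile(q90_boot, [2.5, 97.5])
```

The block length trades off how much serial correlation is preserved against how much the resamples can vary; repeating the loop for each EFR statistic gives per-method sampling distributions that can then be compared.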

  1. Cost-effectiveness of surgical decompression for space-occupying hemispheric infarction.

    PubMed

    Hofmeijer, Jeannette; van der Worp, H Bart; Kappelle, L Jaap; Eshuis, Sara; Algra, Ale; Greving, Jacoba P

    2013-10-01

    Surgical decompression reduces mortality and increases the probability of a favorable functional outcome after space-occupying hemispheric infarction. Its cost-effectiveness is uncertain. We assessed clinical outcomes, costs, and cost-effectiveness for the first 3 years in patients who were randomized to surgical decompression or best medical treatment within 48 hours after symptom onset in the Hemicraniectomy After Middle Cerebral Artery Infarction With Life-Threatening Edema Trial (HAMLET). Data on medical consumption were derived from case record files, hospital charts, and general practitioners. We calculated costs per quality-adjusted life year (QALY). Uncertainty was assessed with bootstrapping. A Markov model was constructed to estimate costs and health outcomes after 3 years. Of 39 patients enrolled within 48 hours, 21 were randomized to surgical decompression. After 3 years, 5 surgical (24%) and 14 medical patients (78%) had died. In the first 3 years after enrollment, operated patients had more QALYs than medically treated patients (mean difference, 1.0 QALY [95% confidence interval, 0.6-1.4]), but at higher costs (mean difference, €127,000 [95% confidence interval, 73,100-181,000]), indicating incremental costs of €127,000 per QALY gained. Ninety-eight percent of incremental cost-effectiveness ratios replicated by bootstrapping were >€80,000 per QALY gained. Markov modeling suggested costs of ≈€60,000 per QALY gained for a patient's lifetime. Surgical decompression for space-occupying infarction results in an increase in QALYs, but at very high costs. http://www.controlled-trials.com. Unique identifier: ISRCTN94237756.
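
The bootstrap handling of the incremental cost-effectiveness ratio can be sketched as follows; the per-patient QALY and cost values below are simulated stand-ins, not the HAMLET data:

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical per-patient QALYs and costs (euros) for each arm (synthetic)
q_surg, c_surg = rng.normal(1.6, 0.9, 21), rng.normal(160_000, 60_000, 21)
q_med, c_med = rng.normal(0.6, 0.9, 18), rng.normal(33_000, 25_000, 18)

# Bootstrap patients within each arm and replicate the incremental
# cost-effectiveness ratio (ICER = incremental cost / incremental QALYs)
icers = []
for _ in range(2000):
    bs = rng.integers(0, q_surg.size, q_surg.size)  # surgical-arm resample
    bm = rng.integers(0, q_med.size, q_med.size)    # medical-arm resample
    dq = q_surg[bs].mean() - q_med[bm].mean()
    dc = c_surg[bs].mean() - c_med[bm].mean()
    icers.append(dc / dq)
icers = np.array(icers)

# Share of replicates above a willingness-to-pay threshold of 80,000 euros/QALY
share_above = np.mean(icers > 80_000)
```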

  2. Computationally efficient confidence intervals for cross-validated area under the ROC curve estimates.

    PubMed

    LeDell, Erin; Petersen, Maya; van der Laan, Mark

    In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC.

  3. Computationally efficient confidence intervals for cross-validated area under the ROC curve estimates

    PubMed Central

    LeDell, Erin; Petersen, Maya; van der Laan, Mark

    2015-01-01

    In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC. PMID:26279737

  4. Using the bootstrap to establish statistical significance for relative validity comparisons among patient-reported outcome measures

    PubMed Central

    2013-01-01

    Background Relative validity (RV), a ratio of ANOVA F-statistics, is often used to compare the validity of patient-reported outcome (PRO) measures. We used the bootstrap to establish the statistical significance of the RV and to identify key factors affecting its significance. Methods Based on responses from 453 chronic kidney disease (CKD) patients to 16 CKD-specific and generic PRO measures, RVs were computed to determine how well each measure discriminated across clinically-defined groups of patients compared to the most discriminating (reference) measure. Statistical significance of RV was quantified by the 95% bootstrap confidence interval. Simulations examined the effects of sample size, denominator F-statistic, correlation between comparator and reference measures, and number of bootstrap replicates. Results The statistical significance of the RV increased as the magnitude of denominator F-statistic increased or as the correlation between comparator and reference measures increased. A denominator F-statistic of 57 conveyed sufficient power (80%) to detect an RV of 0.6 for two measures correlated at r = 0.7. Larger denominator F-statistics or higher correlations provided greater power. Larger sample size with a fixed denominator F-statistic or more bootstrap replicates (beyond 500) had minimal impact. Conclusions The bootstrap is valuable for establishing the statistical significance of RV estimates. A reasonably large denominator F-statistic (F > 57) is required for adequate power when using the RV to compare the validity of measures with small or moderate correlations (r < 0.7). Substantially greater power can be achieved when comparing measures of a very high correlation (r > 0.9). PMID:23721463
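
The RV bootstrap described above can be sketched like this; the group structure, the two measures, and their correlation are simulated assumptions rather than the CKD data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: 3 clinically defined groups, two PRO measures per patient
groups = np.repeat([0, 1, 2], 100)
ref = groups * 1.0 + rng.normal(size=300)           # reference measure
comp = 0.7 * ref + rng.normal(scale=0.7, size=300)  # correlated comparator

def anova_f(y, g):
    """One-way ANOVA F statistic of y across the groups in g."""
    grand = y.mean()
    levels = np.unique(g)
    ss_b = sum(len(y[g == k]) * (y[g == k].mean() - grand) ** 2 for k in levels)
    ss_w = sum(((y[g == k] - y[g == k].mean()) ** 2).sum() for k in levels)
    return (ss_b / (len(levels) - 1)) / (ss_w / (len(y) - len(levels)))

rv_point = anova_f(comp, groups) / anova_f(ref, groups)  # relative validity

# Bootstrap patients (rows) to get a percentile CI for the RV ratio
idx = np.arange(len(groups))
rvs = []
for _ in range(1000):
    b = rng.choice(idx, size=idx.size, replace=True)
    rvs.append(anova_f(comp[b], groups[b]) / anova_f(ref[b], groups[b]))
lo, hi = np.percentile(rvs, [2.5, 97.5])  # RV differs from 1 if CI excludes 1
```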

  5. Robust functional regression model for marginal mean and subject-specific inferences.

    PubMed

    Cao, Chunzheng; Shi, Jian Qing; Lee, Youngjo

    2017-01-01

    We introduce flexible robust functional regression models, using various heavy-tailed processes, including a Student t-process. We propose efficient algorithms in estimating parameters for the marginal mean inferences and in predicting conditional means as well as interpolation and extrapolation for the subject-specific inferences. We develop bootstrap prediction intervals (PIs) for conditional mean curves. Numerical studies show that the proposed model provides a robust approach against data contamination or distribution misspecification, and the proposed PIs maintain the nominal confidence levels. A real data application is presented as an illustrative example.

  6. Comparison of mode estimation methods and application in molecular clock analysis

    NASA Technical Reports Server (NTRS)

    Hedges, S. Blair; Shah, Prachi

    2003-01-01

    BACKGROUND: Distributions of time estimates in molecular clock studies are sometimes skewed or contain outliers. In those cases, the mode is a better estimator of the overall time of divergence than the mean or median. However, different methods are available for estimating the mode. We compared these methods in simulations to determine their strengths and weaknesses and further assessed their performance when applied to real data sets from a molecular clock study. RESULTS: We found that the half-range mode and robust parametric mode methods have a lower bias than other mode methods under a diversity of conditions. However, the half-range mode suffers from a relatively high variance and the robust parametric mode is more susceptible to bias by outliers. We determined that bootstrapping reduces the variance of both mode estimators. Application of the different methods to real data sets yielded results that were concordant with the simulations. CONCLUSION: Because the half-range mode is a simple and fast method, and produced less bias overall in our simulations, we recommend the bootstrapped version of it as a general-purpose mode estimator and suggest a bootstrap method for obtaining the standard error and 95% confidence interval of the mode.
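
A bootstrapped half-range mode, with a standard error and 95% percentile interval, might be sketched as follows; the skewed "divergence time" sample is synthetic:

```python
import numpy as np

def half_range_mode(x):
    """Repeatedly keep the half of the current range that contains the most
    observations; the mode estimate is the mean of the final interval."""
    x = np.sort(np.asarray(x, dtype=float))
    while len(x) > 2:
        width = (x[-1] - x[0]) / 2.0
        if width == 0:
            break
        lo_half = x[x <= x[0] + width]
        hi_half = x[x >= x[-1] - width]
        x = lo_half if len(lo_half) >= len(hi_half) else hi_half
    return x.mean()

rng = np.random.default_rng(3)
# Hypothetical skewed divergence-time estimates (synthetic)
times = rng.lognormal(mean=4.0, sigma=0.3, size=500)

boot = np.array([half_range_mode(rng.choice(times, size=times.size, replace=True))
                 for _ in range(500)])
mode_hat, se = boot.mean(), boot.std(ddof=1)  # bootstrapped mode and its SE
ci = np.percentile(boot, [2.5, 97.5])         # 95% confidence interval
```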

  7. Hematologic and serum biochemical reference intervals for free-ranging common bottlenose dolphins (Tursiops truncatus) and variation in the distributions of clinicopathologic values related to geographic sampling site.

    PubMed

    Schwacke, Lori H; Hall, Ailsa J; Townsend, Forrest I; Wells, Randall S; Hansen, Larry J; Hohn, Aleta A; Bossart, Gregory D; Fair, Patricia A; Rowles, Teresa K

    2009-08-01

    To develop robust reference intervals for hematologic and serum biochemical variables by use of data derived from free-ranging bottlenose dolphins (Tursiops truncatus) and examine potential variation in distributions of clinicopathologic values related to sampling sites' geographic locations. 255 free-ranging bottlenose dolphins. Data from samples collected during multiple bottlenose dolphin capture-release projects conducted at 4 southeastern US coastal locations in 2000 through 2006 were combined to determine reference intervals for 52 clinicopathologic variables. A nonparametric bootstrap approach was applied to estimate 95th percentiles and associated 90% confidence intervals; the need for partitioning by length and sex classes was determined by testing for differences in estimated thresholds with a bootstrap method. When appropriate, quantile regression was used to determine continuous functions for 95th percentiles dependent on length. The proportion of out-of-range samples for all clinicopathologic measurements was examined for each geographic site, and multivariate ANOVA was applied to further explore variation in leukocyte subgroups. A need for partitioning by length and sex classes was indicated for many clinicopathologic variables. For each geographic site, few significant deviations from expected number of out-of-range samples were detected. Although mean leukocyte counts did not vary among sites, differences in the mean counts for leukocyte subgroups were identified. Although differences in the centrality of distributions for some variables were detected, the 95th percentiles estimated from the pooled data were robust and applicable across geographic sites. The derived reference intervals provide critical information for conducting bottlenose dolphin population health studies.
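
The nonparametric bootstrap of a 95th-percentile reference limit can be sketched as follows; the values are simulated, not the dolphin measurements:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical clinicopathologic values for 255 animals (synthetic)
counts = rng.gamma(shape=9.0, scale=1.0, size=255)

upper_limit = np.percentile(counts, 95)  # upper reference limit (point estimate)

# Nonparametric bootstrap of the 95th percentile, then a 90% CI for the limit
B = 2000
p95 = np.array([np.percentile(rng.choice(counts, counts.size, replace=True), 95)
                for _ in range(B)])
ci90 = np.percentile(p95, [5, 95])
```

Comparing such thresholds between candidate partitions (e.g., length or sex classes) via their bootstrap distributions is one way to decide whether separate reference intervals are warranted.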

  8. Technical and scale efficiency in public and private Irish nursing homes - a bootstrap DEA approach.

    PubMed

    Ni Luasa, Shiovan; Dineen, Declan; Zieba, Marta

    2016-10-27

    This article provides methodological and empirical insights into the estimation of technical efficiency in the nursing home sector. Focusing on long-stay care and using primary data, we examine technical and scale efficiency in 39 public and 73 private Irish nursing homes by applying an input-oriented data envelopment analysis (DEA). We employ robust bootstrap methods to validate our nonparametric DEA scores and to integrate the effects of potential determinants in estimating the efficiencies. Both the homogeneous and two-stage double bootstrap procedures are used to obtain confidence intervals for the bias-corrected DEA scores. Importantly, the application of the double bootstrap approach affords true DEA technical efficiency scores after adjusting for the effects of ownership, size, case-mix, and other determinants such as location and quality. Based on our DEA results for variable returns to scale technology, the average technical efficiency score is 62 %, and the mean scale efficiency is 88 %, with nearly all units operating on the increasing returns to scale part of the production frontier. Moreover, based on the double bootstrap results, Irish nursing homes are less technically efficient, and more scale efficient, than the conventional DEA estimates suggest. Regarding the efficiency determinants, in terms of ownership, we find that private facilities are less efficient than the public units. Furthermore, the size of the nursing home has a positive effect, and this reinforces our finding that Irish homes produce at increasing returns to scale. Notably, we also find that a tendency towards quality improvements can lead to poorer technical efficiency performance.

  9. Modifications to the Patient Rule-Induction Method that utilize non-additive combinations of genetic and environmental effects to define partitions that predict ischemic heart disease.

    PubMed

    Dyson, Greg; Frikke-Schmidt, Ruth; Nordestgaard, Børge G; Tybjaerg-Hansen, Anne; Sing, Charles F

    2009-05-01

    This article extends the Patient Rule-Induction Method (PRIM) for modeling cumulative incidence of disease developed by Dyson et al. (Genet Epidemiol 31:515-527) to include the simultaneous consideration of non-additive combinations of predictor variables, a significance test of each combination, an adjustment for multiple testing and a confidence interval for the estimate of the cumulative incidence of disease in each partition. We employ the partitioning algorithm component of the Combinatorial Partitioning Method to construct combinations of predictors, permutation testing to assess the significance of each combination, theoretical arguments for incorporating a multiple testing adjustment and bootstrap resampling to produce the confidence intervals. An illustration of this revised PRIM utilizing a sample of 2,258 European male participants from the Copenhagen City Heart Study is presented that assesses the utility of genetic variants in predicting the presence of ischemic heart disease beyond the established risk factors.

  10. Modifications to the Patient Rule-Induction Method that utilize non-additive combinations of genetic and environmental effects to define partitions that predict ischemic heart disease

    PubMed Central

    Dyson, Greg; Frikke-Schmidt, Ruth; Nordestgaard, Børge G.; Tybjærg-Hansen, Anne; Sing, Charles F.

    2009-01-01

    This paper extends the Patient Rule-Induction Method (PRIM) for modeling cumulative incidence of disease developed by Dyson et al. (2007) to include the simultaneous consideration of non-additive combinations of predictor variables, a significance test of each combination, an adjustment for multiple testing and a confidence interval for the estimate of the cumulative incidence of disease in each partition. We employ the partitioning algorithm component of the Combinatorial Partitioning Method (CPM) to construct combinations of predictors, permutation testing to assess the significance of each combination, theoretical arguments for incorporating a multiple testing adjustment and bootstrap resampling to produce the confidence intervals. An illustration of this revised PRIM utilizing a sample of 2258 European male participants from the Copenhagen City Heart Study is presented that assesses the utility of genetic variants in predicting the presence of ischemic heart disease beyond the established risk factors. PMID:19025787

  11. Bootstrap Signal-to-Noise Confidence Intervals: An Objective Method for Subject Exclusion and Quality Control in ERP Studies

    PubMed Central

    Parks, Nathan A.; Gannon, Matthew A.; Long, Stephanie M.; Young, Madeleine E.

    2016-01-01

    Analysis of event-related potential (ERP) data includes several steps to ensure that ERPs meet an appropriate level of signal quality. One such step, subject exclusion, rejects subject data if ERP waveforms fail to meet an appropriate level of signal quality. Subject exclusion is an important quality control step in the ERP analysis pipeline as it ensures that statistical inference is based only upon those subjects exhibiting clear evoked brain responses. This critical quality control step is most often performed simply through visual inspection of subject-level ERPs by investigators. Such an approach is qualitative, subjective, and susceptible to investigator bias, as there are no standards as to what constitutes an ERP of sufficient signal quality. Here, we describe a standardized and objective method for quantifying waveform quality in individual subjects and establishing criteria for subject exclusion. The approach uses bootstrap resampling of ERP waveforms (from a pool of all available trials) to compute a signal-to-noise ratio confidence interval (SNR-CI) for individual subject waveforms. The lower bound of this SNR-CI (SNRLB) yields an effective and objective measure of signal quality as it ensures that ERP waveforms statistically exceed a desired signal-to-noise criterion. SNRLB provides a quantifiable metric of individual subject ERP quality and eliminates the need for subjective evaluation of waveform quality by the investigator. We detail the SNR-CI methodology, establish the efficacy of employing this approach with Monte Carlo simulations, and demonstrate its utility in practice when applied to ERP datasets. PMID:26903849
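
The SNR-CI computation can be sketched as follows; the trial matrix, the baseline and evoked windows, and the SNR > 1 exclusion criterion are illustrative assumptions, not the authors' exact definitions:

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical single-subject ERP data: 100 trials x 200 time points, an
# evoked deflection around 300 ms plus trial-to-trial noise (synthetic)
t = np.linspace(0.0, 0.5, 200)
signal = 2.0 * np.exp(-((t - 0.3) ** 2) / 0.002)
trials = signal + rng.normal(scale=4.0, size=(100, 200))

def snr(erp, baseline=slice(0, 40), window=slice(100, 140)):
    """Ratio of evoked-window RMS to baseline RMS of an averaged waveform."""
    return np.sqrt(np.mean(erp[window] ** 2)) / np.sqrt(np.mean(erp[baseline] ** 2))

# Bootstrap trials from the pool, re-average, and compute SNR per resample
boot_snr = np.array([snr(trials[rng.integers(0, 100, size=100)].mean(axis=0))
                     for _ in range(1000)])
snr_lb = np.percentile(boot_snr, 2.5)  # lower bound of the SNR confidence interval

keep_subject = snr_lb > 1.0  # exclude the subject if the bound fails the criterion
```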

  12. Efficiency determinants and capacity issues in Brazilian for-profit hospitals.

    PubMed

    Araújo, Cláudia; Barros, Carlos P; Wanke, Peter

    2014-06-01

    This paper reports on the use of different approaches for assessing efficiency of a sample of major Brazilian for-profit hospitals. Starting out with the bootstrapping technique, several DEA estimates were generated, allowing the use of confidence intervals and bias correction in central estimates to test for significant differences in efficiency levels and input-decreasing/output-increasing potentials. The findings indicate that efficiency is mixed in Brazilian for-profit hospitals. Opportunities for accommodating future demand appear to be scarce and strongly dependent on particular conditions related to the accreditation and specialization of a given hospital.

  13. Maturity associated variance in physical activity and health-related quality of life in adolescent females: a mediated effects model.

    PubMed

    Smart, Joan E Hunter; Cumming, Sean P; Sherar, Lauren B; Standage, Martyn; Neville, Helen; Malina, Robert M

    2012-01-01

    This study tested a mediated effects model of psychological and behavioral adaptation to puberty within the context of physical activity (PA). Biological maturity status, physical self-concept, PA, and health-related quality of life (HRQoL) were assessed in 222 female British year 7 to 9 pupils (mean age = 12.7 years, SD = .8). Structural equation modeling using maximum likelihood estimation and bootstrapping procedures supported the hypothesized model. Maturation status was inversely related to perceptions of sport competence, body attractiveness, and physical condition; and indirectly and inversely related to physical self-worth, PA, and HRQoL. Examination of the bootstrap-generated bias-corrected confidence intervals for the direct and indirect paths suggested that physical self-concept partially mediated the relations between maturity status and PA, and between maturity status and HRQoL. Evidence supports the contention that perceptions of the physical self partially mediate the relations among maturity, PA, and HRQoL in adolescent females.
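
A percentile-bootstrap test of an indirect (mediated) effect can be sketched as follows; the variables, effect sizes, and sample are hypothetical, and bias correction is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(13)

# Synthetic standardized scores: maturity (x), physical self-concept (m), PA (y)
n = 222
x = rng.normal(size=n)
m = -0.4 * x + rng.normal(scale=0.9, size=n)           # maturity -> self-concept
y = 0.5 * m + 0.1 * x + rng.normal(scale=0.8, size=n)  # self-concept -> PA

def indirect_effect(x, m, y):
    """Product-of-coefficients estimate a*b from two least-squares fits."""
    a = np.polyfit(x, m, 1)[0]  # slope of m on x
    b = np.linalg.lstsq(np.column_stack([m, x, np.ones(len(x))]), y,
                        rcond=None)[0][0]  # slope of y on m, adjusting for x
    return a * b

point = indirect_effect(x, m, y)

# Percentile bootstrap: resample cases (keeping x, m, y rows together)
idx = np.arange(n)
boot = []
for _ in range(1000):
    b = rng.choice(idx, size=n, replace=True)
    boot.append(indirect_effect(x[b], m[b], y[b]))
lo, hi = np.percentile(boot, [2.5, 97.5])

mediated = not (lo <= 0.0 <= hi)  # CI excluding zero suggests mediation
```

Drawing one set of indices per replicate, rather than resampling each variable separately, is what keeps each case's scores together and makes the resample valid.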

  14. Estimation and confidence intervals for empirical mixing distributions

    USGS Publications Warehouse

    Link, W.A.; Sauer, J.R.

    1995-01-01

    Questions regarding collections of parameter estimates can frequently be expressed in terms of an empirical mixing distribution (EMD). This report discusses empirical Bayes estimation of an EMD, with emphasis on the construction of interval estimates. Estimation of the EMD is accomplished by substitution of estimates of prior parameters in the posterior mean of the EMD. This procedure is examined in a parametric model (the normal-normal mixture) and in a semi-parametric model. In both cases, the empirical Bayes bootstrap of Laird and Louis (1987, Journal of the American Statistical Association 82, 739-757) is used to assess the variability of the estimated EMD arising from the estimation of prior parameters. The proposed methods are applied to a meta-analysis of population trend estimates for groups of birds.

  15. Exploring the Replicability of a Study's Results: Bootstrap Statistics for the Multivariate Case.

    ERIC Educational Resources Information Center

    Thompson, Bruce

    1995-01-01

    Use of the bootstrap method in a canonical correlation analysis to evaluate the replicability of a study's results is illustrated. More confidence may be vested in research results that replicate. (SLD)

  16. Assessment of predictive performance in incomplete data by combining internal validation and multiple imputation.

    PubMed

    Wahl, Simone; Boulesteix, Anne-Laure; Zierer, Astrid; Thorand, Barbara; van de Wiel, Mark A

    2016-10-26

    Missing values are a frequent issue in human studies. In many situations, multiple imputation (MI) is an appropriate missing data handling strategy, whereby missing values are imputed multiple times, the analysis is performed in every imputed data set, and the obtained estimates are pooled. If the aim is to estimate (added) predictive performance measures, such as (change in) the area under the receiver-operating characteristic curve (AUC), internal validation strategies become desirable in order to correct for optimism. It is not fully understood how internal validation should be combined with multiple imputation. In a comprehensive simulation study and in a real data set based on blood markers as predictors for mortality, we compare three combination strategies: Val-MI, internal validation followed by MI on the training and test parts separately; MI-Val, MI on the full data set followed by internal validation; and MI(-y)-Val, MI on the full data set omitting the outcome, followed by internal validation. Different validation strategies, including bootstrap and cross-validation, different (added) performance measures, and various data characteristics are considered, and the strategies are evaluated with regard to bias and mean squared error of the obtained performance estimates. In addition, we elaborate on the number of resamples and imputations to be used, and adapt a strategy for confidence interval construction to incomplete data. Internal validation is essential in order to avoid optimism, with the bootstrap 0.632+ estimate representing a reliable method to correct for optimism. While estimates obtained by MI-Val are optimistically biased, those obtained by MI(-y)-Val tend to be pessimistic in the presence of a true underlying effect. Val-MI provides largely unbiased estimates, with a slight pessimistic bias with increasing true effect size, number of covariates and decreasing sample size.
In Val-MI, accuracy of the estimate is more strongly improved by increasing the number of bootstrap draws rather than the number of imputations. With a simple integrated approach, valid confidence intervals for performance estimates can be obtained. When prognostic models are developed on incomplete data, Val-MI represents a valid strategy to obtain estimates of predictive performance measures.
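    The optimism-correction step is the part most easily misplaced around imputation, so a minimal sketch may help. The example below (Python, synthetic complete data, so no MI step) uses Harrell's simple optimism bootstrap rather than the 0.632+ estimator the abstract recommends, with a least-squares linear score standing in for a clinical prediction model and a rank-based AUC:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic complete data: two markers weakly predicting a binary outcome
n = 300
X = rng.normal(size=(n, 2))
p = 1 / (1 + np.exp(-(0.8 * X[:, 0] + 0.4 * X[:, 1])))
y = (rng.random(n) < p).astype(float)

def fit(X, y):
    # Least-squares linear score (a stand-in for a logistic model)
    A = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(A, y, rcond=None)[0]

def score(X, beta):
    return np.column_stack([np.ones(len(X)), X]) @ beta

def auc(s, y):
    # Rank-based (Mann-Whitney) AUC
    order = np.argsort(s)
    ranks = np.empty(len(s))
    ranks[order] = np.arange(1, len(s) + 1)
    n1 = y.sum()
    n0 = len(y) - n1
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)

apparent = auc(score(X, fit(X, y)), y)

# Optimism bootstrap: train on the resample, compare its apparent AUC
# with the same model's AUC on the original data
optimism = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    beta = fit(X[idx], y[idx])
    optimism.append(auc(score(X[idx], beta), y[idx]) - auc(score(X, beta), y))
corrected = apparent - np.mean(optimism)
print(round(apparent, 3), round(corrected, 3))
```

    Under Val-MI, the resampling loop above would wrap the imputation as well: each bootstrap training set would be imputed separately from the held-out original data.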

  17. Experimental optimization of the number of blocks by means of algorithms parameterized by confidence interval in popcorn breeding.

    PubMed

    Paula, T O M; Marinho, C D; Amaral Júnior, A T; Peternelli, L A; Gonçalves, L S A

    2013-06-27

    The objective of this study was to determine the optimal number of repetitions to be used in competition trials of popcorn traits related to production and quality, including grain yield and expansion capacity. The experiments were conducted in 3 environments representative of the north and northwest regions of the State of Rio de Janeiro with 10 Brazilian genotypes of popcorn, consisting of 4 commercial hybrids (IAC 112, IAC 125, Zélia, and Jade), 4 improved varieties (BRS Ângela, UFVM-2 Barão de Viçosa, Beija-flor, and Viçosa) and 2 experimental populations (UNB2U-C3 and UNB2U-C4). The experimental design utilized was a randomized complete block design with 7 repetitions. The bootstrap method was employed to obtain samples of all of the possible combinations within the 7 blocks. Subsequently, the confidence intervals of the parameters of interest were calculated for all simulated data sets. The optimal number of repetitions for each trait was taken to be the number at which all of the estimates of the parameters in question fell within the confidence interval. The estimates of the number of repetitions varied according to the parameter estimated, variable evaluated, and environment cultivated, ranging from 2 to 7. Only the expansion capacity trait in the Colégio Agrícola environment (for residual variance and coefficient of variation) and the number of ears per plot in the Itaocara environment (for coefficient of variation) needed 7 repetitions to fall within the confidence interval. Thus, for the 3 studies conducted, we can conclude that 6 repetitions are optimal for obtaining high experimental precision.

  18. Estimating statistical uncertainty of Monte Carlo efficiency-gain in the context of a correlated sampling Monte Carlo code for brachytherapy treatment planning with non-normal dose distribution.

    PubMed

    Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr

    2012-01-01

    Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap based algorithm was used to simulate the probability distribution of the efficiency gain estimates and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. Uncertainty estimation of Intensity-Duration-Frequency relationships: A regional analysis

    NASA Astrophysics Data System (ADS)

    Mélèse, Victor; Blanchet, Juliette; Molinié, Gilles

    2018-03-01

    We propose in this article a regional study of uncertainties in IDF curves derived from point-rainfall maxima. We develop two generalized extreme value models based on the simple scaling assumption, first in the frequentist framework and second in the Bayesian framework. Within the frequentist framework, uncertainties are obtained i) from the Gaussian density stemming from the asymptotic normality theorem of the maximum likelihood and ii) with a bootstrap procedure. Within the Bayesian framework, uncertainties are obtained from the posterior densities. We confront these two frameworks on the same database covering a large region of 100,000 km2 in southern France with contrasting rainfall regimes, in order to be able to draw conclusions that are not specific to the data. The two frameworks are applied to 405 hourly stations with data back to the 1980s, accumulated in the range 3 h-120 h. We show that i) the Bayesian framework is more robust than the frequentist one to the starting point of the estimation procedure, ii) the posterior and the bootstrap densities are able to better adjust uncertainty estimation to the data than the Gaussian density, and iii) the bootstrap density gives unreasonable confidence intervals, in particular for return levels associated with large return periods. Therefore our recommendation goes towards the use of the Bayesian framework to compute uncertainty.
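    The frequentist bootstrap route for return-level uncertainty can be sketched as follows (Python with scipy; a plain GEV fit to synthetic annual maxima — the station network, duration range, and simple-scaling structure of the study are omitted, and all numbers are illustrative):

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(2)

# Synthetic annual rainfall maxima (mm): ~40 "years" drawn from a GEV
maxima = genextreme.rvs(c=-0.1, loc=50, scale=15, size=40, random_state=rng)

def return_level(sample, T=100):
    # Fit a GEV by maximum likelihood and read off the T-year quantile
    c, loc, scale = genextreme.fit(sample)
    return genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)

est = return_level(maxima)

# Nonparametric bootstrap: resample years, refit, collect return levels
boot = [return_level(rng.choice(maxima, size=len(maxima), replace=True))
        for _ in range(200)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(est, 1), round(lo, 1), round(hi, 1))
```

    The wide and often skewed spread of the bootstrap return levels for large return periods is exactly the behavior the abstract flags when comparing the bootstrap density with posterior and Gaussian densities.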

  20. A Pilot Investigation of the Relationship between Climate Variability and Milk Compounds under the Bootstrap Technique

    PubMed Central

    Marami Milani, Mohammad Reza; Hense, Andreas; Rahmani, Elham; Ploeger, Angelika

    2015-01-01

    This study analyzes the linear relationship between climate variables and milk components in Iran, applying bootstrapping to assess the uncertainty. The climate parameters, Temperature Humidity Index (THI) and Equivalent Temperature Index (ETI), are computed from the NASA-Modern Era Retrospective-Analysis for Research and Applications (NASA-MERRA) reanalysis (2002–2010). Milk data for fat, protein (measured on a fresh matter basis), and milk yield are taken from 936,227 milk records for the same period, using cows fed on natural pasture from April to September. Confidence intervals for the regression model are calculated using the bootstrap technique. This method is applied to the original time series, generating statistically equivalent surrogate samples. As a result, despite the short time series and the related uncertainties, an interesting behavior of the relationships between milk compounds and the climate parameters is visible. During spring, only a weak dependency of milk yield on climate variations is apparent, while fat and protein concentrations show reasonable correlations. In summer, milk yield shows a similar level of relationship with ETI, but not with temperature and THI. We suggest this methodology for studies of the impacts of climate change on agriculture, environment, and food when only short-term data are available. PMID:28231215

  1. Variance estimation when using inverse probability of treatment weighting (IPTW) with survival analysis.

    PubMed

    Austin, Peter C

    2016-12-30

    Propensity score methods are used to reduce the effects of observed confounding when using observational data to estimate the effects of treatments or exposures. A popular method of using the propensity score is inverse probability of treatment weighting (IPTW). When using this method, a weight is calculated for each subject that is equal to the inverse of the probability of receiving the treatment that was actually received. These weights are then incorporated into the analyses to minimize the effects of observed confounding. Previous research has found that these methods result in unbiased estimation when estimating the effect of treatment on survival outcomes. However, conventional methods of variance estimation were shown to result in biased estimates of standard error. In this study, we conducted an extensive set of Monte Carlo simulations to examine different methods of variance estimation when using a weighted Cox proportional hazards model to estimate the effect of treatment. We considered three variance estimation methods: (i) a naïve model-based variance estimator; (ii) a robust sandwich-type variance estimator; and (iii) a bootstrap variance estimator. We considered estimation of both the average treatment effect and the average treatment effect in the treated. We found that the use of a bootstrap estimator resulted in approximately correct estimates of standard errors and confidence intervals with the correct coverage rates. The other estimators resulted in biased estimates of standard errors and confidence intervals with incorrect coverage rates. Our simulations were informed by a case study examining the effect of statin prescribing on mortality. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

  2. Methods for estimating confidence intervals in interrupted time series analyses of health interventions.

    PubMed

    Zhang, Fang; Wagner, Anita K; Soumerai, Stephen B; Ross-Degnan, Dennis

    2009-02-01

    Interrupted time series (ITS) is a strong quasi-experimental research design, which is increasingly applied to estimate the effects of health services and policy interventions. We describe and illustrate two methods for estimating confidence intervals (CIs) around absolute and relative changes in outcomes calculated from segmented regression parameter estimates. We used the multivariate delta method (MDM) and a bootstrapping method (BM) to construct CIs around relative changes in level and trend, and around absolute changes in outcome, based on segmented linear regression analyses of time series data corrected for autocorrelated errors. Using previously published time series data, we estimated CIs around the effect of prescription alerts for interacting medications with warfarin on the rate of prescriptions per 10,000 warfarin users per month. Both the MDM and the BM produced similar results. The BM is preferred for calculating CIs of relative changes in outcomes of time series studies, because it does not require large sample sizes when parameter estimates are obtained correctly from the model. Caution is needed when sample size is small.

  3. A method to deconvolve stellar rotational velocities II. The probability distribution function via Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Christen, Alejandra; Escarate, Pedro; Curé, Michel; Rial, Diego F.; Cassetti, Julia

    2016-10-01

    Aims: Knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. Because we measure the projected rotational speed v sin i, we need to solve an ill-posed problem given by a Fredholm integral of the first kind to recover the "true" rotational velocity distribution. Methods: After discretization of the Fredholm integral we apply the Tikhonov regularization method to obtain directly the probability distribution function for stellar rotational velocities. We propose a simple and straightforward procedure to determine the Tikhonov parameter. We applied Monte Carlo simulations to prove that the Tikhonov method is a consistent estimator and asymptotically unbiased. Results: This method is applied to a sample of cluster stars. We obtain confidence intervals using a bootstrap method. Our results are in close agreement with those obtained using the Lucy method for recovering the probability density distribution of rotational velocities. Furthermore, the Lucy estimation lies inside our confidence interval. Conclusions: Tikhonov regularization is a highly robust method that deconvolves the rotational velocity probability density function from a sample of v sin i data directly without the need for any convergence criteria.

  4. Peaks Over Threshold (POT): A methodology for automatic threshold estimation using goodness of fit p-value

    NASA Astrophysics Data System (ADS)

    Solari, Sebastián.; Egüen, Marta; Polo, María. José; Losada, Miguel A.

    2017-04-01

    Threshold estimation in the Peaks Over Threshold (POT) method and the impact of the estimation method on the calculation of high return period quantiles and their uncertainty (or confidence intervals) are issues that are still unresolved. In the past, methods based on goodness of fit tests and EDF-statistics have yielded satisfactory results, but their use has not yet been systematized. This paper proposes a methodology for automatic threshold estimation, based on the Anderson-Darling EDF-statistic and goodness of fit test. When combined with bootstrapping techniques, this methodology can be used to quantify both the uncertainty of threshold estimation and its impact on the uncertainty of high return period quantiles. This methodology was applied to several simulated series and to four precipitation/river flow data series. The results obtained confirmed its robustness. For the measured series, the estimated thresholds corresponded to those obtained by nonautomatic methods. Moreover, even though the uncertainty of the threshold estimation was high, this did not have a significant effect on the width of the confidence intervals of high return period quantiles.

  5. Multipollutant measurement error in air pollution epidemiology studies arising from predicting exposures with penalized regression splines

    PubMed Central

    Bergen, Silas; Sheppard, Lianne; Kaufman, Joel D.; Szpiro, Adam A.

    2016-01-01

    Air pollution epidemiology studies are trending towards a multi-pollutant approach. In these studies, exposures at subject locations are unobserved and must be predicted using observed exposures at misaligned monitoring locations. This induces measurement error, which can bias the estimated health effects and affect standard error estimates. We characterize this measurement error and develop an analytic bias correction when using penalized regression splines to predict exposure. Our simulations show bias from multi-pollutant measurement error can be severe, and in opposite directions or simultaneously positive or negative. Our analytic bias correction combined with a non-parametric bootstrap yields accurate coverage of 95% confidence intervals. We apply our methodology to analyze the association of systolic blood pressure with PM2.5 and NO2 in the NIEHS Sister Study. We find that NO2 confounds the association of systolic blood pressure with PM2.5 and vice versa. Elevated systolic blood pressure was significantly associated with increased PM2.5 and decreased NO2. Correcting for measurement error bias strengthened these associations and widened 95% confidence intervals. PMID:27789915

  6. Confidence Intervals for Laboratory Sonic Boom Annoyance Tests

    NASA Technical Reports Server (NTRS)

    Rathsam, Jonathan; Christian, Andrew

    2016-01-01

    Commercial supersonic flight is currently forbidden over land because sonic booms have historically caused unacceptable annoyance levels in overflown communities. NASA is providing data and expertise to noise regulators as they consider relaxing the ban for future quiet supersonic aircraft. One deliverable NASA will provide is a predictive model for indoor annoyance to aid in setting an acceptable quiet sonic boom threshold. A laboratory study was conducted to determine how indoor vibrations caused by sonic booms affect annoyance judgments. The test method required finding the point of subjective equality (PSE) between sonic boom signals that cause vibrations and signals not causing vibrations played at various amplitudes. This presentation focuses on a few statistical techniques for estimating the interval around the PSE. The techniques examined are the Delta Method, Parametric and Nonparametric Bootstrapping, and Bayesian Posterior Estimation.

  7. Application of the Bootstrap Statistical Method in Deriving Vibroacoustic Specifications

    NASA Technical Reports Server (NTRS)

    Hughes, William O.; Paez, Thomas L.

    2006-01-01

    This paper discusses the Bootstrap Method for specification of vibroacoustic test specifications. Vibroacoustic test specifications are necessary to properly accept or qualify a spacecraft and its components for the expected acoustic, random vibration and shock environments seen on an expendable launch vehicle. Traditionally, NASA and the U.S. Air Force have employed methods of Normal Tolerance Limits to derive these test levels based upon the amount of data available, and the probability and confidence levels desired. The Normal Tolerance Limit method contains inherent assumptions about the distribution of the data. The Bootstrap is a distribution-free statistical subsampling method which uses the measured data themselves to establish estimates of statistical measures of random sources. This is achieved through the computation of large numbers of Bootstrap replicates of a data measure of interest and the use of these replicates to derive test levels consistent with the probability and confidence desired. The comparison of the results of these two methods is illustrated via an example utilizing actual spacecraft vibroacoustic data.
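    As a generic illustration of the replicate idea (not NASA's actual procedure; the data, units, and levels below are hypothetical), bootstrap replicates of a high percentile can be combined with a chosen confidence level to set a distribution-free test level:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic vibroacoustic measurements at one frequency (hypothetical units, dB)
data = rng.normal(loc=130, scale=3, size=12)

# Bootstrap replicates of the 95th percentile of the environment
reps = [np.percentile(rng.choice(data, size=len(data), replace=True), 95)
        for _ in range(5000)]

# P95/50-style level: the median of the replicates (95% probability, 50% confidence)
p95_50 = np.percentile(reps, 50)
# A more conservative P95/90-style level takes the 90th percentile of the replicates
p95_90 = np.percentile(reps, 90)
print(round(p95_50, 1), round(p95_90, 1))
```

    Unlike the Normal Tolerance Limit approach the paper compares against, nothing here assumes a distributional form for the measured data.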

  8. Statistical inference for tumor growth inhibition T/C ratio.

    PubMed

    Wu, Jianrong

    2010-09-01

    The tumor growth inhibition T/C ratio is commonly used to quantify treatment effects in drug screening tumor xenograft experiments. The T/C ratio is converted to an antitumor activity rating using an arbitrary cutoff point and often without any formal statistical inference. Here, we applied a nonparametric bootstrap method and a small sample likelihood ratio statistic to make a statistical inference of the T/C ratio, including both hypothesis testing and a confidence interval estimate. Furthermore, sample size and power are also discussed for statistical design of tumor xenograft experiments. Tumor xenograft data from an actual experiment were analyzed to illustrate the application.
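    The nonparametric bootstrap for a T/C ratio can be sketched directly (Python; the tumor volumes, group sizes, and any activity cutoff are synthetic illustrations, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic final tumor volumes (mm^3) in treated (T) and control (C) mice
treated = rng.lognormal(mean=5.5, sigma=0.4, size=10)
control = rng.lognormal(mean=6.3, sigma=0.4, size=10)
tc_ratio = treated.mean() / control.mean()

# Percentile bootstrap: resample each group independently
boot = []
for _ in range(5000):
    t = rng.choice(treated, size=len(treated), replace=True)
    c = rng.choice(control, size=len(control), replace=True)
    boot.append(t.mean() / c.mean())
lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(tc_ratio, 2), round(lo, 2), round(hi, 2))
```

    An interval like this supports a formal activity rating (e.g., requiring the whole CI to fall below a prespecified cutoff) rather than comparing only the point estimate to an arbitrary threshold.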

  9. Using Stochastic Approximation Techniques to Efficiently Construct Confidence Intervals for Heritability.

    PubMed

    Schweiger, Regev; Fisher, Eyal; Rahmani, Elior; Shenhav, Liat; Rosset, Saharon; Halperin, Eran

    2018-06-22

    Estimation of heritability is an important task in genetics. The use of linear mixed models (LMMs) to determine narrow-sense single-nucleotide polymorphism (SNP)-heritability and related quantities has received much recent attention, due to its ability to account for variants with small effect sizes. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. The common way to report the uncertainty in REML estimation uses standard errors (SEs), which rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals (CIs). In addition, for larger data sets (e.g., tens of thousands of individuals), the construction of SEs itself may require considerable time, as it requires expensive matrix inversions and multiplications. Here, we present FIESTA (Fast confidence IntErvals using STochastic Approximation), a method for constructing accurate CIs. FIESTA is based on parametric bootstrap sampling, and, therefore, avoids unjustified assumptions on the distribution of the heritability estimator. FIESTA uses stochastic approximation techniques, which accelerate the construction of CIs by several orders of magnitude, compared with previous approaches as well as with the analytical approximation used by SEs. FIESTA builds accurate CIs rapidly, for example, requiring only several seconds for data sets of tens of thousands of individuals, making FIESTA a very fast solution to the problem of building accurate CIs for heritability for all data set sizes.
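    The parametric-bootstrap principle behind FIESTA — resample from the fitted model rather than from the data — can be sketched generically (Python; this is not FIESTA's LMM machinery, and the normal model and coefficient-of-variation statistic are stand-ins for the heritability estimator):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical trait values assumed to follow a normal model
data = rng.normal(loc=10.0, scale=2.0, size=80)
mu_hat, sd_hat = data.mean(), data.std(ddof=1)

def statistic(x):
    # Coefficient of variation: a nonlinear target, loosely analogous to a variance ratio
    return x.std(ddof=1) / x.mean()

# Parametric bootstrap: simulate new datasets from the *fitted* model, not the data
reps = [statistic(rng.normal(mu_hat, sd_hat, size=len(data)))
        for _ in range(5000)]
lo, hi = np.percentile(reps, [2.5, 97.5])
print(round(statistic(data), 3), round(lo, 3), round(hi, 3))
```

    The bootstrap distribution respects bounds and skewness that a symmetric SE-based interval would ignore; FIESTA's contribution is making this loop fast for LMM heritability via stochastic approximation.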

  10. Construction of prediction intervals for Palmer Drought Severity Index using bootstrap

    NASA Astrophysics Data System (ADS)

    Beyaztas, Ufuk; Bickici Arikan, Bugrayhan; Beyaztas, Beste Hamiye; Kahya, Ercan

    2018-04-01

    In this study, we propose an approach based on the residual-based bootstrap method to obtain valid prediction intervals using monthly, short-term (three-month) and mid-term (six-month) drought observations. The effects of the North Atlantic and Arctic Oscillation indexes on the constructed prediction intervals are also examined. Performance of the proposed approach is evaluated for the Palmer Drought Severity Index (PDSI) obtained from the Konya closed basin located in Central Anatolia, Turkey. The finite sample properties of the proposed method are further illustrated by an extensive simulation study. Our results revealed that the proposed approach is capable of producing valid prediction intervals for future PDSI values.
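    A residual-based bootstrap prediction interval can be sketched for a plain AR(1) model (Python; the drought index, basin, and teleconnection covariates of the study are omitted, and the series is synthetic):

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic monthly index following an AR(1) process
n, phi_true = 240, 0.7
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.normal(scale=0.5)

# Fit AR(1) by least squares and keep the centered residuals
phi = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
resid = x[1:] - phi * x[:-1]
resid = resid - resid.mean()

# Residual bootstrap: one-step-ahead forecasts with resampled shocks
forecasts = phi * x[-1] + rng.choice(resid, size=5000, replace=True)
lo, hi = np.percentile(forecasts, [2.5, 97.5])
print(round(phi, 2), round(lo, 2), round(hi, 2))
```

    A fuller version would also refit the autoregressive coefficient on bootstrap-regenerated series, so the interval reflects parameter uncertainty as well as shock variability.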

  11. Estimating source parameters from deformation data, with an application to the March 1997 earthquake swarm off the Izu Peninsula, Japan

    NASA Astrophysics Data System (ADS)

    Cervelli, P.; Murray, M. H.; Segall, P.; Aoki, Y.; Kato, T.

    2001-06-01

    We have applied two Monte Carlo optimization techniques, simulated annealing and random cost, to the inversion of deformation data for fault and magma chamber geometry. These techniques involve an element of randomness that permits them to escape local minima and ultimately converge to the global minimum of misfit space. We have tested the Monte Carlo algorithms on two synthetic data sets. We have also compared them to one another in terms of their efficiency and reliability. We have applied the bootstrap method to estimate confidence intervals for the source parameters, including the correlations inherent in the data. Additionally, we present methods that use the information from the bootstrapping procedure to visualize the correlations between the different model parameters. We have applied these techniques to GPS, tilt, and leveling data from the March 1997 earthquake swarm off of the Izu Peninsula, Japan. Using the two Monte Carlo algorithms, we have inferred two sources, a dike and a fault, that fit the deformation data and the patterns of seismicity and that are consistent with the regional stress field.

  12. Comparison of Bootstrapping and Markov Chain Monte Carlo for Copula Analysis of Hydrological Droughts

    NASA Astrophysics Data System (ADS)

    Yang, P.; Ng, T. L.; Yang, W.

    2015-12-01

    Effective water resources management depends on the reliable estimation of the uncertainty of drought events. Confidence intervals (CIs) are commonly applied to quantify this uncertainty. A CI seeks to be at the minimal length necessary to cover the true value of the estimated variable with the desired probability. In drought analysis where two or more variables (e.g., duration and severity) are often used to describe a drought, copulas have been found suitable for representing the joint probability behavior of these variables. However, the comprehensive assessment of the parameter uncertainties of copulas of droughts has been largely ignored, and the few studies that have recognized this issue have not explicitly compared the various methods to produce the best CIs. Thus, the objective of this study to compare the CIs generated using two widely applied uncertainty estimation methods, bootstrapping and Markov Chain Monte Carlo (MCMC). To achieve this objective, (1) the marginal distributions lognormal, Gamma, and Generalized Extreme Value, and the copula functions Clayton, Frank, and Plackett are selected to construct joint probability functions of two drought related variables. (2) The resulting joint functions are then fitted to 200 sets of simulated realizations of drought events with known distribution and extreme parameters and (3) from there, using bootstrapping and MCMC, CIs of the parameters are generated and compared. The effect of an informative prior on the CIs generated by MCMC is also evaluated. CIs are produced for different sample sizes (50, 100, and 200) of the simulated drought events for fitting the joint probability functions. Preliminary results assuming lognormal marginal distributions and the Clayton copula function suggest that for cases with small or medium sample sizes (~50-100), MCMC to be superior method if an informative prior exists. 
Where an informative prior is unavailable, for small sample sizes (~50), both bootstrapping and MCMC yield the same level of performance, and for medium sample sizes (~100), bootstrapping is better. For cases with a large sample size (~200), there is little difference between the CIs generated using bootstrapping and MCMC regardless of whether or not an informative prior exists.

  13. Prognostic value of fasting versus nonfasting low-density lipoprotein cholesterol levels on long-term mortality: insight from the National Health and Nutrition Examination Survey III (NHANES-III).

    PubMed

    Doran, Bethany; Guo, Yu; Xu, Jinfeng; Weintraub, Howard; Mora, Samia; Maron, David J; Bangalore, Sripal

    2014-08-12

    National and international guidelines recommend fasting lipid panel measurement for risk stratification of patients for prevention of cardiovascular events. However, the prognostic value of fasting versus nonfasting low-density lipoprotein cholesterol (LDL-C) is uncertain. Patients enrolled in the National Health and Nutrition Examination Survey III (NHANES-III), a nationally representative cross-sectional survey performed from 1988 to 1994, were stratified on the basis of fasting status (≥8 or <8 hours) and followed for a mean of 14.0 (±0.22) years. Propensity score matching was used to assemble fasting and nonfasting cohorts with similar baseline characteristics. The risk of outcomes as a function of LDL-C and fasting status was assessed with the use of receiver operating characteristic curves and bootstrapping methods. The interaction between fasting status and LDL-C was assessed with Cox proportional hazards modeling. The primary outcome was all-cause mortality; the secondary outcome was cardiovascular mortality. One-to-one matching based on propensity score yielded 4299 pairs of fasting and nonfasting individuals. For the primary outcome, fasting LDL-C yielded prognostic value similar to that for nonfasting LDL-C (C statistic=0.59 [95% confidence interval, 0.57-0.61] versus 0.58 [95% confidence interval, 0.56-0.60]; P=0.73), and the LDL-C by fasting status interaction term in the Cox proportional hazards model was not significant (Pinteraction=0.11). Similar results were seen for the secondary outcome (fasting versus nonfasting C statistic=0.62 [95% confidence interval, 0.60-0.66] versus 0.62 [95% confidence interval, 0.60-0.66]; P=0.96; Pinteraction=0.34). Nonfasting LDL-C has prognostic value similar to that of fasting LDL-C. National and international agencies should consider reevaluating the recommendation that patients fast before obtaining a lipid panel. © 2014 American Heart Association, Inc.

  14. Estimating the number of motor units using random sums with independently thinned terms.

    PubMed

    Müller, Samuel; Conforto, Adriana Bastos; Z'graggen, Werner J; Kaelin-Lang, Alain

    2006-07-01

    The problem of estimating the number of motor units N in a muscle is embedded in a general stochastic model using the notion of thinning from point process theory. In the paper a new moment-type estimator for the number of motor units in a muscle is defined, which is derived using random sums with independently thinned terms. Asymptotic normality of the estimator is shown and its practical value is demonstrated with bootstrap and approximate confidence intervals for a data set from a 31-year-old healthy, right-handed female volunteer. Moreover, simulation results are presented and Monte Carlo based quantiles, means, and variances are calculated for N ∈ {300, 600, 1000}.

  15. Comparison of parametric and bootstrap method in bioequivalence test.

    PubMed

    Ahn, Byung-Jin; Yim, Dong-Seok

    2009-10-01

    The estimation of 90% parametric confidence intervals (CIs) of mean AUC and Cmax ratios in bioequivalence (BE) tests is based upon the assumption that formulation effects in log-transformed data are normally distributed. To compare the parametric CIs with those obtained from nonparametric methods, we performed repeated estimation of bootstrap-resampled datasets. The AUC and Cmax values from 3 archived datasets were used. BE tests on 1,000 resampled datasets from each archived dataset were performed using SAS (Enterprise Guide Ver.3). Bootstrap nonparametric 90% CIs of formulation effects were then compared with the parametric 90% CIs of the original datasets. The 90% CIs of formulation effects estimated from the 3 archived datasets were slightly different from the nonparametric 90% CIs obtained from BE tests on resampled datasets. Histograms and density curves of formulation effects obtained from resampled datasets were similar to those of a normal distribution. However, in 2 of 3 resampled log (AUC) datasets, the estimates of formulation effects did not follow the Gaussian distribution. Bias-corrected and accelerated (BCa) CIs, one of the nonparametric CIs of formulation effects, shifted outside the parametric 90% CIs of the archived datasets in these 2 non-normally distributed resampled log (AUC) datasets. Currently, the 80-125% rule based upon the parametric 90% CIs is widely accepted under the assumption of normally distributed formulation effects in log-transformed data. However, nonparametric CIs may be a better choice when data do not follow this assumption.
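    The bootstrap alternative to the parametric 90% CI can be sketched for a paired comparison of formulations (Python; the log-AUC values, subject count, and effect size below are synthetic, not the study's archived datasets):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic within-subject log(AUC) for reference (R) and test (T) formulations
n = 24
log_ref = rng.normal(loc=4.0, scale=0.3, size=n)
log_test = log_ref + rng.normal(loc=0.02, scale=0.15, size=n)  # true T/R ratio ~1.02
diff = log_test - log_ref

# Nonparametric bootstrap of the mean log-difference, back-transformed to a ratio
boot = [np.exp(np.mean(rng.choice(diff, size=n, replace=True)))
        for _ in range(5000)]
lo, hi = np.percentile(boot, [5, 95])          # 90% percentile CI of the T/R ratio
bioequivalent = bool(lo > 0.80 and hi < 1.25)  # the 80-125% rule
print(round(lo, 3), round(hi, 3), bioequivalent)
```

    A BCa interval, as compared in the abstract, would adjust these percentile cut points for bias and skewness of the bootstrap distribution.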

  16. Comparison of Parametric and Bootstrap Method in Bioequivalence Test

    PubMed Central

    Ahn, Byung-Jin

    2009-01-01

    The estimation of 90% parametric confidence intervals (CIs) of mean AUC and Cmax ratios in bioequivalence (BE) tests is based upon the assumption that formulation effects in log-transformed data are normally distributed. To compare the parametric CIs with those obtained from nonparametric methods, we performed repeated estimation of bootstrap-resampled datasets. The AUC and Cmax values from 3 archived datasets were used. BE tests on 1,000 resampled datasets from each archived dataset were performed using SAS (Enterprise Guide Ver.3). Bootstrap nonparametric 90% CIs of formulation effects were then compared with the parametric 90% CIs of the original datasets. The 90% CIs of formulation effects estimated from the 3 archived datasets were slightly different from the nonparametric 90% CIs obtained from BE tests on resampled datasets. Histograms and density curves of formulation effects obtained from resampled datasets were similar to those of a normal distribution. However, in 2 of 3 resampled log (AUC) datasets, the estimates of formulation effects did not follow the Gaussian distribution. Bias-corrected and accelerated (BCa) CIs, one of the nonparametric CIs of formulation effects, shifted outside the parametric 90% CIs of the archived datasets in these 2 non-normally distributed resampled log (AUC) datasets. Currently, the 80~125% rule based upon the parametric 90% CIs is widely accepted under the assumption of normally distributed formulation effects in log-transformed data. However, nonparametric CIs may be a better choice when data do not follow this assumption. PMID:19915699

  17. Prenatal Drug Exposure and Adolescent Cortisol Reactivity: Association with Behavioral Concerns.

    PubMed

    Buckingham-Howes, Stacy; Mazza, Dayna; Wang, Yan; Granger, Douglas A; Black, Maureen M

    2016-09-01

    To examine stress reactivity in a sample of adolescents with prenatal drug exposure (PDE) by examining the consequences of PDE on stress-related adrenocortical reactivity, behavioral problems, and drug experimentation during adolescence. Participants (76 PDE, 61 non-drug exposed [NE]; 99% African-American; 50% male; mean age = 14.17 yr, SD = 1.17) provided a urine sample, completed a drug use questionnaire, and provided saliva samples (later assayed for cortisol) before and after a mild laboratory stress task. Caregivers completed the Behavior Assessment System for Children, Second Edition (BASC II) and reported their relationship to the adolescent. The NE group was more likely to exhibit task-related cortisol reactivity compared to the PDE group. Overall behavior problems and drug experimentation were comparable across groups, with no differences between the PDE and NE groups. In unadjusted mediation analyses, cortisol reactivity mediated the association between PDE and BASC II aggression scores (95% bootstrap confidence interval [CI], 0.04-4.28), externalizing problems scores (95% bootstrap CI, 0.03-4.50), and drug experimentation (95% bootstrap CI, 0.001-0.54). The associations remained with the inclusion of gender as a covariate but not when age was included. Findings support and expand current research on cortisol reactivity and PDE by demonstrating that cortisol reactivity attenuates the association between PDE and behavioral problems (aggression) and drug experimentation. If replicated, these findings suggest that PDE may have long-lasting effects on stress-sensitive physiological mechanisms associated with behavioral problems (aggression) and drug experimentation in adolescence.

  18. A note on the kappa statistic for clustered dichotomous data.

    PubMed

    Zhou, Ming; Yang, Zhao

    2014-06-30

    The kappa statistic is widely used to assess the agreement between two raters. Motivated by a simulation-based cluster bootstrap method to calculate the variance of the kappa statistic for clustered physician-patient dichotomous data, we investigate its special correlation structure and develop a new simple and efficient data generation algorithm. For clustered physician-patient dichotomous data, based on the delta method and its special covariance structure, we propose a semi-parametric variance estimator for the kappa statistic. An extensive Monte Carlo simulation study is performed to evaluate the performance of the new proposal and five existing methods with respect to the empirical coverage probability, root-mean-square error, and average width of the 95% confidence interval for the kappa statistic. The variance estimator ignoring the dependence within a cluster is generally inappropriate, and the variance estimators from the new proposal, bootstrap-based methods, and the sampling-based delta method perform reasonably well for at least a moderately large number of clusters (e.g., the number of clusters K ⩾ 50). The new proposal and the sampling-based delta method provide convenient tools for efficient computations and non-simulation-based alternatives to the existing bootstrap-based methods. Moreover, the new proposal has acceptable performance even when the number of clusters is as small as K = 25. To illustrate the practical application of all the methods, one psychiatric research dataset and two simulated clustered physician-patient dichotomous datasets are analyzed. Copyright © 2014 John Wiley & Sons, Ltd.
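
    The cluster bootstrap referenced above can be sketched as follows. This is an illustrative reimplementation under simplified assumptions (hypothetical ratings, plain Cohen's kappa for two binary raters), not the paper's semi-parametric estimator or its data-generation algorithm.

```python
import random
import statistics

def kappa(pairs):
    """Cohen's kappa for two binary raters, from (rater1, rater2) pairs."""
    n = len(pairs)
    po = sum(a == b for a, b in pairs) / n
    p1a = sum(a for a, _ in pairs) / n
    p1b = sum(b for _, b in pairs) / n
    pe = p1a * p1b + (1 - p1a) * (1 - p1b)
    return (po - pe) / (1 - pe)

def cluster_bootstrap_kappa(clusters, n_boot=500, seed=1):
    """Resample whole clusters (e.g. physicians) with replacement, keeping
    each cluster's patients intact, and recompute kappa on the pooled pairs."""
    rng = random.Random(seed)
    boot = []
    for _ in range(n_boot):
        sample = [clusters[rng.randrange(len(clusters))] for _ in clusters]
        boot.append(kappa([pair for cl in sample for pair in cl]))
    return boot

# Hypothetical ratings: four clusters of (rater1, rater2) binary judgements
clusters = [
    [(1, 1), (0, 0), (1, 0)],
    [(1, 1), (0, 1), (0, 0)],
    [(0, 0), (1, 1), (1, 1), (0, 1)],
    [(1, 0), (0, 0), (1, 1)],
]
boot = cluster_bootstrap_kappa(clusters)
se = statistics.pstdev(boot)
```

    The standard deviation of the bootstrap replicates is the cluster-bootstrap estimate of the standard error of kappa; resampling clusters rather than individual patients is what preserves the within-cluster dependence.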

  19. HIV-1 Transmission During Recent Infection and During Treatment Interruptions as Major Drivers of New Infections in the Swiss HIV Cohort Study.

    PubMed

    Marzel, Alex; Shilaih, Mohaned; Yang, Wan-Lin; Böni, Jürg; Yerly, Sabine; Klimkait, Thomas; Aubert, Vincent; Braun, Dominique L; Calmy, Alexandra; Furrer, Hansjakob; Cavassini, Matthias; Battegay, Manuel; Vernazza, Pietro L; Bernasconi, Enos; Günthard, Huldrych F; Kouyos, Roger D; Aubert, V; Battegay, M; Bernasconi, E; Böni, J; Bucher, H C; Burton-Jeangros, C; Calmy, A; Cavassini, M; Dollenmaier, G; Egger, M; Elzi, L; Fehr, J; Fellay, J; Furrer, H; Fux, C A; Gorgievski, M; Günthard, H F; Haerry, D; Hasse, B; Hirsch, H H; Hoffmann, M; Hösli, I; Kahlert, C; Kaiser, L; Keiser, O; Klimkait, T; Kouyos, R D; Kovari, H; Ledergerber, B; Martinetti, G; de Tejada, B Martinez; Metzner, K; Müller, N; Nadal, D; Nicca, D; Pantaleo, G; Rauch, A; Regenass, S; Rickenbach, M; Rudin, C; Schöni-Affolter, F; Schmid, P; Schüpbach, J; Speck, R; Tarr, P; Trkola, A; Vernazza, P L; Weber, R; Yerly, S

    2016-01-01

    Reducing the fraction of transmissions during recent human immunodeficiency virus (HIV) infection is essential for the population-level success of "treatment as prevention". A phylogenetic tree was constructed with 19 604 Swiss sequences and 90 994 non-Swiss background sequences. Swiss transmission pairs were identified using 104 combinations of genetic distance (1%-2.5%) and bootstrap (50%-100%) thresholds, to examine the effect of those criteria. Monophyletic pairs were classified as recent or chronic transmission based on the time interval between estimated seroconversion dates. Logistic regression with adjustment for clinical and demographic characteristics was used to identify risk factors associated with transmission during recent or chronic infection. Seroconversion dates were estimated for 4079 patients on the phylogeny, and the criteria yielded between 71 (distance, 1%; bootstrap, 100%) and 378 (distance, 2.5%; bootstrap, 50%) transmission pairs. We found that 43.7% (range, 41%-56%) of the transmissions occurred during the first year of infection. A stricter phylogenetic definition of transmission pairs was associated with a higher recent-phase transmission fraction. Chronic-phase viral load area under the curve (adjusted odds ratio, 3; 95% confidence interval, 1.64-5.48) and time to antiretroviral therapy (ART) start (adjusted odds ratio, 1.4/y; 1.11-1.77) were associated with chronic-phase transmission as opposed to recent transmission. Importantly, at least 14% of the chronic-phase transmission events occurred after the transmitter had interrupted ART. We demonstrate a high fraction of transmission during recent HIV infection but also chronic transmissions after interruption of ART in Switzerland. Both represent key issues for treatment as prevention and underline the importance of early diagnosis and of early and continuous treatment. © The Author 2015. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved.

  20. Bootstrap-based methods for estimating standard errors in Cox's regression analyses of clustered event times.

    PubMed

    Xiao, Yongling; Abrahamowicz, Michal

    2010-03-30

    We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox's model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs, valid type I error rates, and acceptable coverage rates, regardless of the true random effects distribution, and avoid the serious variance under-estimation of conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of clustered event times.
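
    The two resampling schemes can be sketched side by side. As a simplification, the mean event time stands in for the Cox model fit (which is beyond a short sketch), and the clustered event times are hypothetical; only the resampling structure follows the abstract.

```python
import random
import statistics

def cluster_bootstrap(clusters, stat, n_boot, rng):
    """Scheme (i) only: resample whole clusters, with replacement."""
    out = []
    for _ in range(n_boot):
        sample = [clusters[rng.randrange(len(clusters))] for _ in clusters]
        out.append(stat([x for cl in sample for x in cl]))
    return out

def two_step_bootstrap(clusters, stat, n_boot, rng):
    """Schemes (i) + (ii): resample clusters, then individuals within
    each selected cluster, both with replacement."""
    out = []
    for _ in range(n_boot):
        sample = []
        for _ in clusters:
            cl = clusters[rng.randrange(len(clusters))]
            sample.extend(cl[rng.randrange(len(cl))] for _ in cl)
        out.append(stat(sample))
    return out

# Hypothetical clustered event times (e.g., patients within centres)
clusters = [[5.1, 6.3, 4.8], [7.2, 6.9], [3.4, 4.1, 5.0, 4.4], [6.0, 5.5]]
rng = random.Random(0)
se_cluster = statistics.stdev(cluster_bootstrap(clusters, statistics.mean, 400, rng))
se_twostep = statistics.stdev(two_step_bootstrap(clusters, statistics.mean, 400, rng))
```

    In a real analysis, `statistics.mean` would be replaced by refitting the Cox model on each resampled dataset and recording the coefficient of interest.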

  1. Predicting Postsurgical Satisfaction in Adolescents With Idiopathic Scoliosis: The Role of Presurgical Functioning and Expectations.

    PubMed

    Sieberg, Christine B; Manganella, Juliana; Manalo, Gem; Simons, Laura E; Hresko, M Timothy

    2017-12-01

    There is a need to better assess patient satisfaction and surgical outcomes. The purpose of the current study is to identify how preoperative expectations can impact postsurgical satisfaction among youth with adolescent idiopathic scoliosis undergoing spinal fusion surgery. The present study includes patients with adolescent idiopathic scoliosis undergoing spinal fusion surgery enrolled in a prospective, multicentered registry examining postsurgical outcomes. The Scoliosis Research Society Questionnaire-Version 30, which assesses pain, self-image, mental health, and satisfaction with management, along with the Spinal Appearance Questionnaire, which measures surgical expectations, was administered to 190 patients before surgery and 1 and 2 years postoperatively. Regression analyses with bootstrapping (with n=5000 bootstrap samples) were conducted with 99% bias-corrected confidence intervals to examine the extent to which preoperative expectations for spinal appearance mediated the relationship between presurgical mental health and pain and 2-year postsurgical satisfaction. Results indicate that preoperative mental health, pain, and expectations are predictive of postsurgical satisfaction. With the shifting health care system, physicians may want to consider patient mental health, pain, and expectations before surgery to optimize satisfaction and ultimately improve clinical care and patient outcomes. Level I-prognostic study.

  2. Peace of Mind, Academic Motivation, and Academic Achievement in Filipino High School Students.

    PubMed

    Datu, Jesus Alfonso D

    2017-04-09

    Recent literature has recognized the advantageous role of low-arousal positive affect such as feelings of peacefulness and internal harmony in collectivist cultures. However, limited research has explored the benefits of low-arousal affective states in the educational setting. The current study examined the link of peace of mind (PoM) to academic motivation (i.e., amotivation, controlled motivation, and autonomous motivation) and academic achievement among 525 Filipino high school students. Findings revealed that PoM was positively associated with academic achievement (β = .16, p < .05), autonomous motivation (β = .48, p < .001), and controlled motivation (β = .25, p < .01). As expected, PoM was negatively related to amotivation (β = -.19, p < .05), and autonomous motivation was positively associated with academic achievement (β = .52, p < .01). Furthermore, the results of bias-corrected bootstrap analyses at 95% confidence interval based on 5,000 bootstrapped resamples demonstrated that peace of mind had an indirect influence on academic achievement through the mediating effect of autonomous motivation. In terms of effect sizes, the findings showed that PoM explained about 1% to 18% of the variance in academic achievement and motivation. The theoretical and practical implications of the results are elucidated.
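
    The bootstrap mediation analysis mentioned above can be sketched as follows. This is a simplified illustration: the scores are hypothetical, the a and b paths use simple (unadjusted) regression slopes rather than the full mediation model, and the CI is a percentile rather than a bias-corrected interval.

```python
import random

def slope(x, y):
    """OLS slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return num / sum((a - mx) ** 2 for a in x)

def indirect_effect_ci(x, m, y, n_boot=5000, alpha=0.05, seed=7):
    """Percentile bootstrap CI for the indirect effect a*b: resample cases,
    re-estimate a (M on X) and b (Y on M), and take the tail quantiles."""
    rng = random.Random(seed)
    n = len(x)
    ab = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        xs, ms, ys = [x[i] for i in idx], [m[i] for i in idx], [y[i] for i in idx]
        ab.append(slope(xs, ms) * slope(ms, ys))
    ab.sort()
    return ab[int(alpha / 2 * n_boot)], ab[int((1 - alpha / 2) * n_boot) - 1]

# Hypothetical scores: PoM (x), autonomous motivation (m), achievement (y)
x = [3.1, 2.4, 4.0, 3.5, 2.8, 3.9, 2.2, 3.3, 4.2, 2.9,
     3.6, 2.5, 3.8, 3.0, 2.7, 4.1, 3.4, 2.6, 3.7, 3.2]
m = [3.5, 2.8, 4.4, 3.6, 3.0, 4.1, 2.5, 3.4, 4.5, 3.1,
     3.9, 2.9, 4.0, 3.2, 3.0, 4.3, 3.7, 2.7, 3.8, 3.3]
y = [78, 70, 90, 80, 74, 86, 66, 77, 91, 73,
     84, 71, 85, 75, 72, 89, 81, 69, 83, 76]
ab_hat = slope(x, m) * slope(m, y)
ab_lo, ab_hi = indirect_effect_ci(x, m, y)
```

    An indirect effect is declared significant when the bootstrap CI excludes zero, which is the logic behind the mediation result reported above.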

  3. A brief introduction to computer-intensive methods, with a view towards applications in spatial statistics and stereology.

    PubMed

    Mattfeldt, Torsten

    2011-04-01

    Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.

  4. Confidence intervals and hypothesis testing for the Permutation Entropy with an application to epilepsy

    NASA Astrophysics Data System (ADS)

    Traversaro, Francisco; O. Redelico, Francisco

    2018-04-01

    In nonlinear dynamics, and to a lesser extent in other fields, a widely used measure of complexity is the Permutation Entropy. But there is still no known method to determine the accuracy of this measure. There has been little research on the statistical properties of this quantity that characterize time series. The literature describes some resampling methods for quantities used in nonlinear dynamics - such as the largest Lyapunov exponent - but these seem to fail. In this contribution, we propose a parametric bootstrap methodology using a symbolic representation of the time series to obtain the distribution of the Permutation Entropy estimator. We perform several time series simulations given by well-known stochastic processes: the 1/fα noise family, and show in each case that the proposed accuracy measure is as efficient as the one obtained by the frequentist approach of repeating the experiment. The complexity of brain electrical activity, measured by the Permutation Entropy, has been extensively used in epilepsy research for detecting dynamical changes in the electroencephalogram (EEG) signal with no consideration of the variability of this complexity measure. An application of the parametric bootstrap methodology is used to compare normal and pre-ictal EEG signals.
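
    The proposed methodology can be sketched in outline: symbolise the series into ordinal patterns, estimate the pattern distribution, then resample from that fitted multinomial and recompute the entropy. The sketch below treats patterns as independent draws, a simplification of the paper's scheme, and uses a hypothetical white-noise series.

```python
import math
import random

def ordinal_patterns(x, m):
    """Symbolise the series: each length-m window maps to its rank pattern."""
    return [tuple(sorted(range(m), key=lambda k: x[i + k]))
            for i in range(len(x) - m + 1)]

def perm_entropy(patterns, m):
    """Permutation Entropy: normalised Shannon entropy of pattern frequencies."""
    n = len(patterns)
    counts = {}
    for p in patterns:
        counts[p] = counts.get(p, 0) + 1
    h = -sum((c / n) * math.log(c / n) for c in counts.values())
    return h / math.log(math.factorial(m))

def parametric_bootstrap_pe(x, m=3, n_boot=500, seed=9):
    """Estimate the symbol distribution, then resample symbols from that
    fitted multinomial and recompute the entropy each time."""
    pats = ordinal_patterns(x, m)
    rng = random.Random(seed)
    boot = [perm_entropy(rng.choices(pats, k=len(pats)), m)
            for _ in range(n_boot)]
    return perm_entropy(pats, m), boot

# Hypothetical white-noise series; its PE should sit near the maximum of 1
rng0 = random.Random(2)
x = [rng0.random() for _ in range(300)]
pe, boot = parametric_bootstrap_pe(x)
```

    The spread of the `boot` replicates is the bootstrap estimate of the estimator's sampling variability, which is what the abstract argues should accompany any reported Permutation Entropy.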

  5. Mesomorphy correlates with experiential cognitive style.

    PubMed

    Genovese, Jeremy E C; Little, Kathleen D

    2011-01-01

    The purpose of this study was to test for a relationship between mesomorphy and experiential cognitive style (S. Epstein, 1994) in a sample of university students (30 women and 24 men). Anthropometric somatotypes were obtained using the Heath-Carter procedure (J. E. L. Carter, 2002). Experiential cognitive style was operationalized as scores on the experiential scale of the Rational Experiential Inventory for Adolescents (A. D. Marks, D. W. Hine, R. L. Blore, & W. J. Phillips, 2008). Nonparametric bootstrap correlations were calculated using 80% confidence intervals. There were significant correlations between mesomorphy and experiential cognitive style for men (r(s) = .33) and women (r(s) = .25). For men, experiential cognitive style was also correlated with endomorphy (r(s) = .39) and ectomorphy (r(s) = -.48).

  6. Interlaboratory Reproducibility and Proficiency Testing within the Human Papillomavirus Cervical Cancer Screening Program in Catalonia, Spain

    PubMed Central

    Ibáñez, R.; Félez-Sánchez, M.; Godínez, J. M.; Guardià, C.; Caballero, E.; Juve, R.; Combalia, N.; Bellosillo, B.; Cuevas, D.; Moreno-Crespi, J.; Pons, L.; Autonell, J.; Gutierrez, C.; Ordi, J.; de Sanjosé, S.

    2014-01-01

    In Catalonia, a screening protocol for cervical cancer, including human papillomavirus (HPV) DNA testing using the Digene Hybrid Capture 2 (HC2) assay, was implemented in 2006. In order to monitor interlaboratory reproducibility, a proficiency testing (PT) survey of the HPV samples was launched in 2008. The aim of this study was to explore the repeatability of the HC2 assay's performance. Participating laboratories provided 20 samples annually, 5 randomly chosen samples from each of the following relative light unit (RLU) intervals: <0.5, 0.5 to 0.99, 1 to 9.99, and ≥10. Kappa statistics were used to determine the agreement levels between the original and the PT readings. The nature and origin of the discrepant results were calculated by bootstrapping. A total of 946 specimens were retested. The kappa values were 0.91 for positive/negative categorical classification and 0.79 for the four RLU intervals studied. Sample retesting yielded systematically lower RLU values than the original test (P < 0.005), independently of the time elapsed between the two determinations (median, 53 days), possibly due to freeze-thaw cycles. The probability for a sample to show clinically discrepant results upon retesting was a function of the RLU value; samples with RLU values in the 0.5 to 5 interval showed 10.80% probability to yield discrepant results (95% confidence interval [CI], 7.86 to 14.33) compared to 0.85% probability for samples outside this interval (95% CI, 0.17 to 1.69). Globally, the HC2 assay shows high interlaboratory concordance. We have identified differential confidence thresholds and suggested the guidelines for interlaboratory PT in the future, as analytical quality assessment of HPV DNA detection remains a central component of the screening program for cervical cancer prevention. PMID:24574284

  7. Assessing Interval Estimation Methods for Hill Model ...

    EPA Pesticide Factsheets

    The Hill model of concentration-response is ubiquitous in toxicology, perhaps because its parameters directly relate to biologically significant metrics of toxicity such as efficacy and potency. Point estimates of these parameters obtained through least squares regression or maximum likelihood are commonly used in high-throughput risk assessment, but such estimates typically fail to include reliable information concerning confidence in (or precision of) the estimates. To address this issue, we examined methods for assessing uncertainty in Hill model parameter estimates derived from concentration-response data. In particular, using a sample of ToxCast concentration-response data sets, we applied four methods for obtaining interval estimates that are based on asymptotic theory, bootstrapping (two varieties), and Bayesian parameter estimation, and then compared the results. These interval estimation methods generally did not agree, so we devised a simulation study to assess their relative performance. We generated simulated data by constructing four statistical error models capable of producing concentration-response data sets comparable to those observed in ToxCast. We then applied the four interval estimation methods to the simulated data and compared the actual coverage of the interval estimates to the nominal coverage (e.g., 95%) in order to quantify performance of each of the methods in a variety of cases (i.e., different values of the true Hill model parameters).

  8. Variabilities in probabilistic seismic hazard maps for natural and induced seismicity in the central and eastern United States

    USGS Publications Warehouse

    Mousavi, S. Mostafa; Beroza, Gregory C.; Hoover, Susan M.

    2018-01-01

    Probabilistic seismic hazard analysis (PSHA) characterizes ground-motion hazard from earthquakes. Typically, the time horizon of a PSHA forecast is long, but in response to induced seismicity related to hydrocarbon development, the USGS developed one-year PSHA models. In this paper, we present a display of the variability in USGS hazard curves due to epistemic uncertainty in its informed submodel using a simple bootstrapping approach. We find that variability is highest in low-seismicity areas. On the other hand, areas of high seismic hazard, such as the New Madrid seismic zone or Oklahoma, exhibit relatively lower variability simply because of more available data and a better understanding of the seismicity. Comparing areas of high hazard, New Madrid, which has a history of large naturally occurring earthquakes, has lower forecast variability than Oklahoma, where the hazard is driven mainly by suspected induced earthquakes since 2009. Overall, the mean hazard obtained from bootstrapping is close to the published model, and variability increased in the 2017 one-year model relative to the 2016 model. Comparing the relative variations caused by individual logic-tree branches, we find that the highest hazard variation (as measured by the 95% confidence interval of bootstrapping samples) in the final model is associated with different ground-motion models and maximum magnitudes used in the logic tree, while the variability due to the smoothing distance is minimal. It should be pointed out that this study is not looking at the uncertainty in the hazard in general, but only as it is represented in the USGS one-year models.

  9. Trends and Correlation Estimation in Climate Sciences: Effects of Timescale Errors

    NASA Astrophysics Data System (ADS)

    Mudelsee, M.; Bermejo, M. A.; Bickert, T.; Chirila, D.; Fohlmeister, J.; Köhler, P.; Lohmann, G.; Olafsdottir, K.; Scholz, D.

    2012-12-01

    Trend describes time-dependence in the first moment of a stochastic process, and correlation measures the linear relation between two random variables. Accurately estimating the trend and correlation, including uncertainties, from climate time series data in the uni- and bivariate domain, respectively, allows first-order insights into the geophysical process that generated the data. Timescale errors, ubiquitous in paleoclimatology, where archives are sampled for proxy measurements and dated, pose a problem to the estimation. Statistical science and the various applied research fields, including geophysics, have almost completely ignored this problem due to its theoretical near-intractability. However, computational adaptations or replacements of traditional error formulas have become technically feasible. This contribution gives a short overview of such an adaptation package: bootstrap resampling combined with parametric timescale simulation. We study linear regression, parametric change-point models and nonparametric smoothing for trend estimation. We introduce pairwise moving block bootstrap resampling for correlation estimation. Both methods share robustness against autocorrelation and non-Gaussian distributional shape. We briefly touch on computing-intensive calibration of bootstrap confidence intervals and consider options to parallelize the related computer code. The following examples serve not only to illustrate the methods but also tell their own climate stories: (1) the search for climate drivers of the Agulhas Current on recent timescales, (2) the comparison of three stalagmite-based proxy series of regional, western German climate over the later part of the Holocene, and (3) trends and transitions in benthic oxygen isotope time series from the Cenozoic. Financial support by Deutsche Forschungsgemeinschaft (FOR 668, FOR 1070, MU 1595/4-1) and the European Commission (MC ITN 238512, MC ITN 289447) is acknowledged.
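
    The pairwise moving block bootstrap mentioned above can be sketched as follows (without the timescale simulation or calibration steps; the two autocorrelated series are hypothetical):

```python
import random

def pearson(x, y):
    """Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def pairwise_block_resample(x, y, block_len, rng):
    """Draw blocks of consecutive (x, y) PAIRS, so that serial dependence
    within each series and the pairing between them are both preserved."""
    n = len(x)
    xb, yb = [], []
    while len(xb) < n:
        s = rng.randrange(n - block_len + 1)
        xb.extend(x[s:s + block_len])
        yb.extend(y[s:s + block_len])
    return xb[:n], yb[:n]

def mbb_corr_ci(x, y, block_len=8, n_boot=1000, alpha=0.05, seed=3):
    """Percentile CI for the correlation via pairwise moving block bootstrap."""
    rng = random.Random(seed)
    rs = sorted(pearson(*pairwise_block_resample(x, y, block_len, rng))
                for _ in range(n_boot))
    return rs[int(alpha / 2 * n_boot)], rs[int((1 - alpha / 2) * n_boot) - 1]

# Two hypothetical autocorrelated series sharing a common forcing term
rng0 = random.Random(11)
x, y = [0.0], [0.0]
for _ in range(199):
    e = rng0.gauss(0, 1)
    x.append(0.7 * x[-1] + e)
    y.append(0.7 * y[-1] + 0.5 * e + rng0.gauss(0, 1))
r_lo, r_hi = mbb_corr_ci(x, y)
```

    Resampling blocks rather than individual pairs is what makes the interval robust against the autocorrelation of each series; the block length would normally be tied to the persistence time of the data.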

  10. Accounting for Uncertainty in Decision Analytic Models Using Rank Preserving Structural Failure Time Modeling: Application to Parametric Survival Models.

    PubMed

    Bennett, Iain; Paracha, Noman; Abrams, Keith; Ray, Joshua

    2018-01-01

    Rank Preserving Structural Failure Time models are one of the most commonly used statistical methods to adjust for treatment switching in oncology clinical trials. The method is often applied in a decision analytic model without appropriately accounting for additional uncertainty when determining the allocation of health care resources. The aim of the study is to describe novel approaches to adequately account for uncertainty when using a Rank Preserving Structural Failure Time model in a decision analytic model. Using two examples, we tested and compared the performance of the novel test-based method with the resampling bootstrap method and with the conventional approach of no adjustment. In the first example, we simulated life expectancy using a simple decision analytic model based on a hypothetical oncology trial with treatment switching. In the second example, we applied the adjustment method to published data when no individual patient data were available. Mean estimates of overall and incremental life expectancy were similar across methods. However, the bootstrapped and test-based estimates consistently produced greater estimates of uncertainty compared with the estimate without any adjustment applied. Similar results were observed when using the test-based approach on published data, showing that failing to adjust for uncertainty led to smaller confidence intervals. Both the bootstrapping and test-based approaches provide a solution to appropriately incorporate uncertainty, with the benefit that the latter can be implemented by researchers in the absence of individual patient data. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  11. Analyzing hospitalization data: potential limitations of Poisson regression.

    PubMed

    Weaver, Colin G; Ravani, Pietro; Oliver, Matthew J; Austin, Peter C; Quinn, Robert R

    2015-08-01

    Poisson regression is commonly used to analyze hospitalization data when outcomes are expressed as counts (e.g. number of days in hospital). However, data often violate the assumptions on which Poisson regression is based. More appropriate extensions of this model, while available, are rarely used. We compared hospitalization data between 206 patients treated with hemodialysis (HD) and 107 treated with peritoneal dialysis (PD) using Poisson regression and compared results from standard Poisson regression with those obtained using three other approaches for modeling count data: negative binomial (NB) regression, zero-inflated Poisson (ZIP) regression and zero-inflated negative binomial (ZINB) regression. We examined the appropriateness of each model and compared the results obtained with each approach. During a mean 1.9 years of follow-up, 183 of 313 patients (58%) were never hospitalized (indicating an excess of 'zeros'). The data also displayed overdispersion (variance greater than mean), violating another assumption of the Poisson model. Using four criteria, we determined that the NB and ZINB models performed best. According to these two models, patients treated with HD experienced similar hospitalization rates as those receiving PD {NB rate ratio (RR): 1.04 [bootstrapped 95% confidence interval (CI): 0.49-2.20]; ZINB summary RR: 1.21 (bootstrapped 95% CI 0.60-2.46)}. Poisson and ZIP models fit the data poorly and had much larger point estimates than the NB and ZINB models [Poisson RR: 1.93 (bootstrapped 95% CI 0.88-4.23); ZIP summary RR: 1.84 (bootstrapped 95% CI 0.88-3.84)]. We found substantially different results when modeling hospitalization data, depending on the approach used. Our results argue strongly for a sound model selection process and improved reporting around statistical methods used for modeling count data. © The Author 2015. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.

  12. Reference interval computation: which method (not) to choose?

    PubMed

    Pavlov, Igor Y; Wilson, Andrew R; Delgado, Julio C

    2012-07-11

    When different methods are applied to reference interval (RI) calculation, the results can sometimes be substantially different, especially for small reference groups. If there are no reliable RI data available, there is no way to confirm which method generates results closest to the true RI. We randomly drew samples from a public database for 33 markers. For each sample, RIs were calculated by bootstrapping, parametric, and Box-Cox transformed parametric methods. Results were compared to the values of the population RI. For approximately half of the 33 markers, results of all 3 methods were within 3% of the true reference value. For other markers, parametric results were either unavailable or deviated considerably from the true values. The transformed parametric method was more accurate than bootstrapping for a sample size of 60, very close to bootstrapping for a sample size of 120, but in some cases unavailable. We recommend against using parametric calculations to determine RIs. The transformed parametric method utilizing the Box-Cox transformation would be the preferable way of RI calculation, if it satisfies a normality test. If not, bootstrapping is always available, and is almost as accurate and precise as the transformed parametric method. Copyright © 2012 Elsevier B.V. All rights reserved.
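
    A minimal sketch of the bootstrapping approach to RI calculation, assuming nonparametric 2.5th/97.5th percentile limits averaged over resamples and hypothetical marker values (the study's exact bootstrap variant is not detailed in the abstract):

```python
import random

def bootstrap_reference_interval(values, n_boot=1000, seed=5):
    """Resample the reference group with replacement, take the
    nonparametric central-95% limits of each resample, and average
    the limits over all resamples."""
    rng = random.Random(seed)
    n = len(values)
    lows, highs = [], []
    for _ in range(n_boot):
        s = sorted(values[rng.randrange(n)] for _ in range(n))
        lows.append(s[int(0.025 * n)])
        highs.append(s[int(0.975 * n) - 1])
    return sum(lows) / n_boot, sum(highs) / n_boot

# 120 hypothetical marker values from a healthy reference group
rng0 = random.Random(12)
values = [rng0.gauss(5.0, 0.8) for _ in range(120)]
ri_lo, ri_hi = bootstrap_reference_interval(values)
```

    Unlike the parametric methods, this makes no distributional assumption, which is why it remains available when the normality test fails.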

  13. Terrestrial laser scanning used to detect asymmetries in boat hulls

    NASA Astrophysics Data System (ADS)

    Roca-Pardiñas, Javier; López-Alvarez, Francisco; Ordóñez, Celestino; Menéndez, Agustín; Bernardo-Sánchez, Antonio

    2012-01-01

    We describe a methodology for identifying asymmetries in boat hull sections reconstructed from point clouds captured using a terrestrial laser scanner (TLS). A surface was first fit to the point cloud using a nonparametric regression method that permitted the construction of a continuous smooth surface. Asymmetries in cross-sections of the surface were identified using a bootstrap resampling technique that took into account uncertainty in the coordinates of the scanned points. Each reconstructed section was analyzed to check, for a given level of significance, that it was within the confidence interval for the theoretical symmetrical section. The method was applied to the study of asymmetries in a medium-sized yacht. Differences of up to 5 cm between the real and theoretical sections were identified in some parts of the hull.

  14. A Bootstrap Algorithm for Mixture Models and Interval Data in Inter-Comparisons

    DTIC Science & Technology

    2001-07-01

    parametric bootstrap. The present algorithm will be applied to a thermometric inter-comparison, where data cannot be assumed to be normally distributed. 2 Data...experimental methods, used in each laboratory) often imply that the statistical assumptions are not satisfied, as for example in several thermometric ...triangular). Indeed, in thermometric experiments these three probabilistic models can represent several common stochastic variabilities for

  15. Uncertainty Quantification in High Throughput Screening ...

    EPA Pesticide Factsheets

    Using uncertainty quantification, we aim to improve the quality of modeling data from high throughput screening assays for use in risk assessment. ToxCast is a large-scale screening program that analyzes thousands of chemicals using over 800 assays representing hundreds of biochemical and cellular processes, including endocrine disruption, cytotoxicity, and zebrafish development. Over 2.6 million concentration response curves are fit to models to extract parameters related to potency and efficacy. Models built on ToxCast results are being used to rank and prioritize the toxicological risk of tested chemicals and to predict the toxicity of tens of thousands of chemicals not yet tested in vivo. However, the data size also presents challenges. When fitting the data, the choice of models, model selection strategy, and hit call criteria must reflect the need for computational efficiency and robustness, requiring hard and somewhat arbitrary cutoffs. When coupled with unavoidable noise in the experimental concentration response data, these hard cutoffs cause uncertainty in model parameters and the hit call itself. The uncertainty will then propagate through all of the models built on the data. Left unquantified, this uncertainty makes it difficult to fully interpret the data for risk assessment. We used bootstrap resampling methods to quantify the uncertainty in fitting models to the concentration response data. Bootstrap resampling determines confidence intervals for
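    The residual-resampling bootstrap for concentration-response fits can be sketched compactly. This is a hedged, simplified stand-in for the ToxCast pipeline: a 2-parameter Hill curve (slope fixed at 1) on synthetic data, with SciPy's `curve_fit`, rather than the program's actual model suite and hit-call logic.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, ac50):
    # Simple 2-parameter Hill curve (slope fixed at 1); a common
    # concentration-response shape, not the exact ToxCast models
    return top * conc / (ac50 + conc)

rng = np.random.default_rng(0)
conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100.0])
resp = hill(conc, top=80.0, ac50=5.0) + rng.normal(0, 4, conc.size)

popt, _ = curve_fit(hill, conc, resp, p0=[resp.max(), np.median(conc)],
                    maxfev=5000)
resid = resp - hill(conc, *popt)

# Residual-resampling bootstrap: perturb the fitted curve with resampled
# residuals and refit, yielding a distribution for the potency parameter
ac50_reps = []
for _ in range(500):
    fake = hill(conc, *popt) + rng.choice(resid, resid.size)
    try:
        p, _ = curve_fit(hill, conc, fake, p0=popt, maxfev=5000)
        ac50_reps.append(p[1])
    except RuntimeError:
        continue   # skip replicates that fail to converge
lo, hi = np.percentile(ac50_reps, [2.5, 97.5])
print(f"AC50 = {popt[1]:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

The spread of the bootstrap AC50 distribution is exactly the kind of parameter uncertainty the abstract argues should be propagated into downstream risk models.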

  16. Confidence intervals in Flow Forecasting by using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Panagoulia, Dionysia; Tsekouras, George

    2014-05-01

    One of the major inadequacies in the implementation of Artificial Neural Networks (ANNs) for flow forecasting is the development of confidence intervals, because the relevant estimation cannot be implemented directly, in contrast to classical forecasting methods. The variation in the ANN output is a measure of uncertainty in the model predictions based on the training data set. Different methods for uncertainty analysis, such as bootstrap, Bayesian, and Monte Carlo methods, have already been proposed for hydrologic and geophysical models, while methods for confidence intervals, such as error output, re-sampling, and multi-linear regression adapted to ANNs, have been used for power load forecasting [1-2]. The aim of this paper is to present the re-sampling method for ANN prediction models and to develop it for next-day flow forecasting. The re-sampling method is based on ascending sorting of the errors between real and predicted values for all input vectors. The cumulative sample distribution function of the prediction errors is calculated and the confidence intervals are estimated by keeping the intermediate values, rejecting the extreme values according to the desired confidence levels, and holding the intervals symmetrical in probability. To apply the confidence interval method, input vectors are used from the Mesochora catchment in western-central Greece. The ANN's training algorithm is the stochastic back-propagation process with decreasing functions of learning rate and momentum term, for which an optimization process is conducted over the crucial parameter values, such as the number of neurons, the kind of activation functions, and the initial values and time parameters of the learning rate and momentum term. 
Input variables are historical data of previous days, such as flows, nonlinearly weather-related temperatures and nonlinearly weather-related rainfalls, based on correlation analysis between the flow under prediction and each implicit input variable of different ANN structures [3]. The performance of each ANN structure is evaluated by voting analysis based on eleven criteria: the root mean square error (RMSE), the correlation index (R), the mean absolute percentage error (MAPE), the mean percentage error (MPE), the mean error (ME), the percentage volume error (VE), the percentage error in peak (MF), the normalized mean bias error (NMBE), the normalized root mean square error (NRMSE), the Nash-Sutcliffe model efficiency coefficient (E) and the modified Nash-Sutcliffe model efficiency coefficient (E1). The next-day flow for the test set is calculated using the best ANN structure's model. Consequently, the confidence intervals of various confidence levels for the training, evaluation and test sets are compared in order to explore how confidence intervals generalise from the training and evaluation sets. [1] H.S. Hippert, C.E. Pedreira, R.C. Souza, "Neural networks for short-term load forecasting: A review and evaluation," IEEE Trans. on Power Systems, vol. 16, no. 1, 2001, pp. 44-55. [2] G. J. Tsekouras, N.E. Mastorakis, F.D. Kanellos, V.T. Kontargyri, C.D. Tsirekis, I.S. Karanasiou, Ch.N. Elias, A.D. Salis, P.A. Kontaxis, A.A. Gialketsi: "Short term load forecasting in Greek interconnected power system using ANN: Confidence Interval using a novel re-sampling technique with corrective Factor", WSEAS International Conference on Circuits, Systems, Electronics, Control & Signal Processing, (CSECS '10), Vouliagmeni, Athens, Greece, December 29-31, 2010. [3] D. Panagoulia, I. Trichakis, G. J. 
Tsekouras: "Flow Forecasting via Artificial Neural Networks - A Study for Input Variables conditioned on atmospheric circulation", European Geosciences Union, General Assembly 2012 (NH1.1 / AS1.16 - Extreme meteorological and hydrological events induced by severe weather and climate change), Vienna, Austria, 22-27 April 2012.
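    The sorted-error re-sampling interval described in this record reduces to taking symmetric-in-probability quantiles of the empirical error distribution and adding them to the point forecast. A minimal sketch, with synthetic Gaussian errors standing in for the ANN's training-set errors (function name and data are hypothetical):

```python
import numpy as np

def error_quantile_interval(errors, forecast, level=0.95):
    """Confidence interval for a new forecast from the empirical CDF of
    past (real - predicted) errors: keep the central `level` mass,
    rejecting the extremes symmetrically in probability."""
    tail = (1.0 - level) / 2.0
    lo, hi = np.quantile(np.sort(errors), [tail, 1.0 - tail])
    return forecast + lo, forecast + hi

# Hypothetical training errors (m3/s) of a flow-forecasting ANN
errors = np.random.default_rng(2).normal(0.0, 3.0, size=500)
print(error_quantile_interval(errors, forecast=42.0))
```

Comparing intervals built from training-set errors against evaluation- and test-set coverage is then the generalisation check the abstract describes.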

  17. Performance of statistical models to predict mental health and substance abuse cost.

    PubMed

    Montez-Rath, Maria; Christiansen, Cindy L; Ettner, Susan L; Loveland, Susan; Rosen, Amy K

    2006-10-26

    Providers use risk-adjustment systems to help manage healthcare costs. Typically, ordinary least squares (OLS) models on either untransformed or log-transformed cost are used. We examine the predictive ability of several statistical models, demonstrate how model choice depends on the goal for the predictive model, and examine whether building models on samples of the data affects model choice. Our sample consisted of 525,620 Veterans Health Administration patients with mental health (MH) or substance abuse (SA) diagnoses who incurred costs during fiscal year 1999. We tested two models on a transformation of cost: a Log Normal model and a Square-root Normal model, and three generalized linear models on untransformed cost, defined by distributional assumption and link function: Normal with identity link (OLS); Gamma with log link; and Gamma with square-root link. Risk-adjusters included age, sex, and 12 MH/SA categories. To determine the best model among the entire dataset, predictive ability was evaluated using root mean square error (RMSE), mean absolute prediction error (MAPE), and predictive ratios of predicted to observed cost (PR) among deciles of predicted cost, by comparing point estimates and 95% bias-corrected bootstrap confidence intervals. To study the effect of analyzing a random sample of the population on model choice, we re-computed these statistics using random samples beginning with 5,000 patients and ending with the entire sample. The Square-root Normal model had the lowest estimates of the RMSE and MAPE, with bootstrap confidence intervals that were always lower than those for the other models. The Gamma with square-root link was best as measured by the PRs. The choice of best model could vary if smaller samples were used and the Gamma with square-root link model had convergence problems with small samples. Models with square-root transformation or link fit the data best. 
This function (whether used as transformation or as a link) seems to help deal with the high comorbidity of this population by introducing a form of interaction. The Gamma distribution helps with the long tail of the distribution. However, the Normal distribution is suitable if the correct transformation of the outcome is used.

  18. Interspecies Interactions Reverse the Hazard of Antibiotics Exposure: A Plankton Community Study on Responses to Ciprofloxacin hydrochloride.

    PubMed

    Wang, Changyou; Wang, Ziyang; Zhang, Yong; Su, Rongguo

    2017-05-24

    The ecotoxicological effects of ciprofloxacin hydrochloride (CIP) were tested on population densities of plankton assemblages consisting of two algae (Isochrysis galbana and Platymonas subcordiformis) and a rotifer (Brachionus plicatilis). I. galbana showed a significant decrease in density when CIP concentrations were above 2.0 mg L⁻¹ in single-species tests, while P. subcordiformis and B. plicatilis densities were stable when CIP was less than 10.0 mg L⁻¹. The equilibrium densities of I. galbana in the community test increased with CIP concentration after falling to a trough at 5.0 mg L⁻¹, a completely different pattern from that of P. subcordiformis, which decreased with CIP concentration after reaching a peak at 30.0 mg L⁻¹. The observed beneficial effect was a result of trophic-cascade interspecies interactions that buffered the more severe direct effects of the toxicant. The community-test-based NOEC of CIP (2.0 mg L⁻¹), which embodies the indirect effects, differed from the one extrapolated from single-species tests (0.5 mg L⁻¹), but both lacked confidence intervals. A CIP threshold concentration of clear relevance to ecological interaction was calculated with a simplified plankton ecological model, yielding a value of 1.26 mg L⁻¹ with a 95% bootstrap confidence interval of 1.18 to 1.31 mg L⁻¹.

  19. Estimation of the Standardized Risk Difference and Ratio in a Competing Risks Framework: Application to Injection Drug Use and Progression to AIDS After Initiation of Antiretroviral Therapy

    PubMed Central

    Cole, Stephen R.; Lau, Bryan; Eron, Joseph J.; Brookhart, M. Alan; Kitahata, Mari M.; Martin, Jeffrey N.; Mathews, William C.; Mugavero, Michael J.

    2015-01-01

    There are few published examples of absolute risk estimated from epidemiologic data subject to censoring and competing risks with adjustment for multiple confounders. We present an example estimating the effect of injection drug use on 6-year risk of acquired immunodeficiency syndrome (AIDS) after initiation of combination antiretroviral therapy between 1998 and 2012 in an 8-site US cohort study with death before AIDS as a competing risk. We estimate the risk standardized to the total study sample by combining inverse probability weights with the cumulative incidence function; estimates of precision are obtained by bootstrap. In 7,182 patients (83% male, 33% African American, median age of 38 years), we observed 6-year standardized AIDS risks of 16.75% among 1,143 injection drug users and 12.08% among 6,039 nonusers, yielding a standardized risk difference of 4.68 (95% confidence interval: 1.27, 8.08) and a standardized risk ratio of 1.39 (95% confidence interval: 1.12, 1.72). Results may be sensitive to the assumptions of exposure-version irrelevance, no measurement bias, and no unmeasured confounding. These limitations suggest that results be replicated with refined measurements of injection drug use. Nevertheless, estimating the standardized risk difference and ratio is straightforward, and injection drug use appears to increase the risk of AIDS. PMID:24966220

  20. Evaluating the efficiency of environmental monitoring programs

    USGS Publications Warehouse

    Levine, Carrie R.; Yanai, Ruth D.; Lampman, Gregory G.; Burns, Douglas A.; Driscoll, Charles T.; Lawrence, Gregory B.; Lynch, Jason; Schoch, Nina

    2014-01-01

    Statistical uncertainty analyses can be used to improve the efficiency of environmental monitoring, allowing sampling designs to maximize information gained relative to resources required for data collection and analysis. In this paper, we illustrate four methods of data analysis appropriate to four types of environmental monitoring designs. To analyze a long-term record from a single site, we applied a general linear model to weekly stream chemistry data at Biscuit Brook, NY, to simulate the effects of reducing sampling effort and to evaluate statistical confidence in the detection of change over time. To illustrate a detectable difference analysis, we analyzed a one-time survey of mercury concentrations in loon tissues in lakes in the Adirondack Park, NY, demonstrating the effects of sampling intensity on statistical power and the selection of a resampling interval. To illustrate a bootstrapping method, we analyzed the plot-level sampling intensity of forest inventory at the Hubbard Brook Experimental Forest, NH, to quantify the sampling regime needed to achieve a desired confidence interval. Finally, to analyze time-series data from multiple sites, we assessed the number of lakes and the number of samples per year needed to monitor change over time in Adirondack lake chemistry using a repeated-measures mixed-effects model. Evaluations of time series and synoptic long-term monitoring data can help determine whether sampling should be re-allocated in space or time to optimize the use of financial and human resources.
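    The Hubbard Brook bootstrapping analysis, relating plot-level sampling intensity to confidence-interval width, can be sketched on synthetic data. This is a hedged illustration with hypothetical gamma-distributed plot biomass; the function and values are not from the study.

```python
import numpy as np

def ci_halfwidth(plots, n_sub, n_boot=2000, seed=0):
    """Bootstrap 95% CI half-width of mean plot biomass when only
    n_sub of the inventory plots are sampled (percentile method)."""
    rng = np.random.default_rng(seed)
    sub = rng.choice(plots, n_sub, replace=False)   # the field sample
    means = [rng.choice(sub, n_sub).mean() for _ in range(n_boot)]
    lo, hi = np.percentile(means, [2.5, 97.5])
    return (hi - lo) / 2

# Hypothetical plot-level biomass values (Mg/ha), 400 inventory plots
plots = np.random.default_rng(3).gamma(shape=4.0, scale=50.0, size=400)
for n in (25, 100, 400):
    print(n, round(ci_halfwidth(plots, n), 1))
```

The half-width shrinks roughly with the square root of the number of plots, which is how such an analysis identifies the sampling regime needed to achieve a desired confidence interval.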

  1. General Mathematical Ability Predicts PASAT Performance in MS Patients: Implications for Clinical Interpretation and Cognitive Reserve.

    PubMed

    Sandry, Joshua; Paxton, Jessica; Sumowski, James F

    2016-03-01

    The Paced Auditory Serial Addition Test (PASAT) is used to assess cognitive status in multiple sclerosis (MS). Although the mathematical demands of the PASAT seem minor (single-digit arithmetic), cognitive psychology research links greater mathematical ability (e.g., algebra, calculus) to more rapid retrieval of single-digit math facts (e.g., 5+6=11). The present study evaluated the hypotheses that (a) mathematical ability is related to PASAT performance and (b) the relationships of both intelligence and education with PASAT performance are mediated by mathematical ability. Forty-five MS patients were assessed using the Wechsler Test of Adult Reading, the PASAT, and the Calculation Subtest of the Woodcock-Johnson-III. Regression-based path analysis and bootstrapping were used to compute 95% confidence intervals and test for mediation. Mathematical ability (a) was related to PASAT (β=.61; p<.001) and (b) fully mediated the relationship between Intelligence and PASAT (β=.76; 95% confidence interval (CI95)=.28, 1.45; direct effect of Intelligence, β=.42; CI95=-.39, 1.23) as well as the relationship between Education and PASAT (β=2.43, CI95=.81, 5.16, direct effect of Education, β=.83, CI95=-1.95, 3.61). Mathematical ability represents a source of error in the clinical interpretation of cognitive decline using the PASAT. Domain-specific cognitive reserve is discussed.
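    The percentile-bootstrap mediation test used here (and in several other records in this listing) can be sketched as follows. This is a hedged illustration on simulated data: mediation is inferred when the bootstrap CI for the indirect effect a*b excludes zero. All names and values are hypothetical, not the study's.

```python
import numpy as np

def indirect_effect(x, m, y):
    # a-path: slope of M ~ X; b-path: M's coefficient in Y ~ M + X
    a = np.polyfit(x, m, 1)[0]
    X2 = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X2, y, rcond=None)[0][1]
    return a * b

def mediation_ci(x, m, y, n_boot=5000, seed=0):
    """Percentile bootstrap 95% CI for the indirect effect a*b."""
    rng = np.random.default_rng(seed)
    n = x.size
    reps = []
    for _ in range(n_boot):
        i = rng.integers(0, n, n)   # resample cases with replacement
        reps.append(indirect_effect(x[i], m[i], y[i]))
    return np.percentile(reps, [2.5, 97.5])

# Hypothetical data: intelligence -> math ability -> PASAT, n = 45
rng = np.random.default_rng(4)
x = rng.normal(size=45)
m = 0.7 * x + rng.normal(scale=0.5, size=45)
y = 0.6 * m + rng.normal(scale=0.5, size=45)
print(mediation_ci(x, m, y))
```

Resampling whole cases (rows) preserves the joint dependence among predictor, mediator, and outcome, which is what justifies the percentile interval for the product term.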

  2. Examining mediators of child sexual abuse and sexually transmitted infections.

    PubMed

    Sutherland, Melissa A

    2011-01-01

    Interpersonal violence has increasingly been identified as a risk factor for sexually transmitted infections. Understanding the pathways between violence and sexually transmitted infections is essential to designing effective interventions. The aim of this study was to examine dissociative symptoms, alcohol use, and intimate partner physical violence and sexual coercion as mediators of child sexual abuse and lifetime sexually transmitted infection diagnosis among a sample of women. A convenience sample of 202 women was recruited from healthcare settings, with 189 complete cases for analysis. A multiple mediation model tested the proposed mediators of child sexual abuse and lifetime sexually transmitted infection diagnosis. Bootstrapping, a resampling method, was used to test for mediation. Key variables included child sexual abuse, dissociative symptoms, alcohol use, and intimate partner violence. Child sexual abuse was reported by 46% of the study participants (n = 93). Child sexual abuse was found to have an indirect effect on lifetime sexually transmitted infection diagnosis, with the effect occurring through dissociative symptoms (95% confidence interval = 0.0033-0.4714) and sexual coercion (95% confidence interval = 0.0359-0.7694). Alcohol use and physical violence were not found to be significant mediators. This study suggests that dissociation and intimate partner sexual coercion are important mediators of child sexual abuse and sexually transmitted infection diagnosis. Therefore, interventions that consider the roles of dissociative symptoms and interpersonal violence may be effective in preventing sexually transmitted infections among women.

  3. Home-based care after a shortened hospital stay versus hospital-based care postpartum: an economic evaluation.

    PubMed

    Petrou, Stavros; Boulvain, Michel; Simon, Judit; Maricot, Patrice; Borst, François; Perneger, Thomas; Irion, Olivier

    2004-08-01

    To compare the cost effectiveness of early postnatal discharge and home midwifery support with a traditional postnatal hospital stay. Cost minimisation analysis within a pragmatic randomised controlled trial. The University Hospital of Geneva and its catchment area. Four hundred and fifty-nine deliveries of a single infant at term following an uncomplicated pregnancy. Prospective economic evaluation alongside a randomised controlled trial in which women were allocated to either early postnatal discharge combined with home midwifery support (n= 228) or a traditional postnatal hospital stay (n= 231). Costs (Swiss francs, 2000 prices) to the health service, social services, patients, carers and society accrued between delivery and 28 days postpartum. Clinical and psychosocial outcomes were similar in the two trial arms. Early postnatal discharge combined with home midwifery support resulted in a significant reduction in postnatal hospital care costs (bootstrap mean difference 1524 francs, 95% confidence interval [CI] 675 to 2403) and a significant increase in community care costs (bootstrap mean difference 295 francs, 95% CI 245 to 343). There were no significant differences in average hospital readmission, hospital outpatient care, direct non-medical and indirect costs between the two trial groups. Overall, early postnatal discharge combined with home midwifery support resulted in a significant cost saving of 1221 francs per mother-infant dyad (bootstrap mean difference 1209 francs, 95% CI 202 to 2155). This finding remained relatively robust following variations in the values of key economic parameters performed as part of a comprehensive sensitivity analysis. A policy of early postnatal discharge combined with home midwifery support exhibits weak economic dominance over traditional postnatal care, that is, it significantly reduces costs without compromising the health and wellbeing of the mother and infant.

  4. Sevelamer is cost-saving vs. calcium carbonate in non-dialysis-dependent CKD patients in Italy: a patient-level cost-effectiveness analysis of the INDEPENDENT study.

    PubMed

    Ruggeri, Matteo; Cipriani, Filippo; Bellasi, Antonio; Russo, Domenico; Di Iorio, Biagio

    2014-01-01

    To conduct a cost-effectiveness analysis of sevelamer versus calcium carbonate in patients with non-dialysis-dependent CKD (NDD-CKD) from the Italian NHS perspective using patient-level data from the INDEPENDENT-CKD study. Patient-level data on all-cause mortality, dialysis inception and phosphate binder dose were obtained for all 107 sevelamer and 105 calcium carbonate patients from the INDEPENDENT-CKD study. Hospitalization and frequency of dialysis data were collected post hoc for all patients via a retrospective chart review. Phosphate binder, hospitalization, and dialysis costs were expressed in 2012 euros using hospital pharmacy, Italian diagnosis-related group and ambulatory tariffs, respectively. Total life years (LYs) and costs per treatment group were calculated for the 3-year period of the study. Bootstrapping was used to estimate confidence intervals around outcomes, costs, and cost-effectiveness and to calculate the cost-effectiveness acceptability curve. A subgroup analysis of patients who did not initiate dialysis during the INDEPENDENT-CKD study was also conducted. Sevelamer was associated with 0.06 additional LYs (95% CI -0.04 to 0.16) and cost savings of EUR -5,615 (95% CI -10,066 to -1,164) per patient compared with calcium carbonate. On the basis of the bootstrap analysis, sevelamer was dominant compared to calcium carbonate in 87.1% of 10,000 bootstrap replicates. Similar results were observed in the subgroup analysis. Results were driven by a significant reduction in all-cause mortality and significantly fewer hospitalizations in the sevelamer group, which offset the higher acquisition cost for sevelamer. Sevelamer provides more LYs and is less costly than calcium carbonate in patients with NDD-CKD in Italy.

  5. Predicting survival of men with recurrent prostate cancer after radical prostatectomy.

    PubMed

    Dell'Oglio, Paolo; Suardi, Nazareno; Boorjian, Stephen A; Fossati, Nicola; Gandaglia, Giorgio; Tian, Zhe; Moschini, Marco; Capitanio, Umberto; Karakiewicz, Pierre I; Montorsi, Francesco; Karnes, R Jeffrey; Briganti, Alberto

    2016-02-01

    To develop and externally validate a novel nomogram aimed at predicting cancer-specific mortality (CSM) after biochemical recurrence (BCR) among prostate cancer (PCa) patients treated with radical prostatectomy (RP) with or without adjuvant external beam radiotherapy (aRT) and/or hormonal therapy (aHT). The development cohort included 689 consecutive PCa patients treated with RP between 1987 and 2011 with subsequent BCR, defined as two subsequent prostate-specific antigen values >0.2 ng/ml. Multivariable competing-risks regression analyses tested the predictors of CSM after BCR for the purpose of 5-year CSM nomogram development. Internal validation was performed with 2000 bootstrap resamples. External validation was performed in a population of 6734 PCa patients with BCR after treatment with RP at the Mayo Clinic from 1987 to 2011. The predictive accuracy (PA) was quantified using the receiver operating characteristic-derived area under the curve and the calibration plot method. The 5-year CSM-free survival rate was 83.6% (confidence interval [CI]: 79.6-87.2). In multivariable analyses, pathologic stage T3b or more (hazard ratio [HR]: 7.42; p = 0.008), pathologic Gleason score 8-10 (HR: 2.19; p = 0.003), lymph node invasion (HR: 3.57; p = 0.001), time to BCR (HR: 0.99; p = 0.03) and age at BCR (HR: 1.04; p = 0.04) were each significantly associated with the risk of CSM after BCR. The bootstrap-corrected PA was 87.4% (bootstrap 95% CI: 82.0-91.7%). External validation of our nomogram showed a good PA of 83.2%. We developed and externally validated the first nomogram predicting 5-year CSM applicable to contemporary patients with BCR after RP with or without adjuvant treatment. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Measurement Error Correction for Predicted Spatiotemporal Air Pollution Exposures.

    PubMed

    Keller, Joshua P; Chang, Howard H; Strickland, Matthew J; Szpiro, Adam A

    2017-05-01

    Air pollution cohort studies are frequently analyzed in two stages, first modeling exposure then using predicted exposures to estimate health effects in a second regression model. The difference between predicted and unobserved true exposures introduces a form of measurement error in the second stage health model. Recent methods for spatial data correct for measurement error with a bootstrap and by requiring that the study design ensure spatial compatibility, that is, that monitor and subject locations are drawn from the same spatial distribution. These methods have not previously been applied to spatiotemporal exposure data. We analyzed the association between fine particulate matter (PM2.5) and birth weight in the US state of Georgia using records with estimated date of conception during 2002-2005 (n = 403,881). We predicted trimester-specific PM2.5 exposure using a complex spatiotemporal exposure model. To improve spatial compatibility, we restricted to mothers residing in counties with a PM2.5 monitor (n = 180,440). We accounted for additional measurement error via a nonparametric bootstrap. Third trimester PM2.5 exposure was associated with lower birth weight in the uncorrected (-2.4 g per 1 μg/m3 difference in exposure; 95% confidence interval [CI]: -3.9, -0.8) and bootstrap-corrected (-2.5 g, 95% CI: -4.2, -0.8) analyses. Results for the unrestricted analysis were attenuated (-0.66 g, 95% CI: -1.7, 0.35). This study presents a novel application of measurement error correction for spatiotemporal air pollution exposures. Our results demonstrate the importance of spatial compatibility between monitor and subject locations and provide evidence of the association between air pollution exposure and birth weight.

  7. Corrected confidence bands for functional data using principal components.

    PubMed

    Goldsmith, J; Greven, S; Crainiceanu, C

    2013-03-01

    Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. Copyright © 2013, The International Biometric Society.

  8. Corrected Confidence Bands for Functional Data Using Principal Components

    PubMed Central

    Goldsmith, J.; Greven, S.; Crainiceanu, C.

    2014-01-01

    Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. PMID:23003003

  9. The Effect of Social Problem Solving Skills in the Relationship between Traumatic Stress and Moral Disengagement among Inner-City African American High School Students

    PubMed Central

    Coker, Kendell L.; Ikpe, Uduakobong N.; Brooks, Jeannie S.; Page, Brian; Sobell, Mark B.

    2014-01-01

    This study examined the relationship between traumatic stress, social problem solving, and moral disengagement among African American inner-city high school students. Participants consisted of 45 (25 males and 20 females) African American students enrolled in grades 10 through 12. Mediation was assessed by testing for the indirect effect using the confidence interval derived from 10,000 bootstrapped resamples. The results revealed that social problem-solving skills have an indirect effect on the relationship between traumatic stress and moral disengagement. The findings suggest that African American youth that are negatively impacted by trauma evidence deficits in their social problem solving skills and are likely to be at an increased risk to morally disengage. Implications for culturally sensitive and trauma-based intervention programs are also provided. PMID:25071874

  10. Confidence Intervals for Omega Coefficient: Proposal for Calculus.

    PubMed

    Ventura-León, José Luis

    2018-01-01

    Reliability is understood as a metric property of the scores of a measurement instrument. Recently, the omega coefficient (ω) has come into use for reliability estimation. However, measurement is never exact, owing to the influence of random error, so it is necessary to calculate and report the confidence interval (CI), which locates the true value within a range of measurement. In this context, the article proposes a way to estimate the CI via the bootstrap method; to facilitate this procedure, R code (free software) is provided so that the calculations can be carried out easily. It is hoped that the article will be of help to researchers in the health field.
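    The article supplies R code; a comparable percentile-bootstrap CI for ω can be sketched in Python. This is a hedged approximation: the one-factor loadings are taken from the first principal component of the correlation matrix rather than a fitted factor model, and the data are simulated, so the numbers are illustrative only.

```python
import numpy as np

def omega_pca_approx(X):
    """Omega from a one-factor approximation: loadings from the first
    principal component of the correlation matrix stand in for
    maximum-likelihood factor loadings (an acknowledged shortcut)."""
    R = np.corrcoef(X, rowvar=False)
    vals, vecs = np.linalg.eigh(R)
    lam = np.sqrt(vals[-1]) * np.abs(vecs[:, -1])   # item loadings
    psi = 1.0 - lam**2                              # unique variances
    return lam.sum()**2 / (lam.sum()**2 + psi.sum())

def bootstrap_ci(X, stat, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resample rows, recompute the statistic."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    reps = [stat(X[rng.integers(0, n, n)]) for _ in range(n_boot)]
    return np.percentile(reps, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Simulated one-factor data: 5 items, loadings 0.7, n = 300
rng = np.random.default_rng(8)
f = rng.normal(size=(300, 1))
X = 0.7 * f + rng.normal(scale=0.71, size=(300, 5))
print(omega_pca_approx(X), bootstrap_ci(X, omega_pca_approx))
```

Resampling respondents (rows) keeps the inter-item correlation structure intact within each replicate, which is what the bootstrap CI for a reliability coefficient requires.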

  11. Prey Selection by an Apex Predator: The Importance of Sampling Uncertainty

    PubMed Central

    Davis, Miranda L.; Stephens, Philip A.; Willis, Stephen G.; Bassi, Elena; Marcon, Andrea; Donaggio, Emanuela; Capitani, Claudia; Apollonio, Marco

    2012-01-01

    The impact of predation on prey populations has long been a focus of ecologists, but a firm understanding of the factors influencing prey selection, a key predictor of that impact, remains elusive. High levels of variability observed in prey selection may reflect true differences in the ecology of different communities but might also reflect a failure to deal adequately with uncertainties in the underlying data. Indeed, our review showed that less than 10% of studies of European wolf predation accounted for sampling uncertainty. Here, we relate annual variability in wolf diet to prey availability and examine temporal patterns in prey selection; in particular, we identify how considering uncertainty alters conclusions regarding prey selection. Over nine years, we collected 1,974 wolf scats and conducted drive censuses of ungulates in Alpe di Catenaia, Italy. We bootstrapped scat and census data within years to construct confidence intervals around estimates of prey use, availability and selection. Wolf diet was dominated by boar (61.5±3.90 [SE] % of biomass eaten) and roe deer (33.7±3.61%). Temporal patterns of prey densities revealed that the proportion of roe deer in wolf diet peaked when boar densities were low, not when roe deer densities were highest. Considering only the two dominant prey types, Manly's standardized selection index using all data across years indicated selection for boar (mean = 0.73±0.023). However, sampling error resulted in wide confidence intervals around estimates of prey selection. Thus, despite considerable variation in yearly estimates, confidence intervals for all years overlapped. Failing to consider such uncertainty could lead erroneously to the assumption of differences in prey selection among years. This study highlights the importance of considering temporal variation in relative prey availability and accounting for sampling uncertainty when interpreting the results of dietary studies. PMID:23110122
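    Bootstrapping both the diet and availability data to put an interval around a selection index, as this study does, can be sketched with Manly's standardized index. This is a hedged illustration with hypothetical scat and census counts (not the study's data), using multinomial resampling of each data source.

```python
import numpy as np

def manly_index(used, available):
    """Manly's standardized selection index: use/availability ratios
    normalized to sum to one across prey types."""
    w = (used / used.sum()) / (available / available.sum())
    return w / w.sum()

rng = np.random.default_rng(5)
# Hypothetical counts: prey occurrences in scats, and drive-census counts
scats = np.array([120, 80])      # boar, roe deer found in scats
census = np.array([150, 350])    # boar, roe deer counted in drives

# Bootstrap both data sources to propagate sampling uncertainty
reps = []
for _ in range(2000):
    s = rng.multinomial(scats.sum(), scats / scats.sum())
    c = rng.multinomial(census.sum(), census / census.sum())
    reps.append(manly_index(s, c + 1)[0])   # +1 guards against zero counts
lo, hi = np.percentile(reps, [2.5, 97.5])
point = manly_index(scats, census)[0]
print(f"selection for boar: {point:.2f} ({lo:.2f}, {hi:.2f})")
```

With two prey types, an index above 0.5 with an interval excluding 0.5 indicates selection for boar; overlapping yearly intervals, as the study found, caution against inferring between-year differences.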

  12. BRIDGING GAPS BETWEEN ZOO AND WILDLIFE MEDICINE: ESTABLISHING REFERENCE INTERVALS FOR FREE-RANGING AFRICAN LIONS (PANTHERA LEO).

    PubMed

    Broughton, Heather M; Govender, Danny; Shikwambana, Purvance; Chappell, Patrick; Jolles, Anna

    2017-06-01

The International Species Information System has set forth an extensive database of reference intervals for zoologic species, allowing veterinarians and game park officials to distinguish normal health parameters from underlying disease processes in captive wildlife. However, several recent studies comparing reference values from captive and free-ranging animals have found significant variation between populations, necessitating the development of separate reference intervals in free-ranging wildlife to aid in the interpretation of health data. Thus, this study characterizes reference intervals for six biochemical analytes, eleven hematologic or immune parameters, and three hormones using samples from 219 free-ranging African lions (Panthera leo) captured in Kruger National Park, South Africa. Using the original sample population, exclusion criteria based on physical examination were applied to yield a final reference population of 52 clinically normal lions. Reference intervals were then generated via 90% confidence intervals on log-transformed data using parametric bootstrapping techniques. In addition to the generation of reference intervals, linear mixed-effect models and generalized linear mixed-effect models were used to model associations of each focal parameter with the following independent variables: age, sex, and body condition score. Age and sex were statistically significant drivers for changes in hepatic enzymes, renal values, hematologic parameters, and leptin, a hormone related to body fat stores. Body condition was positively correlated with changes in monocyte counts. Given the large variation in reference values taken from captive versus free-ranging lions, it is our hope that this study will serve as a baseline for future clinical evaluations and biomedical research targeting free-ranging African lions.

  13. Deep learning ensemble with asymptotic techniques for oscillometric blood pressure estimation.

    PubMed

    Lee, Soojeong; Chang, Joon-Hyuk

    2017-11-01

This paper proposes a deep learning based ensemble regression estimator with asymptotic techniques, and offers a method that can decrease uncertainty for oscillometric blood pressure (BP) measurements using the bootstrap and Monte-Carlo approach. While the former is used to estimate systolic and diastolic BP (SBP and DBP), the latter attempts to determine confidence intervals (CIs) for SBP and DBP based on oscillometric BP measurements. This work originally employs deep belief networks (DBN)-deep neural networks (DNN) to effectively estimate BPs based on oscillometric measurements. However, there are some inherent problems with these methods. First, it is not easy to determine the best DBN-DNN estimator, and worthy information might be omitted when selecting one DBN-DNN estimator and discarding the others. Additionally, our input feature vectors, obtained from only five measurements per subject, represent a very small sample size; this is a critical weakness when using the DBN-DNN technique and can cause overfitting or underfitting, depending on the structure of the algorithm. To address these problems, an ensemble with an asymptotic approach (based on combining the bootstrap with the DBN-DNN technique) is utilized to generate the pseudo features needed to estimate the SBP and DBP. In the first stage, the bootstrap-aggregation technique is used to create ensemble parameters. Afterward, the AdaBoost approach is employed for the second-stage SBP and DBP estimation. We then use the bootstrap and Monte-Carlo techniques in order to determine the CIs based on the target BP estimated using the DBN-DNN ensemble regression estimator with the asymptotic technique in the third stage. The proposed method mitigates estimation uncertainty, such as a large standard deviation of error (SDE): comparing the proposed DBN-DNN ensemble regression estimator with the DBN-DNN single regression estimator, we find that the SDEs of the SBP and DBP are reduced by 0.58 and 0.57 mmHg, respectively. These results indicate that the proposed method enhances performance by 9.18% and 10.88% compared with the DBN-DNN single estimator. The proposed methodology improves the accuracy of BP estimation and reduces the uncertainty of BP estimation. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Confidence limits for contribution plots in multivariate statistical process control using bootstrap estimates.

    PubMed

    Babamoradi, Hamid; van den Berg, Frans; Rinnan, Åsmund

    2016-02-18

In Multivariate Statistical Process Control (MSPC), when a fault is expected or detected in the process, contribution plots are essential for operators and optimization engineers in identifying those process variables that were affected by or might be the cause of the fault. The traditional way of interpreting a contribution plot is to examine the largest contributing process variables as the most probable faulty ones. This might result in false readings purely due to differences in natural variation, measurement uncertainties, etc. It is more reasonable to compare variable contributions for new process runs with historical results achieved under Normal Operating Conditions (NOC), where confidence limits (CLs) for contribution plots estimated from training data are used to judge new production runs. Asymptotic methods cannot provide confidence limits for contribution plots, leaving re-sampling methods as the only option. We suggest bootstrap re-sampling to build confidence limits for all contribution plots in online PCA-based MSPC. The new strategy for estimating CLs is compared to previously reported CLs for contribution plots. An industrial batch process dataset was used to illustrate the concepts. Copyright © 2016 Elsevier B.V. All rights reserved.
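A heavily simplified sketch of the idea, assuming hypothetical per-variable contribution values rather than contributions from an actual PCA model: bootstrap the NOC training runs, take an upper quantile per variable as its control limit, and flag the variables of a new run that exceed their limits.

```python
import random

def contribution_limits(noc, q=0.95, n_boot=500, seed=1):
    """Per-variable upper confidence limits for contribution plots.

    noc: list of runs under Normal Operating Conditions, each a list of
    per-variable contributions. For each variable, bootstrap the NOC
    runs and average the q-th empirical quantile over the resamples.
    """
    rng = random.Random(seed)
    n, p = len(noc), len(noc[0])
    limits = [0.0] * p
    for _ in range(n_boot):
        sample = [noc[rng.randrange(n)] for _ in range(n)]
        for j in range(p):
            col = sorted(row[j] for row in sample)
            limits[j] += col[min(int(q * n), n - 1)]
    return [total / n_boot for total in limits]

# Hypothetical NOC contributions for 3 process variables over 40 runs
# (half-normal, with different natural variation per variable).
gen = random.Random(0)
noc = [[abs(gen.gauss(0, s)) for s in (1.0, 2.0, 0.5)] for _ in range(40)]
cl = contribution_limits(noc)

# A new run is flagged on the variables whose contributions exceed the CLs.
new_run = [0.8, 5.9, 0.3]
flagged = [j for j, c in enumerate(new_run) if c > cl[j]]
```

Comparing each variable against its own NOC-derived limit, rather than ranking raw contributions, is what prevents naturally noisy variables from being misread as faulty.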

  15. Assessing Uncertainties in Surface Water Security: A Probabilistic Multi-model Resampling approach

    NASA Astrophysics Data System (ADS)

    Rodrigues, D. B. B.

    2015-12-01

    Various uncertainties are involved in the representation of processes that characterize interactions between societal needs, ecosystem functioning, and hydrological conditions. Here, we develop an empirical uncertainty assessment of water security indicators that characterize scarcity and vulnerability, based on a multi-model and resampling framework. We consider several uncertainty sources including those related to: i) observed streamflow data; ii) hydrological model structure; iii) residual analysis; iv) the definition of Environmental Flow Requirement method; v) the definition of critical conditions for water provision; and vi) the critical demand imposed by human activities. We estimate the overall uncertainty coming from the hydrological model by means of a residual bootstrap resampling approach, and by uncertainty propagation through different methodological arrangements applied to a 291 km² agricultural basin within the Cantareira water supply system in Brazil. Together, the two-component hydrograph residual analysis and the block bootstrap resampling approach result in a more accurate and precise estimate of the uncertainty (95% confidence intervals) in the simulated time series. We then compare the uncertainty estimates associated with water security indicators using a multi-model framework and provided by each model uncertainty estimation approach. The method is general and can be easily extended forming the basis for meaningful support to end-users facing water resource challenges by enabling them to incorporate a viable uncertainty analysis into a robust decision making process.
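The block bootstrap mentioned above is easy to sketch; the AR(1)-style residual series below is simulated for illustration and is not the study's hydrological data:

```python
import random

def moving_block_bootstrap(series, block_len, rng):
    """One moving-block bootstrap resample: draw overlapping blocks of
    length block_len with replacement and concatenate them until the
    resample reaches the original length, preserving short-range
    autocorrelation inside each block."""
    n = len(series)
    out = []
    while len(out) < n:
        start = rng.randrange(n - block_len + 1)
        out.extend(series[start:start + block_len])
    return out[:n]

# Simulated AR(1)-style residual series (autocorrelated, like model residuals).
rng = random.Random(7)
resid = [0.0]
for _ in range(199):
    resid.append(0.7 * resid[-1] + rng.gauss(0, 1))

# 95% percentile interval for the mean residual from 1000 block resamples.
means = sorted(
    sum(b) / len(b)
    for b in (moving_block_bootstrap(resid, 10, rng) for _ in range(1000))
)
ci = (means[24], means[974])
```

Drawing overlapping blocks rather than single observations is what preserves the serial correlation that an ordinary i.i.d. bootstrap would destroy.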

  16. Effects of Changes in Potassium With Valsartan Use on Diabetes Risk: Nateglinide and Valsartan in Impaired Glucose Tolerance Outcomes Research (NAVIGATOR) Trial

    PubMed Central

    Thomas, Laine; Svetkey, Laura; Brancati, Frederick L.; Califf, Robert M.; Edelman, David

    2013-01-01

    BACKGROUND Low and low-normal serum potassium is associated with an increased risk of diabetes. We hypothesized that the protective effect of valsartan on diabetes risk could be mediated by its effect of raising serum potassium. METHODS We analyzed data from the Nateglinide and Valsartan in Impaired Glucose Tolerance Outcomes Research (NAVIGATOR) trial, which randomized participants at risk for diabetes to either valsartan (up to 160mg daily) or no valsartan. Using Cox models, we evaluated the effect of valsartan on diabetes risk over a median of 4 years of follow-up and calculated the mediation effect of serum potassium as the difference in treatment hazard ratios from models excluding and including 1-year change in serum potassium. The 95% confidence interval (CI) for the difference in log hazard ratios was computed by bootstrapping. RESULTS The hazard ratio for developing diabetes among those on valsartan vs. no valsartan was 0.866 (95% CI = 0.795–0.943) vs. 0.868 (95% CI = 0.797–0.945), after controlling for 1-year change in potassium. The bootstrap 95% CI for a difference in these log hazard ratios was not statistically significant (−0.003 to 0.009). CONCLUSIONS Serum potassium does not appear to significantly mediate the protective effect of valsartan on diabetes risk. PMID:23417031

  17. Effect of Maternal–Child Home Visitation on Pregnancy Spacing for First-Time Latina Mothers

    PubMed Central

    Chesnokova, Arina; Matone, Meredith; Luan, Xianqun; Localio, A. Russell; Rubin, David M.

    2014-01-01

    Objectives. We examined the impact of a maternal–child home visitation program on birth spacing for first-time Latina mothers, focusing on adolescents and women who identified as Mexican or Puerto Rican. Methods. This was a retrospective cohort study. One thousand Latina women enrolled in the Pennsylvania Nurse–Family Partnership between January 1, 2003, and December 31, 2007, were matched to nonenrolled Latina women using propensity scores. The primary outcome was the time to second pregnancy that resulted in a live birth (interpregnancy interval). Proportional hazards models and bootstrap methods compared the time to event. Results. Home visitation was associated with a small decrease in the risk of a short interpregnancy interval (≤ 18 months) among Latina women (hazards ratio [HR] = 0.86; 95% confidence interval [CI] = 0.75, 0.99). This effect was driven by outcomes among younger adolescent women (HR = 0.80; 95% CI = 0.65, 0.96). There was also a trend toward significance for women of Mexican heritage (HR = 0.74; 95% CI = 0.49, 1.07), although this effect might be attributed to individual agency performance. Conclusions. Home visitation using the Nurse–Family Partnership model had measurable effects on birth spacing in Latina women. PMID:24354820

  18. Effect of maternal-child home visitation on pregnancy spacing for first-time Latina mothers.

    PubMed

    Yun, Katherine; Chesnokova, Arina; Matone, Meredith; Luan, Xianqun; Localio, A Russell; Rubin, David M

    2014-02-01

    We examined the impact of a maternal-child home visitation program on birth spacing for first-time Latina mothers, focusing on adolescents and women who identified as Mexican or Puerto Rican. This was a retrospective cohort study. One thousand Latina women enrolled in the Pennsylvania Nurse-Family Partnership between January 1, 2003, and December 31, 2007, were matched to nonenrolled Latina women using propensity scores. The primary outcome was the time to second pregnancy that resulted in a live birth (interpregnancy interval). Proportional hazards models and bootstrap methods compared the time to event. Home visitation was associated with a small decrease in the risk of a short interpregnancy interval (≤ 18 months) among Latina women (hazards ratio [HR] = 0.86; 95% confidence interval [CI] = 0.75, 0.99). This effect was driven by outcomes among younger adolescent women (HR = 0.80; 95% CI = 0.65, 0.96). There was also a trend toward significance for women of Mexican heritage (HR = 0.74; 95% CI = 0.49, 1.07), although this effect might be attributed to individual agency performance. Home visitation using the Nurse-Family Partnership model had measurable effects on birth spacing in Latina women.

  19. Reference intervals for selected serum biochemistry analytes in cheetahs Acinonyx jubatus.

    PubMed

    Hudson-Lamb, Gavin C; Schoeman, Johan P; Hooijberg, Emma H; Heinrich, Sonja K; Tordiffe, Adrian S W

    2016-02-26

Published haematologic and serum biochemistry reference intervals are very scarce for captive cheetahs, and even more so for free-ranging cheetahs. The current study was performed to establish reference intervals for selected serum biochemistry analytes in cheetahs. Baseline serum biochemistry analytes were analysed from 66 healthy Namibian cheetahs. Samples were collected from 30 captive cheetahs at the AfriCat Foundation and 36 free-ranging cheetahs from central Namibia. The effects of captivity status, age, sex and haemolysis score on the tested serum analytes were investigated. The biochemistry analytes that were measured were sodium, potassium, magnesium, chloride, urea and creatinine. The 90% confidence interval of the reference limits was obtained using the non-parametric bootstrap method. Reference intervals were preferentially determined by the non-parametric method and were as follows: sodium (128 mmol/L - 166 mmol/L), potassium (3.9 mmol/L - 5.2 mmol/L), magnesium (0.8 mmol/L - 1.2 mmol/L), chloride (97 mmol/L - 130 mmol/L), urea (8.2 mmol/L - 25.1 mmol/L) and creatinine (88 µmol/L - 288 µmol/L). Reference intervals from the current study were compared with International Species Information System values for cheetahs and found to be narrower. Moreover, age, sex and haemolysis score had no significant effect on the serum analytes in this study. Separate reference intervals for captive and free-ranging cheetahs were also determined. Captive cheetahs had higher urea values, most likely due to dietary factors. This study is the first to establish reference intervals for serum biochemistry analytes in cheetahs according to international guidelines. These results can be used for future health and disease assessments in both captive and free-ranging cheetahs.
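A minimal sketch of nonparametric bootstrap confidence intervals around the 2.5th/97.5th percentile reference limits, using simulated sodium values rather than the study's cheetah data:

```python
import random

def quantile(sorted_xs, q):
    """Empirical quantile by linear interpolation on a sorted list."""
    pos = q * (len(sorted_xs) - 1)
    i, frac = int(pos), pos - int(pos)
    if i + 1 < len(sorted_xs):
        return sorted_xs[i] * (1 - frac) + sorted_xs[i + 1] * frac
    return sorted_xs[i]

def reference_limits_ci(values, n_boot=2000, conf=0.90, seed=3):
    """Nonparametric bootstrap CIs (default 90%) around the 2.5th and
    97.5th percentile reference limits of a healthy-population sample."""
    rng = random.Random(seed)
    n = len(values)
    lows, highs = [], []
    for _ in range(n_boot):
        s = sorted(values[rng.randrange(n)] for _ in range(n))
        lows.append(quantile(s, 0.025))
        highs.append(quantile(s, 0.975))
    lows.sort()
    highs.sort()
    a = (1 - conf) / 2
    return ((quantile(lows, a), quantile(lows, 1 - a)),
            (quantile(highs, a), quantile(highs, 1 - a)))

# Simulated serum sodium values (mmol/L) for 66 healthy animals.
gen = random.Random(11)
sodium = [gen.gauss(147, 9) for _ in range(66)]
low_ci, high_ci = reference_limits_ci(sodium)
```

The interval (low, high) is the reference interval itself; `low_ci` and `high_ci` quantify how uncertain each limit is given the modest sample size, which is the role of the 90% CIs in the abstract.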

  20. Capital market based warning indicators of bank runs

    NASA Astrophysics Data System (ADS)

    Vakhtina, Elena; Wosnitza, Jan Henrik

    2015-01-01

In this investigation, we examine the univariate as well as the multivariate capabilities of the log-periodic [super-exponential] power law (LPPL) for the prediction of bank runs. The research is built upon daily CDS spreads of 40 international banks for the period from June 2007 to March 2010, i.e. at the heart of the global financial crisis. For this time period, 20 of the financial institutions received federal bailouts and are labeled as defaults, while the remaining institutions are categorized as non-defaults. The employed multivariate pattern recognition approach represents a modification of the CORA3 algorithm. The approach is found to be robust regardless of reasonable changes to its inputs. Although distinct alarm indices for individual banks do not clearly demonstrate predictive capabilities of the LPPL, the synchronized alarm indices confirm the multivariate discriminative power of LPPL patterns in CDS spread developments, supported by bootstrap intervals at the 70% confidence level.

  1. [The analysis of threshold effect using Empower Stats software].

    PubMed

    Lin, Lin; Chen, Chang-zhong; Yu, Xiao-dan

    2013-11-01

In many biomedical studies, a factor may have no influence on the outcome variable, or a consistent effect, only within a certain range; beyond a certain threshold value, the size and/or direction of the effect changes. This is called a threshold effect. Whether a factor (x) has a threshold effect on the outcome variable (y) can be assessed by fitting a smooth curve and checking for a piecewise-linear relationship, and then analyzing the threshold effect using a segmented regression model, a likelihood ratio test (LRT) and bootstrap resampling. Empower Stats software, developed by American X & Y Solutions Inc, has a threshold effect analysis module. The user can either input a threshold value at which to segment the data, or let the software determine the optimal threshold automatically and calculate its confidence interval.

  2. Impact of life stories on college students' positive and negative attitudes toward older adults.

    PubMed

    Yamashita, Takashi; Hahn, Sarah J; Kinney, Jennifer M; Poon, Leonard W

    2017-03-28

    Gerontological educators are increasingly interested in reducing college students' negative, and promoting their positive, attitudes toward older adults. Over the course of a semester, students from six 4-year institutions viewed three life story videos (documentaries) of older adults and completed pre- and posttest surveys that assessed their positive (Allophilia Scale) and negative (Fraboni Scale of Ageism) attitudes. The authors assessed changes in attitudinal scales between treatment (with videos, n = 80) and control (no video, n = 40) groups. Change score analysis with 95% bias-corrected bootstrap confidence intervals estimated the effects of the documentaries on students' attitudes. The treatment group showed significant increases in kinship, engagement, and enthusiasm, and decreases in antilocution and avoidance (all ps <.05). There was no significant change in affect, comfort, or discrimination. This study demonstrated how video stories impact students' attitudes about older adults.
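A sketch of a bias-corrected (BC) percentile bootstrap for a mean change score, using only Python's standard library; the pre/post scores are simulated, and the acceleration term (the "a" in BCa) is omitted for brevity:

```python
import random
from statistics import NormalDist

def bc_bootstrap_ci(pre, post, n_boot=2000, alpha=0.05, seed=5):
    """Bias-corrected (BC) percentile bootstrap CI for a mean change score.

    The bias correction z0 shifts the percentile endpoints according to
    the share of bootstrap statistics falling below the observed one.
    """
    rng = random.Random(seed)
    nd = NormalDist()
    changes = [b - a for a, b in zip(pre, post)]
    n = len(changes)
    theta = sum(changes) / n
    boots = sorted(
        sum(changes[rng.randrange(n)] for _ in range(n)) / n
        for _ in range(n_boot)
    )
    below = sum(1 for b in boots if b < theta)
    z0 = nd.inv_cdf(min(max(below / n_boot, 1e-9), 1 - 1e-9))
    z = nd.inv_cdf(1 - alpha / 2)
    lo_p = nd.cdf(2 * z0 - z)          # shifted lower percentile
    hi_p = nd.cdf(2 * z0 + z)          # shifted upper percentile
    return boots[int(lo_p * (n_boot - 1))], boots[int(hi_p * (n_boot - 1))]

# Hypothetical pre/post attitude scores for 80 students (true mean change +2).
gen = random.Random(8)
pre = [gen.gauss(30, 5) for _ in range(80)]
post = [x + gen.gauss(2, 3) for x in pre]
lo, hi = bc_bootstrap_ci(pre, post)
```

An interval excluding zero corresponds to a significant change score; when the bootstrap distribution is unbiased (z0 near 0), BC reduces to the plain percentile interval.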

  3. Two SPSS programs for interpreting multiple regression results.

    PubMed

    Lorenzo-Seva, Urbano; Ferrando, Pere J; Chico, Eliseo

    2010-02-01

    When multiple regression is used in explanation-oriented designs, it is very important to determine both the usefulness of the predictor variables and their relative importance. Standardized regression coefficients are routinely provided by commercial programs. However, they generally function rather poorly as indicators of relative importance, especially in the presence of substantially correlated predictors. We provide two user-friendly SPSS programs that implement currently recommended techniques and recent developments for assessing the relevance of the predictors. The programs also allow the user to take into account the effects of measurement error. The first program, MIMR-Corr.sps, uses a correlation matrix as input, whereas the second program, MIMR-Raw.sps, uses the raw data and computes bootstrap confidence intervals of different statistics. The SPSS syntax, a short manual, and data files related to this article are available as supplemental materials from http://brm.psychonomic-journals.org/content/supplemental.

  4. Quantile rank maps: a new tool for understanding individual brain development.

    PubMed

    Chen, Huaihou; Kelly, Clare; Castellanos, F Xavier; He, Ye; Zuo, Xi-Nian; Reiss, Philip T

    2015-05-01

    We propose a novel method for neurodevelopmental brain mapping that displays how an individual's values for a quantity of interest compare with age-specific norms. By estimating smoothly age-varying distributions at a set of brain regions of interest, we derive age-dependent region-wise quantile ranks for a given individual, which can be presented in the form of a brain map. Such quantile rank maps could potentially be used for clinical screening. Bootstrap-based confidence intervals are proposed for the quantile rank estimates. We also propose a recalibrated Kolmogorov-Smirnov test for detecting group differences in the age-varying distribution. This test is shown to be more robust to model misspecification than a linear regression-based test. The proposed methods are applied to brain imaging data from the Nathan Kline Institute Rockland Sample and from the Autism Brain Imaging Data Exchange (ABIDE) sample. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Analysis of spreadable cheese by Raman spectroscopy and chemometric tools.

    PubMed

    Oliveira, Kamila de Sá; Callegaro, Layce de Souza; Stephani, Rodrigo; Almeida, Mariana Ramos; de Oliveira, Luiz Fernando Cappa

    2016-03-01

    In this work, FT-Raman spectroscopy was explored to evaluate spreadable cheese samples. A partial least squares discriminant analysis was employed to identify the spreadable cheese samples containing starch. To build the models, two types of samples were used: commercial samples and samples manufactured in local industries. The method of supervised classification PLS-DA was employed to classify the samples as adulterated or without starch. Multivariate regression was performed using the partial least squares method to quantify the starch in the spreadable cheese. The limit of detection obtained for the model was 0.34% (w/w) and the limit of quantification was 1.14% (w/w). The reliability of the models was evaluated by determining the confidence interval, which was calculated using the bootstrap re-sampling technique. The results show that the classification models can be used to complement classical analysis and as screening methods. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Comparing Indirect Effects in Different Groups in Single-Group and Multi-Group Structural Equation Models

    PubMed Central

    Ryu, Ehri; Cheong, Jeewon

    2017-01-01

    In this article, we evaluated the performance of statistical methods in single-group and multi-group analysis approaches for testing group difference in indirect effects and for testing simple indirect effects in each group. We also investigated whether the performance of the methods in the single-group approach was affected when the assumption of equal variance was not satisfied. The assumption was critical for the performance of the two methods in the single-group analysis: the method using a product term for testing the group difference in a single path coefficient, and the Wald test for testing the group difference in the indirect effect. Bootstrap confidence intervals in the single-group approach and all methods in the multi-group approach were not affected by the violation of the assumption. We compared the performance of the methods and provided recommendations. PMID:28553248

  7. Relationships between work environment factors and presenteeism mediated by employees' health: a preliminary study.

    PubMed

    McGregor, Alisha; Iverson, Donald; Caputi, Peter; Magee, Christopher; Ashbury, Fred

    2014-12-01

    This study investigates a research framework for presenteeism, in particular, whether work environment factors are indirectly related to presenteeism via employees' health. A total of 336 employees, 107 from a manufacturing company in Europe and 229 from various locations across North America, completed a self-report survey, which measured the association between presenteeism (dependent variable) and several health and work environment factors (independent variables). These relationships were tested using path analysis with bootstrapping in Mplus. Presenteeism was directly related to health burden (r = 0.77; P = 0.00) and work environment burden (r = 0.34; P = 0.00). The relationship between work environment burden and presenteeism was partially mediated by health burden (β = 0.08; 95% confidence interval, 0.002 to 0.16). These findings suggest both a direct and an indirect relationship between work environment factors and presenteeism at work.

  8. The experimental design approach to eluotropic strength of 20 solvents in thin-layer chromatography on silica gel.

    PubMed

    Komsta, Łukasz; Stępkowska, Barbara; Skibiński, Robert

    2017-02-03

The eluotropic strength on thin-layer silica plates was investigated for 20 chromatographic-grade solvents available on the current market, using 35 model compounds as test substances. A modern mixture screening design allowed each solvent's eluotropic strength to be estimated as a separate elution coefficient with an acceptable error of estimation (0.0913 in RM value). An additional bootstrapping technique was used to check the distribution and uncertainty of the eluotropic estimates, yielding confidence intervals very similar to those from linear regression. Principal component analysis showed that a single parameter (mean eluotropic strength) is sufficient to describe the solvent property, as it explains almost 90% of the variance in retention. The obtained eluotropic data are a good supplement to earlier published results, and their values can be interpreted in the context of RM differences. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. The experimental design approach to eluotropic strength of 20 solvents in thin-layer chromatography on silica gel.

    PubMed

    Komsta, Łukasz; Stępkowska, Barbara; Skibiński, Robert

    2017-01-04

The eluotropic strength on thin-layer silica plates was investigated for 20 chromatographic-grade solvents available on the current market, using 35 model compounds as test substances. A modern mixture screening design allowed each solvent's eluotropic strength to be estimated as a separate elution coefficient with an acceptable error of estimation (0.0913 in RM value). An additional bootstrapping technique was used to check the distribution and uncertainty of the eluotropic estimates, yielding confidence intervals very similar to those from linear regression. Principal component analysis showed that a single parameter (mean eluotropic strength) is sufficient to describe the solvent property, as it explains almost 90% of the variance in retention. The obtained eluotropic data are a good supplement to earlier published results, and their values can be interpreted in the context of RM differences. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Multiple Versus Single Set Validation of Multivariate Models to Avoid Mistakes.

    PubMed

    Harrington, Peter de Boves

    2018-01-02

Validation of multivariate models is of current importance for a wide range of chemical applications. Although important, it is often neglected. The common practice is to use a single external validation set for evaluation. This approach is deficient and may mislead investigators with results that are specific to the single validation set of data. In addition, no statistics are available regarding the precision of a derived figure of merit (FOM). A statistical approach using bootstrapped Latin partitions is advocated instead. This validation method makes efficient use of the data because each object is used once for validation. The method was reviewed a decade earlier, but primarily for the optimization of chemometric models; this review presents the reasons it should be used for generalized statistical validation. Average FOMs with confidence intervals are reported, and powerful matched-sample statistics may be applied for comparing models and methods. Examples demonstrate the problems with single validation sets.
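The partitioning scheme can be sketched as follows, under the assumption that a Latin partition here means disjoint, class-stratified validation splits; the helper names are illustrative:

```python
import random

def latin_partitions(labels, n_parts, rng):
    """Split sample indices into n_parts disjoint validation sets whose
    class proportions match the full data (one 'Latin partition')."""
    by_class = {}
    for i, y in enumerate(labels):
        by_class.setdefault(y, []).append(i)
    parts = [[] for _ in range(n_parts)]
    for idxs in by_class.values():
        rng.shuffle(idxs)                 # randomize within each class
        for k, i in enumerate(idxs):
            parts[k % n_parts].append(i)  # deal out round-robin
    return parts

def bootstrapped_latin_partitions(labels, n_parts, n_boot, seed=13):
    """n_boot independent Latin partitions: within each, every object is
    validated exactly once; across repeats, a figure of merit acquires a
    distribution, hence an average and a confidence interval."""
    rng = random.Random(seed)
    return [latin_partitions(labels, n_parts, rng) for _ in range(n_boot)]

# Hypothetical two-class dataset: 30 objects of class 'a', 15 of class 'b'.
labels = ['a'] * 30 + ['b'] * 15
runs = bootstrapped_latin_partitions(labels, n_parts=3, n_boot=5)
```

Each object appears in exactly one validation set per run, and repeating the random split gives the distribution from which average FOMs and their confidence intervals are computed.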

  11. Brief Report: Investigating Uncertainty in the Minimum Mortality Temperature: Methods and Application to 52 Spanish Cities.

    PubMed

    Tobías, Aurelio; Armstrong, Ben; Gasparrini, Antonio

    2017-01-01

The minimum mortality temperature from J- or U-shaped curves varies across cities with different climates. This variation conveys information on adaptation, but the ability to characterize it is limited by the absence of a method to describe uncertainty in estimated minimum mortality temperatures. We propose an approximate parametric bootstrap estimator of the confidence interval (CI) and standard error (SE) for the minimum mortality temperature from a temperature-mortality shape estimated by splines. The coverage of the estimated CIs was close to the nominal value (95%) in the simulated datasets, although SEs were slightly high. Applying the method to 52 Spanish provincial capital cities showed higher minimum mortality temperatures in hotter cities, rising at almost exactly the same rate as annual mean temperature. The method proposed for computing CIs and SEs for minimums from spline curves allows comparing minimum mortality temperatures in different cities and investigating their associations with climate properly, allowing for estimation uncertainty.

  12. Does behavioral bootstrapping boost weight control confidence?: a pilot study.

    PubMed

    Rohrer, James E; Vickers-Douglas, Kristin S; Stroebel, Robert J

    2008-04-01

Since confidence is an important predictor of ability to lose weight, methods for increasing weight-control confidence are important. The purpose of this study was to test the relationship between short-term behavior changes ('behavioral bootstrapping') and change in weight-control confidence in a small prospective weight-loss project. Data were available from 38 patients who received an initial motivational interview and a follow-up visit. Body mass index at baseline ranged from 25.5 kg/m² to 50.4 kg/m² (mean = 35.8, median = 34.4). Independent variables were change in weight (measured in kilograms in the clinic), self-reported change in minutes of physical activity, age, sex, and marital status. Minutes of physical activity were assessed at baseline and after 30 days, using the following question, "How many minutes do you exercise per week (e.g. fast walking, biking, treadmill)?" Weights were measured in the clinic. Weight change was inversely correlated with change in confidence (p = 0.01). An increase in physical activity was associated with an increase in confidence (p = 0.01). Age, sex, and marital status were not related to change in confidence. Independent effects of weight change and physical activity were estimated using multiple linear regression analysis: b = -0.44, p = 0.04 for change in weight, and b = 0.02, p = 0.03 for change in physical activity (r = 0.28). Short-term changes in behavior (losing weight and exercising more) lead to increased weight-control confidence in primary-care patients.

  13. Nonparametric change point estimation for survival distributions with a partially constant hazard rate.

    PubMed

    Brazzale, Alessandra R; Küchenhoff, Helmut; Krügel, Stefanie; Schiergens, Tobias S; Trentzsch, Heiko; Hartl, Wolfgang

    2018-04-05

    We present a new method for estimating a change point in the hazard function of a survival distribution assuming a constant hazard rate after the change point and a decreasing hazard rate before the change point. Our method is based on fitting a stump regression to p values for testing hazard rates in small time intervals. We present three real data examples describing survival patterns of severely ill patients, whose excess mortality rates are known to persist far beyond hospital discharge. For designing survival studies in these patients and for the definition of hospital performance metrics (e.g. mortality), it is essential to define adequate and objective end points. The reliable estimation of a change point will help researchers to identify such end points. By precisely knowing this change point, clinicians can distinguish between the acute phase with high hazard (time elapsed after admission and before the change point was reached), and the chronic phase (time elapsed after the change point) in which hazard is fairly constant. We show in an extensive simulation study that maximum likelihood estimation is not robust in this setting, and we evaluate our new estimation strategy including bootstrap confidence intervals and finite sample bias correction.

  14. Compatibility of household budget and individual nutrition surveys: results of the preliminary analysis.

    PubMed

    Naska, A; Trichopoulou, A

    2001-08-01

    The EU-supported project entitled "Compatibility of household budget and individual nutrition surveys and disparities in food habits" aimed at comparing individualised household budget survey (HBS) data with food consumption values derived from individual nutrition surveys (INS). The present paper provides a brief description of the methodology applied for rendering the datasets comparable. Results of the preliminary evaluation of their compatibility are also presented. A nonparametric modelling approach was used for the individualisation (age- and gender-specific) of the food data collected at the household level in the context of the national HBSs, and the bootstrap technique was used for the derivation of 95% confidence intervals. For each food group, INS- and HBS-derived mean values were calculated for twenty-four research units, jointly defined by country (four countries involved), gender (male, female) and age (younger, middle-aged and older). Pearson correlation coefficients were calculated. The results of this preliminary analysis show that there is considerable scope in the nutritional information derived from HBSs. Additional and more sophisticated work is, however, required, putting particular emphasis on addressing limitations present in both surveys and on deriving reliable individual consumption point and interval estimates on the basis of HBS data.

  15. Statistic and dosimetric criteria to assess the shift of the prescribed dose for lung radiotherapy plans when integrating point kernel models in medical physics: are we ready?

    PubMed

    Chaikh, Abdulhamid; Balosso, Jacques

    2016-12-01

    To apply statistical bootstrap analysis and dosimetric criteria to assess the change of prescribed dose (PD) for lung cancer needed to maintain the same clinical results when using new generations of dose calculation algorithms. Nine lung cancer cases were studied. For each patient, three treatment plans were generated using exactly the same beam arrangements. In plan 1, the dose was calculated using the pencil beam convolution (PBC) algorithm with heterogeneity correction using the modified Batho method (PBC-MB). In plan 2, the dose was calculated using the anisotropic analytical algorithm (AAA) with the same PD as plan 1. In plan 3, the dose was calculated using AAA with the monitor units (MUs) obtained from PBC-MB as input. The dosimetric criteria included MUs, delivered dose at the isocentre (Diso) and calculated dose to 95% of the target volume (D95). The bootstrap method was used to assess the significance of the dose differences and to accurately estimate the 95% confidence interval (95% CI). Wilcoxon and Spearman's rank tests were used to calculate P values and the correlation coefficient (ρ). A statistically significant dose difference was found using the point kernel model. A good correlation was observed between both algorithm types, with ρ > 0.9. Using AAA instead of PBC-MB, an adjustment of the PD at the isocentre is suggested. For a given set of patients, we assessed the need to readjust the PD for lung cancer using dosimetric indices and the bootstrap statistical method. Thus, if the goal is to keep the same clinical results, the PD for lung tumors has to be adjusted with AAA. According to our simulation, we suggest readjusting the PD by 5% and optimizing beam arrangements to better protect the organs at risk (OARs).
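    The bootstrap procedure described above for assessing dose differences can be illustrated with a minimal percentile-bootstrap sketch; the per-patient D95 differences below are invented for illustration and are not the study's data.

```python
import numpy as np

def percentile_ci(diffs, n_boot=10000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean of paired differences."""
    rng = np.random.default_rng(seed)
    # Resample patients with replacement and recompute the mean each time.
    means = np.array([rng.choice(diffs, size=len(diffs), replace=True).mean()
                      for _ in range(n_boot)])
    return np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Hypothetical per-patient differences in D95 (AAA minus PBC-MB), in Gy.
diffs = np.array([-2.1, -3.4, -1.8, -2.9, -2.5, -3.1, -2.2, -2.8, -2.6])
lo, hi = percentile_ci(diffs)
```

    If the resulting interval excludes zero, the dose difference between the two algorithms is judged significant at the chosen level.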

  16. Performance of Bootstrap MCEWMA: Study case of Sukuk Musyarakah data

    NASA Astrophysics Data System (ADS)

    Safiih, L. Muhamad; Hila, Z. Nurul

    2014-07-01

    Sukuk Musyarakah is one of several instruments of Islamic bond investment in Malaysia; this sukuk is essentially a restructuring of a conventional bond into a Syariah-compliant bond, based on the prohibition of any influence of usury, benefit or fixed return. Despite this prohibition, daily sukuk returns are not fixed, and statistically the return data form a time series that is serially dependent and autocorrelated. Such data pose a crucial problem in both statistics and finance. Sukuk returns can be statistically viewed through their volatility: high volatility reflects dramatic price changes and marks the bond as risky. However, this problem has received far less attention among researchers than conventional bonds have. The MCEWMA chart in Statistical Process Control (SPC) is mainly used to monitor autocorrelated data, and its application to daily returns of securities investment data has gained widespread attention among statisticians. However, this chart is often affected by inaccurate estimation, whether of the base model or its limits, producing large errors and a high probability of signalling an out-of-control process as a false alarm. To overcome this problem, a bootstrap approach is used in this study, hybridized with the MCEWMA base model to construct a new chart, the Bootstrap MCEWMA (BMCEWMA) chart. The hybrid model is applied to the daily returns of sukuk Musyarakah for Rantau Abang Capital Bhd. The BMCEWMA base model proved more effective than the original MCEWMA, with smaller estimation error, shorter confidence intervals and fewer false alarms. In other words, the hybrid chart reduces variability, as shown by the smaller error and false-alarm rates. We conclude that BMCEWMA outperforms MCEWMA in this application.
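    The abstract does not spell out how the bootstrap is hybridized with the MCEWMA base model, but any bootstrap of autocorrelated returns must preserve serial dependence. A generic moving-block resample, sketched here on a toy series, is one standard way to do that:

```python
import numpy as np

def moving_block_resample(x, block_len, seed=0):
    """One moving-block bootstrap replicate: concatenate randomly chosen
    overlapping blocks so short-range autocorrelation is preserved."""
    rng = np.random.default_rng(seed)
    n = len(x)
    n_blocks = -(-n // block_len)                    # ceiling division
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    pieces = [x[s:s + block_len] for s in starts]
    return np.concatenate(pieces)[:n]

# Toy autocorrelated "returns" series for illustration only.
x = np.cumsum(np.random.default_rng(1).normal(0, 1, 300)) * 0.01
xb = moving_block_resample(x, block_len=15)
```

    Repeating the resample many times and recomputing the chart statistic on each replicate yields empirical control limits that do not rely on distributional assumptions.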

  17. Parameter uncertainty and nonstationarity in regional extreme rainfall frequency analysis in Qu River Basin, East China

    NASA Astrophysics Data System (ADS)

    Zhu, Q.; Xu, Y. P.; Gu, H.

    2014-12-01

    Traditionally, regional frequency analysis methods were developed for stationary environmental conditions. Recent studies, however, have identified significant changes in hydrological records, leading to the 'death' of stationarity. Moreover, uncertainty in hydrological frequency analysis is persistent. This study aims to investigate the impact of one of the most important uncertainty sources, parameter uncertainty, together with nonstationarity, on design rainfall depth in Qu River Basin, East China. A spatial bootstrap is first proposed to analyze the uncertainty of design rainfall depth estimated by regional frequency analysis based on L-moments and estimated at the at-site scale. Meanwhile, a method combining generalized additive models with a 30-year moving window is employed to analyze nonstationarity in the extreme rainfall regime. The results show that the uncertainties of design rainfall depth with a 100-year return period under stationary conditions, estimated by the regional spatial bootstrap, can reach 15.07% and 12.22% with GEV and PE3, respectively. At the at-site scale, the uncertainties can reach 17.18% and 15.44% with GEV and PE3, respectively. Under nonstationary conditions, the uncertainties of maximum rainfall depth (corresponding to design rainfall depth) with 0.01 annual exceedance probability (corresponding to a 100-year return period) are 23.09% and 13.83% with GEV and PE3, respectively. Comparing the 90% confidence intervals, the uncertainty of design rainfall depth resulting from parameter uncertainty is less than that from nonstationary frequency analysis with GEV, but slightly larger with PE3. This study indicates that the spatial bootstrap can be successfully applied to analyze the uncertainty of design rainfall depth at both regional and at-site scales. The nonstationary analysis shows that the differences between nonstationary quantiles and their stationary equivalents are important for decision makers in water resources management and risk management.
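    As an illustration of bootstrapping design-quantile uncertainty, the sketch below computes a percentile-bootstrap confidence interval for a 100-year GEV return level. It is a simple at-site analogue of the paper's spatial bootstrap, applied to synthetic annual maxima, not the Qu River data.

```python
import numpy as np
from scipy.stats import genextreme

def return_level_ci(maxima, T=100, n_boot=200, alpha=0.10, seed=0):
    """Percentile bootstrap CI for the T-year return level of a GEV
    fitted to annual maxima."""
    rng = np.random.default_rng(seed)
    levels = []
    for _ in range(n_boot):
        sample = rng.choice(maxima, size=len(maxima), replace=True)
        c, loc, scale = genextreme.fit(sample)           # refit GEV by MLE
        levels.append(genextreme.ppf(1 - 1.0 / T, c, loc=loc, scale=scale))
    return np.percentile(levels, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Synthetic annual-maximum rainfall depths (mm), Gumbel-like for illustration.
maxima = np.random.default_rng(2).gumbel(100, 20, size=50)
lo, hi = return_level_ci(maxima)
```

    The half-width of this interval relative to the point estimate corresponds to the percentage uncertainties the abstract reports.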

  18. Sevelamer is cost effective versus calcium carbonate for the first-line treatment of hyperphosphatemia in new patients to hemodialysis: a patient-level economic evaluation of the INDEPENDENT-HD study.

    PubMed

    Ruggeri, Matteo; Bellasi, Antonio; Cipriani, Filippo; Molony, Donald; Bell, Cynthia; Russo, Domenico; Di Iorio, Biagio

    2015-10-01

    The recent multicenter, randomized, open-label INDEPENDENT study demonstrated that sevelamer improves survival in patients new to hemodialysis (HD) compared with calcium carbonate. The objective of this study was to determine the cost-effectiveness of sevelamer versus calcium carbonate for patients new to HD, using patient-level data from the INDEPENDENT study. Cost-effectiveness analysis. Adult patients new to HD in Italy. A patient-level cost-effectiveness analysis was conducted from the perspective of the Servizio Sanitario Nazionale, Italy's national health service. The analysis was conducted for a 3-year time horizon. The cost of dialysis was excluded from the base case analysis. Sevelamer was compared to calcium carbonate. Total life years (LYs), total costs, and the incremental cost per LY gained were calculated. Bootstrapping was used to estimate confidence intervals around LYs, costs, and cost-effectiveness and to calculate the cost-effectiveness acceptability curve. Sevelamer was associated with a gain of 0.26 in LYs compared to calcium carbonate, over the 3-year time horizon. Total drug costs were €3,282 higher for sevelamer versus calcium carbonate, while total hospitalization costs were €2,020 lower for sevelamer versus calcium carbonate. The total incremental cost of sevelamer versus calcium carbonate was €1,262, resulting in a cost per LY gained of €4,897. The bootstrap analysis demonstrated that sevelamer was cost effective compared with calcium carbonate in 99.4% of 10,000 bootstrap replicates, assuming a willingness-to-pay threshold of €20,000 per LY gained. Data on hospitalizations were taken from a post hoc retrospective chart review of the patients included in the INDEPENDENT study. Patient quality of life or health utility was not included in the analysis. Sevelamer is a cost-effective alternative to calcium carbonate for the first-line treatment of hyperphosphatemia in patients new to HD in Italy.
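    The bootstrap construction of a cost-effectiveness acceptability curve can be sketched as follows; the per-patient incremental life-years and costs are simulated (loosely echoing the reported means), not the INDEPENDENT data, and a paired-increment setup is assumed for simplicity.

```python
import numpy as np

def ceac(delta_ly, delta_cost, thresholds, n_boot=2000, seed=0):
    """Cost-effectiveness acceptability curve via patient-level bootstrap:
    at each willingness-to-pay value lam, the share of replicates whose
    mean incremental net benefit (lam * dLY - dCost) is positive."""
    rng = np.random.default_rng(seed)
    n = len(delta_ly)
    idx = rng.integers(0, n, size=(n_boot, n))       # resampled patient ids
    d_e = delta_ly[idx].mean(axis=1)                 # bootstrap mean LY gain
    d_c = delta_cost[idx].mean(axis=1)               # bootstrap mean cost
    return np.array([np.mean(lam * d_e - d_c > 0) for lam in thresholds])

# Hypothetical matched per-patient increments (sevelamer minus comparator).
rng = np.random.default_rng(3)
d_ly = rng.normal(0.26, 1.0, size=200)       # incremental life-years
d_cost = rng.normal(1262, 3000, size=200)    # incremental costs, EUR
probs = ceac(d_ly, d_cost, thresholds=[0, 10000, 20000, 50000])
```

    The curve rises with the willingness-to-pay threshold whenever the mean incremental effect is positive, which is the pattern reported in the study.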

  19. A Web-based nomogram predicting para-aortic nodal metastasis in incompletely staged patients with endometrial cancer: a Korean Multicenter Study.

    PubMed

    Kang, Sokbom; Lee, Jong-Min; Lee, Jae-Kwan; Kim, Jae-Weon; Cho, Chi-Heum; Kim, Seok-Mo; Park, Sang-Yoon; Park, Chan-Yong; Kim, Ki-Tae

    2014-03-01

    The purpose of this study is to develop a Web-based nomogram for predicting the individualized risk of para-aortic nodal metastasis in incompletely staged patients with endometrial cancer. From 8 institutions, the medical records of 397 patients who underwent pelvic and para-aortic lymphadenectomy as a surgical staging procedure were retrospectively reviewed. A multivariate logistic regression model was created and internally validated by rigorous bootstrap resampling methods. Finally, the model was transformed into a user-friendly Web-based nomogram (http://www.kgog.org/nomogram/empa001.html). The rate of para-aortic nodal metastasis was 14.4% (57/397 patients). Using a stepwise variable selection, 4 variables including deep myometrial invasion, non-endometrioid subtype, lymphovascular space invasion, and log-transformed CA-125 levels were finally adopted. After 1000 repetitions of bootstrapping, all of these 4 variables retained a significant association with para-aortic nodal metastasis in the multivariate analysis: deep myometrial invasion (P = 0.001), non-endometrioid histologic subtype (P = 0.034), lymphovascular space invasion (P = 0.003), and log-transformed serum CA-125 levels (P = 0.004). The model showed good discrimination (C statistic = 0.87; 95% confidence interval, 0.82-0.92) and accurate calibration (Hosmer-Lemeshow P = 0.74). This nomogram showed good performance in predicting para-aortic metastasis in patients with endometrial cancer. The tool may be useful in determining the extent of lymphadenectomy after incomplete surgery.

  20. Assessing uncertainties in surface water security: An empirical multimodel approach

    NASA Astrophysics Data System (ADS)

    Rodrigues, Dulce B. B.; Gupta, Hoshin V.; Mendiondo, Eduardo M.; Oliveira, Paulo Tarso S.

    2015-11-01

    Various uncertainties are involved in the representation of processes that characterize interactions among societal needs, ecosystem functioning, and hydrological conditions. Here we develop an empirical uncertainty assessment of water security indicators that characterize scarcity and vulnerability, based on a multimodel and resampling framework. We consider several uncertainty sources including those related to (i) observed streamflow data; (ii) hydrological model structure; (iii) residual analysis; (iv) the method for defining Environmental Flow Requirement; (v) the definition of critical conditions for water provision; and (vi) the critical demand imposed by human activities. We estimate the overall hydrological model uncertainty by means of a residual bootstrap resampling approach, and by uncertainty propagation through different methodological arrangements applied to a 291 km² agricultural basin within the Cantareira water supply system in Brazil. Together, the two-component hydrograph residual analysis and the block bootstrap resampling approach result in a more accurate and precise estimate of the uncertainty (95% confidence intervals) in the simulated time series. We then compare the uncertainty estimates associated with water security indicators using a multimodel framework and the uncertainty estimates provided by each model uncertainty estimation approach. The range of values obtained for the water security indicators suggests that the models/methods are robust and perform well in a range of plausible situations. The method is general and can be easily extended, thereby forming the basis for meaningful support to end-users facing water resource challenges by enabling them to incorporate a viable uncertainty analysis into a robust decision-making process.

  1. Assessing participation in community-based physical activity programs in Brazil.

    PubMed

    Reis, Rodrigo S; Yan, Yan; Parra, Diana C; Brownson, Ross C

    2014-01-01

    This study aimed to develop and validate a risk prediction model to examine the characteristics that are associated with participation in community-based physical activity programs in Brazil. We used pooled data from three surveys conducted from 2007 to 2009 in state capitals of Brazil with 6166 adults. A risk prediction model was built considering program participation as an outcome. The predictive accuracy of the model was quantified through discrimination (C statistic) and calibration (Brier score) properties. Bootstrapping methods were used to validate the predictive accuracy of the final model. The final model showed sex (women: odds ratio [OR] = 3.18, 95% confidence interval [CI] = 2.14-4.71), having less than high school degree (OR = 1.71, 95% CI = 1.16-2.53), reporting a good health (OR = 1.58, 95% CI = 1.02-2.24) or very good/excellent health (OR = 1.62, 95% CI = 1.05-2.51), having any comorbidity (OR = 1.74, 95% CI = 1.26-2.39), and perceiving the environment as safe to walk at night (OR = 1.59, 95% CI = 1.18-2.15) as predictors of participation in physical activity programs. Accuracy indices were adequate (C index = 0.778, Brier score = 0.031) and similar to those obtained from bootstrapping (C index = 0.792, Brier score = 0.030). Sociodemographic and health characteristics as well as perceptions of the environment are strong predictors of participation in community-based programs in selected cities of Brazil.
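    Bootstrap validation of a prediction model's discrimination, as used in the study above, typically works by subtracting the mean "optimism" (apparent minus original-sample performance over resamples) from the apparent C index. The sketch below is a toy univariate stand-in for the authors' multivariable logistic model, covering the C index only:

```python
import numpy as np
from scipy.stats import rankdata

def auc(score, y):
    """C statistic (AUC) via the rank-sum formula."""
    r = rankdata(score)
    n1 = y.sum()
    n0 = len(y) - n1
    return (r[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

def optimism_corrected_auc(x, y, n_boot=200, seed=0):
    """Refit a simple linear score on each resample, measure apparent
    minus original-sample AUC, and subtract the mean optimism."""
    rng = np.random.default_rng(seed)
    fit = lambda xi, yi: np.polyfit(xi, yi, 1)        # toy linear score
    score = lambda coef, xi: np.polyval(coef, xi)
    apparent = auc(score(fit(x, y), x), y)
    optimism = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(x), len(x))
        coef = fit(x[idx], y[idx])
        optimism.append(auc(score(coef, x[idx]), y[idx])
                        - auc(score(coef, x), y))
    return apparent - np.mean(optimism)

rng = np.random.default_rng(5)
y = rng.integers(0, 2, size=300)                      # simulated outcome
x = rng.normal(size=300) + 0.8 * y                    # higher scores in events
c_corrected = optimism_corrected_auc(x, y)
```

    A corrected C index close to the apparent one, as the study found, indicates little overfitting.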

  2. Estimation of the standardized risk difference and ratio in a competing risks framework: application to injection drug use and progression to AIDS after initiation of antiretroviral therapy.

    PubMed

    Cole, Stephen R; Lau, Bryan; Eron, Joseph J; Brookhart, M Alan; Kitahata, Mari M; Martin, Jeffrey N; Mathews, William C; Mugavero, Michael J

    2015-02-15

    There are few published examples of absolute risk estimated from epidemiologic data subject to censoring and competing risks with adjustment for multiple confounders. We present an example estimating the effect of injection drug use on 6-year risk of acquired immunodeficiency syndrome (AIDS) after initiation of combination antiretroviral therapy between 1998 and 2012 in an 8-site US cohort study with death before AIDS as a competing risk. We estimate the risk standardized to the total study sample by combining inverse probability weights with the cumulative incidence function; estimates of precision are obtained by bootstrap. In 7,182 patients (83% male, 33% African American, median age of 38 years), we observed 6-year standardized AIDS risks of 16.75% among 1,143 injection drug users and 12.08% among 6,039 nonusers, yielding a standardized risk difference of 4.68 (95% confidence interval: 1.27, 8.08) and a standardized risk ratio of 1.39 (95% confidence interval: 1.12, 1.72). Results may be sensitive to the assumptions of exposure-version irrelevance, no measurement bias, and no unmeasured confounding. These limitations suggest that results be replicated with refined measurements of injection drug use. Nevertheless, estimating the standardized risk difference and ratio is straightforward, and injection drug use appears to increase the risk of AIDS. © The Author 2014. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
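    A simplified version of bootstrapping a risk difference and risk ratio (ignoring the censoring, competing risks, and inverse probability weighting that the study handles) might look like the following, with invented toy cohorts:

```python
import numpy as np

def risk_ci(exposed, unexposed, n_boot=5000, alpha=0.05, seed=0):
    """Percentile bootstrap CIs for a risk difference and risk ratio
    from binary outcome indicators in two groups."""
    rng = np.random.default_rng(seed)
    rd, rr = [], []
    for _ in range(n_boot):
        e = rng.choice(exposed, size=len(exposed), replace=True)
        u = rng.choice(unexposed, size=len(unexposed), replace=True)
        rd.append(e.mean() - u.mean())
        rr.append(e.mean() / u.mean())
    pct = [100 * alpha / 2, 100 * (1 - alpha / 2)]
    return np.percentile(rd, pct), np.percentile(rr, pct)

# Toy cohorts: 1 = event (e.g. AIDS within 6 years), 0 = no event.
exposed = np.r_[np.ones(19), np.zeros(81)]       # observed risk 19%
unexposed = np.r_[np.ones(12), np.zeros(88)]     # observed risk 12%
(rd_lo, rd_hi), (rr_lo, rr_hi) = risk_ci(exposed, unexposed)
```

    In the actual analysis each bootstrap replicate would recompute the weighted cumulative incidence functions before taking the difference and ratio.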

  3. Pain catastrophizing mediates the relationship between self-reported strenuous exercise involvement and pain ratings: moderating role of anxiety sensitivity.

    PubMed

    Goodin, Burel R; McGuire, Lynanne M; Stapleton, Laura M; Quinn, Noel B; Fabian, Lacy A; Haythornthwaite, Jennifer A; Edwards, Robert R

    2009-11-01

    To investigate the cross-sectional associations among self-reported weekly strenuous exercise bouts, anxiety sensitivity, and their interaction with pain catastrophizing and pain responses to the cold pressor task (CPT) in healthy, ethnically diverse young adults (n = 79). Exercise involvement has been shown to have hypoalgesic effects and cognitive factors may partially explain this effect. Particularly, alterations in pain catastrophizing have been found to mediate the positive pain outcomes of multidisciplinary treatments incorporating exercise. Further, recent evidence suggests that exercise involvement and anxiety sensitivity may act together, as interacting factors, to exert an effect on catastrophizing and pain outcomes; however, further research is needed to clarify the nature of this interaction. Before the CPT, participants were asked to complete the Godin Leisure-Time Exercise Questionnaire, the Beck Depression Inventory, and the Anxiety Sensitivity Index. After the CPT, participants completed a modified version of the Pain Catastrophizing Scale and the Short Form-McGill Pain Questionnaire. At a high level of anxiety sensitivity, controlling for depressive symptoms, CPT immersion time, and sex differences, a bias-corrected (BC), bootstrapped confidence interval revealed that pain catastrophizing significantly mediated the relationship between self-reported weekly strenuous exercise bouts and pain response (95% BC Confidence Interval = -9.558, -0.800 with 1000 resamples). At intermediate and low levels of anxiety sensitivity, no significant mediation effects were found. These findings support that, for pain catastrophizing to mediate the strenuous exercise-pain response relation, individuals must possess a high level of anxiety sensitivity.
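    The bias-corrected (BC) bootstrap interval used above adjusts the percentile endpoints using the proportion of bootstrap estimates that fall below the point estimate. A minimal sketch on simulated data (the variable names echo the study but the data are made up):

```python
import numpy as np
from scipy.stats import norm

def bc_mediation_ci(x, m, y, n_boot=2000, alpha=0.05, seed=0):
    """Bias-corrected (BC) bootstrap CI for the indirect effect a*b in a
    simple X -> M -> Y mediation model, with OLS path estimates."""
    def ab(xi, mi, yi):
        a = np.polyfit(xi, mi, 1)[0]                        # M ~ X slope
        coef, *_ = np.linalg.lstsq(np.c_[xi, mi, np.ones_like(xi)],
                                   yi, rcond=None)
        return a * coef[1]                                  # a * (Y ~ M slope)
    rng = np.random.default_rng(seed)
    n = len(x)
    est = ab(x, m, y)
    boots = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)
        boots[i] = ab(x[idx], m[idx], y[idx])
    z0 = norm.ppf(np.mean(boots < est))                     # bias correction
    lo_q = 100 * norm.cdf(2 * z0 + norm.ppf(alpha / 2))
    hi_q = 100 * norm.cdf(2 * z0 + norm.ppf(1 - alpha / 2))
    return np.percentile(boots, [lo_q, hi_q])

rng = np.random.default_rng(4)
x = rng.normal(size=200)                  # e.g. weekly strenuous exercise
m = 0.5 * x + rng.normal(size=200)        # e.g. pain catastrophizing
y = 0.4 * m + rng.normal(size=200)        # e.g. pain response
lo, hi = bc_mediation_ci(x, m, y)
```

    Mediation is supported when the BC interval for the product of paths excludes zero, as in the study's high anxiety sensitivity condition.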

  4. Association Between Internalized HIV-Related Stigma and HIV Care Visit Adherence.

    PubMed

    Rice, Whitney S; Crockett, Kaylee B; Mugavero, Michael J; Raper, James L; Atkins, Ghislaine C; Turan, Bulent

    2017-12-15

    Internalized HIV-related stigma acts as a barrier to antiretroviral therapy (ART) adherence, but its effects on other HIV care continuum outcomes are unclear. Among 196 HIV clinic patients in Birmingham, AL, we assessed internalized HIV-related stigma and depressive symptom severity using validated multi-item scales and assessed ART adherence using a validated single-item measure. HIV visit adherence (attended out of total scheduled visits) was calculated using data from clinic records. Using covariate-adjusted regression analysis, we investigated the association between internalized stigma and visit adherence. Using path analytic methods with bootstrapping, we tested the mediating role of depressive symptoms in the association between internalized stigma and visit adherence and the mediating role of visit adherence in the association between internalized stigma and ART adherence. Higher internalized stigma was associated with lower visit adherence (B = -0.04, P = 0.04). Black (versus white) race and depressive symptoms were other significant predictors within this model. Mediation analysis yielded no indirect effect through depression in the association between internalized stigma and visit adherence (B = -0.18, SE = 0.11, 95% confidence interval: -0.44 to -0.02) in the whole sample. Supplemental mediated moderation analyses revealed gender-specific effects. Additionally, the effect of internalized stigma on suboptimal ART adherence was mediated by lower visit adherence (B = -0.18, SE = 0.11, 95% confidence interval: -0.44 to -0.02). Results highlight the importance of internalized HIV stigma to multiple and sequential HIV care continuum outcomes. Also, findings suggest multiple intervention targets, including addressing internalized stigma directly, reducing depressive symptoms, and promoting consistent engagement in care.

  5. Parents' willingness to pay for biologic treatments in juvenile idiopathic arthritis.

    PubMed

    Burnett, Heather F; Ungar, Wendy J; Regier, Dean A; Feldman, Brian M; Miller, Fiona A

    2014-12-01

    Biologic therapies are considered the standard of care for children with the most severe forms of juvenile idiopathic arthritis (JIA). Inconsistent and inadequate drug coverage, however, prevents many children from receiving timely and equitable access to the best treatment. The objective of this study was to evaluate parents' willingness to pay (WTP) for biologic and nonbiologic disease-modifying antirheumatic drugs (DMARDs) used to treat JIA. Utility weights from a discrete choice experiment were used to estimate the WTP for treatment characteristics including child-reported pain, participation in daily activities, side effects, days missed from school, drug treatment, and cost. Conditional logit regression was used to estimate utilities for each attribute level, and expected compensating variation was used to estimate the WTP. Bootstrapping was used to generate 95% confidence intervals for all WTP estimates. Parents had the highest marginal WTP for improved participation in daily activities and pain relief followed by the elimination of side effects of treatment. Parents were willing to pay $2080 (95% confidence interval $698-$4065) more for biologic DMARDs than for nonbiologic DMARDs if the biologic DMARD was more effective. Parents' WTP indicates their preference for treatments that reduce pain and improve daily functioning without side effects by estimating the monetary equivalent of utility for drug treatments in JIA. In addition to evidence of safety and efficacy, assessments of parents' preferences provide a broader perspective to decision makers by helping them understand the aspects of drug treatments in JIA that are most valued by families. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  6. Two-condition within-participant statistical mediation analysis: A path-analytic framework.

    PubMed

    Montoya, Amanda K; Hayes, Andrew F

    2017-03-01

    Researchers interested in testing mediation often use designs where participants are measured on a dependent variable Y and a mediator M in both of 2 different circumstances. The dominant approach to assessing mediation in such a design, proposed by Judd, Kenny, and McClelland (2001), relies on a series of hypothesis tests about components of the mediation model and is not based on an estimate of or formal inference about the indirect effect. In this article we recast Judd et al.'s approach in the path-analytic framework that is now commonly used in between-participant mediation analysis. By so doing, it is apparent how to estimate the indirect effect of a within-participant manipulation on some outcome through a mediator as the product of paths of influence. This path-analytic approach eliminates the need for discrete hypothesis tests about components of the model to support a claim of mediation, as Judd et al.'s method requires, because it relies only on an inference about the product of paths-the indirect effect. We generalize methods of inference for the indirect effect widely used in between-participant designs to this within-participant version of mediation analysis, including bootstrap confidence intervals and Monte Carlo confidence intervals. Using this path-analytic approach, we extend the method to models with multiple mediators operating in parallel and serially and discuss the comparison of indirect effects in these more complex models. We offer macros and code for SPSS, SAS, and Mplus that conduct these analyses. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
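    The Monte Carlo confidence interval mentioned above needs only the path estimates and their standard errors, not the raw data: simulate each path coefficient from its normal sampling distribution and take percentiles of the product. A sketch with made-up estimates:

```python
import numpy as np

def monte_carlo_ci(a, se_a, b, se_b, n_sim=100000, alpha=0.05, seed=0):
    """Monte Carlo CI for an indirect effect a*b: draw the path
    coefficients from their estimated sampling distributions and take
    percentiles of the simulated products."""
    rng = np.random.default_rng(seed)
    prod = rng.normal(a, se_a, n_sim) * rng.normal(b, se_b, n_sim)
    return np.percentile(prod, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Hypothetical path estimates (a: X -> M, b: M -> Y) and standard errors.
lo, hi = monte_carlo_ci(a=0.48, se_a=0.10, b=0.35, se_b=0.12)
```

    Unlike the bootstrap, this approach works even when only summary statistics are available, which is one reason the authors recommend it alongside bootstrap intervals.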

  7. Wavelet method for CT colonography computer-aided polyp detection.

    PubMed

    Li, Jiang; Van Uitert, Robert; Yao, Jianhua; Petrick, Nicholas; Franaszek, Marek; Huang, Adam; Summers, Ronald M

    2008-08-01

    Computed tomographic colonography (CTC) computer-aided detection (CAD) is a new method to detect colon polyps. Colonic polyps are abnormal growths that may become cancerous. Detection and removal of colonic polyps, particularly larger ones, has been shown to reduce the incidence of colorectal cancer. While high sensitivities and low false positive rates are consistently achieved for the detection of polyps sized 1 cm or larger, lower sensitivities and higher false positive rates occur when the goal of CAD is to identify "medium"-sized polyps, 6-9 mm in diameter. Such medium-sized polyps may be important for clinical patient management. We have developed a wavelet-based postprocessor to reduce false positives for this polyp size range. We applied the wavelet-based postprocessor to CTC CAD findings from 44 patients in whom 45 polyps with sizes of 6-9 mm were found at segmentally unblinded optical colonoscopy and visible on retrospective review of the CT colonography images. Prior to the application of the wavelet-based postprocessor, the CTC CAD system detected 33 of the polyps (sensitivity 73.33%) with 12.4 false positives per patient, a sensitivity comparable to that of expert radiologists. Fourfold cross-validation with 5000 bootstraps showed that the wavelet-based postprocessor could reduce the false positives by 56.61% (p < 0.001), to 5.38 per patient (95% confidence interval [4.41, 6.34]), without significant sensitivity degradation (32/45, 71.11%, 95% confidence interval [66.39%, 75.74%], p = 0.1713). We conclude that this wavelet-based postprocessor can substantially reduce the false positive rate of our CTC CAD for this important polyp size range.

  8. Duration of extracorporeal membrane oxygenation support and survival in cardiovascular surgery patients.

    PubMed

    Distelmaier, Klaus; Wiedemann, Dominik; Binder, Christina; Haberl, Thomas; Zimpfer, Daniel; Heinz, Gottfried; Koinig, Herbert; Felli, Alessia; Steinlechner, Barbara; Niessner, Alexander; Laufer, Günther; Lang, Irene M; Goliasch, Georg

    2018-06-01

    The overall therapeutic goal of venoarterial extracorporeal membrane oxygenation (ECMO) in patients with postcardiotomy shock is bridging to myocardial recovery. However, in patients with irreversible myocardial damage prolonged ECMO treatment would cause a delay or even withholding of further permanent potentially life-saving therapeutic options. We therefore assessed the prognostic effect of duration of ECMO support on survival in adult patients after cardiovascular surgery. We enrolled into our single-center registry a total of 354 patients who underwent venoarterial ECMO support after cardiovascular surgery at a university-affiliated tertiary care center. Through a median follow-up period of 45 months (interquartile range, 20-81 months), 245 patients (69%) died. We observed an increase in mortality with increasing duration of ECMO support. The association between increased duration of ECMO support and mortality persisted in patients who survived ECMO support with a crude hazard ratio of 1.96 (95% confidence interval, 1.40-2.74; P < .001) for 2-year mortality compared with the third tertile and the second tertile of ECMO duration. This effect was even more pronounced after multivariate adjustment using a bootstrap-selected confounder model with an adjusted hazard ratio of 2.30 (95% confidence interval, 1.52-3.48; P < .001) for 2-year long-term mortality. Prolonged venoarterial ECMO support is associated with poor outcome in adult patients after cardiovascular surgery. Our data suggest reevaluation of therapeutic strategies after 7 days of ECMO support because mortality disproportionally increases afterward. Copyright © 2018 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.

  9. Clustering of motor and nonmotor traits in leucine-rich repeat kinase 2 G2019S Parkinson's disease nonparkinsonian relatives: A multicenter family study.

    PubMed

    Mestre, Tiago A; Pont-Sunyer, Claustre; Kausar, Farah; Visanji, Naomi P; Ghate, Taneera; Connolly, Barbara S; Gasca-Salas, Carmen; Kern, Drew S; Jain, Jennifer; Slow, Elizabeth J; Faust-Socher, Achinoam; Kasten, Meike; Wadia, Pettarusp M; Zadikoff, Cindy; Kumar, Prakash; de Bie, Ronald M; Thomsen, Teri; Lang, Anthony E; Schüle, Birgitt; Klein, Christine; Tolosa, Eduardo; Marras, Connie

    2018-04-17

    The objective of this study was to determine phenotypic features that differentiate nonparkinsonian first-degree relatives of PD leucine-rich repeat kinase 2 (LRRK2) G2019S multiplex families, regardless of carrier status, from healthy controls because nonparkinsonian individuals in multiplex families seem to share a propensity to present neurological features. We included nonparkinsonian first-degree relatives of LRRK2 G2019S familial PD cases and unrelated healthy controls participating in established multiplex family LRRK2 cohorts. Study participants underwent neurologic assessment including cognitive screening, olfaction testing, and questionnaires for daytime sleepiness, depression, and anxiety. We used a multiple logistic regression model with backward variable selection, validated with bootstrap resampling, to establish the best combination of motor and nonmotor features that differentiates nonparkinsonian first-degree relatives of LRRK2 G2019S familial PD cases from unrelated healthy controls. We included 142 nonparkinsonian family members and 172 unrelated healthy controls. The combination of past or current symptoms of anxiety (adjusted odds ratio, 4.16; 95% confidence interval, 2.01-8.63), less daytime sleepiness (adjusted odds ratio [1 unit], 0.90; 95% confidence interval, 0.83-0.97], and worse motor UPDRS score (adjusted odds ratio [1 unit], 1.4; 95% confidence interval, 1.20-1.67) distinguished nonparkinsonian family members, regardless of LRRK2 G2019S mutation status, from unrelated healthy controls. The model accuracy was good (area under the curve = 79.3%). A set of motor and nonmotor features distinguishes first-degree relatives of LRRK2 G2019S probands, regardless of mutation status, from unrelated healthy controls. Environmental or non-LRRK2 genetic factors in LRRK2-associated PD may influence penetrance of the LRRK2 G2019S mutation. 
The relationship of these features to actual PD risk requires longitudinal observation of LRRK2 familial PD cohorts. © 2018 International Parkinson and Movement Disorder Society.

  10. BELM: Bayesian extreme learning machine.

    PubMed

    Soria-Olivas, Emilio; Gómez-Sanchis, Juan; Martín, José D; Vila-Francés, Joan; Martínez, Marcelino; Magdalena, José R; Serrano, Antonio J

    2011-03-01

    The theory of the extreme learning machine (ELM) has become very popular in the last few years. ELM is a new approach for learning the parameters of the hidden layers of a multilayer neural network (such as the multilayer perceptron or the radial basis function neural network). Its main advantage is its lower computational cost, which is especially relevant when dealing with many patterns defined in a high-dimensional space. This brief proposes a Bayesian approach to ELM, which presents some advantages over other approaches: it allows the introduction of a priori knowledge; it obtains confidence intervals (CIs) without the need to apply computationally intensive methods, e.g., the bootstrap; and it presents high generalization capabilities. Bayesian ELM is benchmarked against classical ELM on several artificial and real datasets that are widely used for the evaluation of machine learning algorithms. The results show that the proposed approach produces competitive accuracy with some additional advantages, namely, automatic production of CIs, reduced probability of model overfitting, and use of a priori knowledge.

  11. Satisfaction with Life Scale (SWLS) in Caregivers of Clinically-Referred Youth: Psychometric Properties and Mediation Analysis

    PubMed Central

    Athay, M. Michele

    2012-01-01

    This paper presents the psychometric evaluation of the Satisfaction with Life Scale (SWLS; Diener, Emmons, Larson & Griffen, 1985) used with a large sample (N = 610) of caregivers for youth receiving mental health services. Methods from classical test theory, factor analysis and item response theory are utilized. Additionally, this paper investigates whether caregiver strain mediates the effect of youth symptom severity on caregiver life satisfaction (N = 356). Bootstrapped confidence intervals are used to determine the significance of the mediated effects. Results indicate that the SWLS is a psychometrically sound instrument to be used with caregivers of clinically-referred youth. Mediation analyses found that the effect of youth symptom severity on caregiver life satisfaction is mediated by caregiver strain but that the mediation effect differs based on the type of youth symptoms. Caregiver strain is a partial mediator when externalizing symptoms are measured and a full mediator when internalizing symptoms are measured. Implications for future research and clinical practice are discussed. PMID:22407554
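    The bootstrapped confidence intervals for mediated effects used in this record can be sketched in a few lines. Below is a minimal percentile-bootstrap CI for an indirect effect a*b (a from the regression of the mediator M on X; b the coefficient of M in the two-predictor regression of Y on M and X), on synthetic data — the sample size, effect sizes, and variable roles are illustrative, not the study's:

    ```python
    import random
    import statistics as st

    def cov(a, b):
        """Sample covariance."""
        ma, mb = st.fmean(a), st.fmean(b)
        return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

    def indirect_effect(x, m, y):
        """a*b: a from M ~ X; b is the coefficient of M in Y ~ M + X (OLS)."""
        a = cov(x, m) / cov(x, x)
        denom = cov(m, m) * cov(x, x) - cov(m, x) ** 2
        b = (cov(y, m) * cov(x, x) - cov(y, x) * cov(m, x)) / denom
        return a * b

    def percentile_ci(x, m, y, n_boot=1000, alpha=0.05, seed=0):
        """Percentile bootstrap CI: resample whole cases with replacement."""
        rng = random.Random(seed)
        n = len(x)
        stats = []
        for _ in range(n_boot):
            idx = [rng.randrange(n) for _ in range(n)]
            stats.append(indirect_effect([x[i] for i in idx],
                                         [m[i] for i in idx],
                                         [y[i] for i in idx]))
        stats.sort()
        return stats[int(alpha / 2 * n_boot)], stats[int((1 - alpha / 2) * n_boot) - 1]

    # Synthetic mediation chain X -> M -> Y with true indirect effect 0.5 * 0.5 = 0.25
    rng = random.Random(42)
    x = [rng.gauss(0, 1) for _ in range(300)]
    m = [0.5 * xi + rng.gauss(0, 1) for xi in x]
    y = [0.5 * mi + rng.gauss(0, 1) for mi in m]
    lo, hi = percentile_ci(x, m, y)
    print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
    ```

    The mediated effect is judged significant when the interval excludes zero, which is the criterion the abstract describes.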

  12. PTSD symptoms and pain in Canadian military veterans: the mediating roles of anxiety, depression, and alcohol use.

    PubMed

    Irwin, Kara C; Konnert, Candace; Wong, May; O'Neill, Thomas A

    2014-04-01

    Symptoms of posttraumatic stress disorder (PTSD) and pain are often comorbid among veterans. The purpose of this study was to investigate to what extent symptoms of anxiety, depression, and alcohol use mediated the relationship between PTSD symptoms and pain among 113 treated male Canadian veterans. Measures of PTSD, pain, anxiety symptoms, depression symptoms, and alcohol use were collected as part of the initial assessment. Bootstrap resampling analyses were consistent with the hypothesis of mediation for anxiety and depression, but not for alcohol use. The confidence intervals did not include zero: the indirect effect of PTSD on pain through anxiety was .04, CI [.03, .07], and the indirect effect of PTSD on pain through depression was .04, CI [.02, .07]. These findings suggest that PTSD and pain symptoms among veterans may be related through the underlying symptoms of anxiety and depression, emphasizing the importance of targeting anxiety and depression symptoms when treating patients with comorbid PTSD and pain. © 2014 International Society for Traumatic Stress Studies.

  13. Inadequacy of Conventional Grab Sampling for Remediation Decision-Making for Metal Contamination at Small-Arms Ranges.

    PubMed

    Clausen, J L; Georgian, T; Gardner, K H; Douglas, T A

    2018-01-01

    Research shows grab sampling is inadequate for evaluating military ranges contaminated with energetics because of their highly heterogeneous distribution. Similar studies assessing the heterogeneous distribution of metals at small-arms ranges (SARs) are lacking. To address this, we evaluated whether grab sampling provides appropriate data for performing risk analysis at metal-contaminated SARs characterized with 30-48 grab samples. We evaluated the extractable Cu, Pb, Sb, and Zn content of the field data using a Monte Carlo random resampling-with-replacement (bootstrapping) simulation approach. Results indicate the 95% confidence interval of the mean for Pb (432 mg/kg) at one site was 200-700 mg/kg, with a data range of 5-4500 mg/kg. Considering that the U.S. Environmental Protection Agency screening level for lead is 400 mg/kg, the necessity of cleanup at this site is unclear. Resampling based on populations of 7 and 15 samples, sample sizes more realistic for the area, yielded high false-negative rates.
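    The resampling-with-replacement simulation described above can be illustrated with a short sketch. The "field" values below are synthetic, right-skewed stand-ins for the measured concentrations (the lognormal parameters are invented), chosen only to show how the 95% interval for the mean widens as the resample size drops from 30 toward 7:

    ```python
    import random
    import statistics as st

    def bootstrap_mean_ci(data, k, n_boot=5000, alpha=0.05, seed=0):
        """Percentile CI for the mean, resampling k values with replacement."""
        rng = random.Random(seed)
        means = sorted(st.fmean(rng.choices(data, k=k)) for _ in range(n_boot))
        return means[int(alpha / 2 * n_boot)], means[int((1 - alpha / 2) * n_boot) - 1]

    # Synthetic, right-skewed "concentration" field data (mg/kg) -- illustrative only
    rng = random.Random(7)
    field = [rng.lognormvariate(5.5, 1.2) for _ in range(40)]

    lo30, hi30 = bootstrap_mean_ci(field, k=30)
    lo7, hi7 = bootstrap_mean_ci(field, k=7)
    print(f"k=30 resamples: [{lo30:.0f}, {hi30:.0f}] mg/kg")
    print(f"k= 7 resamples: [{lo7:.0f}, {hi7:.0f}] mg/kg")  # much wider interval
    ```

    With only 7 values per resample the interval is far wider, which is how small grab-sample campaigns end up straddling a screening level and producing false negatives.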

  14. An index of effluent aquatic toxicity designed by partial least squares regression, using acute and chronic tests and expert judgements.

    PubMed

    Vindimian, Éric; Garric, Jeanne; Flammarion, Patrick; Thybaud, Éric; Babut, Marc

    1999-10-01

    The evaluation of the ecotoxicity of effluents requires a battery of biological tests on several species. In order to derive a summary parameter from such a battery, a single endpoint was calculated for all the tests: the EC10, obtained by nonlinear regression, with bootstrap evaluation of the confidence intervals. Principal component analysis was used to characterize and visualize the correlation between the tests. The table of the toxicity of the effluents was then submitted to a panel of experts, who classified the effluents according to the test results. Partial least squares (PLS) regression was used to fit the average value of the experts' judgements to the toxicity data, using a simple equation. Furthermore, PLS regression on partial data sets and other considerations resulted in an optimum battery, with two chronic tests and one acute test. The index is intended to be used for the classification of effluents based on their toxicity to aquatic species. Copyright © 1999 SETAC.

  15. An index of effluent aquatic toxicity designed by partial least squares regression, using acute and chronic tests and expert judgments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vindimian, E.; Garric, J.; Flammarion, P.

    1999-10-01

    The evaluation of the ecotoxicity of effluents requires a battery of biological tests on several species. In order to derive a summary parameter from such a battery, a single endpoint was calculated for all the tests: the EC10, obtained by nonlinear regression, with bootstrap evaluation of the confidence intervals. Principal component analysis was used to characterize and visualize the correlation between the tests. The table of the toxicity of the effluents was then submitted to a panel of experts, who classified the effluents according to the test results. Partial least squares (PLS) regression was used to fit the average value of the experts' judgments to the toxicity data, using a simple equation. Furthermore, PLS regression on partial data sets and other considerations resulted in an optimum battery, with two chronic tests and one acute test. The index is intended to be used for the classification of effluents based on their toxicity to aquatic species.

  16. Comprehensive health assessments for adults with intellectual disability living in the community - weighing up the costs and benefits.

    PubMed

    Gordon, Louisa G; Holden, Libby; Ware, Robert S; Taylor, Miriam T; Lennox, Nicholas G

    2012-12-01

    Health assessments have beneficial effects on health outcomes for people with intellectual disability living in the community. However, the effect on medical costs is unknown. We utilised Medicare Australia data on consultations, procedures and prescription drugs (including vaccinations) from all participants in a randomised control trial during 2002-03 that examined the effectiveness of a health assessment. Government health costs for adults with intellectual disability who did or did not receive an assessment were compared. Bootstrap statistics (95% confidence intervals) were employed to handle the right-skewed cost data. Over 12 months, patients receiving health assessments incurred total costs of $4523 (95% CI: $3521 to $5525), similar to those in usual care ($4466; 95% CI: $3283 to $5649). Costs were not significantly higher compared with the 12 month pre-intervention period. Health assessments for adults with intellectual disability living in the community are encouraged as they produce enhanced patient care but do not increase overall consultation or medication costs.

  17. Usual energy intake mediates the relationship between food reinforcement and BMI.

    PubMed

    Epstein, Leonard H; Carr, Katelyn A; Lin, Henry; Fletcher, Kelly D; Roemmich, James N

    2012-09-01

    The relative reinforcing value of food (RRV(food)) is positively associated with energy consumed and overweight status. One hypothesis relating these variables is that food reinforcement is related to BMI through usual energy intake. Using a sample of two hundred fifty-two adults of varying weight and BMI levels, results showed that usual energy intake mediated the relationship between RRV(food) and BMI (estimated indirect effect = 0.0027, bootstrapped 95% confidence intervals (CIs) 0.0002-0.0068, effect ratio = 0.34), controlling for age, sex, minority status, education, and reinforcing value of reading (RRV(reading)). Laboratory and usual energy intake were correlated (r = 0.24, P < 0.001), indicating that laboratory energy intake could provide an index of eating behavior in the natural environment. The mediational relationship observed suggests that increasing or decreasing food reinforcement could influence body weight by altering food consumption. Research is needed to develop methods of modifying RRV(food) to determine experimentally whether manipulating food reinforcement would result in changes in body weight.

  18. Modeling T-cell activation using gene expression profiling and state-space models.

    PubMed

    Rangel, Claudia; Angus, John; Ghahramani, Zoubin; Lioumi, Maria; Sotheran, Elizabeth; Gaiba, Alessia; Wild, David L; Falciani, Francesco

    2004-06-12

    We have used state-space models to reverse engineer transcriptional networks from highly replicated gene expression profiling time series data obtained from a well-established model of T-cell activation. State space models are a class of dynamic Bayesian networks that assume that the observed measurements depend on some hidden state variables that evolve according to Markovian dynamics. These hidden variables can capture effects that cannot be measured in a gene expression profiling experiment, e.g. genes that have not been included in the microarray, levels of regulatory proteins, the effects of messenger RNA and protein degradation, etc. Bootstrap confidence intervals are developed for parameters representing 'gene-gene' interactions over time. Our models represent the dynamics of T-cell activation and provide a methodology for the development of rational and experimentally testable hypotheses. Supplementary data and Matlab computer source code will be made available on the web at the URL given below. http://public.kgi.edu/~wild/LDS/index.htm

  19. Estimation of the diagnostic threshold accounting for decision costs and sampling uncertainty.

    PubMed

    Skaltsa, Konstantina; Jover, Lluís; Carrasco, Josep Lluís

    2010-10-01

    Medical diagnostic tests are used to classify subjects as non-diseased or diseased. The classification rule usually consists of classifying subjects using the values of a continuous marker that is dichotomised by means of a threshold. Here, the optimum threshold estimate is found by minimising a cost function that accounts for both decision costs and sampling uncertainty. The cost function is optimised either analytically in a normal distribution setting or empirically in a distribution-free setting when the underlying probability distributions of diseased and non-diseased subjects are unknown. Inference on the threshold estimates is based on approximate analytical standard errors and bootstrap-based approaches. The performance of the proposed methodology is assessed by means of a simulation study, and the sample size required for a given confidence interval precision and sample size ratio is also calculated. Finally, a case example based on previously published data concerning the diagnosis of Alzheimer's patients is provided in order to illustrate the procedure.
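    A distribution-free version of the cost-minimizing threshold, paired with a bootstrap standard error, might look like the sketch below. The decision costs (false negatives twice as costly as false positives) and the two marker distributions are invented for illustration, not taken from the paper:

    ```python
    import random
    import statistics as st

    def best_threshold(nondis, dis, c_fp=1.0, c_fn=2.0):
        """Threshold minimising total decision cost c_fp*FP + c_fn*FN,
        searched over the observed marker values (distribution-free)."""
        def cost(t):
            fp = sum(x > t for x in nondis)   # non-diseased classified diseased
            fn = sum(x <= t for x in dis)     # diseased classified non-diseased
            return c_fp * fp + c_fn * fn
        return min(sorted(nondis + dis), key=cost)

    def bootstrap_se(nondis, dis, n_boot=200, seed=0):
        """Bootstrap SE of the threshold; each group is resampled separately."""
        rng = random.Random(seed)
        reps = [best_threshold(rng.choices(nondis, k=len(nondis)),
                               rng.choices(dis, k=len(dis)))
                for _ in range(n_boot)]
        return st.stdev(reps)

    rng = random.Random(3)
    nondis = [rng.gauss(0.0, 1.0) for _ in range(80)]   # marker, non-diseased
    dis = [rng.gauss(2.0, 1.0) for _ in range(80)]      # marker, diseased
    t_hat = best_threshold(nondis, dis)
    se = bootstrap_se(nondis, dis)
    print(f"estimated threshold = {t_hat:.2f} (bootstrap SE = {se:.2f})")
    ```

    Because false negatives are penalised more heavily here, the estimated cut-off sits below the midpoint of the two marker means, and the bootstrap SE quantifies the sampling uncertainty the abstract refers to.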

  20. A Comparison of Three Tests of Mediation

    ERIC Educational Resources Information Center

    Warbasse, Rosalia E.

    2009-01-01

    A simulation study was conducted to evaluate the performance of three tests of mediation: the bias-corrected and accelerated bootstrap (Efron & Tibshirani, 1993), the asymmetric confidence limits test (MacKinnon, 2008), and a multiple regression approach described by Kenny, Kashy, and Bolger (1998). The evolution of these methods is reviewed and…

  1. The role of sleep problems in the relationship between peer victimization and antisocial behavior: A five-year longitudinal study.

    PubMed

    Chang, Ling-Yin; Wu, Wen-Chi; Wu, Chi-Chen; Lin, Linen Nymphas; Yen, Lee-Lan; Chang, Hsing-Yi

    2017-01-01

    Peer victimization in children and adolescents is a serious public health concern. Growing evidence exists for negative consequences of peer victimization, but research has mostly been short term and little is known about the mechanisms that moderate and mediate the impacts of peer victimization on subsequent antisocial behavior. The current study intended to examine the longitudinal relationship between peer victimization in adolescence and antisocial behavior in young adulthood and to determine whether sleep problems influence this relationship. In total, 2006 adolescents participated in a prospective study from 2009 to 2013. The moderating role of sleep problems was examined by testing the significance of the interaction between peer victimization and sleep problems. The mediating role of sleep problems was tested by using bootstrapping mediational analyses. All analyses were conducted using SAS 9.3 software. We found that peer victimization during adolescence was positively and significantly associated with antisocial behavior in young adulthood (β = 0.10, p < 0.0001). This association was mediated, but not moderated by sleep problems. Specifically, peer victimization first increased levels of sleep problems, which in turn elevated the risk of antisocial behavior (indirect effect: 0.01, 95% bootstrap confidence interval: 0.004, 0.021). These findings imply that sleep problems may operate as a potential mechanism through which peer victimization during adolescence leads to increases in antisocial behavior in young adulthood. Prevention and intervention programs that target sleep problems may yield benefits for decreasing antisocial behavior in adolescents who have been victimized by peers. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Assessing Participation in Community-Based Physical Activity Programs in Brazil

    PubMed Central

    REIS, RODRIGO S.; YAN, YAN; PARRA, DIANA C.; BROWNSON, ROSS C.

    2015-01-01

    Purpose This study aimed to develop and validate a risk prediction model to examine the characteristics that are associated with participation in community-based physical activity programs in Brazil. Methods We used pooled data from three surveys conducted from 2007 to 2009 in state capitals of Brazil with 6166 adults. A risk prediction model was built considering program participation as an outcome. The predictive accuracy of the model was quantified through discrimination (C statistic) and calibration (Brier score) properties. Bootstrapping methods were used to validate the predictive accuracy of the final model. Results The final model showed sex (women: odds ratio [OR] = 3.18, 95% confidence interval [CI] = 2.14–4.71), having less than high school degree (OR = 1.71, 95% CI = 1.16–2.53), reporting a good health (OR = 1.58, 95% CI = 1.02–2.24) or very good/excellent health (OR = 1.62, 95% CI = 1.05–2.51), having any comorbidity (OR = 1.74, 95% CI = 1.26–2.39), and perceiving the environment as safe to walk at night (OR = 1.59, 95% CI = 1.18–2.15) as predictors of participation in physical activity programs. Accuracy indices were adequate (C index = 0.778, Brier score = 0.031) and similar to those obtained from bootstrapping (C index = 0.792, Brier score = 0.030). Conclusions Sociodemographic and health characteristics as well as perceptions of the environment are strong predictors of participation in community-based programs in selected cities of Brazil. PMID:23846162

  3. Multiple Imputation in Two-Stage Cluster Samples Using The Weighted Finite Population Bayesian Bootstrap.

    PubMed

    Zhou, Hanzhi; Elliott, Michael R; Raghunathan, Trivellore E

    2016-06-01

    Multistage sampling is often employed in survey samples for cost and convenience. However, accounting for clustering features when generating datasets for multiple imputation is a nontrivial task, particularly when, as is often the case, cluster sampling is accompanied by unequal probabilities of selection, necessitating case weights. Thus, multiple imputation often ignores complex sample designs and assumes simple random sampling when generating imputations, even though failing to account for complex sample design features is known to yield biased estimates and confidence intervals that have incorrect nominal coverage. In this article, we extend a recently developed, weighted, finite-population Bayesian bootstrap procedure to generate synthetic populations conditional on complex sample design data that can be treated as simple random samples at the imputation stage, obviating the need to directly model design features for imputation. We develop two forms of this method: one where the probabilities of selection are known at the first and second stages of the design, and the other, more common in public use files, where only the final weight based on the product of the two probabilities is known. We show that this method has advantages in terms of bias, mean square error, and coverage properties over methods where sample designs are ignored, with little loss in efficiency, even when compared with correct fully parametric models. An application is made using the National Automotive Sampling System Crashworthiness Data System, a multistage, unequal probability sample of U.S. passenger vehicle crashes, which suffers from a substantial amount of missing data in "Delta-V," a key crash severity measure.
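    The weighted finite-population procedure extends the ordinary Bayesian bootstrap (Rubin, 1981), which is worth seeing on its own: Dirichlet(1,...,1) weights are drawn as normalized unit exponentials, and each replicate is a weighted mean of the observed data. The sketch below is only this unweighted, single-stage building block, on synthetic data — it does not implement the two-stage, case-weighted extension of the article:

    ```python
    import random
    import statistics as st

    def bayesian_bootstrap_means(data, n_rep=2000, seed=0):
        """Bayesian bootstrap: Dirichlet(1,...,1) weights drawn as normalized
        unit exponentials; each replicate is a weighted mean of the data."""
        rng = random.Random(seed)
        reps = []
        for _ in range(n_rep):
            g = [rng.expovariate(1.0) for _ in data]
            total = sum(g)
            reps.append(sum(w * x for w, x in zip(g, data)) / total)
        return reps

    rng = random.Random(11)
    data = [rng.gauss(10, 3) for _ in range(60)]
    reps = sorted(bayesian_bootstrap_means(data))
    lo, hi = reps[50], reps[1949]   # central 95% of 2000 replicates
    print(f"posterior mean = {st.fmean(reps):.2f}, 95% interval = [{lo:.2f}, {hi:.2f}]")
    ```

    Unlike the classical bootstrap's discrete resampling, every observation appears in every replicate with a random positive weight, which is what lets the synthetic-population extension treat complex-design data as if it were a simple random sample at the imputation stage.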

  4. Risk of cutaneous adverse events with febuxostat treatment in patients with skin reaction to allopurinol. A retrospective, hospital-based study of 101 patients with consecutive allopurinol and febuxostat treatment.

    PubMed

    Bardin, Thomas; Chalès, Gérard; Pascart, Tristan; Flipo, René-Marc; Korng Ea, Hang; Roujeau, Jean-Claude; Delayen, Aurélie; Clerson, Pierre

    2016-05-01

    To investigate the cutaneous tolerance of febuxostat in gouty patients with skin intolerance to allopurinol. We identified all gouty patients who had sequentially received allopurinol and febuxostat in the rheumatology departments of 4 university hospitals in France and collected data from hospital files using a predefined protocol. Patients who had not visited the prescribing physician during at least 2 months after febuxostat prescription were excluded. The odds ratio (OR) for skin reaction to febuxostat in patients with a cutaneous reaction to allopurinol versus no reaction was calculated. For estimating the 95% confidence interval (95% CI), we used the usual Wald method and a bootstrap method. In total, 113 gouty patients had sequentially received allopurinol and febuxostat; 12 did not visit the prescribing physician after febuxostat prescription and were excluded. Among 101 patients (86 males, mean age 61±13.9 years), 2/22 (9.1%) with a history of cutaneous reactions to allopurinol showed skin reactions to febuxostat. Two of 79 patients (2.5%) without a skin reaction to allopurinol showed skin intolerance to febuxostat. The ORs were not statistically significant with the usual Wald method (3.85 [95% CI 0.51-29.04]) or bootstrap method (3.86 [95% CI 0.80-18.74]). The risk of skin reaction with febuxostat seems moderately increased in patients with a history of cutaneous adverse events with allopurinol. This moderate increase does not support the cross-reactivity of the two drugs. Copyright © 2015. Published by Elsevier SAS.
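    The Wald interval reported above can be reproduced directly from the 2×2 counts in the abstract (2 reactions out of 22 patients with a prior allopurinol reaction, 2 out of 79 without). The sketch below computes the odds ratio and its 95% CI on the log scale; the function name is ours, not from the paper:

    ```python
    import math

    def odds_ratio_wald(a, b, c, d, z=1.96):
        """OR = (a/b)/(c/d) from a 2x2 table, with a Wald CI on the log scale."""
        or_hat = (a / b) / (c / d)
        se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
        lo, hi = (math.exp(math.log(or_hat) + s * z * se) for s in (-1, 1))
        return or_hat, lo, hi

    # Reaction / no reaction to febuxostat, by history of allopurinol skin reaction:
    # with history: 2 of 22 (so 2 vs 20); without: 2 of 79 (so 2 vs 77)
    or_hat, lo, hi = odds_ratio_wald(2, 20, 2, 77)
    print(f"OR = {or_hat:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
    ```

    This recovers the published OR of 3.85 with CI roughly [0.51, 29.04]; the interval is wide because of the small event counts, which is why the authors also checked a bootstrap interval.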

  5. Parametric modelling of cost data in medical studies.

    PubMed

    Nixon, R M; Thompson, S G

    2004-04-30

    The cost of medical resources used is often recorded for each patient in clinical studies in order to inform decision-making. Although cost data are generally skewed to the right, interest is in making inferences about the population mean cost. Common methods for non-normal data, such as data transformation, assuming asymptotic normality of the sample mean or non-parametric bootstrapping, are not ideal. This paper describes possible parametric models for analysing cost data. Four example data sets are considered, which have different sample sizes and degrees of skewness. Normal, gamma, log-normal, and log-logistic distributions are fitted, together with three-parameter versions of the latter three distributions. Maximum likelihood estimates of the population mean are found; confidence intervals are derived by a parametric BC(a) bootstrap and checked by MCMC methods. Differences between model fits and inferences are explored. Skewed parametric distributions fit cost data better than the normal distribution, and should in principle be preferred for estimating the population mean cost. However for some data sets, we find that models that fit badly can give similar inferences to those that fit well. Conversely, particularly when sample sizes are not large, different parametric models that fit the data equally well can lead to substantially different inferences. We conclude that inferences are sensitive to choice of statistical model, which itself can remain uncertain unless there is enough data to model the tail of the distribution accurately. Investigating the sensitivity of conclusions to choice of model should thus be an essential component of analysing cost data in practice. Copyright 2004 John Wiley & Sons, Ltd.
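    A parametric bootstrap for the mean of skewed cost data, here under a fitted log-normal, can be sketched as follows. The paper uses a BC(a) interval; this simpler sketch uses the percentile form, and the cost data are synthetic with invented parameters:

    ```python
    import math
    import random
    import statistics as st

    def lognormal_mean_ci(costs, n_boot=2000, alpha=0.05, seed=0):
        """Fit a log-normal by MLE, then parametric-bootstrap a percentile CI
        for the population mean exp(mu + sigma^2 / 2)."""
        rng = random.Random(seed)
        n = len(costs)
        logs = [math.log(c) for c in costs]
        mu, sigma = st.fmean(logs), st.pstdev(logs)   # MLE uses the n denominator
        means = []
        for _ in range(n_boot):
            sim = [rng.lognormvariate(mu, sigma) for _ in range(n)]
            sl = [math.log(s) for s in sim]
            m, s_ = st.fmean(sl), st.pstdev(sl)
            means.append(math.exp(m + s_ ** 2 / 2))   # refit on each simulated sample
        means.sort()
        return means[int(alpha / 2 * n_boot)], means[int((1 - alpha / 2) * n_boot) - 1]

    rng = random.Random(5)
    costs = [rng.lognormvariate(7.0, 1.0) for _ in range(100)]   # skewed synthetic costs
    lo, hi = lognormal_mean_ci(costs)
    print(f"95% CI for mean cost: [{lo:.0f}, {hi:.0f}]")
    ```

    Refitting a different parametric family (gamma, log-logistic) in the resampling loop and comparing the intervals is exactly the model-sensitivity check the authors recommend.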

  6. BATEMANATER: a computer program to estimate and bootstrap mating system variables based on Bateman's principles.

    PubMed

    Jones, Adam G

    2015-11-01

    Bateman's principles continue to play a major role in the characterization of genetic mating systems in natural populations. The modern manifestations of Bateman's ideas include the opportunity for sexual selection (i.e. I(s) - the variance in relative mating success), the opportunity for selection (i.e. I - the variance in relative reproductive success) and the Bateman gradient (i.e. β(ss) - the slope of the least-squares regression of reproductive success on mating success). These variables serve as the foundation for one convenient approach for the quantification of mating systems. However, their estimation presents at least two challenges, which I address here with a new Windows-based computer software package called BATEMANATER. The first challenge is that confidence intervals for these variables are not easy to calculate. BATEMANATER solves this problem using a bootstrapping approach. The second, more serious, problem is that direct estimates of mating system variables from open populations will typically be biased if some potential progeny or adults are missing from the analysed sample. BATEMANATER addresses this problem using a maximum-likelihood approach to estimate mating system variables from incompletely sampled breeding populations. The current version of BATEMANATER addresses the problem for systems in which progeny can be collected in groups of half- or full-siblings, as would occur when eggs are laid in discrete masses or offspring occur in pregnant females. BATEMANATER has a user-friendly graphical interface and thus represents a new, convenient tool for the characterization and comparison of genetic mating systems. © 2015 John Wiley & Sons Ltd.
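    The three Bateman variables and a bootstrap interval for the Bateman gradient can be computed from paired mating-success/reproductive-success counts. The sketch below uses invented data and a plain percentile bootstrap for a completely sampled population; it does not attempt BATEMANATER's maximum-likelihood correction for incompletely sampled populations:

    ```python
    import random
    import statistics as st

    def _slope(x, y):
        """OLS slope of y on x."""
        mx, my = st.fmean(x), st.fmean(y)
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        return sxy / sxx

    def bateman_variables(mates, offspring):
        """Opportunity for sexual selection I_s, opportunity for selection I,
        and Bateman gradient beta_ss (slope of relative RS on relative MS)."""
        rel_ms = [m / st.fmean(mates) for m in mates]
        rel_rs = [o / st.fmean(offspring) for o in offspring]
        return st.variance(rel_ms), st.variance(rel_rs), _slope(rel_ms, rel_rs)

    def gradient_ci(mates, offspring, n_boot=1000, alpha=0.05, seed=0):
        """Percentile bootstrap CI for the Bateman gradient."""
        rng = random.Random(seed)
        n = len(mates)
        slopes = []
        for _ in range(n_boot):
            idx = [rng.randrange(n) for _ in range(n)]
            slopes.append(bateman_variables([mates[i] for i in idx],
                                            [offspring[i] for i in idx])[2])
        slopes.sort()
        return slopes[int(alpha / 2 * n_boot)], slopes[int((1 - alpha / 2) * n_boot) - 1]

    # Illustrative data: offspring counts increase with mate number
    rng = random.Random(9)
    mates = [rng.randint(1, 5) for _ in range(80)]
    offspring = [10 * m + rng.randint(0, 8) for m in mates]
    I_s, I, beta = bateman_variables(mates, offspring)
    lo, hi = gradient_ci(mates, offspring)
    print(f"I_s={I_s:.3f}  I={I:.3f}  beta_ss={beta:.2f}  95% CI [{lo:.2f}, {hi:.2f}]")
    ```

    Resampling whole individuals keeps each animal's mating and reproductive success paired, which is required for the gradient to be meaningful in the bootstrap replicates.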

  7. Multiple Imputation in Two-Stage Cluster Samples Using The Weighted Finite Population Bayesian Bootstrap

    PubMed Central

    Zhou, Hanzhi; Elliott, Michael R.; Raghunathan, Trivellore E.

    2017-01-01

    Multistage sampling is often employed in survey samples for cost and convenience. However, accounting for clustering features when generating datasets for multiple imputation is a nontrivial task, particularly when, as is often the case, cluster sampling is accompanied by unequal probabilities of selection, necessitating case weights. Thus, multiple imputation often ignores complex sample designs and assumes simple random sampling when generating imputations, even though failing to account for complex sample design features is known to yield biased estimates and confidence intervals that have incorrect nominal coverage. In this article, we extend a recently developed, weighted, finite-population Bayesian bootstrap procedure to generate synthetic populations conditional on complex sample design data that can be treated as simple random samples at the imputation stage, obviating the need to directly model design features for imputation. We develop two forms of this method: one where the probabilities of selection are known at the first and second stages of the design, and the other, more common in public use files, where only the final weight based on the product of the two probabilities is known. We show that this method has advantages in terms of bias, mean square error, and coverage properties over methods where sample designs are ignored, with little loss in efficiency, even when compared with correct fully parametric models. An application is made using the National Automotive Sampling System Crashworthiness Data System, a multistage, unequal probability sample of U.S. passenger vehicle crashes, which suffers from a substantial amount of missing data in “Delta-V,” a key crash severity measure. PMID:29226161

  8. Bayesian characterization of uncertainty in species interaction strengths.

    PubMed

    Wolf, Christopher; Novak, Mark; Gitelman, Alix I

    2017-06-01

    Considerable effort has been devoted to the estimation of species interaction strengths. This effort has focused primarily on statistical significance testing and obtaining point estimates of parameters that contribute to interaction strength magnitudes, leaving the characterization of uncertainty associated with those estimates unconsidered. We consider a means of characterizing the uncertainty of a generalist predator's interaction strengths by formulating an observational method for estimating a predator's prey-specific per capita attack rates as a Bayesian statistical model. This formulation permits the explicit incorporation of multiple sources of uncertainty. A key insight is the informative nature of several so-called non-informative priors that have been used in modeling the sparse data typical of predator feeding surveys. We introduce to ecology a new neutral prior and provide evidence for its superior performance. We use a case study to consider the attack rates in a New Zealand intertidal whelk predator, and we illustrate not only that Bayesian point estimates can be made to correspond with those obtained by frequentist approaches, but also that estimation uncertainty as described by 95% intervals is more useful and biologically realistic using the Bayesian method. In particular, unlike in bootstrap confidence intervals, the lower bounds of the Bayesian posterior intervals for attack rates do not include zero when a predator-prey interaction is in fact observed. We conclude that the Bayesian framework provides a straightforward, probabilistic characterization of interaction strength uncertainty, enabling future considerations of both the deterministic and stochastic drivers of interaction strength and their impact on food webs.

  9. Sampling Theory and Confidence Intervals for Effect Sizes: Using ESCI To Illustrate "Bouncing"; Confidence Intervals.

    ERIC Educational Resources Information Center

    Du, Yunfei

    This paper discusses the impact of sampling error on the construction of confidence intervals around effect sizes. Sampling error affects the location and precision of confidence intervals. Meta-analytic resampling demonstrates that confidence intervals can haphazardly bounce around the true population parameter. Special software with graphical…

  10. On Some Confidence Intervals for Estimating the Mean of a Skewed Population

    ERIC Educational Resources Information Center

    Shi, W.; Kibria, B. M. Golam

    2007-01-01

    A number of methods are available in the literature to measure confidence intervals. Here, confidence intervals for estimating the population mean of a skewed distribution are considered. This note proposes two alternative confidence intervals, namely, Median t and Mad t, which are simple adjustments to the Student's t confidence interval. In…

  11. Exploring nonlinear feature space dimension reduction and data representation in breast CADx with Laplacian eigenmaps and t-SNE.

    PubMed

    Jamieson, Andrew R; Giger, Maryellen L; Drukker, Karen; Li, Hui; Yuan, Yading; Bhooshan, Neha

    2010-01-01

In this preliminary study, recently developed unsupervised nonlinear dimension reduction (DR) and data representation techniques were applied to computer-extracted breast lesion feature spaces across three separate imaging modalities: ultrasound (U.S.) with 1126 cases, dynamic contrast enhanced magnetic resonance imaging with 356 cases, and full-field digital mammography with 245 cases. Two methods for nonlinear DR were explored: Laplacian eigenmaps [M. Belkin and P. Niyogi, "Laplacian eigenmaps for dimensionality reduction and data representation," Neural Comput. 15, 1373-1396 (2003)] and t-distributed stochastic neighbor embedding (t-SNE) [L. van der Maaten and G. Hinton, "Visualizing data using t-SNE," J. Mach. Learn. Res. 9, 2579-2605 (2008)]. These methods attempt to map originally high dimensional feature spaces to more human interpretable lower dimensional spaces while preserving both local and global information. The properties of these methods as applied to breast computer-aided diagnosis (CADx) were evaluated in the context of malignancy classification performance as well as in the visual inspection of the sparseness within the two-dimensional and three-dimensional mappings. Classification performance was estimated by using the reduced dimension mapped feature output as input into both linear and nonlinear classifiers: Markov chain Monte Carlo based Bayesian artificial neural network (MCMC-BANN) and linear discriminant analysis. The new techniques were compared to previously developed breast CADx methodologies, including automatic relevance determination (ARD) and linear stepwise (LSW) feature selection, as well as a linear DR method based on principal component analysis. Using ROC analysis and 0.632+ bootstrap validation, 95% empirical confidence intervals were computed for each classifier's AUC performance. In the large U.S. data set, sample high-performance results include AUC0.632+ = 0.88 with 95% empirical bootstrap interval [0.787;0.895] for 13 ARD-selected features and AUC0.632+ = 0.87 with interval [0.817;0.906] for four LSW-selected features, compared to a 4D t-SNE mapping (from the original 81D feature space) giving AUC0.632+ = 0.90 with interval [0.847;0.919], all using the MCMC-BANN. Preliminary results appear to indicate capability for the new methods to match or exceed the classification performance of current advanced breast lesion CADx algorithms. While not appropriate as a complete replacement of feature selection in CADx problems, DR techniques offer a complementary approach, which can aid elucidation of additional properties associated with the data. Specifically, the new techniques were shown to possess the added benefit of delivering sparse lower dimensional representations for visual interpretation, revealing intricate data structure of the feature space.
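
The 0.632+ bootstrap validation named above refits the classifier on every resample; as a much simpler illustration of bootstrap interval construction for AUC, the sketch below computes a plain percentile bootstrap interval for the Mann-Whitney AUC over fixed classifier scores. The function names and toy scores are hypothetical, and this is not the authors' 0.632+ procedure.

```python
import random

def auc(neg_scores, pos_scores):
    """Mann-Whitney AUC: probability that a random positive case outscores a
    random negative case (ties count one half)."""
    pairs = len(neg_scores) * len(pos_scores)
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / pairs

def bootstrap_auc_ci(neg_scores, pos_scores, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for AUC, resampling cases within each class."""
    rng = random.Random(seed)
    stats = []
    for _ in range(n_boot):
        neg = [rng.choice(neg_scores) for _ in neg_scores]
        pos = [rng.choice(pos_scores) for _ in pos_scores]
        stats.append(auc(neg, pos))
    stats.sort()
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return auc(neg_scores, pos_scores), (lo, hi)
```

Resampling within each class keeps the benign/malignant case mix fixed, a common (though not universal) design choice for AUC bootstraps.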

  12. Higher education delays and shortens cognitive impairment: a multistate life table analysis of the US Health and Retirement Study.

    PubMed

    Reuser, Mieke; Willekens, Frans J; Bonneux, Luc

    2011-05-01

Improved health may extend or shorten the duration of cognitive impairment by postponing incidence or death. We assess the duration of cognitive impairment in the US Health and Retirement Study (1992-2004) by self-reported BMI, smoking and level of education in men and women and three ethnic groups. We define multistate life tables by the transition rates to cognitive impairment, recovery and death, and estimate Cox proportional hazard ratios for the studied determinants. 95% confidence intervals are obtained by bootstrapping. 55-year-old white men and women expect to live 25.4 and 30.0 years, of which 1.7 [95% confidence interval 1.5; 1.9] years and 2.7 [2.4; 2.9] years with cognitive impairment. Both black men and women live 3.7 [2.9; 4.5] years longer with cognitive impairment than whites; Hispanic men and women live 3.2 [1.9; 4.6] and 5.8 [4.2; 7.5] years longer. BMI makes no difference. Smoking decreases the duration of cognitive impairment by 0.8 [0.4; 1.3] years through higher mortality. Highly educated men and women live longer, but 1.6 years [1.1; 2.2] and 1.9 years [1.6; 2.6] shorter with cognitive impairment than less educated men and women. The effect of education is more pronounced among ethnic minorities. Higher life expectancy generally goes together with a longer period of cognitive impairment, except at higher levels of education: education extends life in good cognitive health but shortens the period of cognitive impairment. The increased duration of cognitive impairment in minority ethnic groups needs further study, also in Europe.

  13. Associations of serum adiponectin with skeletal muscle morphology and insulin sensitivity.

    PubMed

    Ingelsson, Erik; Arnlöv, Johan; Zethelius, Björn; Vasan, Ramachandran S; Flyvbjerg, Allan; Frystyk, Jan; Berne, Christian; Hänni, Arvo; Lind, Lars; Sundström, Johan

    2009-03-01

Skeletal muscle morphology and function are strongly associated with insulin sensitivity. The objective of the study was to test the hypothesis that circulating adiponectin is associated with skeletal muscle morphology and that adiponectin mediates the relation of muscle morphology to insulin sensitivity. This was a cross-sectional investigation of 461 men aged 71 yr, participants in the community-based Uppsala Longitudinal Study of Adult Men. Measures included serum adiponectin, insulin sensitivity measured with the euglycemic insulin clamp technique, and capillary density and muscle fiber composition determined from vastus lateralis muscle biopsies. In multivariable linear regression models (adjusting for age, physical activity, fasting glucose, and pharmacological treatment for diabetes), serum adiponectin levels rose with increasing capillary density (beta, 0.30 per 50 capillaries per square millimeter increase; P = 0.041) and higher proportion of type I muscle fibers (beta, 0.27 per 10% increase; P = 0.036) but declined with a higher proportion of type IIb fibers (beta, -0.39 per 10% increase; P = 0.014). When bootstrap methods were used to examine the potential role of adiponectin in the associations between muscle morphology and insulin sensitivity, the associations of capillary density (beta difference, 0.041; 95% confidence interval 0.001, 0.085) and proportion of type IIb muscle fibers (beta difference, -0.053; 95% confidence interval -0.107, -0.002) with insulin sensitivity were significantly attenuated when adiponectin was included in the models. Circulating adiponectin concentrations were higher with increasing skeletal muscle capillary density and in individuals with a higher proportion of slow oxidative muscle fibers. Furthermore, our results indicate that adiponectin could be a partial mediator of the relations between skeletal muscle morphology and insulin sensitivity.
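
The "beta difference" quantity bootstrapped here is the attenuation of a regression coefficient when the candidate mediator enters the model. A minimal sketch of that idea, assuming a single exposure x, mediator m, and outcome y with two-predictor OLS in closed form and percentile intervals; variable names and data are hypothetical, and the authors' actual models adjusted for further covariates.

```python
import random

def _center(v):
    mu = sum(v) / len(v)
    return [x - mu for x in v]

def slope_simple(y, x):
    """OLS slope of y on x alone."""
    xc, yc = _center(x), _center(y)
    return sum(a * b for a, b in zip(xc, yc)) / sum(a * a for a in xc)

def slope_adjusted(y, x, m):
    """Coefficient on x in OLS of y on x and m (closed form on centered data)."""
    xc, mc, yc = _center(x), _center(m), _center(y)
    sxx = sum(a * a for a in xc); smm = sum(a * a for a in mc)
    sxm = sum(a * b for a, b in zip(xc, mc))
    sxy = sum(a * b for a, b in zip(xc, yc))
    smy = sum(a * b for a, b in zip(mc, yc))
    return (smm * sxy - sxm * smy) / (sxx * smm - sxm * sxm)

def bootstrap_beta_difference(y, x, m, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for the attenuation slope(y~x) - slope(y~x+m),
    resampling whole cases so the joint distribution is preserved."""
    rng = random.Random(seed)
    n = len(y)
    diffs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        yb = [y[i] for i in idx]; xb = [x[i] for i in idx]; mb = [m[i] for i in idx]
        diffs.append(slope_simple(yb, xb) - slope_adjusted(yb, xb, mb))
    diffs.sort()
    return diffs[int(alpha / 2 * n_boot)], diffs[int((1 - alpha / 2) * n_boot) - 1]
```

An interval excluding zero is then read as evidence of partial mediation, as in the capillary-density result above.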

  14. The Aristotle Comprehensive Complexity score predicts mortality and morbidity after congenital heart surgery.

    PubMed

    Bojan, Mirela; Gerelli, Sébastien; Gioanni, Simone; Pouard, Philippe; Vouhé, Pascal

    2011-04-01

The Aristotle Comprehensive Complexity (ACC) score has been proposed for complexity adjustment in the analysis of outcome after congenital heart surgery. The score is the sum of the Aristotle Basic Complexity score, widely used but poorly related to mortality and morbidity, and of the Comprehensive Complexity items accounting for comorbidities and procedure-specific and anatomic variability. This study aims to demonstrate the ability of the ACC score to predict 30-day mortality and morbidity assessed by the length of the intensive care unit (ICU) stay. We retrospectively enrolled patients undergoing congenital heart surgery in our institution. We modeled the ACC score as a continuous variable, mortality as a binary variable, and length of ICU stay as a censored variable. For both the mortality and morbidity models, we performed internal validation by bootstrapping and assessed overall performance by R(2), calibration by the calibration slope, and discrimination by the c index. Among all 1,454 patients enrolled, the 30-day mortality rate was 3.4% and the median length of ICU stay was 3 days. The ACC score was strongly related to mortality, but was related to length of ICU stay only during the first postoperative week. For the mortality model, R(2) = 0.24, calibration slope = 0.98, c index = 0.86, and 95% confidence interval was 0.82 to 0.91. For the morbidity model, R(2) = 0.094, calibration slope = 0.94, c index = 0.64, and 95% confidence interval was 0.62 to 0.66. The ACC score predicts 30-day mortality and length of ICU stay during the first postoperative week. The score is an adequate tool for complexity adjustment in the analysis of outcome after congenital heart surgery. Copyright © 2011 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  15. Comparison of Background Parenchymal Enhancement at Contrast-enhanced Spectral Mammography and Breast MR Imaging.

    PubMed

    Sogani, Julie; Morris, Elizabeth A; Kaplan, Jennifer B; D'Alessio, Donna; Goldman, Debra; Moskowitz, Chaya S; Jochelson, Maxine S

    2017-01-01

Purpose To assess the extent of background parenchymal enhancement (BPE) at contrast material-enhanced (CE) spectral mammography and breast magnetic resonance (MR) imaging, to evaluate interreader agreement in BPE assessment, and to examine the relationships between clinical factors and BPE. Materials and Methods This was a retrospective, institutional review board-approved, HIPAA-compliant study. Two hundred seventy-eight women from 25 to 76 years of age with increased breast cancer risk who underwent CE spectral mammography and MR imaging for screening or staging from 2010 through 2014 were included. Three readers independently rated BPE on CE spectral mammographic and MR images on an ordinal scale: minimal, mild, moderate, or marked. To assess pairwise agreement between BPE levels on CE spectral mammographic and MR images and among readers, weighted κ coefficients with quadratic weights were calculated. For overall agreement, mean κ values and bootstrapped 95% confidence intervals were calculated. The univariate and multivariate associations between BPE and clinical factors were examined by using generalized estimating equations separately for CE spectral mammography and MR imaging. Results Most women had minimal or mild BPE at both CE spectral mammography (68%-76%) and MR imaging (69%-76%). Between CE spectral mammography and MR imaging, intrareader agreement ranged from moderate to substantial (κ = 0.55-0.67). Overall agreement on BPE levels between CE spectral mammography and MR imaging and among readers was substantial (κ = 0.66; 95% confidence interval: 0.61, 0.70). With both modalities, BPE demonstrated a significant association with menopausal status, prior breast radiation therapy, hormonal treatment, breast density on CE spectral mammographic images, and amount of fibroglandular tissue on MR images (P < .001 for all). Conclusion There was substantial agreement between readers for BPE detected on CE spectral mammographic and MR images.
© RSNA, 2016.
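
Quadratically weighted κ with a bootstrapped interval, as used for the agreement estimates above, can be sketched as follows for two raters on a four-level ordinal scale (0 = minimal … 3 = marked). This is a generic illustration with the usual penalty formulation, not the study's exact computation, and the toy ratings in the usage test are invented.

```python
import random
from collections import Counter

def quadratic_weighted_kappa(a, b, k=4):
    """Weighted kappa with quadratic disagreement penalties (i - j)**2 for two
    raters scoring the same cases on categories 0..k-1."""
    n = len(a)
    obs = Counter(zip(a, b))
    pa, pb = Counter(a), Counter(b)
    # observed vs chance-expected penalty, summed over the k x k table
    num = sum((i - j) ** 2 * obs[(i, j)] for i in range(k) for j in range(k))
    den = sum((i - j) ** 2 * pa[i] * pb[j] / n for i in range(k) for j in range(k))
    return 1.0 - num / den

def bootstrap_kappa_ci(a, b, k=4, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for weighted kappa, resampling cases (pairs)."""
    rng = random.Random(seed)
    n = len(a)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        stats.append(quadratic_weighted_kappa([a[i] for i in idx],
                                              [b[i] for i in idx], k))
    stats.sort()
    return stats[int(alpha / 2 * n_boot)], stats[int((1 - alpha / 2) * n_boot) - 1]
```

Resampling cases rather than ratings keeps each woman's pair of scores intact, which is what makes the interval valid for paired agreement.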

  16. Community occupational therapy for older patients with dementia and their care givers: cost effectiveness study

    PubMed Central

    2008-01-01

Objective To assess the cost effectiveness of community based occupational therapy compared with usual care in older patients with dementia and their care givers from a societal viewpoint. Design Cost effectiveness study alongside a single blind randomised controlled trial. Setting Memory clinic, day clinic of a geriatrics department, and participants’ homes. Patients 135 patients aged ≥65 with mild to moderate dementia living in the community and their primary care givers. Intervention 10 sessions of occupational therapy over five weeks, including cognitive and behavioural interventions, to train patients in the use of aids to compensate for cognitive decline and care givers in coping behaviours and supervision. Main outcome measures Incremental cost effectiveness ratio expressed as the difference in mean total care costs per successful treatment (that is, a combined patient and care giver outcome measure of clinically relevant improvement on process, performance, and competence scales) at three months after randomisation. Bootstrap methods were used to determine confidence intervals for these measures. Results The intervention cost €1183 (£848, $1738) (95% confidence interval €1128 (£808, $1657) to €1239 (£888, $1820)) per patient and primary care giver unit at three months. Visits to general practitioners and hospital doctors cost the same in both groups but total mean costs were €1748 (£1279, $2621) lower in the intervention group, with the main cost savings in informal care. There was a significant difference in proportions of successful treatments of 36% at three months. The number needed to treat for successful treatment at three months was 2.8 (2.7 to 2.9). Conclusions Community occupational therapy intervention for patients with dementia and their care givers is successful and cost effective, especially in terms of informal care giving. PMID:18171718
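
The number needed to treat above is the reciprocal of the difference in success proportions (1/0.36 ≈ 2.8), and its interval can likewise be bootstrapped from per-unit 0/1 success indicators. A hedged sketch with made-up indicator data, not the trial's data or exact procedure; note the simplification flagged in the comment.

```python
import random

def bootstrap_nnt_ci(success_tx, success_ctrl, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for number needed to treat (NNT = 1 / risk
    difference) from two lists of 0/1 success indicators."""
    rng = random.Random(seed)
    nnts = []
    for _ in range(n_boot):
        tx = [rng.choice(success_tx) for _ in success_tx]
        ct = [rng.choice(success_ctrl) for _ in success_ctrl]
        diff = sum(tx) / len(tx) - sum(ct) / len(ct)
        # Simplification: resamples showing no benefit are dropped; a real
        # analysis must handle the sign change (NNT jumps through infinity).
        if diff > 0:
            nnts.append(1.0 / diff)
    nnts.sort()
    n = len(nnts)
    return nnts[int(alpha / 2 * n)], nnts[int((1 - alpha / 2) * n) - 1]
```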

  17. The Economic Burden of Visual Impairment and Comorbid Fatigue: A Cost-of-Illness Study (From a Societal Perspective).

    PubMed

    Schakel, Wouter; van der Aa, Hilde P A; Bode, Christina; Hulshof, Carel T J; van Rens, Ger H M B; van Nispen, Ruth M A

    2018-04-01

To investigate the burden of visual impairment and comorbid fatigue in terms of impact on daily life, by estimating societal costs (direct medical costs and indirect non-health care costs) accrued by these conditions. This cost-of-illness study was performed from a societal perspective. Cross-sectional data of visually impaired adults and normally sighted adults were collected through structured telephone interviews and online surveys, respectively. Primary outcomes were fatigue severity (FAS), impact of fatigue on daily life (MFIS), and total societal costs. Cost differences between participants with and without vision loss, and between participants with and without fatigue, were examined by (adjusted) multivariate regression analyses, including bootstrapped confidence intervals. Severe fatigue (FAS ≥ 22) and high fatigue impact (MFIS ≥ 38) were present in 57% and 40% of participants with vision loss (n = 247), respectively, compared to 22% (adjusted odds ratio [OR] 4.6; 95% confidence interval [CI] [2.7, 7.6]) and 11% (adjusted OR 4.8; 95% CI [2.7, 8.7]) in those with normal sight (n = 233). A significant interaction was found between visual impairment and high fatigue impact for total societal costs (€449; 95% CI [33, 1017]). High fatigue impact was associated with significantly increased societal costs for participants with visual impairment (mean difference €461; 95% CI [126, 797]), but this effect was not observed for participants with normal sight (€12; 95% CI [-527, 550]). Visual impairment is associated with an increased prevalence of high fatigue impact that largely determines the economic burden of visual impairment. The substantial costs of visual impairment and comorbid fatigue emphasize the need for patient-centered interventions aimed at decreasing their impact.

  18. Relationship between plethysmographic waveform changes and hemodynamic variables in anesthetized, mechanically ventilated patients undergoing continuous cardiac output monitoring.

    PubMed

    Thiele, Robert H; Colquhoun, Douglas A; Patrie, James; Nie, Sarah H; Huffmyer, Julie L

    2011-12-01

To assess the relation between photoplethysmographically derived parameters and invasively determined hemodynamic variables. After induction of anesthesia and placement of a Swan-Ganz CCOmbo catheter, a Nonin OEM III probe was placed on each patient's earlobe. Photoplethysmographic signals were recorded in conjunction with cardiac output. Photoplethysmographic metrics (amplitude of absorbance waveform, maximal slope of absorbance waveform, area under the curve, and width) were calculated offline and compared with invasively determined hemodynamic variables. Subject-specific associations between each dependent and independent variable pair were summarized on a per-subject basis by the nonparametric Spearman rank correlation coefficient. The bias-corrected and accelerated bootstrap resampling procedure of Efron and Tibshirani was used to obtain a 95% confidence interval for the median subject-specific correlation coefficient, and Wilcoxon signed-rank tests were conducted to test the null hypothesis that the median of the subject-specific correlation coefficients was equal to 0. University hospital. Eighteen patients undergoing coronary artery bypass graft surgery. Placement of a Swan-Ganz CCOmbo catheter and a Nonin OEM III pulse oximetry probe. There was a positive, statistically significant correlation between stroke volume and width (median correlation coefficient, 0.29; confidence interval, 0.01-0.46; p = 0.034). The concordance between changes in stroke volume and changes in width was 53%. No other correlations achieved statistical significance. This study was unable to reproduce the results of prior studies. Only stroke volume and photoplethysmographic width were correlated in this study; however, the correlation and concordance (based on analysis of a 4-quadrant plot) were too weak to be clinically useful. Future studies in patients undergoing low-to-moderate risk surgery may result in improved correlations and clinical utility. Copyright © 2011 Elsevier Inc.
All rights reserved.
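
The bias-corrected and accelerated (BCa) procedure of Efron and Tibshirani adjusts the percentile interval with a bias-correction term z0 (from the share of bootstrap replicates below the point estimate) and an acceleration term a (from jackknife influence values). A compact sketch for a single sample, such as the per-subject correlation coefficients here; the default statistic (median) and the data in the test are illustrative only.

```python
import random
from statistics import NormalDist, median

def bca_ci(data, stat=median, n_boot=2000, alpha=0.05, seed=0):
    """Bias-corrected and accelerated (BCa) bootstrap confidence interval."""
    rng = random.Random(seed)
    nd = NormalDist()
    theta = stat(data)
    boots = sorted(stat([rng.choice(data) for _ in data]) for _ in range(n_boot))
    # bias correction z0 from the share of replicates below the point estimate
    prop = sum(b < theta for b in boots) / n_boot
    prop = min(max(prop, 1.0 / n_boot), 1.0 - 1.0 / n_boot)  # avoid infinite z0
    z0 = nd.inv_cdf(prop)
    # acceleration a from jackknife influence values
    jack = [stat(data[:i] + data[i + 1:]) for i in range(len(data))]
    jbar = sum(jack) / len(jack)
    num = sum((jbar - j) ** 3 for j in jack)
    den = 6.0 * sum((jbar - j) ** 2 for j in jack) ** 1.5
    a = num / den if den else 0.0
    def endpoint(q):
        z = nd.inv_cdf(q)
        adj = nd.cdf(z0 + (z0 + z) / (1 - a * (z0 + z)))
        return boots[min(n_boot - 1, max(0, int(adj * n_boot)))]
    return endpoint(alpha / 2), endpoint(1 - alpha / 2)
```

When z0 and a are both zero this collapses to the ordinary percentile interval; the corrections matter most for skewed statistics like medians of small samples, the situation in this study (n = 18 subjects).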

  19. Interval Estimation of Seismic Hazard Parameters

    NASA Astrophysics Data System (ADS)

    Orlecka-Sikora, Beata; Lasocki, Stanislaw

    2017-03-01

The paper considers Poisson temporal occurrence of earthquakes and presents a way to integrate uncertainties of the estimates of mean activity rate and magnitude cumulative distribution function into the interval estimation of the most widely used seismic hazard functions, such as the exceedance probability and the mean return period. The proposed algorithm can be used either when the Gutenberg-Richter model of magnitude distribution is accepted or when the nonparametric estimation is in use. When the Gutenberg-Richter model of magnitude distribution is used, the interval estimation of its parameters is based on the asymptotic normality of the maximum likelihood estimator. When the nonparametric kernel estimation of magnitude distribution is used, we propose the iterated bias-corrected and accelerated method for interval estimation based on the smoothed bootstrap and second-order bootstrap samples. The changes resulting from the integrated approach to the interval estimation of the seismic hazard functions, relative to the approach that neglects the uncertainty of the mean activity rate estimates, were studied using Monte Carlo simulations and two real dataset examples. The results indicate that the uncertainty of the mean activity rate significantly affects the interval estimates of hazard functions only when the product of the activity rate and the time period for which the hazard is estimated is no more than 5.0. When this product becomes greater than 5.0, the impact of the uncertainty of the cumulative distribution function of magnitude dominates the impact of the uncertainty of the mean activity rate in the aggregated uncertainty of the hazard functions, and the interval estimates with and without inclusion of the uncertainty of the mean activity rate converge. The presented algorithm is generic and can also be applied to capture the propagation of the uncertainty of estimates that are parameters of a multiparameter function, onto this function.
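
Under the Poisson occurrence model, the two hazard functions named above follow directly from the mean activity rate λ and the magnitude distribution F(m): the exceedance probability over a period t is 1 - exp(-λt(1 - F(m))) and the mean return period is 1/(λ(1 - F(m))). The point-estimate sketch below assumes an unbounded Gutenberg-Richter law, 1 - F(m) = 10^(-b(m - m0)); the paper's contribution, the interval estimation around these quantities, is not reproduced here.

```python
import math

def exceedance_probability(lam, b, m0, m, t):
    """P(at least one event with magnitude >= m within t years), Poisson
    occurrence with rate lam above the completeness magnitude m0 and an
    unbounded Gutenberg-Richter magnitude distribution with b-value b."""
    tail = 10.0 ** (-b * (m - m0))      # 1 - F(m)
    return 1.0 - math.exp(-lam * t * tail)

def mean_return_period(lam, b, m0, m):
    """Mean return period (years) of events with magnitude >= m."""
    tail = 10.0 ** (-b * (m - m0))
    return 1.0 / (lam * tail)
```

For example, with λ = 10 events/yr above m0 = 2 and b = 1, events of magnitude ≥ 4 recur every 10 years on average, and λ·t = 1 over a 10-year window gives an exceedance probability of 1 - e⁻¹.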

  20. Comparison of Dissolution Similarity Assessment Methods for Products with Large Variations: f2 Statistics and Model-Independent Multivariate Confidence Region Procedure for Dissolution Profiles of Multiple Oral Products.

    PubMed

    Yoshida, Hiroyuki; Shibata, Hiroko; Izutsu, Ken-Ichi; Goda, Yukihiro

    2017-01-01

The current Japanese Ministry of Health, Labour and Welfare (MHLW) Guideline for Bioequivalence Studies of Generic Products uses averaged dissolution rates for the assessment of dissolution similarity between test and reference formulations. This study clarifies how the application of the model-independent multivariate confidence region procedure (Method B), described in the European Medicines Agency and U.S. Food and Drug Administration guidelines, affects similarity outcomes obtained empirically from dissolution profiles with large variations in individual dissolution rates. Sixty-one datasets of dissolution profiles for immediate release, oral generic, and corresponding innovator products that showed large variation in individual dissolution rates in generic products were assessed for their similarity by using the f2 statistics defined in the MHLW guidelines (MHLW f2 method) and two different Method B procedures: a bootstrap method applied with f2 statistics (BS method) and a multivariate analysis method using the Mahalanobis distance (MV method). The MHLW f2 and BS methods provided similar dissolution similarities between reference and generic products. Although a small difference in the similarity assessment may be due to the decrease in the lower confidence interval for expected f2 values derived from the large variation in individual dissolution rates, the MV method provided results different from those obtained through the MHLW f2 and BS methods. Analysis of actual dissolution data for products with large individual variations would provide valuable information towards an enhanced understanding of these methods and their possible incorporation in the MHLW guidelines.
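
The f2 similarity factor compares two mean dissolution profiles sampled at the same time points, f2 = 50·log10(100/√(1 + (1/n)·Σ(R_t - T_t)²)), and the BS method bootstraps it over individual dosage units. A minimal sketch under that reading; the toy unit profiles are invented and this is not the guidelines' reference implementation (which adds rules on time-point selection and the 85%-dissolved cutoff).

```python
import math
import random

def f2(ref, test):
    """f2 similarity factor between two mean dissolution profiles (percent
    dissolved at matched time points); identical profiles give f2 = 100."""
    msd = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    return 50.0 * math.log10(100.0 / math.sqrt(1.0 + msd))

def bootstrap_f2_ci(ref_units, test_units, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for f2, resampling individual dosage units
    (rows: units, columns: time points) within each product, then comparing
    the resampled mean profiles."""
    rng = random.Random(seed)
    def mean_profile(units):
        return [sum(col) / len(col) for col in zip(*units)]
    stats = []
    for _ in range(n_boot):
        rb = [rng.choice(ref_units) for _ in ref_units]
        tb = [rng.choice(test_units) for _ in test_units]
        stats.append(f2(mean_profile(rb), mean_profile(tb)))
    stats.sort()
    return stats[int(alpha / 2 * n_boot)], stats[int((1 - alpha / 2) * n_boot) - 1]
```

Large unit-to-unit variation widens the bootstrap interval and pulls down its lower bound, which is exactly the effect on the "expected f2" assessment discussed above.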

  1. Human immunophenotyping via low-variance, low-bias, interpretive regression modeling of small, wide data sets: Application to aging and immune response to influenza vaccination.

    PubMed

    Holmes, Tyson H; He, Xiao-Song

    2016-10-01

Small, wide data sets are commonplace in human immunophenotyping research. As defined here, a small, wide data set is constructed by sampling a small to modest quantity n, 1 < n < 50, of human participants for the purpose of estimating many parameters p, such that n < p < 1,000.

  2. Human Immunophenotyping via Low-Variance, Low-Bias, Interpretive Regression Modeling of Small, Wide Data Sets: Application to Aging and Immune Response to Influenza Vaccination

    PubMed Central

    Holmes, Tyson H.; He, Xiao-Song

    2016-01-01

    Small, wide data sets are commonplace in human immunophenotyping research. As defined here, a small, wide data set is constructed by sampling a small to modest quantity n, 1 < n < 50, of human participants for the purpose of estimating many parameters p, such that n < p < 1,000. We offer a set of prescriptions that are designed to facilitate low-variance (i.e. stable), low-bias, interpretive regression modeling of small, wide data sets. These prescriptions are distinctive in their especially heavy emphasis on minimizing use of out-of-sample information for conducting statistical inference. That allows the working immunologist to proceed without being encumbered by imposed and often untestable statistical assumptions. Problems of unmeasured confounders, confidence-interval coverage, feature selection, and shrinkage/denoising are defined clearly and treated in detail. We propose an extension of an existing nonparametric technique for improved small-sample confidence-interval tail coverage from the univariate case (single immune feature) to the multivariate (many, possibly correlated immune features). An important role for derived features in the immunological interpretation of regression analyses is stressed. Areas of further research are discussed. Presented principles and methods are illustrated through application to a small, wide data set of adults spanning a wide range in ages and multiple immunophenotypes that were assayed before and after immunization with inactivated influenza vaccine (IIV). Our regression modeling prescriptions identify some potentially important topics for future immunological research. 1) Immunologists may wish to distinguish age-related differences in immune features from changes in immune features caused by aging. 
2) A form of the bootstrap that employs linear extrapolation may prove to be an invaluable analytic tool because it allows the working immunologist to obtain accurate estimates of the stability of immune parameter estimates with a bare minimum of imposed assumptions. 3) Liberal inclusion of immune features in phenotyping panels can facilitate accurate separation of biological signal of interest from noise. In addition, through a combination of denoising and potentially improved confidence interval coverage, we identify some candidate immune correlates (frequency of cell subset and concentration of cytokine) with B cell response as measured by quantity of IIV-specific IgA antibody-secreting cells and quantity of IIV-specific IgG antibody-secreting cells. PMID:27196789

  3. The retest distribution of the visual field summary index mean deviation is close to normal.

    PubMed

    Anderson, Andrew J; Cheng, Allan C Y; Lau, Samantha; Le-Pham, Anne; Liu, Victor; Rahman, Farahnaz

    2016-09-01

When modelling optimum strategies for how best to determine visual field progression in glaucoma, it is commonly assumed that the summary index mean deviation (MD) is normally distributed on repeated testing. Here we tested whether this assumption is correct. We obtained 42 reliable 24-2 Humphrey Field Analyzer SITA standard visual fields from one eye of each of five healthy young observers, with the first two fields excluded from analysis. Previous work has shown that although MD variability is higher in glaucoma, the shape of the MD distribution is similar to that found in normal visual fields. A Shapiro-Wilk test determined any deviation from normality. Kurtosis values for the distributions were also calculated. Data from each observer passed the Shapiro-Wilk normality test. Bootstrapped 95% confidence intervals for kurtosis encompassed the value for a normal distribution in four of five observers. When examined with quantile-quantile plots, distributions were close to normal and showed no consistent deviations across observers. The retest distribution of MD is not significantly different from normal in healthy observers, and so is likely also normally distributed - or nearly so - in those with glaucoma. Our results increase our confidence in the results of influential modelling studies where a normal distribution for MD was assumed. © 2016 The Authors Ophthalmic & Physiological Optics © 2016 The College of Optometrists.
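
A bootstrapped interval for kurtosis, the check used above (a normal distribution has excess kurtosis 0), can be sketched as follows. This uses the plain moment-based estimator and percentile intervals with invented data; it is not the authors' code.

```python
import random

def kurtosis(x):
    """Sample excess kurtosis (biased moment estimator; 0 for a normal
    distribution, -2 for a symmetric two-point distribution)."""
    n = len(x)
    mu = sum(x) / n
    m2 = sum((v - mu) ** 2 for v in x) / n
    m4 = sum((v - mu) ** 4 for v in x) / n
    return m4 / (m2 * m2) - 3.0

def bootstrap_kurtosis_ci(x, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for excess kurtosis of a single sample."""
    rng = random.Random(seed)
    stats = sorted(kurtosis([rng.choice(x) for _ in x]) for _ in range(n_boot))
    return stats[int(alpha / 2 * n_boot)], stats[int((1 - alpha / 2) * n_boot) - 1]
```

The "encompassed the value for a normal distribution" criterion then amounts to checking whether the returned interval contains 0.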

  4. Reducing the width of confidence intervals for the difference between two population means by inverting adaptive tests.

    PubMed

    O'Gorman, Thomas W

    2018-05-01

    In the last decade, it has been shown that an adaptive testing method could be used, along with the Robbins-Monro search procedure, to obtain confidence intervals that are often narrower than traditional confidence intervals. However, these confidence interval limits require a great deal of computation and some familiarity with stochastic search methods. We propose a method for estimating the limits of confidence intervals that uses only a few tests of significance. We compare these limits to those obtained by a lengthy Robbins-Monro stochastic search and find that the proposed method is nearly as accurate as the Robbins-Monro search. Adaptive confidence intervals that are produced by the proposed method are often narrower than traditional confidence intervals when the distributions are long-tailed, skewed, or bimodal. Moreover, the proposed method of estimating confidence interval limits is easy to understand, because it is based solely on the p-values from a few tests of significance.

  5. Confidence intervals for the population mean tailored to small sample sizes, with applications to survey sampling.

    PubMed

    Rosenblum, Michael A; Laan, Mark J van der

    2009-01-07

    The validity of standard confidence intervals constructed in survey sampling is based on the central limit theorem. For small sample sizes, the central limit theorem may give a poor approximation, resulting in confidence intervals that are misleading. We discuss this issue and propose methods for constructing confidence intervals for the population mean tailored to small sample sizes. We present a simple approach for constructing confidence intervals for the population mean based on tail bounds for the sample mean that are correct for all sample sizes. Bernstein's inequality provides one such tail bound. The resulting confidence intervals have guaranteed coverage probability under much weaker assumptions than are required for standard methods. A drawback of this approach, as we show, is that these confidence intervals are often quite wide. In response to this, we present a method for constructing much narrower confidence intervals, which are better suited for practical applications, and that are still more robust than confidence intervals based on standard methods, when dealing with small sample sizes. We show how to extend our approaches to much more general estimation problems than estimating the sample mean. We describe how these methods can be used to obtain more reliable confidence intervals in survey sampling. As a concrete example, we construct confidence intervals using our methods for the number of violent deaths between March 2003 and July 2006 in Iraq, based on data from the study "Mortality after the 2003 invasion of Iraq: A cross sectional cluster sample survey," by Burnham et al. (2006).
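
A tail-bound interval of the kind described can be made concrete by inverting Bernstein's inequality, P(|X̄ - μ| ≥ t) ≤ 2·exp(-nt²/(2σ² + (2/3)bt)), which holds when |Xᵢ - μ| ≤ b and Var(Xᵢ) ≤ σ². Setting the bound equal to α gives a quadratic in t. A sketch under those (strong, assumed-known) boundedness conditions; it is one illustrative variant, not the paper's exact construction, and as the abstract notes the resulting interval is much wider than the normal-theory one.

```python
import math

def bernstein_halfwidth(n, var_bound, range_bound, alpha=0.05):
    """Half-width t with P(|mean - mu| >= t) <= alpha by Bernstein's inequality,
    assuming |X_i - mu| <= range_bound and Var(X_i) <= var_bound.
    Solves n t^2 / (2 var + (2/3) range t) = log(2/alpha), a quadratic in t."""
    c = math.log(2.0 / alpha)
    b = (2.0 / 3.0) * range_bound * c
    return (b + math.sqrt(b * b + 8.0 * n * c * var_bound)) / (2.0 * n)

def bernstein_ci(sample_mean, n, var_bound, range_bound, alpha=0.05):
    """Guaranteed-coverage CI for the mean, valid at every sample size n."""
    t = bernstein_halfwidth(n, var_bound, range_bound, alpha)
    return sample_mean - t, sample_mean + t
```

For n = 100 observations in [0, 1] (so var_bound = 0.25, range_bound = 1), the half-width is about 0.15, versus roughly 0.10 for the worst-case normal-approximation interval, illustrating the width penalty paid for finite-sample validity.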

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tucker, Susan L.; Liu, H. Helen; Wang, Shulian

Purpose: The aim of this study was to investigate the effect of radiation dose distribution in the lung on the risk of postoperative pulmonary complications among esophageal cancer patients. Methods and Materials: We analyzed data from 110 patients with esophageal cancer treated with concurrent chemoradiotherapy followed by surgery at our institution from 1998 to 2003. The endpoint for analysis was postsurgical pneumonia or acute respiratory distress syndrome. Dose-volume histograms (DVHs) and dose-mass histograms (DMHs) for the whole lung were used to fit normal-tissue complication probability (NTCP) models, and the quality of fits was compared using bootstrap analysis. Results: Normal-tissue complication probability modeling identified that the risk of postoperative pulmonary complications was most significantly associated with small absolute volumes of lung spared from doses ≥5 Gy (VS5), that is, exposed to doses <5 Gy. However, bootstrap analysis found no significant difference between the quality of this model and fits based on other dosimetric parameters, including mean lung dose, effective dose, and relative volume of lung receiving ≥5 Gy, probably because of correlations among these factors. The choice of DVH vs. DMH or the use of fractionation correction did not significantly affect the results of the NTCP modeling. The parameter values estimated for the Lyman NTCP model were as follows (with 95% confidence intervals in parentheses): n = 1.85 (0.04, ∞), m = 0.55 (0.22, 1.02), and D5 = 17.5 Gy (9.4 Gy, 102 Gy). Conclusions: In this cohort of esophageal cancer patients, several dosimetric parameters including mean lung dose, effective dose, and absolute volume of lung receiving <5 Gy provided similar descriptions of the risk of postoperative pulmonary complications as a function of radiation dose distribution in the lung.

  7. Economic evaluation of a psychological intervention for high distress cancer patients and carers: costs and quality-adjusted life years.

    PubMed

    Chatterton, Mary Lou; Chambers, Suzanne; Occhipinti, Stefano; Girgis, Afaf; Dunn, Jeffrey; Carter, Rob; Shih, Sophy; Mihalopoulos, Cathrine

    2016-07-01

This study compared the cost-effectiveness of a psychologist-led, individualised cognitive behavioural intervention (PI) to a nurse-led, minimal contact self-management condition for highly distressed cancer patients and carers. This was an economic evaluation conducted alongside a randomised trial of highly distressed adult cancer patients and carers calling cancer helplines. Services used by participants were measured using a resource use questionnaire, and quality-adjusted life years were measured using the Assessment of Quality of Life - Eight Dimension instrument collected through a computer-assisted telephone interview. The base case analysis stratified participants based on the baseline score on the Brief Symptom Inventory. Incremental cost-effectiveness ratio confidence intervals were calculated with a nonparametric bootstrap to reflect sampling uncertainty. The results were subjected to sensitivity analysis by varying unit costs for resource use and the method for handling missing data. No significant differences were found in overall total costs or quality-adjusted life years (QALYs) between intervention groups. Bootstrapped data suggest the PI had a higher probability of lower cost and greater QALYs for both carers and patients with high distress at baseline. For patients with low levels of distress at baseline, the PI had a higher probability of greater QALYs but at additional cost. Sensitivity analysis showed the results were robust. The PI may be cost-effective compared with the nurse-led, minimal contact self-management condition for highly distressed cancer patients and carers. More intensive psychological intervention for patients with greater levels of distress appears warranted. Copyright © 2015 John Wiley & Sons, Ltd.

  8. Evaluation of wound healing in diabetic foot ulcer using platelet-rich plasma gel: A single-arm clinical trial.

    PubMed

    Mohammadi, Mohammad Hossein; Molavi, Behnam; Mohammadi, Saeed; Nikbakht, Mohsen; Mohammadi, Ashraf Malek; Mostafaei, Shayan; Norooznezhad, Amir Hossein; Ghorbani Abdegah, Ali; Ghavamzadeh, Ardeshir

    2017-04-01

    The aim of the present study was to evaluate the effectiveness of autologous platelet-rich plasma (PRP) gel for the treatment of diabetic foot ulcer (DFU) during the first 4 weeks of treatment. In this longitudinal, single-arm trial, 100 patients were randomly selected after meeting the inclusion and exclusion criteria; of these 100 patients, 70 (70%) were enrolled in the trial. After primary care actions such as wound debridement, the area of each wound was calculated and recorded. PRP therapy (2 mL/cm² of ulcer) was performed weekly until healing for each patient. We used a one-sample t-test for wound healing and a bootstrap resampling approach with 1000 bootstrap samples to report confidence intervals. A p-value < 0.05 was considered statistically significant. The mean (SD) DFU duration was 19.71 (4.94) weeks for the units sampled. Two subjects (2.8%) withdrew from the study. The average area of the 71 ulcers was 6.11 cm² (SD: 4.37). The mean and median healing times were 8.7 and 8 weeks (SD: 3.93), excluding the 2 withdrawn cases. According to the one-sample t-test, wound area (cm²), on average, significantly decreased to 51.9% (CI: 46.7-57.1) through the first four weeks of therapy. Furthermore, no significant correlation (0.22) was found between ulcer area and healing duration (p-value > 0.5). According to the results, PRP could be considered a candidate treatment for non-healing DFUs, as it may prevent future complications such as amputation or death. Copyright © 2016 Elsevier Ltd. All rights reserved.
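
    The percentile bootstrap with 1000 resamples mentioned above can be sketched as follows; the per-patient reduction values are synthetic placeholders, not the trial's data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-patient percentage reductions in wound area over the
# first four weeks (synthetic values, not the trial's measurements).
reduction = rng.normal(52.0, 18.0, 68)

B = 1000  # 1000 bootstrap samples, as in the abstract
boot_means = np.array([
    rng.choice(reduction, size=reduction.size, replace=True).mean()
    for _ in range(B)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])  # percentile 95% CI
print(f"mean reduction {reduction.mean():.1f}% (95% CI {lo:.1f}-{hi:.1f})")
```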

  9. Development of a prognostic nomogram for cirrhotic patients with upper gastrointestinal bleeding.

    PubMed

    Zhou, Yu-Jie; Zheng, Ji-Na; Zhou, Yi-Fan; Han, Yi-Jing; Zou, Tian-Tian; Liu, Wen-Yue; Braddock, Martin; Shi, Ke-Qing; Wang, Xiao-Dong; Zheng, Ming-Hua

    2017-10-01

    Upper gastrointestinal bleeding (UGIB) is a complication with a high mortality rate in critically ill patients presenting with cirrhosis. Today, there exist few accurate scoring models specifically designed for mortality risk assessment in critically ill cirrhotic patients with upper gastrointestinal bleeding (CICGIB). Our aim was to develop and evaluate a novel nomogram-based model specific for CICGIB. Overall, 540 consecutive CICGIB patients were enrolled. On the basis of Cox regression analyses, the nomogram was constructed to estimate the probability of 30-day, 90-day, 270-day, and 1-year survival. An upper gastrointestinal bleeding-chronic liver failure-sequential organ failure assessment (UGIB-CLIF-SOFA) score was derived from the nomogram. Performance assessment and internal validation of the model were performed using Harrell's concordance index (C-index), calibration plots, and bootstrap sample procedures. UGIB-CLIF-SOFA was also compared with other prognostic models, such as CLIF-SOFA and the model for end-stage liver disease, using C-indices. Eight independent factors derived from the Cox analysis (bilirubin, creatinine, international normalized ratio, sodium, albumin, mean artery pressure, vasopressin use, and hematocrit decrease > 10%) were assembled into the nomogram and the UGIB-CLIF-SOFA score. The calibration plots showed optimal agreement between nomogram prediction and actual observation. The C-index of the nomogram using bootstrap (0.729; 95% confidence interval: 0.689-0.766) was higher than that of the other models for predicting survival of CICGIB. We have developed and internally validated a novel nomogram and an easy-to-use scoring system that accurately predicts the mortality probability of CICGIB on the basis of eight easy-to-obtain parameters. External validation is now warranted in future clinical studies.

  10. Identifying Emergency Department Patients at Low Risk for a Variceal Source of Upper Gastrointestinal Hemorrhage.

    PubMed

    Klein, Lauren R; Money, Joel; Maharaj, Kaveesh; Robinson, Aaron; Lai, Tarissa; Driver, Brian E

    2017-11-01

    Assessing the likelihood of a variceal versus nonvariceal source of upper gastrointestinal bleeding (UGIB) guides therapy, but can be difficult to determine on clinical grounds. The objective of this study was to determine if there are easily ascertainable clinical and laboratory findings that can identify a patient as low risk for a variceal source of hemorrhage. This was a retrospective cohort study of adult ED patients with UGIB between January 2008 and December 2014 who had upper endoscopy performed during hospitalization. Clinical and laboratory data were abstracted from the medical record. The source of the UGIB was defined as variceal or nonvariceal based on endoscopic reports. Binary recursive partitioning was utilized to create a clinical decision rule. The rule was internally validated and test characteristics were calculated with 1,000 bootstrap replications. A total of 719 patients were identified; mean age was 55 years and 61% were male. There were 71 (10%) patients with a variceal UGIB identified on endoscopy. Binary recursive partitioning yielded a two-step decision rule (platelet count > 200 × 10⁹/L and an international normalized ratio [INR] < 1.3), which identified patients who were low risk for a variceal source of hemorrhage. For the bootstrapped samples, the rule performed with 97% sensitivity (95% confidence interval [CI] = 91%-100%) and 49% specificity (95% CI = 44%-53%). Although this derivation study must be externally validated before widespread use, patients presenting to the ED with an acute UGIB with a platelet count of >200 × 10⁹/L and an INR of <1.3 may be at very low risk for a variceal source of their upper gastrointestinal hemorrhage. © 2017 by the Society for Academic Emergency Medicine.
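
    Bootstrapping the test characteristics of a fixed decision rule, as done here with 1,000 replications, amounts to resampling the cohort and recomputing sensitivity and specificity each time. A minimal sketch on synthetic data (the cohort below is simulated, not the study's; the rule thresholds follow the abstract):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical cohort (synthetic, not the study's data): True = variceal source.
n = 719
variceal = rng.random(n) < 0.10
# Simulate labs that differ by source; rule is "low risk" when
# platelets > 200 and INR < 1.3, so "positive" means not low risk.
platelets = np.where(variceal, rng.normal(140, 50, n), rng.normal(230, 70, n))
inr = np.where(variceal, rng.normal(1.5, 0.3, n), rng.normal(1.1, 0.2, n))
rule_positive = ~((platelets > 200) & (inr < 1.3))

sens, spec = [], []
for _ in range(1000):                      # 1,000 bootstrap replications
    i = rng.integers(0, n, n)              # resample patients with replacement
    v, r = variceal[i], rule_positive[i]
    sens.append((r & v).sum() / v.sum())   # P(rule flags | variceal)
    spec.append((~r & ~v).sum() / (~v).sum())

print(np.percentile(sens, [2.5, 97.5]), np.percentile(spec, [2.5, 97.5]))
```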

  11. Novel CPR system that predicts return of spontaneous circulation from amplitude spectral area before electric shock in ventricular fibrillation.

    PubMed

    Nakagawa, Yoshihide; Amino, Mari; Inokuchi, Sadaki; Hayashi, Satoshi; Wakabayashi, Tsutomu; Noda, Tatsuya

    2017-04-01

    Amplitude spectral area (AMSA), an index for analysing ventricular fibrillation (VF) waveforms, is thought to predict the return of spontaneous circulation (ROSC) after electric shocks, but its validity is unconfirmed. We developed an equation to predict ROSC, where the change in AMSA (ΔAMSA) is added to AMSA measured immediately before the first shock (AMSA1). We examine the validity of this equation by comparing it with the conventional AMSA1-only equation. We retrospectively investigated 285 VF patients given prehospital electric shocks by emergency medical services. ΔAMSA was calculated by subtracting AMSA1 from last AMSA immediately before the last prehospital electric shock. Multivariate logistic regression analysis was performed using post-shock ROSC as a dependent variable. Analysis data were subjected to receiver operating characteristic curve analysis, goodness-of-fit testing using a likelihood ratio test, and the bootstrap method. AMSA1 (odds ratio (OR) 1.151, 95% confidence interval (CI) 1.086-1.220) and ΔAMSA (OR 1.289, 95% CI 1.156-1.438) were independent factors influencing ROSC induction by electric shock. Area under the curve (AUC) for predicting ROSC was 0.851 for AMSA1-only and 0.891 for AMSA1+ΔAMSA. Compared with the AMSA1-only equation, the AMSA1+ΔAMSA equation had significantly better goodness-of-fit (likelihood ratio test P<0.001) and showed good fit in the bootstrap method. Post-shock ROSC was accurately predicted by adding ΔAMSA to AMSA1. AMSA-based ROSC prediction enables application of electric shock to only those patients with high probability of ROSC, instead of interrupting chest compressions and delivering unnecessary shocks to patients with low probability of ROSC. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. The JAGUAR Score Predicts 1-Month Disability/Death in Ischemic Stroke Patient Ineligible for Recanalization Therapy.

    PubMed

    Widhi Nugroho, Aryandhito; Arima, Hisatomi; Takashima, Naoyuki; Fujii, Takako; Shitara, Satoshi; Miyamatsu, Naomi; Sugimoto, Yoshihisa; Nagata, Satoru; Komori, Masaru; Kita, Yoshikuni; Miura, Katsuyuki; Nozaki, Kazuhiko

    2018-06-22

    Most available scoring systems to predict outcome after acute ischemic stroke (AIS) were established in Western countries. We aimed to develop a simple prediction score for 1-month severe disability/death after onset in AIS patients ineligible for recanalization therapy, based on readily and widely obtainable on-admission clinical, laboratory, and radiological examinations in Asian developing countries. Using the Shiga Stroke Registry, a large population-based registry in Japan, multivariable logistic regression analysis was conducted in 1617 AIS patients ineligible for recanalization therapy to yield β-coefficients of significant predictors of a 1-month modified Rankin Scale score of 5-6, which were then multiplied by a specific constant and rounded to the nearest integer to develop a 0-10 point system. Model discrimination and calibration were evaluated in the original and bootstrapped populations. Japan Coma Scale score (J), age (A), random glucose (G), untimely onset-to-arrival time (U), atrial fibrillation (A), and preadmission dependency status according to the modified Rankin Scale score (R) were recognized as independent predictors of outcome. Each β-coefficient was multiplied by 1.3, creating the JAGUAR score. Its area under the curve (95% confidence interval) was .901 (.880-.922) and .901 (.900-.901) in the original and bootstrapped populations, respectively. The score showed good calibration in both populations (P = .27). The JAGUAR score can be an important tool for predicting severe disability/death in AIS patients ineligible for recanalization therapy; it can be applied on admission with no complicated calculation or multimodal neuroimaging necessary, making it suitable for Asian developing countries. Copyright © 2018 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  13. Creation of a model to predict survival in patients with refractory coeliac disease using a multinational registry.

    PubMed

    Rubio-Tapia, A; Malamut, G; Verbeek, W H M; van Wanrooij, R L J; Leffler, D A; Niveloni, S I; Arguelles-Grande, C; Lahr, B D; Zinsmeister, A R; Murray, J A; Kelly, C P; Bai, J C; Green, P H; Daum, S; Mulder, C J J; Cellier, C

    2016-10-01

    Refractory coeliac disease is a severe complication of coeliac disease with heterogeneous outcomes. To create a prognostic model to estimate survival of patients with refractory coeliac disease, we evaluated predictors of 5-year mortality using Cox proportional hazards regression on subjects from a multinational registry. Bootstrap resampling was used to internally validate the individual factors and overall model performance. The mean of the estimated regression coefficients from 400 bootstrap models was used to derive a risk score for 5-year mortality. The multinational cohort was composed of 232 patients diagnosed with refractory coeliac disease across seven centres (range of 11-63 cases per centre). The median age was 53 years and 150 (64%) were women. A total of 51 subjects died during a 5-year follow-up (cumulative 5-year all-cause mortality = 30%). From a multiple variable Cox proportional hazards model, the following variables were significantly associated with 5-year mortality: age at refractory coeliac disease diagnosis (per 20 year increase, hazard ratio = 2.21; 95% confidence interval, CI: 1.38-3.55), abnormal intraepithelial lymphocytes (hazard ratio = 2.85; 95% CI: 1.22-6.62), and albumin (per 0.5 unit increase, hazard ratio = 0.72; 95% CI: 0.61-0.85). A simple weighted three-factor risk score was created to estimate 5-year survival. Using data from a multinational registry and previously reported risk factors, we created a prognostic model to predict 5-year mortality among patients with refractory coeliac disease. This new model may help clinicians to guide treatment and follow-up. © 2016 John Wiley & Sons Ltd.

  14. The effect of white matter hyperintensities on verbal memory: Mediation by temporal lobe atrophy.

    PubMed

    Swardfager, Walter; Cogo-Moreira, Hugo; Masellis, Mario; Ramirez, Joel; Herrmann, Nathan; Edwards, Jodi D; Saleem, Mahwesh; Chan, Parco; Yu, Di; Nestor, Sean M; Scott, Christopher J M; Holmes, Melissa F; Sahlas, Demetrios J; Kiss, Alexander; Oh, Paul I; Strother, Stephen C; Gao, Fuqiang; Stefanovic, Bojana; Keith, Julia; Symons, Sean; Swartz, Richard H; Lanctôt, Krista L; Stuss, Donald T; Black, Sandra E

    2018-02-20

    To determine the relationship between white matter hyperintensities (WMH) presumed to indicate disease of the cerebral small vessels, temporal lobe atrophy, and verbal memory deficits in Alzheimer disease (AD) and other dementias. We recruited groups of participants with and without AD, including strata with extensive WMH and minimal WMH, into a cross-sectional proof-of-principle study (n = 118). A consecutive case series from a memory clinic was used as an independent validation sample (n = 702; Sunnybrook Dementia Study; NCT01800214). We assessed WMH volume and left temporal lobe atrophy (measured as the brain parenchymal fraction) using structural MRI and verbal memory using the California Verbal Learning Test. Using path modeling with an inferential bootstrapping procedure, we tested an indirect effect of WMH on verbal recall that depends sequentially on temporal lobe atrophy and verbal learning. In both samples, WMH predicted poorer verbal recall, specifically due to temporal lobe atrophy and poorer verbal learning (proof-of-principle -1.53, 95% bootstrap confidence interval [CI] -2.45 to -0.88; and confirmation -0.66, 95% CI [-0.95 to -0.41] words). This pathway was significant in subgroups with (-0.20, 95% CI [-0.38 to -0.07] words, n = 363) and without (-0.71, 95% CI [-1.12 to -0.37] words, n = 339) AD. Via the identical pathway, WMH contributed to deficits in recognition memory (-1.82%, 95% CI [-2.64% to -1.11%]), a sensitive and specific sign of AD. Across dementia syndromes, WMH contribute indirectly to verbal memory deficits considered pathognomonic of Alzheimer disease, specifically by contributing to temporal lobe atrophy. © 2018 American Academy of Neurology.

  15. Identification of Dyslipidemic Patients Attending Primary Care Clinics Using Electronic Medical Record (EMR) Data from the Canadian Primary Care Sentinel Surveillance Network (CPCSSN) Database.

    PubMed

    Aref-Eshghi, Erfan; Oake, Justin; Godwin, Marshall; Aubrey-Bassler, Kris; Duke, Pauline; Mahdavian, Masoud; Asghari, Shabnam

    2017-03-01

    The objective of this study was to define the optimal algorithm to identify patients with dyslipidemia using electronic medical records (EMRs). EMRs of patients attending primary care clinics in St. John's, Newfoundland and Labrador (NL), Canada during 2009-2010 were studied to determine the best algorithm for identification of dyslipidemia. Six algorithms containing three components, dyslipidemia ICD coding, lipid lowering medication use, and abnormal laboratory lipid levels, were tested against a gold standard, defined as the existence of any of the three criteria. Linear discriminant analysis and bootstrapping were performed following sensitivity/specificity testing and receiver operating characteristic (ROC) curve analysis. Two validating datasets, NL records of 2011-2014 and Canada-wide records of 2010-2012, were used to replicate the results. Relative to the gold standard, combining laboratory data with lipid lowering medication consumption yielded the highest sensitivity (99.6%), NPV (98.1%), Kappa agreement (0.98), and area under the curve (AUC, 0.998). The linear discriminant analysis for this combination resulted in an error rate of 0.15 and an Eigenvalue of 1.99, and the bootstrapping led to AUC: 0.998, 95% confidence interval: 0.997-0.999, Kappa: 0.99. In the first validating dataset this algorithm yielded a sensitivity of 97%, Negative Predictive Value (NPV) = 83%, Kappa = 0.88, and AUC = 0.98. These figures for the second validating dataset were 98%, 93%, 0.95, and 0.99, respectively. Combining laboratory data with lipid lowering medication consumption within the EMR is the best algorithm for detecting dyslipidemia. These results can generate standardized information systems for dyslipidemia and other chronic disease investigations using EMRs.
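
    Bootstrapping an AUC with a confidence interval, as reported above, can be sketched by resampling cases and recomputing the Mann-Whitney form of the AUC each time. The scores and labels below are synthetic placeholders, not the study's EMR data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical algorithm scores (synthetic): higher score = more likely
# dyslipidemic; `labels` plays the role of the gold standard.
n = 500
labels = rng.random(n) < 0.3
scores = np.where(labels, rng.normal(1.0, 1.0, n), rng.normal(0.0, 1.0, n))

def auc(y, s):
    # Mann-Whitney form: P(score_pos > score_neg), with ties counted half.
    pos, neg = s[y], s[~y]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

boot = []
for _ in range(1000):
    i = rng.integers(0, n, n)          # resample patients with replacement
    boot.append(auc(labels[i], scores[i]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC {auc(labels, scores):.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```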

  16. Soft-tissue anatomy of the extant hominoids: a review and phylogenetic analysis.

    PubMed

    Gibbs, S; Collard, M; Wood, B

    2002-01-01

    This paper reports the results of a literature search for information about the soft-tissue anatomy of the extant non-human hominoid genera, Pan, Gorilla, Pongo and Hylobates, together with the results of a phylogenetic analysis of these data plus comparable data for Homo. Information on the four extant non-human hominoid genera was located for 240 out of the 1783 soft-tissue structures listed in the Nomina Anatomica. Numerically these data are biased so that information about some systems (e.g. muscles) and some regions (e.g. the forelimb) are over-represented, whereas other systems and regions (e.g. the veins and the lymphatics of the vascular system, the head region) are either under-represented or not represented at all. Screening to ensure that the data were suitable for use in a phylogenetic analysis reduced the number of eligible soft-tissue structures to 171. These data, together with comparable data for modern humans, were converted into discontinuous character states suitable for phylogenetic analysis and then used to construct a taxon-by-character matrix. This matrix was used in two tests of the hypothesis that soft-tissue characters can be relied upon to reconstruct hominoid phylogenetic relationships. In the first, parsimony analysis was used to identify cladograms requiring the smallest number of character state changes. In the second, the phylogenetic bootstrap was used to determine the confidence intervals of the most parsimonious clades. The parsimony analysis yielded a single most parsimonious cladogram that matched the molecular cladogram. Similarly the bootstrap analysis yielded clades that were compatible with the molecular cladogram; a (Homo, Pan) clade was supported by 95% of the replicates, and a (Gorilla, Pan, Homo) clade by 96%. These are the first hominoid morphological data to provide statistically significant support for the clades favoured by the molecular evidence.

  17. Constructing Confidence Intervals for Reliability Coefficients Using Central and Noncentral Distributions.

    ERIC Educational Resources Information Center

    Weber, Deborah A.

    Greater understanding and use of confidence intervals is central to changes in statistical practice (G. Cumming and S. Finch, 2001). Reliability coefficients and confidence intervals for reliability coefficients can be computed using a variety of methods. Estimating confidence intervals includes both central and noncentral distribution approaches.…

  18. Can 3-dimensional power Doppler indices improve the prenatal diagnosis of a potentially morbidly adherent placenta in patients with placenta previa?

    PubMed

    Haidar, Ziad A; Papanna, Ramesha; Sibai, Baha M; Tatevian, Nina; Viteri, Oscar A; Vowels, Patricia C; Blackwell, Sean C; Moise, Kenneth J

    2017-08-01

    Traditionally, 2-dimensional ultrasound parameters have been used for the diagnosis of a suspected morbidly adherent placenta previa. More objective techniques have not been well studied yet. The objective of the study was to determine the ability of prenatal 3-dimensional power Doppler analysis of flow and vascular indices to predict the morbidly adherent placenta objectively. A prospective cohort study was performed in women between 28 and 32 gestational weeks with known placenta previa. Patients underwent a two-dimensional gray-scale ultrasound that determined management decisions. 3-Dimensional power Doppler volumes were obtained during the same examination and vascular, flow, and vascular flow indices were calculated after manual tracing of the viewed placenta in the sweep; data were blinded to obstetricians. Morbidly adherent placenta was confirmed by histology. Severe morbidly adherent placenta was defined as increta/percreta on histology, blood loss >2000 mL, and >2 units of PRBC transfused. Sensitivities, specificities, predictive values, and likelihood ratios were calculated. Student t and χ² tests, logistic regression, receiver-operating characteristic curves, and intra- and interrater agreements using Kappa statistics were performed. 
The following results were found: (1) 50 women were studied: 23 had morbidly adherent placenta, of which 12 (52.2%) were severe morbidly adherent placenta; (2) 2-dimensional parameters diagnosed morbidly adherent placenta with a sensitivity of 82.6% (95% confidence interval, 60.4-94.2), a specificity of 88.9% (95% confidence interval, 69.7-97.1), a positive predictive value of 86.3% (95% confidence interval, 64.0-96.4), a negative predictive value of 85.7% (95% confidence interval, 66.4-95.3), a positive likelihood ratio of 7.4 (95% confidence interval, 2.5-21.9), and a negative likelihood ratio of 0.2 (95% confidence interval, 0.08-0.48); (3) mean values of the vascular index (32.8 ± 7.4) and the vascular flow index (14.2 ± 3.8) were higher in morbidly adherent placenta (P < .001); (4) area under the receiver-operating characteristic curve for the vascular and vascular flow indices were 0.99 and 0.97, respectively; (5) the vascular index ≥21 predicted morbidly adherent placenta with a sensitivity and a specificity of 95% (95% confidence interval, 88.2-96.9) and 91%, respectively (95% confidence interval, 87.5-92.4), 92% positive predictive value (95% confidence interval, 85.5-94.3), 90% negative predictive value (95% confidence interval, 79.9-95.3), positive likelihood ratio of 10.55 (95% confidence interval, 7.06-12.75), and negative likelihood ratio of 0.05 (95% confidence interval, 0.03-0.13); and (6) for the severe morbidly adherent placenta, 2-dimensional ultrasound had a sensitivity of 33.3% (95% confidence interval, 11.3-64.6), a specificity of 81.8% (95% confidence interval, 47.8-96.8), a positive predictive value of 66.7% (95% confidence interval, 24.1-94.1), a negative predictive value of 52.9% (95% confidence interval, 28.5-76.1), a positive likelihood ratio of 1.83 (95% confidence interval, 0.41-8.11), and a negative likelihood ratio of 0.81 (95% confidence interval, 0.52-1.26). 
A vascular index ≥31 predicted the diagnosis of a severe morbidly adherent placenta with a 100% sensitivity (95% confidence interval, 72-100), a 90% specificity (95% confidence interval, 81.7-93.8), an 88% positive predictive value (95% confidence interval, 55.0-91.3), a 100% negative predictive value (95% confidence interval, 90.9-100), a positive likelihood ratio of 10.0 (95% confidence interval, 3.93-16.13), and a negative likelihood ratio of 0 (95% confidence interval, 0-0.34). Intrarater and interrater agreements were 94% (P < .001) and 93% (P < .001), respectively. The vascular index accurately predicts the morbidly adherent placenta in patients with placenta previa. In addition, 3-dimensional power Doppler vascular and vascular flow indices were more predictive of severe cases of morbidly adherent placenta compared with 2-dimensional ultrasound. This objective technique may limit the variations in diagnosing morbidly adherent placenta because of the subjectivity of 2-dimensional ultrasound interpretations. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. A unified procedure for meta-analytic evaluation of surrogate end points in randomized clinical trials

    PubMed Central

    Dai, James Y.; Hughes, James P.

    2012-01-01

    The meta-analytic approach to evaluating surrogate end points assesses the predictiveness of treatment effect on the surrogate toward treatment effect on the clinical end point based on multiple clinical trials. Definition and estimation of the correlation of treatment effects were developed in linear mixed models and later extended to binary or failure time outcomes on a case-by-case basis. In a general regression setting that covers nonnormal outcomes, we discuss in this paper several metrics that are useful in the meta-analytic evaluation of surrogacy. We propose a unified 3-step procedure to assess these metrics in settings with binary end points, time-to-event outcomes, or repeated measures. First, the joint distribution of estimated treatment effects is ascertained by an estimating equation approach; second, the restricted maximum likelihood method is used to estimate the means and the variance components of the random treatment effects; finally, confidence intervals are constructed by a parametric bootstrap procedure. The proposed method is evaluated by simulations and applications to 2 clinical trials. PMID:22394448
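
    The final step of the procedure, a parametric bootstrap, differs from the nonparametric versions elsewhere on this page: one resamples from the *fitted* model rather than from the data. A minimal sketch under an assumed normal model, with synthetic trial-level effects (not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical trial-level treatment effects (synthetic): one per trial.
effects = rng.normal(0.4, 0.2, 12)

# Fit the assumed model (here a normal), then simulate new datasets from
# the fitted model rather than resampling the observed values -- the
# defining step of a parametric bootstrap.
mu_hat, sd_hat = effects.mean(), effects.std(ddof=1)

B = 5000
boot = np.array([
    rng.normal(mu_hat, sd_hat, effects.size).mean() for _ in range(B)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"mean effect {mu_hat:.3f}, parametric bootstrap 95% CI ({lo:.3f}, {hi:.3f})")
```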

  20. Relationships among muscle dysmorphia characteristics, body image quality of life, and coping in males.

    PubMed

    Tod, D; Edwards, C

    2015-09-01

    The purpose of this study was to examine relationships among bodybuilding dependence, muscle satisfaction, body image-related quality of life, and body image-related coping strategies, and to test the hypothesis that muscle dysmorphia characteristics may predict quality of life via coping strategies. Two hundred ninety-four males (mean age = 20.5 years, SD = 3.1) took part in a cross-sectional survey, completing questionnaires assessing muscle satisfaction, bodybuilding dependence, body image-related quality of life, and body image-related coping. Quality of life was correlated positively with muscle satisfaction and bodybuilding dependence but negatively with body image coping (P<0.05). Body image coping was correlated positively with bodybuilding dependence and negatively with muscle satisfaction (P<0.05). Mediation analysis found that bodybuilding dependence and muscle satisfaction predicted quality of life both directly and indirectly via body image coping strategies (as evidenced by the bias-corrected and accelerated bootstrapped confidence intervals). These results provide preliminary evidence regarding the ways that muscularity concerns might influence body image-related quality of life. Copyright © 2014 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
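
    The bias-corrected and accelerated (BCa) interval cited above is available off the shelf in SciPy's `scipy.stats.bootstrap`. A minimal sketch on synthetic right-skewed data (not the study's), where BCa's bias and skewness corrections matter most:

```python
import numpy as np
from scipy.stats import bootstrap

rng = np.random.default_rng(3)

# Hypothetical right-skewed scores (synthetic placeholder data).
waits = rng.exponential(scale=2.0, size=200)

res = bootstrap((waits,), np.mean, confidence_level=0.95,
                n_resamples=2000, method='BCa', random_state=rng)
lo, hi = res.confidence_interval
print(f"mean {waits.mean():.2f}, BCa 95% CI ({lo:.2f}, {hi:.2f})")
```

    Compared with the plain percentile interval, BCa adjusts the endpoints for the bias and acceleration (skewness) of the bootstrap distribution.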

  1. Estimation of urban runoff and water quality using remote sensing and artificial intelligence.

    PubMed

    Ha, S R; Park, S Y; Park, D H

    2003-01-01

    Water quality and quantity of runoff are strongly dependent on landuse and landcover (LULC) criteria. In this study, we developed an improved parameter estimation procedure for the environmental model using remote sensing (RS) and artificial intelligence (AI) techniques. Landsat TM multi-band (7 bands) and Korea Multi-Purpose Satellite (KOMPSAT) panchromatic data were selected for input data processing. We employed two kinds of artificial intelligence techniques, RBF-NN (radial-basis-function neural network) and ANN (artificial neural network), to classify LULC of the study area. A bootstrap resampling method, a statistical technique, was employed to generate the confidence intervals and distribution of the unit load. SWMM was used to simulate the urban runoff and water quality and applied to the study watershed. Urban flow and non-point source contamination were simulated using rainfall-runoff and measured water quality data. The estimated total runoff, peak time, and pollutant generation varied considerably according to the classification accuracy and the percentile unit load applied. The proposed procedure could be applied efficiently to water quality and runoff simulation in a rapidly changing urban area.

  2. Statistical aspects of genetic association testing in small samples, based on selective DNA pooling data in the arctic fox.

    PubMed

    Szyda, Joanna; Liu, Zengting; Zatoń-Dobrowolska, Magdalena; Wierzbicki, Heliodor; Rzasa, Anna

    2008-01-01

    We analysed data from a selective DNA pooling experiment with 130 individuals of the arctic fox (Alopex lagopus), originating from 2 types differing in body size. The association between alleles of 6 selected unlinked molecular markers and body size was tested using univariate and multinomial logistic regression models, applying odds ratios and test statistics from the power divergence family. Due to the small sample size and the resulting sparseness of the data table, hypothesis testing could not rely on the asymptotic distributions of the tests. Instead, we accounted for data sparseness by (i) modifying the confidence intervals of the odds ratio; (ii) using a normal approximation of the asymptotic distribution of the power divergence tests with different approaches for calculating the moments of the statistics; and (iii) assessing P values empirically, based on bootstrap samples. As a result, a significant association was observed for 3 markers. Furthermore, we used simulations to assess the validity of the normal approximation of the asymptotic distribution of the test statistics under the conditions of small and sparse samples.
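
    An empirical P value from bootstrap samples, as in step (iii), compares the observed test statistic with its distribution under resamples generated from the null model. A minimal sketch for a sparse 2x2 table (synthetic counts, not the fox data), using the chi-square statistic as a stand-in for the power divergence family:

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical sparse 2x2 marker-by-type counts (synthetic placeholder).
table = np.array([[9, 2],
                  [4, 8]])

def chi2_stat(t):
    # Pearson chi-square against the independence model.
    exp = np.outer(t.sum(1), t.sum(0)) / t.sum()
    return ((t - exp) ** 2 / exp).sum()

obs = chi2_stat(table)

# Resample tables under the null (independence) model, then recompute the
# statistic; the empirical P value is the tail proportion.
n = table.sum()
p_null = np.outer(table.sum(1), table.sum(0)).ravel() / n**2
B = 5000
null_stats = np.array([
    chi2_stat(rng.multinomial(n, p_null).reshape(2, 2)) for _ in range(B)
])
p_emp = (null_stats >= obs).mean()
print(f"chi2 = {obs:.2f}, empirical p = {p_emp:.4f}")
```

    With such small counts this empirical tail probability avoids relying on the chi-square asymptotics, which is exactly the concern the abstract raises.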

  3. SPSS and SAS procedures for estimating indirect effects in simple mediation models.

    PubMed

    Preacher, Kristopher J; Hayes, Andrew F

    2004-11-01

    Researchers often conduct mediation analysis in order to indirectly assess the effect of a proposed cause on some outcome through a proposed mediator. The utility of mediation analysis stems from its ability to go beyond the merely descriptive to a more functional understanding of the relationships among variables. A necessary component of mediation is a statistically and practically significant indirect effect. Although mediation hypotheses are frequently explored in psychological research, formal significance tests of indirect effects are rarely conducted. After a brief overview of mediation, we argue the importance of directly testing the significance of indirect effects and provide SPSS and SAS macros that facilitate estimation of the indirect effect with a normal theory approach and a bootstrap approach to obtaining confidence intervals, as well as the traditional approach advocated by Baron and Kenny (1986). We hope that this discussion and the macros will enhance the frequency of formal mediation tests in the psychology literature. Electronic copies of these macros may be downloaded from the Psychonomic Society's Web archive at www.psychonomic.org/archive/.
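
    The bootstrap approach to the indirect effect that the macros implement can be sketched in a few lines: estimate the a (X to M) and b (M to Y, controlling for X) paths, form the product a*b, and take percentile limits over resamples. The data below are simulated under an assumed mediation model, not output of the authors' macros:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical data for a simple X -> M -> Y mediation model (synthetic).
n = 300
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)             # a path = 0.5
y = 0.4 * m + 0.1 * x + rng.normal(size=n)   # b path = 0.4, direct c' = 0.1

def indirect(x, m, y):
    # a: OLS slope of M on X; b: slope of Y on M controlling for X.
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]
    return a * b

boot = []
for _ in range(2000):
    i = rng.integers(0, n, n)                # resample cases with replacement
    boot.append(indirect(x[i], m[i], y[i]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect {indirect(x, m, y):.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

    An interval excluding zero is the usual evidence for a significant indirect effect, in contrast to the normal-theory Sobel test the paper also discusses.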

  4. Small Vocabulary Recognition Using Surface Electromyography in an Acoustically Harsh Environment

    NASA Technical Reports Server (NTRS)

    Betts, Bradley J.; Jorgensen, Charles

    2005-01-01

    This paper presents results of electromyography-based (EMG-based) speech recognition on a small vocabulary of 15 English words. The work was motivated in part by a desire to mitigate the effects of high acoustic noise on speech intelligibility in communication systems used by first responders. Both an off-line and a real-time system were constructed. Data were collected from a single male subject wearing a firefighter's self-contained breathing apparatus. A single channel of EMG data was used, collected via surface sensors at a rate of 10⁴ samples/s. The signal processing core consisted of an activity detector, a feature extractor, and a neural network classifier. In the off-line phase, 150 examples of each word were collected from the subject. Generalization testing, conducted using bootstrapping, produced an overall average correct classification rate on the 15 words of 74%, with a 95% confidence interval of [71%, 77%]. Once the classifier was trained, the subject used the real-time system to communicate and to control a robotic device. The real-time system was tested with the subject exposed to an ambient noise level of approximately 95 decibels.

  5. Testing a multiple mediator model of the effect of childhood sexual abuse on adolescent sexual victimization.

    PubMed

    Bramsen, Rikke H; Lasgaard, Mathias; Koss, Mary P; Shevlin, Mark; Elklit, Ask; Banner, Jytte

    2013-01-01

    The present study modeled the direct relationship between child sexual abuse (CSA) and adolescent peer-to-peer sexual victimization (APSV) and the mediated effect via variables representing the number of sexual partners, sexual risk behavior, and signaling sexual boundaries. A cross-sectional study on the effect of CSA on APSV was conducted, utilizing a multiple mediator model. Mediated and direct effects in the model were estimated employing Mplus using bootstrapped percentile-based confidence intervals to test for significance of mediated effects. The study employed 327 Danish female adolescents with a mean age of 14.9 years (SD = 0.5). The estimates from the mediational model indicated full mediation of the effect of CSA on APSV via number of sexual partners and sexual risk behavior. The current study suggests that the link between CSA and APSV was mediated by sexual behaviors specifically pertaining to situations of social peer interaction, rather than depending directly on prior experiences of sexual victimization. The present study identifies a modifiable target area for intervention to reduce adolescent sexual revictimization. © 2013 American Orthopsychiatric Association.

  6. Interpersonal and intrapersonal factors as parallel independent mediators in the association between internalized HIV stigma and ART adherence

    PubMed Central

    Seghatol-Eslami, Victoria C.; Dark, Heather; Raper, James L.; Mugavero, Michael J.; Turan, Janet M.; Turan, Bulent

    2016-01-01

    Introduction People living with HIV (PLWH) need to adhere to antiretroviral therapy (ART) to achieve optimal health. One reason for ART non-adherence is HIV-related stigma. Objectives We aimed to examine whether HIV treatment self-efficacy (an intrapersonal mechanism) mediates the stigma – adherence association. We also examined whether self-efficacy and the concern about being seen while taking HIV medication (an interpersonal mechanism) are parallel mediators independent of each other. Methods 180 PLWH self-reported internalized HIV stigma, ART adherence, HIV treatment self-efficacy, and concerns about being seen while taking HIV medication. We calculated bias-corrected 95% confidence intervals (CIs) for indirect effects using bootstrapping to conduct mediation analyses. Results Adherence self-efficacy mediated the relationship between internalized stigma and ART adherence. Additionally, self-efficacy and concern about being seen while taking HIV medication uniquely mediated and explained almost all of the stigma – adherence association in independent paths (parallel mediation). Conclusion These results can inform intervention strategies to promote ART adherence. PMID:27926668
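    The bias-corrected (BC) bootstrap used in this study shifts the percentile cut-points by a constant z0 estimated from the fraction of resampled statistics falling below the point estimate. A generic Python sketch of the BC interval, applied here to the mean of skewed simulated data rather than to the study's indirect effects:

```python
import random
from statistics import NormalDist, fmean

def bc_ci(data, stat, B=4000, alpha=0.05, seed=2):
    # bias-corrected (BC) percentile bootstrap interval for stat(data)
    rng = random.Random(seed)
    n = len(data)
    theta = stat(data)
    boots = sorted(stat([data[rng.randrange(n)] for _ in range(n)]) for _ in range(B))
    nd = NormalDist()
    # z0: bias-correction constant from the fraction of resamples below the estimate
    prop = sum(b < theta for b in boots) / B
    z0 = nd.inv_cdf(min(max(prop, 1 / B), 1 - 1 / B))
    z_lo, z_hi = nd.inv_cdf(alpha / 2), nd.inv_cdf(1 - alpha / 2)
    lo_p, hi_p = nd.cdf(2 * z0 + z_lo), nd.cdf(2 * z0 + z_hi)
    return boots[int(lo_p * (B - 1))], boots[int(hi_p * (B - 1))]

# right-skewed sample: squares of standard normals (population mean 1)
rng = random.Random(1)
data = [rng.gauss(0, 1) ** 2 for _ in range(100)]
lo, hi = bc_ci(data, fmean)
print(lo, hi)
```

When z0 = 0 the BC interval reduces to the ordinary percentile interval; the further BC(a) refinement also adjusts for acceleration (skew), which this sketch omits.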

  7. Conducting Simulation Studies in the R Programming Environment.

    PubMed

    Hallgren, Kevin A

    2013-10-12

    Simulation studies allow researchers to answer specific questions about data analysis, statistical power, and best-practices for obtaining accurate results in empirical research. Despite the benefits that simulation research can provide, many researchers are unfamiliar with available tools for conducting their own simulation studies. The use of simulation studies need not be restricted to researchers with advanced skills in statistics and computer programming, and such methods can be implemented by researchers with a variety of abilities and interests. The present paper provides an introduction to methods used for running simulation studies using the R statistical programming environment and is written for individuals with minimal experience running simulation studies or using R. The paper describes the rationale and benefits of using simulations and introduces R functions relevant for many simulation studies. Three examples illustrate different applications for simulation studies, including (a) the use of simulations to answer a novel question about statistical analysis, (b) the use of simulations to estimate statistical power, and (c) the use of simulations to obtain confidence intervals of parameter estimates through bootstrapping. Results and fully annotated syntax from these examples are provided.
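    The paper's examples are written in R; as a flavor of example (b), here is a parallel Python sketch that estimates the power of a two-sample comparison by simulation. It uses a normal (z) critical value as a simplifying assumption in place of the t distribution, and the sample size and effect size are illustrative.

```python
import random
from statistics import fmean, stdev, NormalDist

def simulate_power(n=30, delta=0.8, sims=4000, alpha=0.05, seed=3):
    # Monte Carlo estimate of power for a two-sample comparison of means,
    # using a normal critical value as an approximation to the t test
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    hits = 0
    for _ in range(sims):
        a = [rng.gauss(0, 1) for _ in range(n)]
        b = [rng.gauss(delta, 1) for _ in range(n)]
        se = ((stdev(a) ** 2 + stdev(b) ** 2) / n) ** 0.5
        if abs(fmean(b) - fmean(a)) / se > z_crit:
            hits += 1
    return hits / sims

power = simulate_power()
print(power)
```

The same loop structure serves examples (a) and (c): replace the rejection count with whatever statistic the question concerns, or with resampling from the observed data for a bootstrap confidence interval.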

  8. Valid statistical inference methods for a case-control study with missing data.

    PubMed

    Tian, Guo-Liang; Zhang, Chi; Jiang, Xuejun

    2018-04-01

    The main objective of this paper is to derive the valid sampling distribution of the observed counts in a case-control study with missing data under the assumption of missing at random by employing the conditional sampling method and the mechanism augmentation method. The proposed sampling distribution, called the case-control sampling distribution, can be used to calculate the standard errors of the maximum likelihood estimates of parameters via the Fisher information matrix and to generate independent samples for constructing small-sample bootstrap confidence intervals. Theoretical comparisons of the new case-control sampling distribution with two existing sampling distributions exhibit a large difference. Simulations are conducted to investigate the influence of the three different sampling distributions on statistical inferences. One finding is that the conclusion by the Wald test for testing independence under the two existing sampling distributions could be completely different (even contradictory) from the Wald test for testing the equality of the success probabilities in control/case groups under the proposed distribution. A real cervical cancer data set is used to illustrate the proposed statistical methods.

  9. Variance Estimation, Design Effects, and Sample Size Calculations for Respondent-Driven Sampling

    PubMed Central

    2006-01-01

    Hidden populations, such as injection drug users and sex workers, are central to a number of public health problems. However, because of the nature of these groups, it is difficult to collect accurate information about them, and this difficulty complicates disease prevention efforts. A recently developed statistical approach called respondent-driven sampling improves our ability to study hidden populations by allowing researchers to make unbiased estimates of the prevalence of certain traits in these populations. Yet, not enough is known about the sample-to-sample variability of these prevalence estimates. In this paper, we present a bootstrap method for constructing confidence intervals around respondent-driven sampling estimates and demonstrate in simulations that it outperforms the naive method currently in use. We also use simulations and real data to estimate the design effects for respondent-driven sampling in a number of situations. We conclude with practical advice about the power calculations that are needed to determine the appropriate sample size for a study using respondent-driven sampling. In general, we recommend a sample size twice as large as would be needed under simple random sampling. PMID:16937083

  10. Introduction to Sample Size Choice for Confidence Intervals Based on "t" Statistics

    ERIC Educational Resources Information Center

    Liu, Xiaofeng Steven; Loudermilk, Brandon; Simpson, Thomas

    2014-01-01

    Sample size can be chosen to achieve a specified width in a confidence interval. The probability of obtaining a narrow width given that the confidence interval includes the population parameter is defined as the power of the confidence interval, a concept unfamiliar to many practitioners. This article shows how to utilize the Statistical Analysis…

  11. An Introduction to Confidence Intervals for Both Statistical Estimates and Effect Sizes.

    ERIC Educational Resources Information Center

    Capraro, Mary Margaret

    This paper summarizes methods of estimating confidence intervals, including classical intervals and intervals for effect sizes. The recent American Psychological Association (APA) Task Force on Statistical Inference report suggested that confidence intervals should always be reported, and the fifth edition of the APA "Publication Manual"…

  12. Evaluation of confidence intervals for a steady-state leaky aquifer model

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    1999-01-01

    The fact that dependent variables of groundwater models are generally nonlinear functions of model parameters is shown to be a potentially significant factor in calculating accurate confidence intervals for both model parameters and functions of the parameters, such as the values of dependent variables calculated by the model. The Lagrangian method of Vecchia and Cooley [Vecchia, A.V. and Cooley, R.L., Water Resources Research, 1987, 23(7), 1237-1250] was used to calculate nonlinear Scheffe-type confidence intervals for the parameters and the simulated heads of a steady-state groundwater flow model covering 450 km2 of a leaky aquifer. The nonlinear confidence intervals are compared to corresponding linear intervals. As suggested by the significant nonlinearity of the regression model, linear confidence intervals are often not accurate. The commonly made assumption that widths of linear confidence intervals always underestimate the actual (nonlinear) widths was not correct. Results show that nonlinear effects can cause the nonlinear intervals to be asymmetric and either larger or smaller than the linear approximations. Prior information on transmissivities helps reduce the size of the confidence intervals, with the most notable effects occurring for the parameters on which there is prior information and for head values in parameter zones for which there is prior information on the parameters.

  13. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

    PubMed

    Terry, Leann; Kelley, Ken

    2012-11-01

    Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
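    The core idea of planning sample size for a narrow interval can be illustrated in a much simpler setting than composite reliability: for a z-based interval on a mean with known sigma, the expected full width 2·z·sigma/sqrt(n) can be inverted for n. The sketch below illustrates this accuracy-in-parameter-estimation logic only; it is not the authors' method for reliability coefficients.

```python
from math import ceil
from statistics import NormalDist

def n_for_width(sigma, width, conf=0.95):
    # smallest n for which the z-based CI for a mean, with known sigma,
    # has full width 2 * z * sigma / sqrt(n) no larger than `width`
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return ceil((2 * z * sigma / width) ** 2)

print(n_for_width(1.0, 0.2))   # n needed for a 95% CI of width 0.2 when sigma = 1
```

The "assurance" variant in the paper goes one step further: because the realized width is random when sigma is estimated, n is inflated so the width is below the target with, say, 99% probability rather than merely in expectation.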

  14. Confidence intervals for the between-study variance in random-effects meta-analysis using generalised heterogeneity statistics: should we use unequal tails?

    PubMed

    Jackson, Dan; Bowden, Jack

    2016-09-07

    Confidence intervals for the between study variance are useful in random-effects meta-analyses because they quantify the uncertainty in the corresponding point estimates. Methods for calculating these confidence intervals have been developed that are based on inverting hypothesis tests using generalised heterogeneity statistics. Whilst, under the random effects model, these new methods furnish confidence intervals with the correct coverage, the resulting intervals are usually very wide, making them uninformative. We discuss a simple strategy for obtaining 95 % confidence intervals for the between-study variance with a markedly reduced width, whilst retaining the nominal coverage probability. Specifically, we consider the possibility of using methods based on generalised heterogeneity statistics with unequal tail probabilities, where the tail probability used to compute the upper bound is greater than 2.5 %. This idea is assessed using four real examples and a variety of simulation studies. Supporting analytical results are also obtained. Our results provide evidence that using unequal tail probabilities can result in shorter 95 % confidence intervals for the between-study variance. We also show some further results for a real example that illustrates how shorter confidence intervals for the between-study variance can be useful when performing sensitivity analyses for the average effect, which is usually the parameter of primary interest. We conclude that using unequal tail probabilities when computing 95 % confidence intervals for the between-study variance, when using methods based on generalised heterogeneity statistics, can result in shorter confidence intervals. We suggest that those who find the case for using unequal tail probabilities convincing should use the '1-4 % split', where greater tail probability is allocated to the upper confidence bound. The 'width-optimal' interval that we present deserves further investigation.
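    The effect of unequal tail splits is easy to see in a simpler, analytically tractable analogue: the chi-square interval for a normal variance. In the sketch below, chi-square quantiles come from the Wilson-Hilferty approximation (an assumption made to keep the code dependency-free); moving from the equal 2.5%/2.5% split to the '1-4% split', with the greater tail probability on the upper bound, shortens the interval.

```python
from statistics import NormalDist

def chi2_quantile(p, k):
    # Wilson-Hilferty approximation to the chi-square p-quantile with k df
    z = NormalDist().inv_cdf(p)
    return k * (1 - 2 / (9 * k) + z * (2 / (9 * k)) ** 0.5) ** 3

def var_ci_width(df, s2, upper_tail, lower_tail):
    # width of the CI for sigma^2 based on (df * s2) / sigma^2 ~ chi-square(df);
    # `upper_tail` is the tail probability spent on the upper confidence bound
    return df * s2 * (1 / chi2_quantile(upper_tail, df)
                      - 1 / chi2_quantile(1 - lower_tail, df))

w_equal = var_ci_width(19, 1.0, 0.025, 0.025)   # conventional 2.5%/2.5% split
w_split = var_ci_width(19, 1.0, 0.04, 0.01)     # the '1-4% split'
print(w_split, w_equal)
```

The shortening comes from the right skew of the chi-square distribution: moving tail probability to the upper bound trades a small increase in the lower limit's quantile for a large reduction in the dominant 1/chi-square term.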

  15. Exploring nonlinear feature space dimension reduction and data representation in breast CADx with Laplacian eigenmaps and t-SNE

    PubMed Central

    Jamieson, Andrew R.; Giger, Maryellen L.; Drukker, Karen; Li, Hui; Yuan, Yading; Bhooshan, Neha

    2010-01-01

    Purpose: In this preliminary study, recently developed unsupervised nonlinear dimension reduction (DR) and data representation techniques were applied to computer-extracted breast lesion feature spaces across three separate imaging modalities: Ultrasound (U.S.) with 1126 cases, dynamic contrast enhanced magnetic resonance imaging with 356 cases, and full-field digital mammography with 245 cases. Two methods for nonlinear DR were explored: Laplacian eigenmaps [M. Belkin and P. Niyogi, “Laplacian eigenmaps for dimensionality reduction and data representation,” Neural Comput. 15, 1373–1396 (2003)] and t-distributed stochastic neighbor embedding (t-SNE) [L. van der Maaten and G. Hinton, “Visualizing data using t-SNE,” J. Mach. Learn. Res. 9, 2579–2605 (2008)]. Methods: These methods attempt to map originally high dimensional feature spaces to more human interpretable lower dimensional spaces while preserving both local and global information. The properties of these methods as applied to breast computer-aided diagnosis (CADx) were evaluated in the context of malignancy classification performance as well as in the visual inspection of the sparseness within the two-dimensional and three-dimensional mappings. Classification performance was estimated by using the reduced dimension mapped feature output as input into both linear and nonlinear classifiers: Markov chain Monte Carlo based Bayesian artificial neural network (MCMC-BANN) and linear discriminant analysis. The new techniques were compared to previously developed breast CADx methodologies, including automatic relevance determination and linear stepwise (LSW) feature selection, as well as a linear DR method based on principal component analysis. Using ROC analysis and 0.632+ bootstrap validation, 95% empirical confidence intervals were computed for each classifier’s AUC performance. Results: In the large U.S. 
data set, sample high-performance results include AUC0.632+=0.88 with 95% empirical bootstrap interval [0.787;0.895] for 13 ARD-selected features and AUC0.632+=0.87 with interval [0.817;0.906] for four LSW-selected features, compared to the 4D t-SNE mapping (from the original 81D feature space) giving AUC0.632+=0.90 with interval [0.847;0.919], all using the MCMC-BANN. Conclusions: Preliminary results suggest that the new methods can match or exceed the classification performance of current advanced breast lesion CADx algorithms. While not appropriate as a complete replacement of feature selection in CADx problems, DR techniques offer a complementary approach, which can aid elucidation of additional properties associated with the data. Specifically, the new techniques were shown to possess the added benefit of delivering sparse lower dimensional representations for visual interpretation, revealing intricate data structure of the feature space. PMID:20175497

  16. Improved confidence intervals when the sample is counted an integer times longer than the blank.

    PubMed

    Potter, William Edward; Strzelczyk, Jadwiga Jodi

    2011-05-01

    Past computer solutions for confidence intervals in paired counting are extended to the case where the ratio of the sample count time to the blank count time is taken to be an integer, IRR. Previously, confidence intervals have been named Neyman-Pearson confidence intervals; more correctly they should have been named Neyman confidence intervals or simply confidence intervals. The technique utilized mimics a technique used by Pearson and Hartley to tabulate confidence intervals for the expected value of the discrete Poisson and Binomial distributions. The blank count and the contribution of the sample to the gross count are assumed to be Poisson distributed. The expected value of the blank count, in the sample count time, is assumed known. The net count, OC, is taken to be the gross count minus the product of IRR with the blank count. The probability density function (PDF) for the net count can be determined in a straightforward manner.
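    For the simplest case of a gross count with no blank subtraction, the exact Neyman interval for a Poisson mean can be obtained by inverting the tail probabilities numerically; this is the same construction the paper extends to the blank-subtracted net count. A self-contained Python sketch (the bisection ranges and iteration count are implementation choices, not the paper's):

```python
from math import exp

def pois_cdf(k, mu):
    # P(X <= k) for X ~ Poisson(mu), by direct summation of the PDF
    term = total = exp(-mu)
    for i in range(1, k + 1):
        term *= mu / i
        total += term
    return total

def exact_poisson_ci(k, alpha=0.05):
    # Neyman-style interval: invert the Poisson tail probabilities by bisection
    def solve(condition, lo, hi):
        # invariant: condition(lo) is True, condition(hi) is False
        for _ in range(60):
            mid = (lo + hi) / 2
            if condition(mid):
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    lower = 0.0 if k == 0 else solve(
        lambda m: 1 - pois_cdf(k - 1, m) < alpha / 2, 0.0, float(k + 1))
    upper = solve(lambda m: pois_cdf(k, m) > alpha / 2, float(k), 10 * k + 20)
    return lower, upper

lo, hi = exact_poisson_ci(10)
print(lo, hi)
```

For an observed count of 10 this reproduces the classical Garwood limits (about 4.80 and 18.39), which can also be written in terms of chi-square quantiles.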

  17. Exact Scheffé-type confidence intervals for output from groundwater flow models: 2. Combined use of hydrogeologic information and calibration data

    USGS Publications Warehouse

    Cooley, Richard L.

    1993-01-01

    Calibration data (observed values corresponding to model-computed values of dependent variables) are incorporated into a general method of computing exact Scheffé-type confidence intervals analogous to the confidence intervals developed in part 1 (Cooley, this issue) for a function of parameters derived from a groundwater flow model. Parameter uncertainty is specified by a distribution of parameters conditioned on the calibration data. This distribution was obtained as a posterior distribution by applying Bayes' theorem to the hydrogeologically derived prior distribution of parameters from part 1 and a distribution of differences between the calibration data and corresponding model-computed dependent variables. Tests show that the new confidence intervals can be much smaller than the intervals of part 1 because the prior parameter variance-covariance structure is altered so that combinations of parameters that give poor model fit to the data are unlikely. The confidence intervals of part 1 and the new confidence intervals can be effectively employed in a sequential method of model construction whereby new information is used to reduce confidence interval widths at each stage.

  18. Myocardial perfusion magnetic resonance imaging using sliding-window conjugate-gradient highly constrained back-projection reconstruction for detection of coronary artery disease.

    PubMed

    Ma, Heng; Yang, Jun; Liu, Jing; Ge, Lan; An, Jing; Tang, Qing; Li, Han; Zhang, Yu; Chen, David; Wang, Yong; Liu, Jiabin; Liang, Zhigang; Lin, Kai; Jin, Lixin; Bi, Xiaoming; Li, Kuncheng; Li, Debiao

    2012-04-15

    Myocardial perfusion magnetic resonance imaging (MRI) with sliding-window conjugate-gradient highly constrained back-projection reconstruction (SW-CG-HYPR) allows whole left ventricular coverage, improved temporal and spatial resolution and signal/noise ratio, and reduced cardiac motion-related image artifacts. The accuracy of this technique for detecting coronary artery disease (CAD) has not been determined in a large number of patients. We prospectively evaluated the diagnostic performance of myocardial perfusion MRI with SW-CG-HYPR in patients with suspected CAD. A total of 50 consecutive patients who were scheduled for coronary angiography with suspected CAD underwent myocardial perfusion MRI with SW-CG-HYPR at 3.0 T. The perfusion defects were interpreted qualitatively by 2 blinded observers and were correlated with x-ray angiographic stenoses ≥50%. The prevalence of CAD was 56%. In the per-patient analysis, the sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of SW-CG-HYPR were 96% (95% confidence interval 82% to 100%), 82% (95% confidence interval 60% to 95%), 87% (95% confidence interval 70% to 96%), 95% (95% confidence interval 74% to 100%), and 90% (95% confidence interval 82% to 98%), respectively. In the per-vessel analysis, the corresponding values were 98% (95% confidence interval 91% to 100%), 89% (95% confidence interval 80% to 94%), 86% (95% confidence interval 76% to 93%), 99% (95% confidence interval 93% to 100%), and 93% (95% confidence interval 89% to 97%), respectively. In conclusion, myocardial perfusion MRI using SW-CG-HYPR allows whole left ventricular coverage and high resolution and has high diagnostic accuracy in patients with suspected CAD. Copyright © 2012 Elsevier Inc. All rights reserved.
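    A patient-level sensitivity of 96% with CI 82% to 100% is consistent with 27 of 28 CAD patients detected (56% prevalence in 50 patients). Binomial confidence intervals of this kind are commonly computed with the Wilson score formula, sketched below in Python; the counts and the choice of the Wilson method are assumptions, since the paper does not state which interval it used.

```python
from statistics import NormalDist

def wilson_ci(successes, n, conf=0.95):
    # Wilson score interval for a binomial proportion
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * ((p * (1 - p) + z * z / (4 * n)) / n) ** 0.5 / denom
    return centre - half, centre + half

# e.g. 27 of 28 diseased patients detected (illustrative counts)
lo, hi = wilson_ci(27, 28)
print(round(lo, 3), round(hi, 3))
```

Unlike the naive Wald interval, the Wilson interval behaves sensibly for proportions near 0 or 1, which is exactly the regime of sensitivities and specificities in diagnostic studies like this one.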

  19. Explorations in Statistics: Confidence Intervals

    ERIC Educational Resources Information Center

    Curran-Everett, Douglas

    2009-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This third installment of "Explorations in Statistics" investigates confidence intervals. A confidence interval is a range that we expect, with some level of confidence, to include the true value of a population parameter…

  20. Using an R Shiny to Enhance the Learning Experience of Confidence Intervals

    ERIC Educational Resources Information Center

    Williams, Immanuel James; Williams, Kelley Kim

    2018-01-01

    Many students find understanding confidence intervals difficult, especially because of the amalgamation of concepts such as confidence levels, standard error, point estimates and sample sizes. An R Shiny application was created to assist the learning process of confidence intervals using graphics and data from the US National Basketball…

  1. Alternative Confidence Interval Methods Used in the Diagnostic Accuracy Studies

    PubMed Central

    Gülhan, Orekıcı Temel

    2016-01-01

    Background/Aim. It is necessary to decide whether newly developed methods are better than a standard or reference test. To decide whether a new diagnostic test is better than the gold standard (or an imperfect standard) test, differences in estimated sensitivity/specificity are calculated from sample data. However, to generalize this value to the population, it should be reported with confidence intervals. The aim of this study is to evaluate, in a clinical application, the confidence interval methods developed for differences between two dependent sensitivity/specificity values. Materials and Methods. Confidence interval methods such as Asymptotic Intervals, Conditional Intervals, Unconditional Intervals, Score Intervals, and Nonparametric Methods Based on Relative Effects Intervals are used. As a clinical application, data from the diagnostic study by Dickel et al. (2010) are taken as a sample. Results. Results for the alternative confidence interval methods for Nickel Sulfate, Potassium Dichromate, and Lanolin Alcohol are tabulated. Conclusion. In choosing among confidence interval methods, researchers must consider whether the comparison involves a single proportion or differences between dependent binary proportions, the correlation between the rates in two dependent proportions, and the sample sizes. PMID:27478491

  2. Alternative Confidence Interval Methods Used in the Diagnostic Accuracy Studies.

    PubMed

    Erdoğan, Semra; Gülhan, Orekıcı Temel

    2016-01-01

    Background/Aim. It is necessary to decide whether newly developed methods are better than a standard or reference test. To decide whether a new diagnostic test is better than the gold standard (or an imperfect standard) test, differences in estimated sensitivity/specificity are calculated from sample data. However, to generalize this value to the population, it should be reported with confidence intervals. The aim of this study is to evaluate, in a clinical application, the confidence interval methods developed for differences between two dependent sensitivity/specificity values. Materials and Methods. Confidence interval methods such as Asymptotic Intervals, Conditional Intervals, Unconditional Intervals, Score Intervals, and Nonparametric Methods Based on Relative Effects Intervals are used. As a clinical application, data from the diagnostic study by Dickel et al. (2010) are taken as a sample. Results. Results for the alternative confidence interval methods for Nickel Sulfate, Potassium Dichromate, and Lanolin Alcohol are tabulated. Conclusion. In choosing among confidence interval methods, researchers must consider whether the comparison involves a single proportion or differences between dependent binary proportions, the correlation between the rates in two dependent proportions, and the sample sizes.

  3. Modified Confidence Intervals for the Mean of an Autoregressive Process.

    DTIC Science & Technology

    1985-08-01

    The first … of standard confidence intervals. There are several standard methods of setting confidence intervals in simulations, including the regenerative method, batch means, and time series methods. We will focus on improved confidence intervals for the mean of an autoregressive process, and as such our …
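    Of the standard methods this report lists, batch means is the easiest to sketch: split the correlated series into contiguous batches, treat the batch means as approximately independent, and form a t interval from them. A Python illustration on a simulated AR(1) series; the series parameters, batch count, and hardcoded t critical value are assumptions, not taken from the report.

```python
import random
from statistics import fmean, stdev

def batch_means_ci(series, n_batches=20, t_crit=2.093):
    # batch-means confidence interval for the mean of a correlated series;
    # t_crit is approximately t(0.975, 19) for 20 batches (hardcoded assumption)
    m = len(series) // n_batches
    means = [fmean(series[i * m:(i + 1) * m]) for i in range(n_batches)]
    centre = fmean(means)
    half = t_crit * stdev(means) / n_batches ** 0.5
    return centre - half, centre + half

# AR(1) series with mean 0: x[t] = 0.7 * x[t-1] + e[t]
rng = random.Random(4)
x, prev = [], 0.0
for _ in range(20000):
    prev = 0.7 * prev + rng.gauss(0, 1)
    x.append(prev)

lo, hi = batch_means_ci(x)
print(lo, hi)
```

The method's validity rests on batches being long relative to the autocorrelation time, so that the batch means are nearly independent; the report's corrections address the residual error this approximation leaves behind.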

  4. Trends and racial and ethnic disparities in the prevalence of pregestational type 1 and type 2 diabetes in Northern California: 1996-2014.

    PubMed

    Peng, Tiffany Y; Ehrlich, Samantha F; Crites, Yvonne; Kitzmiller, John L; Kuzniewicz, Michael W; Hedderson, Monique M; Ferrara, Assiamira

    2017-02-01

    Despite concern for adverse perinatal outcomes in women with diabetes mellitus before pregnancy, recent data on the prevalence of pregestational type 1 and type 2 diabetes mellitus in the United States are lacking. The purpose of this study was to estimate changes in the prevalence of overall pregestational diabetes mellitus (all types) and pregestational type 1 and type 2 diabetes mellitus and to estimate whether changes varied by race-ethnicity from 1996-2014. We conducted a cohort study among 655,428 pregnancies at a Northern California integrated health delivery system from 1996-2014. Logistic regression analyses provided estimates of prevalence and trends. The age-adjusted prevalence (per 100 deliveries) of overall pregestational diabetes mellitus increased from 1996-1999 to 2012-2014 (from 0.58 [95% confidence interval, 0.54-0.63] to 1.06 [95% confidence interval, 1.00-1.12]; P trend <.0001). Significant increases occurred in all racial-ethnic groups; the largest relative increase was among Hispanic women (121.8% [95% confidence interval, 84.4-166.7]); the smallest relative increase was among non-Hispanic white women (49.6% [95% confidence interval, 27.5-75.4]). The age-adjusted prevalence of pregestational type 1 and type 2 diabetes mellitus increased from 0.14 (95% confidence interval, 0.12-0.16) to 0.23 (95% confidence interval, 0.21-0.27; P trend <.0001) and from 0.42 (95% confidence interval, 0.38-0.46) to 0.78 (95% confidence interval, 0.73-0.83; P trend <.0001), respectively. The greatest relative increase in the prevalence of type 1 diabetes mellitus was in non-Hispanic white women (118.4% [95% confidence interval, 70.0-180.5]), who had the lowest increases in the prevalence of type 2 diabetes mellitus (13.6% [95% confidence interval, -8.0 to 40.1]). 
The greatest relative increase in the prevalence of type 2 diabetes mellitus was in Hispanic women (125.2% [95% confidence interval, 84.8-174.4]), followed by African American women (102.0% [95% confidence interval, 38.3-194.3]) and Asian women (93.3% [95% confidence interval, 48.9-150.9]). The prevalence of overall pregestational diabetes mellitus and pregestational type 1 and type 2 diabetes mellitus increased from 1996-1999 to 2012-2014 and racial-ethnic disparities were observed, possibly because of differing prevalence of maternal obesity. Targeted prevention efforts, preconception care, and disease management strategies are needed to reduce the burden of diabetes mellitus and its sequelae. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Net Reclassification Indices for Evaluating Risk-Prediction Instruments: A Critical Review

    PubMed Central

    Kerr, Kathleen F.; Wang, Zheyu; Janes, Holly; McClelland, Robyn L.; Psaty, Bruce M.; Pepe, Margaret S.

    2014-01-01

    Net reclassification indices have recently become popular statistics for measuring the prediction increment of new biomarkers. We review the various types of net reclassification indices and their correct interpretations. We evaluate the advantages and disadvantages of quantifying the prediction increment with these indices. For pre-defined risk categories, we relate net reclassification indices to existing measures of the prediction increment. We also consider statistical methodology for constructing confidence intervals for net reclassification indices and evaluate the merits of hypothesis testing based on such indices. We recommend that investigators using net reclassification indices should report them separately for events (cases) and nonevents (controls). When there are two risk categories, the components of net reclassification indices are the same as the changes in the true-positive and false-positive rates. We advocate use of true- and false-positive rates and suggest it is more useful for investigators to retain the existing, descriptive terms. When there are three or more risk categories, we recommend against net reclassification indices because they do not adequately account for clinically important differences in shifts among risk categories. The category-free net reclassification index is a new descriptive device designed to avoid pre-defined risk categories. However, it suffers from many of the same problems as other measures such as the area under the receiver operating characteristic curve. In addition, the category-free index can mislead investigators by overstating the incremental value of a biomarker, even in independent validation data. When investigators want to test a null hypothesis of no prediction increment, the well-established tests for coefficients in the regression model are superior to the net reclassification index. 
If investigators want to use net reclassification indices, confidence intervals should be calculated using bootstrap methods rather than published variance formulas. The preferred single-number summary of the prediction increment is the improvement in net benefit. PMID:24240655
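
Several records in this listing recommend case-resampling bootstrap confidence intervals over published variance formulas. As a generic illustration only (not the code of any study above), a percentile bootstrap interval for a paired statistic such as a correlation or a reclassification measure can be sketched as:

```python
import numpy as np

def bootstrap_ci(x, y, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for a paired statistic stat(x, y)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)   # resample cases with replacement
        reps[b] = stat(x[idx], y[idx])
    lo, hi = np.percentile(reps, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi
```

For case-control statistics such as the event/nonevent components of a reclassification index, cases and controls would typically be resampled separately (a stratified bootstrap) rather than pooled as in this sketch.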

  6. The burden of moderate/severe premenstrual syndrome and premenstrual dysphoric disorder in a cohort of Latin American women.

    PubMed

    Schiola, Alexandre; Lowin, Julia; Lindemann, Marion; Patel, Renu; Endicott, Jean

    2011-01-01

    The aim of this study was to investigate the relationship between symptom severity, cost, and impairment in women with moderate/severe premenstrual syndrome (PMS) or premenstrual dysphoric disorder (PMDD) in a Latin American setting. A model was constructed based on analysis of an observational dataset. Data were included from four Latin American countries. Responder-level data were analysed according to four categories of symptom severity based on the Daily Record of Severity of Problems score: Category 1, 21 to 41.9; Category 2, 42 to 62.9; Category 3, 63 to 83.9; and Category 4, 84 or higher. Burden was estimated in terms of impact on job and activities, using the modified work productivity and impairment questionnaire, and effect on quality of life, using the SF-12 questionnaire. Costs were estimated in Brazilian reals from a Brazilian private health care and societal perspective. The outputs of the analysis were estimates of burden, mean annual cost, and effect on quality of life (as measured by quality-adjusted life years) by symptom severity. Confidence intervals around key outcomes were generated through nonparametric bootstrapping. The analysis suggests a significant cost burden associated with moderate/severe PMS and PMDD, with mean per-patient annual costs estimated at 1618 BRL (95% confidence interval 957-2481). Although the relationship between cost, quality of life, and severity was not clear, the analysis showed a consistent relationship between disease severity and measures of disease burden (job and daily activity). Burden on activities increased with disease severity. Our analysis, conducted from a Latin American perspective, suggests a significant burden and an increasing impairment associated with moderate/severe PMS and PMDD. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  7. Frequency domain analysis of errors in cross-correlations of ambient seismic noise

    NASA Astrophysics Data System (ADS)

    Liu, Xin; Ben-Zion, Yehuda; Zigone, Dimitri

    2016-12-01

    We analyse random errors (variances) in cross-correlations of ambient seismic noise in the frequency domain, an approach that differs from previous time-domain methods. Extending previous theoretical results on the ensemble-averaged cross-spectrum, we estimate the confidence interval of the stacked cross-spectrum of a finite amount of data at each frequency, using non-overlapping windows of fixed length. The extended theory also connects amplitude and phase variances with the variance of each complex spectrum value. Analysis of synthetic stationary ambient noise is used to estimate the confidence interval of the stacked cross-spectrum obtained with different lengths of noise data, corresponding to different numbers of evenly spaced windows of the same duration. This method allows estimating the signal-to-noise ratio (SNR) of a noise cross-correlation in the frequency domain, without specifying the filter bandwidth or the signal/noise windows that are needed for time-domain SNR estimation. Based on synthetic ambient noise data, we also compare the probability distributions, causal-part amplitude and SNR of the stacked cross-spectrum function using one-bit normalization or pre-whitening with those obtained without these pre-processing steps. Natural continuous noise records contain both ambient noise and small earthquakes that are inseparable from the noise with the existing pre-processing steps. Using probability distributions of random cross-spectrum values based on the theoretical results provides an effective way to exclude such small earthquakes, and additional data segments (outliers) contaminated by signals of different statistics (e.g. rain, cultural noise), from continuous noise waveforms. This technique is applied to constrain values and uncertainties of the amplitude and phase velocity of the stacked noise cross-spectrum at different frequencies, using data from southern California at both regional scale (~35 km) and a dense linear array (~20 m) across the plate-boundary faults. 
A block bootstrap resampling method is used to account for temporal correlation of noise cross-spectrum at low frequencies (0.05-0.2 Hz) near the ocean microseismic peaks.
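
The pairwise (moving) block bootstrap mentioned above draws the same blocks from both series, so serial correlation within each series and cross-correlation between them are preserved. A minimal sketch of one resample, assuming a fixed block length (block-length selection, which matters in practice, is omitted):

```python
import numpy as np

def moving_block_bootstrap(x, y, block_len, rng):
    """One pairwise moving-block resample of two equal-length series:
    identical block start positions are used for both, preserving
    their serial and cross correlation within blocks."""
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, n_blocks)
    idx = np.concatenate([np.arange(s, s + block_len) for s in starts])[:n]
    return x[idx], y[idx]
```

This is an illustrative sketch of the technique, not the PearsonT/PearsonT3 implementation.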

  8. Comparison of Background Parenchymal Enhancement at Contrast-enhanced Spectral Mammography and Breast MR Imaging

    PubMed Central

    Morris, Elizabeth A.; Kaplan, Jennifer B.; D’Alessio, Donna; Goldman, Debra; Moskowitz, Chaya S.

    2017-01-01

    Purpose To assess the extent of background parenchymal enhancement (BPE) at contrast material–enhanced (CE) spectral mammography and breast magnetic resonance (MR) imaging, to evaluate interreader agreement in BPE assessment, and to examine the relationships between clinical factors and BPE. Materials and Methods This was a retrospective, institutional review board–approved, HIPAA-compliant study. Two hundred seventy-eight women from 25 to 76 years of age with increased breast cancer risk who underwent CE spectral mammography and MR imaging for screening or staging from 2010 through 2014 were included. Three readers independently rated BPE on CE spectral mammographic and MR images with the ordinal scale: minimal, mild, moderate, or marked. To assess pairwise agreement between BPE levels on CE spectral mammographic and MR images and among readers, weighted κ coefficients with quadratic weights were calculated. For overall agreement, mean κ values and bootstrapped 95% confidence intervals were calculated. The univariate and multivariate associations between BPE and clinical factors were examined by using generalized estimating equations separately for CE spectral mammography and MR imaging. Results Most women had minimal or mild BPE at both CE spectral mammography (68%–76%) and MR imaging (69%–76%). Between CE spectral mammography and MR imaging, the intrareader agreement ranged from moderate to substantial (κ = 0.55–0.67). Overall agreement on BPE levels between CE spectral mammography and MR imaging and among readers was substantial (κ = 0.66; 95% confidence interval: 0.61, 0.70). With both modalities, BPE demonstrated significant association with menopausal status, prior breast radiation therapy, hormonal treatment, breast density on CE spectral mammographic images, and amount of fibroglandular tissue on MR images (P < .001 for all). Conclusion There was substantial agreement between readers for BPE detected on CE spectral mammographic and MR images. 
© RSNA, 2016 PMID:27379544
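
For readers unfamiliar with the agreement statistic used above, a quadratic-weighted Cohen's kappa for two raters on a four-level ordinal scale (e.g. minimal/mild/moderate/marked coded 0-3) can be sketched as follows. This is a generic illustration, not the study's code; a bootstrapped confidence interval would be obtained by resampling patients and recomputing kappa on each resample:

```python
import numpy as np

def quadratic_weighted_kappa(r1, r2, n_cat=4):
    """Quadratic-weighted Cohen's kappa for two raters on an ordinal scale."""
    obs = np.zeros((n_cat, n_cat))
    for a, b in zip(r1, r2):
        obs[a, b] += 1                          # joint rating counts
    obs /= obs.sum()
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))  # chance agreement
    i, j = np.indices((n_cat, n_cat))
    w = ((i - j) ** 2) / (n_cat - 1) ** 2       # quadratic disagreement weights
    return 1 - (w * obs).sum() / (w * exp).sum()
```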

  9. Hospital Volume and 30-Day Mortality for Three Common Medical Conditions

    PubMed Central

    Ross, Joseph S.; Normand, Sharon-Lise T.; Wang, Yun; Ko, Dennis T.; Chen, Jersey; Drye, Elizabeth E.; Keenan, Patricia S.; Lichtman, Judith H.; Bueno, Héctor; Schreiner, Geoffrey C.; Krumholz, Harlan M.

    2010-01-01

    Background The association between hospital volume and the death rate for patients who are hospitalized for acute myocardial infarction, heart failure, or pneumonia remains unclear. It is also not known whether a volume threshold for such an association exists. Methods We conducted cross-sectional analyses of data from Medicare administrative claims for all fee-for-service beneficiaries who were hospitalized between 2004 and 2006 in acute care hospitals in the United States for acute myocardial infarction, heart failure, or pneumonia. Using hierarchical logistic-regression models for each condition, we estimated the change in the odds of death within 30 days associated with an increase of 100 patients in the annual hospital volume. Analyses were adjusted for patients’ risk factors and hospital characteristics. Bootstrapping procedures were used to estimate 95% confidence intervals to identify the condition-specific volume thresholds above which an increased volume was not associated with reduced mortality. Results There were 734,972 hospitalizations for acute myocardial infarction in 4128 hospitals, 1,324,287 for heart failure in 4679 hospitals, and 1,418,252 for pneumonia in 4673 hospitals. An increased hospital volume was associated with reduced 30-day mortality for all conditions (P<0.001 for all comparisons). For each condition, the association between volume and outcome was attenuated as the hospital's volume increased. For acute myocardial infarction, once the annual volume reached 610 patients (95% confidence interval [CI], 539 to 679), an increase in the hospital volume by 100 patients was no longer significantly associated with reduced odds of death. The volume threshold was 500 patients (95% CI, 433 to 566) for heart failure and 210 patients (95% CI, 142 to 284) for pneumonia. 
Conclusions Admission to higher-volume hospitals was associated with a reduction in mortality for acute myocardial infarction, heart failure, and pneumonia, although there was a volume threshold above which an increased condition-specific hospital volume was no longer significantly associated with reduced mortality. PMID:20335587

  10. Family Structure as a Correlate of Organized Sport Participation among Youth

    PubMed Central

    McMillan, Rachel; McIsaac, Michael; Janssen, Ian

    2016-01-01

    Organized sport is one way that youth participate in physical activity. There are disparities in organized sport participation by family-related factors. The purpose of this study was to determine whether non-traditional family structure and physical custody arrangements are associated with organized sport participation in youth, and if so whether this relationship is mediated by socioeconomic status. Data were from the 2009–10 Health Behaviour in School-aged Children survey, a nationally representative cross-section of Canadian youth in grades 6–10 (N = 21,201). Information on family structure was derived from three survey items that asked participants the number of adults they lived with, their relationship to these adults, and if applicable, how often they visited another parent outside their home. Participants were asked whether or not they were currently involved in an organized sport. Logistic regression was used to compare the odds of organized sport participation according to family structure. Bootstrap-based mediation analysis was used to assess mediation by perceived family wealth. The results indicated that, compared with youth from traditional families, boys and girls from reconstituted families with irregular visitation of a second parent, reconstituted families with regular visitation of a second parent, single-parent families with irregular visitation of a second parent, and single-parent families with regular visitation of a second parent were less likely to participate in organized sport, with odds ratios ranging from 0.48 (95% confidence interval: 0.38–0.61) to 0.78 (95% confidence interval: 0.56–1.08). The relationship between family structure and organized sport was significantly mediated by perceived family wealth, although the magnitude of the mediation was modest (i.e., <20% change in the effect estimate). 
In conclusion, youth living in both single-parent and reconstituted families experienced significant disparities in organized sport participation that was partially mediated by perceived family wealth. PMID:26863108
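
Bootstrap-based mediation analysis, as referred to above, generally resamples respondents and recomputes the indirect (mediated) effect on each resample, taking percentiles of the replicates as the confidence interval. A minimal product-of-coefficients sketch for continuous variables (an illustrative assumption — the study itself used logistic models with perceived family wealth as the mediator):

```python
import numpy as np

def indirect_effect(x, m, y):
    """Product-of-coefficients estimate a*b for the path x -> m -> y."""
    a = np.polyfit(x, m, 1)[0]                       # slope of m ~ x
    X = np.column_stack([np.ones_like(x), x, m])     # y ~ x + m
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]      # coefficient of m
    return a * b

def mediation_ci(x, m, y, n_boot=1000, seed=0):
    """Percentile bootstrap CI for the indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(x)
    reps = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                  # shared resample indices
        reps[i] = indirect_effect(x[idx], m[idx], y[idx])
    return np.percentile(reps, [2.5, 97.5])
```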

  11. Male patients require higher optimal effect-site concentrations of propofol during i-gel insertion with dexmedetomidine 0.5 μg/kg.

    PubMed

    Choi, Jung Ju; Kim, Ji Young; Lee, Dongchul; Chang, Young Jin; Cho, Noo Ree; Kwak, Hyun Jeong

    2016-03-22

    The pharmacokinetics and pharmacodynamics of an anesthetic drug may be influenced by gender. The purpose of this study was to compare the effect-site half-maximal effective concentrations (EC50) of propofol in male and female patients during i-gel insertion with dexmedetomidine 0.5 μg/kg without muscle relaxants. Forty patients aged 20-46 years, of ASA physical status I or II, were allocated to one of two groups by gender (20 patients per group). After the infusion of dexmedetomidine 0.5 μg/kg over 2 min, anesthesia was induced with a pre-determined effect-site concentration of propofol by target-controlled infusion. Effect-site EC50 values of propofol for successful i-gel insertion were determined using the modified Dixon's up-and-down method. The mean effect-site EC50 ± SD of propofol for successful i-gel insertion was significantly higher for men than women (5.46 ± 0.26 μg/ml vs. 3.82 ± 0.34 μg/ml, p < 0.01); the EC50 in men was approximately 40% higher than in women. Using isotonic regression with a bootstrapping approach, the estimated EC50 (95% confidence interval) of propofol was also higher in men [5.32 (4.45-6.20) μg/ml vs. 3.75 (3.05-4.43) μg/ml]. The estimated EC95 (95% confidence interval) values of propofol in men and women were 5.93 (4.72-6.88) μg/ml and 4.52 (3.02-5.70) μg/ml, respectively. During i-gel insertion with dexmedetomidine 0.5 μg/kg without muscle relaxant, male patients had a higher effect-site EC50 for propofol using Schnider's model. Based on these results, patient gender should be considered when determining the optimal dose of propofol during supraglottic airway insertion. ClinicalTrials.gov identifier: NCT02268656. Registered August 26, 2014.
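
The isotonic-regression step mentioned above fits a monotone (nondecreasing) success-probability curve to the observed responses ordered by concentration; the EC50 is then read off where the fitted curve crosses 0.5, and bootstrap resampling of patients yields its confidence interval. A minimal pool-adjacent-violators (PAVA) sketch of the monotone fit, not the authors' code:

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: least-squares nondecreasing fit to y
    (e.g. 0/1 insertion outcomes sorted by propofol concentration)."""
    y = list(map(float, y))
    w = [1.0] * len(y)                 # pooled-block weights (sizes)
    i = 0
    while i < len(y) - 1:
        if y[i] > y[i + 1]:            # violation: pool adjacent blocks
            y[i] = (w[i] * y[i] + w[i + 1] * y[i + 1]) / (w[i] + w[i + 1])
            w[i] += w[i + 1]
            del y[i + 1], w[i + 1]
            i = max(i - 1, 0)          # re-check against the previous block
        else:
            i += 1
    return np.repeat(y, [int(c) for c in w])
```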

  13. Measurement of cardiac troponin I in healthy lactating dairy cows using a point of care analyzer (i-STAT-1).

    PubMed

    Labonté, Josiane; Roy, Jean-Philippe; Dubuc, Jocelyn; Buczinski, Sébastien

    2015-06-01

    Cardiac troponin I (cTnI) has been shown to be an accurate predictor of myocardial injury in cattle. The point-of-care i-STAT 1 immunoassay can be used to quantify blood cTnI in cattle; however, the cTnI reference interval in whole blood of healthy early-lactation dairy cows remains unknown. The objective was to determine a blood cTnI reference interval in healthy early-lactation Holstein dairy cows using the i-STAT 1 analyzer. Forty healthy lactating Holstein dairy cows (0-60 days in milk) were conveniently selected from four commercial dairy farms. Each selected cow was examined by a veterinarian, transthoracic echocardiography was performed, and cow-side blood cTnI was measured at the same time. A bootstrap statistical analysis method using unrestricted resampling was used to determine a reference interval for blood cTnI values. Median blood cTnI was 0.02 ng/mL (minimum: 0.00, maximum: 0.05). Based on the bootstrap analysis method with 40 cases, the 95th percentile of cTnI values in healthy cows was 0.036 ng/mL (90% CI: 0.02-0.05 ng/mL). A reference interval for blood cTnI values in healthy lactating cows was determined. Further research is needed to determine whether blood cTnI values could be used to diagnose and provide a prognosis for cardiac and noncardiac diseases in lactating dairy cows. Copyright © 2015 Elsevier B.V. All rights reserved.
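
The unrestricted-resampling approach described above — resample the sample with replacement, recompute the upper reference percentile each time, and take percentiles of those replicates as its confidence interval — can be sketched as follows (an illustration of the technique only; the parameter defaults are assumptions, not the study's settings):

```python
import numpy as np

def percentile_ci(values, pct=95, n_boot=5000, conf=0.90, seed=0):
    """Bootstrap CI for the pct-th percentile via unrestricted resampling."""
    rng = np.random.default_rng(seed)
    n = len(values)
    resamples = rng.choice(values, size=(n_boot, n), replace=True)
    reps = np.percentile(resamples, pct, axis=1)   # percentile per resample
    lo, hi = np.percentile(reps, [50 * (1 - conf), 50 + 50 * conf])
    return lo, hi
```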

  14. Neonatal Maturation of Paracetamol (Acetaminophen) Glucuronidation, Sulfation, and Oxidation Based on a Parent-Metabolite Population Pharmacokinetic Model

    PubMed Central

    Cook, Sarah F.; Stockmann, Chris; Samiee-Zafarghandy, Samira; King, Amber D.; Deutsch, Nina; Williams, Elaine F.; Wilkins, Diana G.; van den Anker, John N.

    2017-01-01

    Objectives This study aimed to model the population pharmacokinetics of intravenous paracetamol and its major metabolites in neonates and to identify influential patient characteristics, especially those affecting the formation clearance (CLformation) of oxidative pathway metabolites. Methods Neonates with a clinical indication for intravenous analgesia received five 15-mg/kg doses of paracetamol at 12-h intervals (<28 weeks’ gestation) or seven 15-mg/kg doses at 8-h intervals (≥28 weeks’ gestation). Plasma and urine were sampled throughout the 72-h study period. Concentration-time data for paracetamol, paracetamol-glucuronide, paracetamol-sulfate, and the combined oxidative pathway metabolites (paracetamol-cysteine and paracetamol-N-acetylcysteine) were simultaneously modeled in NONMEM 7.2. Results The model incorporated 259 plasma and 350 urine samples from 35 neonates with a mean gestational age of 33.6 weeks (standard deviation 6.6). CLformation for all metabolites increased with weight; CLformation for glucuronidation and oxidation also increased with postnatal age. At the mean weight (2.3 kg) and postnatal age (7.5 days), CLformation estimates (bootstrap 95% confidence interval; between-subject variability) were 0.049 L/h (0.038–0.062; 62 %) for glucuronidation, 0.21 L/h (0.17–0.24; 33 %) for sulfation, and 0.058 L/h (0.044–0.078; 72 %) for oxidation. Expression of individual oxidation CLformation as a fraction of total individual paracetamol clearance showed that, on average, fractional oxidation CLformation increased <15 % when plotted against weight or postnatal age. Conclusions The parent-metabolite model successfully characterized the pharmacokinetics of intravenous paracetamol and its metabolites in neonates. Maturational changes in the fraction of paracetamol undergoing oxidation were small relative to between-subject variability. PMID:27209292

  15. Neonatal Maturation of Paracetamol (Acetaminophen) Glucuronidation, Sulfation, and Oxidation Based on a Parent-Metabolite Population Pharmacokinetic Model.

    PubMed

    Cook, Sarah F; Stockmann, Chris; Samiee-Zafarghandy, Samira; King, Amber D; Deutsch, Nina; Williams, Elaine F; Wilkins, Diana G; Sherwin, Catherine M T; van den Anker, John N

    2016-11-01

    This study aimed to model the population pharmacokinetics of intravenous paracetamol and its major metabolites in neonates and to identify influential patient characteristics, especially those affecting the formation clearance (CLformation) of oxidative pathway metabolites. Neonates with a clinical indication for intravenous analgesia received five 15-mg/kg doses of paracetamol at 12-h intervals (<28 weeks' gestation) or seven 15-mg/kg doses at 8-h intervals (≥28 weeks' gestation). Plasma and urine were sampled throughout the 72-h study period. Concentration-time data for paracetamol, paracetamol-glucuronide, paracetamol-sulfate, and the combined oxidative pathway metabolites (paracetamol-cysteine and paracetamol-N-acetylcysteine) were simultaneously modeled in NONMEM 7.2. The model incorporated 259 plasma and 350 urine samples from 35 neonates with a mean gestational age of 33.6 weeks (standard deviation 6.6). CLformation for all metabolites increased with weight; CLformation for glucuronidation and oxidation also increased with postnatal age. At the mean weight (2.3 kg) and postnatal age (7.5 days), CLformation estimates (bootstrap 95% confidence interval; between-subject variability) were 0.049 L/h (0.038-0.062; 62 %) for glucuronidation, 0.21 L/h (0.17-0.24; 33 %) for sulfation, and 0.058 L/h (0.044-0.078; 72 %) for oxidation. Expression of individual oxidation CLformation as a fraction of total individual paracetamol clearance showed that, on average, fractional oxidation CLformation increased <15 % when plotted against weight or postnatal age. The parent-metabolite model successfully characterized the pharmacokinetics of intravenous paracetamol and its metabolites in neonates. Maturational changes in the fraction of paracetamol undergoing oxidation were small relative to between-subject variability.

  16. The P Value Problem in Otolaryngology: Shifting to Effect Sizes and Confidence Intervals.

    PubMed

    Vila, Peter M; Townsend, Melanie Elizabeth; Bhatt, Neel K; Kao, W Katherine; Sinha, Parul; Neely, J Gail

    2017-06-01

    Effect sizes and confidence intervals are widely underreported in the current biomedical literature. The objective of this article is to present a discussion of the recent paradigm shift encouraging the reporting of effect sizes and confidence intervals. Whereas P values inform us about whether an observed effect could be due to chance, effect sizes inform us about the magnitude of the effect (clinical significance), and confidence intervals inform us about the range of plausible estimates for the general population mean (precision). Reporting effect sizes and confidence intervals is a necessary addition to the biomedical literature, and these concepts are reviewed in this article.
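
As a concrete example of the shift this article advocates, a two-group standardized effect size (Cohen's d) can be reported alongside an approximate confidence interval. The standard-error formula below is a common large-sample approximation, not taken from the article:

```python
import numpy as np

def cohens_d_ci(a, b, z=1.96):
    """Cohen's d with an approximate 95% normal-theory CI."""
    n1, n2 = len(a), len(b)
    sp = np.sqrt(((n1 - 1) * np.var(a, ddof=1) + (n2 - 1) * np.var(b, ddof=1))
                 / (n1 + n2 - 2))                        # pooled SD
    d = (np.mean(a) - np.mean(b)) / sp
    se = np.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d, d - z * se, d + z * se
```

A bootstrap interval (resampling both groups) is a robust alternative when the normal approximation is in doubt.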

  17. Prediction of resource volumes at untested locations using simple local prediction models

    USGS Publications Warehouse

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2006-01-01

    This paper shows how local spatial nonparametric prediction models can be applied to estimate volumes of recoverable gas resources at individual undrilled sites, at multiple sites on a regional scale, and to compute confidence bounds for regional volumes based on the distribution of those estimates. An approach that combines cross-validation, the jackknife, and bootstrap procedures is used to accomplish this task. Simulation experiments show that cross-validation can be applied beneficially to select an appropriate prediction model. The cross-validation procedure worked well for a wide range of different states of nature and levels of information. Jackknife procedures are used to compute individual prediction estimation errors at undrilled locations. The jackknife replicates also are used with a bootstrap resampling procedure to compute confidence bounds for the total volume. The method was applied to data (partitioned into a training set and target set) from the Devonian Antrim Shale continuous-type gas play in the Michigan Basin in Otsego County, Michigan. The analysis showed that the model estimate of total recoverable volumes at prediction sites is within 4 percent of the total observed volume. The model predictions also provide frequency distributions of the cell volumes at the production unit scale. Such distributions are the basis for subsequent economic analyses. © Springer Science+Business Media, LLC 2007.
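
The jackknife step referred to above recomputes the statistic with each observation left out in turn and uses the spread of those leave-one-out replicates to estimate error. A minimal, generic sketch of the jackknife standard error (not the paper's spatial-prediction code):

```python
import numpy as np

def jackknife_se(values, stat=np.mean):
    """Leave-one-out jackknife standard error of a statistic."""
    n = len(values)
    reps = np.array([stat(np.delete(values, i)) for i in range(n)])
    return np.sqrt((n - 1) * np.mean((reps - reps.mean()) ** 2))
```

For the sample mean, this reproduces the familiar s/sqrt(n) exactly; for nonlinear statistics it gives an approximation, and the jackknife replicates can feed a bootstrap procedure for confidence bounds, as in the paper.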

  18. Phylogenomics provides strong evidence for relationships of butterflies and moths

    PubMed Central

    Kawahara, Akito Y.; Breinholt, Jesse W.

    2014-01-01

    Butterflies and moths constitute some of the most popular and charismatic insects. Lepidoptera include approximately 160 000 described species, many of which are important model organisms. Previous studies on the evolution of Lepidoptera did not confidently place butterflies, and many relationships among superfamilies in the megadiverse clade Ditrysia remain largely uncertain. We generated a molecular dataset with 46 taxa, combining 33 new transcriptomes with 13 available genomes, transcriptomes and expressed sequence tags (ESTs). Using HaMStR with a Lepidoptera-specific core-orthologue set of single copy loci, we identified 2696 genes for inclusion into the phylogenomic analysis. Nucleotides and amino acids of the all-gene, all-taxon dataset yielded nearly identical, well-supported trees. Monophyly of butterflies (Papilionoidea) was strongly supported, and the group included skippers (Hesperiidae) and the enigmatic butterfly–moths (Hedylidae). Butterflies were placed sister to the remaining obtectomeran Lepidoptera, and the latter was grouped with greater than or equal to 87% bootstrap support. Establishing confident relationships among the four most diverse macroheteroceran superfamilies was previously challenging, but we recovered 100% bootstrap support for the following relationships: ((Geometroidea, Noctuoidea), (Bombycoidea, Lasiocampoidea)). We present the first robust, transcriptome-based tree of Lepidoptera that strongly contradicts historical placement of butterflies, and provide an evolutionary framework for genomic, developmental and ecological studies on this diverse insect order. PMID:24966318

  19. Multiscale analysis of neural spike trains.

    PubMed

    Ramezan, Reza; Marriott, Paul; Chenouri, Shojaeddin

    2014-01-30

    This paper studies the multiscale analysis of neural spike trains through both graphical and Poisson process approaches. We introduce the interspike interval plot, which simultaneously visualizes characteristics of neural spiking activity at different time scales. Using an inhomogeneous Poisson process framework, we discuss multiscale estimates of the intensity functions of spike trains. We also introduce the windowing effect for two multiscale methods. Using quasi-likelihood, we develop bootstrap confidence intervals for the multiscale intensity function. We provide a cross-validation scheme to choose the tuning parameters and study its unbiasedness. Studying the relationship between the spike rate and the stimulus signal, we observe that adjusting for the first spike latency is important in cross-validation. We show, through examples, that the correlation between spike trains and spike count variability can be multiscale phenomena. Furthermore, we address the modeling of the periodicity of the spike trains caused by a stimulus signal or by brain rhythms. Within the multiscale framework, we introduce intensity functions for spike trains with multiplicative and additive periodic components. Analyzing a dataset from the retinogeniculate synapse, we compare the fit of these models with the Bayesian adaptive regression splines method and discuss the limitations of the methodology. Computational efficiency, which is usually a challenge in the analysis of spike trains, is one of the highlights of these new models. In an example, we show that the reconstruction quality of a complex intensity function demonstrates the ability of the multiscale methodology to crack the neural code. Copyright © 2013 John Wiley & Sons, Ltd.

  20. Confidence Intervals for Error Rates Observed in Coded Communications Systems

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2015-05-01

    We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that few, if any, codeword errors can be observed in simulation. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the less studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system.
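
One standard exact interval for an error rate estimated from simulation — k codeword errors observed in n independent trials — is the Clopper-Pearson interval, a well-known method of the kind such a review would cover (this sketch is illustrative, not the authors' code, and does not address the dependent-bit-error BER case):

```python
from scipy.stats import beta

def clopper_pearson(k, n, conf=0.95):
    """Exact (Clopper-Pearson) CI for an error rate: k errors in n trials."""
    alpha = 1 - conf
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi
```

The k = 0 case answers the "how long must an error-free simulation run" question: the upper bound 1 - (alpha/2)^(1/n) shrinks only as fast as roughly 3.7/n for 95% confidence.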

  1. Bootstrap evaluation of a young Douglas-fir height growth model for the Pacific Northwest

    Treesearch

    Nicholas R. Vaughn; Eric C. Turnblom; Martin W. Ritchie

    2010-01-01

    We evaluated the stability of a complex regression model developed to predict the annual height growth of young Douglas-fir. This model is highly nonlinear and is fit in an iterative manner for annual growth coefficients from data with multiple periodic remeasurement intervals. The traditional methods for such a sensitivity analysis either involve laborious math or...

  2. Quantification of Uncertainty in the Flood Frequency Analysis

    NASA Astrophysics Data System (ADS)

    Kasiapillai Sudalaimuthu, K.; He, J.; Swami, D.

    2017-12-01

    Flood frequency analysis (FFA) is usually carried out for the planning and design of water resources and hydraulic structures. Owing to variability in sample representation, distribution selection, and distribution parameter estimation, flood quantile estimates are always uncertain. Hence, suitable approaches must be developed to quantify this uncertainty in the form of a prediction interval, as an alternative to a deterministic approach. The framework developed in the present study to include uncertainty in FFA uses a multi-objective optimization approach to construct the prediction interval from an ensemble of flood quantiles. Through this approach, an optimal variability of distribution parameters is identified to carry out FFA. To demonstrate the proposed approach, annual maximum flow data from two gauge stations (Bow River at Calgary and at Banff, Canada) are used. The major focus of the present study was to evaluate the changes in the magnitude of flood quantiles due to the extreme flood event that occurred in 2013. In addition, the efficacy of the proposed method was verified against standard bootstrap-based sampling approaches, and the proposed method was found to be more reliable in modeling extreme floods than the bootstrap methods.

  3. Chronic disease and chronic disease risk factors among First Nations, Inuit and Métis populations of northern Canada.

    PubMed

    Bruce, S G; Riediger, N D; Lix, L M

    2014-11-01

    Aboriginal populations in northern Canada are experiencing rapid changes in their environments, which may negatively affect health status. The purpose of our study was to compare chronic conditions and risk factors in northern Aboriginal populations, including First Nations (FN), Inuit and Métis populations, and northern non-Aboriginal populations. Data were from the Canadian Community Health Survey for the period from 2005 to 2008. Weighted multiple logistic regression models tested the association between ethnic groups and health outcomes. Model covariates were age, sex, territory of residence, education and income. Odds ratios (ORs) are reported, and a bootstrap method was used to calculate 95% confidence intervals (CIs) and p values. The odds of having at least one chronic condition were significantly lower for the Inuit (OR = 0.59; 95% CI: 0.43-0.81) than for the non-Aboriginal population, but similar among the FN, Métis and non-Aboriginal populations. The prevalence of many risk factors differed significantly among the Inuit, FN and Métis populations. Aboriginal populations in Canada's north have heterogeneous health status. Continued chronic disease and risk factor surveillance will be important to monitor changes over time and to evaluate the impact of public health interventions.
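
    The survey-weighted bootstrap used in this record is specific to the Canadian Community Health Survey's replicate weights, but the core idea — resample subjects with replacement and recompute the odds ratio each time — can be sketched as follows (all data and variable names here are hypothetical, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)

def odds_ratio(exposure, outcome):
    """Odds ratio from the 2x2 table of two binary arrays."""
    a = np.sum((exposure == 1) & (outcome == 1))
    b = np.sum((exposure == 1) & (outcome == 0))
    c = np.sum((exposure == 0) & (outcome == 1))
    d = np.sum((exposure == 0) & (outcome == 0))
    return (a * d) / (b * c)

def bootstrap_or_ci(exposure, outcome, n_boot=2000, alpha=0.05):
    """Percentile bootstrap CI for the odds ratio, resampling subjects."""
    n = len(exposure)
    stats = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)  # resample subjects with replacement
        stats[i] = odds_ratio(exposure[idx], outcome[idx])
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

# Hypothetical cohort: binary group membership and chronic-condition status.
group = rng.integers(0, 2, 1000)
condition = (rng.random(1000) < np.where(group == 1, 0.25, 0.35)).astype(int)
lo, hi = bootstrap_or_ci(group, condition)
print(f"OR = {odds_ratio(group, condition):.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

    A production analysis would resample following the survey design (e.g., the CCHS bootstrap weights) rather than treating subjects as independent draws.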

  4. POLYMAT-C: a comprehensive SPSS program for computing the polychoric correlation matrix.

    PubMed

    Lorenzo-Seva, Urbano; Ferrando, Pere J

    2015-09-01

    We provide a free noncommercial SPSS program that implements procedures for (a) obtaining the polychoric correlation matrix between a set of ordered categorical measures, so that it can be used as input for the SPSS factor analysis (FA) program; (b) testing the null hypothesis of zero population correlation for each element of the matrix by using appropriate simulation procedures; (c) obtaining valid and accurate confidence intervals via bootstrap resampling for those correlations found to be significant; and (d) performing, if necessary, a smoothing procedure that makes the matrix amenable to any FA estimation procedure. For the main purpose (a), the program uses a robust unified procedure that allows four different types of estimates to be obtained at the user's choice. Overall, we hope the program will be a very useful tool for the applied researcher, not only because it provides an appropriate input matrix for FA, but also because it allows the researcher to carefully check the appropriateness of the matrix for this purpose. The SPSS syntax, a short manual, and data files related to this article are available as supplemental materials for download with this article.
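
    POLYMAT-C's own resampling runs inside SPSS and targets polychoric correlations; as a rough illustration of step (c) only, a percentile bootstrap confidence interval for an ordinary Pearson correlation (simulated data, not the program's method) might look like:

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_corr_ci(x, y, n_boot=5000, alpha=0.05):
    """Percentile bootstrap CI for the Pearson correlation of paired data."""
    n = len(x)
    rs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)  # resample (x, y) pairs with replacement
        rs[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    return np.quantile(rs, [alpha / 2, 1 - alpha / 2])

# Hypothetical paired scores, here treated as continuous.
x = rng.normal(size=200)
y = 0.6 * x + 0.8 * rng.normal(size=200)
lo, hi = bootstrap_corr_ci(x, y)
print(f"r = {np.corrcoef(x, y)[0, 1]:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

    Resampling whole pairs (rather than each variable separately) is what preserves the dependence being estimated.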

  5. Statistical Validation of Surrogate Endpoints: Another Look at the Prentice Criterion and Other Criteria.

    PubMed

    Saraf, Sanatan; Mathew, Thomas; Roy, Anindya

    2015-01-01

    For the statistical validation of surrogate endpoints, an alternative formulation is proposed for testing Prentice's fourth criterion, under a bivariate normal model. In such a setup, the criterion involves inference concerning an appropriate regression parameter, and the criterion holds if the regression parameter is zero. Testing such a null hypothesis has been criticized in the literature since it can only be used to reject a poor surrogate, and not to validate a good surrogate. In order to circumvent this, an equivalence hypothesis is formulated for the regression parameter, namely the hypothesis that the parameter is equivalent to zero. Such an equivalence hypothesis is formulated as an alternative hypothesis, so that the surrogate endpoint is statistically validated when the null hypothesis is rejected. Confidence intervals for the regression parameter and tests for the equivalence hypothesis are proposed using bootstrap methods and small sample asymptotics, and their performances are numerically evaluated and recommendations are made. The choice of the equivalence margin is a regulatory issue that needs to be addressed. The proposed equivalence testing formulation is also adopted for other parameters that have been proposed in the literature on surrogate endpoint validation, namely, the relative effect and proportion explained.

  6. Assessing the limitations of the Banister model in monitoring training

    PubMed Central

    Hellard, Philippe; Avalos, Marta; Lacoste, Lucien; Barale, Frédéric; Chatard, Jean-Claude; Millet, Grégoire P.

    2006-01-01

    The aim of this study was to carry out a statistical analysis of the Banister model to verify how useful it is in monitoring the training programmes of elite swimmers. The accuracy, the ill-conditioning and the stability of this model were thus investigated. Training loads of nine elite swimmers, measured over one season, were related to performances with the Banister model. Firstly, to assess accuracy, the 95% bootstrap confidence intervals (95% CI) of the parameter estimates and modelled performances were calculated. Secondly, to study ill-conditioning, the correlation matrix of the parameter estimates was computed. Finally, to analyse stability, iterative computation was performed with the same data but minus one performance, chosen randomly. Performances were significantly related to training loads in all subjects (R2 = 0.79 ± 0.13, P < 0.05) and the estimation procedure seemed to be stable. Nevertheless, the 95% CIs of the most useful parameters for monitoring training were wide: τa = 38 (17, 59), τf = 19 (6, 32), tn = 19 (7, 35), tg = 43 (25, 61). Furthermore, some parameters were highly correlated, making their interpretation worthless. The study suggested possible ways to deal with these problems and reviewed alternative methods to model the training-performance relationships. PMID:16608765

  7. EARLY CHILDHOOD INVESTMENTS SUBSTANTIALLY BOOST ADULT HEALTH

    PubMed Central

    Campbell, Frances; Conti, Gabriella; Heckman, James J.; Moon, Seong Hyeok; Pinto, Rodrigo; Pungello, Elizabeth; Pan, Yi

    2014-01-01

    High-quality early childhood programs have been shown to have substantial benefits in reducing crime, raising earnings, and promoting education. Much less is known about their benefits for adult health. We report the long-term health impacts of one of the oldest and most heavily cited early childhood interventions with long-term follow-up evaluated by the method of randomization: the Carolina Abecedarian Project (ABC). Using recently collected biomedical data, we find that disadvantaged children randomly assigned to treatment have significantly lower prevalence of risk factors for cardiovascular and metabolic diseases in their mid-30s. The evidence is especially strong for males. The mean systolic blood pressure among the control males is 143 mm Hg, versus only 126 mm Hg among the treated. One in four males in the control group is affected by metabolic syndrome, while none in the treatment group is. To reach these conclusions, we address several statistical challenges. We use exact permutation tests to account for small sample sizes and conduct a parallel bootstrap confidence interval analysis to confirm the permutation analysis. We adjust inference to account for the multiple hypotheses tested and for nonrandom attrition. Our evidence shows the potential of early life interventions for preventing disease and promoting health. PMID:24675955
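
    As a sketch of the exact permutation approach mentioned in this record, the following enumerates every possible relabelling of two small groups and counts how often the mean difference is at least as extreme as the observed one (the readings below are invented for illustration, not ABC study data):

```python
from itertools import combinations
import numpy as np

def exact_permutation_pvalue(treated, control):
    """Exact two-sided permutation p-value for a difference in group means."""
    pooled = np.concatenate([treated, control])
    n, k = len(pooled), len(treated)
    observed = abs(treated.mean() - control.mean())
    hits = total = 0
    for idx in combinations(range(n), k):  # every possible group relabelling
        mask = np.zeros(n, dtype=bool)
        mask[list(idx)] = True
        diff = abs(pooled[mask].mean() - pooled[~mask].mean())
        hits += diff >= observed - 1e-12
        total += 1
    return hits / total

# Invented small-sample systolic readings (mm Hg).
treated = np.array([126.0, 124.0, 130.0, 122.0, 128.0])
control = np.array([143.0, 139.0, 146.0, 141.0, 150.0])
print(exact_permutation_pvalue(treated, control))  # 2/252, about 0.0079
```

    With 5 + 5 observations there are only C(10, 5) = 252 relabellings, so full enumeration is exact; for larger samples one samples permutations at random instead.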

  8. An Empirical Analysis of the Impact of Recruitment Patterns on RDS Estimates among a Socially Ordered Population of Female Sex Workers in China

    PubMed Central

    Yamanis, Thespina J.; Merli, M. Giovanna; Neely, William Whipple; Tian, Felicia Feng; Moody, James; Tu, Xiaowen; Gao, Ersheng

    2013-01-01

    Respondent-driven sampling (RDS) is a method for recruiting “hidden” populations through a network-based, chain and peer referral process. RDS recruits hidden populations more effectively than other sampling methods and promises to generate unbiased estimates of their characteristics. RDS’s faithful representation of hidden populations relies on the validity of core assumptions regarding the unobserved referral process. With empirical recruitment data from an RDS study of female sex workers (FSWs) in Shanghai, we assess the RDS assumption that participants recruit nonpreferentially from among their network alters. We also present a bootstrap method for constructing the confidence intervals around RDS estimates. This approach uniquely incorporates real-world features of the population under study (e.g., the sample’s observed branching structure). We then extend this approach to approximate the distribution of RDS estimates under various peer recruitment scenarios consistent with the data as a means to quantify the impact of recruitment bias and of rejection bias on the RDS estimates. We find that the hierarchical social organization of FSWs leads to recruitment biases by constraining RDS recruitment across social classes and introducing bias in the RDS estimates. PMID:24288418

  9. Differentiating Dark Triad Traits Within and Across Interpersonal Circumplex Surfaces.

    PubMed

    Dowgwillo, Emily A; Pincus, Aaron L

    2017-01-01

    Recent discussions surrounding the Dark Triad (narcissism, psychopathy, and Machiavellianism) have centered on areas of distinctiveness and overlap. Given that interpersonal dysfunction is a core feature of Dark Triad traits, the current study uses self-report data from 562 undergraduate students to examine the interpersonal characteristics associated with narcissism, psychopathy, and Machiavellianism on four interpersonal circumplex (IPC) surfaces. The distinctiveness of these characteristics was examined using a novel bootstrapping methodology for computing confidence intervals around circumplex structural summary method parameters. Results suggest that Dark Triad traits exhibit distinct structural summary method parameters with narcissism characterized by high dominance, psychopathy characterized by a blend of high dominance and low affiliation, and Machiavellianism characterized by low affiliation on the problems, values, and efficacies IPC surfaces. Additionally, there was some heterogeneity in findings for different measures of psychopathy. Gender differences in structural summary parameters were examined, finding similar parameter values despite mean-level differences in Dark Triad traits. Finally, interpersonal information was integrated across different IPC surfaces to create profiles associated with each Dark Triad trait and to provide a more in-depth portrait of associated interpersonal dynamics. © The Author(s) 2016.

  10. A factor analysis of the SSQ (Speech, Spatial, and Qualities of Hearing Scale).

    PubMed

    Akeroyd, Michael A; Guy, Fiona H; Harrison, Dawn L; Suller, Sharon L

    2014-02-01

    The speech, spatial, and qualities of hearing questionnaire (SSQ) is a self-report test of auditory disability. The 49 items ask how well a listener would do in many complex listening situations illustrative of real life. The scores on the items are often combined into the three main sections or into 10 pragmatic subscales. We report here a factor analysis of the SSQ, conducted to further investigate its statistical properties and to determine its structure. The design was a statistical factor analysis of questionnaire data, using parallel analysis to determine the number of factors to retain, oblique rotation of factors, and a bootstrap method to estimate the confidence intervals. The sample comprised 1220 people who had attended MRC IHR over the preceding decade. We found three clear factors, essentially corresponding to the three main sections of the SSQ. They are termed "speech understanding", "spatial perception", and "clarity, separation, and identification". Thirty-five of the SSQ questions were included in the three factors. There was partial evidence for a fourth factor, "effort and concentration", representing two more questions. These results aid in the interpretation and application of the SSQ and indicate potential methods for generating average scores.

  11. The link between hypomania risk and creativity: The role of heightened behavioral activation system (BAS) sensitivity.

    PubMed

    Kim, Bin-Na; Kwon, Seok-Man

    2017-06-01

    The relationship between bipolar disorder (BD) and creativity is well known; however, relatively little is known about its potential mechanism. We investigated whether heightened behavioral activation system (BAS) sensitivity may mediate such a relationship. Korean young adults (N=543) completed self-report questionnaires that included the Hypomanic Personality Scale (HPS), the Behavioral Activation System (BAS) Scale, the Everyday Creativity Scale (ECS), the Positive Affect and Negative Affect Schedule (PANAS), and the Altman Self-Rating Mania Scale (ASRM). Correlational, hierarchical regression and mediation analyses using bootstrap confidence intervals were conducted. As predicted, BAS sensitivity was associated with self-reported creativity as well as hypomania risk and symptoms. Even when positive affect was controlled, BAS sensitivity explained incrementally significant variance in creativity. In the mediation analysis, BAS sensitivity partially mediated the relation between hypomania risk and creativity. Limitations include the reliance on self-report measures to assess creativity and the use of a non-clinical sample. BAS sensitivity was related not only to mood pathology but also to creativity. As a basic affective temperament, BAS sensitivity may help explain the seemingly incompatible sides of adaptation associated with BD. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Experimental and environmental factors affect spurious detection of ecological thresholds

    USGS Publications Warehouse

    Daily, Jonathan P.; Hitt, Nathaniel P.; Smith, David; Snyder, Craig D.

    2012-01-01

    Threshold detection methods are increasingly popular for assessing nonlinear responses to environmental change, but their statistical performance remains poorly understood. We simulated linear change in stream benthic macroinvertebrate communities and evaluated the performance of commonly used threshold detection methods based on model fitting (piecewise quantile regression [PQR]), data partitioning (nonparametric change point analysis [NCPA]), and a hybrid approach (significant zero crossings [SiZer]). We demonstrated that false detection of ecological thresholds (type I errors) and inferences on threshold locations are influenced by sample size, rate of linear change, and frequency of observations across the environmental gradient (i.e., sample-environment distribution, SED). However, the relative importance of these factors varied among statistical methods and between inference types. False detection rates were influenced primarily by user-selected parameters for PQR (τ) and SiZer (bandwidth) and secondarily by sample size (for PQR) and SED (for SiZer). In contrast, the location of reported thresholds was influenced primarily by SED. Bootstrapped confidence intervals for NCPA threshold locations revealed strong correspondence to SED. We conclude that the choice of statistical methods for threshold detection should be matched to experimental and environmental constraints to minimize false detection rates and avoid spurious inferences regarding threshold location.
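
    The bootstrapped change-point intervals described above can be caricatured with a simple step-change model: estimate the threshold by least squares over candidate split points, then bootstrap (x, y) pairs to get a percentile interval for its location. This is a toy sketch under invented data, not the NCPA implementation the authors evaluated:

```python
import numpy as np

rng = np.random.default_rng(1)

def change_point(y):
    """Index k minimizing the two-segment within-group sum of squares."""
    n = len(y)
    best_k, best_sse = 2, float("inf")
    for k in range(2, n - 1):
        sse = ((y[:k] - y[:k].mean()) ** 2).sum() + ((y[k:] - y[k:].mean()) ** 2).sum()
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k

def bootstrap_cp_ci(x, y, n_boot=500, alpha=0.05):
    """Percentile bootstrap CI for the change-point location along x."""
    n = len(x)
    locs = np.empty(n_boot)
    for b in range(n_boot):
        idx = np.sort(rng.integers(0, n, n))  # resample pairs, keep x order
        locs[b] = x[idx][change_point(y[idx])]
    return np.quantile(locs, [alpha / 2, 1 - alpha / 2])

# Hypothetical environmental gradient with a step change near x = 5.
x = np.linspace(0, 10, 80)
y = np.where(x < 5, 1.0, 3.0) + rng.normal(0, 0.5, 80)
lo, hi = bootstrap_cp_ci(x, y)
print(f"estimated threshold {x[change_point(y)]:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

    Note how the bootstrap interval inherits the sample-environment distribution: thresholds can only be reported at observed x values, which is the correspondence the record describes.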

  13. Evolutionary history of Stratiomyidae (Insecta: Diptera): the molecular phylogeny of a diverse family of flies.

    PubMed

    Brammer, Colin A; von Dohlen, Carol D

    2007-05-01

    Stratiomyidae is a cosmopolitan family of Brachycera (Diptera) that contains over 2800 species. This study focused on the relationships of members of the subfamily Clitellariinae, which has had a complicated taxonomic history. To investigate the monophyly of the Clitellariinae, the relationships of its genera, and the ages of Stratiomyidae lineages, representatives for all 12 subfamilies of Stratiomyidae, totaling 68 taxa, were included in a phylogenetic reconstruction. A Xylomyidae representative, Solva sp., was used as an outgroup. Sequences of EF-1alpha and 28S rRNA genes were analyzed under maximum parsimony with bootstrapping, and Bayesian methods to recover the best estimate of phylogeny. A chronogram with estimated dates for all nodes in the phylogeny was generated with the program r8s, and divergence dates and confidence intervals were further explored with the program multidivtime. All subfamilies of Stratiomyidae with more than one representative were found to be monophyletic, except for Stratiomyinae and Clitellariinae. Clitellariinae were distributed among five separate clades in the phylogeny, and Raphiocerinae were nested within Stratiomyinae. Dating analysis suggested an early Cretaceous origin for the common ancestor of extant Stratiomyidae, and a radiation of several major Stratiomyidae lineages in the Late Cretaceous.

  14. Minimax confidence intervals in geomagnetism

    NASA Technical Reports Server (NTRS)

    Stark, Philip B.

    1992-01-01

    The present paper uses the theory of Donoho (1989) to find lower bounds on the lengths of optimally short fixed-length confidence intervals (minimax confidence intervals) for Gauss coefficients of the field of degrees 1-12 using the heat flow constraint. The bounds on optimal minimax intervals are about 40 percent shorter than Backus' intervals: no procedure for producing fixed-length confidence intervals, linear or nonlinear, can give intervals shorter than about 60 percent the length of Backus' in this problem. While both methods rigorously account for the fact that core field models are infinite-dimensional, the application of the techniques to the geomagnetic problem involves approximations and counterfactual assumptions about the data errors, and so these results are likely to be extremely optimistic estimates of the actual uncertainty in Gauss coefficients.

  15. Using Screencast Videos to Enhance Undergraduate Students' Statistical Reasoning about Confidence Intervals

    ERIC Educational Resources Information Center

    Strazzeri, Kenneth Charles

    2013-01-01

    The purposes of this study were to investigate (a) undergraduate students' reasoning about the concepts of confidence intervals (b) undergraduate students' interactions with "well-designed" screencast videos on sampling distributions and confidence intervals, and (c) how screencast videos improve undergraduate students' reasoning ability…

  16. Improved central confidence intervals for the ratio of Poisson means

    NASA Astrophysics Data System (ADS)

    Cousins, R. D.

    The problem of confidence intervals for the ratio of two unknown Poisson means was "solved" decades ago, but a closer examination reveals that the standard solution is far from optimal from the frequentist point of view. We construct a more powerful set of central confidence intervals, each of which is a (typically proper) subinterval of the corresponding standard interval. They also provide upper and lower confidence limits which are more restrictive than the standard limits. The construction follows Neyman's original prescription, though discreteness of the Poisson distribution and the presence of a nuisance parameter (one of the unknown means) lead to slightly conservative intervals. Philosophically, the issue of the appropriateness of the construction method is similar to the issue of conditioning on the margins in 2×2 contingency tables. From a frequentist point of view, the new set maintains (over) coverage of the unknown true value of the ratio of means at each stated confidence level, even though the new intervals are shorter than the old intervals by any measure (except for two cases where they are identical). As an example, when the number 2 is drawn from each Poisson population, the 90% CL central confidence interval on the ratio of means is (0.169, 5.196), rather than (0.108, 9.245). In the cited literature, such confidence intervals have applications in numerous branches of pure and applied science, including agriculture, wildlife studies, manufacturing, medicine, reliability theory, and elementary particle physics.
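
    The "standard solution" this abstract improves on is the classical conditional construction: given n = n1 + n2, the count n1 is Binomial(n, p) with p = ρ/(1+ρ), so a Clopper-Pearson interval for p maps to a central interval for ρ = p/(1-p). A stdlib-only sketch (bisection in place of a beta quantile) reproduces the quoted 90% interval for counts of 2 and 2:

```python
from math import comb

def binom_cdf(x, n, p):
    """P(X <= x) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(x + 1))

def invert_tail(tail, a, increasing):
    """Bisection solve of tail(p) = a for p in (0, 1); tail is monotone in p."""
    lo, hi = 1e-12, 1 - 1e-12
    for _ in range(100):
        mid = (lo + hi) / 2
        if (tail(mid) > a) == increasing:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def standard_ratio_ci(n1, n2, cl=0.90):
    """Standard central CI for rho = lambda1/lambda2 via conditioning on n1+n2."""
    a = (1 - cl) / 2
    n = n1 + n2
    # Clopper-Pearson limits for p = rho / (1 + rho), then map back.
    p_lo = 0.0 if n1 == 0 else invert_tail(lambda p: 1 - binom_cdf(n1 - 1, n, p), a, True)
    p_hi = 1.0 if n1 == n else invert_tail(lambda p: binom_cdf(n1, n, p), a, False)
    return p_lo / (1 - p_lo), p_hi / (1 - p_hi) if n1 < n else float("inf")

print(standard_ratio_ci(2, 2))  # approx (0.108, 9.245), the interval quoted above
```

    The new intervals of the record are constructed differently (a direct Neyman construction over the nuisance parameter) and are subintervals of this standard one.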

  17. CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.

    USGS Publications Warehouse

    Cooley, Richard L.; Vecchia, Aldo V.

    1987-01-01

    A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.

  18. Interpretation of Confidence Interval Facing the Conflict

    ERIC Educational Resources Information Center

    Andrade, Luisa; Fernández, Felipe

    2016-01-01

    As literature has reported, it is usual that university students in statistics courses, and even statistics teachers, interpret the confidence level associated with a confidence interval as the probability that the parameter value will be between the lower and upper interval limits. To confront this misconception, class activities have been…

  19. Evaluating Independent Proportions for Statistical Difference, Equivalence, Indeterminacy, and Trivial Difference Using Inferential Confidence Intervals

    ERIC Educational Resources Information Center

    Tryon, Warren W.; Lewis, Charles

    2009-01-01

    Tryon presented a graphic inferential confidence interval (ICI) approach to analyzing two independent and dependent means for statistical difference, equivalence, replication, indeterminacy, and trivial difference. Tryon and Lewis corrected the reduction factor used to adjust descriptive confidence intervals (DCIs) to create ICIs and introduced…

  20. Race, Ethnicity, Language, Social Class, and Health Communication Inequalities: A Nationally-Representative Cross-Sectional Study

    PubMed Central

    Viswanath, Kasisomayajula; Ackerson, Leland K.

    2011-01-01

    Background While mass media communications can be an important source of health information, there are substantial social disparities in health knowledge that may be related to media use. The purpose of this study is to investigate how the use of cancer-related health communications is patterned by race, ethnicity, language, and social class. Methodology/Principal Findings In a nationally-representative cross-sectional telephone survey, 5,187 U.S. adults provided information about demographic characteristics, cancer information seeking, and attention to and trust in health information from television, radio, newspaper, magazines, and the Internet. Cancer information seeking was lowest among Spanish-speaking Hispanics (odds ratio: 0.42; 95% confidence interval: 0.28–0.63) compared to non-Hispanic whites. Spanish-speaking Hispanics were more likely than non-Hispanic whites to pay attention to (odds ratio: 3.10; 95% confidence interval: 2.07–4.66) and trust (odds ratio: 2.61; 95% confidence interval: 1.53–4.47) health messages from the radio. Non-Hispanic blacks were more likely than non-Hispanic whites to pay attention to (odds ratio: 2.39; 95% confidence interval: 1.88–3.04) and trust (odds ratio: 2.16; 95% confidence interval: 1.61–2.90) health messages on television. Those who were college graduates tended to pay more attention to health information from newspapers (odds ratio: 1.98; 95% confidence interval: 1.42–2.75), magazines (odds ratio: 1.86; 95% confidence interval: 1.32–2.60), and the Internet (odds ratio: 4.74; 95% confidence interval: 2.70–8.31) and had less trust in cancer-related health information from television (odds ratio: 0.44; 95% confidence interval: 0.32–0.62) and radio (odds ratio: 0.54; 95% confidence interval: 0.34–0.86) compared to those who were not high school graduates. Conclusions/Significance Health media use is patterned by race, ethnicity, language and social class. 
Providing greater access to and enhancing the quality of health media by taking into account factors associated with social determinants may contribute to addressing social disparities in health. PMID:21267450

  1. Preconceptional and prenatal supplementary folic acid and multivitamin intake and autism spectrum disorders.

    PubMed

    Virk, Jasveer; Liew, Zeyan; Olsen, Jørn; Nohr, Ellen A; Catov, Janet M; Ritz, Beate

    2016-08-01

    To evaluate whether early folic acid supplementation during pregnancy prevents a diagnosis of autism spectrum disorders in offspring. Information on autism spectrum disorder diagnoses was obtained from the National Hospital Register and the Central Psychiatric Register. We estimated risk ratios for autism spectrum disorders for children whose mothers took folate or multivitamin supplements from 4 weeks before the last menstrual period through 8 weeks after it (-4 to 8 weeks), divided into three 4-week periods. We did not find an association of early folate or multivitamin intake with autism spectrum disorder (folic acid-adjusted risk ratio: 1.06, 95% confidence interval: 0.82-1.36; multivitamin-adjusted risk ratio: 1.00, 95% confidence interval: 0.82-1.22), autistic disorder (folic acid-adjusted risk ratio: 1.18, 95% confidence interval: 0.76-1.84; multivitamin-adjusted risk ratio: 1.22, 95% confidence interval: 0.87-1.69), Asperger's syndrome (folic acid-adjusted risk ratio: 0.85, 95% confidence interval: 0.46-1.53; multivitamin-adjusted risk ratio: 0.95, 95% confidence interval: 0.62-1.46), or pervasive developmental disorder-not otherwise specified (folic acid-adjusted risk ratio: 1.07, 95% confidence interval: 0.75-1.54; multivitamin-adjusted risk ratio: 0.87, 95% confidence interval: 0.65-1.17) compared with women reporting no supplement use in the same period. We did not find any evidence to corroborate previous reports of a reduced risk for autism spectrum disorders in offspring of women using folic acid supplements in early pregnancy. © The Author(s) 2015.

  2. Copula based prediction models: an application to an aortic regurgitation study

    PubMed Central

    Kumar, Pranesh; Shoukri, Mohamed M

    2007-01-01

    Background: An important issue in prediction modeling of multivariate data is the measure of dependence structure. The use of Pearson's correlation as a dependence measure has several pitfalls and hence application of regression prediction models based on this correlation may not be an appropriate methodology. As an alternative, a copula based methodology for prediction modeling and an algorithm to simulate data are proposed. Methods: The method consists of introducing copulas as an alternative to the correlation coefficient commonly used as a measure of dependence. An algorithm based on the marginal distributions of random variables is applied to construct the Archimedean copulas. Monte Carlo simulations are carried out to replicate datasets, estimate prediction model parameters and validate them using Lin's concordance measure. Results: We have carried out a correlation-based regression analysis on data from 20 patients aged 17–82 years on pre-operative and post-operative ejection fractions after surgery and estimated the prediction model: Post-operative ejection fraction = - 0.0658 + 0.8403 (Pre-operative ejection fraction); p = 0.0008; 95% confidence interval of the slope coefficient (0.3998, 1.2808). From the exploratory data analysis, it is noted that both the pre-operative and post-operative ejection fractions measurements have slight departures from symmetry and are skewed to the left. It is also noted that the measurements tend to be widely spread and have shorter tails compared to normal distribution. Therefore predictions made from the correlation-based model corresponding to the pre-operative ejection fraction measurements in the lower range may not be accurate. Further it is found that the best approximated marginal distributions of pre-operative and post-operative ejection fractions (using q-q plots) are gamma distributions. 
The copula-based prediction model is estimated as: Post-operative ejection fraction = -0.0933 + 0.8907 × (Pre-operative ejection fraction); p = 0.00008; 95% confidence interval for the slope coefficient (0.4810, 1.3003). Between the two models, the predicted post-operative ejection fractions in the lower range of pre-operative ejection measurements differ considerably, and the prediction errors of the copula model are smaller. To validate the copula methodology we re-sampled with replacement fifty independent bootstrap samples and estimated concordance statistics of 0.7722 (p = 0.0224) for the copula model and 0.7237 (p = 0.0604) for the correlation model. The predicted and observed measurements are concordant for both models. The estimates of accuracy components are 0.9233 and 0.8654 for the copula and correlation models, respectively. Conclusion: Copula-based prediction modeling is demonstrated to be an appropriate alternative to conventional correlation-based prediction modeling, since correlation-based prediction models are not appropriate for modeling the dependence in populations with asymmetrical tails. The proposed copula-based prediction model has been validated using the independent bootstrap samples. PMID:17573974
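
    The concordance-based validation step can be sketched generically: compute Lin's concordance correlation coefficient between observed and predicted values, then bootstrap it over resampled pairs. The data below are simulated stand-ins, not the study's 20 patients:

```python
import numpy as np

rng = np.random.default_rng(7)

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient for paired measurements."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

# Simulated observed vs. model-predicted ejection fractions (hypothetical).
observed = rng.uniform(0.3, 0.8, 50)
predicted = observed + rng.normal(0.0, 0.05, 50)

# Percentile bootstrap of the concordance over resampled patient pairs.
boot = np.empty(2000)
for b in range(2000):
    idx = rng.integers(0, 50, 50)
    boot[b] = lin_ccc(observed[idx], predicted[idx])
print(f"CCC = {lin_ccc(observed, predicted):.3f}, "
      f"95% CI ({np.quantile(boot, 0.025):.3f}, {np.quantile(boot, 0.975):.3f})")
```

    Unlike Pearson's r, the CCC penalizes both location and scale shifts, which is why it is preferred for agreement between predicted and observed values.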

  3. Reporting Confidence Intervals and Effect Sizes: Collecting the Evidence

    ERIC Educational Resources Information Center

    Zientek, Linda Reichwein; Ozel, Z. Ebrar Yetkiner; Ozel, Serkan; Allen, Jeff

    2012-01-01

    Confidence intervals (CIs) and effect sizes are essential to encourage meta-analytic thinking and to accumulate research findings. CIs provide a range of plausible values for population parameters with a degree of confidence that the parameter is in that particular interval. CIs also give information about how precise the estimates are. Comparison…

  4. Accuracy in parameter estimation for targeted effects in structural equation modeling: sample size planning for narrow confidence intervals.

    PubMed

    Lai, Keke; Kelley, Ken

    2011-06-01

    In addition to evaluating a structural equation model (SEM) as a whole, often the model parameters are of interest and confidence intervals for those parameters are formed. Given a model with a good overall fit, it is entirely possible for the targeted effects of interest to have very wide confidence intervals, thus giving little information about the magnitude of the population targeted effects. With the goal of obtaining sufficiently narrow confidence intervals for the model parameters of interest, sample size planning methods for SEM are developed from the accuracy in parameter estimation approach. One method plans for the sample size so that the expected confidence interval width is sufficiently narrow. An extended procedure ensures that the obtained confidence interval will be no wider than desired, with some specified degree of assurance. A Monte Carlo simulation study was conducted that verified the effectiveness of the procedures in realistic situations. The methods developed have been implemented in the MBESS package in R so that they can be easily applied by researchers. © 2011 American Psychological Association
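
    The accuracy-in-parameter-estimation logic is easiest to see for a single mean rather than an SEM path: choose the smallest n so that the expected half-width of the confidence interval is at most a target ω. A large-sample (z-based rather than t-based) sketch, with σ assumed known, which is not the MBESS implementation itself:

```python
from math import ceil
from statistics import NormalDist

def n_for_ci_halfwidth(sigma, omega, conf=0.95):
    """Smallest n so the expected z-based CI half-width z*sigma/sqrt(n)
    is at most omega (a large-sample sketch of the AIPE idea)."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return ceil((z * sigma / omega) ** 2)

print(n_for_ci_halfwidth(sigma=1.0, omega=0.1))  # 385
```

    The SEM procedures in the record replace σ/√n with the model-implied standard error of the targeted effect, and the "assurance" extension inflates n so that the realized (not just expected) width meets the target with high probability.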

  5. Commentary on Holmes et al. (2007): resolving the debate on when extinction risk is predictable.

    PubMed

    Ellner, Stephen P; Holmes, Elizabeth E

    2008-08-01

We reconcile the findings of Holmes et al. (Ecology Letters, 10, 2007, 1182) that 95% confidence intervals for quasi-extinction risk were narrow for many vertebrates of conservation concern with previous theory predicting wide confidence intervals. We extend previous theory concerning the precision of quasi-extinction estimates as a function of population dynamic parameters, prediction intervals and quasi-extinction thresholds, and provide an approximation that specifies the prediction interval and threshold combinations where quasi-extinction estimates are precise (vs. imprecise). This allows PVA practitioners to define the prediction interval and threshold regions of safety (low risk with high confidence), danger (high risk with high confidence), and uncertainty.

  6. Screening Tool to Determine Risk of Having Muscle Dysmorphia Symptoms in Men Who Engage in Weight Training at a Gym.

    PubMed

    Palazón-Bru, Antonio; Rizo-Baeza, María M; Martínez-Segura, Asier; Folgado-de la Rosa, David M; Gil-Guillén, Vicente F; Cortés-Castell, Ernesto

    2018-03-01

Although 2 screening tests exist to identify a high risk of muscle dysmorphia (MD) symptoms, both require a long time to apply. Accordingly, we proposed the construction, validation, and implementation of such a test in a mobile application using easy-to-measure factors associated with MD. Cross-sectional observational study. Gyms in Alicante (Spain) during 2013 to 2014. One hundred forty-one men who engaged in weight training. The variables are as follows: age, educational level, income, buys own food, physical activity per week, daily meals, importance of nutrition, special nutrition, guilt about dietary nonadherence, supplements, and body mass index (BMI). A points system was constructed through a binary logistic regression model to predict a high risk of MD symptoms by testing all possible combinations of secondary variables (5035). The system was validated using bootstrapping and implemented in a mobile application. High risk of having MD symptoms (Muscle Appearance Satisfaction Scale). Of the 141 participants, 45 had a high risk of MD symptoms [31.9%; 95% confidence interval (CI), 24.2%-39.6%]. The logistic regression model combination providing the largest area under the receiver operating characteristic curve (0.76) included the following: age [odds ratio (OR) = 0.90; 95% CI, 0.84-0.97, P = 0.007], guilt about dietary nonadherence (OR = 2.46; 95% CI, 1.06-5.73, P = 0.037), energy supplements (OR = 3.60; 95% CI, 1.54-8.44, P = 0.003), and BMI (OR = 1.33, 95% CI, 1.12-1.57, P < 0.001). The points system was validated through 1000 bootstrap samples. A quick, easy-to-use, 4-factor test that could serve as a screening tool for a high risk of MD symptoms has been constructed, validated, and implemented in a mobile application.
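Bootstrap validation of a points system of this kind amounts to re-estimating discrimination (the area under the ROC curve) on samples drawn with replacement. A minimal sketch with hypothetical point totals and outcomes; the study's actual predictors, coefficients, and data are not reproduced:

```python
import random

def auc(pos, neg):
    """AUC via the Mann-Whitney formulation: probability that a
    case scores above a non-case (ties count one half)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc_ci(records, n_boot=200, seed=1):
    """Percentile 95% CI for the AUC of a points score.
    records: list of (points, has_outcome) tuples."""
    rng = random.Random(seed)
    aucs = []
    while len(aucs) < n_boot:
        sample = [rng.choice(records) for _ in records]
        pos = [p for p, y in sample if y]
        neg = [p for p, y in sample if not y]
        if pos and neg:                      # skip degenerate resamples
            aucs.append(auc(pos, neg))
    aucs.sort()
    return aucs[int(0.025 * n_boot)], aucs[int(0.975 * n_boot)]

# hypothetical (points, high-risk-of-MD-symptoms) records
records = [(8, True), (6, True), (7, True), (9, True), (3, False),
           (4, False), (2, False), (5, False), (6, False), (1, False)]
lo, hi = bootstrap_auc_ci(records)
```

A wide bootstrap interval for the AUC would signal that the score's apparent discrimination is unstable at the given sample size.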

  7. Effects of the TRPV1 antagonist ABT-102 on body temperature in healthy volunteers: pharmacokinetic/pharmacodynamic analysis of three phase 1 trials

    PubMed Central

    Othman, Ahmed A; Nothaft, Wolfram; Awni, Walid M; Dutta, Sandeep

    2013-01-01

Aim To characterize quantitatively the relationship between exposure to ABT-102, a potent and selective TRPV1 antagonist, and its effects on body temperature in humans using a population pharmacokinetic/pharmacodynamic modelling approach. Methods Serial pharmacokinetic and body temperature (oral or core) measurements from three double-blind, randomized, placebo-controlled studies [single dose (2, 6, 18, 30 and 40 mg, solution formulation), multiple dose (2, 4 and 8 mg twice daily for 7 days, solution formulation) and multiple dose (1, 2 and 4 mg twice daily for 7 days, solid dispersion formulation)] were analyzed. NONMEM was used for model development and the model building steps were guided by pre-specified diagnostic and statistical criteria. The final model was qualified using a non-parametric bootstrap and a visual predictive check. Results The developed body temperature model included additive components of baseline, circadian rhythm (cosine function of time) and ABT-102 effect (Emax function of plasma concentration) with tolerance development (decrease in ABT-102 Emax over time). Type of body temperature measurement (oral vs. core) was included as a fixed effect on baseline, amplitude of circadian rhythm and residual error. The model estimates (95% bootstrap confidence intervals) were: baseline oral body temperature, 36.3 (36.3, 36.4)°C; baseline core body temperature, 37.0 (37.0, 37.1)°C; oral circadian amplitude, 0.25 (0.22, 0.28)°C; core circadian amplitude, 0.31 (0.28, 0.34)°C; circadian phase shift, 7.6 (7.3, 7.9) h; ABT-102 Emax, 2.2 (1.9, 2.7)°C; ABT-102 EC50, 20 (15, 28) ng ml−1; tolerance T50, 28 (20, 43) h. Conclusions At exposures predicted to exert analgesic activity in humans, the effect of ABT-102 on body temperature is estimated to be 0.6 to 0.8°C. This effect attenuates within 2 to 3 days of dosing. PMID:22966986
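The additive model structure reported above, with the point estimates substituted in, can be written as a small function. The cosine phase convention and the form of the tolerance term (Emax decaying with a half-time of T50) are assumptions for illustration; the record reports parameter values, not the exact equations:

```python
import math

def body_temp(t_h, conc_ng_ml, oral=True):
    """Baseline + circadian cosine + Emax drug effect with tolerance.
    Parameter values are the record's point estimates; functional
    forms are assumed for illustration."""
    base = 36.3 if oral else 37.0          # baseline, degrees C
    amp = 0.25 if oral else 0.31           # circadian amplitude, degrees C
    shift, emax0, ec50, t50 = 7.6, 2.2, 20.0, 28.0
    circadian = amp * math.cos(2 * math.pi * (t_h - shift) / 24.0)
    emax_t = emax0 * t50 / (t50 + t_h)     # tolerance: Emax declines over time
    drug = emax_t * conc_ng_ml / (ec50 + conc_ng_ml)
    return base + circadian + drug
```

At the assumed circadian peak with no drug, the function returns baseline plus amplitude; adding drug concentration raises temperature by at most the time-attenuated Emax, mirroring the reported 0.6 to 0.8°C effect that fades over days.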

  8. Soft-tissue anatomy of the extant hominoids: a review and phylogenetic analysis

    PubMed Central

    Gibbs, S; Collard, M; Wood, B

    2002-01-01

    This paper reports the results of a literature search for information about the soft-tissue anatomy of the extant non-human hominoid genera, Pan, Gorilla, Pongo and Hylobates, together with the results of a phylogenetic analysis of these data plus comparable data for Homo. Information on the four extant non-human hominoid genera was located for 240 out of the 1783 soft-tissue structures listed in the Nomina Anatomica. Numerically these data are biased so that information about some systems (e.g. muscles) and some regions (e.g. the forelimb) are over-represented, whereas other systems and regions (e.g. the veins and the lymphatics of the vascular system, the head region) are either under-represented or not represented at all. Screening to ensure that the data were suitable for use in a phylogenetic analysis reduced the number of eligible soft-tissue structures to 171. These data, together with comparable data for modern humans, were converted into discontinuous character states suitable for phylogenetic analysis and then used to construct a taxon-by-character matrix. This matrix was used in two tests of the hypothesis that soft-tissue characters can be relied upon to reconstruct hominoid phylogenetic relationships. In the first, parsimony analysis was used to identify cladograms requiring the smallest number of character state changes. In the second, the phylogenetic bootstrap was used to determine the confidence intervals of the most parsimonious clades. The parsimony analysis yielded a single most parsimonious cladogram that matched the molecular cladogram. Similarly the bootstrap analysis yielded clades that were compatible with the molecular cladogram; a (Homo, Pan) clade was supported by 95% of the replicates, and a (Gorilla, Pan, Homo) clade by 96%. These are the first hominoid morphological data to provide statistically significant support for the clades favoured by the molecular evidence. PMID:11833653

  9. Expanding Access to BRCA1/2 Genetic Counseling with Telephone Delivery: A Cluster Randomized Trial

    PubMed Central

    Butler, Karin M.; Schwartz, Marc D.; Mandelblatt, Jeanne S.; Boucher, Kenneth M.; Pappas, Lisa M.; Gammon, Amanda; Kohlmann, Wendy; Edwards, Sandra L.; Stroup, Antoinette M.; Buys, Saundra S.; Flores, Kristina G.; Campo, Rebecca A.

    2014-01-01

Background The growing demand for cancer genetic services underscores the need to consider approaches that enhance access and efficiency of genetic counseling. Telephone delivery of cancer genetic services may improve access to these services for individuals experiencing geographic (rural areas) and structural (travel time, transportation, childcare) barriers to access. Methods This cluster-randomized clinical trial used population-based sampling of women at risk for BRCA1/2 mutations to compare telephone and in-person counseling for: 1) equivalency of testing uptake and 2) noninferiority of changes in psychosocial measures. Women 25 to 74 years of age with personal or family histories of breast or ovarian cancer and who were able to travel to one of 14 outreach clinics were invited to participate. Randomization was by family. Assessments were conducted at baseline, one week after pretest and post-test counseling, and at six months. Of the 988 women randomly assigned, 901 completed a follow-up assessment. Cluster bootstrap methods were used to estimate the 95% confidence interval (CI) for the difference between test uptake proportions, using a 10% equivalency margin. Differences in psychosocial outcomes for determining noninferiority were estimated using linear models together with one-sided 97.5% bootstrap CIs. Results Uptake of BRCA1/2 testing was lower following telephone (21.8%) than in-person counseling (31.8%; difference = 10.2%, 95% CI = 3.9% to 16.3%; after imputation of missing data: difference = 9.2%, 95% CI = -0.1% to 24.6%). Telephone counseling fulfilled the criteria for noninferiority to in-person counseling for all measures. Conclusions BRCA1/2 telephone counseling, although leading to lower testing uptake, appears to be safe and as effective as in-person counseling with regard to minimizing adverse psychological reactions, promoting informed decision making, and delivering patient-centered communication for both rural and urban women. PMID:25376862
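The cluster bootstrap used in the trial resamples whole families rather than individuals, which preserves within-family correlation when forming the interval for the difference in uptake proportions. A sketch with hypothetical clusters of 0/1 uptake indicators (not the trial's data):

```python
import random

def cluster_bootstrap_diff_ci(clusters_a, clusters_b, n_boot=1000, seed=2):
    """Percentile 95% CI for the difference in proportions between
    two arms, resampling whole clusters (families) with replacement.
    clusters_*: list of clusters, each a list of 0/1 indicators."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        a = [rng.choice(clusters_a) for _ in clusters_a]
        b = [rng.choice(clusters_b) for _ in clusters_b]
        pa = sum(map(sum, a)) / sum(map(len, a))   # uptake proportion, arm A
        pb = sum(map(sum, b)) / sum(map(len, b))   # uptake proportion, arm B
        diffs.append(pa - pb)
    diffs.sort()
    return diffs[int(0.025 * n_boot)], diffs[int(0.975 * n_boot)]

# hypothetical family clusters: in-person arm vs telephone arm
in_person = [[1, 0], [1], [0, 1], [1, 1], [0], [1, 0, 1]]
telephone = [[0, 0], [1], [0], [0, 1], [0], [1, 0]]
lo, hi = cluster_bootstrap_diff_ci(in_person, telephone)
```

Resampling at the cluster level keeps correlated family members together in each resample, so the interval reflects the trial's randomization unit.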

  10. Can the Direct Medical Cost of Chronic Disease Be Transferred across Different Countries? Using Cost-of-Illness Studies on Type 2 Diabetes, Epilepsy and Schizophrenia as Examples

    PubMed Central

    Gao, Lan; Hu, Hao; Zhao, Fei-Li; Li, Shu-Chuen

    2016-01-01

Objectives To systematically review cost-of-illness studies for schizophrenia (SC), epilepsy (EP) and type 2 diabetes mellitus (T2DM) and explore the transferability of direct medical costs across countries. Methods A comprehensive literature search was performed to yield studies that estimated direct medical costs. A generalized linear model (GLM) with gamma distribution and log link was utilized to explore the variation in costs accounted for by the included factors. Both parametric (random-effects model) and non-parametric (bootstrapping) meta-analyses were performed to pool the converted raw cost data (expressed as a percentage of the GDP/capita of the country where the study was conducted). Results In total, 93 articles were included (40 studies for T2DM, 34 for EP and 19 for SC). Significant variance in the direct medical costs was detected both between and within disease classes. Multivariate analysis identified GDP/capita (p<0.05) as a significant factor contributing to the large variance in the cost results. Bootstrapping meta-analysis generated more conservative estimations with slightly wider 95% confidence intervals (CI) than the parametric meta-analysis, yielding a mean (95%CI) of 16.43% (11.32, 21.54) for T2DM, 36.17% (22.34, 50.00) for SC and 10.49% (7.86, 13.41) for EP. Conclusions Converting the raw cost data into a percentage of the GDP/capita of each country was demonstrated to be a feasible approach to transferring direct medical costs across countries. The approach from our study, obtaining an estimated direct cost value along with the size of the specific disease population in each jurisdiction, could be used for a quick check on the economic burden of a particular disease in countries without such data. PMID:26814959

  11. Is the maturity of hospitals' quality improvement systems associated with measures of quality and patient safety?

    PubMed Central

    2011-01-01

Background Previous research addressed the development of a classification scheme for quality improvement systems in European hospitals. In this study we explore associations between the 'maturity' of the hospitals' quality improvement system and clinical outcomes. Methods The maturity classification scheme was developed based on survey results from 389 hospitals in eight European countries. We matched the hospitals from the Spanish sample (113 hospitals) with those hospitals participating in a nation-wide, voluntary hospital performance initiative. We then compared sample distributions and explored associations between the 'maturity' of the hospitals' quality improvement system and a range of composite outcome measures, such as adjusted hospital-wide mortality, readmission, complication, and length-of-stay indices. Statistical analysis included bivariate correlations for parametrically and non-parametrically distributed data, multiple robust regression models, and bootstrapping techniques to obtain confidence intervals for the correlation and regression estimates. Results Overall, 43 hospitals were included. Compared to the original sample of 113, this sample was characterized by a higher representation of university hospitals. Maturity of the quality improvement system was similar, although the matched sample showed less variability. Analysis of associations between the quality improvement system and hospital-wide outcomes suggests significant correlations for the indicator adjusted hospital complications, borderline significance for adjusted hospital readmissions, and non-significance for the adjusted hospital mortality and length of stay indicators. These results are confirmed by the bootstrap estimates of the robust regression model after adjusting for hospital characteristics. Conclusions We assessed associations between hospitals' quality improvement systems and clinical outcomes.
From these data it appears that having a more developed quality improvement system is associated with lower rates of adjusted hospital complications. A number of methodological and logistical hurdles remain in linking hospital quality improvement systems to outcomes. Further research should aim at identifying the latent dimensions of quality improvement systems that predict quality and safety outcomes. Such research would add pertinent knowledge regarding the implementation of organizational strategies related to quality-of-care outcomes. PMID:22185479

  12. Clinical Risk Index for Babies score for the prediction of neurodevelopmental outcomes at 3 years of age in infants of very low birthweight.

    PubMed

    Lodha, Abhay; Sauvé, Reg; Chen, Sophie; Tang, Selphee; Christianson, Heather

    2009-11-01

    In this study, we evaluated the Clinical Risk Index for Babies - revised (CRIB-II) score as a predictor of long-term neurodevelopmental outcomes in preterm infants at 36 months' corrected age. CRIB-II scores, which include birthweight, gestational age, sex, admission temperature, and base excess, were recorded prospectively on all infants weighing 1250g or less admitted to the neonatal intensive care unit (NICU). The sensitivity and specificity of CRIB-II scores to predict poor outcomes were examined using receiver operating characteristic curves, and predictive accuracy was assessed using the area under the curve (AUC), based on the observed values entered on a continuous scale. Poor outcomes were defined as death or major neurodevelopmental disability (cerebral palsy, neurosensory hearing loss requiring amplification, legal blindness, severe seizure disorder, or cognitive score >2SD below the mean for adjusted age determined by clinical neurological examination and on the Wechsler Preschool and Primary Scale of Intelligence, Bayley Scales of Infant Development, or revised Leiter International Performance Scale). Of the 180 infants admitted to the NICU, 155 survived. Complete follow-up data were available for 107 children. The male:female ratio was 50:57 (47-53%), median birthweight was 930g (range 511-1250g), and median gestational age was 27 weeks (range 23-32wks). Major neurodevelopmental impairment was observed in 11.2% of participants. In a regression model, the CRIB-II score was significantly correlated with long-term neurodevelopmental outcomes. It predicted major neurodevelopmental impairment (odds ratio [OR] 1.57, bootstrap 95% confidence interval [CI] 1.26-3.01; AUC 0.84) and poor outcome (OR 1.46; bootstrap 95% CI 1.31-1.71, AUC 0.82) at 36 months' corrected age. CRIB-II scores of 13 or more in the first hour of life can reliably predict major neurodevelopmental impairment at 36 months' corrected age (sensitivity 83%; specificity 84%).

  13. Maternal and neonatal outcomes of antenatal anemia in a Scottish population: a retrospective cohort study.

    PubMed

    Rukuni, Ruramayi; Bhattacharya, Sohinee; Murphy, Michael F; Roberts, David; Stanworth, Simon J; Knight, Marian

    2016-05-01

Antenatal anemia is a major public health problem in the UK, yet there is limited high-quality evidence for associated poor clinical outcomes. The objectives of this study were to estimate the incidence and clinical outcomes of antenatal anemia in a Scottish population. A retrospective cohort study of 80 422 singleton pregnancies was conducted using data from the Aberdeen Maternal and Neonatal Databank between 1995 and 2012. Antenatal anemia was defined as haemoglobin ≤ 10 g/dl during pregnancy. Incidence was calculated with 95% confidence intervals and compared over time using a chi-squared test for trend. Multivariable logistic regression was used to adjust for confounding variables. Results are presented as adjusted odds ratios with 95% confidence intervals. The overall incidence of antenatal anemia was 9.3 cases/100 singleton pregnancies (95% confidence interval 9.1-9.5), decreasing from 16.9/100 to 4.1/100 singleton pregnancies between 1995 and 2012 (p < 0.001). Maternal anemia was associated with antepartum hemorrhage (adjusted odds ratio 1.26, 95% confidence interval 1.17-1.36), postpartum infection (adjusted odds ratio 1.89, 95% confidence interval 1.39-2.57), transfusion (adjusted odds ratio 1.87, 95% confidence interval 1.65-2.13) and stillbirth (adjusted odds ratio 1.42, 95% confidence interval 1.04-1.94), and with reduced odds of postpartum hemorrhage (adjusted odds ratio 0.92, 95% confidence interval 0.86-0.98) and low birthweight (adjusted odds ratio 0.77, 95% confidence interval 0.69-0.86). No other outcomes were statistically significant. This study shows the incidence of antenatal anemia is decreasing steadily within this Scottish population. However, given that anemia is a readily correctable risk factor for major causes of morbidity and mortality in the UK, further work is required to investigate appropriate preventive measures. © 2016 Nordic Federation of Societies of Obstetrics and Gynecology.

  14. Opioid analgesia in mechanically ventilated children: results from the multicenter Measuring Opioid Tolerance Induced by Fentanyl study.

    PubMed

    Anand, Kanwaljeet J S; Clark, Amy E; Willson, Douglas F; Berger, John; Meert, Kathleen L; Zimmerman, Jerry J; Harrison, Rick; Carcillo, Joseph A; Newth, Christopher J L; Bisping, Stephanie; Holubkov, Richard; Dean, J Michael; Nicholson, Carol E

    2013-01-01

To examine the clinical factors associated with increased opioid dose among mechanically ventilated children in the pediatric intensive care unit. Prospective, observational study with 100% accrual of eligible patients. Seven pediatric intensive care units from tertiary-care children's hospitals in the Collaborative Pediatric Critical Care Research Network. Four hundred nineteen children treated with morphine or fentanyl infusions. None. Data on opioid use, concomitant therapy, and demographic and explanatory variables were collected. Significant variability occurred in clinical practices, with up to 100-fold differences in baseline opioid doses, average daily or total doses, or peak infusion rates. Opioid exposure for 7 or 14 days required doubling of the daily opioid dose in 16% of patients (95% confidence interval 12%-19%) and 20% of patients (95% confidence interval 16%-24%), respectively. Among patients receiving opioids for longer than 3 days (n = 225), this occurred in 28% (95% confidence interval 22%-33%) and 35% (95% confidence interval 29%-41%) by 7 or 14 days, respectively. Doubling of the opioid dose was more likely to occur following opioid infusions for 7 days or longer (odds ratio 7.9, 95% confidence interval 4.3-14.3; p < 0.001) or co-therapy with midazolam (odds ratio 5.6, 95% confidence interval 2.4-12.9; p < 0.001), and it was less likely to occur if morphine was used as the primary opioid (vs. fentanyl) (odds ratio 0.48, 95% confidence interval 0.25-0.92; p = 0.03), for patients receiving higher initial doses (odds ratio 0.96, 95% confidence interval 0.95-0.98; p < 0.001), or if patients had prior pediatric intensive care unit admissions (odds ratio 0.37, 95% confidence interval 0.15-0.89; p = 0.03). Mechanically ventilated children require increasing opioid doses, often associated with prolonged opioid exposure or the need for additional sedation.
Efforts to reduce prolonged opioid exposure and clinical practice variation may prevent the complications of opioid therapy.

  15. Do Savings Mediate Changes in Adolescents' Future Orientation and Health-Related Outcomes? Findings From Randomized Experiment in Uganda.

    PubMed

    Karimli, Leyla; Ssewamala, Fred M

    2015-10-01

The present study tests the proposition that an economic strengthening intervention for families caring for AIDS-orphaned adolescents would positively affect adolescent future orientation and psychosocial outcomes through increased asset accumulation (in this case, by increasing family savings). Using longitudinal data from the cluster-randomized experiment, we ran generalized estimating equation models with robust standard errors clustering on individual observations. To examine whether family savings mediate the effect of the intervention on adolescents' future orientation and psychosocial outcomes, analyses were conducted in three steps: (1) testing the effect of the intervention on the mediator; (2) testing the effect of the mediator on outcomes, controlling for the intervention; and (3) testing the significance of the mediating effect using the Sobel-Goodman method. Asymmetric confidence intervals for the mediated effect were obtained through bootstrapping, to address the assumption of a normal distribution. Results indicate that participation in a matched Child Savings Account (CSA) program improved adolescents' future orientation and psychosocial outcomes by reducing hopelessness, enhancing self-concept, and improving adolescents' confidence about their educational plans. However, the positive intervention effect on adolescent future orientation and psychosocial outcomes was not transmitted through saving. In other words, participation in the matched CSA program improved adolescent future orientation and psychosocial outcomes regardless of its impact on reported savings. Further research is necessary to understand exactly how participation in economic strengthening interventions, for example, those that employ matched CSAs, shapes adolescent future orientation and psychosocial outcomes: what, if not savings, transmits the treatment effect and how? Copyright © 2015 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
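The bootstrapped asymmetric interval for a mediated effect is typically the percentile interval of the product a·b, where a is the treatment-to-mediator slope and b is the mediator-to-outcome slope controlling for treatment. A sketch with hypothetical data and plain OLS (the study's generalized estimating equation models are not reproduced):

```python
import random

def slope(x, y):
    """Simple OLS slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return sxy / sxx

def indirect_effect(treat, med, out):
    """a*b: a = slope(med ~ treat); b = coefficient on med in
    out ~ med + treat, via the two-predictor OLS closed form."""
    n = len(treat)
    def cov(u, v):
        mu, mv = sum(u) / n, sum(v) / n
        return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / n
    a = slope(treat, med)
    b = ((cov(med, out) * cov(treat, treat) - cov(treat, out) * cov(med, treat))
         / (cov(med, med) * cov(treat, treat) - cov(med, treat) ** 2))
    return a * b

def bootstrap_indirect_ci(rows, n_boot=300, seed=3):
    """Percentile CI for the indirect effect: resample individuals
    with replacement, yielding an asymmetric interval."""
    rng = random.Random(seed)
    effs = []
    for _ in range(n_boot):
        t, m, y = zip(*[rng.choice(rows) for _ in rows])
        try:
            effs.append(indirect_effect(t, m, y))
        except ZeroDivisionError:       # degenerate resample (constant column)
            continue
    effs.sort()
    return effs[int(0.025 * len(effs))], effs[int(0.975 * len(effs))]

# hypothetical rows: (intervention arm, savings, future-orientation score)
rows = [(0, 1.0, 2.1), (0, 1.5, 2.0), (0, 0.8, 1.7), (0, 1.2, 2.2),
        (1, 2.0, 3.1), (1, 2.4, 3.5), (1, 1.8, 2.9), (1, 2.2, 3.3)]
lo, hi = bootstrap_indirect_ci(rows)
```

Because the sampling distribution of a product of slopes is skewed, the percentile interval is asymmetric around the point estimate, which is the rationale the abstract gives for bootstrapping.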

  16. First-order kinetic gas generation model parameters for wet landfills.

    PubMed

    Faour, Ayman A; Reinhart, Debra R; You, Huaxin

    2007-01-01

Landfill gas collection data from wet landfill cells were analyzed and first-order gas generation model parameters were estimated for the US EPA landfill gas emissions model (LandGEM). Parameters were determined through statistical comparison of predicted and actual gas collection. The US EPA LandGEM model appeared to fit the data well, provided it is preceded by a lag phase, which on average was 1.5 years. The first-order reaction rate constant, k, and the methane generation potential, L0, were estimated for a set of landfills with short-term waste placement and long-term gas collection data. Mean and 95% confidence parameter estimates for these data sets were found using mixed-effects model regression followed by bootstrap analysis. The mean values for the specific methane volume produced during the lag phase (Vsto), L0, and k were 33 m³/Mg, 76 m³/Mg, and 0.28 year⁻¹, respectively. Parameters were also estimated for three full-scale wet landfills where waste was placed over many years. The k and L0 values estimated for these landfills were 0.21 year⁻¹ and 115 m³/Mg, 0.11 year⁻¹ and 95 m³/Mg, and 0.12 year⁻¹ and 87 m³/Mg, respectively. A group of data points from wet landfill cells with short-term data was also analyzed. A conservative set of parameter estimates, based on the upper 95% confidence interval parameters, is a k of 0.3 year⁻¹ and an L0 of 100 m³/Mg if design is optimized and the lag is minimized.
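The fitted model above can be sketched with the reported mean parameters for a single waste mass. Treating the lag as a simple time shift, and omitting the separate lag-phase methane volume, are simplifying assumptions for illustration:

```python
import math

def methane_rate(mass_Mg, t_years, k=0.28, l0=76.0, lag=1.5):
    """First-order LandGEM-style methane generation rate (m^3/year)
    for waste placed at t = 0; no generation during the lag phase.
    k in 1/year, l0 in m^3 CH4 per Mg of waste (reported means)."""
    if t_years < lag:
        return 0.0
    return k * l0 * mass_Mg * math.exp(-k * (t_years - lag))

def cumulative_methane(mass_Mg, t_years, k=0.28, l0=76.0, lag=1.5):
    """Integral of the rate: total methane (m^3) generated by time t;
    approaches l0 * mass as t grows."""
    if t_years < lag:
        return 0.0
    return l0 * mass_Mg * (1.0 - math.exp(-k * (t_years - lag)))
```

For 1000 Mg of waste the rate peaks at k·L0·1000 = 21 280 m³/year just after the lag and then decays exponentially, while cumulative generation approaches L0·1000 = 76 000 m³.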

  17. A haemophilia disease management programme targeting cost and utilization of specialty pharmaceuticals.

    PubMed

    Duncan, N; Roberson, C; Lail, A; Donfield, S; Shapiro, A

    2014-07-01

    The high cost of clotting factor concentrate (CFC) used to treat haemophilia and von Willebrand disease (VWD) attracts health plans' attention for cost management strategies such as disease management programmes (DMPs). In 2004, Indiana's high risk insurance health plan, the Indiana Comprehensive Health Insurance Association, in partnership with the Indiana Hemophilia and Thrombosis Center developed and implemented a DMP for beneficiaries with bleeding disorders. This report evaluates the effectiveness of the DMP 5 years post implementation, with specific emphasis on the cost of CFC and other medical expenditures by severity of disease. A pre/post analysis was used. The main evaluation measures were total cost, total outpatient CFC IU dispensed and adjusted total outpatient CFC cost. Summary statistics and mean and median plots were calculated. Overall, 1000 non-parametric bootstrap replicates were created and percentile confidence limits for 95% confidence intervals (CI) are reported. Mean emergency department (ED) visits and mean and median duration of hospitalizations are also reported. The DMP was associated with a significant decrease in mean annualized total cost including decreased CFC utilization and cost in most years in the overall group, and specifically in patients with severe haemophilia. Patients with mild and moderate haemophilia contributed little to overall programme expenditures. This specialty health care provider-administered DMP exemplifies the success of targeted interventions developed and implemented through a health care facility expert in the disease state to curb the cost of specialty pharmaceuticals in conditions when their expenditures represent a significant portion of total annual costs of care. © 2014 John Wiley & Sons Ltd.

  18. Confidence Intervals Make a Difference: Effects of Showing Confidence Intervals on Inferential Reasoning

    ERIC Educational Resources Information Center

    Hoekstra, Rink; Johnson, Addie; Kiers, Henk A. L.

    2012-01-01

    The use of confidence intervals (CIs) as an addition or as an alternative to null hypothesis significance testing (NHST) has been promoted as a means to make researchers more aware of the uncertainty that is inherent in statistical inference. Little is known, however, about whether presenting results via CIs affects how readers judge the…

  19. Using Asymptotic Results to Obtain a Confidence Interval for the Population Median

    ERIC Educational Resources Information Center

    Jamshidian, M.; Khatoonabadi, M.

    2007-01-01

    Almost all introductory and intermediate level statistics textbooks include the topic of confidence interval for the population mean. Almost all these texts introduce the median as a robust measure of central tendency. Only a few of these books, however, cover inference on the population median and in particular confidence interval for the median.…

  20. ScoreRel CI: An Excel Program for Computing Confidence Intervals for Commonly Used Score Reliability Coefficients

    ERIC Educational Resources Information Center

    Barnette, J. Jackson

    2005-01-01

An Excel program developed to assist researchers in the determination and presentation of confidence intervals around commonly used score reliability coefficients is described. The software includes programs to determine confidence intervals for Cronbach's alpha, Pearson r-based coefficients such as those used in test-retest and alternate forms…

  1. Confidence intervals from single observations in forest research

    Treesearch

    Harry T. Valentine; George M. Furnival; Timothy G. Gregoire

    1991-01-01

A procedure for constructing confidence intervals and testing hypotheses from a single trial or observation is reviewed. The procedure requires a prior, fixed estimate or guess of the outcome of an experiment or sampling. Two examples of applications are described: a confidence interval is constructed for the expected outcome of a systematic sampling of a forested tract...

  2. Practical no-gold-standard evaluation framework for quantitative imaging methods: application to lesion segmentation in positron emission tomography

    PubMed Central

    Jha, Abhinav K.; Mena, Esther; Caffo, Brian; Ashrafinia, Saeed; Rahmim, Arman; Frey, Eric; Subramaniam, Rathan M.

    2017-01-01

Recently, a class of no-gold-standard (NGS) techniques has been proposed to evaluate quantitative imaging methods using patient data. These techniques provide figures of merit (FoMs) quantifying the precision of the estimated quantitative value without requiring repeated measurements and without requiring a gold standard. However, applying these techniques to patient data presents several practical difficulties, including assessing the underlying assumptions, accounting for patient-sampling-related uncertainty, and assessing the reliability of the estimated FoMs. To address these issues, we propose statistical tests that provide confidence in the underlying assumptions and in the reliability of the estimated FoMs. Furthermore, the NGS technique is integrated within a bootstrap-based methodology to account for patient-sampling-related uncertainty. The developed NGS framework was applied to evaluate four methods for segmenting lesions from 18F-fluoro-2-deoxyglucose positron emission tomography images of patients with head-and-neck cancer on the task of precisely measuring the metabolic tumor volume. The NGS technique consistently predicted the same segmentation method as the most precise method. The proposed framework provided confidence in these results, even when gold-standard data were not available. The bootstrap-based methodology indicated improved performance of the NGS technique with larger numbers of patient studies, as was expected, and yielded consistent results as long as data from more than 80 lesions were available for the analysis. PMID:28331883

  3. Phylogenomics provides strong evidence for relationships of butterflies and moths.

    PubMed

    Kawahara, Akito Y; Breinholt, Jesse W

    2014-08-07

    Butterflies and moths constitute some of the most popular and charismatic insects. Lepidoptera include approximately 160 000 described species, many of which are important model organisms. Previous studies on the evolution of Lepidoptera did not confidently place butterflies, and many relationships among superfamilies in the megadiverse clade Ditrysia remain largely uncertain. We generated a molecular dataset with 46 taxa, combining 33 new transcriptomes with 13 available genomes, transcriptomes and expressed sequence tags (ESTs). Using HaMStR with a Lepidoptera-specific core-orthologue set of single copy loci, we identified 2696 genes for inclusion into the phylogenomic analysis. Nucleotides and amino acids of the all-gene, all-taxon dataset yielded nearly identical, well-supported trees. Monophyly of butterflies (Papilionoidea) was strongly supported, and the group included skippers (Hesperiidae) and the enigmatic butterfly-moths (Hedylidae). Butterflies were placed sister to the remaining obtectomeran Lepidoptera, and the latter was grouped with greater than or equal to 87% bootstrap support. Establishing confident relationships among the four most diverse macroheteroceran superfamilies was previously challenging, but we recovered 100% bootstrap support for the following relationships: ((Geometroidea, Noctuoidea), (Bombycoidea, Lasiocampoidea)). We present the first robust, transcriptome-based tree of Lepidoptera that strongly contradicts historical placement of butterflies, and provide an evolutionary framework for genomic, developmental and ecological studies on this diverse insect order. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  4. Depressive symptoms in nonresident african american fathers and involvement with their sons.

    PubMed

    Davis, R Neal; Caldwell, Cleopatra Howard; Clark, Sarah J; Davis, Matthew M

    2009-12-01

    Our objective was to determine whether paternal depressive symptoms were associated with less father involvement among African American fathers not living with their children (ie, nonresident fathers). We analyzed survey data for 345 fathers enrolled in a program for nonresident African American fathers and their preteen sons. Father involvement included measures of contact, closeness, monitoring, communication, and conflict. We used bivariate analyses and multivariate logistic regression analysis to examine associations between father involvement and depressive symptoms. Thirty-six percent of fathers reported moderate depressive symptoms, and 11% reported severe depressive symptoms. In bivariate analyses, depressive symptoms were associated with less contact, less closeness, low monitoring, and increased conflict. In multivariate analyses controlling for basic demographic features, fathers with moderate depressive symptoms were more likely to have less contact (adjusted odds ratio: 1.7 [95% confidence interval: 1.1-2.8]), less closeness (adjusted odds ratio: 2.1 [95% confidence interval: 1.3-3.5]), low monitoring (adjusted odds ratio: 2.7 [95% confidence interval: 1.4-5.2]), and high conflict (adjusted odds ratio: 2.1 [95% confidence interval: 1.2-3.6]). Fathers with severe depressive symptoms also were more likely to have less contact (adjusted odds ratio: 3.1 [95% confidence interval: 1.4-7.2]), less closeness (adjusted odds ratio: 2.6 [95% confidence interval: 1.2-5.7]), low monitoring (adjusted odds ratio: 2.8 [95% confidence interval: 1.1-7.1]), and high conflict (adjusted odds ratio: 2.6 [95% confidence interval: 1.1-5.9]). Paternal depressive symptoms may be an important, but modifiable, barrier for nonresident African American fathers willing to be more involved with their children.
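
    Adjusted odds ratios with Wald confidence intervals, as reported above, come from a multivariate logistic regression. A self-contained sketch on simulated data (a Newton-Raphson fit with a hand-rolled Wald interval; the variable names and effect sizes are hypothetical, not the study's data):

```python
import numpy as np

def logit_fit(X, y, n_iter=25):
    """Maximum-likelihood logistic regression via Newton-Raphson.

    Returns coefficient estimates and their Wald standard errors.
    """
    X = np.column_stack([np.ones(len(X)), X])       # prepend intercept
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p)
        H = X.T @ (X * W[:, None])                  # observed information
        beta += np.linalg.solve(H, X.T @ (y - p))   # Newton step
    cov = np.linalg.inv(H)
    return beta, np.sqrt(np.diag(cov))

rng = np.random.default_rng(0)
n = 2000
depress = rng.integers(0, 2, n)                     # hypothetical exposure indicator
age = rng.normal(0, 1, n)                           # hypothetical confounder (standardized)
logit_p = -1.0 + 0.7 * depress + 0.3 * age          # true log-odds used for simulation
y = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(float)

beta, se = logit_fit(np.column_stack([depress, age]), y)
or_ = np.exp(beta[1])                               # adjusted odds ratio for the exposure
lo, hi = np.exp(beta[1] - 1.96 * se[1]), np.exp(beta[1] + 1.96 * se[1])
```

With a true coefficient of 0.7 the recovered odds ratio should land near exp(0.7) ≈ 2, with the 95% interval bracketing it.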

  5. Ethnic variations in morbidity and mortality from lower respiratory tract infections: a retrospective cohort study.

    PubMed

    Simpson, Colin R; Steiner, Markus Fc; Cezard, Genevieve; Bansal, Narinder; Fischbacher, Colin; Douglas, Anne; Bhopal, Raj; Sheikh, Aziz

    2015-10-01

    There is evidence of substantial ethnic variations in asthma morbidity and the risk of hospitalisation, but the picture in relation to lower respiratory tract infections is unclear. We carried out an observational study to identify ethnic group differences for lower respiratory tract infections. A retrospective cohort study in Scotland of 4.65 million people on whom information was available from the 2001 census, followed from May 2001 to April 2010. Hospitalisations and deaths (any time following first hospitalisation) from lower respiratory tract infections were identified, and adjusted risk ratios and hazard ratios by ethnicity and sex were calculated. We multiplied ratios and confidence intervals by 100, so the reference Scottish White population's risk ratio and hazard ratio was 100. Among men, adjusted risk ratios for lower respiratory tract infection hospitalisation were lower in Other White British (80, 95% confidence interval 73-86) and Chinese (69, 95% confidence interval 56-84) populations and higher in Pakistani groups (152, 95% confidence interval 136-169). In women, results were mostly similar to those in men (e.g. Chinese 68, 95% confidence interval 56-82), although higher adjusted risk ratios were found among women of the Other South Asian group (145, 95% confidence interval 120-175). Survival (adjusted hazard ratio) following lower respiratory tract infection for Pakistani men (54, 95% confidence interval 39-74) and women (31, 95% confidence interval 18-53) was better than in the reference population. Substantial differences in the rates of lower respiratory tract infections amongst different ethnic groups in Scotland were found. Pakistani men and women had particularly high rates of lower respiratory tract infection hospitalisation. Research into the reasons behind these high rates in the Pakistani community is now required. © The Royal Society of Medicine.

  6. Risk factors of childhood asthma in children attending Lyari General Hospital.

    PubMed

    Kamran, Amber; Hanif, Shahina; Murtaza, Ghulam

    2015-06-01

    To determine the factors associated with asthma in children. The case-control study was conducted in the paediatrics clinic of Lyari General Hospital, Karachi, from May to October 2010. Children 1-15 years of age attending the clinic with asthma represented the cases, while the control group had children who were closely related (sibling or cousin) to the cases but did not have symptoms of the disease at the time. Data was collected through a proforma and analysed using SPSS 10. Of the total 346 subjects, 173 (50%) each comprised the two groups. According to univariable analysis, the risk factors were presence of at least one smoker at home (odds ratio: 3.6; 95% confidence interval: 2.3-5.8), residence in a kacha house (odds ratio: 16.2; 95% confidence interval: 3.8-69.5), living in a room without windows (odds ratio: 9.3; 95% confidence interval: 2.1-40.9) and living in a house without adequate sunlight (odds ratio: 1.6; 95% confidence interval: 1.2-2.4). Using multivariable modelling, family history of asthma (odds ratio: 5.9; 95% confidence interval: 3.1-11.6), presence of at least one smoker at home (odds ratio: 4.1; 95% confidence interval: 2.3-7.2), living in a room without a window (odds ratio: 5.5; 95% confidence interval: 1.15-26.3) and living in an area without adequate sunlight (odds ratio: 2.2; 95% confidence interval: 1.13-4.31) were found to be independent risk factors of asthma in children, adjusting for age, gender and history of weaning. Family history of asthma, living with at least one smoker at home, a room without windows and living in an area without sunlight were major risk factors of childhood asthma.

  7. Standardized likelihood ratio test for comparing several log-normal means and confidence interval for the common mean.

    PubMed

    Krishnamoorthy, K; Oral, Evrim

    2017-12-01

    Standardized likelihood ratio test (SLRT) for testing the equality of means of several log-normal distributions is proposed. The properties of the SLRT and an available modified likelihood ratio test (MLRT) and a generalized variable (GV) test are evaluated by Monte Carlo simulation and compared. Evaluation studies indicate that the SLRT is accurate even for small samples, whereas the MLRT could be quite liberal for some parameter values, and the GV test is in general conservative and less powerful than the SLRT. Furthermore, a closed-form approximate confidence interval for the common mean of several log-normal distributions is developed using the method of variance estimate recovery, and compared with the generalized confidence interval with respect to coverage probabilities and precision. Simulation studies indicate that the proposed confidence interval is accurate and better than the generalized confidence interval in terms of coverage probabilities. The methods are illustrated using two examples.
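
    As a simpler relative of the interval methods compared above, Cox's classical approximation for a single log-normal mean works on the log scale; note this is not the paper's SLRT or MOVER procedure, just an illustration of why log-normal means need special handling:

```python
import numpy as np
from scipy import stats

def lognormal_mean_ci(x, alpha=0.05):
    """Cox's approximate CI for the mean of a log-normal sample.

    The log-normal mean is exp(mu + sigma^2/2); Cox's method builds a normal
    interval for (ybar + s^2/2) on the log scale, then exponentiates.
    """
    y = np.log(x)                        # work on the log scale
    n, ybar, s2 = len(y), y.mean(), y.var(ddof=1)
    z = stats.norm.ppf(1 - alpha / 2)
    centre = ybar + s2 / 2               # log of the estimated log-normal mean
    half = z * np.sqrt(s2 / n + s2**2 / (2 * (n - 1)))
    return np.exp(centre), np.exp(centre - half), np.exp(centre + half)

rng = np.random.default_rng(0)
x = rng.lognormal(mean=1.0, sigma=0.5, size=200)   # true mean = exp(1.125) ≈ 3.08
est, lo, hi = lognormal_mean_ci(x)
```

The interval is asymmetric on the original scale, which a naive normal interval on the raw data would miss.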

  8. Calculation of Confidence Intervals for the Maximum Magnitude of Earthquakes in Different Seismotectonic Zones of Iran

    NASA Astrophysics Data System (ADS)

    Salamat, Mona; Zare, Mehdi; Holschneider, Matthias; Zöller, Gert

    2017-03-01

    The problem of estimating the maximum possible earthquake magnitude m_max has attracted growing attention in recent years. Due to sparse data, the role of uncertainties becomes crucial. In this work, we determine the uncertainties related to the maximum magnitude in terms of confidence intervals. Using an earthquake catalog of Iran, m_max is estimated for different predefined levels of confidence in six seismotectonic zones. Assuming the doubly truncated Gutenberg-Richter distribution as a statistical model for earthquake magnitudes, confidence intervals for the maximum possible magnitude of earthquakes are calculated in each zone. While the lower limit of the confidence interval is the magnitude of the maximum observed event, the upper limit is calculated from the catalog and the statistical model. For this aim, we use the original catalog, to which no declustering method was applied, as well as a declustered version of the catalog. Based on the study by Holschneider et al. (Bull Seismol Soc Am 101(4):1649-1659, 2011), the confidence interval for m_max is frequently unbounded, especially if high levels of confidence are required. In this case, no information is gained from the data. Therefore, we elaborate for which settings finite confidence intervals are obtained. The six seismotectonic zones considered are Alborz, Azerbaijan, Zagros, Makran, Kopet Dagh, and Central Iran. Although calculations of the confidence interval in the Central Iran and Zagros seismotectonic zones are relatively acceptable for meaningful levels of confidence, results in Kopet Dagh, Alborz, Azerbaijan and Makran are less promising. The results indicate that estimating m_max from an earthquake catalog alone at reasonable levels of confidence is almost impossible.
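
    The upper confidence limit described above can be sketched for a known b-value: a candidate m_max is rejected when it makes the observed maximum magnitude improbably small, and the interval is unbounded exactly when even m_max → ∞ is not rejected. A simplified illustration on a simulated catalog (the real analysis also handles unknown b and declustering):

```python
import numpy as np
from scipy.optimize import brentq

def mmax_upper_limit(mags, m0, b, conf=0.90):
    """One-sided upper confidence limit for m_max under a doubly truncated
    Gutenberg-Richter law with known b-value. Returns np.inf when the
    interval is unbounded at the requested confidence level."""
    n, m_obs = len(mags), float(np.max(mags))
    beta = b * np.log(10)
    log_alpha = np.log(1 - conf)
    num = -np.expm1(-beta * (m_obs - m0))        # 1 - 10**(-b*(m_obs - m0))
    if n * np.log(num) > log_alpha:              # limit as m_max -> infinity
        return np.inf                            # even infinity is not rejected
    def f(mmax):                                 # log P(max <= m_obs | mmax) - log(alpha)
        F = num / -np.expm1(-beta * (mmax - m0))
        return n * np.log(F) - log_alpha
    return brentq(f, m_obs + 1e-9, m_obs + 30.0)

# simulate a catalog from a truncated GR law (true m_max = 8.0) via inverse CDF
rng = np.random.default_rng(0)
m0, b, mmax_true, n = 4.0, 1.0, 8.0, 50_000
beta = b * np.log(10)
u = rng.random(n)
mags = m0 - np.log1p(u * np.expm1(-beta * (mmax_true - m0))) / beta
upper = mmax_upper_limit(mags, m0, b, conf=0.90)
```

With a small catalog the same function typically returns np.inf, which is the "no information is gained" case the abstract describes; a finite bound needs many events or low confidence.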

  9. Risk factors for low birth weight according to the multiple logistic regression model. A retrospective cohort study in José María Morelos municipality, Quintana Roo, Mexico.

    PubMed

    Franco Monsreal, José; Tun Cobos, Miriam Del Ruby; Hernández Gómez, José Ricardo; Serralta Peraza, Lidia Esther Del Socorro

    2018-01-17

    Low birth weight has long been an enigma for science. There has been much research on its causes and effects. Low birth weight is an indicator that predicts the probability of a child surviving. In fact, there is an exponential relationship between weight deficit, gestational age, and perinatal mortality. Multiple logistic regression is one of the most expressive and versatile statistical instruments available for the analysis of data in clinical and epidemiological settings, as well as in public health. To assess in a multivariate fashion the importance of 17 independent variables in low birth weight (dependent variable) of children born in the Mayan municipality of José María Morelos, Quintana Roo, Mexico. An analytical, observational, epidemiological cohort study with retrospective data collection. Births that met the inclusion criteria occurred in the "Hospital Integral Jose Maria Morelos" of the Ministry of Health, corresponding to the Mayan municipality of Jose Maria Morelos, during the period from August 1, 2014 to July 31, 2015. The total number of newborns recorded was 1,147; 84 of these (7.32%) had low birth weight. To estimate the independent association between the explanatory variables (potential risk factors) and the response variable, a multiple logistic regression analysis was performed using the IBM SPSS Statistics 22 software.
    In ascending numerical order, odds ratio values > 1 indicated the positive contribution of explanatory variables or possible risk factors: "unmarried" marital status (1.076, 95% confidence interval: 0.550 to 2.104); age at menarche ≤ 12 years (1.08, 95% confidence interval: 0.64 to 1.84); history of abortion(s) (1.14, 95% confidence interval: 0.44 to 2.93); maternal weight < 50 kg (1.51, 95% confidence interval: 0.83 to 2.76); number of prenatal consultations ≤ 5 (1.86, 95% confidence interval: 0.94 to 3.66); maternal age ≥ 36 years (3.5, 95% confidence interval: 0.40 to 30.47); maternal age ≤ 19 years (3.59, 95% confidence interval: 0.43 to 29.87); number of deliveries = 1 (3.86, 95% confidence interval: 0.33 to 44.85); personal pathological history (4.78, 95% confidence interval: 2.16 to 10.59); pathological obstetric history (5.01, 95% confidence interval: 1.66 to 15.18); maternal height < 150 cm (5.16, 95% confidence interval: 3.08 to 8.65); number of births ≥ 5 (5.99, 95% confidence interval: 0.51 to 69.99); and smoking (15.63, 95% confidence interval: 1.07 to 227.97). Four of the independent variables (personal pathological history, obstetric pathological history, maternal height < 150 cm and smoking) showed a significant positive contribution, so they can be considered clear risk factors for low birth weight. Use of the logistic regression model in the Mayan municipality of José María Morelos will allow the probability of low birth weight to be estimated for each pregnant woman in the future, which will be useful for the health authorities of the region.

  10. Confidence Intervals for Proportion Estimates in Complex Samples. Research Report. ETS RR-06-21

    ERIC Educational Resources Information Center

    Oranje, Andreas

    2006-01-01

    Confidence intervals are an important tool to indicate uncertainty of estimates and to give an idea of probable values of an estimate if a different sample from the population was drawn or a different sample of measures was used. Standard symmetric confidence intervals for proportion estimates based on a normal approximation can yield bounds…
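
    The failure mode mentioned above (symmetric normal-approximation bounds escaping [0, 1] for extreme proportions) is easy to demonstrate against the Wilson score interval, one standard alternative:

```python
import numpy as np
from scipy import stats

def wald_ci(k, n, alpha=0.05):
    """Normal-approximation (Wald) interval; can overshoot [0, 1]."""
    z = stats.norm.ppf(1 - alpha / 2)
    p = k / n
    half = z * np.sqrt(p * (1 - p) / n)
    return p - half, p + half

def wilson_ci(k, n, alpha=0.05):
    """Wilson score interval; asymmetric and always inside [0, 1]."""
    z = stats.norm.ppf(1 - alpha / 2)
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# with only 2 successes in 40 trials the Wald lower bound goes negative
lo_wald, hi_wald = wald_ci(2, 40)
lo_wilson, hi_wilson = wilson_ci(2, 40)
```

Complex-sample designs add a design effect on top of this, but the boundary problem already appears in the simple binomial case.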

  11. A Comparison of Methods for Estimating Confidence Intervals for Omega-Squared Effect Size

    ERIC Educational Resources Information Center

    Finch, W. Holmes; French, Brian F.

    2012-01-01

    Effect size use has been increasing in the past decade in many research areas. Confidence intervals associated with effect sizes are encouraged to be reported. Prior work has investigated the performance of confidence interval estimation with Cohen's d. This study extends this line of work to the analysis of variance case with more than two…
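
    One common construction for such intervals inverts the noncentral F distribution for the noncentrality parameter and then maps it to the effect-size scale. A sketch; the λ/(λ+N) conversion is a large-sample approximation, and the numbers are made up:

```python
from scipy import stats
from scipy.optimize import brentq

def omega_sq_ci(F_obs, df1, df2, N, alpha=0.05):
    """Approximate CI for an omega-squared-type effect size: invert the
    noncentral F CDF for lambda, then map lambda -> lambda / (lambda + N)."""
    def nc_for(prob):
        # find lambda such that P(F <= F_obs | lambda) = prob
        f = lambda nc: stats.ncf.cdf(F_obs, df1, df2, nc) - prob
        if f(1e-9) < 0:        # even lambda ~ 0 puts too little mass below F_obs
            return 0.0
        return brentq(f, 1e-9, 1000.0)
    lam_lo = nc_for(1 - alpha / 2)     # CDF decreases in lambda
    lam_hi = nc_for(alpha / 2)
    return lam_lo / (lam_lo + N), lam_hi / (lam_hi + N)

# hypothetical one-way ANOVA, 3 groups of 20: F(2, 57) = 6.0
lo, hi = omega_sq_ci(6.0, 2, 57, N=60)
```

The interval is asymmetric by construction, which is one reason simulation studies compare several competing methods.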

  12. Patient, surgeon, and hospital disparities associated with benign hysterectomy approach and perioperative complications.

    PubMed

    Mehta, Ambar; Xu, Tim; Hutfless, Susan; Makary, Martin A; Sinno, Abdulrahman K; Tanner, Edward J; Stone, Rebecca L; Wang, Karen; Fader, Amanda N

    2017-05-01

    Hysterectomy is among the most common major surgical procedures performed in women. Approximately 450,000 hysterectomy procedures are performed each year in the United States for benign indications. However, little is known regarding contemporary US hysterectomy trends for women with benign disease with respect to operative technique and perioperative complications, and the association between these 2 factors with patient, surgeon, and hospital characteristics. We sought to describe contemporary hysterectomy trends and explore associations between patient, surgeon, and hospital characteristics with surgical approach and perioperative complications. Hysterectomies performed for benign indications by general gynecologists from July 2012 through September 2014 were analyzed in the all-payer Maryland Health Services Cost Review Commission database. We excluded hysterectomies performed by gynecologic oncologists, reproductive endocrinologists, and female pelvic medicine and reconstructive surgeons. We included both open hysterectomies and those performed by minimally invasive surgery, which included vaginal hysterectomies. Perioperative complications were defined using the Agency for Healthcare Research and Quality patient safety indicators. Surgeon hysterectomy volume during the 2-year study period was analyzed (0-5 cases annually = very low, 6-10 = low, 11-20 = medium, and ≥21 = high). We utilized logistic regression and negative binomial regression to identify patient, surgeon, and hospital characteristics associated with minimally invasive surgery utilization and perioperative complications, respectively. A total of 5660 hospitalizations were identified during the study period. Most patients (61.5%) had an open hysterectomy; 38.5% underwent a minimally invasive surgery procedure (25.1% robotic, 46.6% laparoscopic, 28.3% vaginal). Most surgeons (68.2%) were very low- or low-volume surgeons. 
Factors associated with a lower likelihood of undergoing minimally invasive surgery included older patient age (reference 45-64 years; 20-44 years: adjusted odds ratio, 1.16; 95% confidence interval, 1.05-1.28), black race (reference white; adjusted odds ratio, 0.70; 95% confidence interval, 0.63-0.78), Hispanic ethnicity (adjusted odds ratio, 0.62; 95% confidence interval, 0.48-0.80), smaller hospital (reference large; small: adjusted odds ratio, 0.26; 95% confidence interval, 0.15-0.45; medium: adjusted odds ratio, 0.87; 95% confidence interval, 0.79-0.96), medium hospital hysterectomy volume (reference ≥200 hysterectomies; 100-200: adjusted odds ratio, 0.78; 95% confidence interval, 0.71-0.87), and medium vs high surgeon volume (reference high; medium: adjusted odds ratio, 0.87; 95% confidence interval, 0.78-0.97). Complications occurred in 25.8% of open and 8.2% of minimally invasive hysterectomies (P < .0001). Minimally invasive hysterectomy (adjusted odds ratio, 0.22; 95% confidence interval, 0.17-0.27) and large hysterectomy volume hospitals (reference ≥200 hysterectomies; 1-100: adjusted odds ratio, 2.26; 95% confidence interval, 1.60-3.20; 101-200: adjusted odds ratio, 1.63; 95% confidence interval, 1.23-2.16) were associated with fewer complications, while patient payer, including Medicare (reference private; adjusted odds ratio, 1.86; 95% confidence interval, 1.33-2.61), Medicaid (adjusted odds ratio, 1.63; 95% confidence interval, 1.30-2.04), and self-pay status (adjusted odds ratio, 2.41; 95% confidence interval, 1.40-4.12), and very-low and low surgeon hysterectomy volume (reference ≥21 cases; 1-5 cases: adjusted odds ratio, 1.73; 95% confidence interval, 1.22-2.47; 6-10 cases: adjusted odds ratio, 1.60; 95% confidence interval, 1.11-2.23) were associated with perioperative complications. Use of minimally invasive hysterectomy for benign indications remains variable, with most patients undergoing open, more morbid procedures. 
Older and black patients and smaller hospitals are associated with open hysterectomy. Patient race and payer status, hysterectomy approach, and surgeon volume were associated with perioperative complications. Hysterectomies performed for benign indications by high-volume surgeons or by minimally invasive techniques may represent an opportunity to reduce preventable harm. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Benchmark dose analysis via nonparametric regression modeling

    PubMed Central

    Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen

    2013-01-01

    Estimation of benchmark doses (BMDs) in quantitative risk assessment traditionally is based upon parametric dose-response modeling. It is a well-known concern, however, that if the chosen parametric model is uncertain and/or misspecified, inaccurate and possibly unsafe low-dose inferences can result. We describe a nonparametric approach for estimating BMDs with quantal-response data based on an isotonic regression method, and also study use of corresponding, nonparametric, bootstrap-based confidence limits for the BMD. We explore the confidence limits’ small-sample properties via a simulation study, and illustrate the calculations with an example from cancer risk assessment. It is seen that this nonparametric approach can provide a useful alternative for BMD estimation when faced with the problem of parametric model uncertainty. PMID:23683057
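
    The approach above (an isotonic fit plus bootstrap limits) can be sketched with a hand-rolled pool-adjacent-violators fit and a parametric bootstrap; the doses, counts, and 10% benchmark response are all illustrative, and this is not the authors' exact procedure:

```python
import numpy as np

def pava(y, w):
    """Pool-adjacent-violators algorithm: weighted non-decreasing fit."""
    vals, wts, cnts = [], [], []
    for yi, wi in zip(y, w):
        vals.append(float(yi)); wts.append(float(wi)); cnts.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:       # pool violating blocks
            y1, w1, c1 = vals.pop(), wts.pop(), cnts.pop()
            y0, w0, c0 = vals.pop(), wts.pop(), cnts.pop()
            vals.append((y0 * w0 + y1 * w1) / (w0 + w1))
            wts.append(w0 + w1); cnts.append(c0 + c1)
    return np.repeat(vals, cnts)

def bmd(doses, p_fit, bmr=0.10):
    """Smallest dose whose fitted extra risk reaches the benchmark response."""
    target = p_fit[0] + bmr * (1 - p_fit[0])
    return np.interp(target, p_fit, doses)

doses = np.array([0.0, 10.0, 50.0, 100.0, 200.0])
n = np.full(5, 50)                       # hypothetical animals per dose group
k = np.array([2, 3, 10, 20, 35])         # hypothetical affected counts
p_hat = pava(k / n, n)
point = bmd(doses, p_hat)

# parametric bootstrap: redraw counts from the fitted risks, refit, recompute BMD
rng = np.random.default_rng(0)
boots = [bmd(doses, pava(rng.binomial(n, p_hat) / n, n)) for _ in range(1000)]
lo, hi = np.quantile(boots, [0.025, 0.975])
```

The lower percentile limit plays the role of a BMDL, the quantity risk assessors actually carry forward.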

  14. Bootstrapping Least Squares Estimates in Biochemical Reaction Networks

    PubMed Central

    Linder, Daniel F.

    2015-01-01

    The paper proposes new computational methods of computing confidence bounds for the least squares estimates (LSEs) of rate constants in mass-action biochemical reaction network and stochastic epidemic models. Such LSEs are obtained by fitting the set of deterministic ordinary differential equations (ODEs), corresponding to the large-volume limit of a reaction network, to the network's partially observed trajectory, treated as a continuous-time, pure jump Markov process. In the large-volume limit the LSEs are asymptotically Gaussian, but their limiting covariance structure is complicated, since it is described by a set of nonlinear ODEs which are often ill-conditioned and numerically unstable. The current paper considers two bootstrap Monte Carlo procedures, based on the diffusion and linear noise approximations for pure jump processes, which allow one to avoid solving the limiting covariance ODEs. The results are illustrated with both in-silico and real data examples from the LINE 1 gene retrotranscription model and compared with those obtained using other methods. PMID:25898769
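
    A stripped-down analogue of the idea, fitting the deterministic ODE limit by least squares and bootstrapping to get confidence bounds, for a single first-order reaction A → B (a residual bootstrap rather than the paper's diffusion/linear-noise schemes; all numbers are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, k):
    return 10.0 * np.exp(-k * t)        # A(t) for A -> B with A(0) = 10

rng = np.random.default_rng(0)
t = np.linspace(0, 5, 30)
y = model(t, 0.8) + rng.normal(0, 0.3, t.size)      # noisy observed trajectory

k_hat, _ = curve_fit(model, t, y, p0=[0.5])          # least squares estimate
resid = y - model(t, k_hat[0])

# residual bootstrap: perturb the fitted trajectory and refit the rate constant
boots = []
for _ in range(1000):
    y_star = model(t, k_hat[0]) + rng.choice(resid, resid.size, replace=True)
    k_star, _ = curve_fit(model, t, y_star, p0=[k_hat[0]])
    boots.append(k_star[0])
lo, hi = np.quantile(boots, [0.025, 0.975])
```

This sidesteps any analytic covariance calculation, which is the same motivation as in the paper, where the covariance ODEs are ill-conditioned.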

  15. Opioid Analgesics and Adverse Outcomes among Hemodialysis Patients.

    PubMed

    Ishida, Julie H; McCulloch, Charles E; Steinman, Michael A; Grimes, Barbara A; Johansen, Kirsten L

    2018-05-07

    Patients on hemodialysis frequently experience pain and may be particularly vulnerable to opioid-related complications. However, data evaluating the risks of opioid use in patients on hemodialysis are limited. Using the US Renal Data System, we conducted a cohort study evaluating the association between opioid use (modeled as a time-varying exposure and expressed in standardized oral morphine equivalents) and time to first emergency room visit or hospitalization for altered mental status, fall, and fracture among 140,899 Medicare-covered adults receiving hemodialysis in 2011. We evaluated risk according to average daily total opioid dose (>60 mg, ≤60 mg, and per 60-mg dose increment) and specific agents (per 60-mg dose increment). The median age was 61 years old, 52% were men, and 50% were white. Sixty-four percent received opioids, and 17% had an episode of altered mental status (15,658 events), fall (7646 events), or fracture (4151 events) in 2011. Opioid use was associated with risk for all outcomes in a dose-dependent manner: altered mental status (lower dose: hazard ratio, 1.28; 95% confidence interval, 1.23 to 1.34; higher dose: hazard ratio, 1.67; 95% confidence interval, 1.56 to 1.78; hazard ratio, 1.29 per 60 mg; 95% confidence interval, 1.26 to 1.33), fall (lower dose: hazard ratio, 1.28; 95% confidence interval, 1.21 to 1.36; higher dose: hazard ratio, 1.45; 95% confidence interval, 1.31 to 1.61; hazard ratio, 1.04 per 60 mg; 95% confidence interval, 1.03 to 1.05), and fracture (lower dose: hazard ratio, 1.44; 95% confidence interval, 1.33 to 1.56; higher dose: hazard ratio, 1.65; 95% confidence interval, 1.44 to 1.89; hazard ratio, 1.04 per 60 mg; 95% confidence interval, 1.04 to 1.05). All agents were associated with a significantly higher hazard of altered mental status, and several agents were associated with a significantly higher hazard of fall and fracture. 
Opioids were associated with adverse outcomes in patients on hemodialysis, and this risk was present even at lower dosing and for agents that guidelines have recommended for use. Copyright © 2018 by the American Society of Nephrology.

  16. Pregnancy outcome in joint hypermobility syndrome and Ehlers-Danlos syndrome.

    PubMed

    Sundelin, Heléne E K; Stephansson, Olof; Johansson, Kari; Ludvigsson, Jonas F

    2017-01-01

    An increased risk of preterm birth in women with joint hypermobility syndrome or Ehlers-Danlos syndrome is suspected. In this nationwide cohort study from 1997 through 2011, women with either joint hypermobility syndrome or Ehlers-Danlos syndrome or both disorders were identified through the Swedish Patient Register, and linked to the Medical Birth Register. Thereby, 314 singleton births to women with joint hypermobility syndrome/Ehlers-Danlos syndrome before delivery were identified. These births were compared with 1 247 864 singleton births to women without a diagnosis of joint hypermobility syndrome/Ehlers-Danlos syndrome. We used logistic regression, adjusted for maternal age, smoking, parity, and year of birth, to calculate adjusted odds ratios for adverse pregnancy outcomes. Maternal joint hypermobility syndrome/Ehlers-Danlos syndrome was not associated with any of our outcomes: preterm birth (adjusted odds ratio = 0.6, 95% confidence interval 0.3-1.2), preterm premature rupture of membranes (adjusted odds ratio = 0.8; 95% confidence interval 0.3-2.2), cesarean section (adjusted odds ratio = 0.9, 95% confidence interval 0.7-1.2), stillbirth (adjusted odds ratio = 1.1, 95% confidence interval 0.2-7.9), low Apgar score (adjusted odds ratio = 1.6, 95% confidence interval 0.7-3.6), small for gestational age (adjusted odds ratio = 0.9, 95% confidence interval 0.4-1.8) or large for gestational age (adjusted odds ratio = 1.2, 95% confidence interval 0.6-2.1). Examining only women with Ehlers-Danlos syndrome (n = 62), we found a higher risk of induction of labor (adjusted odds ratio = 2.6; 95% confidence interval 1.4-4.6) and amniotomy (adjusted odds ratio = 3.8; 95% confidence interval 2.0-7.1). No excess risks for adverse pregnancy outcome were seen in joint hypermobility syndrome. Women with joint hypermobility syndrome/Ehlers-Danlos syndrome do not seem to be at increased risk of adverse pregnancy outcome. 
© 2016 Nordic Federation of Societies of Obstetrics and Gynecology.

  17. Comprehension of confidence intervals - development and piloting of patient information materials for people with multiple sclerosis: qualitative study and pilot randomised controlled trial.

    PubMed

    Rahn, Anne C; Backhus, Imke; Fuest, Franz; Riemann-Lorenz, Karin; Köpke, Sascha; van de Roemer, Adrianus; Mühlhauser, Ingrid; Heesen, Christoph

    2016-09-20

    Presentation of confidence intervals alongside information about treatment effects can support informed treatment choices in people with multiple sclerosis. We aimed to develop and pilot-test different written patient information materials explaining confidence intervals in people with relapsing-remitting multiple sclerosis. Further, a questionnaire on comprehension of confidence intervals was developed and piloted. We developed different patient information versions aiming to explain confidence intervals. We used an illustrative example to test three different approaches: (1) short version, (2) "average weight" version and (3) "worm prophylaxis" version. Interviews were conducted using think-aloud and teach-back approaches to test feasibility and analysed using qualitative content analysis. To assess comprehension of confidence intervals, a six-item multiple choice questionnaire was developed and tested in a pilot randomised controlled trial using the online survey software UNIPARK. Here, the average weight version (intervention group) was tested against a standard patient information version on confidence intervals (control group). People with multiple sclerosis were invited to take part using existing mailing-lists of people with multiple sclerosis in Germany and were randomised using the UNIPARK algorithm. Participants were blinded towards group allocation. The primary endpoint was comprehension of confidence intervals, assessed with the six-item multiple choice questionnaire, with six points representing perfect knowledge. Feasibility of the patient information versions was tested with 16 people with multiple sclerosis. For the pilot randomised controlled trial, 64 people with multiple sclerosis were randomised (intervention group: n = 36; control group: n = 28). More questions were answered correctly in the intervention group than in the control group (mean 4.8 vs 3.8, mean difference 1.1 (95% CI 0.42-1.69), p = 0.002).
The questionnaire's internal consistency was moderate (Cronbach's alpha = 0.56). The pilot-phase shows promising results concerning acceptability and feasibility. Pilot randomised controlled trial results indicate that the patient information is well understood and that knowledge gain on confidence intervals can be assessed with a set of six questions. German Clinical Trials Register: DRKS00008561 . Registered 8th of June 2015.
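
    Cronbach's alpha, reported above as the questionnaire's internal consistency, is computed from item and total-score variances. A minimal sketch on simulated item scores (the trait-plus-noise model is hypothetical):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()     # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of the total score
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
ability = rng.normal(0, 1, 300)                         # shared latent trait
items = ability[:, None] + rng.normal(0, 1.3, (300, 6)) # 6 noisy indicators
alpha = cronbach_alpha(items)
```

Values near 0.56, as in the study, usually signal a short scale or heterogeneous items rather than an unusable instrument.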

  18. Ethnic Differences in Incidence and Outcomes of Childhood Nephrotic Syndrome.

    PubMed

    Banh, Tonny H M; Hussain-Shamsy, Neesha; Patel, Viral; Vasilevska-Ristovska, Jovanka; Borges, Karlota; Sibbald, Cathryn; Lipszyc, Deborah; Brooke, Josefina; Geary, Denis; Langlois, Valerie; Reddon, Michele; Pearl, Rachel; Levin, Leo; Piekut, Monica; Licht, Christoph P B; Radhakrishnan, Seetha; Aitken-Menezes, Kimberly; Harvey, Elizabeth; Hebert, Diane; Piscione, Tino D; Parekh, Rulan S

    2016-10-07

    Ethnic differences in outcomes among children with nephrotic syndrome are unknown. We conducted a longitudinal study at a single regional pediatric center comparing ethnic differences in incidence from 2001 to 2011 census data and longitudinal outcomes, including relapse rates, time to first relapse, frequently relapsing disease, and use of cyclophosphamide. Among 711 children, 24% were European, 33% were South Asian, 10% were East/Southeast Asian, and 33% were of other origins. Over 10 years, the overall incidence increased from 1.99/100,000 to 4.71/100,000 among children ages 1-18 years old. In 2011, South Asians had a higher incidence rate ratio of 6.61 (95% confidence interval, 3.16 to 15.1) compared with Europeans. East/Southeast Asians had a similar incidence rate ratio (0.76; 95% confidence interval, 0.13 to 2.94) to Europeans. We determined outcomes in 455 children from the three largest ethnic groups with steroid-sensitive disease over a median of 4 years. South Asian and East/Southeast Asian children had significantly lower odds of frequently relapsing disease at 12 months (South Asian: adjusted odds ratio, 0.55; 95% confidence interval, 0.39 to 0.77; East/Southeast Asian: adjusted odds ratio, 0.42; 95% confidence interval, 0.34 to 0.51), fewer subsequent relapses (South Asian: adjusted odds ratio, 0.64; 95% confidence interval, 0.50 to 0.81; East/Southeast Asian: adjusted odds ratio, 0.47; 95% confidence interval, 0.24 to 0.91), lower risk of a first relapse (South Asian: adjusted hazard ratio, 0.74; 95% confidence interval, 0.67 to 0.83; East/Southeast Asian: adjusted hazard ratio, 0.65; 95% confidence interval, 0.63 to 0.68), and lower use of cyclophosphamide (South Asian: adjusted hazard ratio, 0.82; 95% confidence interval, 0.53 to 1.28; East/Southeast Asian: adjusted hazard ratio, 0.54; 95% confidence interval, 0.41 to 0.71) compared with European children.
Despite the higher incidence among South Asians, South and East/Southeast Asian children have significantly less complicated clinical outcomes compared with Europeans. Copyright © 2016 by the American Society of Nephrology.

  19. Confidence intervals and sample size calculations for the standardized mean difference effect size between two normal populations under heteroscedasticity.

    PubMed

    Shieh, G

    2013-12-01

    The use of effect sizes and associated confidence intervals in all empirical research has been strongly emphasized by journal publication guidelines. To help advance theory and practice in the social sciences, this article describes an improved procedure for constructing confidence intervals of the standardized mean difference effect size between two independent normal populations with unknown and possibly unequal variances. The presented approach has advantages over the existing formula in both theoretical justification and computational simplicity. In addition, simulation results show that the suggested one- and two-sided confidence intervals are more accurate in achieving the nominal coverage probability. The proposed estimation method provides a feasible alternative to the most commonly used measure of Cohen's d and the corresponding interval procedure when the assumption of homogeneous variances is not tenable. To further improve the potential applicability of the suggested methodology, the sample size procedures for precise interval estimation of the standardized mean difference are also delineated. The desired precision of a confidence interval is assessed with respect to the control of expected width and to the assurance probability of interval width within a designated value. Supplementary computer programs are developed to aid in the usefulness and implementation of the introduced techniques.
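
    For contrast with the improved procedure described above, the textbook large-sample interval for Cohen's d uses a delta-method standard error and assumes homogeneous variances; a sketch on simulated data:

```python
import numpy as np
from scipy import stats

def cohens_d_ci(x1, x2, alpha=0.05):
    """Large-sample approximate CI for Cohen's d (pooled-variance version)."""
    n1, n2 = len(x1), len(x2)
    sp = np.sqrt(((n1 - 1) * np.var(x1, ddof=1)
                  + (n2 - 1) * np.var(x2, ddof=1)) / (n1 + n2 - 2))
    d = (np.mean(x1) - np.mean(x2)) / sp
    # classic approximate variance of d (Hedges & Olkin style)
    se = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    z = stats.norm.ppf(1 - alpha / 2)
    return d, d - z * se, d + z * se

rng = np.random.default_rng(0)
x1 = rng.normal(0.5, 1.0, 80)     # true standardized difference = 0.5
x2 = rng.normal(0.0, 1.0, 80)
d, lo, hi = cohens_d_ci(x1, x2)
```

When the two variances differ, this pooled construction is exactly the case the paper argues against, which motivates its heteroscedastic procedure.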

  20. Exact Scheffé-type confidence intervals for output from groundwater flow models: 1. Use of hydrogeologic information

    USGS Publications Warehouse

    Cooley, Richard L.

    1993-01-01

    A new method is developed to efficiently compute exact Scheffé-type confidence intervals for output (or other function of parameters) g(β) derived from a groundwater flow model. The method is general in that parameter uncertainty can be specified by any statistical distribution having a log probability density function (log pdf) that can be expanded in a Taylor series. However, for this study parameter uncertainty is specified by a statistical multivariate beta distribution that incorporates hydrogeologic information in the form of the investigator's best estimates of parameters and a grouping of random variables representing possible parameter values so that each group is defined by maximum and minimum bounds and an ordering according to increasing value. The new method forms the confidence intervals from maximum and minimum limits of g(β) on a contour of a linear combination of (1) the quadratic form for the parameters used by Cooley and Vecchia (1987) and (2) the log pdf for the multivariate beta distribution. Three example problems are used to compare characteristics of the confidence intervals for hydraulic head obtained using different weights for the linear combination. Different weights generally produced similar confidence intervals, whereas the method of Cooley and Vecchia (1987) often produced much larger confidence intervals.

  1. A confidence interval analysis of sampling effort, sequencing depth, and taxonomic resolution of fungal community ecology in the era of high-throughput sequencing.

    PubMed

    Oono, Ryoko

    2017-01-01

    High-throughput sequencing technology has helped microbial community ecologists explore ecological and evolutionary patterns at unprecedented scales. The benefits of a large sample size still typically outweigh those of greater sequencing depths per sample for accurate estimations of ecological inferences. However, excluding or not sequencing rare taxa may mislead the answers to the questions 'how and why are communities different?' This study evaluates the confidence intervals of ecological inferences from high-throughput sequencing data of foliar fungal endophytes as case studies through a range of sampling efforts, sequencing depths, and taxonomic resolutions to understand how technical and analytical practices may affect our interpretations. Increasing sampling size reliably decreased confidence intervals across multiple community comparisons. However, the effects of sequencing depths on confidence intervals depended on how rare taxa influenced the dissimilarity estimates among communities and did not significantly decrease confidence intervals for all community comparisons. A comparison of simulated communities under random drift suggests that sequencing depths are important in estimating dissimilarities between microbial communities under neutral selective processes. Confidence interval analyses reveal important biases as well as biological trends in microbial community studies that otherwise may be ignored when communities are only compared for statistically significant differences.
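    A percentile bootstrap over samples is one standard way to attach confidence intervals to a community-dissimilarity estimate of this kind. The sketch below is a hypothetical Python illustration, not the study's pipeline: the abundance matrices are simulated Poisson counts, and Bray-Curtis dissimilarity stands in for whichever dissimilarity measure a given study uses.

```python
import numpy as np

rng = np.random.default_rng(0)

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance vectors."""
    return np.abs(a - b).sum() / (a + b).sum()

def bootstrap_ci(comm1, comm2, n_boot=2000, alpha=0.05):
    """Percentile bootstrap CI for the dissimilarity between two
    communities (rows = samples, columns = taxa); samples, not taxa,
    are resampled with replacement."""
    stats = np.empty(n_boot)
    n1, n2 = len(comm1), len(comm2)
    for i in range(n_boot):
        r1 = comm1[rng.integers(0, n1, n1)]
        r2 = comm2[rng.integers(0, n2, n2)]
        stats[i] = bray_curtis(r1.sum(axis=0), r2.sum(axis=0))
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

# Hypothetical abundance matrices: 10 samples x 50 taxa per community
comm1 = rng.poisson(5.0, size=(10, 50))
comm2 = rng.poisson(6.0, size=(10, 50))
lo, hi = bootstrap_ci(comm1, comm2)
print(f"95% CI for dissimilarity: [{lo:.3f}, {hi:.3f}]")
```

    Narrower intervals with more samples, as the abstract reports, fall out directly: the resampled statistic varies less as `n1` and `n2` grow.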

  2. The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution

    NASA Astrophysics Data System (ADS)

    Shin, H.; Heo, J.; Kim, T.; Jung, Y.

    2007-12-01

    The generalized logistic (GL) distribution has been widely used for frequency analysis. However, few studies have addressed the confidence intervals that indicate the prediction accuracy of the GL distribution. In this paper, the estimation of the confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM), and the asymptotic variances of each quantile estimator are derived as functions of the sample sizes, return periods, and parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived confidence intervals of quantiles. The results show that the relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase as the return period increases and decrease as the sample size increases. PWM performs better than the other methods in terms of RRMSE when the data are nearly symmetric, while ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was applied to fit the distribution of annual maximum rainfall data. The results show little difference in the estimated quantiles between ML and PWM, but distinct differences for MOM.
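    The Monte Carlo verification step described above can be sketched generically. The example below checks the empirical coverage of an asymptotic quantile confidence interval, using the standard large-sample variance p(1-p)/(n f(q_p)^2) and a standard normal distribution as a stand-in for the GL distribution; it illustrates the experimental design only, not the paper's GL-specific derivations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def quantile_ci(sample, p, dist, alpha=0.05):
    """Asymptotic CI for the p-quantile: Var(q_hat) ~ p(1-p)/(n f(q_p)^2),
    with the density evaluated at the estimated quantile (plug-in)."""
    n = len(sample)
    q_hat = np.quantile(sample, p)
    se = np.sqrt(p * (1 - p) / n) / dist.pdf(q_hat)
    z = stats.norm.ppf(1 - alpha / 2)
    return q_hat - z * se, q_hat + z * se

# Monte Carlo coverage check (normal stand-in for the GL distribution)
p, n, n_sim = 0.9, 200, 2000
true_q = stats.norm.ppf(p)
hits = 0
for _ in range(n_sim):
    x = rng.normal(size=n)
    lo, hi = quantile_ci(x, p, stats.norm)
    hits += lo <= true_q <= hi
coverage = hits / n_sim
print(f"empirical coverage: {coverage:.3f}")
```

    In the paper's setting the same loop would draw GL samples and compute the MOM/ML/PWM interval variants, tabulating RBIAS and RRMSE across sample sizes and return periods.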

  3. A confidence interval analysis of sampling effort, sequencing depth, and taxonomic resolution of fungal community ecology in the era of high-throughput sequencing

    PubMed Central

    2017-01-01

    High-throughput sequencing technology has helped microbial community ecologists explore ecological and evolutionary patterns at unprecedented scales. The benefits of a large sample size still typically outweigh those of greater sequencing depths per sample for accurate estimations of ecological inferences. However, excluding or not sequencing rare taxa may mislead the answers to the questions ‘how and why are communities different?’ This study evaluates the confidence intervals of ecological inferences from high-throughput sequencing data of foliar fungal endophytes as case studies through a range of sampling efforts, sequencing depths, and taxonomic resolutions to understand how technical and analytical practices may affect our interpretations. Increasing sampling size reliably decreased confidence intervals across multiple community comparisons. However, the effects of sequencing depths on confidence intervals depended on how rare taxa influenced the dissimilarity estimates among communities and did not significantly decrease confidence intervals for all community comparisons. A comparison of simulated communities under random drift suggests that sequencing depths are important in estimating dissimilarities between microbial communities under neutral selective processes. Confidence interval analyses reveal important biases as well as biological trends in microbial community studies that otherwise may be ignored when communities are only compared for statistically significant differences. PMID:29253889

  4. Primary repair of penetrating colon injuries: a systematic review.

    PubMed

    Singer, Marc A; Nelson, Richard L

    2002-12-01

    Primary repair of penetrating colon injuries is an appealing management option; however, uncertainty about its safety persists. This study was conducted to compare the morbidity and mortality of primary repair with fecal diversion in the management of penetrating colon injuries by use of a meta-analysis of randomized, prospective trials. We searched for prospective, randomized trials in MEDLINE (1966 to November 2001), the Cochrane Library, and EMBase using the terms colon, penetrating, injury, colostomy, prospective, and randomized. Studies were included if they were randomized, controlled trials that compared the outcomes of primary repair with fecal diversion in the management of penetrating colon injuries. Five studies were included. Reviewers performed data extraction independently. Outcomes evaluated from each trial included mortality, total complications, infectious complications, intra-abdominal infections, wound complications, penetrating abdominal trauma index, and length of stay. Peto odds ratios for combined effect were calculated with a 95 percent confidence interval for each outcome. Heterogeneity was also assessed for each outcome. The penetrating abdominal trauma index of included subjects did not differ significantly between studies. Mortality was not significantly different between groups (odds ratio, 1.70; 95 percent confidence interval, 0.51-5.66). 
However, total complications (odds ratio, 0.28; 95 percent confidence interval, 0.18-0.42), total infectious complications (odds ratio, 0.41; 95 percent confidence interval, 0.27-0.63), abdominal infections including dehiscence (odds ratio, 0.59; 95 percent confidence interval, 0.38-0.94), abdominal infections excluding dehiscence (odds ratio, 0.52; 95 percent confidence interval, 0.31-0.86), wound complications including dehiscence (odds ratio, 0.55; 95 percent confidence interval, 0.34-0.89), and wound complications excluding dehiscence (odds ratio, 0.43; 95 percent confidence interval, 0.25-0.76) all significantly favored primary repair. Meta-analysis of currently published randomized, controlled trials favors primary repair over fecal diversion for penetrating colon injuries.
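    Peto's method pools 2x2 tables through observed-minus-expected statistics. As a minimal sketch of how a single Peto odds ratio and its 95 percent confidence interval are computed (the counts below are hypothetical, not the trial data):

```python
import math

def peto_or(a, n1, c, n2, z=1.96):
    """Peto odds ratio with CI for one 2x2 table:
    a events among n1 (e.g., primary repair), c among n2 (diversion)."""
    m = a + c                                      # total events
    N = n1 + n2
    E = n1 * m / N                                 # expected events under H0
    V = n1 * n2 * m * (N - m) / (N**2 * (N - 1))   # hypergeometric variance
    log_or = (a - E) / V                           # Peto log odds ratio
    half = z / math.sqrt(V)
    return math.exp(log_or), math.exp(log_or - half), math.exp(log_or + half)

# Hypothetical counts for illustration only
or_, lo, hi = peto_or(a=8, n1=100, c=20, n2=100)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

    In a meta-analysis, the per-trial (a - E) and V terms are summed across studies before exponentiating, which is what yields the pooled odds ratios quoted above.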

  5. Bullying and mental health and suicidal behaviour among 14- to 15-year-olds in a representative sample of Australian children.

    PubMed

    Ford, Rebecca; King, Tania; Priest, Naomi; Kavanagh, Anne

    2017-09-01

    To provide the first Australian population-based estimates of the association between bullying and adverse mental health outcomes and suicidality among Australian adolescents. Analysis of data from 3537 adolescents, aged 14-15 years, from Wave 6 of the K-cohort of the Longitudinal Study of Australian Children was conducted. We used Poisson and linear regression to estimate associations between bullying type (none, relational-verbal, physical, both types) and role (no role, victim, bully, victim and bully), and mental health (measured by the Strengths and Difficulties Questionnaire and symptoms of anxiety and depression) and suicidality. Adolescents involved in bullying had significantly increased Strengths and Difficulties Questionnaire, depression, and anxiety scores across all bullying roles and types. In terms of self-harm and suicidality, bully-victims had the highest risk of self-harm (prevalence rate ratio 4.7, 95% confidence interval [3.26, 6.83]), suicidal ideation (prevalence rate ratio 4.3, 95% confidence interval [2.83, 6.49]), suicidal plan (prevalence rate ratio 4.1, 95% confidence interval [2.54, 6.58]) and attempts (prevalence rate ratio 2.7, 95% confidence interval [1.39, 5.13]), followed by victims then bullies. The experience of both relational-verbal and physical bullying was associated with the highest risk of self-harm (prevalence rate ratio 4.6, 95% confidence interval [3.15, 6.60]), suicidal ideation or plans (prevalence rate ratio 4.6, 95% confidence interval [3.05, 6.95]; and 4.8, 95% confidence interval [3.01, 7.64], respectively) or suicide attempts (prevalence rate ratio 3.5, 95% confidence interval [1.90, 6.30]). This study presents the first national, population-based estimates of the associations between bullying by peers and mental health outcomes in Australian adolescents.
The markedly increased risk of poor mental health outcomes, self-harm and suicidal ideation and behaviours among adolescents who experienced bullying highlights the importance of addressing bullying in school settings.

  6. Ethnic variations in morbidity and mortality from lower respiratory tract infections: a retrospective cohort study

    PubMed Central

    Steiner, Markus FC; Cezard, Genevieve; Bansal, Narinder; Fischbacher, Colin; Douglas, Anne; Bhopal, Raj; Sheikh, Aziz

    2015-01-01

    Objective There is evidence of substantial ethnic variations in asthma morbidity and the risk of hospitalisation, but the picture in relation to lower respiratory tract infections is unclear. We carried out an observational study to identify ethnic group differences for lower respiratory tract infections. Design A retrospective, cohort study. Setting Scotland. Participants 4.65 million people on whom information was available from the 2001 census, followed from May 2001 to April 2010. Main outcome measures Hospitalisations and deaths (any time following first hospitalisation) from lower respiratory tract infections; adjusted risk ratios and hazard ratios by ethnicity and sex were calculated. We multiplied ratios and confidence intervals by 100, so the reference Scottish White population’s risk ratio and hazard ratio was 100. Results Among men, adjusted risk ratios for lower respiratory tract infection hospitalisation were lower in Other White British (80, 95% confidence interval 73–86) and Chinese (69, 95% confidence interval 56–84) populations and higher in Pakistani groups (152, 95% confidence interval 136–169). In women, results were mostly similar to those in men (e.g. Chinese 68, 95% confidence interval 56–82), although higher adjusted risk ratios were found among women of the Other South Asians group (145, 95% confidence interval 120–175). Survival (adjusted hazard ratio) following lower respiratory tract infection for Pakistani men (54, 95% confidence interval 39–74) and women (31, 95% confidence interval 18–53) was better than in the reference population. Conclusions Substantial differences in the rates of lower respiratory tract infections amongst different ethnic groups in Scotland were found. Pakistani men and women had particularly high rates of lower respiratory tract infection hospitalisation. The reasons behind the high rates of lower respiratory tract infection in the Pakistani community now require investigation. PMID:26152675

  7. Diagnostic accuracy of the Amsler grid and the preferential hyperacuity perimetry in the screening of patients with age-related macular degeneration: systematic review and meta-analysis.

    PubMed

    Faes, L; Bodmer, N S; Bachmann, L M; Thiel, M A; Schmid, M K

    2014-07-01

    To clarify the screening potential of the Amsler grid and preferential hyperacuity perimetry (PHP) in detecting or ruling out wet age-related macular degeneration (AMD). Medline, Scopus and Web of Science (by citation of reference) were searched. Checking of reference lists of review articles and of included articles complemented electronic searches. Papers were selected, assessed, and extracted in duplicate. Systematic review and meta-analysis. Twelve included studies enrolled 903 patients and allowed the construction of 27 two-by-two tables. Twelve tables reported on the Amsler grid and its modifications, twelve tables reported on the PHP, one table assessed the MCPT and two tables assessed the M-charts. All but two studies had a case-control design. The pooled sensitivity of studies assessing the Amsler grid was 0.78 (95% confidence intervals; 0.64-0.87), and the pooled specificity was 0.97 (95% confidence intervals; 0.91-0.99). The corresponding positive and negative likelihood ratios were 23.1 (95% confidence intervals; 8.4-64.0) and 0.23 (95% confidence intervals; 0.14-0.39), respectively. The pooled sensitivity of studies assessing the PHP was 0.85 (95% confidence intervals; 0.80-0.89), and specificity was 0.87 (95% confidence intervals; 0.82-0.91). The corresponding positive and negative likelihood ratios were 6.7 (95% confidence intervals; 4.6-9.8) and 0.17 (95% confidence intervals; 0.13-0.23). No pooling was possible for MCPT and M-charts. Results from small preliminary studies show promising test performance characteristics both for the Amsler grid and PHP to rule out wet AMD in the screening setting. To what extent these findings can be transferred to real clinical practice still needs to be established.

  8. Erectile dysfunction and fruit/vegetable consumption among diabetic Canadian men.

    PubMed

    Wang, Feng; Dai, Sulan; Wang, Mingdong; Morrison, Howard

    2013-12-01

    To evaluate the association between fruit/vegetable consumption and erectile dysfunction (ED) among Canadian men with diabetes. Data from the 2011 Survey on Living with Chronic Diseases in Canada - Diabetes Component were analyzed using Statistical Analysis System Enterprise Guide (SAS EG). Respondents were asked a series of questions related to their sociodemographics, lifestyle, and chronic health conditions. The association between fruit/vegetable consumption and ED was examined using logistic regression after controlling for potential confounding factors. A bootstrap procedure was used to estimate the sample distribution and calculate confidence intervals. Overall, 26.2% of respondents reported having ED. The prevalence increased with age and duration of diabetes. Compared with respondents without ED, those with ED were more likely to be obese, smokers, physically inactive, and either divorced, widowed, or separated. Diabetes complications such as nerve damage, circulation problems, and kidney failure or kidney disease were also significantly associated with ED. After controlling for potential confounding factors, a 10% risk reduction of ED was found with each additional daily serving of fruit/vegetable consumed. ED is common among Canadian men with diabetes. ED was strongly associated with age, duration of diabetes, obesity, smoking, and the presence of other diabetes-related complications. Fruit and vegetable consumption might have a protective effect against ED. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.

  9. BagMOOV: A novel ensemble for heart disease prediction bootstrap aggregation with multi-objective optimized voting.

    PubMed

    Bashir, Saba; Qamar, Usman; Khan, Farhan Hassan

    2015-06-01

    Conventional clinical decision support systems are based on individual classifiers or simple combinations of these classifiers, which tend to show moderate performance. This research paper presents a novel classifier ensemble framework based on an enhanced bagging approach with a multi-objective weighted voting scheme for prediction and analysis of heart disease. The proposed model overcomes the limitations of conventional approaches by utilizing an ensemble of five heterogeneous classifiers: Naïve Bayes, linear regression, quadratic discriminant analysis, instance based learner and support vector machines. Five different datasets, obtained from publicly available data repositories, are used for experimentation, evaluation and validation. Effectiveness of the proposed ensemble is investigated by comparison of results with several classifiers. Prediction results of the proposed ensemble model are assessed by tenfold cross-validation and ANOVA statistics. The experimental evaluation shows that the proposed framework handles all types of attributes and achieved a diagnostic accuracy of 84.16%, sensitivity of 93.29%, specificity of 96.70%, and f-measure of 82.15%. An f-ratio higher than f-critical and a p value less than 0.05 for the 95% confidence interval indicate that the results are statistically significant for most of the datasets.
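    The weighted-voting step of such an ensemble can be illustrated in a few lines. This is a generic soft-voting sketch, not the paper's multi-objective optimization; the classifier weights are assumed to come from some upstream tuning step (e.g., per-classifier validation accuracy).

```python
import numpy as np

def weighted_vote(probas, weights):
    """Weighted soft vote over per-classifier class probabilities.
    probas: (n_clf, n_samples, n_classes); weights: (n_clf,)."""
    w = np.asarray(weights, float)
    w = w / w.sum()                       # normalize weights
    return np.tensordot(w, probas, axes=1).argmax(axis=-1)

# Hypothetical: 3 classifiers, 2 samples, binary outcome
probas = np.array([
    [[0.9, 0.1], [0.4, 0.6]],   # classifier 1
    [[0.7, 0.3], [0.3, 0.7]],   # classifier 2
    [[0.2, 0.8], [0.6, 0.4]],   # classifier 3
])
weights = [0.5, 0.3, 0.2]       # assumed upstream tuning result
print(weighted_vote(probas, weights))
```

    Bagging enters upstream of this step: each base classifier is trained on a bootstrap resample of the training data before its predictions are combined.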

  10. Examining the role of vocational rehabilitation on access to care and public health outcomes for people living with HIV/AIDS.

    PubMed

    Conyers, Liza; Boomer, K B

    2014-01-01

    The purpose of this study is to examine the role of vocational rehabilitation services in contributing to the goals of the National HIV/AIDS strategy. Three key research questions are addressed: (a) What is the relationship among factors associated with the use of vocational rehabilitation services for people living with HIV/AIDS? (b) Are the factors associated with use of vocational rehabilitation also associated with access to health care, supplemental employment services and reduced risk of HIV transmission? and (c) What unique role does use of vocational rehabilitation services play in access to health care and HIV prevention? Survey research methods were used to collect data from a broad sample of volunteer respondents who represented diverse racial (37% Black, 37% White, 18% Latino, 7% other), gender (65% male, 34% female, 1% transgender) and sexual orientation (48% heterosexual, 44% gay, 8% bisexual) backgrounds. The fit of the final structural equation model was good (root mean square error of approximation = 0.055, with a 90% upper bound of 0.058; Comparative Fit Index = 0.953; TLI = 0.945). Standardized effects with bootstrap confidence intervals are reported. Overall, the findings support the hypothesis that vocational rehabilitation services can play an important role in health and prevention strategies outlined in the National HIV/AIDS strategy.

  11. Ensemble survival tree models to reveal pairwise interactions of variables with time-to-events outcomes in low-dimensional setting

    PubMed Central

    Dazard, Jean-Eudes; Ishwaran, Hemant; Mehlotra, Rajeev; Weinberg, Aaron; Zimmerman, Peter

    2018-01-01

    Unraveling interactions among variables such as genetic, clinical, demographic and environmental factors is essential to understand the development of common and complex diseases. To increase the power to detect such variable interactions associated with clinical time-to-events outcomes, we borrowed established concepts from random survival forest (RSF) models. We introduce a novel RSF-based pairwise interaction estimator and derive a randomization method with bootstrap confidence intervals for inferring interaction significance. Using various linear and nonlinear time-to-events survival models in simulation studies, we first show the efficiency of our approach: true pairwise interaction-effects between variables are uncovered, while they may not be accompanied by their corresponding main-effects, and may not be detected by standard semi-parametric regression modeling and test statistics used in survival analysis. Moreover, using a RSF-based cross-validation scheme for generating prediction estimators, we show that informative predictors may be inferred. We applied our approach to an HIV cohort study recording key host gene polymorphisms and their association with HIV change of tropism or AIDS progression. Altogether, this shows how linear or nonlinear pairwise statistical interactions of variables may be efficiently detected with a predictive value in observational studies with time-to-event outcomes. PMID:29453930

  12. Specific agreement on dichotomous outcomes can be calculated for more than two raters.

    PubMed

    de Vet, Henrica C W; Dikmans, Rieky E; Eekhout, Iris

    2017-03-01

    For assessing interrater agreement, the concepts of observed agreement and specific agreement have been proposed. The situation of two raters and dichotomous outcomes has been described, whereas often, multiple raters are involved. We aim to extend it for more than two raters and examine how to calculate agreement estimates and 95% confidence intervals (CIs). As an illustration, we used a reliability study that includes the scores of four plastic surgeons classifying photographs of breasts of 50 women after breast reconstruction into "satisfied" or "not satisfied." In a simulation study, we checked the hypothesized sample size for calculation of 95% CIs. For m raters, all pairwise tables [ie, m (m - 1)/2] were summed. Then, the discordant cells were averaged before observed and specific agreements were calculated. The total number (N) in the summed table is m (m - 1)/2 times larger than the number of subjects (n), in the example, N = 300 compared to n = 50 subjects times m = 4 raters. A correction of n√(m - 1) was appropriate to find 95% CIs comparable to bootstrapped CIs. The concept of observed agreement and specific agreement can be extended to more than two raters with a valid estimation of the 95% CIs. Copyright © 2017 Elsevier Inc. All rights reserved.
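    The procedure described (summing all m(m - 1)/2 pairwise tables, averaging the discordant cells, and using an effective sample size of n√(m - 1) for the confidence interval) can be sketched directly. The ratings below and the plug-in Wald interval are illustrative assumptions, not the authors' code.

```python
import math
from itertools import combinations

def specific_agreement(ratings):
    """ratings: one tuple of 0/1 scores per subject, one score per rater.
    Sums all m(m-1)/2 pairwise 2x2 tables, averages the discordant
    cells, and returns observed, positive, and negative agreement."""
    m = len(ratings[0])
    a = b = c = d = 0
    for scores in ratings:
        for i, j in combinations(range(m), 2):
            x, y = scores[i], scores[j]
            a += x and y                   # both positive ("satisfied")
            d += (not x) and (not y)       # both negative
            b += x and not y               # discordant
            c += y and not x
    disc = (b + c) / 2                     # averaged discordant cells
    N = a + b + c + d                      # = n * m(m-1)/2
    po = (a + d) / N                       # observed agreement
    ppos = a / (a + disc)                  # = 2a / (2a + b + c)
    pneg = d / (d + disc)                  # = 2d / (2d + b + c)
    return po, ppos, pneg, N

def wald_ci(p, n_subjects, m, z=1.96):
    """Wald CI using the effective sample size n*sqrt(m-1) suggested
    in the abstract (a plug-in sketch, not the authors' derivation)."""
    n_eff = n_subjects * math.sqrt(m - 1)
    se = math.sqrt(p * (1 - p) / n_eff)
    return p - z * se, p + z * se

# Hypothetical ratings: 6 subjects, 4 raters, 1 = "satisfied"
ratings = [(1, 1, 1, 1), (1, 1, 1, 0), (0, 0, 0, 0),
           (1, 0, 1, 1), (0, 0, 1, 0), (1, 1, 1, 1)]
po, ppos, pneg, N = specific_agreement(ratings)
lo, hi = wald_ci(po, n_subjects=len(ratings), m=4)
print(f"observed {po:.2f}, positive {ppos:.2f}, negative {pneg:.2f}, N={N}")
```

    Note that N here is m(m - 1)/2 = 6 times the number of subjects, mirroring the N = 300 versus n = 50 example in the abstract; the effective-sample-size correction compensates for that inflation.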

  13. Combining matched and unmatched control groups in case-control studies.

    PubMed

    le Cessie, Saskia; Nagelkerke, Nico; Rosendaal, Frits R; van Stralen, Karlijn J; Pomp, Elisabeth R; van Houwelingen, Hans C

    2008-11-15

    Multiple control groups in case-control studies are used to control for different sources of confounding. For example, cases can be contrasted with matched controls to adjust for multiple genetic or unknown lifestyle factors and simultaneously contrasted with an unmatched population-based control group. Inclusion of different control groups for a single exposure analysis yields several estimates of the odds ratio, all using only part of the data. Here the authors introduce an easy way to combine odds ratios from several case-control analyses with the same cases. The approach is based upon methods used for meta-analysis but takes into account the fact that the same cases are used and that the estimated odds ratios are therefore correlated. Two ways of estimating this correlation are discussed: sandwich methodology and the bootstrap. Confidence intervals for the pooled estimates and a test for checking whether the odds ratios in the separate case-control studies differ significantly are derived. The performance of the method is studied by simulation and by applying the methods to a large study on risk factors for thrombosis, the MEGA Study (1999-2004), wherein cases with first venous thrombosis were included with a matched control group of partners and an unmatched population-based control group.
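    Once the correlation between the two odds ratio estimates has been obtained (by sandwich methods or the bootstrap, as above), pooling reduces to generalized least squares on the log scale. A minimal sketch with hypothetical numbers:

```python
import numpy as np

def pool_correlated_logors(logors, ses, corr):
    """GLS pooling of two log odds ratios that share the same cases.
    corr is the estimated correlation between the two estimates."""
    theta = np.asarray(logors, float)
    cov = np.array([[ses[0]**2, corr * ses[0] * ses[1]],
                    [corr * ses[0] * ses[1], ses[1]**2]])
    w = np.linalg.solve(cov, np.ones(2))          # Sigma^-1 1
    var = 1.0 / w.sum()                           # variance of pooled estimate
    est = var * (w @ theta)                       # GLS pooled log OR
    lo = est - 1.96 * np.sqrt(var)
    hi = est + 1.96 * np.sqrt(var)
    return np.exp([est, lo, hi])

# Hypothetical: OR 2.0 (matched controls) and OR 2.5 (population controls)
or_, lo, hi = pool_correlated_logors(np.log([2.0, 2.5]),
                                     ses=[0.20, 0.25], corr=0.4)
print(f"pooled OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

    Ignoring the correlation (setting `corr=0`) would understate the variance of the pooled estimate, which is exactly the pitfall the authors' method addresses.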

  14. Fitting model-based psychometric functions to simultaneity and temporal-order judgment data: MATLAB and R routines.

    PubMed

    Alcalá-Quintana, Rocío; García-Pérez, Miguel A

    2013-12-01

    Research on temporal-order perception uses temporal-order judgment (TOJ) tasks or synchrony judgment (SJ) tasks in their binary SJ2 or ternary SJ3 variants. In all cases, two stimuli are presented with some temporal delay, and observers judge the order of presentation. Arbitrary psychometric functions are typically fitted to obtain performance measures such as sensitivity or the point of subjective simultaneity, but the parameters of these functions are uninterpretable. We describe routines in MATLAB and R that fit model-based functions whose parameters are interpretable in terms of the processes underlying temporal-order and simultaneity judgments and responses. These functions arise from an independent-channels model assuming arrival latencies with exponential distributions and a trichotomous decision space. Different routines fit data separately for SJ2, SJ3, and TOJ tasks, jointly for any two tasks, or also jointly for the three tasks (for common cases in which two or even the three tasks were used with the same stimuli and participants). Additional routines provide bootstrap p-values and confidence intervals for estimated parameters. A further routine is included that obtains performance measures from the fitted functions. An R package for Windows and source code of the MATLAB and R routines are available as Supplementary Files.

  15. Mediating the relation between workplace stressors and distress in ID support staff: comparison between the roles of psychological inflexibility and coping styles.

    PubMed

    Kurz, A Solomon; Bethay, J Scott; Ladner-Graham, Jennifer M

    2014-10-01

    The present study examined how different patterns of coping influence psychological distress for staff members in programs serving individuals with intellectual disabilities. With a series of path models, we examined the relative usefulness of constructs (i.e., wishful thinking and psychological inflexibility) from two distinct models of coping (i.e., the transactional model and the psychological flexibility model, respectively) as mediators to explain how workplace stressors lead to psychological distress in staff serving individuals with intellectual disabilities. Analyses involved self-report questionnaires from 128 staff members (84% female; 71% African American) from a large, state-funded residential program for individuals with intellectual and physical disabilities in the southern United States of America. Cross-sectional path models using bootstrapped standard errors and confidence intervals revealed that both wishful thinking and psychological inflexibility mediated the relation between workplace stressors and psychological distress when they were included in separate models. However, when both variables were included in a multiple mediator model, only psychological inflexibility remained a significant mediator. The results suggest psychological inflexibility and the psychological flexibility model may be particularly useful for further investigation into the causes and amelioration of workplace-related stress in ID settings. Copyright © 2014 Elsevier Ltd. All rights reserved.
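    Bootstrapped mediation analyses of this kind follow a common pattern: estimate the indirect effect as a product of regression coefficients, then take percentile confidence intervals over resamples. A minimal sketch on simulated data (the variable names, effect sizes, and single-mediator setup are hypothetical, not the study's model):

```python
import numpy as np

rng = np.random.default_rng(3)

def indirect_effect(x, med, y):
    """Product-of-coefficients a*b for x -> mediator -> y."""
    Xa = np.column_stack([np.ones_like(x), x])
    a = np.linalg.lstsq(Xa, med, rcond=None)[0][1]     # x -> mediator
    Xb = np.column_stack([np.ones_like(x), x, med])
    b = np.linalg.lstsq(Xb, y, rcond=None)[0][2]       # mediator -> y | x
    return a * b

def boot_ci(x, med, y, n_boot=2000, alpha=0.05):
    """Percentile bootstrap CI for the indirect effect."""
    n = len(x)
    ab = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)
        ab[i] = indirect_effect(x[idx], med[idx], y[idx])
    return np.quantile(ab, [alpha / 2, 1 - alpha / 2])

# Simulated data: stressors -> inflexibility -> distress (hypothetical)
n = 128
stress = rng.normal(size=n)
inflex = 0.5 * stress + rng.normal(size=n)
distress = 0.6 * inflex + 0.1 * stress + rng.normal(size=n)
lo, hi = boot_ci(stress, inflex, distress)
print(f"95% bootstrap CI for indirect effect: [{lo:.2f}, {hi:.2f}]")
```

    A multiple-mediator model, as in the study, would regress the outcome on both mediators jointly and bootstrap each product term from the same resamples.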

  16. Ensemble survival tree models to reveal pairwise interactions of variables with time-to-events outcomes in low-dimensional setting.

    PubMed

    Dazard, Jean-Eudes; Ishwaran, Hemant; Mehlotra, Rajeev; Weinberg, Aaron; Zimmerman, Peter

    2018-02-17

    Unraveling interactions among variables such as genetic, clinical, demographic and environmental factors is essential to understand the development of common and complex diseases. To increase the power to detect such variable interactions associated with clinical time-to-events outcomes, we borrowed established concepts from random survival forest (RSF) models. We introduce a novel RSF-based pairwise interaction estimator and derive a randomization method with bootstrap confidence intervals for inferring interaction significance. Using various linear and nonlinear time-to-events survival models in simulation studies, we first show the efficiency of our approach: true pairwise interaction-effects between variables are uncovered, while they may not be accompanied by their corresponding main-effects, and may not be detected by standard semi-parametric regression modeling and test statistics used in survival analysis. Moreover, using a RSF-based cross-validation scheme for generating prediction estimators, we show that informative predictors may be inferred. We applied our approach to an HIV cohort study recording key host gene polymorphisms and their association with HIV change of tropism or AIDS progression. Altogether, this shows how linear or nonlinear pairwise statistical interactions of variables may be efficiently detected with a predictive value in observational studies with time-to-event outcomes.

  17. Characterizing vaccine-associated risks using cubic smoothing splines.

    PubMed

    Brookhart, M Alan; Walker, Alexander M; Lu, Yun; Polakowski, Laura; Li, Jie; Paeglow, Corrie; Puenpatom, Tosmai; Izurieta, Hector; Daniel, Gregory W

    2012-11-15

    Estimating risks associated with the use of childhood vaccines is challenging. The authors propose a new approach for studying short-term vaccine-related risks. The method uses a cubic smoothing spline to flexibly estimate the daily risk of an event after vaccination. The predicted incidence rates from the spline regression are then compared with the expected rates under a log-linear trend that excludes the days surrounding vaccination. The 2 models are then used to estimate the excess cumulative incidence attributable to the vaccination during the 42-day period after vaccination. Confidence intervals are obtained using a model-based bootstrap procedure. The method is applied to a study of known effects (positive controls) and expected noneffects (negative controls) of the measles, mumps, and rubella and measles, mumps, rubella, and varicella vaccines among children who are 1 year of age. The splines revealed well-resolved spikes in fever, rash, and adenopathy diagnoses, with the maximum incidence occurring between 9 and 11 days after vaccination. For the negative control outcomes, the spline model yielded a predicted incidence more consistent with the modeled day-specific risks, although there was evidence of increased risk of diagnoses of congenital malformations after vaccination, possibly because of a "provider visit effect." The proposed approach may be useful for vaccine safety surveillance.
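    The spline-versus-trend comparison can be sketched as follows. This is a simplified illustration on simulated daily counts: the smoothing parameter, risk window, and data are all assumptions, and the model-based bootstrap that the authors use for confidence intervals is omitted.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(4)

# Simulated daily event counts over 42 days post-vaccination, with a
# spike near days 9-11 (hypothetical data, not the study's)
days = np.arange(1, 43)
baseline = 5 * np.exp(-0.01 * days)                # slowly declining trend
spike = 8 * np.exp(-0.5 * (days - 10) ** 2)        # excess risk near day 10
counts = rng.poisson(baseline + spike)

# Cubic smoothing spline through the daily counts (heuristic smoothing)
spline = UnivariateSpline(days, counts, k=3,
                          s=len(days) * np.var(counts) * 0.1)

# Log-linear trend fit excluding the window around vaccination (days 5-15)
mask = (days < 5) | (days > 15)
slope, intercept = np.polyfit(days[mask], np.log(counts[mask] + 0.5), 1)
expected = np.exp(intercept + slope * days)

# Excess cumulative incidence attributable to the risk window
excess = (spline(days) - expected)[4:15].sum()
print(f"estimated excess events in risk window: {excess:.1f}")
```

    The authors' method would repeat this contrast on bootstrap replicates of the fitted models to obtain a confidence interval for the excess cumulative incidence.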

  18. Enhanced understanding of the relationship between erection and satisfaction in ED treatment: application of a longitudinal mediation model.

    PubMed

    Bushmakin, A G; Cappelleri, J C; Symonds, T; Stecher, V J

    2014-01-01

    To apportion the direct effect and the indirect effect (through erections) that sildenafil (vs placebo) has on individual satisfaction and couple satisfaction over time, longitudinal mediation modeling was applied to outcomes on the Sexual Experience Questionnaire. The model included data from weeks 4 and 10 (double-blind phase) and week 16 (open-label phase) of a controlled study. Data from 167 patients with erectile dysfunction (ED) were available for analysis. Estimation of statistical significance was based on bootstrap simulations, which allowed inferences at and between time points. Percentages (and corresponding 95% confidence intervals) for direct and indirect effects of treatment were calculated using the model. For the individual satisfaction and couple satisfaction domains, direct treatment effects were negligible (not statistically significant) whereas indirect treatment effects via the erection domain represented >90% of the treatment effects (statistically significant). Week 4 vs week 10 percentages of direct and indirect effects were not statistically different, indicating that the mediation effects are longitudinally invariant. As there was no placebo arm in the open-label phase, mediation effects at week 16 were not estimable. In conclusion, erection has a crucial role as a mediator in restoring individual satisfaction and couple satisfaction in men with ED treated with sildenafil.

  19. Predictors of Depressive Symptoms Among Israeli Jews and Arabs During the Al Aqsa Intifada: A Population-Based Cohort Study

    PubMed Central

    Tracy, Melissa; Hobfoll, Stevan E.; Canetti–Nisim, Daphna; Galea, Sandro

    2009-01-01

    PURPOSE We sought to assess the predictors of depressive symptoms in a population–based cohort exposed to ongoing and widespread terrorism. METHODS Interviews of a representative sample of adults living in Israel, including both Jews and Arabs, were conducted between August and September 2004, with follow-up interviews taking place between February and April 2005. Censoring weights were estimated to account for differential loss to follow-up. Zero-inflated negative binomial models with bootstrapped confidence intervals were fit to assess predictors of severity of depressive symptoms, assessed using items from the Patient Health Questionnaire. RESULTS A total of 1613 Israeli residents participated in the baseline interview (80.8% Jewish, 49.4% male, mean age 43 years); 840 residents also participated in the follow-up interview. In multivariable models, Israeli Arab ethnicity, lower household income, lower social support, experiencing economic loss from terrorism, experiencing higher levels of psychosocial resource loss, and meeting criteria for post-traumatic stress disorder were significantly associated with increased severity of depressive symptoms. CONCLUSIONS Material deprivation is the primary modifiable risk factor for depressive symptoms in the context of ongoing terrorism. Efforts to minimize ongoing material and economic stressors may mitigate the mental health consequences of ongoing terrorism. PMID:18261923

  20. The mediating role of trauma-related symptoms in the relationship between sexual victimization and physical health symptomatology in undergraduate women.

    PubMed

    Tansill, Erin C; Edwards, Katie M; Kearns, Megan C; Gidycz, Christine A; Calhoun, Karen S

    2012-02-01

    Previous research suggests that posttraumatic stress symptomatology is a partial mediator of the relationship between sexual assault history in adolescence/adulthood and physical health symptomatology (e.g., Eadie, Runtz, & Spencer-Rodgers, 2008). The current study assessed a broader, more inclusive potential mediator, trauma-related symptoms in the relationship between sexual victimization history (including both childhood and adolescent/adulthood sexual victimizations) and physical health symptomatology in a college sample. Participants were 970 young women (M = 18.69, SD = 1.01), who identified mostly as Caucasian (86.7%), from 2 universities who completed a survey packet. Path analysis results provide evidence for trauma-related symptoms as a mediator in the relationship between adolescent/adulthood sexual assault and physical health symptomatology, χ(2) (1, N = 970) = 1.55, p = .21; comparative fit index = 1.00; Tucker-Lewis index = 0.99; root mean square error of approximation = .02, 90% confidence interval [.00, .09], Bollen-Stine bootstrap statistic, p = .29. Childhood sexual abuse was not related to physical health symptomatology, but did predict trauma-related symptoms. Implications of these findings suggest that college health services would benefit from targeted integration of psychiatric and medical services for sexual assault survivors given the overlap of psychological and physical symptoms. Copyright © 2012 International Society for Traumatic Stress Studies.

  1. The Effectiveness of Child Restraint Systems for Children Aged 3 Years or Younger During Motor Vehicle Collisions: 1996 to 2005

    PubMed Central

    Anderson, Craig L.

    2009-01-01

    Objectives. We estimated the effectiveness of child restraints in preventing death during motor vehicle collisions among children 3 years or younger. Methods. We conducted a matched cohort study using Fatality Analysis Reporting System data from 1996 to 2005. We estimated death risk ratios using conditional Poisson regression, bootstrapping, multiple imputation, and a sensitivity analysis of misclassification bias. We examined possible effect modification by selected factors. Results. The estimated death risk ratios comparing child safety seats with no restraint were 0.27 (95% confidence interval [CI] = 0.21, 0.34) for infants, 0.24 (95% CI = 0.19, 0.30) for children aged 1 year, 0.40 (95% CI = 0.32, 0.51) for those aged 2 years, and 0.41 (95% CI = 0.33, 0.52) for those aged 3 years. Estimated safety seat effectiveness was greater during rollover collisions, in rural environments, and in light trucks. We estimated seat belts to be as effective as safety seats in preventing death for children aged 2 and 3 years. Conclusions. Child safety seats are highly effective in reducing the risk of death during severe traffic collisions and generally outperform seat belts. Parents should be encouraged to use child safety seats in favor of seat belts. PMID:19059860

  2. A factor analysis of the SSQ (Speech, Spatial, and Qualities of Hearing Scale)

    PubMed Central

    2014-01-01

    Objective The speech, spatial, and qualities of hearing questionnaire (SSQ) is a self-report test of auditory disability. The 49 items ask how well a listener would do in many complex listening situations illustrative of real life. The scores on the items are often combined into the three main sections or into 10 pragmatic subscales. We report here a factor analysis of the SSQ that we conducted to further investigate its statistical properties and to determine its structure. Design Statistical factor analysis of questionnaire data, using parallel analysis to determine the number of factors to retain, oblique rotation of factors, and a bootstrap method to estimate the confidence intervals. Study sample 1220 people who had attended MRC IHR over the preceding decade. Results We found three clear factors, essentially corresponding to the three main sections of the SSQ. They are termed “speech understanding”, “spatial perception”, and “clarity, separation, and identification”. Thirty-five of the SSQ questions were included in the three factors. There was partial evidence for a fourth factor, “effort and concentration”, representing two more questions. Conclusions These results aid in the interpretation and application of the SSQ and indicate potential methods for generating average scores. PMID:24417459
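
Parallel analysis, used in this record to decide how many factors to retain, keeps a component only if its eigenvalue exceeds the corresponding eigenvalue distribution obtained from random data of the same shape. A minimal sketch with simulated two-factor data (not the SSQ data; sample sizes and loadings are made up):

```python
import numpy as np

rng = np.random.default_rng(2)

def parallel_analysis(data, n_sims=200, quantile=95):
    """Number of components whose eigenvalues exceed the chosen quantile
    of eigenvalues from random normal data of the same shape."""
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    sim = np.empty((n_sims, p))
    for s in range(n_sims):
        r = rng.normal(size=(n, p))
        sim[s] = np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    thresh = np.percentile(sim, quantile, axis=0)
    return int(np.sum(obs > thresh))

# toy data generated from 2 underlying factors plus noise
f = rng.normal(size=(500, 2))
load = rng.normal(size=(2, 8))
data = f @ load + rng.normal(scale=2.0, size=(500, 8))
print(parallel_analysis(data))      # typically recovers the 2 built-in factors
```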

  3. The association of greater dispositional optimism with less endogenous pain facilitation is indirectly transmitted through lower levels of pain catastrophizing

    PubMed Central

    Goodin, Burel R.; Glover, Toni L.; Sotolongo, Adriana; King, Christopher D.; Sibille, Kimberly T.; Herbert, Matthew S.; Cruz-Almeida, Yenisel; Sanden, Shelley H.; Staud, Roland; Redden, David T.; Bradley, Laurence A.; Fillingim, Roger B.

    2012-01-01

    Dispositional optimism has been shown to beneficially influence various experimental and clinical pain experiences. One possibility that may account for decreased pain sensitivity among individuals who report greater dispositional optimism is less use of maladaptive coping strategies like pain catastrophizing, a negative cognitive/affective response to pain. An association between dispositional optimism and conditioned pain modulation (CPM), a measure of endogenous pain inhibition, has previously been reported. However, it remains to be determined whether dispositional optimism is also associated with temporal summation (TS), a measure of endogenous pain facilitation. The current study examined whether pain catastrophizing mediated the association between dispositional optimism and TS among 140 older, community-dwelling adults with symptomatic knee osteoarthritis. Individuals completed measures of dispositional optimism and pain catastrophizing. TS was then assessed using a tailored heat pain stimulus on the forearm. Greater dispositional optimism was significantly related to lower levels of pain catastrophizing and TS. Bootstrapped confidence intervals revealed that less pain catastrophizing was a significant mediator of the relation between greater dispositional optimism and diminished TS. These findings support the primary role of personality characteristics such as dispositional optimism in the modulation of pain outcomes by abatement of endogenous pain facilitation and less use of catastrophizing. PMID:23218934
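
The bootstrapped confidence interval test for mediation used in this record resamples participants and recomputes the indirect (a × b) effect each time; the mediated path is deemed significant when the interval excludes zero. A minimal sketch with simulated data (variable names and effect sizes are illustrative, not the study's estimates):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 140                                   # sample size as in the study
optimism = rng.normal(size=n)
catastrophizing = -0.5 * optimism + rng.normal(size=n)   # a path (illustrative)
ts = 0.6 * catastrophizing + rng.normal(size=n)          # b path (illustrative)

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]            # slope of mediator on predictor
    # slope of outcome on mediator, controlling for predictor
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)           # resample participants with replacement
    boot.append(indirect_effect(optimism[idx], catastrophizing[idx], ts[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
# mediation is inferred when the interval (lo, hi) excludes zero
```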

  4. Comparing cost-effectiveness of X-Stop with minimally invasive decompression in lumbar spinal stenosis: a randomized controlled trial.

    PubMed

    Lønne, Greger; Johnsen, Lars Gunnar; Aas, Eline; Lydersen, Stian; Andresen, Hege; Rønning, Roar; Nygaard, Øystein P

    2015-04-15

    Randomized clinical trial with 2-year follow-up. To compare the cost-effectiveness of X-stop to minimally invasive decompression in patients with symptomatic lumbar spinal stenosis. Lumbar spinal stenosis is the most common indication for operative treatment in the elderly. Although surgery is more costly than nonoperative treatment, health outcomes for more than 2 years were shown to be significantly better. Surgical treatment with minimally invasive decompression is widely used. The X-stop has been introduced as another minimally invasive technique, with good results reported compared with nonoperative treatment. We enrolled 96 patients aged 50 to 85 years, with symptoms of neurogenic intermittent claudication within 250-m walking distance and 1- or 2-level lumbar spinal stenosis, randomized to either minimally invasive decompression or X-stop. Quality-adjusted life-years were based on the EuroQol EQ-5D. The hospital unit costs were estimated by means of the top-down approach. Each cost unit was converted into a monetary value by dividing the overall cost by the number of cost units produced. The analysis of costs and health outcomes is presented as the incremental cost-effectiveness ratio. The study was terminated after a midway interim analysis because of a significantly higher reoperation rate in the X-stop group (33%). The incremental cost for X-stop compared with minimally invasive decompression was €2832 (95% confidence interval: 1886-3778), whereas the incremental health gain was 0.11 quality-adjusted life-years (95% confidence interval: -0.01 to 0.23). Based on the incremental cost and effect, the incremental cost-effectiveness ratio was €25,700. The majority of the bootstrap samples fell in the northeast corner of the cost-effectiveness plane, giving a 50% likelihood that X-stop is cost-effective at the extra cost of €25,700 (incremental cost-effectiveness ratio) per quality-adjusted life-year.
The significantly higher cost of X-stop is mainly due to implant cost and the significantly higher reoperation rate. Level of Evidence: 2.

  5. Estimated severe pneumococcal disease cases and deaths before and after pneumococcal conjugate vaccine introduction in children younger than 5 years of age in South Africa.

    PubMed

    von Mollendorf, Claire; Tempia, Stefano; von Gottberg, Anne; Meiring, Susan; Quan, Vanessa; Feldman, Charles; Cloete, Jeane; Madhi, Shabir A; O'Brien, Katherine L; Klugman, Keith P; Whitney, Cynthia G; Cohen, Cheryl

    2017-01-01

    Streptococcus pneumoniae is a leading cause of severe bacterial infections globally. A full understanding of the impact of pneumococcal conjugate vaccine (PCV) on pneumococcal disease burden, following its introduction in 2009 in South Africa, can support national policy on PCV use and assist with policy decisions elsewhere. We developed a model to estimate the national burden of severe pneumococcal disease, i.e. disease requiring hospitalisation, pre- (2005-2008) and post-PCV introduction (2012-2013) in children aged 0-59 months in South Africa. We estimated case numbers for invasive pneumococcal disease using data from the national laboratory-based surveillance, adjusted for specimen-taking practices. We estimated non-bacteraemic pneumococcal pneumonia case numbers using vaccine probe study data. To estimate pneumococcal deaths, we applied observed case fatality ratios to estimated case numbers. Estimates were stratified by HIV status to account for the impact of PCV and HIV-related interventions. A sensitivity analysis assessed how different assumptions affected the estimates, and confidence intervals were obtained by bootstrapping. In the pre-vaccine era, a total of approximately 107,600 (95% confidence interval [CI] 83,000-140,000) cases of severe hospitalised pneumococcal disease were estimated to have occurred annually. Following PCV introduction and the improvement in HIV interventions, 41,800 (95% CI 28,000-50,000) severe pneumococcal disease cases were estimated in 2012-2013, a rate reduction of 1,277 cases per 100,000 child-years. Approximately 5000 (95% CI 3000-6000) pneumococcal-related annual deaths were estimated in the pre-vaccine period and 1,900 (95% CI 1000-2500) in 2012-2013, a mortality rate difference of 61 per 100,000 child-years. 
While a large number of hospitalisations and deaths due to pneumococcal disease still occur among children 0-59 months in South Africa, we found a large reduction in this estimate that is temporally associated with PCV introduction. In HIV-infected individuals the scale-up of other interventions, such as improvements in HIV care, may have also contributed to the declines in pneumococcal burden.

  6. The Use of Radar-Based Products for Deriving Extreme Rainfall Frequencies Using Regional Frequency Analysis with Application in South Louisiana

    NASA Astrophysics Data System (ADS)

    Eldardiry, H. A.; Habib, E. H.

    2014-12-01

    Radar-based technologies have made spatially and temporally distributed quantitative precipitation estimates (QPE) available in an operational environment, in contrast to rain gauges. The floods identified through flash flood monitoring and prediction systems are subject to at least three sources of uncertainty: (a) rainfall estimation errors, (b) streamflow prediction errors arising from model structural issues, and (c) errors in defining a flood event. The current study focuses on the first source of uncertainty and its effect on deriving important climatological characteristics of extreme rainfall statistics. Examples of such characteristics are rainfall amounts with certain Average Recurrence Intervals (ARI) or Annual Exceedance Probabilities (AEP), which are highly valuable for hydrologic and civil engineering design purposes. Gauge-based precipitation frequency estimates (PFE) have been maturely developed and widely used over the last several decades. More recently, there has been growing interest in the research community in exploring the use of radar-based rainfall products for developing PFE and understanding the associated uncertainties. This study uses radar-based multi-sensor precipitation estimates (MPE) over 11 years to derive PFEs corresponding to various return periods over a spatial domain that covers the state of Louisiana in the southern USA. The PFE estimation approach used in this study is based on fitting a generalized extreme value (GEV) distribution to annual maximum series (AMS) of extreme rainfall data. Fitting GEV distributions at each radar pixel individually, however, can produce quantile estimators with large variance and serious bias. Hence, a regional frequency analysis (RFA) approach is applied, in which data from the pixels surrounding each pixel within a defined homogeneous region are pooled. 
In this study, the region-of-influence approach along with the index flood technique is used in the RFA. A bootstrap procedure is carried out to account for the uncertainty in the distribution parameters and to construct 90% confidence intervals (i.e., 5% and 95% confidence limits) on the AMS-based precipitation frequency curves.
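
A single-pixel sketch of the AMS/GEV step with a bootstrap interval, assuming a hypothetical 11-year annual-maximum series and SciPy's `genextreme`; the regional pooling of the RFA is omitted.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(4)
# hypothetical 11-year annual-maximum precipitation series for one radar pixel (mm)
ams = genextreme.rvs(-0.1, loc=80, scale=25, size=11, random_state=42)

def return_level(sample, ari=100):
    """ARI-year rainfall estimate from a GEV fit to an annual maximum series."""
    c, loc, scale = genextreme.fit(sample)
    return genextreme.ppf(1 - 1 / ari, c, loc=loc, scale=scale)

est = return_level(ams)
# nonparametric bootstrap: resample the 11 annual maxima, refit the GEV each time
boot = [return_level(rng.choice(ams, size=len(ams))) for _ in range(200)]
boot = [r for r in boot if np.isfinite(r)]      # guard against degenerate refits
lo, hi = np.percentile(boot, [5, 95])           # 90% interval, as in the study
```

With only 11 years of data the interval is wide, which is precisely the variance problem that motivates pooling pixels in the RFA.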

  7. Liver stiffness measurement by transient elastography predicts late posthepatectomy outcomes in patients undergoing resection for hepatocellular carcinoma.

    PubMed

    Rajakannu, Muthukumarassamy; Cherqui, Daniel; Ciacio, Oriana; Golse, Nicolas; Pittau, Gabriella; Allard, Marc Antoine; Antonini, Teresa Maria; Coilly, Audrey; Sa Cunha, Antonio; Castaing, Denis; Samuel, Didier; Guettier, Catherine; Adam, René; Vibert, Eric

    2017-10-01

    Postoperative hepatic decompensation is a serious complication of liver resection in patients undergoing hepatectomy for hepatocellular carcinoma. Liver fibrosis and clinically significant portal hypertension are well-known risk factors for hepatic decompensation. Liver stiffness measurement is a noninvasive method of evaluating hepatic venous pressure gradient and functional hepatic reserve by estimating hepatic fibrosis. The effectiveness of liver stiffness measurement in predicting persistent postoperative hepatic decompensation has not been investigated. Consecutive patients with resectable hepatocellular carcinoma were recruited prospectively, and liver stiffness of the nontumoral liver was measured using FibroScan. Hepatic venous pressure gradient was measured intraoperatively by direct puncture of the portal vein and inferior vena cava. Hepatic venous pressure gradient ≥10 mm Hg was defined as clinically significant portal hypertension. The primary outcome was persistent hepatic decompensation, defined as the presence of at least one of the following >3 months after hepatectomy: unresolved ascites, jaundice, and/or encephalopathy. One hundred and six hepatectomies were performed for hepatocellular carcinoma (84 men and 22 women with a median age of 67.5 years; median model for end-stage liver disease score of 8), including 22 right hepatectomies (20.8%), 3 central hepatectomies (2.8%), 12 left hepatectomies (11.3%), 11 bisegmentectomies (10.4%), 30 unisegmentectomies (28.3%), and 28 partial hepatectomies (26.4%). Ninety-day mortality was 4.7%. Nine patients (8.5%) developed postoperative hepatic decompensation. Multivariate logistic regression with 1,000 bootstrap replicates identified liver stiffness measurement (P = .001) as the only preoperative predictor of postoperative hepatic decompensation. 
Area under receiver operating characteristic curve for liver stiffness measurement and hepatic venous pressure gradient was 0.81 (95% confidence interval, 0.506-0.907) and 0.71 (95% confidence interval, 0.646-0.917), respectively. Liver stiffness measurement ≥22 kPa had 42.9% sensitivity and 92.6% specificity and hepatic venous pressure gradient ≥10 mm Hg had 28.6% sensitivity and 96.3% specificity. In selected patients undergoing liver resection for hepatocellular carcinoma, transient elastography is an easy and effective test to predict persistent hepatic decompensation preoperatively. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Willingness to pay for flexible working conditions of people with type 2 diabetes: discrete choice experiments.

    PubMed

    Nexo, M A; Cleal, B; Hagelund, Lise; Willaing, I; Olesen, K

    2017-12-15

    The increasing number of people with chronic diseases challenges workforce capacity. Type 2 diabetes (T2D) can have work-related consequences, such as early retirement. Laws of most high-income countries require workplaces to provide accommodations to enable people with chronic disabilities to manage their condition at work. A barrier to successful implementation of such accommodations can be lack of co-workers' willingness to support people with T2D. This study aimed to examine the willingness to pay (WTP) of people with and without T2D for five workplace initiatives that help individuals with type 2 diabetes manage their diabetes at work. Three samples with employed Danish participants were drawn from existing online panels: a general population sample (n = 600), a T2D sample (n = 693), and a matched sample of people without diabetes (n = 539). Participants completed discrete choice experiments eliciting their WTP (reduction in monthly salary, €/month) for five hypothetical workplace initiatives: part-time job, customized work, extra breaks with pay, and time off for medical consultations with and without pay. WTP was estimated by conditional logit models. Bootstrapping was used to estimate confidence intervals for WTP. There was an overall WTP for all initiatives. Average WTP for all attributes was 34 €/month (95% confidence interval [CI]: 27-43) in the general population sample, 32 €/month (95% CI: 26-38) in the T2D sample, and 55 €/month (95% CI: 43-71) in the matched sample. WTP for additional breaks with pay was considerably lower than for the other initiatives in all samples. People with T2D had significantly lower WTP than people without diabetes for part-time work, customized work, and time off without pay, but not for extra breaks or time off with pay. For people with and without T2D, WTP was present for initiatives that could improve management of diabetes at the workplace. WTP was lowest among people with T2D. 
Implementation of these initiatives seems feasible and may help prevent the unnecessary exclusion of people with T2D from work.
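
In a conditional logit model, WTP for an attribute is the negative ratio of its utility coefficient to the cost coefficient. A minimal sketch with made-up coefficients; the interval uses a Krinsky and Robb style parametric simulation as a stand-in for the bootstrapping used in the study, and the standard errors are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
# hypothetical estimated conditional-logit coefficients (not the study's values)
beta_initiative = 0.45        # utility of having a workplace initiative
beta_cost = -0.013            # disutility per €/month of salary reduction

# WTP is the marginal rate of substitution between the initiative and cost
wtp = -beta_initiative / beta_cost
print(round(wtp, 1))          # prints 34.6 (€/month)

# Krinsky-Robb style interval: simulate coefficients from assumed std. errors
se_initiative, se_cost = 0.05, 0.002
draws = (-rng.normal(beta_initiative, se_initiative, 5000)
         / rng.normal(beta_cost, se_cost, 5000))
lo, hi = np.percentile(draws, [2.5, 97.5])
```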

  9. The prognostic value of the QT interval and QT interval dispersion in all-cause and cardiac mortality and morbidity in a population of Danish citizens.

    PubMed

    Elming, H; Holm, E; Jun, L; Torp-Pedersen, C; Køber, L; Kircshoff, M; Malik, M; Camm, J

    1998-09-01

    To evaluate the prognostic value of the QT interval and QT interval dispersion in total and in cardiovascular mortality, as well as in cardiac morbidity, in a general population. The QT interval was measured in all leads from a standard 12-lead ECG in a random sample of 1658 women and 1797 men aged 30-60 years. QT interval dispersion was calculated from the maximal difference between QT intervals in any two leads. All cause mortality over 13 years, and cardiovascular mortality as well as cardiac morbidity over 11 years, were the main outcome parameters. Subjects with a prolonged QT interval (430 ms or more) or prolonged QT interval dispersion (80 ms or more) were at higher risk of cardiovascular death and cardiac morbidity than subjects whose QT interval was less than 360 ms, or whose QT interval dispersion was less than 30 ms. Cardiovascular death relative risk ratios, adjusted for age, gender, myocardial infarct, angina pectoris, diabetes mellitus, arterial hypertension, smoking habits, serum cholesterol level, and heart rate were 2.9 for the QT interval (95% confidence interval 1.1-7.8) and 4.4 for QT interval dispersion (95% confidence interval 1.0-19.1). Fatal and non-fatal cardiac morbidity relative risk ratios were similar, at 2.7 (95% confidence interval 1.4-5.5) for the QT interval and 2.2 (95% confidence interval 1.1-4.0) for QT interval dispersion. Prolongation of the QT interval and QT interval dispersion independently affected the prognosis of cardiovascular mortality and cardiac fatal and non-fatal morbidity in a general population over 11 years.
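
QT interval dispersion as defined in this record, the maximal difference between QT intervals in any two leads, reduces to the range across leads. A sketch with hypothetical per-lead measurements and the thresholds used in the study:

```python
# QT intervals (ms) measured in the leads of one 12-lead ECG; hypothetical values
qt_by_lead = [402, 395, 410, 388, 399, 405, 392, 400, 396, 407, 398, 403]

# maximal difference between QT intervals in any two leads = max - min
qt_dispersion = max(qt_by_lead) - min(qt_by_lead)
qt_max = max(qt_by_lead)

prolonged_qt = qt_max >= 430              # prolonged-QT threshold from the study
prolonged_dispersion = qt_dispersion >= 80  # prolonged-dispersion threshold
print(qt_dispersion, prolonged_qt, prolonged_dispersion)   # → 22 False False
```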

  10. Intakes of magnesium, potassium, and calcium and the risk of stroke among men.

    PubMed

    Adebamowo, Sally N; Spiegelman, Donna; Flint, Alan J; Willett, Walter C; Rexrode, Kathryn M

    2015-10-01

    Intakes of magnesium, potassium, and calcium have been inversely associated with the incidence of hypertension, a known risk factor for stroke. However, only a few studies have examined intakes of these cations in relation to risk of stroke. The aim of this study was to investigate whether high intake of magnesium, potassium, and calcium is associated with reduced stroke risk among men. We prospectively examined the associations between intakes of magnesium, potassium, and calcium from diet and supplements, and the risk of incident stroke among 42 669 men in the Health Professionals Follow-up Study, aged 40 to 75 years and free of diagnosed cardiovascular disease and cancer at baseline in 1986. We calculated the hazard ratio of total, ischemic, and haemorrhagic strokes by quintiles of each cation intake, and of a combined dietary score of all three cations, using multivariate Cox proportional hazard models. During 24 years of follow-up, 1547 total stroke events were documented. In multivariate analyses, the relative risks and 95% confidence intervals of total stroke for men in the highest vs. lowest quintile were 0·87 (95% confidence interval, 0·74-1·02; P, trend = 0·04) for dietary magnesium, 0·89 (95% confidence interval, 0·76-1·05; P, trend = 0·10) for dietary potassium, and 0·89 (95% confidence interval, 0·75-1·04; P, trend = 0·25) for dietary calcium intake. The relative risk of total stroke for men in the highest vs. lowest quintile was 0·74 (95% confidence interval, 0·59-0·93; P, trend = 0·003) for supplemental magnesium, 0·66 (95% confidence interval, 0·50-0·86; P, trend = 0·002) for supplemental potassium, and 1·01 (95% confidence interval, 0·84-1·20; P, trend = 0·83) for supplemental calcium intake. For total intake (dietary and supplemental), the relative risk of total stroke for men in the highest vs. 
lowest quintile was 0·83 (95% confidence interval, 0·70-0·99; P, trend = 0·04) for magnesium, 0·88 (95% confidence interval, 0·75-4; P, trend = 6) for potassium, and 3 (95% confidence interval, 79-09; P, trend = 84) for calcium. Men in the highest quintile for a combined dietary score of all three cations had a multivariate relative risk of 0·79 (95% confidence interval, 0·67-0·92; P, trend = 0·008) for total stroke, compared with those in the lowest. A diet rich in magnesium, potassium, and calcium may contribute to reduced risk of stroke among men. Because of significant collinearity, the independent contribution of each cation is difficult to define. © 2015 World Stroke Organization.

  11. Neuraxial analgesia to increase the success rate of external cephalic version: a systematic review and meta-analysis of randomized controlled trials.

    PubMed

    Magro-Malosso, Elena Rita; Saccone, Gabriele; Di Tommaso, Mariarosaria; Mele, Michele; Berghella, Vincenzo

    2016-09-01

    External cephalic version is a medical procedure in which the fetus is externally manipulated to assume the cephalic presentation. The use of neuraxial analgesia for facilitating the version has been evaluated in several randomized clinical trials, but its potential effects are still controversial. The objective of the study was to evaluate the effectiveness of neuraxial analgesia as an intervention to increase the success rate of external cephalic version. Searches were performed in electronic databases with the use of a combination of text words related to external cephalic version and neuraxial analgesia from the inception of each database to January 2016. We included all randomized clinical trials of women, with a gestational age ≥36 weeks and breech or transverse fetal presentation, undergoing external cephalic version who were randomized to neuraxial analgesia, including spinal, epidural, or combined spinal-epidural techniques (ie, intervention group) or to a control group (either intravenous analgesia or no treatment). The primary outcome was the successful external cephalic version. The summary measures were reported as relative risk or as mean differences with a 95% confidence interval. Nine randomized clinical trials (934 women) were included in this review. Women who received neuraxial analgesia had a significantly higher incidence of successful external cephalic version (58.4% vs 43.1%; relative risk, 1.44, 95% confidence interval, 1.27-1.64), cephalic presentation in labor (55.1% vs 40.2%; relative risk, 1.37, 95% confidence interval, 1.08-1.73), and vaginal delivery (54.0% vs 44.6%; relative risk, 1.21, 95% confidence interval, 1.04-1.41) compared with those who did not. 
Women who were randomized to the intervention group also had a significantly lower incidence of cesarean delivery (46.0% vs 55.3%; relative risk, 0.83, 95% confidence interval, 0.71-0.97), maternal discomfort (1.2% vs 9.3%; relative risk, 0.12, 95% confidence interval, 0.02-0.99), and lower pain, assessed by the visual analog scale pain score (mean difference, -4.52 points, 95% confidence interval, -5.35 to -3.69) compared with the control group. The incidences of emergency cesarean delivery (1.6% vs 2.5%; relative risk, 0.63, 95% confidence interval, 0.24-1.70), transient bradycardia (11.8% vs 8.3%; relative risk, 1.42, 95% confidence interval, 0.72-2.80), nonreassuring fetal testing, excluding transient bradycardia, after external cephalic version (6.9% vs 7.4%; relative risk, 0.93, 95% confidence interval, 0.53-1.64), and abruptio placentae (0.4% vs 0.4%; relative risk, 1.01, 95% confidence interval, 0.06-16.1) were similar. Administration of neuraxial analgesia significantly increases the success rate of external cephalic version among women with malpresentation at term or late preterm, which then significantly increases the incidence of vaginal delivery. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Add-On Antihypertensive Medications to Angiotensin-Aldosterone System Blockers in Diabetes: A Comparative Effectiveness Study.

    PubMed

    Schroeder, Emily B; Chonchol, Michel; Shetterly, Susan M; Powers, J David; Adams, John L; Schmittdiel, Julie A; Nichols, Gregory A; O'Connor, Patrick J; Steiner, John F

    2018-05-07

    In individuals with diabetes, the comparative effectiveness of add-on antihypertensive medications added to an angiotensin-converting enzyme inhibitor or angiotensin II receptor blocker on the risk of significant kidney events is unknown. We used an observational, multicenter cohort of 21,897 individuals with diabetes to compare individuals who added β-blockers, dihydropyridine calcium channel blockers, loop diuretics, or thiazide diuretics to angiotensin-converting enzyme inhibitors or angiotensin II receptor blockers. We examined the hazard of significant kidney events, cardiovascular events, and death using Cox proportional hazard models with propensity score weighting. The composite significant kidney event end point was defined as the first occurrence of a ≥30% decline in eGFR to an eGFR <60 ml/min per 1.73 m², initiation of dialysis, or kidney transplant. The composite cardiovascular event end point was defined as the first occurrence of hospitalization for acute myocardial infarction, acute coronary syndrome, stroke, or congestive heart failure; coronary artery bypass grafting; or percutaneous coronary intervention, and it was only examined in those free of cardiovascular disease at baseline. Over a maximum of 5 years, there were 4707 significant kidney events, 1498 deaths, and 818 cardiovascular events. Compared with thiazide diuretics, hazard ratios for significant kidney events for β-blockers, calcium channel blockers, and loop diuretics were 0.81 (95% confidence interval, 0.74 to 0.89), 0.67 (95% confidence interval, 0.58 to 0.78), and 1.19 (95% confidence interval, 1.00 to 1.41), respectively. Compared with thiazide diuretics, hazard ratios of mortality for β-blockers, calcium channel blockers, and loop diuretics were 1.19 (95% confidence interval, 0.97 to 1.44), 0.73 (95% confidence interval, 0.52 to 1.03), and 1.67 (95% confidence interval, 1.31 to 2.13), respectively. 
Compared with thiazide diuretics, hazard ratios of cardiovascular events for β -blockers, calcium channel blockers, and loop diuretics compared with thiazide diuretics were 1.65 (95% confidence interval, 1.39 to 1.96), 1.05 (95% confidence interval, 0.80 to 1.39), and 1.55 (95% confidence interval, 1.05 to 2.27), respectively. Compared with thiazide diuretics, calcium channel blockers were associated with a lower risk of significant kidney events and a similar risk of cardiovascular events. Copyright © 2018 by the American Society of Nephrology.

  13. Maternal steroid therapy for fetuses with second-degree immune-mediated congenital atrioventricular block: a systematic review and meta-analysis.

    PubMed

    Ciardulli, Andrea; D'Antonio, Francesco; Magro-Malosso, Elena R; Manzoli, Lamberto; Anisman, Paul; Saccone, Gabriele; Berghella, Vincenzo

    2018-03-07

To explore the effect of maternal fluorinated steroid therapy on fetuses affected by second-degree immune-mediated congenital atrioventricular block. Studies reporting the outcome of fetuses with second-degree immune-mediated congenital atrioventricular block diagnosed on prenatal ultrasound and treated with fluorinated steroids, compared with those not treated, were included. The primary outcome was the overall progression of congenital atrioventricular block to either continuous or intermittent third-degree congenital atrioventricular block at birth. Meta-analyses of proportions using a random-effects model and meta-analyses using individual-data random-effects logistic regression were used. Five studies (71 fetuses) were included. The progression rate to third-degree congenital atrioventricular block at birth was 52% (95% confidence interval 23-79) in fetuses treated with steroids and 73% (95% confidence interval 39-94) in fetuses not receiving steroid therapy. The overall rate of regression to either first-degree, intermittent first-/second-degree or sinus rhythm in fetuses treated with steroids was 25% (95% confidence interval 12-41) compared with 23% (95% confidence interval 8-44) in those not treated. Stable (constant) second-degree congenital atrioventricular block at birth was present in 11% (95% confidence interval 2-27) of cases in the treated group and in none of the newborns in the untreated group, whereas complete regression to sinus rhythm occurred in 21% (95% confidence interval 6-42) of fetuses receiving steroids vs. 9% (95% confidence interval 0-41) of those untreated. There is still limited evidence that administering fluorinated steroids improves the outcome of fetuses with second-degree immune-mediated congenital atrioventricular block. © 2018 Nordic Federation of Societies of Obstetrics and Gynecology.

  14. Active management of the third stage of labor with and without controlled cord traction: a systematic review and meta-analysis of randomized controlled trials.

    PubMed

    Du, Yongming; Ye, Man; Zheng, Feiyun

    2014-07-01

To determine the specific effect of controlled cord traction in the third stage of labor in the prevention of postpartum hemorrhage. We searched PubMed, Scopus and Web of Science (inception to 30 October 2013). Randomized controlled trials comparing controlled cord traction with hands-off management in the third stage of labor were included. Five randomized controlled trials involving a total of 30 532 participants were eligible. No significant difference was found between controlled cord traction and hands-off management groups with respect to the incidence of severe postpartum hemorrhage (relative risk 0.91, 95% confidence interval 0.77-1.08), need for blood transfusion (relative risk 0.96, 95% confidence interval 0.69-1.33) or therapeutic uterotonics (relative risk 0.94, 95% confidence interval 0.88-1.01). However, controlled cord traction reduced the incidence of postpartum hemorrhage in general (relative risk 0.93, 95% confidence interval 0.87-0.99; number-needed-to-treat 111, 95% confidence interval 61-666), as well as manual removal of the placenta (relative risk 0.70, 95% confidence interval 0.58-0.84) and duration of the third stage of labor (mean difference -3.20, 95% confidence interval -3.21 to -3.19). Controlled cord traction appears to reduce the risk of any postpartum hemorrhage in a general sense, as well as manual removal of the placenta and the duration of the third stage of labor. However, the reduction in the occurrence of severe postpartum hemorrhage, need for additional uterotonics and blood transfusion is not statistically significant. © 2014 Nordic Federation of Societies of Obstetrics and Gynecology.

  15. Loss of DPC4/SMAD4 expression in primary gastrointestinal neuroendocrine tumors is associated with cancer-related death after resection.

    PubMed

    Roland, Christina L; Starker, Lee F; Kang, Y; Chatterjee, Deyali; Estrella, Jeannelyn; Rashid, Asif; Katz, Matthew H; Aloia, Thomas A; Lee, Jeffrey E; Dasari, Arvind; Yao, James C; Fleming, Jason B

    2017-03-01

Gastrointestinal neuroendocrine tumors have frequent loss of DPC4/SMAD4 expression, a known tumor suppressor. The impact of SMAD4 loss on gastrointestinal neuroendocrine tumor aggressiveness or cancer-related patient outcomes is not defined. We examined the expression of SMAD4 in resected gastrointestinal neuroendocrine tumors and its impact on oncologic outcomes. Patients who underwent complete curative operative resection of gastrointestinal neuroendocrine tumors were identified retrospectively (n = 38). Immunohistochemical staining for SMAD4 expression was scored by a blinded pathologist and correlated with clinicopathologic features and oncologic outcomes. Twenty-nine percent of the gastrointestinal neuroendocrine tumors were SMAD4-negative and 71% SMAD4-positive. Median overall survival was 155 months (95% confidence interval, 102-208 months). Loss of SMAD4 was associated with both decreased median disease-free survival (28 months; 95% confidence interval, 16-40 months) compared with 223 months (95% confidence interval, 3-443 months) for SMAD4-positive patients (P = .03) and decreased median disease-specific survival (SMAD4-negative: 137 [95% confidence interval, 81-194] months versus SMAD4-positive: 204 [95% confidence interval, 143-264] months; P = .04). This translated into a decrease in median overall survival (SMAD4-negative: 125 [95% confidence interval, 51-214] months versus SMAD4-positive: 185 [95% confidence interval, 138-232] months; P = .02). Consistent with the known biology of the DPC4/SMAD4 gene, an absence of its protein expression in primary gastrointestinal neuroendocrine tumors was negatively associated with outcomes after curative operative resection. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Calculation of the confidence intervals for transformation parameters in the registration of medical images

    PubMed Central

    Bansal, Ravi; Staib, Lawrence H.; Laine, Andrew F.; Xu, Dongrong; Liu, Jun; Posecion, Lainie F.; Peterson, Bradley S.

    2010-01-01

Images from different individuals typically cannot be registered precisely because anatomical features within the images differ across the people imaged and because the current methods for image registration have inherent technological limitations that interfere with perfect registration. Quantifying the inevitable error in image registration is therefore of crucial importance in assessing the effects that image misregistration may have on subsequent analyses in an imaging study. We have developed a mathematical framework for quantifying errors in registration by computing the confidence intervals of the estimated parameters (3 translations, 3 rotations, and 1 global scale) for the similarity transformation. The presence of noise in images and the variability in anatomy across individuals ensure that estimated registration parameters are always random variables. We assume a functional relation among intensities across voxels in the images, and we use the theory of nonlinear, least-squares estimation to show that the parameters are multivariate Gaussian distributed. We then use the covariance matrix of this distribution to compute the confidence intervals of the transformation parameters. These confidence intervals provide a quantitative assessment of the registration error across the images. Because transformation parameters are nonlinearly related to the coordinates of landmark points in the brain, we subsequently show that the coordinates of those landmark points are also multivariate Gaussian distributed. Using these distributions, we then compute the confidence intervals of the coordinates for landmark points in the image. Each of these confidence intervals in turn provides a quantitative assessment of the registration error at a particular landmark point. Because our method is computationally intensive, however, its current implementation is limited to assessing the error of the parameters in the similarity transformation across images.
We assessed the performance of our method in computing the error in estimated similarity parameters by applying it to a real-world dataset. Our results showed that the size of the confidence intervals computed using our method decreased (i.e., our confidence in the registration of images from different individuals increased) as the amount of blur in the images increased. Moreover, the size of the confidence intervals increased for increasing amounts of noise, misregistration, and differing anatomy. Thus, our method precisely quantified confidence in the registration of images that contain varying amounts of misregistration and varying anatomy across individuals. PMID:19138877
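The covariance-based idea in this record can be illustrated in one dimension: fit a model by nonlinear least squares and read approximate Gaussian confidence intervals for the parameters off the estimated covariance matrix. The model, data, and seed below are invented for illustration; this is a sketch of the general technique, not the authors' registration pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

# 1-D analogue: parameters estimated by nonlinear least squares are
# approximately Gaussian, so the covariance matrix of the estimates
# yields confidence intervals for each parameter.
def model(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = model(x, 5.0, 1.2) + rng.normal(0, 0.02, x.size)  # noisy observations

popt, pcov = curve_fit(model, x, y, p0=[4.0, 1.0])
se = np.sqrt(np.diag(pcov))                            # standard errors
ci = [(p - 1.96 * s, p + 1.96 * s) for p, s in zip(popt, se)]
print(ci)  # approximate 95% intervals for (mu, sigma)
```

The widths of these intervals play the same role as the registration-error bounds described above: narrower intervals mean more confidence in the estimated transformation parameters.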

  17. Reliability of the identification of the systemic inflammatory response syndrome in critically ill infants and children.

    PubMed

    Juskewitch, Justin E; Prasad, Swati; Salas, Carlos F Santillan; Huskins, W Charles

    2012-01-01

To assess interobserver reliability of the identification of episodes of the systemic inflammatory response syndrome in critically ill hospitalized infants and children. Retrospective, cross-sectional study of the application of the 2005 consensus definition of systemic inflammatory response syndrome in infants and children by two independent, trained reviewers using information in the electronic medical record. Eighteen-bed multidisciplinary medical/surgical pediatric intensive care unit. A randomly selected sample of children admitted consecutively to the pediatric intensive care unit between May 1 and September 30, 2009. None. Sixty infants and children were selected from a total of 343 admitted patients. Their median age was 3.9 yrs (interquartile range, 1.5-12.7), 57% were female, and 68% were Caucasian. Nineteen (32%) children were identified by both reviewers as having an episode of systemic inflammatory response syndrome (88% agreement, 95% confidence interval 78-94; κ = 0.75, 95% confidence interval 0.59-0.92). Among these 19 children, agreement between the reviewers for individual systemic inflammatory response syndrome criteria was: temperature (84%, 95% confidence interval 60-97); white blood cell count (89%, 95% confidence interval 67-99); respiratory rate (84%, 95% confidence interval 60-97); and heart rate (68%, 95% confidence interval 33-87). Episodes of systemic inflammatory response syndrome in critically ill infants and children can be identified reproducibly using the consensus definition.
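Agreement statistics of the kind reported above (percent agreement, Cohen's κ, and a 95% interval) come from a 2×2 reviewer-agreement table. The counts below are a hypothetical reconstruction chosen only to roughly reproduce the reported 88% agreement and κ ≈ 0.75; the standard error is the common large-sample approximation, not necessarily the formula used in the study.

```python
import math

# Hypothetical 2x2 agreement table for two reviewers (counts invented):
both_yes, both_no = 19, 34
r1_only, r2_only = 3, 4                       # disagreements
n = both_yes + both_no + r1_only + r2_only    # 60 children

po = (both_yes + both_no) / n                 # observed agreement
p1 = (both_yes + r1_only) / n                 # reviewer 1 "SIRS" rate
p2 = (both_yes + r2_only) / n                 # reviewer 2 "SIRS" rate
pe = p1 * p2 + (1 - p1) * (1 - p2)            # agreement expected by chance
kappa = (po - pe) / (1 - pe)

# Large-sample approximate standard error and 95% confidence interval
se = math.sqrt(po * (1 - po) / (n * (1 - pe) ** 2))
lo, hi = kappa - 1.96 * se, kappa + 1.96 * se
```

With these invented counts the observed agreement is 53/60 ≈ 88% and κ comes out near 0.75, close to the published figures.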

  18. What was different about exposures reported by male Australian Gulf War veterans for the 1991 Persian Gulf War, compared with exposures reported for other deployments?

    PubMed

    Glass, Deborah C; Sim, Malcolm R; Kelsall, Helen L; Ikin, Jill F; McKenzie, Dean; Forbes, Andrew; Ittak, Peter

    2006-07-01

    This study identified chemical and environmental exposures specifically associated with the 1991 Persian Gulf War. Exposures were self-reported in a postal questionnaire, in the period of 2000-2002, by 1,424 Australian male Persian Gulf War veterans in relation to their 1991 Persian Gulf War deployment and by 625 Persian Gulf War veterans and 514 members of a military comparison group in relation to other active deployments. Six of 28 investigated exposures were experienced more frequently during the Persian Gulf War than during other deployments; these were exposure to smoke (odds ratio [OR], 4.4; 95% confidence interval, 3.0-6.6), exposure to dust (OR, 3.7; 95% confidence interval, 2.6-5.3), exposure to chemical warfare agents (OR, 3.9; 95% confidence interval, 2.1-7.9), use of respiratory protective equipment (OR, 13.6; 95% confidence interval, 7.6-26.8), use of nuclear, chemical, and biological protective suits (OR, 8.9; 95% confidence interval, 5.4-15.4), and entering/inspecting enemy equipment (OR, 3.1; 95% confidence interval, 2.1-4.8). Other chemical and environmental exposures were not specific to the Persian Gulf War deployment but were also reported in relation to other deployments. The number of exposures reported was related to service type and number of deployments but not to age or rank.
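Odds ratios with 95% confidence intervals like those above are computed from 2×2 exposure tables, with the interval obtained on the log scale (Woolf's method). A minimal sketch; the counts in the example are invented, not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf (log-normal) 95% CI from a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

For example, `odds_ratio_ci(20, 10, 15, 30)` gives an odds ratio of 4.0 with an approximate 95% CI of about (1.5, 10.7).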

  19. Statin therapy in lower limb peripheral arterial disease: Systematic review and meta-analysis.

    PubMed

    Antoniou, George A; Fisher, Robert K; Georgiadis, George S; Antoniou, Stavros A; Torella, Francesco

    2014-11-01

To investigate and analyse the existing evidence supporting statin therapy in patients with lower limb atherosclerotic arterial disease. A systematic search of electronic information sources was undertaken to identify studies comparing cardiovascular outcomes in patients with lower limb peripheral arterial disease treated with a statin and those not receiving a statin. Estimates were combined applying fixed- or random-effects models. Twelve observational cohort studies and two randomised trials reporting 19,368 patients were selected. Statin therapy was associated with reduced all-cause mortality (odds ratio 0.60, 95% confidence interval 0.46-0.78) and incidence of stroke (odds ratio 0.77, 95% confidence interval 0.67-0.89). A trend towards improved cardiovascular mortality (odds ratio 0.62, 95% confidence interval 0.35-1.11), myocardial infarction (odds ratio 0.62, 95% confidence interval 0.38-1.01), and the composite of death/myocardial infarction/stroke (odds ratio 0.91, 95% confidence interval 0.81-1.03), was identified. Meta-analyses of studies performing adjustments showed decreased all-cause mortality in statin users (hazard ratio 0.77, 95% confidence interval 0.68-0.86). Evidence supporting statins' protective role in patients with lower limb peripheral arterial disease is insufficient. Statin therapy seems to be effective in reducing all-cause mortality and the incidence of cerebrovascular events in patients diagnosed with peripheral arterial disease. Copyright © 2014 Elsevier Inc. All rights reserved.
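The fixed-effect inverse-variance pooling that underlies such meta-analyses can be sketched as follows. Standard errors are recovered from the reported 95% CIs on the log scale; the input values in the example are illustrative, not the study data.

```python
import math

def pool_fixed(ors, cis, z=1.96):
    """Fixed-effect inverse-variance pooling of odds ratios given their
    95% CIs; SEs are recovered from the CI widths on the log scale."""
    logs = [math.log(o) for o in ors]
    ses = [(math.log(hi) - math.log(lo)) / (2 * z) for lo, hi in cis]
    w = [1 / s ** 2 for s in ses]                       # inverse-variance weights
    pooled = sum(wi * li for wi, li in zip(w, logs)) / sum(w)
    se = 1 / math.sqrt(sum(w))
    return (math.exp(pooled),
            math.exp(pooled - z * se),
            math.exp(pooled + z * se))
```

Pooling two hypothetical studies, `pool_fixed([0.70, 0.85], [(0.55, 0.89), (0.70, 1.03)])`, yields a combined odds ratio between the two inputs with a narrower interval than either study alone.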

  20. On the appropriateness of applying chi-square distribution based confidence intervals to spectral estimates of helicopter flyover data

    NASA Technical Reports Server (NTRS)

    Rutledge, Charles K.

    1988-01-01

The validity of applying chi-square based confidence intervals to far-field acoustic flyover spectral estimates was investigated. Simulated data, using a Kendall series and experimental acoustic data from the NASA/McDonnell Douglas 500E acoustics test, were analyzed. Statistical significance tests to determine the equality of distributions of the simulated and experimental data relative to theoretical chi-square distributions were performed. Bias and uncertainty errors associated with the spectral estimates were easily identified from the data sets. A model relating the uncertainty and bias errors to the estimates was developed, which aided in determining the appropriateness of the chi-square distribution based confidence intervals. Such confidence intervals were appropriate for nontonally associated frequencies of the experimental data but were inappropriate for tonally associated estimate distributions. The appropriateness at the tonally associated frequencies was indicated by the presence of bias error and nonconformity of the distributions to the theoretical chi-square distribution. A technique for determining appropriate confidence intervals at the tonally associated frequencies was suggested.
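The chi-square intervals in question follow from the standard result that a smoothed spectral estimate with ν equivalent degrees of freedom is distributed approximately as S·χ²ν/ν. A generic sketch of that interval (not the flyover analysis itself):

```python
from scipy.stats import chi2

def psd_confidence_interval(s_hat, dof, alpha=0.05):
    """Chi-square based CI for a spectral estimate s_hat with dof
    equivalent degrees of freedom (e.g., dof = 2 * number of averaged
    periodogram segments). Valid only when the chi-square model holds,
    which the record above shows fails at tonally associated frequencies."""
    lo = dof * s_hat / chi2.ppf(1 - alpha / 2, dof)
    hi = dof * s_hat / chi2.ppf(alpha / 2, dof)
    return lo, hi
```

For example, with ν = 20 the 95% interval for a unit estimate spans roughly 0.59 to 2.09 times the estimated value.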

  1. Bootstrap study of genome-enabled prediction reliabilities using haplotype blocks across Nordic Red cattle breeds.

    PubMed

    Cuyabano, B C D; Su, G; Rosa, G J M; Lund, M S; Gianola, D

    2015-10-01

This study compared the accuracy of genome-enabled prediction models using individual single nucleotide polymorphisms (SNP) or haplotype blocks as covariates when using either a single breed or a combined population of Nordic Red cattle. The main objective was to compare predictions of breeding values of complex traits using a combined training population with haplotype blocks, with predictions using a single breed as training population and individual SNP as predictors. To compare the prediction reliabilities, bootstrap samples were taken from the test data set. With the bootstrapped samples of prediction reliabilities, we built and graphed confidence ellipses to allow comparisons. Finally, measures of statistical distances were used to calculate the gain in predictive ability. Our analyses are innovative in the context of assessment of predictive models, allowing a better understanding of prediction reliabilities and providing a statistical basis for assessing whether one prediction scenario is indeed more accurate than another. An ANOVA indicated that use of haplotype blocks produced significant gains mainly when Bayesian mixture models were used but not when Bayesian BLUP was fitted to the data. Furthermore, when haplotype blocks were used to train prediction models in a combined Nordic Red cattle population, we obtained up to a statistically significant 5.5% average gain in prediction accuracy, over predictions using individual SNP and training the model with a single breed. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
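The bootstrap comparison of prediction reliabilities can be mimicked in miniature: resample test individuals with replacement, recompute each model's predictive correlation on every resample, and examine the distribution of the difference. Everything below (data, "models", seed) is simulated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 300
truth = rng.normal(size=n)                   # stand-in for true breeding values
pred_a = truth + rng.normal(0, 1.0, n)       # predictions from "model A"
pred_b = truth + rng.normal(0, 0.8, n)       # "model B", slightly more accurate

B = 2000
diffs = np.empty(B)
for b in range(B):
    i = rng.integers(0, n, n)                # paired bootstrap resample
    r_a = np.corrcoef(truth[i], pred_a[i])[0, 1]
    r_b = np.corrcoef(truth[i], pred_b[i])[0, 1]
    diffs[b] = r_b - r_a                     # gain in predictive correlation

lo, hi = np.percentile(diffs, [2.5, 97.5])   # percentile interval for the gain
```

If the interval for the gain excludes zero, one scenario can be called more accurate than the other, which is the spirit of the confidence-ellipse comparison described above.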

  2. A comparison of statistical methods for evaluating matching performance of a biometric identification device: a preliminary report

    NASA Astrophysics Data System (ADS)

    Schuckers, Michael E.; Hawley, Anne; Livingstone, Katie; Mramba, Nona

    2004-08-01

Confidence intervals are an important way to assess and estimate a parameter. In the case of biometric identification devices, several approaches to confidence intervals for an error rate have been proposed. Here we evaluate six of these methods. To complete this evaluation, we simulate data from a wide variety of parameter values. These data are simulated via a correlated binary distribution. We then determine how well these methods do at what they say they do: capturing the parameter inside the confidence interval. In addition, the average widths of the various confidence intervals are recorded for each set of parameters. The complete results of this simulation are presented graphically for easy comparison. We conclude by making a recommendation regarding which method performs best.
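A stripped-down version of such a coverage experiment, using independent (rather than correlated) binary trials and two textbook interval methods, looks like this; the parameter values are arbitrary.

```python
import math
import random

def wald_ci(k, n, z=1.96):
    """Simple Wald interval for a binomial proportion."""
    p = k / n
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a binomial proportion."""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

def coverage(ci_fn, p=0.05, n=100, trials=5000, seed=1):
    """Monte Carlo estimate of how often the interval captures the true p."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        k = sum(rng.random() < p for _ in range(n))
        lo, hi = ci_fn(k, n)
        hits += lo <= p <= hi
    return hits / trials
```

For a small error rate like p = 0.05, the Wald interval typically falls well short of its nominal 95% coverage while the Wilson interval comes much closer, which is the kind of conclusion the simulation study above draws graphically.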

  3. Comparison of bootstrap approaches for estimation of uncertainties of DTI parameters.

    PubMed

    Chung, SungWon; Lu, Ying; Henry, Roland G

    2006-11-01

Bootstrap is an empirical non-parametric statistical technique based on data resampling that has been used to quantify uncertainties of diffusion tensor MRI (DTI) parameters, useful in tractography and in assessing DTI methods. The current bootstrap method (repetition bootstrap) used for DTI analysis performs resampling within the data sharing common diffusion gradients, requiring multiple acquisitions for each diffusion gradient. Recently, the wild bootstrap was proposed, which can be applied without multiple acquisitions. In this paper, two new approaches are introduced, called residual bootstrap and repetition bootknife. We show that repetition bootknife corrects for the large bias present in the repetition bootstrap method and, therefore, better estimates the standard errors. Like wild bootstrap, residual bootstrap is applicable to a single-acquisition scheme, and both are based on regression residuals (called model-based resampling). Residual bootstrap is based on the assumption that the non-constant variance of measured diffusion-attenuated signals can be modeled, which is actually the assumption behind the widely used weighted least squares solution of the diffusion tensor. The performances of these bootstrap approaches were compared in terms of bias, variance, and overall error of bootstrap-estimated standard error by Monte Carlo simulation. We demonstrate that residual bootstrap has smaller biases and overall errors, which enables estimation of uncertainties with higher accuracy. Understanding the properties of these bootstrap procedures will help us to choose the optimal approach for estimating uncertainties that can benefit hypothesis testing based on DTI parameters, probabilistic fiber tracking, and optimizing DTI methods.
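The residual-bootstrap idea (resample regression residuals, add them back to the fitted values, and refit) is easiest to see in an ordinary linear regression rather than the weighted tensor fit; this toy sketch with invented data illustrates only the resampling scheme, not the DTI application.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100
x = np.linspace(0, 1, n)
X = np.column_stack([np.ones(n), x])          # design matrix: intercept + slope
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.5, n)

# Fit once, keep fitted values and residuals
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta_hat
resid = y - fitted

# Residual bootstrap: resample residuals, rebuild y, refit
B = 1000
boot = np.empty((B, 2))
for b in range(B):
    y_star = fitted + rng.choice(resid, size=n, replace=True)
    boot[b], *_ = np.linalg.lstsq(X, y_star, rcond=None)

se = boot.std(axis=0, ddof=1)   # bootstrap standard errors of (intercept, slope)
```

For single-acquisition DTI data the same scheme is applied to the residuals of the tensor regression (with appropriate weighting, per the record above) instead of resampling repeated measurements.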

  4. Expression of Proteins Involved in Epithelial-Mesenchymal Transition as Predictors of Metastasis and Survival in Breast Cancer Patients

    DTIC Science & Technology

    2013-11-01

[Table fragments only; the recoverable footnote text reads:] Unconditional logistic regression was used to estimate odds ratios (OR) and 95% confidence intervals (CI) for risk of node…; for risk of high-grade tumors…; and for the associations between each of the seven SNPs and…

  5. Closed-form confidence intervals for functions of the normal mean and standard deviation.

    PubMed

    Donner, Allan; Zou, G Y

    2012-08-01

    Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.
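The recovery idea can be sketched for one such function, the upper Bland-Altman limit of agreement θ = μ + 1.96σ: compute separate closed-form intervals for μ (Student's t) and σ (chi-square), then combine them MOVER-style. This is a schematic reading of the approach, not the authors' exact formulas.

```python
import math
import numpy as np
from scipy.stats import t, chi2

def mover_upper_loa(x, z=1.96, alpha=0.05):
    """MOVER-style CI for theta = mu + z*sigma (upper limit of agreement),
    built from separate closed-form CIs for the mean and the SD."""
    n = len(x)
    xbar, s = np.mean(x), np.std(x, ddof=1)
    # Closed-form CI for the mean
    half = t.ppf(1 - alpha / 2, n - 1) * s / math.sqrt(n)
    l1, u1 = xbar - half, xbar + half
    # Closed-form CI for z*sigma (chi-square interval for sigma, scaled)
    l2 = z * s * math.sqrt((n - 1) / chi2.ppf(1 - alpha / 2, n - 1))
    u2 = z * s * math.sqrt((n - 1) / chi2.ppf(alpha / 2, n - 1))
    est = xbar + z * s
    # MOVER combination for a sum of two parameters
    L = est - math.sqrt((xbar - l1) ** 2 + (z * s - l2) ** 2)
    U = est + math.sqrt((u1 - xbar) ** 2 + (u2 - z * s) ** 2)
    return est, L, U
```

Every quantity here has a closed form, which is the practical appeal the abstract emphasizes: no iteration or resampling is needed.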

  6. Statistical properties of four effect-size measures for mediation models.

    PubMed

    Miočević, Milica; O'Rourke, Holly P; MacKinnon, David P; Brown, Hendricks C

    2018-02-01

This project examined the performance of classical and Bayesian estimators of four effect-size measures for the indirect effect in a single-mediator model and a two-mediator model. Compared to the proportion and ratio mediation effect sizes, standardized mediation effect-size measures were relatively unbiased and efficient in the single-mediator model and the two-mediator model. Percentile and bias-corrected bootstrap interval estimates of ab/s_Y and ab(s_X)/s_Y in the single-mediator model outperformed interval estimates of the proportion and ratio effect sizes in terms of power, Type I error rate, coverage, imbalance, and interval width. For the two-mediator model, standardized effect-size measures were superior to the proportion and ratio effect-size measures. Furthermore, it was found that Bayesian point and interval summaries of posterior distributions of standardized effect-size measures reduced excessive relative bias for certain parameter combinations. The standardized effect-size measures are the best effect-size measures for quantifying mediated effects.
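A percentile bootstrap interval for the standardized mediated effect ab/s_Y in a single-mediator model can be sketched as follows; the data are simulated, a is the X→M slope, and b is the M→Y slope adjusting for X.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)             # mediator: a = 0.5
y = 0.4 * m + 0.2 * x + rng.normal(size=n)   # outcome:  b = 0.4

def ab_over_sy(x, m, y):
    """Standardized mediated effect a*b / s_Y from two regressions."""
    a = np.polyfit(x, m, 1)[0]                       # slope of M on X
    Xd = np.column_stack([np.ones_like(x), m, x])    # Y on M and X
    coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    b = coef[1]
    return a * b / np.std(y, ddof=1)

B = 2000
idx = np.arange(n)
boot = np.empty(B)
for bs in range(B):
    i = rng.choice(idx, size=n, replace=True)        # case resampling
    boot[bs] = ab_over_sy(x[i], m[i], y[i])

lo, hi = np.percentile(boot, [2.5, 97.5])            # percentile interval
```

A bias-corrected interval would additionally shift the percentiles by the fraction of bootstrap estimates below the point estimate, as in the BC procedure the record compares.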

  7. Assessing equity of healthcare utilization in rural China: results from nationally representative surveys from 1993 to 2008

    PubMed Central

    2013-01-01

Background The phenomenon of inequitable healthcare utilization in rural China interests policymakers and researchers; however, the inequity had not previously been measured with nationally representative data to establish its magnitude and trend. Methods Based on the National Health Service Survey (NHSS) in 1993, 1998, 2003, and 2008, a Probit model with the probability of an outpatient visit and the probability of an inpatient visit as the dependent variables was applied to estimate need-predicted healthcare utilization. Furthermore, need-standardized healthcare utilization was assessed through the indirect standardization method. The concentration index was measured to reflect income-related inequity of healthcare utilization. Results The concentration index of need-standardized outpatient utilization was 0.0486 [95% confidence interval (0.0399, 0.0574)], 0.0310 [95% confidence interval (0.0229, 0.0390)], 0.0167 [95% confidence interval (0.0069, 0.0264)] and −0.0108 [95% confidence interval (−0.0213, −0.0004)] in 1993, 1998, 2003 and 2008, respectively. For inpatient service, the concentration index was 0.0529 [95% confidence interval (0.0349, 0.0709)], 0.1543 [95% confidence interval (0.1356, 0.1730)], 0.2325 [95% confidence interval (0.2132, 0.2518)] and 0.1313 [95% confidence interval (0.1174, 0.1451)] in 1993, 1998, 2003 and 2008, respectively. Conclusions Utilization of both outpatient and inpatient services was pro-rich in rural China, with the exception of outpatient service in 2008. With the same needs for healthcare, rich rural residents utilized more healthcare services than poor rural residents. Compared to utilization of outpatient service, utilization of inpatient service was more inequitable. Inequity of outpatient service utilization reduced gradually from 1993 to 2008; meanwhile, inequity of inpatient service utilization increased dramatically from 1993 to 2003 and decreased significantly from 2003 to 2008.
Recent attempts in China to increase coverage of insurance and primary healthcare could be a contributing factor to counteract the inequity of outpatient utilization, but better benefit packages and delivery strategies still need to be tested and scaled up to reduce future inequity in inpatient utilization in rural China. PMID:23688260
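The concentration index used in this study is twice the covariance between health-service use and fractional income rank, divided by mean use; positive values indicate pro-rich utilization. A minimal implementation (variable names are mine, and this omits the need-standardization step the paper applies first):

```python
import numpy as np

def concentration_index(health, income):
    """Concentration index C = 2*cov(h, r) / mean(h), where r is the
    fractional income rank in (0, 1). C > 0 means utilization is
    concentrated among the better-off (pro-rich)."""
    order = np.argsort(income, kind="stable")
    h = np.asarray(health, dtype=float)[order]
    n = h.size
    rank = (np.arange(1, n + 1) - 0.5) / n          # fractional rank
    return 2.0 * np.cov(h, rank, bias=True)[0, 1] / h.mean()
```

Utilization that is identical across incomes gives C = 0, while utilization rising linearly with income rank gives a clearly positive index, matching the sign conventions in the results above.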

  8. Exposure to power frequency electric fields and the risk of childhood cancer in the UK

    PubMed Central

    Skinner, J; Mee, T J; Blackwell, R P; Maslanyj, M P; Simpson, J; Allen, S G; Day, N E

    2002-01-01

    The United Kingdom Childhood Cancer Study, a population-based case–control study covering the whole of Great Britain, incorporated a pilot study measuring electric fields. Measurements were made in the homes of 473 children who were diagnosed with a malignant neoplasm between 1992 and 1996 and who were aged 0–14 at diagnosis, together with 453 controls matched on age, sex and geographical location. Exposure assessments comprised resultant spot measurements in the child's bedroom and the family living-room. Temporal stability of bedroom fields was investigated through continuous logging of the 48-h vertical component at the child's bedside supported by repeat spot measurements. The principal exposure metric used was the mean of the pillow and bed centre measurements. For the 273 cases and 276 controls with fully validated measures, comparing those with a measured electric field exposure ⩾20 V m−1 to those in a reference category of exposure <10 V m−1, odds ratios of 1.31 (95% confidence interval 0.68–2.54) for acute lymphoblastic leukaemia, 1.32 (95% confidence interval 0.73–2.39) for total leukaemia, 2.12 (95% confidence interval 0.78–5.78) for central nervous system cancers and 1.26 (95% confidence interval 0.77–2.07) for all malignancies were obtained. When considering the 426 cases and 419 controls with no invalid measures, the corresponding odds ratios were 0.86 (95% confidence interval 0.49–1.51) for acute lymphoblastic leukaemia, 0.93 (95% confidence interval 0.56–1.54) for total leukaemia, 1.43 (95% confidence interval 0.68–3.02) for central nervous system cancers and 0.90 (95% confidence interval 0.59–1.35) for all malignancies. With exposure modelled as a continuous variable, odds ratios for an increase in the principal metric of 10 V m−1 were close to unity for all disease categories, never differing significantly from one. British Journal of Cancer (2002) 87, 1257–1266. 
doi:10.1038/sj.bjc.6600602 www.bjcancer.com © 2002 Cancer Research UK PMID:12439715

  9. A risk score for predicting coronary artery disease in women with angina pectoris and abnormal stress test finding.

    PubMed

    Lo, Monica Y; Bonthala, Nirupama; Holper, Elizabeth M; Banks, Kamakki; Murphy, Sabina A; McGuire, Darren K; de Lemos, James A; Khera, Amit

    2013-03-15

Women with angina pectoris and abnormal stress test findings commonly have no epicardial coronary artery disease (CAD) at catheterization. The aim of the present study was to develop a risk score to predict obstructive CAD in such patients. Data were analyzed from 337 consecutive women with angina pectoris and abnormal stress test findings who underwent cardiac catheterization at our center from 2003 to 2007. Forward selection multivariate logistic regression analysis was used to identify the independent predictors of CAD, defined by ≥50% diameter stenosis in ≥1 epicardial coronary artery. The independent predictors included age ≥55 years (odds ratio 2.3, 95% confidence interval 1.3 to 4.0), body mass index <30 kg/m(2) (odds ratio 1.9, 95% confidence interval 1.1 to 3.1), smoking (odds ratio 2.6, 95% confidence interval 1.4 to 4.8), low high-density lipoprotein cholesterol (odds ratio 2.9, 95% confidence interval 1.5 to 5.5), family history of premature CAD (odds ratio 2.4, 95% confidence interval 1.0 to 5.7), lateral abnormality on stress imaging (odds ratio 2.8, 95% confidence interval 1.5 to 5.5), and exercise capacity <5 metabolic equivalents (odds ratio 2.4, 95% confidence interval 1.1 to 5.6). When each variable was assigned 1 point and the points summed to constitute a risk score, there was a graded association between the score and prevalent CAD (ptrend <0.001). The risk score demonstrated good discrimination with a cross-validated c-statistic of 0.745 (95% confidence interval 0.70 to 0.79), and an optimized cutpoint of a score of ≤2 included 62% of the subjects and had a negative predictive value of 80%. In conclusion, a simple clinical risk score of 7 characteristics can help differentiate those more or less likely to have CAD among women with angina pectoris and abnormal stress test findings. This tool, if validated, could help to guide testing strategies in women with angina pectoris. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. Long-term socio-economic consequences and health care costs of poliomyelitis: a historical cohort study involving 3606 polio patients.

    PubMed

    Nielsen, Nete Munk; Kay, Lise; Wanscher, Benedikte; Ibsen, Rikke; Kjellberg, Jakob; Jennum, Poul

    2016-06-01

Worldwide 10-20 million individuals are living with disabilities after acute poliomyelitis. However, very little is known about the socio-economic consequences and health care costs of poliomyelitis. We carried out a historical register-based study including 3606 individuals hospitalised for poliomyelitis in Copenhagen, Denmark 1940-1954, and 13,795 age- and gender-matched Danes. Participants were followed from 1980 until 2012, and family, socio-economic conditions and health care costs were evaluated in different age groups using chi-squared tests, bootstrapped t tests or hazard ratios (HR) calculated in Cox-regression models. The analyses were performed separately for paralytic and non-paralytic polio survivors and their controls, respectively. Compared with controls a higher percentage of paralytic polio survivors remained childless, whereas no difference was observed for non-paralytic polio survivors. The educational level among paralytic as well as non-paralytic polio survivors was higher than that among their controls, employment rate at the ages of 40, 50 and 60 years was slightly lower, whereas total income in the age intervals of 31-40, 41-50 and 51-60 years was similar to controls. Paralytic and non-paralytic polio survivors had a 2.5 [HR = 2.52 (95 % confidence interval (CI); 2.29-2.77)] and 1.4 [HR = 1.35 (95 % CI; 1.23-1.49)]-fold higher risk, respectively, of receiving disability pension compared with controls. Personal health care costs were considerably higher in all age groups in both groups of polio survivors. Individuals with a history of poliomyelitis are well educated, have a slightly lower employment rate and an income similar to controls, but considerably higher health care costs.

  11. Confidence Intervals for True Scores Using the Skew-Normal Distribution

    ERIC Educational Resources Information Center

    Garcia-Perez, Miguel A.

    2010-01-01

    A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…

  12. The Role of Short-Term Memory Capacity and Task Experience for Overconfidence in Judgment under Uncertainty

    ERIC Educational Resources Information Center

    Hansson, Patrik; Juslin, Peter; Winman, Anders

    2008-01-01

    Research with general knowledge items demonstrates extreme overconfidence when people estimate confidence intervals for unknown quantities, but close to zero overconfidence when the same intervals are assessed by probability judgment. In 3 experiments, the authors investigated if the overconfidence specific to confidence intervals derives from…

  13. Confidence Intervals for Weighted Composite Scores under the Compound Binomial Error Model

    ERIC Educational Resources Information Center

    Kim, Kyung Yong; Lee, Won-Chan

    2018-01-01

    Reporting confidence intervals with test scores helps test users make important decisions about examinees by providing information about the precision of test scores. Although a variety of estimation procedures based on the binomial error model are available for computing intervals for test scores, these procedures assume that items are randomly…

  14. Towards the estimation of effect measures in studies using respondent-driven sampling.

    PubMed

    Rotondi, Michael A

    2014-06-01

    Respondent-driven sampling (RDS) is an increasingly common sampling technique to recruit hidden populations. Statistical methods for RDS are not straightforward due to the correlation between individual outcomes and subject weighting; thus, analyses are typically limited to estimation of population proportions. This manuscript applies the method of variance estimates recovery (MOVER) to construct confidence intervals for effect measures such as the risk difference (difference of proportions) or relative risk in studies using RDS. To illustrate the approach, MOVER is used to construct confidence intervals for differences in the prevalence of demographic characteristics between an RDS study and a convenience study of injection drug users. MOVER is then applied to obtain confidence intervals for the relative risk, across education levels, of HIV seropositivity and of current infection with syphilis, respectively. This approach provides a simple method to construct confidence intervals for effect measures in RDS studies. Since it relies only on a proportion and appropriate confidence limits, it can also be applied to previously published manuscripts.
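
As a concrete illustration (not the manuscript's own code), the MOVER limits for a risk difference p1 - p2 can be assembled from two single-proportion confidence intervals; the Wilson score interval is used here as the single-proportion method, which is a common but assumed choice:

```python
import math


def wilson_ci(x, n, z=1.96):
    """95% Wilson score interval for a single proportion x/n."""
    p = x / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half


def mover_diff(x1, n1, x2, n2):
    """MOVER confidence interval for p1 - p2, recovered from the
    two single-proportion intervals (l1, u1) and (l2, u2)."""
    p1, p2 = x1 / n1, x2 / n2
    l1, u1 = wilson_ci(x1, n1)
    l2, u2 = wilson_ci(x2, n2)
    d = p1 - p2
    lower = d - math.sqrt((p1 - l1) ** 2 + (u2 - p2) ** 2)
    upper = d + math.sqrt((u1 - p1) ** 2 + (p2 - l2) ** 2)
    return lower, upper
```

Because the limits are built only from each proportion and its confidence limits, the same recovery step works with any single-proportion interval, which is what makes the method applicable to previously published results.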

  15. Generalized anxiety disorder prevalence and comorbidity with depression in coronary heart disease: a meta-analysis.

    PubMed

    Tully, Phillip J; Cosh, Suzanne M

    2013-12-01

    Generalized anxiety disorder prevalence and comorbidity with depression in coronary heart disease patients remain unquantified. Systematic searching of Medline, Embase, SCOPUS and PsycINFO databases revealed 1025 unique citations. Aggregate generalized anxiety disorder prevalence (12 studies, N = 3485) was 10.94 per cent (95% confidence interval: 7.8-13.99) and 13.52 per cent (95% confidence interval: 8.39-18.66) employing Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition criteria (random effects). Lifetime generalized anxiety disorder prevalence was 25.80 per cent (95% confidence interval: 20.84-30.77). In seven studies, modest correlation was evident between generalized anxiety disorder and depression, Fisher's Z = .30 (95% confidence interval: .19-.42), suggesting that each psychiatric disorder is best conceptualized as contributing unique variance to coronary heart disease prognosis.

  16. Autonomous motivation mediates the relation between goals for physical activity and physical activity behavior in adolescents.

    PubMed

    Duncan, Michael J; Eyre, Emma Lj; Bryant, Elizabeth; Seghers, Jan; Galbraith, Niall; Nevill, Alan M

    2017-04-01

    Overall, 544 children (mean age ± standard deviation = 14.2 ± .94 years) completed self-report measures of physical activity goal content, behavioral regulations, and physical activity behavior. Body mass index was determined from height and mass. The indirect effect of intrinsic goal content on physical activity was statistically significant via autonomous (b = 162.27; 95% confidence interval [89.73, 244.70]), but not controlled motivation (b = 5.30; 95% confidence interval [-39.05, 45.16]). The indirect effect of extrinsic goal content on physical activity was statistically significant via autonomous (b = 106.25; 95% confidence interval [63.74, 159.13]) but not controlled motivation (b = 17.28; 95% confidence interval [-31.76, 70.21]). Weight status did not alter these findings.

  17. Confidence intervals for a difference between lognormal means in cluster randomization trials.

    PubMed

    Poirier, Julia; Zou, G Y; Koval, John

    2017-04-01

    Cluster randomization trials, in which intact social units are randomized to different interventions, have become popular in the last 25 years. Outcomes from these trials in many cases are positively skewed, following approximately lognormal distributions. When inference is focused on the difference between treatment arm arithmetic means, existent confidence interval procedures either make restricting assumptions or are complex to implement. We approach this problem by assuming log-transformed outcomes from each treatment arm follow a one-way random effects model. The treatment arm means are functions of multiple parameters for which separate confidence intervals are readily available, suggesting that the method of variance estimates recovery may be applied to obtain closed-form confidence intervals. A simulation study showed that this simple approach performs well in small sample sizes in terms of empirical coverage, relatively balanced tail errors, and interval widths as compared to existing methods. The methods are illustrated using data arising from a cluster randomization trial investigating a critical pathway for the treatment of community acquired pneumonia.

  18. The Distribution of the Product Explains Normal Theory Mediation Confidence Interval Estimation.

    PubMed

    Kisbu-Sakarya, Yasemin; MacKinnon, David P; Miočević, Milica

    2014-05-01

    The distribution of the product has several useful applications. One of these applications is its use to form confidence intervals for the indirect effect as the product of 2 regression coefficients. The purpose of this article is to investigate how the moments of the distribution of the product explain normal theory mediation confidence interval coverage and imbalance. Values of the critical ratio for each random variable are used to demonstrate how the moments of the distribution of the product change across values of the critical ratio observed in research studies. Results of the simulation study showed that as skewness in absolute value increases, coverage decreases. And as skewness in absolute value and kurtosis increases, imbalance increases. The difference between testing the significance of the indirect effect using the normal theory versus the asymmetric distribution of the product is further illustrated with a real data example. This article is the first study to show the direct link between the distribution of the product and indirect effect confidence intervals and clarifies the results of previous simulation studies by showing why normal theory confidence intervals for indirect effects are often less accurate than those obtained from the asymmetric distribution of the product or from resampling methods.

  19. Statistical inference for the within-device precision of quantitative measurements in assay validation.

    PubMed

    Liu, Jen-Pei; Lu, Li-Tien; Liao, C T

    2009-09-01

    Intermediate precision is one of the most important characteristics for evaluation of precision in assay validation. The current methods for evaluation of within-device precision recommended by the Clinical Laboratory Standard Institute (CLSI) guideline EP5-A2 are based on the point estimator. On the other hand, in addition to point estimators, confidence intervals can provide a range for the within-device precision with a probability statement. Therefore, we suggest a confidence interval approach for assessment of the within-device precision. Furthermore, under the two-stage nested random-effects model recommended by the approved CLSI guideline EP5-A2, in addition to the current Satterthwaite's approximation and the modified large sample (MLS) methods, we apply the technique of generalized pivotal quantities (GPQ) to derive the confidence interval for the within-device precision. The data from the approved CLSI guideline EP5-A2 illustrate the applications of the confidence interval approach and comparison of results between the three methods. Results of a simulation study on the coverage probability and expected length of the three methods are reported. The proposed method of the GPQ-based confidence intervals is also extended to consider the between-laboratories variation for precision assessment.

  20. Prolonged corrected QT interval is predictive of future stroke events even in subjects without ECG-diagnosed left ventricular hypertrophy.

    PubMed

    Ishikawa, Joji; Ishikawa, Shizukiyo; Kario, Kazuomi

    2015-03-01

    We attempted to evaluate whether subjects who exhibit prolonged corrected QT (QTc) interval (≥440 ms in men and ≥460 ms in women) on ECG, with and without ECG-diagnosed left ventricular hypertrophy (ECG-LVH; Cornell product, ≥244 mV×ms), are at increased risk of stroke. Among the 10 643 subjects, there were a total of 375 stroke events during the follow-up period (128.7±28.1 months; 114 142 person-years). The subjects with prolonged QTc interval (hazard ratio, 2.13; 95% confidence interval, 1.22-3.73) had an increased risk of stroke even after adjustment for ECG-LVH (hazard ratio, 1.71; 95% confidence interval, 1.22-2.40). When we stratified the subjects into those with neither a prolonged QTc interval nor ECG-LVH, those with a prolonged QTc interval but without ECG-LVH, and those with ECG-LVH, multivariate-adjusted Cox proportional hazards analysis demonstrated that the subjects with prolonged QTc intervals but not ECG-LVH (1.2% of all subjects; incidence, 10.7%; hazard ratio, 2.70, 95% confidence interval, 1.48-4.94) and those with ECG-LVH (incidence, 7.9%; hazard ratio, 1.83; 95% confidence interval, 1.31-2.57) had an increased risk of stroke events, compared with those with neither a prolonged QTc interval nor ECG-LVH. In conclusion, prolonged QTc interval was associated with stroke risk even among patients without ECG-LVH in the general population. © 2014 American Heart Association, Inc.

  1. Nonparametric estimation of benchmark doses in environmental risk assessment

    PubMed Central

    Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen

    2013-01-01

    Summary An important statistical objective in environmental risk analysis is estimation of minimum exposure levels, called benchmark doses (BMDs), that induce a pre-specified benchmark response in a dose-response experiment. In such settings, representations of the risk are traditionally based on a parametric dose-response model. It is a well-known concern, however, that if the chosen parametric form is misspecified, inaccurate and possibly unsafe low-dose inferences can result. We apply a nonparametric approach for calculating benchmark doses, based on an isotonic regression method for dose-response estimation with quantal-response data (Bhattacharya and Kong, 2007). We determine the large-sample properties of the estimator, develop bootstrap-based confidence limits on the BMDs, and explore the confidence limits’ small-sample properties via a short simulation study. An example from cancer risk assessment illustrates the calculations. PMID:23914133
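
The bootstrap-based confidence limits referred to above rest on the percentile idea: resample the data with replacement, recompute the statistic of interest, and read the limits off the empirical resampling distribution. A minimal sketch follows, using a generic statistic rather than the paper's isotonic BMD estimator (illustrative only):

```python
import random


def percentile_bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap CI: resample with replacement, recompute the
    statistic, and take the alpha/2 and 1 - alpha/2 empirical quantiles."""
    rng = random.Random(seed)
    n = len(data)
    boots = sorted(
        stat([data[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_boot)
    )
    lo = boots[int(n_boot * alpha / 2)]
    hi = boots[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi
```

For a BMD application, `stat` would refit the dose-response curve on the resample and return the estimated benchmark dose; the percentile mechanics are unchanged.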

  2. On the thresholds in modeling of high flows via artificial neural networks - A bootstrapping analysis

    NASA Astrophysics Data System (ADS)

    Panagoulia, D.; Trichakis, I.

    2012-04-01

    Considering the growing interest in simulating hydrological phenomena with artificial neural networks (ANNs), it is useful to figure out the potential and limits of these models. In this study, the main objective is to examine how to improve the ability of an ANN model to simulate extreme values of flow utilizing a priori knowledge of threshold values. A three-layer feedforward ANN was trained by using the backpropagation algorithm and the logistic function as activation function. By using the thresholds, the flow was partitioned into low (x < μ), medium (μ ≤ x ≤ μ + 2σ) and high (x > μ + 2σ) values. The ANN model was trained both on the high-flow partition and on all flow data. The developed methodology was implemented over a mountainous river catchment (the Mesochora catchment in northwestern Greece). The ANN model received as inputs pseudo-precipitation (rain plus melt) and previously observed flow data. After the training was completed, the bootstrapping methodology was applied to calculate the ANN confidence intervals (CIs) for a 95% nominal coverage. The calculated CIs included only the uncertainty that comes from the calibration procedure. The results showed that an ANN model trained specifically for high flows, with a priori knowledge of the thresholds, can simulate these extreme values much better (RMSE is 31.4% less) than an ANN model trained with all data of the available time series and using a posteriori threshold values. On the other hand, the width of the CIs increases by 54.9%, with a simultaneous increase by 64.4% of the actual coverage for the high flows (a priori partition). The narrower CIs of the high flows trained with all data may be attributed to the smoothing effect produced by the use of the full data sets. 
Overall, the results suggest that an ANN model trained with a priori knowledge of the threshold values has an increased ability in simulating extreme values compared with an ANN model trained with all the data and a posteriori knowledge of the thresholds.
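
The threshold partition described above is straightforward to reproduce; a small sketch (illustrative only, using the sample mean μ and standard deviation σ of the series):

```python
import statistics


def partition_flows(flows):
    """Split a flow series into low (x < mu), medium (mu <= x <= mu + 2*sigma)
    and high (x > mu + 2*sigma) classes, as in the a priori thresholding."""
    mu = statistics.fmean(flows)
    sigma = statistics.pstdev(flows)
    low = [x for x in flows if x < mu]
    medium = [x for x in flows if mu <= x <= mu + 2 * sigma]
    high = [x for x in flows if x > mu + 2 * sigma]
    return low, medium, high
```

A model trained only on the `high` partition then sees the extreme-value regime directly, which is the point of the a priori scheme.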

  3. Does blood transfusion affect intermediate survival after coronary artery bypass surgery?

    PubMed

    Mikkola, R; Heikkinen, J; Lahtinen, J; Paone, R; Juvonen, T; Biancari, F

    2013-01-01

    The aim of this study was to investigate the impact of transfusion of blood products on intermediate outcome after coronary artery bypass surgery. Complete data on perioperative blood transfusion in patients undergoing coronary artery bypass surgery were available from 2001 patients who were operated at our institution. Transfusion of any blood product (relative risk = 1.678, 95% confidence interval = 1.087-2.590) was an independent predictor of all-cause mortality. The additive effect of each blood product on all-cause mortality (relative risk = 1.401, 95% confidence interval = 1.203-1.630) and cardiac mortality (relative risk = 1.553, 95% confidence interval = 1.273-1.895) was evident when the sum of each blood product was included in the regression models. However, when single blood products were included in the regression model, transfusion of fresh frozen plasma/Octaplas® was the only blood product associated with increased risk of all-cause mortality (relative risk = 1.692, 95% confidence interval = 1.222-2.344) and cardiac mortality (relative risk = 2.125, 95% confidence interval = 1.414-3.194). The effect of blood product transfusion was particularly evident during the first three postoperative months. Since follow-up was truncated at 3 months, transfusion of any blood product was a significant predictor of all-cause mortality (relative risk = 2.998, 95% confidence interval = 1.053-0.537). Analysis of patients who survived or had at least 3 months of potential follow-up showed that transfusion of any blood product was not associated with a significantly increased risk of intermediate all-cause mortality (relative risk = 1.430, 95% confidence interval = 0.880-2.323). Transfusion of any blood product is associated with a significant risk of all-cause and cardiac mortality after coronary artery bypass surgery. Such a risk seems to be limited to the early postoperative period and diminishes later on. 
Among blood products, perioperative use of fresh frozen plasma or Octaplas seems to be the main determinant of mortality.

  4. Rapid Contour-based Segmentation for 18F-FDG PET Imaging of Lung Tumors by Using ITK-SNAP: Comparison to Expert-based Segmentation.

    PubMed

    Besson, Florent L; Henry, Théophraste; Meyer, Céline; Chevance, Virgile; Roblot, Victoire; Blanchet, Elise; Arnould, Victor; Grimon, Gilles; Chekroun, Malika; Mabille, Laurence; Parent, Florence; Seferian, Andrei; Bulifon, Sophie; Montani, David; Humbert, Marc; Chaumet-Riffaud, Philippe; Lebon, Vincent; Durand, Emmanuel

    2018-04-03

    Purpose To assess the performance of the ITK-SNAP software for fluorodeoxyglucose (FDG) positron emission tomography (PET) segmentation of complex-shaped lung tumors compared with an optimized, expert-based manual reference standard. Materials and Methods Seventy-six FDG PET images of thoracic lesions were retrospectively segmented by using ITK-SNAP software. Each tumor was manually segmented by six raters to generate an optimized reference standard by using the simultaneous truth and performance level estimate algorithm. Four raters segmented 76 FDG PET images of lung tumors twice by using the ITK-SNAP active contour algorithm. Accuracy of the ITK-SNAP procedure was assessed by using the Dice coefficient and Hausdorff metric. Interrater and intrarater reliability were estimated by using intraclass correlation coefficients of output volumes. Finally, the ITK-SNAP procedure was compared with currently recommended PET tumor delineation methods on the basis of volume-of-interest (VOI) thresholding at 41% (VOI41) and 50% (VOI50) of the tumor's maximal metabolism intensity. Results Accuracy estimates for the ITK-SNAP procedure indicated a Dice coefficient of 0.83 (95% confidence interval: 0.77, 0.89) and a Hausdorff distance of 12.6 mm (95% confidence interval: 9.82, 15.32). Interrater reliability was an intraclass correlation coefficient of 0.94 (95% confidence interval: 0.91, 0.96). The intrarater reliabilities were intraclass correlation coefficients above 0.97. Finally, VOI41 and VOI50 accuracy metrics were as follows: Dice coefficient, 0.48 (95% confidence interval: 0.44, 0.51) and 0.34 (95% confidence interval: 0.30, 0.38), respectively, and Hausdorff distance, 25.6 mm (95% confidence interval: 21.7, 31.4) and 31.3 mm (95% confidence interval: 26.8, 38.4), respectively. Conclusion ITK-SNAP is accurate and reliable for active-contour-based segmentation of heterogeneous thoracic PET tumors. 
ITK-SNAP outperformed the currently recommended PET delineation methods when compared with the ground-truth manual segmentation. © RSNA, 2018.

  5. Amnioinfusion for meconium-stained liquor in labour.

    PubMed

    Hofmeyr, G J

    2000-01-01

    Amnioinfusion aims to prevent or relieve umbilical cord compression during labour by infusing a solution into the uterine cavity. It is also thought to dilute meconium when present in the amniotic fluid and so reduce the risk of meconium aspiration. However, it may be that the mechanism of effect is that it corrects oligohydramnios (reduced amniotic fluid), for which thick meconium staining is a marker. The objective of this review was to assess the effects of amnioinfusion for meconium-stained liquor on perinatal outcome. The Cochrane Pregnancy and Childbirth Group trials register and the Cochrane Controlled Trials Register were searched. Randomised trials comparing amnioinfusion with no amnioinfusion for women in labour with moderate or thick meconium-staining of the amniotic fluid. Eligibility and trial quality were assessed by one reviewer. Ten studies, most involving small numbers of participants, were included. Under standard perinatal surveillance, amnioinfusion was associated with a reduction in the following: heavy meconium staining of the liquor (relative risk 0.03, 95% confidence interval 0.01 to 0.15); variable fetal heart rate deceleration (relative risk 0.47, 95% confidence interval 0.24 to 0.90); and a trend to reduced caesarean section overall (relative risk 0.83, 95% confidence interval 0.69 to 1.00). No perinatal deaths were reported. Under limited perinatal surveillance, amnioinfusion was associated with a reduction in the following: meconium aspiration syndrome (relative risk 0.24, 95% confidence interval 0.12 to 0.48); neonatal hypoxic ischaemic encephalopathy (relative risk 0.07, 95% confidence interval 0.01 to 0.56) and neonatal ventilation or intensive care unit admission (relative risk 0.56, 95% confidence interval 0.39 to 0.79); there was a trend towards reduced perinatal mortality (relative risk 0.34, 95% confidence interval 0.11 to 1.06). 
Amnioinfusion is associated with improvements in perinatal outcome, particularly in settings where facilities for perinatal surveillance are limited. The trials reviewed are too small to address the possibility of rare but serious maternal adverse effects of amnioinfusion.

  6. Amnioinfusion for meconium-stained liquor in labour.

    PubMed

    Hofmeyr, G J

    2002-01-01

    Amnioinfusion aims to prevent or relieve umbilical cord compression during labour by infusing a solution into the uterine cavity. It is also thought to dilute meconium when present in the amniotic fluid and so reduce the risk of meconium aspiration. However, it may be that the mechanism of effect is that it corrects oligohydramnios (reduced amniotic fluid), for which thick meconium staining is a marker. The objective of this review was to assess the effects of amnioinfusion for meconium-stained liquor on perinatal outcome. The Cochrane Pregnancy and Childbirth Group trials register (October 2001) and the Cochrane Controlled Trials Register (Issue 3, 2001) were searched. Randomised trials comparing amnioinfusion with no amnioinfusion for women in labour with moderate or thick meconium-staining of the amniotic fluid. Eligibility and trial quality were assessed by one reviewer. Twelve studies, most involving small numbers of participants, were included. Under standard perinatal surveillance, amnioinfusion was associated with a reduction in the following: heavy meconium staining of the liquor (relative risk 0.03, 95% confidence interval 0.01 to 0.15); variable fetal heart rate deceleration (relative risk 0.65, 95% confidence interval 0.49 to 0.88); and reduced caesarean section overall (relative risk 0.82, 95% confidence interval 0.69 to 1.97). No perinatal deaths were reported. Under limited perinatal surveillance, amnioinfusion was associated with a reduction in the following: meconium aspiration syndrome (relative risk 0.24, 95% confidence interval 0.12 to 0.48); neonatal hypoxic ischaemic encephalopathy (relative risk 0.07, 95% confidence interval 0.01 to 0.56) and neonatal ventilation or intensive care unit admission (relative risk 0.56, 95% confidence interval 0.39 to 0.79); there was a trend towards reduced perinatal mortality (relative risk 0.34, 95% confidence interval 0.11 to 1.06). 
Amnioinfusion is associated with improvements in perinatal outcome, particularly in settings where facilities for perinatal surveillance are limited. The trials reviewed are too small to address the possibility of rare but serious maternal adverse effects of amnioinfusion.

  7. Quantitative imaging biomarkers: Effect of sample size and bias on confidence interval coverage.

    PubMed

    Obuchowski, Nancy A; Bullen, Jennifer

    2017-01-01

    Introduction Quantitative imaging biomarkers (QIBs) are being increasingly used in medical practice and clinical trials. An essential first step in the adoption of a quantitative imaging biomarker is the characterization of its technical performance, i.e. precision and bias, through one or more performance studies. Then, given the technical performance, a confidence interval for a new patient's true biomarker value can be constructed. Estimating bias and precision can be problematic because rarely are both estimated in the same study, precision studies are usually quite small, and bias cannot be measured when there is no reference standard. Methods A Monte Carlo simulation study was conducted to assess factors affecting nominal coverage of confidence intervals for a new patient's quantitative imaging biomarker measurement and for change in the quantitative imaging biomarker over time. Factors considered include sample size for estimating bias and precision, effect of fixed and non-proportional bias, clustered data, and absence of a reference standard. Results Technical performance studies of a quantitative imaging biomarker should include at least 35 test-retest subjects to estimate precision and 65 cases to estimate bias. Confidence intervals for a new patient's quantitative imaging biomarker measurement constructed under the no-bias assumption provide nominal coverage as long as the fixed bias is <12%. For confidence intervals of the true change over time, linearity must hold and the slope of the regression of the measurements vs. true values should be between 0.95 and 1.05. The regression slope can be assessed adequately as long as fixed multiples of the measurand can be generated. Even small non-proportional bias greatly reduces confidence interval coverage. Multiple lesions in the same subject can be treated as independent when estimating precision. 
Conclusion Technical performance studies of quantitative imaging biomarkers require moderate sample sizes in order to provide robust estimates of bias and precision for constructing confidence intervals for new patients. Assumptions of linearity and non-proportional bias should be assessed thoroughly.
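
A Monte Carlo coverage check of the kind used in this study can be sketched in a few lines. The target below is the textbook normal-theory interval for a mean rather than the biomarker model itself (an assumed, simplified stand-in): simulate many data sets with a known true value, build the interval each time, and count how often it covers the truth.

```python
import math
import random
import statistics


def coverage(n=35, mu=10.0, sigma=2.0, trials=2000, z=1.96, seed=1):
    """Empirical coverage of the nominal 95% normal-theory CI for the mean:
    simulate, construct the interval, and count hits on the true mean."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        xs = [rng.gauss(mu, sigma) for _ in range(n)]
        m = statistics.fmean(xs)
        se = statistics.stdev(xs) / math.sqrt(n)
        if m - z * se <= mu <= m + z * se:
            hits += 1
    return hits / trials
```

Introducing a fixed or non-proportional bias into the simulated measurements, as the study does, amounts to perturbing `xs` before the interval is built and watching the hit rate fall below the nominal level.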

  8. Association between GFR Estimated by Multiple Methods at Dialysis Commencement and Patient Survival

    PubMed Central

    Wong, Muh Geot; Pollock, Carol A.; Cooper, Bruce A.; Branley, Pauline; Collins, John F.; Craig, Jonathan C.; Kesselhut, Joan; Luxton, Grant; Pilmore, Andrew; Harris, David C.

    2014-01-01

    Summary Background and objectives The Initiating Dialysis Early and Late study showed that planned early or late initiation of dialysis, based on the Cockcroft and Gault estimation of GFR, was associated with identical clinical outcomes. This study examined the association of all-cause mortality with estimated GFR at dialysis commencement, which was determined using multiple formulas. Design, setting, participants, & measurements Initiating Dialysis Early and Late trial participants were stratified into tertiles according to the estimated GFR measured by Cockcroft and Gault, Modification of Diet in Renal Disease, or Chronic Kidney Disease-Epidemiology Collaboration formula at dialysis commencement. Patient survival was determined using multivariable Cox proportional hazards model regression. Results Only Initiating Dialysis Early and Late trial participants who commenced on dialysis were included in this study (n=768). A total of 275 patients died during the study. After adjustment for age, sex, racial origin, body mass index, diabetes, and cardiovascular disease, no significant differences in survival were observed between estimated GFR tertiles determined by Cockcroft and Gault (lowest tertile adjusted hazard ratio, 1.11; 95% confidence interval, 0.82 to 1.49; middle tertile hazard ratio, 1.29; 95% confidence interval, 0.96 to 1.74; highest tertile reference), Modification of Diet in Renal Disease (lowest tertile hazard ratio, 0.88; 95% confidence interval, 0.63 to 1.24; middle tertile hazard ratio, 1.20; 95% confidence interval, 0.90 to 1.61; highest tertile reference), and Chronic Kidney Disease-Epidemiology Collaboration equations (lowest tertile hazard ratio, 0.93; 95% confidence interval, 0.67 to 1.27; middle tertile hazard ratio, 1.15; 95% confidence interval, 0.86 to 1.54; highest tertile reference). Conclusion Estimated GFR at dialysis commencement was not significantly associated with patient survival, regardless of the formula used. 
However, a clinically important association cannot be excluded, because observed confidence intervals were wide. PMID:24178976

  9. VizieR Online Data Catalog: Fermi/GBM GRB time-resolved spectral catalog (Yu+, 2016)

    NASA Astrophysics Data System (ADS)

    Yu, H.-F.; Preece, R. D.; Greiner, J.; Bhat, P. N.; Bissaldi, E.; Briggs, M. S.; Cleveland, W. H.; Connaughton, V.; Goldstein, A.; von Kienlin, A.; Kouveliotou, C.; Mailyan, B.; Meegan, C. A.; Paciesas, W. S.; Rau, A.; Roberts, O. J.; Veres, P.; Wilson-Hodge, C.; Zhang, B.-B.; van Eerten, H. J.

    2016-01-01

    Time-resolved spectral analysis results of BEST models: for each spectrum, the GRB name using the Fermi GBM trigger designation, spectrum number within the individual burst, start time Tstart and end time Tstop for the time bin, BEST model, best-fit parameters of the BEST model, value of CSTAT per degree of freedom, and 10 keV-1 MeV photon and energy flux are given. Ep evolutionary trends: for each burst, the GRB name, number of spectra with Ep, Spearman's rank correlation coefficients between Ep and photon flux with 90%, 95%, and 99% confidence intervals, Spearman's rank correlation coefficients between Ep and energy flux with 90%, 95%, and 99% confidence intervals, Spearman's rank correlation coefficient between Ep and time with 90%, 95%, and 99% confidence intervals, trends as determined by computer for the 90%, 95%, and 99% confidence intervals, and trends as determined by human eyes are given. (2 data files).

  10. Intervals for posttest probabilities: a comparison of 5 methods.

    PubMed

    Mossman, D; Berger, J O

    2001-01-01

    Several medical articles discuss methods of constructing confidence intervals for single proportions and the likelihood ratio, but scant attention has been given to the systematic study of intervals for the posterior odds, or the positive predictive value, of a test. The authors describe 5 methods of constructing confidence intervals for posttest probabilities when estimates of sensitivity, specificity, and the pretest probability of a disorder are derived from empirical data. They then evaluate each method to determine how well the intervals' coverage properties correspond to their nominal value. When the estimates of pretest probabilities, sensitivity, and specificity are derived from more than 80 subjects and are not close to 0 or 1, all methods generate intervals with appropriate coverage properties. When these conditions are not met, however, the best-performing method is an objective Bayesian approach implemented by a simple simulation using a spreadsheet. Physicians and investigators can generate accurate confidence intervals for posttest probabilities in small-sample situations using the objective Bayesian approach.
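
The simulation approach the authors implement in a spreadsheet can be sketched as follows. Jeffreys Beta(0.5, 0.5) priors on sensitivity, specificity, and pretest probability are an assumed choice of objective prior here; each draw propagates through Bayes' rule to give a simulated distribution of the positive predictive value, whose percentiles form the interval:

```python
import random


def ppv_interval(tp, fn, tn, fp, d, nd, n_draws=5000, seed=7):
    """Simulated 95% interval for the positive predictive value.
    tp/fn: diseased subjects testing positive/negative (sensitivity data);
    tn/fp: non-diseased testing negative/positive (specificity data);
    d/nd: diseased/non-diseased counts in the pretest-probability sample."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_draws):
        sens = rng.betavariate(tp + 0.5, fn + 0.5)
        spec = rng.betavariate(tn + 0.5, fp + 0.5)
        prev = rng.betavariate(d + 0.5, nd + 0.5)
        # Bayes' rule for the posttest probability of disease given a positive test
        draws.append(sens * prev / (sens * prev + (1 - spec) * (1 - prev)))
    draws.sort()
    return draws[int(0.025 * n_draws)], draws[int(0.975 * n_draws) - 1]
```

Because each component uncertainty is sampled rather than approximated, the interval remains sensible even when the input estimates come from small samples or sit near 0 or 1.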

  11. Pregnancy and birth outcomes in couples with infertility with and without assisted reproductive technology: with an emphasis on US population-based studies.

    PubMed

    Luke, Barbara

    2017-09-01

    Infertility, defined as the inability to conceive within 1 year of unprotected intercourse, affects an estimated 80 million individuals worldwide, or 10-15% of couples of reproductive age. Assisted reproductive technology includes all infertility treatments to achieve conception; in vitro fertilization is the process by which an oocyte is fertilized by semen outside the body; non-in vitro fertilization assisted reproductive technology treatments include ovulation induction, artificial insemination, and intrauterine insemination. Use of assisted reproductive technology has risen steadily in the United States during the past 2 decades due to several reasons, including childbearing at older maternal ages and increasing insurance coverage. The number of in vitro fertilization cycles in the United States has nearly doubled from 2000 through 2013 and currently 1.7% of all live births in the United States are the result of this technology. Since the birth of the first child from in vitro fertilization >35 years ago, >5 million babies have been born from in vitro fertilization, half within the past 6 years. It is estimated that 1% of singletons, 19% of twins, and 25% of triplet or higher multiples are due to in vitro fertilization, and 4%, 21%, and 52%, respectively, are due to non-in vitro fertilization assisted reproductive technology. Higher plurality at birth results in a >10-fold increase in the risks for prematurity and low birthweight in twins vs singletons (adjusted odds ratio, 11.84; 95% confidence interval, 10.56-13.27 and adjusted odds ratio, 10.68; 95% confidence interval, 9.45-12.08, respectively). The use of donor oocytes is associated with increased risks for pregnancy-induced hypertension (adjusted odds ratio, 1.43; 95% confidence interval, 1.14-1.78) and prematurity (adjusted odds ratio, 1.43; 95% confidence interval, 1.11-1.83). 
The use of thawed embryos is associated with higher risks for pregnancy-induced hypertension (adjusted odds ratio, 1.30; 95% confidence interval, 1.08-1.57) and large-for-gestation birthweight (adjusted odds ratio, 1.74; 95% confidence interval, 1.45-2.08). Among singletons, in vitro fertilization is associated with increased risk of severe maternal morbidity compared with fertile deliveries (vaginal: adjusted odds ratio, 2.27; 95% confidence interval, 1.78-2.88; cesarean: adjusted odds ratio, 1.67; 95% confidence interval, 1.40-1.98, respectively) and subfertile deliveries (vaginal: adjusted odds ratio, 1.97; 95% confidence interval, 1.30-3.00; cesarean: adjusted odds ratio, 1.75; 95% confidence interval, 1.30-2.35, respectively). Among twins, cesarean in vitro fertilization deliveries have significantly greater severe maternal morbidity compared to cesarean fertile deliveries (adjusted odds ratio, 1.48; 95% confidence interval, 1.14-1.93). Subfertility, with or without in vitro fertilization or non-in vitro fertilization infertility treatments to achieve a pregnancy, is associated with increased risks of adverse maternal and perinatal outcomes. The major risk from in vitro fertilization treatments of multiple births (and the associated excess of perinatal morbidity) has been reduced over time, with fewer and better-quality embryos being transferred. Copyright © 2017. Published by Elsevier Inc.

  12. Family environment, hobbies and habits as psychosocial predictors of survival for surgically treated patients with breast cancer.

    PubMed

    Tominaga, K; Andow, J; Koyama, Y; Numao, S; Kurokawa, E; Ojima, M; Nagai, M

    1998-01-01

Many psychosocial factors have been reported to influence the duration of survival of breast cancer patients. We studied how the patients' family members, hobbies and habits may alter their psychosocial status. Female patients with surgically treated breast cancer diagnosed between 1986 and 1995 at the Tochigi Cancer Center Hospital, who provided information on the above-mentioned factors, were included. Their subsequent physical status was followed up in the outpatient clinic. The Cox regression model was used to evaluate the relationship between the factors examined and the duration of the patients' survival, adjusting for the patients' age, stage of disease at diagnosis, and curability as judged by the physician in charge after treatment. The following factors were significantly associated with the survival of surgically treated breast cancer patients: being a widow (hazard ratio 3.29; 95% confidence interval 1.32-8.20), having a hobby (hazard ratio 0.43; 95% confidence interval 0.23-0.82), number of hobbies (hazard ratio 0.64; 95% confidence interval 0.41-1.00), number of female children (hazard ratio 0.64; 95% confidence interval 0.42-0.98), smoking (hazard ratio 2.08; 95% confidence interval 1.02-4.26) and alcohol consumption (hazard ratio 0.10; 95% confidence interval 0.01-0.72). These results suggest that psychosocial factors, including a family environment in which patients receive emotional support from their spouse and children, as well as the patients' hobbies and habits, may influence the duration of survival in surgically treated breast cancer patients.

  13. Taichi exercise for self-rated sleep quality in older people: a systematic review and meta-analysis.

    PubMed

    Du, Shizheng; Dong, Jianshu; Zhang, Heng; Jin, Shengji; Xu, Guihua; Liu, Zengxia; Chen, Lixia; Yin, Haiyan; Sun, Zhiling

    2015-01-01

Self-reported sleep disorders are common in older adults and have serious consequences. Non-pharmacological measures are important complementary interventions, among which Taichi exercise is a popular alternative. Some experiments have been performed; however, the effect of Taichi exercise in improving sleep quality in older people has yet to be validated by systematic review. Using systematic review and meta-analysis, this study aimed to examine the efficacy of Taichi exercise in promoting self-reported sleep quality in older adults. Systematic review and meta-analysis of randomized controlled studies. Four English-language databases (PubMed, Cochrane Library, Web of Science, and CINAHL) and four Chinese databases (CBMdisc, CNKI, VIP, and Wanfang) were searched through December 2013. Two reviewers independently selected eligible trials and critically appraised methodological quality using the quality appraisal criteria for randomized controlled studies recommended by the Cochrane Handbook. A standardized data form was used to extract information. Meta-analysis was performed. Five randomized controlled studies met the inclusion criteria; all suffered from some methodological flaws.
The results of this study showed that Taichi has a large beneficial effect on sleep quality in older people, as indicated by decreases in the global Pittsburgh Sleep Quality Index score [standardized mean difference = -0.87, 95% confidence interval (-1.25, -0.49)], as well as its sub-domains of subjective sleep quality [standardized mean difference = -0.83, 95% confidence interval (-1.08, -0.57)], sleep latency [standardized mean difference = -0.75, 95% confidence interval (-1.42, -0.07)], sleep duration [standardized mean difference = -0.55, 95% confidence interval (-0.90, -0.21)], habitual sleep efficiency [standardized mean difference = -0.49, 95% confidence interval (-0.74, -0.23)], sleep disturbance [standardized mean difference = -0.44, 95% confidence interval (-0.69, -0.19)], and daytime dysfunction [standardized mean difference = -0.34, 95% confidence interval (-0.59, -0.09)]. Daytime sleepiness improvement was also observed. Weak evidence shows that Taichi exercise has a beneficial effect in improving self-rated sleep quality for older adults, suggesting that Taichi could be an effective alternative and complementary approach to existing therapies for older people with sleep problems. More rigorous experimental studies are required. Copyright © 2014 Elsevier Ltd. All rights reserved.
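Pooled standardized mean differences like those above are conventionally combined by inverse-variance weighting. A minimal sketch, using made-up trial estimates rather than the review's data:

```python
import math

def pool_fixed_effect(estimates, ses, z=1.96):
    """Inverse-variance fixed-effect pooling of effect sizes with a 95% CI."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return pooled, pooled - z * se_pooled, pooled + z * se_pooled

# Hypothetical SMDs and standard errors from five trials (illustrative only)
smds = [-0.9, -0.7, -1.1, -0.6, -0.95]
ses = [0.25, 0.30, 0.28, 0.22, 0.35]
est, lo, hi = pool_fixed_effect(smds, ses)
print(f"pooled SMD {est:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

A random-effects model would additionally inflate the weights by a between-trial variance estimate (e.g. DerSimonian-Laird); the fixed-effect version above shows only the core arithmetic.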

  14. Previous treatment, sputum-smear nonconversion, and suburban living: The risk factors of multidrug-resistant tuberculosis among Malaysians.

    PubMed

    Mohd Shariff, Noorsuzana; Shah, Shamsul Azhar; Kamaludin, Fadzilah

    2016-03-01

The number of multidrug-resistant tuberculosis patients is increasing each year in many countries around the globe, and Malaysia is no exception in facing this burdensome health problem. We aimed to investigate the factors that contribute to the occurrence of multidrug-resistant tuberculosis among Malaysian tuberculosis patients. An unmatched case-control study was conducted among tuberculosis patients who received antituberculosis treatment from April 2013 until April 2014. Cases were patients diagnosed with pulmonary tuberculosis clinically, radiologically, and/or bacteriologically who were confirmed through drug-sensitivity testing to be resistant to both isoniazid and rifampicin. Pulmonary tuberculosis patients who were sensitive to all first-line antituberculosis drugs and were treated during the same period served as controls. A total of 150 tuberculosis patients were studied, of whom 120 were drug-susceptible. Factors significantly associated with the occurrence of multidrug-resistant tuberculosis were being Indian or Chinese (odds ratio 3.17, 95% confidence interval 1.04-9.68; and odds ratio 6.23, 95% confidence interval 2.24-17.35, respectively), being unmarried (odds ratio 2.58, 95% confidence interval 1.09-6.09), living in suburban areas (odds ratio 2.58, 95% confidence interval 1.08-6.19), noncompliance with treatment (odds ratio 4.50, 95% confidence interval 1.71-11.82), previous tuberculosis treatment (odds ratio 8.91, 95% confidence interval 3.66-21.67), and positive sputum smears at the 2nd (odds ratio 7.00, 95% confidence interval 2.46-19.89) and 6th months of treatment (odds ratio 17.96, 95% confidence interval 3.51-91.99). Living in suburban areas, a positive sputum smear in the 2nd month of treatment, and previous treatment were factors that independently contributed to the occurrence of multidrug-resistant tuberculosis.
Patients with a positive smear in the second month of treatment, a history of previous treatment, or suburban residence thus have a higher probability of becoming multidrug resistant. The results presented here may facilitate improvements in the screening and detection of drug-resistant patients in Malaysia in the future. Copyright © 2015 Asian-African Society for Mycobacteriology. Published by Elsevier Ltd. All rights reserved.

  15. Stapled versus handsewn methods for colorectal anastomosis surgery.

    PubMed

    Lustosa, S A; Matos, D; Atallah, A N; Castro, A A

    2001-01-01

    Randomized controlled trials comparing stapled with handsewn colorectal anastomosis have not shown either technique to be superior, perhaps because individual studies lacked statistical power. A systematic review, with pooled analysis of results, might provide a more definitive answer. To compare the safety and effectiveness of stapled and handsewn colorectal anastomosis. The following primary hypothesis was tested: the stapled technique is more effective because it decreases the level of complications. The RCT register of the Cochrane Review Group was searched for any trial or reference to a relevant trial (published, in-press, or in progress). All publications were sought through computerised searches of EMBASE, LILACS, MEDLINE, the Cochrane Controlled Clinical Trials Database, and through letters to industrial companies and authors. There were no limits upon language, date, or other criteria. All randomized clinical trials (RCTs) in which stapled and handsewn colorectal anastomosis were compared. Adult patients submitted electively to colorectal anastomosis. Endoluminal circular stapler and handsewn colorectal anastomosis. a) Mortality b) Overall Anastomotic Dehiscence c) Clinical Anastomotic Dehiscence d) Radiological Anastomotic Dehiscence e) Stricture f) Anastomotic Haemorrhage g) Reoperation h) Wound Infection i) Anastomosis Duration j) Hospital Stay. Data were independently extracted by the two reviewers (SASL, DM) and cross-checked. The methodological quality of each trial was assessed by the same two reviewers. Details of the randomization (generation and concealment), blinding, whether an intention-to-treat analysis was done, and the number of patients lost to follow-up were recorded. The results of each RCT were summarised on an intention-to-treat basis in 2 x 2 tables for each outcome. External validity was defined by characteristics of the participants, the interventions and the outcomes. 
The RCTs were stratified according to the level of colorectal anastomosis. The Risk Difference method (random effects model) and NNT for dichotomous outcome measures, and weighted mean difference for continuous outcome measures, with the corresponding 95% confidence intervals, were presented in this review. Statistical heterogeneity was evaluated using funnel plots and chi-square testing. Of the 1233 patients enrolled (in 9 trials), 622 were treated with stapled, and 611 with manual, suture. The following main results were obtained: a) Mortality: result based on 901 patients; Risk Difference -0.6%, 95% Confidence Interval -2.8% to +1.6%. b) Overall Dehiscence: result based on 1233 patients; Risk Difference 0.2%, 95% Confidence Interval -5.0% to +5.3%. c) Clinical Anastomotic Dehiscence: result based on 1233 patients; Risk Difference -1.4%, 95% Confidence Interval -5.2% to +2.3%. d) Radiological Anastomotic Dehiscence: result based on 825 patients; Risk Difference 1.2%, 95% Confidence Interval -4.8% to +7.3%. e) Stricture: result based on 1042 patients; Risk Difference 4.6%, 95% Confidence Interval 1.2% to 8.1%; Number Needed to Treat 17, 95% Confidence Interval 12 to 31. f) Anastomotic Haemorrhage: result based on 662 patients; Risk Difference 2.7%, 95% Confidence Interval -0.1% to +5.5%. g) Reoperation: result based on 544 patients; Risk Difference 3.9%, 95% Confidence Interval 0.3% to 7.4%. h) Wound Infection: result based on 567 patients; Risk Difference 1.0%, 95% Confidence Interval -2.2% to +4.3%. i) Anastomosis Duration: result based on one study (159 patients); Weighted Mean Difference -7.6 minutes, 95% Confidence Interval -12.9 to -2.2 minutes. j) Hospital Stay: result based on one study (159 patients); Weighted Mean Difference 2.0 days, 95% Confidence Interval -3.27 to +7.2 days. The evidence found was insufficient to demonstrate any superiority of stapled over handsewn techniques in colorectal anastomosis, regardless of the level of anastomosis.
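Risk differences and NNTs like those above follow from simple 2x2 contingency arithmetic. A minimal sketch with hypothetical counts (not the review's data); the Wald interval shown is one common choice for the CI:

```python
import math

def risk_difference(events_a, n_a, events_b, n_b, z=1.96):
    """Risk difference (group A minus group B) with a Wald 95% CI."""
    p_a, p_b = events_a / n_a, events_b / n_b
    rd = p_a - p_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return rd, rd - z * se, rd + z * se

def nnt(rd):
    """Number needed to treat (or harm): reciprocal of the risk difference."""
    return 1.0 / abs(rd)

# Hypothetical example: 30/500 strictures with stapling vs 10/500 handsewn
rd, lo, hi = risk_difference(30, 500, 10, 500)
print(f"RD {rd:.3f} (95% CI {lo:.3f} to {hi:.3f}), NNT {nnt(rd):.0f}")
```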

  16. [Costs of chronic obstructive pulmonary disease in patients treated in ambulatory care in Poland].

    PubMed

    Jahnz-Różyk, Karina; Targowski, Tomasz; From, Sławomir; Faluta, Tomasz; Borowiec, Lukasz

    2011-01-01

Chronic obstructive pulmonary disease (COPD) is a leading cause of death worldwide. A cost-of-illness study aims to determine the total economic impact of a disease or health condition on society through the identification, measurement, and valuation of all direct and indirect costs. Exacerbations are believed to be a major cost driver in COPD. The aim of this study was to examine the mean direct costs of COPD in Poland under usual clinical practice from a societal perspective. This was an observational, bottom-up cost-of-illness study based on a retrospective sample of patients presenting with COPD in pulmonary ambulatory care facilities in Poland. Total medical resource consumption of the sample was collected in 2007/2008 by lung specialists. Direct costs of COPD were evaluated based on data from different populations of five clinical hospitals and eight outpatient clinics. Resource utilisation and cost data are summarised as mean values per patient per year; 95% confidence intervals were derived using percentile bootstrapping. Total medical resource consumption per COPD patient per year was 1007 EUR (EUR 1 = PLN 4.0; year 2008). Of this, 606 EUR was directly related to COPD follow-up, 105 EUR to ambulatory exacerbations, and 296 EUR to exacerbations treated in hospital. The burden of COPD appeared to be of considerable magnitude for society in Poland.
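Percentile bootstrapping of a mean cost, as used above, can be sketched in a few lines: resample the data with replacement many times, compute the mean of each resample, and take the empirical 2.5th and 97.5th percentiles. The cost figures below are illustrative, not the study's data:

```python
import random
import statistics

def percentile_bootstrap_ci(data, stat=statistics.mean, n_boot=5000,
                            alpha=0.05, seed=42):
    """Percentile bootstrap CI: quantiles of the statistic over resamples."""
    rng = random.Random(seed)
    n = len(data)
    stats = sorted(stat([rng.choice(data) for _ in range(n)])
                   for _ in range(n_boot))
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical per-patient annual costs (EUR)
costs = [420, 510, 610, 750, 980, 1200, 1450, 1700, 2100, 3300]
lo, hi = percentile_bootstrap_ci(costs)
print(f"mean {statistics.mean(costs):.0f}, 95% CI ({lo:.0f}, {hi:.0f})")
```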

  17. What is the most reliable solid culture medium for tuberculosis treatment trials?

    PubMed

    Joloba, Moses L; Johnson, John L; Feng, Pei-Jean I; Bozeman, Lorna; Goldberg, Stefan V; Morgan, Karen; Gitta, Phineas; Boom, Henry W; Heilig, Charles M; Mayanja-Kizza, Harriet; Eisenach, Kathleen D

    2014-05-01

We conducted a prospective study to determine which solid medium is the most reliable, overall and after two months of therapy, for detecting Mycobacterium tuberculosis complex (MTB). MTB isolation and contamination rates on LJ and on Middlebrook 7H10 and 7H11 agar, with and without selective antibiotics, were examined in a single laboratory and compared against a constructed reference standard and MGIT 960 results. Of 50 smear-positive adults with pulmonary TB enrolled, 45 successfully completed standard treatment. Two spot sputum specimens were collected before treatment and at week 8, and one spot specimen each at weeks 2, 4, 6, and 12. The MTB recovery rate among all solid media for pre-treatment specimens was similar. After 8 weeks, selective (S) 7H11 had the highest positivity rate. Latent class analysis was used to construct the primary reference standard. The 98.7% sensitivity of 7H11S (95% Wilson confidence interval 96.4%-99.6%) was highest among the 5 solid media (P = 0.003 by bootstrap); the 82.6% specificity of 7H10S (95% CI 75.7%-87.8%) was highest (P = 0.098). Our results support 7H11S as the medium of choice. Further studies in different areas, where recovery and contamination are likely to vary, are recommended. Copyright © 2014 Elsevier Ltd. All rights reserved.
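The Wilson score interval quoted above for the sensitivity estimate is a standard closed-form binomial interval; a minimal sketch:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return center - half, center + half

lo, hi = wilson_ci(5, 10)
print(f"({lo:.4f}, {hi:.4f})")  # (0.2366, 0.7634)
```

Unlike the Wald interval, the Wilson interval stays inside [0, 1] and behaves well for proportions near 0 or 1, which is why it is preferred for sensitivities close to 100%.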

  18. Genetic covariance between central corneal thickness and anterior chamber volume: a Hungarian twin study.

    PubMed

    Toth, Georgina Zsofia; Racz, Adel; Tarnoki, Adam Domonkos; Tarnoki, David Laszlo; Szekelyhidi, Zita; Littvay, Levente; Suveges, Ildiko; Nemeth, Janos; Nagy, Zoltan Zsolt

    2014-10-01

The few existing studies have shown, inconsistently, high heritability of some parameters of the anterior segment of the eye; however, no heritability of anterior chamber volume (ACV) has been reported, and no study has investigated the correlation between ACV and central corneal thickness (CCT). Anterior segment measurements (Pentacam, Oculus) were obtained from 220 eyes of 110 adult Hungarian twins (41 monozygotic and 14 same-sex dizygotic pairs; 80% women; age 48.6 ± 15.5 years) recruited from the Hungarian Twin Registry. Age- and sex-adjusted heritability of ACV was 85% (bootstrapped 95% confidence interval, CI: 69% to 93%), and 88% for CCT (CI: 79% to 95%). Common environmental effects had no influence, and unshared environmental factors were responsible for 12% and 15% of the variance, respectively. The correlation between ACV and CCT was negative and significant (r_ph = -0.35, p < .05), and genetic factors accounted significantly for the covariance (0.934; CI: 0.418, 1.061) based on the bivariate Cholesky decomposition model. These findings support the high heritability of ACV and CCT, and a strong genetic covariance between them, which underscores the importance of identifying the specific genetic factors and of family risk-based screening for disorders related to these variables, such as open-angle and angle-closure glaucoma and corneal endothelial alterations.

  19. A new proportion measure of the treatment effect captured by candidate surrogate endpoints.

    PubMed

    Kobayashi, Fumiaki; Kuroki, Manabu

    2014-08-30

    The use of surrogate endpoints is expected to play an important role in the development of new drugs, as they can be used to reduce the sample size and/or duration of randomized clinical trials. Biostatistical researchers and practitioners have proposed various surrogacy measures; however, (i) most of these surrogacy measures often fall outside the range [0,1] without any assumptions, (ii) these surrogacy measures do not provide a cut-off value for judging a surrogacy level of candidate surrogate endpoints, and (iii) most surrogacy measures are highly variable; thus, the confidence intervals are often unacceptably wide. In order to solve problems (i) and (ii), we propose a new surrogacy measure, a proportion of the treatment effect captured by candidate surrogate endpoints (PCS), on the basis of the decomposition of the treatment effect into parts captured and non-captured by the candidate surrogate endpoints. In order to solve problem (iii), we propose an estimation method based on the half-range mode method with the bootstrap distribution of the estimated surrogacy measures. Finally, through numerical experiments and two empirical examples, we show that the PCS with the proposed estimation method overcomes these difficulties. The results of this paper contribute to the reliable evaluation of how much of the treatment effect is captured by candidate surrogate endpoints. Copyright © 2014 John Wiley & Sons, Ltd.

  20. Diagnosis of Chronic Pancreatitis Incorporating Endosonographic Features, Demographics, and Behavioral Risk.

    PubMed

    Lee, Linda S; Tabak, Ying P; Kadiyala, Vivek; Sun, Xiaowu; Suleiman, Shadeah; Johannes, Richard S; Banks, Peter A; Conwell, Darwin L

    2017-03-01

    Diagnosing chronic pancreatitis remains challenging. Endoscopic ultrasound (EUS) is utilized to evaluate pancreatic disease. Abnormal pancreas function test is considered the "nonhistologic" criterion standard for chronic pancreatitis. We derived a prediction model for abnormal endoscopic pancreatic function test (ePFT) by enriching EUS findings with patient demographic and pancreatitis behavioral risk characteristics. Demographics, behavioral risk characteristics, EUS findings, and peak bicarbonate results were collected from patients evaluated for pancreatic disease. Abnormal ePFT was defined as peak bicarbonate of less than 75 mEq/L. We fit a logistic regression model and converted it to a risk score system. The risk score was validated using 1000 bootstrap simulations. A total of 176 patients were included; 61% were female with median age of 48 years (interquartile range, 38-57 years). Abnormal ePFT rate was 39.2% (69/176). Four variables formulated the risk score: alcohol or smoking status, number of parenchymal abnormalities, number of ductal abnormalities, and calcifications. Abnormal ePFT occurred in 10.7% with scores 4 or less versus 92.0% scoring 20 or greater. The model C-statistic was 0.78 (95% confidence interval, 0.71-0.85). Number of EUS pancreatic duct and parenchymal abnormalities, presence of calcification, and smoking/alcohol status were predictive of abnormal ePFT. This simple model has good discrimination for ePFT results.
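Bootstrap validation of a C-statistic, as in the 1000-simulation step above, amounts to resampling patients with replacement and recomputing the concordance each time. A sketch on simulated risk scores (the data, class sizes, and score distributions here are all illustrative, not the study's model):

```python
import numpy as np

def c_statistic(y_true, y_score):
    """Concordance (AUC): chance a random positive outranks a random negative."""
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def bootstrap_auc_ci(y_true, y_score, n_boot=1000, seed=0):
    """Percentile bootstrap CI for the C-statistic."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    aucs = []
    while len(aucs) < n_boot:
        idx = rng.integers(0, n, n)
        if y_true[idx].min() != y_true[idx].max():  # need both classes present
            aucs.append(c_statistic(y_true[idx], y_score[idx]))
    return np.percentile(aucs, [2.5, 97.5])

# Hypothetical scores: higher score -> more likely abnormal function test
y = np.array([0] * 30 + [1] * 20)
scores = np.concatenate([np.random.default_rng(3).normal(0.0, 1.0, 30),
                         np.random.default_rng(4).normal(1.2, 1.0, 20)])
auc = c_statistic(y, scores)
lo, hi = bootstrap_auc_ci(y, scores)
print(f"C-statistic {auc:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```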

  1. Mediating processes between stress and problematic marijuana use.

    PubMed

    Ketcherside, Ariel; Filbey, Francesca M

    2015-06-01

    The literature widely reports that stress is associated with marijuana use, yet, to date, the path from stress to marijuana-related problems has not been tested. In this study, we evaluated whether negative affect mediates the relationship between stress and marijuana use. To that end, we tested models to determine mediators between problems with marijuana use (via Marijuana Problem Scale), stress (via Early Life Stress Questionnaire, Perceived Stress Scale), and negative affect (via Beck Depression Inventory; Beck Anxiety Inventory) in 157 current heavy marijuana users. Mediation tests and bootstrap confidence intervals were carried out via the "Mediation" package in R. Depression and anxiety scores both significantly mediated the relationship between perceived stress and problematic marijuana use. Only depression significantly mediated the relationship between early life stress and problematic marijuana use. Early life stress, perceived stress and problematic marijuana use were significant only as independent variables and dependent variables. These findings demonstrate that (1) depression mediated both early life stress and perceived stress, and problematic marijuana use, and, (2) anxiety mediated perceived stress and problematic marijuana use. This mediation analysis represents a strong first step toward understanding the relationship between these variables; however, longitudinal studies are needed to determine causality between these variables. To conclude, addressing concomitant depression and anxiety in those who report either perceived stress or early life stress is important for the prevention of cannabis use disorders. Copyright © 2015. Published by Elsevier Ltd.
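The study ran its mediation tests in the R "mediation" package; the underlying product-of-coefficients bootstrap can be sketched in Python on simulated data. Everything below (variable names, sample size, effect sizes) is illustrative, not the study's data:

```python
import numpy as np

def indirect_effect(x, m, y):
    """Product-of-coefficients indirect effect: a (x -> m) times b (m -> y | x)."""
    a = np.polyfit(x, m, 1)[0]
    # b: coefficient on m when y is regressed on [intercept, m, x]
    design = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]
    return a * b

def bootstrap_ci(x, m, y, n_boot=2000, seed=0):
    """Percentile bootstrap CI for the indirect effect (case resampling)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    est = [indirect_effect(*(arr[idx] for arr in (x, m, y)))
           for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    return np.percentile(est, [2.5, 97.5])

# Simulated data with a true indirect effect of 0.5 * 0.6 = 0.3
rng = np.random.default_rng(1)
x = rng.normal(size=300)                                    # "stress"
m = 0.5 * x + rng.normal(scale=0.5, size=300)               # "negative affect"
y = 0.6 * m + 0.2 * x + rng.normal(scale=0.5, size=300)     # "problem score"
ie = indirect_effect(x, m, y)
lo, hi = bootstrap_ci(x, m, y)
print(f"indirect effect {ie:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

Mediation is judged significant when the bootstrap CI for the indirect effect excludes zero, which is the criterion the abstract's tests apply.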

  2. Hitting Is Contagious in Baseball: Evidence from Long Hitting Streaks

    PubMed Central

    Bock, Joel R.; Maewal, Akhilesh; Gough, David A.

    2012-01-01

Data analysis is used to test the hypothesis that “hitting is contagious”. A statistical model is described to study the effect of a hot hitter upon his teammates’ batting during a consecutive game hitting streak. Box score data for entire seasons comprising long hitting streaks were compiled. Treatment and control sample groups were constructed from core lineups of players on the streaking batter’s team. The percentile method bootstrap was used to calculate confidence intervals for statistics representing differences in the mean distributions of two batting statistics between groups. Batters in the treatment group (hot streak active) showed statistically significant improvements in hitting performance, as compared against the control: mean batting average was several percentage points higher during hot streaks, and the batting heat index introduced here also increased. For each performance statistic, the null hypothesis was rejected. We conclude that the evidence suggests the potential existence of a “statistical contagion effect”. Psychological mechanisms essential to the empirical results are suggested, as several studies from the scientific literature lend credence to contagious phenomena in sports. Causal inference from these results is difficult, but we suggest and discuss several latent variables that may contribute to the observed results, and offer possible directions for future research. PMID:23251507

  3. Assessing the effects of pharmacological agents on respiratory dynamics using time-series modeling.

    PubMed

    Wong, Kin Foon Kevin; Gong, Jen J; Cotten, Joseph F; Solt, Ken; Brown, Emery N

    2013-04-01

    Developing quantitative descriptions of how stimulant and depressant drugs affect the respiratory system is an important focus in medical research. Respiratory variables-respiratory rate, tidal volume, and end tidal carbon dioxide-have prominent temporal dynamics that make it inappropriate to use standard hypothesis-testing methods that assume independent observations to assess the effects of these pharmacological agents. We present a polynomial signal plus autoregressive noise model for analysis of continuously recorded respiratory variables. We use a cyclic descent algorithm to maximize the conditional log likelihood of the parameters and the corrected Akaike's information criterion to choose simultaneously the orders of the polynomial and the autoregressive models. In an analysis of respiratory rates recorded from anesthetized rats before and after administration of the respiratory stimulant methylphenidate, we use the model to construct within-animal z-tests of the drug effect that take account of the time-varying nature of the mean respiratory rate and the serial dependence in rate measurements. We correct for the effect of model lack-of-fit on our inferences by also computing bootstrap confidence intervals for the average difference in respiratory rate pre- and postmethylphenidate treatment. Our time-series modeling quantifies within each animal the substantial increase in mean respiratory rate and respiratory dynamics following methylphenidate administration. This paradigm can be readily adapted to analyze the dynamics of other respiratory variables before and after pharmacologic treatments.
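Bootstrapping serially dependent measurements, as done here for respiratory rate, usually means resampling contiguous blocks rather than single points so the within-block autocorrelation is preserved. A moving-block bootstrap sketch on a simulated AR(1) series (illustrative only, not the paper's exact procedure or data):

```python
import numpy as np

def block_bootstrap_mean_ci(series, block_len=10, n_boot=2000, seed=0):
    """Moving-block bootstrap CI for the mean of an autocorrelated series."""
    rng = np.random.default_rng(seed)
    n = len(series)
    n_starts = n - block_len + 1          # every contiguous block is a candidate
    n_blocks = int(np.ceil(n / block_len))
    means = np.empty(n_boot)
    for i in range(n_boot):
        starts = rng.integers(0, n_starts, n_blocks)
        resample = np.concatenate([series[s:s + block_len] for s in starts])[:n]
        means[i] = resample.mean()
    return np.percentile(means, [2.5, 97.5])

# Simulated AR(1) series around 60 breaths/min (illustrative)
rng = np.random.default_rng(2)
x = np.empty(500)
x[0] = 60.0
for t in range(1, 500):
    x[t] = 60.0 + 0.7 * (x[t - 1] - 60.0) + rng.normal(scale=2.0)
lo, hi = block_bootstrap_mean_ci(x)
print(f"mean {x.mean():.1f}, block-bootstrap 95% CI ({lo:.1f}, {hi:.1f})")
```

An i.i.d. bootstrap on the same series would understate the interval, because positive autocorrelation inflates the variance of the mean; resampling blocks keeps that dependence in each resample.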

  4. Studies of the flow and turbulence fields in a turbulent pulsed jet flame using LES/PDF

    NASA Astrophysics Data System (ADS)

    Zhang, Pei; Masri, Assaad R.; Wang, Haifeng

    2017-09-01

    A turbulent piloted jet flame subject to a rapid velocity pulse in its fuel jet inflow is proposed as a new benchmark case for the study of turbulent combustion models. In this work, we perform modelling studies of this turbulent pulsed jet flame and focus on the predictions of its flow and turbulence fields. An advanced modelling strategy combining the large eddy simulation (LES) and the probability density function (PDF) methods is employed to model the turbulent pulsed jet flame. Characteristics of the velocity measurements are analysed to produce a time-dependent inflow condition that can be fed into the simulations. The effect of the uncertainty in the inflow turbulence intensity is investigated and is found to be very small. A method of specifying the inflow turbulence boundary condition for the simulations of the pulsed jet flame is assessed. The strategies for validating LES of statistically transient flames are discussed, and a new framework is developed consisting of different averaging strategies and a bootstrap method for constructing confidence intervals. Parametric studies are performed to examine the sensitivity of the predictions of the flow and turbulence fields to model and numerical parameters. A direct comparison of the predicted and measured time series of the axial velocity demonstrates a satisfactory prediction of the flow and turbulence fields of the pulsed jet flame by the employed modelling methods.

  5. Estimating interaction on an additive scale between continuous determinants in a logistic regression model.

    PubMed

    Knol, Mirjam J; van der Tweel, Ingeborg; Grobbee, Diederick E; Numans, Mattijs E; Geerlings, Mirjam I

    2007-10-01

To determine the presence of interaction in epidemiologic research, typically a product term is added to the regression model. In linear regression, the regression coefficient of the product term reflects interaction as departure from additivity; in logistic regression, however, it refers to interaction as departure from multiplicativity. Rothman has argued that interaction estimated as departure from additivity better reflects biologic interaction. So far, the literature on estimating interaction on an additive scale using logistic regression has focused only on dichotomous determinants. The objective of the present study was to provide methods to estimate interaction between continuous determinants and to illustrate these methods with a clinical example. From the existing literature we derived the formulas to quantify interaction as departure from additivity between one continuous and one dichotomous determinant, and between two continuous determinants, using logistic regression. Bootstrapping was used to calculate the corresponding confidence intervals. To illustrate the theory with an empirical example, data from the Utrecht Health Project were used, with age and body mass index as risk factors for elevated diastolic blood pressure. The methods and formulas presented in this article are intended to assist epidemiologists in calculating interaction on an additive scale between two variables on a certain outcome. The proposed methods are included in a spreadsheet which is freely available at: http://www.juliuscenter.nl/additive-interaction.xls.
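For the simpler case of two dichotomous exposures, departure from additivity is often summarized by the relative excess risk due to interaction, RERI = OR11 - OR10 - OR01 + 1, with a bootstrap CI; the paper's formulas extend this idea to continuous determinants. A sketch with synthetic case-control counts (all numbers illustrative):

```python
import numpy as np

def reri(records):
    """RERI = OR11 - OR10 - OR01 + 1 from (exposure_a, exposure_b, case) rows."""
    def odds_ratio(a, b):
        def cell(e1, e2, d):  # count of rows in a given exposure/outcome cell
            return np.sum((records[:, 0] == e1) & (records[:, 1] == e2)
                          & (records[:, 2] == d))
        return (cell(a, b, 1) * cell(0, 0, 0)) / (cell(a, b, 0) * cell(0, 0, 1))
    return odds_ratio(1, 1) - odds_ratio(1, 0) - odds_ratio(0, 1) + 1

def bootstrap_reri_ci(records, n_boot=1000, seed=0):
    """Percentile bootstrap CI for RERI by resampling individual records."""
    rng = np.random.default_rng(seed)
    n = len(records)
    est = [reri(records[rng.integers(0, n, n)]) for _ in range(n_boot)]
    return np.percentile(est, [2.5, 97.5])

counts = {  # (a, b): (cases, controls) -- synthetic counts
    (0, 0): (20, 180), (1, 0): (40, 160), (0, 1): (30, 170), (1, 1): (90, 110),
}
records = np.array([(a, b, d)
                    for (a, b), (cases, ctrls) in counts.items()
                    for d, k in ((1, cases), (0, ctrls))
                    for _ in range(k)])
r_hat = reri(records)
ci = bootstrap_reri_ci(records)
print(f"RERI {r_hat:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```

A RERI of 0 corresponds to exact additivity of the two exposure effects; a CI excluding 0 suggests additive interaction.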

  6. Mediation effects of a culturally generic substance use prevention program for Asian American adolescents

    PubMed Central

    Fang, Lin; Schinke, Steven P.

    2014-01-01

In this paper, we examined the mediation effects of a family-based substance use prevention program on a sample of Asian American families. These families were randomized into an intervention arm or a non-intervention control arm. Using path models, we assessed the effect of the intervention on adolescent girls’ substance use outcomes at 2-year follow-up through family relationship and adolescent self-efficacy pathways. A bias-corrected bootstrapping strategy was employed to assess the significance of the mediation effect by evaluating the 95% confidence interval of the standardized coefficient. The results show that receiving the intervention exerted a positive effect on girls’ family relationships at 1-year follow-up. Such an improvement was associated with girls’ increased self-efficacy, which in turn led to girls’ decreased alcohol use, marijuana use, and future intention to use substances at 2-year follow-up. Considering the diverse cultural backgrounds, languages, nationalities, and acculturation levels under the umbrella term “Asian Americans”, we demonstrate that a universal web-based intervention addressing theoretically and empirically based risk and protective factors can be effective for Asian Americans. Despite its generic nature, our program may provide relevant tools for Asian American parents in assisting their adolescent children to navigate this developmental stage and, ultimately, resist substance use. PMID:25505939

  7. Mediation analysis of gestational age, congenital heart defects, and infant birth-weight.

    PubMed

    Wogu, Adane F; Loffredo, Christopher A; Bebu, Ionut; Luta, George

    2014-12-17

    In this study we assessed the mediation role of the gestational age on the effect of the infant's congenital heart defects (CHD) on birth-weight. We used secondary data from the Baltimore-Washington Infant Study (1981-1989). Mediation analysis was employed to investigate whether gestational age acted as a mediator of the association between CHD and reduced birth-weight. We estimated the mediated effect, the mediation proportion, and their corresponding 95% confidence intervals (CI) using several methods. There were 3362 CHD cases and 3564 controls in the dataset with mean birth-weight of 3071 (SD = 729) and 3353 (SD = 603) grams, respectively; the mean gestational age was 38.9 (SD = 2.7) and 39.6 (SD = 2.2) weeks, respectively. After adjusting for covariates, the estimated mediated effect by gestational age was 113.5 grams (95% CI, 92.4-134.2) and the mediation proportion was 40.7% (95% CI, 34.7%-46.6%), using the bootstrap approach. Gestational age may account for about 41% of the overall effect of heart defects on reduced infant birth-weight. Improved prenatal care and other public health efforts that promote full term delivery, particularly targeting high-risk families and mothers known to be carrying a fetus with CHD, may therefore be expected to improve the birth-weight of these infants and their long term health.

  8. Mental Health Service Use Among Lesbian, Gay, and Bisexual Older Adults.

    PubMed

    Stanley, Ian H; Duong, Jeffrey

    2015-07-01

    Empirical efforts to measure use of mental health services among lesbian, gay, and bisexual (LGB) older adults have been notably lacking. Thus this study assessed associations between sexual orientation and mental health service use among older adults and determined the mediating role of nonspecific psychological distress, excessive alcohol use, and self-perceived poor general medical health. Data from the 2011 New York City Community Health Survey were analyzed. The analytic sample comprised 5,138 adults ages 50 and over. Logistic regression modeling was used to examine associations between sexual orientation (LGB versus heterosexual) and past-year mental health service use (counseling or medication), adjusting for sociodemographic and clinical characteristics. Mediation analyses using bootstrapping were conducted. Among LGB older adults, 23.9% reported receiving counseling, and 23.4% reported taking psychiatric medication in the past year. LGB respondents were significantly more likely than heterosexuals to have received counseling (adjusted odds ratio [AOR]=2.16, 95% confidence interval [CI]=1.49-3.13) and psychiatric medication (AOR=1.97, CI=1.36-2.86). Psychological distress, excessive alcohol use, and self-perceived poor general medical health did not mediate the association between sexual orientation and mental health service use. LGB older adults were more likely than heterosexuals to utilize mental health services, and this association was not explained by indicators of general medical, mental, or behavioral health.

  9. Projection of young-old and old-old with functional disability: does accounting for the changing educational composition of the elderly population make a difference?

    PubMed

    Ansah, John P; Malhotra, Rahul; Lew, Nicola; Chiu, Chi-Tsun; Chan, Angelique; Bayer, Steffen; Matchar, David B

    2015-01-01

    This study compares projections, up to year 2040, of young-old (aged 60-79) and old-old (aged 80+) with functional disability in Singapore with and without accounting for the changing educational composition of the Singaporean elderly. Two multi-state population models, with and without accounting for educational composition respectively, were developed, parameterized with age-gender-(education)-specific transition probabilities (between active, functional disability and death states) estimated from two waves (2009 and 2011) of a nationally representative survey of community-dwelling Singaporeans aged ≥ 60 years (N=4,990). Probabilistic sensitivity analysis with the bootstrap method was used to obtain the 95% confidence interval of the transition probabilities. Not accounting for educational composition overestimated the young-old with functional disability by 65 percent and underestimated the old-old by 20 percent in 2040. Accounting for educational composition, the proportion of old-old with functional disability increased from 40.8 percent in 2000 to 64.4 percent by 2040; not accounting for educational composition, the proportion in 2040 was 49.4 percent. Since the health profiles, and hence care needs, of the old-old differ from those of the young-old, health care service utilization and expenditure and the demand for formal and informal caregiving will be affected, impacting health and long-term care policy.
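    The bootstrap step described above — resampling respondents to get a 95% confidence interval for a two-wave transition probability — can be sketched as follows; the cohort and the 12% transition rate are simulated, not the Singapore survey data.

```python
# Sketch: percentile bootstrap CI for one transition probability
# (active at wave 1 -> functionally disabled at wave 2), simulated cohort.
import numpy as np

rng = np.random.default_rng(6)
n = 2000
# 1 = respondent transitioned from active to disabled between waves (hypothetical rate)
transitioned = (rng.random(n) < 0.12).astype(float)

p_hat = transitioned.mean()
B = 2000
boots = np.array([transitioned[rng.integers(0, n, n)].mean() for _ in range(B)])
ci = np.quantile(boots, [0.025, 0.975])   # 95% interval fed into the sensitivity analysis
print(round(p_hat, 3), ci)
```

    In the probabilistic sensitivity analysis, a full set of such intervals (one per age-gender-education-specific transition) propagates parameter uncertainty through the multi-state projection.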

  10. Projection of Young-Old and Old-Old with Functional Disability: Does Accounting for the Changing Educational Composition of the Elderly Population Make a Difference?

    PubMed Central

    Ansah, John P.; Malhotra, Rahul; Lew, Nicola; Chiu, Chi-Tsun; Chan, Angelique; Bayer, Steffen; Matchar, David B.

    2015-01-01

    This study compares projections, up to year 2040, of young-old (aged 60-79) and old-old (aged 80+) with functional disability in Singapore with and without accounting for the changing educational composition of the Singaporean elderly. Two multi-state population models, with and without accounting for educational composition respectively, were developed, parameterized with age-gender-(education)-specific transition probabilities (between active, functional disability and death states) estimated from two waves (2009 and 2011) of a nationally representative survey of community-dwelling Singaporeans aged ≥60 years (N=4,990). Probabilistic sensitivity analysis with the bootstrap method was used to obtain the 95% confidence interval of the transition probabilities. Not accounting for educational composition overestimated the young-old with functional disability by 65 percent and underestimated the old-old by 20 percent in 2040. Accounting for educational composition, the proportion of old-old with functional disability increased from 40.8 percent in 2000 to 64.4 percent by 2040; not accounting for educational composition, the proportion in 2040 was 49.4 percent. Since the health profiles, and hence care needs, of the old-old differ from those of the young-old, health care service utilization and expenditure and the demand for formal and informal caregiving will be affected, impacting health and long-term care policy. PMID:25974069

  11. Etiological classifications of transient ischemic attacks: subtype classification by TOAST, CCS and ASCO--a pilot study.

    PubMed

    Amort, Margareth; Fluri, Felix; Weisskopf, Florian; Gensicke, Henrik; Bonati, Leo H; Lyrer, Philippe A; Engelter, Stefan T

    2012-01-01

    In patients with transient ischemic attacks (TIA), etiological classification systems are not well studied. The Trial of ORG 10172 in Acute Stroke Treatment (TOAST), the Causative Classification System (CCS), and the Atherosclerosis Small Vessel Disease Cardiac Source Other Cause (ASCO) classification may be useful to determine the underlying etiology. We aimed to test the feasibility of each of the 3 systems. Furthermore, we studied and compared their prognostic usefulness. In a single-center TIA registry prospectively ascertained over 2 years, we applied 3 etiological classification systems. We compared the distribution of underlying etiologies, the rates of patients with determined versus undetermined etiology, and studied whether etiological subtyping distinguished TIA patients with versus without subsequent stroke or TIA within 3 months. The 3 systems were applicable in all 248 patients. A determined etiology with the highest level of causality was assigned similarly often with TOAST (35.9%), CCS (34.3%), and ASCO (38.7%). However, the frequency of undetermined causes differed significantly between the classification systems and was lowest for ASCO (TOAST: 46.4%; CCS: 37.5%; ASCO: 18.5%; p < 0.001). In TOAST, CCS, and ASCO, cardioembolism (19.4/14.5/18.5%) was the most common etiology, followed by atherosclerosis (11.7/12.9/14.5%). At 3 months, 33 patients (13.3%, 95% confidence interval 9.3-18.2%) had recurrent cerebral ischemic events. These were strokes in 13 patients (5.2%; 95% confidence interval 2.8-8.8%) and TIAs in 20 patients (8.1%, 95% confidence interval 5.0-12.2%). Patients with a determined etiology (high level of causality) had higher rates of subsequent strokes than those without a determined etiology [TOAST: 6.7% (95% confidence interval 2.5-14.1%) vs. 4.4% (95% confidence interval 1.8-8.9%); CCS: 9.3% (95% confidence interval 4.1-17.5%) vs. 3.1% (95% confidence interval 1.0-7.1%); ASCO: 9.4% (95% confidence interval 4.4-17.1%) vs.
2.6% (95% confidence interval 0.7-6.6%)]. However, this difference was only significant in the ASCO classification (p = 0.036). Using ASCO, there was neither an increase in risk of subsequent stroke among patients with incomplete diagnostic workup (at least one subtype scored 9) compared with patients with adequate workup (no subtype scored 9), nor among patients with multiple causes compared with patients with a single cause. In TIA patients, all etiological classification systems provided a similar distribution of underlying etiologies. The increase in stroke risk in TIA patients with determined versus undetermined etiology was most evident using the ASCO classification. Copyright © 2012 S. Karger AG, Basel.

  12. Likelihood-based confidence intervals for estimating floods with given return periods

    NASA Astrophysics Data System (ADS)

    Martins, Eduardo Sávio P. R.; Clarke, Robin T.

    1993-06-01

    This paper discusses aspects of the calculation of likelihood-based confidence intervals for T-year floods, with particular reference to (1) the two-parameter gamma distribution; (2) the Gumbel distribution; (3) the two-parameter log-normal distribution, and other distributions related to the normal by Box-Cox transformations. Calculation of the confidence limits is straightforward using the Nelder-Mead algorithm with a constraint incorporated, although care is necessary to ensure convergence either of the Nelder-Mead algorithm, or of the Newton-Raphson calculation of maximum-likelihood estimates. Methods are illustrated using records from 18 gauging stations in the basin of the River Itajai-Acu, State of Santa Catarina, southern Brazil. A small and restricted simulation compared likelihood-based confidence limits with those given by use of the central limit theorem; for the same confidence probability, the confidence limits of the simulation were wider than those of the central limit theorem, which failed more frequently to contain the true quantile being estimated. The paper discusses possible applications of likelihood-based confidence intervals in other areas of hydrological analysis.
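    A likelihood-based (profile likelihood) confidence interval for a T-year flood can be sketched as follows for the Gumbel case. This is an illustration under stated assumptions, not the paper's implementation: the flood record is simulated, and SciPy's bounded scalar minimizer and root finder stand in for the constrained Nelder-Mead and Newton-Raphson machinery described.

```python
# Sketch: profile-likelihood 95% CI for the T-year Gumbel quantile, simulated record.
import numpy as np
from scipy.stats import gumbel_r, chi2
from scipy.optimize import minimize_scalar, brentq

rng = np.random.default_rng(1)
data = gumbel_r.rvs(loc=100.0, scale=20.0, size=80, random_state=rng)  # annual maxima
T = 50
y = -np.log(-np.log(1.0 - 1.0 / T))        # Gumbel reduced variate for the T-year event

loc_hat, scale_hat = gumbel_r.fit(data)    # maximum-likelihood fit
q_hat = loc_hat + scale_hat * y            # ML estimate of the T-year flood
ll_max = gumbel_r.logpdf(data, loc_hat, scale_hat).sum()

def profile_ll(q):
    # maximize the log-likelihood over the scale, with loc tied to the fixed quantile q
    nll = lambda s: -gumbel_r.logpdf(data, q - s * y, s).sum()
    return -minimize_scalar(nll, bounds=(1.0, 200.0), method="bounded").fun

# the 95% interval is where the profile deviance stays below the chi-square cutoff
cut = ll_max - 0.5 * chi2.ppf(0.95, df=1)
lower = brentq(lambda q: profile_ll(q) - cut, q_hat - 60.0, q_hat - 1e-6)
upper = brentq(lambda q: profile_ll(q) - cut, q_hat + 1e-6, q_hat + 80.0)
print(q_hat, (lower, upper))
```

    Reparameterizing so that the quantile of interest is itself a parameter is what makes the interval asymmetric in the way a central-limit-theorem interval cannot be.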

  13. A comparison of bootstrap methods and an adjusted bootstrap approach for estimating the prediction error in microarray classification.

    PubMed

    Jiang, Wenyu; Simon, Richard

    2007-12-20

    This paper first provides a critical review of some existing methods for estimating the prediction error in classifying microarray data, where the number of genes greatly exceeds the number of specimens. Special attention is given to the bootstrap-related methods. When the sample size n is small, we find that all the reviewed methods suffer from either substantial bias or variability. We introduce a repeated leave-one-out bootstrap (RLOOB) method that predicts for each specimen in the sample using bootstrap learning sets of size ln. We then propose an adjusted bootstrap (ABS) method that fits a learning curve to the RLOOB estimates calculated with different bootstrap learning set sizes. The ABS method is robust across the situations we investigate and provides a slightly conservative estimate for the prediction error. Even with small samples, it does not suffer from the large upward bias of the leave-one-out bootstrap and the 0.632+ bootstrap, nor from the large variability of leave-one-out cross-validation in microarray applications. Copyright (c) 2007 John Wiley & Sons, Ltd.
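    The leave-one-out bootstrap that RLOOB builds on can be sketched as follows; the data, the nearest-centroid classifier, and all sizes are hypothetical stand-ins for a small-n, large-p microarray setting, not the paper's experiments.

```python
# Sketch: leave-one-out bootstrap (LOOB) estimate of prediction error.
# Each specimen is predicted only by classifiers trained on bootstrap
# learning sets that happen to exclude it.
import numpy as np

rng = np.random.default_rng(2)
n, p = 40, 50                      # few specimens, many features (hypothetical)
X = rng.normal(size=(n, p))
y = rng.integers(0, 2, n)
X[y == 1, :5] += 1.0               # weak class signal in the first 5 "genes"

def fit_predict(Xtr, ytr, Xte):
    # nearest-centroid classifier, a simple stand-in
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    d0 = ((Xte - c0) ** 2).sum(1)
    d1 = ((Xte - c1) ** 2).sum(1)
    return (d1 < d0).astype(int)

B = 200
err = np.zeros(n)
cnt = np.zeros(n)
for _ in range(B):
    idx = rng.integers(0, n, n)              # bootstrap learning set
    out = np.setdiff1d(np.arange(n), idx)    # specimens left out of this set
    if len(np.unique(y[idx])) < 2 or len(out) == 0:
        continue
    pred = fit_predict(X[idx], y[idx], X[out])
    err[out] += (pred != y[out])
    cnt[out] += 1
loob = np.mean(err[cnt > 0] / cnt[cnt > 0])  # leave-one-out bootstrap error
print(loob)
```

    RLOOB repeats this with learning sets of different sizes, and ABS then extrapolates the resulting learning curve to the full sample size.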

  14. Fast, Exact Bootstrap Principal Component Analysis for p > 1 million

    PubMed Central

    Fisher, Aaron; Caffo, Brian; Schwartz, Brian; Zipunnikov, Vadim

    2015-01-01

    Many have suggested a bootstrap procedure for estimating the sampling variability of principal component analysis (PCA) results. However, when the number of measurements per subject (p) is much larger than the number of subjects (n), calculating and storing the leading principal components from each bootstrap sample can be computationally infeasible. To address this, we outline methods for fast, exact calculation of bootstrap principal components, eigenvalues, and scores. Our methods leverage the fact that all bootstrap samples occupy the same n-dimensional subspace as the original sample. As a result, all bootstrap principal components are limited to the same n-dimensional subspace and can be efficiently represented by their low dimensional coordinates in that subspace. Several uncertainty metrics can be computed solely based on the bootstrap distribution of these low dimensional coordinates, without calculating or storing the p-dimensional bootstrap components. Fast bootstrap PCA is applied to a dataset of sleep electroencephalogram recordings (p = 900, n = 392), and to a dataset of brain magnetic resonance images (MRIs) (p ≈ 3 million, n = 352). For the MRI dataset, our method allows for standard errors for the first 3 principal components based on 1000 bootstrap samples to be calculated on a standard laptop in 47 minutes, as opposed to approximately 4 days with standard methods. PMID:27616801
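    The core identity — every bootstrap sample lies in the same n-dimensional subspace as the original sample, so each replication needs only an n × n SVD — can be sketched as follows. The data are simulated; this is not the authors' implementation.

```python
# Sketch: exact bootstrap PCA computed in the n-dimensional subspace,
# with no p-dimensional work inside the bootstrap loop.
import numpy as np

rng = np.random.default_rng(3)
p, n = 10_000, 50
X = rng.normal(size=(p, n))
X -= X.mean(axis=1, keepdims=True)        # center across subjects (once, for simplicity)

# One p-dimensional SVD up front: X = U @ S, with U (p x n) orthonormal
U, d, Vt = np.linalg.svd(X, full_matrices=False)
S = np.diag(d) @ Vt                       # n x n coordinate matrix

B = 100
ev1 = np.empty(B)                         # bootstrap first eigenvalues
for b in range(B):
    idx = rng.integers(0, n, n)           # resample subjects (columns)
    Ub, db, _ = np.linalg.svd(S[:, idx], full_matrices=False)  # n x n SVD only
    ev1[b] = db[0] ** 2 / (n - 1)
    # the p-dimensional bootstrap PCs, if needed, are just U @ Ub (never stored here)

# exactness check: the small SVD reproduces the full p-dimensional bootstrap SVD
idx = rng.integers(0, n, n)
d_small = np.linalg.svd(S[:, idx], compute_uv=False)
d_big = np.linalg.svd(X[:, idx], compute_uv=False)
print(np.allclose(d_small, d_big))
```

    Because U has orthonormal columns, the singular values of U @ S[:, idx] equal those of S[:, idx] exactly, which is why the method is exact rather than approximate.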

  15. Comparison of human expert and computer-automated systems using magnitude-squared coherence (MSC) and bootstrap distribution statistics for the interpretation of pattern electroretinograms (PERGs) in infants with optic nerve hypoplasia (ONH).

    PubMed

    Fisher, Anthony C; McCulloch, Daphne L; Borchert, Mark S; Garcia-Filion, Pamela; Fink, Cassandra; Eleuteri, Antonio; Simpson, David M

    2015-08-01

    Pattern electroretinograms (PERGs) have inherently low signal-to-noise ratios and can be difficult to detect when degraded by pathology or noise. We compare an objective system for automated PERG analysis with expert human interpretation in children with optic nerve hypoplasia (ONH) with PERGs ranging from clear to undetectable. PERGs were recorded uniocularly with chloral hydrate sedation in children with ONH (aged 3.5-35 months). Stimuli were reversing checks of four sizes focused using an optical system incorporating the cycloplegic refraction. Forty PERG records were analysed; 20 selected at random and 20 from eyes with good vision (fellow eyes or eyes with mild ONH) from over 300 records. Two experts identified P50 and N95 of the PERGs after manually deleting trials with movement artefact, slow-wave EEG (4-8 Hz) or other noise from raw data for 150 check reversals. The automated system first identified present/not-present responses using a magnitude-squared coherence criterion and then, for responses confirmed as present, estimated the P50 and N95 cardinal positions as the turning points in local third-order polynomials fitted in the -3 dB bandwidth [0.25 … 45] Hz. Confidence limits were estimated from bootstrap re-sampling with replacement. The automated system uses an interactive Internet-available webpage tool (see http://clinengnhs.liv.ac.uk/esp_perg_1.htm). The automated system detected 28 PERG signals above the noise level (p ≤ 0.05 for H0). Good subjective quality ratings were indicative of significant PERGs; however, poor subjective quality did not necessarily predict non-significant signals. P50 and N95 implicit times showed good agreement between the two experts and between experts and the automated system. 
For the N95 amplitude measured to P50, the experts differed by an average of 13% consistent with differing interpretations of peaks within noise, while the automated amplitude measure was highly correlated with the expert measures but was proportionally larger. Trial-by-trial review of these data required approximately 6.5 h for each human expert, while automated data processing required <4 min, excluding overheads relating to data transfer. An automated computer system for PERG analysis, using a panel of signal processing and statistical techniques, provides objective present/not-present detection and cursor positioning with explicit confidence intervals. The system achieves, within an efficient and robust statistical framework, estimates of P50 and N95 amplitudes and implicit times similar to those of clinical experts.
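    A magnitude-squared coherence (MSC) present/not-present criterion of the kind described can be sketched as follows; the sampling rate, signals, and response amplitude are synthetic illustrations, not the PERG pipeline.

```python
# Sketch: Welch-averaged magnitude-squared coherence between a stimulus
# reference and a recording, evaluated at the stimulation frequency, as a
# present/not-present detection statistic.
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(7)
fs, f0 = 512.0, 8.0                          # sampling rate, reversal rate (Hz), hypothetical
t = np.arange(0, 60, 1 / fs)                 # 60 s of recording
stim = np.sin(2 * np.pi * f0 * t)            # stimulus-locked reference
resp = 0.3 * stim + rng.normal(size=t.size)  # weak steady-state response + noise
pure_noise = rng.normal(size=t.size)         # recording with no response at all

def msc_at(ref, sig):
    # MSC at the stimulation frequency, averaged over Welch segments
    f, Cxy = coherence(ref, sig, fs=fs, nperseg=512)
    return float(Cxy[np.argmin(np.abs(f - f0))])

msc_resp = msc_at(stim, resp)          # high: response present
msc_noise = msc_at(stim, pure_noise)   # near zero: nothing to detect
print(round(msc_resp, 3), round(msc_noise, 3))
```

    In practice the decision threshold comes from the null distribution of MSC for the number of segments averaged, giving the p ≤ 0.05 criterion the abstract describes.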

  16. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications

    PubMed Central

    Chaibub Neto, Elias

    2015-01-01

    In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson’s sample correlation coefficient, and compare its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was considerably faster for small and moderate sample sizes. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower than the straightforward implementation due to increased time expenditures in the generation of weight matrices via multinomial sampling. PMID:26125965
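    The multinomial-weighting formulation can be sketched as follows for the simplest moment statistic, the sample mean. The paper works in R; this is a Python sketch of the same idea with arbitrary data and sizes.

```python
# Sketch: vectorized multinomial bootstrap — all B replications of the sample
# mean computed as a single matrix product, with no explicit resampling.
import numpy as np

rng = np.random.default_rng(4)
n, B = 100, 5000
x = rng.normal(size=n)

# Each bootstrap replication corresponds to multinomial counts over the n
# observations; dividing by n turns the counts into probability weights.
W = rng.multinomial(n, np.full(n, 1.0 / n), size=B) / n   # B x n weight matrix
boot_means = W @ x                                        # all B replications at once

# sanity check against the classic index-resampling bootstrap
idx = rng.integers(0, n, size=(B, n))
classic = x[idx].mean(axis=1)
print(boot_means.std(), classic.std())
```

    Any statistic expressible in sample moments (variance, covariance, Pearson's correlation) vectorizes the same way, by replacing each moment with its weighted counterpart.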

  17. Estimating equivalence with quantile regression

    USGS Publications Warehouse

    Cade, B.S.

    2011-01-01

    Equivalence testing and corresponding confidence interval estimates are used to provide more enlightened statistical statements about parameter estimates by relating them to intervals of effect sizes deemed to be of scientific or practical importance rather than just to an effect size of zero. Equivalence tests and confidence interval estimates are based on a null hypothesis that a parameter estimate is either outside (inequivalence hypothesis) or inside (equivalence hypothesis) an equivalence region, depending on the question of interest and assignment of risk. The former approach, often referred to as bioequivalence testing, is often used in regulatory settings because it reverses the burden of proof compared to a standard test of significance, following a precautionary principle for environmental protection. Unfortunately, many applications of equivalence testing focus on establishing average equivalence by estimating differences in means of distributions that do not have homogeneous variances. I discuss how to compare equivalence across quantiles of distributions using confidence intervals on quantile regression estimates that detect differences in heterogeneous distributions missed by focusing on means. I used one-tailed confidence intervals based on inequivalence hypotheses in a two-group treatment-control design for estimating bioequivalence of arsenic concentrations in soils at an old ammunition testing site and bioequivalence of vegetation biomass at a reclaimed mining site. Two-tailed confidence intervals based both on inequivalence and equivalence hypotheses were used to examine quantile equivalence for negligible trends over time for a continuous exponential model of amphibian abundance. © 2011 by the Ecological Society of America.
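    An interval-based equivalence check at a quantile can be sketched as follows. This is a simplified two-group illustration, not the USGS quantile-regression analyses: the samples are simulated and the equivalence bound `delta` is a hypothetical domain choice.

```python
# Sketch: declare two groups equivalent at the 0.9 quantile if the bootstrap
# CI for their quantile difference lies entirely inside (-delta, +delta).
import numpy as np

rng = np.random.default_rng(5)
treat = rng.lognormal(mean=0.0, sigma=0.5, size=200)   # e.g., treated-site concentrations
ctrl = rng.lognormal(mean=0.05, sigma=0.5, size=200)   # reference site, nearly identical

B, tau = 4000, 0.9
diffs = np.empty(B)
for b in range(B):
    t = rng.choice(treat, treat.size)                  # resample each group
    c = rng.choice(ctrl, ctrl.size)
    diffs[b] = np.quantile(t, tau) - np.quantile(c, tau)

lo, hi = np.quantile(diffs, [0.025, 0.975])
delta = 0.5                                            # equivalence region (domain choice)
equivalent = bool(lo > -delta and hi < delta)
print((lo, hi), equivalent)
```

    Note the reversed burden of proof: equivalence is concluded only when the whole interval falls inside the region, not merely when a difference fails to reach significance.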

  18. Fasting glucose levels, incident diabetes, subclinical atherosclerosis and cardiovascular events in apparently healthy adults: A 12-year longitudinal study.

    PubMed

    Sitnik, Debora; Santos, Itamar S; Goulart, Alessandra C; Staniak, Henrique L; Manson, JoAnn E; Lotufo, Paulo A; Bensenor, Isabela M

    2016-11-01

    We aimed to study the association between fasting plasma glucose, diabetes incidence and cardiovascular burden after 10-12 years. We evaluated the incidence of diabetes and cardiovascular events, carotid intima-media thickness, and coronary artery calcium scores at the ELSA-Brasil (Brazilian Longitudinal Study of Adult Health) baseline (2008-2010) in 1536 adults without diabetes in 1998. We used regression models to estimate association with carotid intima-media thickness (in mm), coronary artery calcium scores (in Agatston points) and cardiovascular events according to fasting plasma glucose in 1998. Adjusted diabetes incidence rate was 9.8/1000 person-years (95% confidence interval: 7.7-13.6/1000 person-years). Incident diabetes was positively associated with higher fasting plasma glucose. Fasting plasma glucose levels 110-125 mg/dL were associated with higher carotid intima-media thickness (β = 0.028; 95% confidence interval: 0.003-0.053). Excluding those with incident diabetes, there was a borderline association between higher carotid intima-media thickness and fasting plasma glucose 110-125 mg/dL (β = 0.030; 95% confidence interval: -0.005 to 0.065). Incident diabetes was associated with higher carotid intima-media thickness (β = 0.034; 95% confidence interval: 0.015-0.053), coronary artery calcium scores ⩾400 (odds ratio = 2.84; 95% confidence interval: 1.17-6.91) and the combined outcome of a coronary artery calcium scores ⩾400 or incident cardiovascular event (odds ratio = 3.50; 95% confidence interval: 1.60-7.65). In conclusion, fasting plasma glucose in 1998 and incident diabetes were associated with higher cardiovascular burden. © The Author(s) 2016.

  19. Prevalence of infections among residents of Residential Care Homes for the Elderly in Hong Kong.

    PubMed

    Choy, C Sm; Chen, H; Yau, C Sw; Hsu, E K; Chik, N Y; Wong, A Ty

    2016-08-01

    A point prevalence study was conducted to study the epidemiology of common infections among residents in Residential Care Homes for the Elderly in Hong Kong and their associated factors. Residential Care Homes for the Elderly in Hong Kong were selected by stratified single-stage cluster random sampling. All residents aged 65 years or above from the recruited homes were surveyed. Infections were identified using standardised definitions. Demographic and health information-including medical history, immunisation record, antibiotic use, and activities of daily living (as measured by Barthel Index)-was collected by a survey team to determine any associated factors. Data were collected from 3857 residents in 46 Residential Care Homes for the Elderly from February to May 2014. A total of 105 residents had at least one type of infection based on the survey definition. The overall prevalence of all infections was 2.7% (95% confidence interval, 2.2%-3.4%). The three most common infections were of the respiratory tract (1.3%; 95% confidence interval, 0.9%-1.9%), skin and soft tissue (0.7%; 95% confidence interval, 0.5%-1.0%), and urinary tract (0.5%; 95% confidence interval, 0.3%-0.9%). Total dependence in activities of daily living, as indicated by low Barthel Index score of 0 to 20 (odds ratio=3.0; 95% confidence interval, 1.4-6.2), and presence of a wound or stoma (odds ratio=2.7; 95% confidence interval, 1.4-4.9) were significantly associated with presence of infection. This survey provides information about infections among residents in Residential Care Homes for the Elderly in the territory. Local data enable us to understand the burden of infections and formulate targeted measures for prevention.

  20. Influence of Objective Three-Dimensional Measures and Movement Images on Surgeon Treatment Planning for Lip Revision Surgery

    PubMed Central

    Trotman, Carroll-Ann; Phillips, Ceib; Faraway, Julian J.; Hartman, Terry; van Aalst, John A.

    2013-01-01

    Objective To determine whether a systematic evaluation of facial soft tissues of patients with cleft lip and palate, using facial video images and objective three-dimensional measurements of movement, changes surgeons’ treatment plans for lip revision surgery. Design Prospective longitudinal study. Setting The University of North Carolina School of Dentistry. Patients, Participants A group of patients with repaired cleft lip and palate (n = 21), a noncleft control group (n = 37), and surgeons experienced in cleft care. Interventions Lip revision. Main Outcome Measures (1) facial photographic images; (2) facial video images during animations; (3) objective three-dimensional measurements of upper lip movement based on z scores; and (4) objective dynamic and visual three-dimensional measurement of facial soft tissue movement. Results With the use of the video images plus objective three-dimensional measures, changes were made to the problem list of the surgical treatment plan for 86% of the patients (95% confidence interval, 0.64 to 0.97) and the surgical goals for 71% of the patients (95% confidence interval, 0.48 to 0.89). The surgeon group varied in the percentage of patients for whom the problem list was modified, ranging from 24% (95% confidence interval, 8% to 47%) to 48% (95% confidence interval, 26% to 70%) of patients, and the percentage for whom the surgical goals were modified, ranging from 14% (95% confidence interval, 3% to 36%) to 48% (95% confidence interval, 26% to 70%) of patients. Conclusions For all surgeons, the additional assessment components of the systematic evaluation resulted in a change in clinical decision making for some patients. PMID:23855676

  1. Lower hospital mortality and complications after pediatric hematopoietic stem cell transplantation.

    PubMed

    Bratton, Susan L; Van Duker, Heather; Statler, Kimberly D; Pulsipher, Michael A; McArthur, Jennifer; Keenan, Heather T

    2008-03-01

    To assess protective and risk factors for mortality among pediatric patients during initial care after hematopoietic stem cell transplantation (HSCT) and to evaluate changes in hospital mortality. Retrospective cohort study using the 1997, 2000, and 2003 Kids' Inpatient Database, a probabilistic sample of children hospitalized in the United States with a procedure code for HSCT. Patients were hospitalized children in the United States captured in the database, aged <19 years; there were no interventions. Hospital mortality significantly decreased from 12% in 1997 to 6% in 2003. Source of stem cells changed with increased use of cord blood. Rates of sepsis, graft versus host disease, and mechanical ventilation significantly decreased. Compared with autologous HSCT, patients who received an allogeneic HSCT without T-cell depletion were more likely to die (adjusted odds ratio, 2.4; 95% confidence interval, 1.5, 3.9), while children who received cord blood HSCT were at the greatest risk of hospital death (adjusted odds ratio, 4.8; 95% confidence interval, 2.6, 9.1). Mechanical ventilation (adjusted odds ratio, 26.32; 95% confidence interval, 16.3-42.2), dialysis (adjusted odds ratio, 12.9; 95% confidence interval, 4.7-35.4), and sepsis (adjusted odds ratio, 3.9; 95% confidence interval, 2.5-6.1) were all independently associated with death, while care in 2003 was associated with decreased risk (adjusted odds ratio, 0.4; 95% confidence interval, 0.2-0.7) of death. Hospital mortality after HSCT in children decreased over time as did complications including need for mechanical ventilation, graft versus host disease, and sepsis. Prevention of complications is essential as the need for invasive support continues to be associated with high mortality risk.

  2. High prevalence of refractive errors in a rural population: 'Nooravaran Salamat' Mobile Eye Clinic experience.

    PubMed

    Hashemi, Hassan; Rezvan, Farhad; Ostadimoghaddam, Hadi; Abdollahi, Majid; Hashemi, Maryam; Khabazkhoob, Mehdi

    2013-01-01

    The prevalence and determinants of myopia and hyperopia were assessed in a rural population of Iran. Population-based cross-sectional study. Using random cluster sampling, 13 of the 83 villages of Khaf County in the north east of Iran were selected. Data from 2001 people over the age of 15 years were analysed. Visual acuity measurement, non-cycloplegic refraction and eye examinations were done at the Mobile Eye Clinic. Main outcome measures were the prevalence of myopia and hyperopia, defined as spherical equivalent worse than -0.5 dioptre and +0.5 dioptre, respectively. The prevalence of myopia, hyperopia and anisometropia in the total study sample was 28% (95% confidence interval: 25.9-30.2), 19.2% (95% confidence interval: 17.3-21.1), and 11.5% (95% confidence interval: 10.0-13.1), respectively. In the over 40 population, the prevalence of myopia and hyperopia was 32.5% (95% confidence interval: 28.9-36.1) and 27.9% (95% confidence interval: 24.5-31.3), respectively. In the multiple regression model for this group, myopia strongly correlated with cataract (odds ratio = 1.98 and 95% confidence interval: 1.33-2.93), and hyperopia only correlated with age (P < 0.001). The prevalence of high myopia and high hyperopia was 1.5% and 4.6%. In the multiple regression model, anisometropia significantly correlated with age (odds ratio = 1.04) and cataract (odds ratio = 5.2) (P < 0.001). The prevalence of myopia and anisometropia was higher than that in previous studies in urban population of Iran, especially in the elderly. Cataract was the only variable that correlated with myopia and anisometropia. © 2013 The Authors. Clinical and Experimental Ophthalmology © 2013 Royal Australian and New Zealand College of Ophthalmologists.

  3. The Association Between Maternal Age and Cerebral Palsy Risk Factors.

    PubMed

    Schneider, Rilla E; Ng, Pamela; Zhang, Xun; Andersen, John; Buckley, David; Fehlings, Darcy; Kirton, Adam; Wood, Ellen; van Rensburg, Esias; Shevell, Michael I; Oskoui, Maryam

    2018-05-01

    Advanced maternal age is associated with higher frequencies of antenatal and perinatal conditions, as well as a higher risk of cerebral palsy in offspring. We explore the association between maternal age and specific cerebral palsy risk factors. Data were extracted from the Canadian Cerebral Palsy Registry. Maternal age was categorized as ≥35 years of age and less than 20 years of age at the time of birth. Chi-square and multivariate logistic regressions were performed to calculate odds ratios and their 95% confidence intervals. The final sample consisted of 1391 children with cerebral palsy, with 19% of children having mothers aged 35 or older and 4% of children having mothers below the age of 20. Univariate analyses showed that mothers aged 35 or older were more likely to have gestational diabetes (odds ratio 1.9, 95% confidence interval 1.3 to 2.8), to have a history of miscarriage (odds ratio 1.8, 95% confidence interval 1.3 to 2.4), to have undergone fertility treatments (odds ratio 2.4, 95% confidence interval 1.5 to 3.9), and to have delivered by Caesarean section (odds ratio 1.6, 95% confidence interval 1.2 to 2.2). These findings were supported by multivariate analyses. Children with mothers below the age of 20 were more likely to have a congenital malformation (odds ratio 2.4, 95% confidence interval 1.4 to 4.2), which is also supported by multivariate analysis. The risk factor profiles of children with cerebral palsy vary by maternal age. Future studies are warranted to further our understanding of the compound causal pathways leading to cerebral palsy and the observed greater prevalence of cerebral palsy with increasing maternal age. Copyright © 2018 Elsevier Inc. All rights reserved.

  4. Priorities for treatment, care and information if faced with serious illness: a comparative population-based survey in seven European countries.

    PubMed

    Higginson, Irene J; Gomes, Barbara; Calanzani, Natalia; Gao, Wei; Bausewein, Claudia; Daveson, Barbara A; Deliens, Luc; Ferreira, Pedro L; Toscani, Franco; Gysels, Marjolein; Ceulemans, Lucas; Simon, Steffen T; Cohen, Joachim; Harding, Richard

    2014-02-01

    Health-care costs are growing, with little population-based data about people's priorities for end-of-life care, to guide service development and aid discussions. We examined variations in people's priorities for treatment, care and information across seven European countries. Telephone survey of a random sample of households; we asked respondents their priorities if 'faced with a serious illness, like cancer, with limited time to live' and used multivariable logistic regressions to identify associated factors. Members of the general public aged ≥ 16 years residing in England, Flanders, Germany, Italy, the Netherlands, Portugal and Spain. In total, 9344 individuals were interviewed. Most people chose 'improve quality of life for the time they had left', ranging from 57% (95% confidence interval: 55%-60%, Italy) to 81% (95% confidence interval: 79%-83%, Spain). Only 2% (95% confidence interval: 1%-3%, England) to 6% (95% confidence interval: 4%-7%, Flanders) said extending life was most important, and 15% (95% confidence interval: 13%-17%, Spain) to 40% (95% confidence interval: 37%-43%, Italy) said quality and extension were equally important. Prioritising quality of life was associated with higher education in all countries (odds ratio = 1.3 (Flanders) to 7.9 (Italy)), experience of caregiving or bereavement (England, Germany, Portugal), prioritising pain/symptom control over having a positive attitude and preferring death in a hospice/palliative care unit. Those prioritising extending life had the highest home death preference of all groups. Health status did not affect priorities. Across all countries, extending life was prioritised by a minority, regardless of health status. Treatment and care needs to be reoriented with patient education and palliative care becoming mainstream for serious conditions such as cancer.

  5. Air pollution attributable postneonatal infant mortality in U.S. metropolitan areas: a risk assessment study

    PubMed Central

    Kaiser, Reinhard; Romieu, Isabelle; Medina, Sylvia; Schwartz, Joel; Krzyzanowski, Michal; Künzli, Nino

    2004-01-01

    Background The impact of outdoor air pollution on infant mortality has not been quantified. Methods Based on exposure-response functions from a U.S. cohort study, we assessed the attributable risk of postneonatal infant mortality in 23 U.S. metropolitan areas related to particulate matter <10 μm in diameter (PM10) as a surrogate of total air pollution. Results The estimated proportion of all-cause mortality, sudden infant death syndrome (normal birth weight infants only) and respiratory disease mortality (normal birth weight) attributable to PM10 above a chosen reference value of 12.0 μg/m3 PM10 was 6% (95% confidence interval 3–11%), 16% (95% confidence interval 9–23%) and 24% (95% confidence interval 7–44%), respectively. The expected number of infant deaths per year in the selected areas was 106 (95% confidence interval 53–185), 79 (95% confidence interval 46–111) and 15 (95% confidence interval 5–27), respectively. Approximately 75% of cases were from areas where the current levels are at or below the new U.S. PM2.5 standard of 15 μg/m3 (equivalent to 25 μg/m3 PM10). In a country where infant mortality rates and air pollution levels are relatively low, ambient air pollution as measured by particulate matter contributes to a substantial fraction of infant deaths, especially those due to sudden infant death syndrome and respiratory disease. Even if all counties complied with the new PM2.5 standard, the majority of the estimated burden would remain. Conclusion Given the inherent limitations of risk assessments, further studies are needed to support and quantify the relationship between infant mortality and air pollution. PMID:15128459
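The attributable-risk arithmetic this record relies on can be illustrated with a short sketch. The abstract does not report the exposure-response function, so the coefficient `BETA_PER_10` below is a made-up stand-in; only the 12.0 μg/m3 reference level is taken from the abstract, and `attributable_fraction`/`attributable_deaths` are hypothetical names for this illustration.

```python
import math

# Hypothetical log-relative-risk per 10 ug/m3 PM10 (NOT the study's value)
BETA_PER_10 = 0.04
REFERENCE = 12.0  # ug/m3 PM10, reference level quoted in the abstract

def attributable_fraction(pm10):
    # Relative risk at the observed level vs the reference level, then
    # the standard attributable fraction AF = (RR - 1) / RR.
    delta = max(pm10 - REFERENCE, 0.0)
    rr = math.exp(BETA_PER_10 * delta / 10.0)
    return (rr - 1.0) / rr

def attributable_deaths(pm10, deaths):
    # Expected number of deaths attributable to PM10 above the reference
    return attributable_fraction(pm10) * deaths

print(round(attributable_fraction(25.0), 3))
print(round(attributable_deaths(25.0, 500), 1))
```

With these invented inputs the sketch only shows the shape of the calculation; the study's published fractions come from its own cohort-derived exposure-response function.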

  6. Confidence intervals for population allele frequencies: the general case of sampling from a finite diploid population of any size.

    PubMed

    Fung, Tak; Keenan, Kevin

    2014-01-01

    The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥95%), a sample size of > 30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.
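The abstract does not reproduce the exact finite-population construction, but the sampling uncertainty it targets can be illustrated with a simpler, approximate alternative: a percentile bootstrap interval for the allele frequency of a diploid sample. All names and the genotype data below are hypothetical.

```python
import random

def allele_freq(genotypes):
    # genotypes: per-individual counts (0, 1 or 2) of the focal allele
    return sum(genotypes) / (2 * len(genotypes))

def bootstrap_ci(genotypes, level=0.95, n_boot=2000, seed=1):
    # Percentile bootstrap: resample individuals with replacement,
    # recompute the allele frequency, and take central quantiles.
    rng = random.Random(seed)
    n = len(genotypes)
    freqs = sorted(
        allele_freq([genotypes[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_boot)
    )
    lo = freqs[int((1 - level) / 2 * n_boot)]
    hi = freqs[int((1 + level) / 2 * n_boot) - 1]
    return lo, hi

# Hypothetical sample of 30 diploid individuals
sample = [2] * 6 + [1] * 14 + [0] * 10
print(allele_freq(sample), bootstrap_ci(sample))
```

Unlike the paper's method, this sketch ignores the finite size of the source population; it only conveys why intervals around sample allele frequencies can be wide at the n > 30 sample sizes the abstract discusses.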

  7. Bootstrapping under constraint for the assessment of group behavior in human contact networks

    NASA Astrophysics Data System (ADS)

    Tremblay, Nicolas; Barrat, Alain; Forest, Cary; Nornberg, Mark; Pinton, Jean-François; Borgnat, Pierre

    2013-11-01

    The increasing availability of time- and space-resolved data describing human activities and interactions gives insights into both static and dynamic properties of human behavior. In practice, nevertheless, real-world data sets can often be considered as only one realization of a particular event. This highlights a key issue in social network analysis: the statistical significance of estimated properties. In this context, we focus here on the assessment of quantitative features of specific subsets of nodes in empirical networks. We present a method of statistical resampling based on bootstrapping groups of nodes under constraints within the empirical network. The method enables us to define acceptance intervals for various null hypotheses concerning relevant properties of the subset of nodes under consideration in order to characterize by a statistical test its behavior as “normal” or not. We apply this method to a high-resolution data set describing the face-to-face proximity of individuals during two colocated scientific conferences. As a case study, we show how to probe whether colocating the two conferences succeeded in bringing together the two corresponding groups of scientists.
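A stripped-down sketch of the resampling idea: compare a group's statistic against an acceptance interval built from many randomly drawn groups of the same size. The only constraint preserved here is group size (the paper's method supports richer constraints), and the toy degree data are invented.

```python
import random

def acceptance_interval(node_stats, group_size, stat, level=0.95,
                        n_boot=5000, seed=7):
    # Null distribution: the statistic over uniformly random node groups
    # of the same size as the observed group (size being the only
    # constraint kept in this sketch).
    rng = random.Random(seed)
    nodes = list(node_stats)
    draws = sorted(
        stat([node_stats[v] for v in rng.sample(nodes, group_size)])
        for _ in range(n_boot)
    )
    lo = draws[int((1 - level) / 2 * n_boot)]
    hi = draws[int((1 + level) / 2 * n_boot) - 1]
    return lo, hi

mean = lambda xs: sum(xs) / len(xs)

# Toy data: a degree-like property for 100 nodes, one observed group of 10
degrees = {v: (v % 10) + 1 for v in range(100)}
group = list(range(10))
obs = mean([degrees[v] for v in group])
lo, hi = acceptance_interval(degrees, len(group), mean)
verdict = "normal" if lo <= obs <= hi else "atypical"
print(obs, (lo, hi), verdict)
```

A group whose statistic falls outside the interval would be flagged as behaving differently from a random group of the same size, which is the statistical test the abstract describes.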

  8. Locations and magnitudes of historical earthquakes in the Sierra of Ecuador (1587-1996)

    NASA Astrophysics Data System (ADS)

    Beauval, Céline; Yepes, Hugo; Bakun, William H.; Egred, José; Alvarado, Alexandra; Singaucho, Juan-Carlos

    2010-06-01

    The whole territory of Ecuador is exposed to seismic hazard. Great earthquakes can occur in the subduction zone (e.g. Esmeraldas, 1906, Mw 8.8), whereas lower magnitude but shallower and potentially more destructive earthquakes can occur in the highlands. This study focuses on the historical crustal earthquakes of the Andean Cordillera. Several large cities are located in the Interandean Valley, among them Quito, the capital (~2.5 million inhabitants). A total population of ~6 million inhabitants currently lives in the highlands, raising the seismic risk. At present, precise instrumental data for the Ecuadorian territory is not available for periods earlier than 1990 (beginning date of the revised instrumental Ecuadorian seismic catalogue); therefore historical data are of utmost importance for assessing seismic hazard. In this study, the Bakun & Wentworth method is applied in order to determine magnitudes, locations, and associated uncertainties for historical earthquakes of the Sierra over the period 1587-1976. An intensity-magnitude equation is derived from the four most reliable instrumental earthquakes (Mw between 5.3 and 7.1). Intensity data available per historical earthquake vary between 10 (Quito, 1587, Intensity >=VI) and 117 (Riobamba, 1797, Intensity >=III). The bootstrap resampling technique is coupled to the B&W method for deriving geographical confidence contours for the intensity centre depending on the data set of each earthquake, as well as confidence intervals for the magnitude. The extension of the area delineating the intensity centre location at the 67 per cent confidence level (+/-1σ) depends on the amount of intensity data, on their internal coherence, on the number of intensity degrees available, and on their spatial distribution. Special attention is dedicated to the few earthquakes described by intensities reaching IX, X and XI degrees.
Twenty-five events are studied, and nineteen new epicentral locations are obtained, yielding equivalent moment magnitudes between 5.0 and 7.6. Large earthquakes seem to be related to strike-slip faults between the North Andean Block and stable South America to the east, while moderate earthquakes (Mw <= 6) seem to be associated with thrust faults located on the western internal slopes of the Interandean Valley.
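A minimal sketch of how such bootstrap magnitude intervals can be built: resample the intensity observations with replacement and recompute the magnitude each time. The intensity-magnitude coefficients below are invented for illustration, not the equation derived in the study.

```python
import math
import random

# Generic Bakun & Wentworth-style relation M_i = C0 + C1*I_i + C2*log10(d_i);
# the coefficients here are hypothetical, not the study's calibrated values.
C0, C1, C2 = 1.0, 0.55, 1.6

def magnitude(obs):
    # obs: (intensity, epicentral distance in km) pairs; average the
    # per-observation magnitude estimates.
    return sum(C0 + C1 * I + C2 * math.log10(d) for I, d in obs) / len(obs)

def magnitude_ci(obs, level=0.68, n_boot=2000, seed=3):
    # Percentile bootstrap over intensity observations (cf. the study's
    # 67 per cent, i.e. +/-1 sigma, confidence level).
    rng = random.Random(seed)
    n = len(obs)
    ms = sorted(
        magnitude([obs[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_boot)
    )
    return (ms[int((1 - level) / 2 * n_boot)],
            ms[int((1 + level) / 2 * n_boot) - 1])

# Synthetic intensity assignments for a single historical event
obs = [(7, 10), (6, 25), (6, 40), (5, 60), (4, 120), (3, 200)]
m = magnitude(obs)
lo, hi = magnitude_ci(obs)
print(round(m, 2), (round(lo, 2), round(hi, 2)))
```

The width of such an interval shrinks with more intensity assignments and grows with their scatter, matching the abstract's point that the confidence contours depend on the amount and coherence of the intensity data.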

  9. Locations and magnitudes of historical earthquakes in the Sierra of Ecuador (1587–1996)

    USGS Publications Warehouse

    Beauval, Celine; Yepes, Hugo; Bakun, William H.; Egred, Jose; Alvarado, Alexandra; Singaucho, Juan-Carlos

    2010-01-01

    The whole territory of Ecuador is exposed to seismic hazard. Great earthquakes can occur in the subduction zone (e.g. Esmeraldas, 1906, Mw 8.8), whereas lower magnitude but shallower and potentially more destructive earthquakes can occur in the highlands. This study focuses on the historical crustal earthquakes of the Andean Cordillera. Several large cities are located in the Interandean Valley, among them Quito, the capital (∼2.5 million inhabitants). A total population of ∼6 million inhabitants currently lives in the highlands, raising the seismic risk. At present, precise instrumental data for the Ecuadorian territory is not available for periods earlier than 1990 (beginning date of the revised instrumental Ecuadorian seismic catalogue); therefore historical data are of utmost importance for assessing seismic hazard. In this study, the Bakun & Wentworth method is applied in order to determine magnitudes, locations, and associated uncertainties for historical earthquakes of the Sierra over the period 1587–1976. An intensity-magnitude equation is derived from the four most reliable instrumental earthquakes (Mw between 5.3 and 7.1). Intensity data available per historical earthquake vary between 10 (Quito, 1587, Intensity ≥VI) and 117 (Riobamba, 1797, Intensity ≥III). The bootstrap resampling technique is coupled to the B&W method for deriving geographical confidence contours for the intensity centre depending on the data set of each earthquake, as well as confidence intervals for the magnitude. The extension of the area delineating the intensity centre location at the 67 per cent confidence level (±1σ) depends on the amount of intensity data, on their internal coherence, on the number of intensity degrees available, and on their spatial distribution. Special attention is dedicated to the few earthquakes described by intensities reaching IX, X and XI degrees.
Twenty-five events are studied, and nineteen new epicentral locations are obtained, yielding equivalent moment magnitudes between 5.0 and 7.6. Large earthquakes seem to be related to strike-slip faults between the North Andean Block and stable South America to the east, while moderate earthquakes (Mw ≤ 6) seem to be associated with thrust faults located on the western internal slopes of the Interandean Valley.

  10. Estimation of αL, velocity, Kd and confidence limits from tracer injection test data

    USGS Publications Warehouse

    Broermann, James; Bassett, R.L.; Weeks, Edwin P.; Borgstrom, Mark

    1997-01-01

    Bromide and boron were used as tracers during an injection experiment conducted at an artificial recharge facility near Stanton, Texas. The Ogallala aquifer at the Stanton site represents a heterogeneous alluvial environment and provides the opportunity to report scale-dependent dispersivities at observation distances of 2 to 15 m in this setting. Values of longitudinal dispersivities are compared with other published values. Water samples were collected at selected depths both from piezometers and from fully screened observation wells at radii of 2, 5, 10 and 15 m. An exact analytical solution is used to simulate the concentration breakthrough curves and estimate longitudinal dispersivities and velocity parameters. Greater confidence can be placed on these data because the estimated parameters are error bounded using the bootstrap method. The non-conservative behavior of boron transport in clay-rich sections of the aquifer was quantified with distribution coefficients by using bromide as a conservative reference tracer.

  11. Estimation of αL, velocity, Kd, and confidence limits from tracer injection data

    USGS Publications Warehouse

    Broermann, James; Bassett, R.L.; Weeks, Edwin P.; Borgstrom, Mark

    1997-01-01

    Bromide and boron were used as tracers during an injection experiment conducted at an artificial recharge facility near Stanton, Texas. The Ogallala aquifer at the Stanton site represents a heterogeneous alluvial environment and provides the opportunity to report scale-dependent dispersivities at observation distances of 2 to 15 m in this setting. Values of longitudinal dispersivities are compared with other published values. Water samples were collected at selected depths both from piezometers and from fully screened observation wells at radii of 2, 5, 10 and 15 m. An exact analytical solution is used to simulate the concentration breakthrough curves and estimate longitudinal dispersivities and velocity parameters. Greater confidence can be placed on these data because the estimated parameters are error bounded using the bootstrap method. The non-conservative behavior of boron transport in clay-rich sections of the aquifer was quantified with distribution coefficients by using bromide as a conservative reference tracer.

  12. Teach a Confidence Interval for the Median in the First Statistics Course

    ERIC Educational Resources Information Center

    Howington, Eric B.

    2017-01-01

    Few introductory statistics courses consider statistical inference for the median. This article argues in favour of adding a confidence interval for the median to the first statistics course. Several methods suitable for introductory statistics students are identified and briefly reviewed.
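One method suitable for an introductory course is the distribution-free order-statistic (binomial) interval for the median, sketched below; the data are invented.

```python
from math import comb

def median_ci(data, level=0.95):
    # Distribution-free interval for the population median via order
    # statistics: the number of observations below the median follows
    # Binomial(n, 1/2), so P(x_(j) < median < x_(k)) is a binomial sum.
    xs = sorted(data)
    n = len(xs)
    # cumulative Binomial(n, 0.5) probabilities P(B <= k)
    cdf = [sum(comb(n, i) for i in range(k + 1)) / 2 ** n
           for k in range(n + 1)]
    # narrowest symmetric ranks j and k = n - j with coverage >= level
    for j in range(n // 2, 0, -1):
        k = n - j
        coverage = cdf[k - 1] - cdf[j - 1]
        if coverage >= level:
            return xs[j - 1], xs[k - 1], coverage
    return xs[0], xs[-1], cdf[n - 1] - cdf[0]  # widest possible interval

data = [3, 7, 8, 5, 12, 14, 21, 13, 18, 6, 9, 10]
lo, hi, cov = median_ci(data)
print((lo, hi), round(cov, 4))
```

Because the interval endpoints are observed data values, the achieved coverage is discrete and usually exceeds the nominal level, which is itself a useful teaching point.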

  13. Source apportionment of methane and nitrous oxide in California's San Joaquin Valley at CalNex 2010 via positive matrix factorization

    NASA Astrophysics Data System (ADS)

    Guha, A.; Gentner, D. R.; Weber, R. J.; Provencal, R.; Goldstein, A. H.

    2015-10-01

    Sources of methane (CH4) and nitrous oxide (N2O) were investigated using measurements from a site in southeast Bakersfield as part of the CalNex (California at the Nexus of Air Quality and Climate Change) experiment from mid-May to the end of June 2010. Typical daily minimum mixing ratios of CH4 and N2O were higher than daily minima that were simultaneously observed at a mid-oceanic background station (NOAA, Mauna Loa) by approximately 70 ppb and 0.5 ppb, respectively. Substantial enhancements of CH4 and N2O (hourly averages > 500 and > 7 ppb, respectively) were routinely observed, suggesting the presence of large regional sources. Collocated measurements of carbon monoxide (CO) and a range of volatile organic compounds (VOCs) (e.g., straight-chain and branched alkanes, cycloalkanes, chlorinated alkanes, aromatics, alcohols, isoprene, terpenes and ketones) were used with a positive matrix factorization (PMF) source apportionment method to estimate the contribution of regional sources to observed enhancements of CH4 and N2O. The PMF technique provided a "top-down" deconstruction of ambient gas-phase observations into broad source categories, yielding a seven-factor solution. We identified these emission source factors as follows: evaporative and fugitive; motor vehicles; livestock and dairy; agricultural and soil management; daytime light and temperature driven; non-vehicular urban; and nighttime terpene biogenics and anthropogenics. The dairy and livestock factor accounted for the majority of the CH4 (70-90 %) enhancements over the duration of the experiment. It was also a principal contributor to the daily enhancements of N2O (60-70 %). Agriculture and soil management accounted for ~ 20-25 % of N2O enhancements over a 24 h cycle, which is not surprising given that organic and synthetic fertilizers are known to be a major source of N2O.
The N2O attribution to the agriculture and soil management factor had a high uncertainty in the conducted bootstrapping analysis. This is most likely due to an asynchronous pattern of soil-mediated N2O emissions from fertilizer usage and collocated biogenic emissions from crops from the surrounding agricultural operations that is difficult to apportion statistically when using PMF. The evaporative/fugitive source profile, which resembled a mix of petroleum operation and non-tailpipe evaporative gasoline sources, did not include a PMF-resolved CH4 contribution that was significant (< 2 %) compared to the uncertainty in the livestock-associated CH4 emissions. The uncertainty of the CH4 estimates in this source factor, derived from the bootstrapping analysis, is consistent with the ~ 3 % contribution of fugitive oil and gas emissions to the statewide CH4 inventory. The vehicle emission source factor broadly matched VOC profiles of on-road exhaust sources. This source factor had no statistically significant contribution to the N2O signals (confidence interval of 3 % of livestock N2O enhancements) and negligible CH4 (confidence interval of 4 % of livestock CH4 enhancements) in the presence of a dominant dairy and livestock factor. The CalNex PMF study provides a measurement-based assessment of the state CH4 and N2O inventories for the southern San Joaquin Valley (SJV). The state inventory attributes ~ 18 % of total N2O emissions to the transportation sector. Our PMF analysis directly contradicts the state inventory and demonstrates there were no discernible N2O emissions from the transportation sector in the southern SJV region.

  14. Increased calcium supplementation is associated with morbidity and mortality in the infant postoperative cardiac patient.

    PubMed

    Dyke, Peter C; Yates, Andrew R; Cua, Clifford L; Hoffman, Timothy M; Hayes, John; Feltes, Timothy F; Springer, Michelle A; Taeed, Roozbeh

    2007-05-01

    The purpose of this study was to assess the association of calcium replacement therapy with morbidity and mortality in infants after cardiac surgery involving cardiopulmonary bypass. Retrospective chart review. The cardiac intensive care unit at a tertiary care children's hospital. Infants undergoing cardiac surgery involving cardiopulmonary bypass between October 2002 and August 2004. None. Total calcium replacement (mg/kg calcium chloride given) for the first 72 postoperative hours was measured. Morbidity and mortality data were collected. The total volume of blood products given during the first 72 hrs was recorded. Infants with confirmed chromosomal deletions at the 22q11 locus were noted. Correlation and logistic regression analyses were used to generate odds ratios and 95% confidence intervals, with p < .05 being significant. One hundred seventy-one infants met inclusion criteria. Age was 4 +/- 3 months and weight was 4.9 +/- 1.7 kg at surgery. Six infants had deletions of chromosome 22q11. Infants who weighed less required more calcium replacement (r = -.28, p < .001). Greater calcium replacement correlated with a longer intensive care unit length of stay (r = .27, p < .001) and a longer total hospital length of stay (r = .23, p = .002). Greater calcium replacement was significantly associated with morbidity (liver dysfunction [odds ratio, 3.9; confidence interval, 2.1-7.3; p < .001], central nervous system complication [odds ratio, 1.8; confidence interval, 1.1-3.0; p = .02], infection [odds ratio, 1.5; confidence interval, 1.0-2.2; p < .04], extracorporeal membrane oxygenation [odds ratio, 5.0; confidence interval, 2.3-10.6; p < .001]) and mortality (odds ratio, 5.8; confidence interval, 5.8-5.9; p < .001). Greater calcium replacement was not associated with renal insufficiency (odds ratio, 1.5; confidence interval, 0.9-2.3; p = .07). 
Infants with >1 sd above the mean of total calcium replacement received on average fewer blood products than the total study population. Greater calcium replacement is associated with increasing morbidity and mortality. Further investigation of the etiology and therapy of hypocalcemia in this population is warranted.

  15. WITHDRAWN: Amnioinfusion for meconium-stained liquor in labour.

    PubMed

    Hofmeyr, G Justus

    2009-01-21

    Amnioinfusion aims to prevent or relieve umbilical cord compression during labour by infusing a solution into the uterine cavity. It is also thought to dilute meconium when present in the amniotic fluid and so reduce the risk of meconium aspiration. However, it may be that the mechanism of effect is that it corrects oligohydramnios (reduced amniotic fluid), for which thick meconium staining is a marker. The objective of this review was to assess the effects of amnioinfusion for meconium-stained liquor on perinatal outcome. The Cochrane Pregnancy and Childbirth Group trials register (October 2001) and the Cochrane Controlled Trials Register (Issue 3, 2001) were searched. Randomised trials comparing amnioinfusion with no amnioinfusion for women in labour with moderate or thick meconium-staining of the amniotic fluid. Eligibility and trial quality were assessed by one reviewer. Twelve studies, most involving small numbers of participants, were included. Under standard perinatal surveillance, amnioinfusion was associated with a reduction in the following: heavy meconium staining of the liquor (relative risk 0.03, 95% confidence interval 0.01 to 0.15); variable fetal heart rate deceleration (relative risk 0.65, 95% confidence interval 0.49 to 0.88); and reduced caesarean section overall (relative risk 0.82, 95% confidence interval 0.69 to 1.97). No perinatal deaths were reported. Under limited perinatal surveillance, amnioinfusion was associated with a reduction in the following: meconium aspiration syndrome (relative risk 0.24, 95% confidence interval 0.12 to 0.48); neonatal hypoxic ischaemic encephalopathy (relative risk 0.07, 95% confidence interval 0.01 to 0.56) and neonatal ventilation or intensive care unit admission (relative risk 0.56, 95% confidence interval 0.39 to 0.79); there was a trend towards reduced perinatal mortality (relative risk 0.34, 95% confidence interval 0.11 to 1.06). 
Amnioinfusion is associated with improvements in perinatal outcome, particularly in settings where facilities for perinatal surveillance are limited. The trials reviewed are too small to address the possibility of rare but serious maternal adverse effects of amnioinfusion.

  16. Reliability of clinical findings and magnetic resonance imaging for the diagnosis of chondromalacia patellae.

    PubMed

    Pihlajamäki, Harri K; Kuikka, Paavo-Ilari; Leppänen, Vesa-Veikko; Kiuru, Martti J; Mattila, Ville M

    2010-04-01

    This diagnostic study was performed to determine the correlation between anterior knee pain and chondromalacia patellae and to define the reliability of magnetic resonance imaging for the diagnosis of chondromalacia patellae. Fifty-six young adults (median age, 19.5 years) with anterior knee pain had magnetic resonance imaging of the knee followed by arthroscopy. The patellar chondral lesions identified by magnetic resonance imaging were compared with the arthroscopic findings. Arthroscopy confirmed the presence of chondromalacia patellae in twenty-five (45%) of the fifty-six knees, a synovial plica in twenty-five knees, a meniscal tear in four knees, and a femorotibial chondral lesion in four knees; normal anatomy was seen in six knees. No association was found between the severity of the chondromalacia patellae seen at arthroscopy and the clinical symptoms of anterior knee pain syndrome (p = 0.83). The positive predictive value for the ability of 1.0-T magnetic resonance imaging to detect chondromalacia patellae was 75% (95% confidence interval, 53% to 89%), the negative predictive value was 72% (95% confidence interval, 56% to 84%), the sensitivity was 60% (95% confidence interval, 41% to 77%), the specificity was 84% (95% confidence interval, 67% to 93%), and the diagnostic accuracy was 73% (95% confidence interval, 60% to 83%). The sensitivity was 13% (95% confidence interval, 2% to 49%) for grade-I lesions and 83% (95% confidence interval, 59% to 94%) for grade-II, III, or IV lesions. Chondromalacia patellae cannot be diagnosed on the basis of symptoms or with current physical examination methods. The present study demonstrated no correlation between the severity of chondromalacia patellae and the clinical symptoms of anterior knee pain syndrome. Thus, symptoms of anterior knee pain syndrome should not be used as an indication for knee arthroscopy. 
The sensitivity of 1.0-T magnetic resonance imaging was low for grade-I lesions but considerably higher for more severe (grade-II, III, or IV) lesions. Magnetic resonance imaging may be considered an accurate diagnostic tool for identification of more severe cases of chondromalacia patellae.

  17. Asymptomatic Intradialytic Supraventricular Arrhythmias and Adverse Outcomes in Patients on Hemodialysis

    PubMed Central

    Pérez de Prado, Armando; López-Gómez, Juan M.; Quiroga, Borja; Goicoechea, Marian; García-Prieto, Ana; Torres, Esther; Reque, Javier; Luño, José

    2016-01-01

    Background and objectives Supraventricular arrhythmias are associated with high morbidity and mortality. Nevertheless, this condition has received little attention in patients on hemodialysis. The objective of this study was to analyze the incidence of intradialysis supraventricular arrhythmia and its long-term prognostic value. Design, setting, participants, & measurements We designed an observational and prospective study in a cohort of patients on hemodialysis with a 10-year follow-up period. All patients were recruited for study participation and were not recruited for clinical indications. The study population comprised 77 patients (42 men and 35 women; mean age = 58±15 years) with sinus rhythm monitored using a Holter electrocardiogram over six consecutive hemodialysis sessions at recruitment. Results Hypertension was present in 68.8% of patients, and diabetes was present in 29.9% of patients. Supraventricular arrhythmias were recorded in 38 patients (49.3%); all of these were short, asymptomatic, and self-limiting. Age (hazard ratio, 1.04 per year; 95% confidence interval, 1.00 to 1.08) and right atrial enlargement (hazard ratio, 4.29; 95% confidence interval, 1.30 to 14.09) were associated with supraventricular arrhythmia in the multivariate analysis. During a median follow-up of 40 months, 57 patients died, and cardiovascular disease was the main cause of death (52.6%). The variables associated with all-cause mortality in the Cox model were age (hazard ratio, 1.04 per year; 95% confidence interval, 1.00 to 1.08), C-reactive protein (hazard ratio, 1.04 per 1 mg/L; 95% confidence interval, 1.00 to 1.08), and supraventricular arrhythmia (hazard ratio, 3.21; 95% confidence interval, 1.29 to 7.96).
Patients with supraventricular arrhythmia also had a higher risk of nonfatal cardiovascular events (hazard ratio, 4.32; 95% confidence interval, 2.11 to 8.83) and symptomatic atrial fibrillation during follow-up (hazard ratio, 17.19; 95% confidence interval, 2.03 to 145.15). Conclusions The incidence of intradialysis supraventricular arrhythmia was high in our hemodialysis study population. Supraventricular arrhythmias were short, asymptomatic, and self-limiting, and although silent, these arrhythmias were independently associated with mortality and cardiovascular events. PMID:27697781

  18. Asymptomatic Intradialytic Supraventricular Arrhythmias and Adverse Outcomes in Patients on Hemodialysis.

    PubMed

    Verde, Eduardo; Pérez de Prado, Armando; López-Gómez, Juan M; Quiroga, Borja; Goicoechea, Marian; García-Prieto, Ana; Torres, Esther; Reque, Javier; Luño, José

    2016-12-07

    Supraventricular arrhythmias are associated with high morbidity and mortality. Nevertheless, this condition has received little attention in patients on hemodialysis. The objective of this study was to analyze the incidence of intradialysis supraventricular arrhythmia and its long-term prognostic value. We designed an observational and prospective study in a cohort of patients on hemodialysis with a 10-year follow-up period. All patients were recruited for study participation and were not recruited for clinical indications. The study population comprised 77 patients (42 men and 35 women; mean age =58±15 years old) with sinus rhythm monitored using a Holter electrocardiogram over six consecutive hemodialysis sessions at recruitment. Hypertension was present in 68.8% of patients, and diabetes was present in 29.9% of patients. Supraventricular arrhythmias were recorded in 38 patients (49.3%); all of these were short, asymptomatic, and self-limiting. Age (hazard ratio, 1.04 per year; 95% confidence interval, 1.00 to 1.08) and right atrial enlargement (hazard ratio, 4.29; 95% confidence interval, 1.30 to 14.09) were associated with supraventricular arrhythmia in the multivariate analysis. During a median follow-up of 40 months, 57 patients died, and cardiovascular disease was the main cause of death (52.6%). The variables associated with all-cause mortality in the Cox model were age (hazard ratio, 1.04 per year; 95% confidence interval, 1.00 to 1.08), C-reactive protein (hazard ratio, 1.04 per 1 mg/L; 95% confidence interval, 1.00 to 1.08), and supraventricular arrhythmia (hazard ratio, 3.21; 95% confidence interval, 1.29 to 7.96). Patients with supraventricular arrhythmia also had a higher risk of nonfatal cardiovascular events (hazard ratio, 4.32; 95% confidence interval, 2.11 to 8.83) and symptomatic atrial fibrillation during follow-up (hazard ratio, 17.19; 95% confidence interval, 2.03 to 145.15). 
The incidence of intradialysis supraventricular arrhythmia was high in our hemodialysis study population. Supraventricular arrhythmias were short, asymptomatic, and self-limiting, and although silent, these arrhythmias were independently associated with mortality and cardiovascular events. Copyright © 2016 by the American Society of Nephrology.

  19. Long-term Results of an Obesity Program in an Ethnically Diverse Pediatric Population

    PubMed Central

    Nowicka, Paulina; Shaw, Melissa; Yu, Sunkyung; Dziura, James; Chavent, Georgia; O'Malley, Grace; Serrecchia, John B.; Tamborlane, William V.; Caprio, Sonia

    2011-01-01

    OBJECTIVE: To determine if beneficial effects of a weight-management program could be sustained for up to 24 months in a randomized trial in an ethnically diverse obese population. PATIENTS AND METHODS: There were 209 obese children (BMI > 95th percentile), ages 8 to 16, of mixed ethnic backgrounds, randomly assigned to the intensive lifestyle intervention or clinic control group. The control group received counseling every 6 months, and the intervention group received a family-based program, which included exercise, nutrition, and behavior modification. Lifestyle intervention sessions occurred twice weekly for the first 6 months, then twice monthly for the second 6 months; for the last 12 months there was no active intervention. There were 174 children who completed the 12 months of the randomized trial. Follow-up data were available for 76 of these children at 24 months. There were no statistical differences in dropout rates among ethnic groups or in any other aspects. RESULTS: Treatment effect was sustained at 24 months in the intervention versus control group for BMI z score (−0.16 [95% confidence interval: −0.23 to −0.09]), BMI (−2.8 kg/m2 [95% confidence interval: −4.0 to −1.6 kg/m2]), percent body fat (−4.2% [95% confidence interval: −6.4% to −2.0%]), total body fat mass (−5.8 kg [95% confidence interval: −9.1 kg to −2.6 kg]), total cholesterol (−13.0 mg/dL [95% confidence interval: −21.7 mg/dL to −4.2 mg/dL]), low-density lipoprotein cholesterol (−10.4 mg/dL [95% confidence interval: −18.3 mg/dL to −2.4 mg/dL]), and homeostasis model assessment of insulin resistance (−2.05 [95% confidence interval: −2.48 to −1.75]). CONCLUSIONS: This study, unprecedented because of the high degree of obesity and ethnically diverse backgrounds of children, reveals that benefits of an intensive lifestyle program can be sustained 12 months after completing the active intervention phase. PMID:21300674

  20. Psychosocial and nonclinical factors predicting hospital utilization in patients of a chronic disease management program: a prospective observational study.

    PubMed

    Tran, Mark W; Weiland, Tracey J; Phillips, Georgina A

    2015-01-01

    Psychosocial factors such as marital status (odds ratio, 3.52; 95% confidence interval, 1.43-8.69; P = .006) and nonclinical factors such as outpatient nonattendances (odds ratio, 2.52; 95% confidence interval, 1.22-5.23; P = .013) and referrals made (odds ratio, 1.20; 95% confidence interval, 1.06-1.35; P = .003) predict hospital utilization for patients in a chronic disease management program. Along with optimizing patients' clinical condition according to prescribed medical guidelines and supporting patient self-management, addressing psychosocial and nonclinical issues is important in attempting to avoid hospital utilization for people with chronic illnesses.

  1. A threshold method for immunological correlates of protection

    PubMed Central

    2013-01-01

    Background Immunological correlates of protection are biological markers such as disease-specific antibodies which correlate with protection against disease and which are measurable with immunological assays. It is common in vaccine research and in setting immunization policy to rely on threshold values for the correlate where the accepted threshold differentiates between individuals who are considered to be protected against disease and those who are susceptible. Examples where thresholds are used include development of a new generation 13-valent pneumococcal conjugate vaccine which was required in clinical trials to meet accepted thresholds for the older 7-valent vaccine, and public health decision making on vaccination policy based on long-term maintenance of protective thresholds for Hepatitis A, rubella, measles, Japanese encephalitis and others. Despite widespread use of such thresholds in vaccine policy and research, few statistical approaches have been formally developed which specifically incorporate a threshold parameter in order to estimate the value of the protective threshold from data. Methods We propose a 3-parameter statistical model called the a:b model which incorporates parameters for a threshold and constant but different infection probabilities below and above the threshold estimated using profile likelihood or least squares methods. Evaluation of the estimated threshold can be performed by a significance test for the existence of a threshold using a modified likelihood ratio test which follows a chi-squared distribution with 3 degrees of freedom, and confidence intervals for the threshold can be obtained by bootstrapping. The model also permits assessment of relative risk of infection in patients achieving the threshold or not. Goodness-of-fit of the a:b model may be assessed using the Hosmer-Lemeshow approach. The model is applied to 15 datasets from published clinical trials on pertussis, respiratory syncytial virus and varicella. 
Results Highly significant thresholds with p-values less than 0.01 were found for 13 of the 15 datasets. Considerable variability was seen in the widths of confidence intervals. Relative risks indicated around 70% or better protection in 11 datasets and relevance of the estimated threshold to imply strong protection. Goodness-of-fit was generally acceptable. Conclusions The a:b model offers a formal statistical method of estimation of thresholds differentiating susceptible from protected individuals which has previously depended on putative statements based on visual inspection of data. PMID:23448322
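    The estimation scheme this record describes (constant infection probabilities below and above a threshold, the threshold located by profile likelihood, and a bootstrap confidence interval around it) can be sketched as follows. This is a minimal illustration on simulated assay data, not the authors' implementation; the function names, the simulated titers, and the true threshold of 5 are all hypothetical.

    ```python
    import numpy as np

    def fit_ab_threshold(titers, infected):
        """Profile-likelihood fit of an a:b-style model: one constant
        infection probability below the threshold, another above it."""
        titers = np.asarray(titers, float)
        infected = np.asarray(infected, int)
        order = np.argsort(titers)
        t_sorted, y_sorted = titers[order], infected[order]
        best_ll, best_thr = -np.inf, t_sorted[0]
        # candidate thresholds: midpoints between consecutive sorted titers
        for i in range(1, len(t_sorted)):
            thr = 0.5 * (t_sorted[i - 1] + t_sorted[i])
            ll = 0.0
            for grp in (y_sorted[:i], y_sorted[i:]):
                k, n = grp.sum(), len(grp)
                p = k / n
                if 0 < p < 1:  # p in {0, 1} contributes log-likelihood 0
                    ll += k * np.log(p) + (n - k) * np.log(1 - p)
            if ll > best_ll:
                best_ll, best_thr = ll, thr
        return best_thr

    def threshold_ci(titers, infected, n_boot=200, alpha=0.05, seed=0):
        """Percentile-bootstrap CI for the threshold, resampling
        (titer, infection) pairs with replacement."""
        rng = np.random.default_rng(seed)
        titers = np.asarray(titers, float)
        infected = np.asarray(infected, int)
        n = len(titers)
        boot = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)
            boot.append(fit_ab_threshold(titers[idx], infected[idx]))
        return tuple(np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)]))

    # simulated data: infection probability 0.8 below a titer of 5, 0.1 above
    rng = np.random.default_rng(1)
    titers = rng.uniform(0, 10, 200)
    infected = (rng.random(200) < np.where(titers < 5.0, 0.8, 0.1)).astype(int)
    t_hat = fit_ab_threshold(titers, infected)
    ci_lo, ci_hi = threshold_ci(titers, infected)
    ```

    The relative risk the abstract mentions falls out of the same fit: it is the ratio of the estimated infection probabilities on the two sides of the fitted threshold.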

  2. Internet-Based Cognitive Behavioral Therapy for Insomnia: A Health Economic Evaluation

    PubMed Central

    Thiart, Hanne; Ebert, David Daniel; Lehr, Dirk; Nobis, Stephanie; Buntrock, Claudia; Berking, Matthias; Smit, Filip; Riper, Heleen

    2016-01-01

    Study Objectives: Lost productivity caused by insomnia is a common and costly problem for employers. Although evidence for the efficacy of Internet-based cognitive behavioral therapy for insomnia (iCBT-I) already exists, little is known about its economic effects. This study aims to evaluate the cost-effectiveness and cost-benefit of providing iCBT-I to symptomatic employees from the employer's perspective. Methods: School teachers (N = 128) with clinically significant insomnia symptoms and work-related rumination were randomized to guided iCBT-I or a waitlist control group, both with access to treatment as usual. Economic data were collected at baseline and 6-month follow-up. We conducted (1) a cost-effectiveness analysis with treatment response (Reliable Change [decline of 5.01 points] and Insomnia Severity Index < 8 at 6-month follow-up) as the outcome and (2) a cost-benefit analysis. Because both analyses were performed from the employer's perspective, we focused specifically on absenteeism and presenteeism costs. Statistical uncertainty was estimated using bootstrapping. Results: Assuming intervention costs of €200 ($245), cost-effectiveness analyses showed that at a willingness-to-pay of €0 for each positive treatment response, there is an 87% probability that the intervention is more cost-effective than treatment as usual alone. Cost-benefit analyses led to a net benefit of €418 (95% confidence interval: −593.03 to 1,488.70) ($512) per participant and a return on investment of 208% (95% confidence interval: −296.52 to 744.35). The reduction in costs was mainly driven by the effects of the intervention on presenteeism and to a lesser degree by reduced absenteeism. Conclusions: Focusing on sleep improvement using iCBT-I may be a cost-effective strategy in occupational health care. Clinical Trials Registration: Title: Online Recovery Training for Better Sleep in Teachers with High Psychological Strain. 
German Clinical Trial Register (DRKS), URL: https://drks-neu.uniklinik-freiburg.de/drks_web/navigate.do?navigationId=trial.HTML&TRIAL_ID=DRKS00004700. Identifier: DRKS00004700. Commentary: A commentary on this article appears in this issue on page 1767. Citation: Thiart H, Ebert DD, Lehr D, Nobis S, Buntrock C, Berking M, Smit F, Riper H. Internet-based cognitive behavioral therapy for insomnia: a health economic evaluation. SLEEP 2016;39(10):1769–1778. PMID:27450686

  3. The cost-effectiveness of providing antenatal lifestyle advice for women who are overweight or obese: the LIMIT randomised trial.

    PubMed

    Dodd, Jodie M; Ahmed, Sharmina; Karnon, Jonathan; Umberger, Wendy; Deussen, Andrea R; Tran, Thach; Grivell, Rosalie M; Crowther, Caroline A; Turnbull, Deborah; McPhee, Andrew J; Wittert, Gary; Owens, Julie A; Robinson, Jeffrey S

    2015-01-01

    Overweight and obesity during pregnancy is common, although robust evidence about the economic implications of providing an antenatal dietary and lifestyle intervention for women who are overweight or obese is lacking. We conducted a health economic evaluation in parallel with the LIMIT randomised trial. Women with a singleton pregnancy, between 10+0 and 20+0 weeks' gestation, and BMI ≥25 kg/m² were randomised to Lifestyle Advice (a comprehensive antenatal dietary and lifestyle intervention) or Standard Care. The economic evaluation took the perspective of the health care system and its patients, and compared costs encountered from the additional use of resources from time of randomisation until six weeks postpartum. Increments in health outcomes for both the woman and infant were considered in the cost-effectiveness analysis. Mean costs and effects in the treatment groups allocated at randomisation were compared, and incremental cost-effectiveness ratios (ICERs) and 95% confidence intervals calculated. Bootstrapping was used to confirm the estimated confidence intervals, and to generate acceptability curves representing the probability of the intervention being cost-effective at alternative monetary equivalent values for the outcomes of avoiding high infant birth weight and respiratory distress syndrome. Analyses utilised intention-to-treat principles. Overall, the increase in mean costs associated with providing the intervention was offset by savings associated with improved immediate neonatal outcomes, rendering the intervention cost neutral (Lifestyle Advice Group $11,261.19 ± $14,573.97 versus Standard Care Group $11,306.70 ± $14,562.02; p = 0.094). Using a monetary value of $20,000 as a threshold value for avoiding an additional infant with birth weight above 4 kg, the probability that the antenatal intervention is cost-effective is 0.85, which increases to 0.95 when the threshold monetary value increases to $45,000. 
Providing an antenatal dietary and lifestyle intervention for pregnant women who are overweight or obese is not associated with increased costs or cost savings, but is associated with a high probability of cost effectiveness. Ongoing participant follow-up into childhood is required to determine the medium to long-term impact of the observed, short-term endpoints, to more accurately estimate the value of the intervention on risk of obesity, and associated costs and health outcomes. Australian and New Zealand Clinical Trials Registry (ACTRN12607000161426).
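    A bootstrapped acceptability curve of the kind this record uses can be sketched generically: resample each trial arm, compute the incremental cost and effect per replicate, and report, for each willingness-to-pay value, the share of replicates with positive incremental net monetary benefit. The arm-level data below are entirely hypothetical, not the LIMIT trial's.

    ```python
    import numpy as np

    def ceac(cost_tx, cost_ct, eff_tx, eff_ct, wtp_values, n_boot=1000, seed=0):
        """Bootstrap cost-effectiveness acceptability curve: the share of
        bootstrap replicates with positive incremental net monetary
        benefit (wtp * d_effect - d_cost) at each willingness-to-pay."""
        rng = np.random.default_rng(seed)
        nt, nc = len(cost_tx), len(cost_ct)
        d_cost = np.empty(n_boot)
        d_eff = np.empty(n_boot)
        for b in range(n_boot):
            it = rng.integers(0, nt, nt)   # resample treatment arm
            ic = rng.integers(0, nc, nc)   # resample control arm
            d_cost[b] = cost_tx[it].mean() - cost_ct[ic].mean()
            d_eff[b] = eff_tx[it].mean() - eff_ct[ic].mean()
        return np.array([np.mean(w * d_eff - d_cost > 0) for w in wtp_values])

    # hypothetical arms: intervention costs ~$100 more per patient but lifts
    # the rate of the desirable outcome from 30% to 50%
    rng = np.random.default_rng(4)
    cost_ct = rng.normal(1000, 200, 100)
    cost_tx = rng.normal(1100, 200, 100)
    eff_ct = rng.binomial(1, 0.3, 100).astype(float)
    eff_tx = rng.binomial(1, 0.5, 100).astype(float)
    wtp = np.array([0, 500, 1000, 1500, 2000])
    curve = ceac(cost_tx, cost_ct, eff_tx, eff_ct, wtp)
    ```

    Plotting `curve` against `wtp` gives the acceptability curve: the probability of cost-effectiveness rises as the monetary value placed on avoiding the adverse outcome increases, which is the behaviour reported in the abstract (0.85 at $20,000 rising to 0.95 at $45,000).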

  4. Methods for calculating confidence and credible intervals for the residual between-study variance in random effects meta-regression models

    PubMed Central

    2014-01-01

    Background Meta-regression is becoming increasingly used to model study level covariate effects. However this type of statistical analysis presents many difficulties and challenges. Here two methods for calculating confidence intervals for the magnitude of the residual between-study variance in random effects meta-regression models are developed. A further suggestion for calculating credible intervals using informative prior distributions for the residual between-study variance is presented. Methods Two recently proposed and, under the assumptions of the random effects model, exact methods for constructing confidence intervals for the between-study variance in random effects meta-analyses are extended to the meta-regression setting. The use of Generalised Cochran heterogeneity statistics is extended to the meta-regression setting and a Newton-Raphson procedure is developed to implement the Q profile method for meta-analysis and meta-regression. WinBUGS is used to implement informative priors for the residual between-study variance in the context of Bayesian meta-regressions. Results Results are obtained for two contrasting examples, where the first example involves a binary covariate and the second involves a continuous covariate. Intervals for the residual between-study variance are wide for both examples. Conclusions Statistical methods, and R computer software, are available to compute exact confidence intervals for the residual between-study variance under the random effects model for meta-regression. These frequentist methods are almost as easily implemented as their established counterparts for meta-analysis. Bayesian meta-regressions are also easily performed by analysts who are comfortable using WinBUGS. Estimates of the residual between-study variance in random effects meta-regressions should be routinely reported and accompanied by some measure of their uncertainty. Confidence and/or credible intervals are well-suited to this purpose. PMID:25196829
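    The Q-profile construction this record extends to meta-regression can be illustrated in its simpler meta-analysis form (no covariates): the generalised Q statistic, evaluated as a function of the candidate between-study variance, is inverted against chi-squared quantiles. This sketch uses bisection (`brentq`) rather than the Newton-Raphson procedure the authors develop, and the effect sizes and variances are made up for illustration.

    ```python
    import numpy as np
    from scipy.stats import chi2
    from scipy.optimize import brentq

    def q_stat(tau2, y, v):
        """Generalised Cochran Q at a candidate between-study variance."""
        w = 1.0 / (v + tau2)
        mu = np.sum(w * y) / np.sum(w)
        return np.sum(w * (y - mu) ** 2)

    def q_profile_ci(y, v, alpha=0.05):
        """Q-profile confidence interval for tau^2 in a random-effects
        meta-analysis: the set of tau^2 values whose Q statistic lies
        between the alpha/2 and 1-alpha/2 quantiles of chi2(k-1)."""
        y, v = np.asarray(y, float), np.asarray(v, float)
        k = len(y)
        lo_q = chi2.ppf(alpha / 2, k - 1)
        hi_q = chi2.ppf(1 - alpha / 2, k - 1)
        bracket = 100 * (np.max(v) + np.var(y))  # Q is tiny out here

        def solve(target):
            # Q(tau2) decreases in tau2; truncate at 0 when no root exists
            if q_stat(0.0, y, v) <= target:
                return 0.0
            return brentq(lambda t: q_stat(t, y, v) - target, 0.0, bracket)

        return solve(hi_q), solve(lo_q)

    # hypothetical study effects (e.g. log odds ratios) and their variances
    y = np.array([0.10, 0.35, -0.05, 0.62, 0.20, 0.45])
    v = np.array([0.04, 0.09, 0.05, 0.12, 0.06, 0.08])
    lo, hi = q_profile_ci(y, v)
    ```

    In the meta-regression setting of the paper, `mu` is replaced by the fitted values of a weighted regression on study-level covariates and the degrees of freedom drop accordingly; the inversion logic is the same.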

  5. Tests of Independence for Ordinal Data Using Bootstrap.

    ERIC Educational Resources Information Center

    Chan, Wai; Yung, Yiu-Fai; Bentler, Peter M.; Tang, Man-Lai

    1998-01-01

    Two bootstrap tests are proposed to test the independence hypothesis in a two-way cross table. Monte Carlo studies are used to compare the traditional asymptotic test with these bootstrap methods, and the bootstrap methods are found superior in two ways: control of Type I error and statistical power. (SLD)
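    One standard way to build a bootstrap test of independence for a two-way table is to resample cell counts from the independence model (the product of the observed marginal proportions) and compare the observed chi-square statistic against that null distribution. The abstract does not specify the authors' exact resampling scheme, so this is a generic sketch with a made-up 3x3 ordinal table.

    ```python
    import numpy as np

    def chi_square_stat(table):
        """Pearson chi-square statistic for a two-way contingency table."""
        table = np.asarray(table, float)
        row = table.sum(axis=1, keepdims=True)
        col = table.sum(axis=0, keepdims=True)
        expected = row @ col / table.sum()
        with np.errstate(divide="ignore", invalid="ignore"):
            contrib = np.where(expected > 0,
                               (table - expected) ** 2 / expected, 0.0)
        return contrib.sum()

    def bootstrap_independence_test(table, n_boot=500, seed=0):
        """Bootstrap p-value: resample n cell counts from the product of
        the marginal proportions (the independence null) and count how
        often the resampled statistic reaches the observed one."""
        rng = np.random.default_rng(seed)
        table = np.asarray(table, float)
        n = int(table.sum())
        p0 = (table.sum(axis=1, keepdims=True) / n) \
             @ (table.sum(axis=0, keepdims=True) / n)
        obs = chi_square_stat(table)
        count = 0
        for _ in range(n_boot):
            boot = rng.multinomial(n, p0.ravel()).reshape(table.shape)
            if chi_square_stat(boot) >= obs:
                count += 1
        return obs, (count + 1) / (n_boot + 1)

    # hypothetical ordinal table with a clear positive association
    table = [[30, 10, 5], [10, 20, 10], [5, 10, 30]]
    stat, p = bootstrap_independence_test(table)
    ```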

  6. Persistent opioid use following Cesarean delivery: patterns and predictors among opioid naïve women

    PubMed Central

    Bateman, Brian T.; Franklin, Jessica M.; Bykov, Katsiaryna; Avorn, Jerry; Shrank, William H.; Brennan, Troyen A.; Landon, Joan E.; Rathmell, James P.; Huybrechts, Krista F.; Fischer, Michael A.; Choudhry, Niteesh K.

    2016-01-01

    Background The incidence of opioid-related death in women has increased five-fold over the past decade. For many women, their initial opioid exposure will occur in the setting of routine medical care. Approximately 1 in 3 deliveries in the U.S. is by Cesarean and opioids are commonly prescribed for post-surgical pain management. Objective The objective of this study was to determine the risk that opioid naïve women prescribed opioids after Cesarean delivery will subsequently become consistent prescription opioid users in the year following delivery, and to identify predictors for this behavior. Study Design We identified women in a database of commercial insurance beneficiaries who underwent Cesarean delivery and who were opioid-naïve in the year prior to delivery. To identify persistent users of opioids, we used trajectory models, which group together patients with similar patterns of medication filling during follow-up, based on patterns of opioid dispensing in the year following Cesarean delivery. We then constructed a multivariable logistic regression model to identify independent risk factors for membership in the persistent user group. Results 285 of 80,127 (0.36%, 95% confidence interval 0.32 to 0.40), opioid-naïve women became persistent opioid users (identified using trajectory models based on monthly patterns of opioid dispensing) following Cesarean delivery. Demographics and baseline comorbidity predicted such use with moderate discrimination (c statistic = 0.73). 
Significant predictors included a history of cocaine abuse (risk 7.41%; adjusted odds ratio 6.11, 95% confidence interval 1.03 to 36.31) and other illicit substance abuse (2.36%; adjusted odds ratio 2.78, 95% confidence interval 1.12 to 6.91), tobacco use (1.45%; adjusted odds ratio 3.04, 95% confidence interval 2.03 to 4.55), back pain (0.69%; adjusted odds ratio 1.74, 95% confidence interval 1.33 to 2.29), migraines (0.91%; adjusted odds ratio 2.14, 95% confidence interval 1.58 to 2.90), antidepressant use (1.34%; adjusted odds ratio 3.19, 95% confidence interval 2.41 to 4.23) and benzodiazepine use (1.99%; adjusted odds ratio 3.72, 95% confidence interval 2.64 to 5.26) in the year prior to Cesarean delivery. Conclusions A very small proportion of opioid-naïve women (approximately 1 in 300) become persistent prescription opioid users following Cesarean delivery. Pre-existing psychiatric comorbidity, certain pain conditions, and substance use/abuse conditions identifiable at the time of initial opioid prescribing were predictors of persistent use. PMID:26996986

  7. Emergency department patient satisfaction survey in Imam Reza Hospital, Tabriz, Iran

    PubMed Central

    2011-01-01

    Introduction Patient satisfaction is an important indicator of the quality of care and service delivery in the emergency department (ED). The objective of this study was to evaluate patient satisfaction with the Emergency Department of Imam Reza Hospital in Tabriz, Iran. Methods This study was carried out for 1 week during all shifts. Trained researchers used the standard Press Ganey questionnaire. Patients were asked to complete the questionnaire prior to discharge. The study questionnaire included 30 questions based on a Likert scale. Descriptive and analytical statistics were used throughout data analysis in a number of ways using SPSS version 13. Results Five hundred patients who attended our ED were included in this study. The highest satisfaction rates were observed in terms of physicians' communication with patients (82.5%), security guards' courtesy (78.3%) and nurses' communication with patients (78%). The average waiting time for the first visit to a physician was 24 min 15 s. The overall satisfaction rate was dependent on the mean waiting time. The mean waiting time for a low rate of satisfaction was 47 min 11 s with a confidence interval of (19.31, 74.51), and for a very good level of satisfaction it was 14 min 57 s with a (10.58, 18.57) confidence interval. Approximately 63% of the patients rated their general satisfaction with the emergency setting as good or very good. On the whole, the patient satisfaction rate at the lowest level was 7.7% with a confidence interval of (5.1, 10.4), and at the low level it was 5.8% with a confidence interval of (3.7, 7.9). The rate of satisfaction for the mediocre level was 23.3% with a confidence interval of (19.1, 27.5); for the high level of satisfaction it was 28.3% with a confidence interval of (22.9, 32.8), and for the very high level of satisfaction it was 32.9% with a confidence interval of (28.4, 37.4). 
Conclusion The study findings indicated the need for evidence-based interventions in emergency care services in areas such as medical care, nursing care, courtesy of staff, physical comfort and waiting time. Efforts should focus on shortening waiting intervals and improving patients' perceptions about waiting in the ED, and also improving the overall cleanliness of the emergency room. PMID:21407998

  8. One- and two-stage Arrhenius models for pharmaceutical shelf life prediction.

    PubMed

    Fan, Zhewen; Zhang, Lanju

    2015-01-01

    One of the most challenging aspects of pharmaceutical development is the demonstration and estimation of chemical stability. It is imperative that pharmaceutical products be stable for two or more years. Long-term stability studies are required to support such a shelf-life claim at registration. However, during drug development, to facilitate formulation and dosage form selection, an accelerated stability study with stressed storage conditions is preferred to quickly obtain a good prediction of shelf life under ambient storage conditions. Such a prediction typically uses the Arrhenius equation, which describes the relationship between degradation rate and temperature (and humidity). Existing methods usually rely on the assumption of normality of the errors. In addition, shelf-life projection is usually based on the confidence band of a regression line. However, the coverage probability of a method is often overlooked or under-reported. In this paper, we introduce two nonparametric bootstrap procedures for shelf-life estimation based on accelerated stability testing, and compare them with a one-stage nonlinear Arrhenius prediction model. Our simulation results demonstrate that the one-stage nonlinear Arrhenius method has significantly lower coverage than the nominal levels. Our bootstrap methods gave better coverage and led to a shelf-life prediction closer to that based on long-term stability data.
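    A two-stage Arrhenius shelf-life prediction with a bootstrap interval can be sketched as follows: stage 1 fits a zero-order degradation rate at each stressed temperature, stage 2 regresses ln(rate) on 1/T and extrapolates to 25 °C, and a residual bootstrap propagates the uncertainty. This is a generic illustration, not the paper's procedures; the temperatures, rates, and noise level are invented.

    ```python
    import numpy as np

    def shelf_life(times, potency_by_temp, t_ref=298.15, spec_drop=5.0):
        """Two-stage Arrhenius prediction: stage 1 fits a zero-order
        degradation rate (slope of potency vs. time) per temperature;
        stage 2 fits ln(rate) = intercept + slope * (1/T) and
        extrapolates the rate to the reference temperature."""
        rates = {T: -np.polyfit(times, y, 1)[0]
                 for T, y in potency_by_temp.items()}
        temps = np.array(sorted(rates))
        ln_k = np.log([rates[T] for T in temps])
        slope, intercept = np.polyfit(1.0 / temps, ln_k, 1)
        k_ref = np.exp(intercept + slope / t_ref)
        return spec_drop / k_ref  # months until potency falls by spec_drop %

    def shelf_life_ci(times, potency_by_temp, n_boot=300, seed=0):
        """Residual-bootstrap 95% CI: resample within-temperature
        residuals, rebuild the potency curves, re-run the full fit."""
        rng = np.random.default_rng(seed)
        fits = {T: np.polyfit(times, y, 1) for T, y in potency_by_temp.items()}
        boot = []
        for _ in range(n_boot):
            fake = {}
            for T, y in potency_by_temp.items():
                fitted = np.polyval(fits[T], times)
                resid = np.asarray(y) - fitted
                fake[T] = fitted + rng.choice(resid, size=len(resid),
                                              replace=True)
            boot.append(shelf_life(times, fake))
        return tuple(np.percentile(boot, [2.5, 97.5]))

    # simulated accelerated-stability data (hypothetical rates, %/month)
    rng = np.random.default_rng(2)
    times = np.array([0.0, 1.0, 2.0, 3.0, 6.0])
    true_rate = {313.0: 0.8, 323.0: 2.1, 333.0: 5.1}  # 40, 50, 60 degrees C
    data = {T: 100.0 - k * times + rng.normal(0.0, 0.3, times.size)
            for T, k in true_rate.items()}
    est = shelf_life(times, data)
    lo, hi = shelf_life_ci(times, data)
    ```

    The width of `(lo, hi)` relative to a naive confidence band is the kind of coverage comparison the abstract reports.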

  9. Resampling methods in Microsoft Excel® for estimating reference intervals

    PubMed Central

    Theodorsson, Elvar

    2015-01-01

    Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes natural functions, which lend themselves well to this purpose, including recommended interpolation procedures for estimating the 2.5th and 97.5th percentiles.
    The purpose of this paper is to introduce the reader to resampling estimation techniques in general, and to using Microsoft Excel® 2010 for estimating reference intervals in particular.
    Parametric methods are preferable to resampling methods when the distribution of observations in the reference sample is Gaussian or can be transformed to that distribution, even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian, or when the number of reference individuals and corresponding samples is on the order of 40. At least 500-1000 random samples with replacement should be taken from the results of measurement of the reference samples. PMID:26527366
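    The procedure the record recommends (nonparametric 2.5th/97.5th-percentile reference limits, with at least 500-1000 resamples to quantify their uncertainty) translates directly from the Excel workflow into a few lines. This is a sketch on a made-up skewed sample, not the paper's spreadsheet.

    ```python
    import numpy as np

    def reference_interval(x, n_boot=1000, seed=0):
        """Nonparametric reference limits (2.5th and 97.5th percentiles)
        with percentile-bootstrap 90% CIs around each limit."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x, float)
        lower, upper = np.percentile(x, [2.5, 97.5])
        # >= 1000 resamples with replacement, as the record recommends
        boot = np.array([
            np.percentile(rng.choice(x, size=x.size, replace=True),
                          [2.5, 97.5])
            for _ in range(n_boot)
        ])
        lower_ci = tuple(np.percentile(boot[:, 0], [5, 95]))
        upper_ci = tuple(np.percentile(boot[:, 1], [5, 95]))
        return (lower, upper), lower_ci, upper_ci

    # skewed (log-normal) results for 120 hypothetical reference individuals
    rng = np.random.default_rng(3)
    sample = rng.lognormal(mean=1.0, sigma=0.4, size=120)
    (lower, upper), lower_ci, upper_ci = reference_interval(sample)
    ```

    Because the sample is skewed, a Gaussian mean ± 1.96 SD interval would misplace both limits; the percentile approach does not assume a distributional shape.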

  11. Comparison of WBRT alone, SRS alone, and their combination in the treatment of one or more brain metastases: Review and meta-analysis.

    PubMed

    Khan, Muhammad; Lin, Jie; Liao, Guixiang; Li, Rong; Wang, Baiyao; Xie, Guozhu; Zheng, Jieling; Yuan, Yawei

    2017-07-01

    Whole brain radiotherapy has been a standard treatment for brain metastases. Stereotactic radiosurgery provides more focal and aggressive radiation and normal tissue sparing, but worse local and distant control. This meta-analysis was performed to assess and compare the effectiveness of whole brain radiotherapy alone, stereotactic radiosurgery alone, and their combination in the treatment of brain metastases based on randomized controlled trial studies. Electronic databases (PubMed, MEDLINE, Embase, and Cochrane Library) were searched to identify randomized controlled trial studies that compared treatment outcomes of whole brain radiotherapy and stereotactic radiosurgery. This meta-analysis was performed using the Review Manager (RevMan) software (version 5.2) provided by the Cochrane Collaboration. The data used were hazard ratios with 95% confidence intervals calculated for time-to-event data extracted from survival curves and local tumor control rate curves. Odds ratios with 95% confidence intervals were calculated for dichotomous data, while mean differences with 95% confidence intervals were calculated for continuous data. Fixed-effects or random-effects models were adopted according to heterogeneity. Five studies (n = 763) meeting the inclusion criteria were included in this meta-analysis. All the included studies were randomized controlled trials. The sample size ranged from 27 to 331. In total, 202 patients (26%) received whole brain radiotherapy alone, 196 (26%) received stereotactic radiosurgery alone, and 365 (48%) were in the whole brain radiotherapy plus stereotactic radiosurgery group. 
    No significant survival benefit was observed for any treatment approach; the hazard ratio was 1.19 (95% confidence interval: 0.96-1.43, p = 0.12) based on three randomized controlled trials for whole brain radiotherapy only compared to whole brain radiotherapy plus stereotactic radiosurgery, and 1.03 (95% confidence interval: 0.82-1.29, p = 0.81) for stereotactic radiosurgery only compared to the combined approach. Local control was best achieved when whole brain radiotherapy was combined with stereotactic radiosurgery. Hazard ratios of 2.05 (95% confidence interval: 1.36-3.09, p = 0.0006) and 1.84 (95% confidence interval: 1.26-2.70, p = 0.002) were obtained from comparing whole brain radiotherapy only and stereotactic radiosurgery only to whole brain radiotherapy + stereotactic radiosurgery, respectively. There was no difference in adverse events between treatments; the odds ratio was 1.16 (95% confidence interval: 0.77-1.76, p = 0.48) for whole brain radiotherapy + stereotactic radiosurgery versus whole brain radiotherapy only and 0.92 (95% confidence interval: 0.59-1.42, p = 0.71) for whole brain radiotherapy + stereotactic radiosurgery versus stereotactic radiosurgery only. Adding stereotactic radiosurgery to whole brain radiotherapy provides better local control as compared to whole brain radiotherapy only and stereotactic radiosurgery only, with no difference in radiation-related toxicities.
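    The inverse-variance arithmetic behind hazard-ratio pooling of this kind is worth making explicit: each standard error is recovered from the reported 95% CI on the log scale, and the pooled estimate is the precision-weighted mean. The sketch below pools the two local-control hazard ratios quoted above purely to illustrate the arithmetic; since they share a comparator arm, a real meta-analysis would not combine them this way.

    ```python
    import numpy as np

    def pool_hazard_ratios(hrs, ci_lo, ci_hi):
        """Fixed-effect inverse-variance pooling of hazard ratios, with
        each standard error recovered from the reported 95% CI:
        se = (ln(hi) - ln(lo)) / (2 * 1.96)."""
        z = 1.959964  # 97.5th percentile of the standard normal
        y = np.log(hrs)
        se = (np.log(ci_hi) - np.log(ci_lo)) / (2 * z)
        w = 1.0 / se ** 2
        est = np.sum(w * y) / np.sum(w)
        se_pooled = 1.0 / np.sqrt(np.sum(w))
        return (np.exp(est),
                np.exp(est - z * se_pooled),
                np.exp(est + z * se_pooled))

    # the two local-control hazard ratios reported in this record
    hr, lo, hi = pool_hazard_ratios([2.05, 1.84], [1.36, 1.26], [3.09, 2.70])
    ```

    The pooled interval is narrower than either input interval, which is the whole point of combining studies; RevMan performs the same computation internally for its fixed-effect analyses.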

  12. Maternal and neonatal outcomes after bariatric surgery; a systematic review and meta-analysis: do the benefits outweigh the risks?

    PubMed

    Kwong, Wilson; Tomlinson, George; Feig, Denice S

    2018-02-15

    Obesity during pregnancy is associated with a number of adverse obstetric outcomes that include gestational diabetes mellitus, macrosomia, and preeclampsia. Increasing evidence shows that bariatric surgery may decrease the risk of these outcomes. Our aim was to evaluate the benefits and risks of bariatric surgery in obese women according to obstetric outcomes. We performed a systematic literature search using MEDLINE, Embase, Cochrane, Web of Science, and PubMed from inception up to December 12, 2016. Studies were included if they evaluated patients who underwent bariatric surgery, reported subsequent pregnancy outcomes, and compared these outcomes with a control group. Two reviewers extracted study outcomes independently, and risk of bias was assessed with the use of the Newcastle-Ottawa Quality Assessment Scale. Pooled odds ratios for each outcome were estimated with the Dersimonian and Laird random effects model. After a review of 2616 abstracts, 20 cohort studies and approximately 2.8 million subjects (8364 of whom had bariatric surgery) were included in the meta-analysis. 
    In our primary analysis, patients who underwent bariatric surgery showed reduced rates of gestational diabetes mellitus (odds ratio, 0.20; 95% confidence interval, 0.11-0.37; number needed to benefit, 5), large-for-gestational-age infants (odds ratio, 0.31; 95% confidence interval, 0.17-0.59; number needed to benefit, 6), gestational hypertension (odds ratio, 0.38; 95% confidence interval, 0.19-0.76; number needed to benefit, 11), all hypertensive disorders (odds ratio, 0.38; 95% confidence interval, 0.27-0.53; number needed to benefit, 8), postpartum hemorrhage (odds ratio, 0.32; 95% confidence interval, 0.08-1.37; number needed to benefit, 21), and caesarean delivery rates (odds ratio, 0.50; 95% confidence interval, 0.38-0.67; number needed to benefit, 9); however, this group of patients showed an increase in small-for-gestational-age infants (odds ratio, 2.16; 95% confidence interval, 1.34-3.48; number needed to harm, 21), intrauterine growth restriction (odds ratio, 2.16; 95% confidence interval, 1.34-3.48; number needed to harm, 66), and preterm deliveries (odds ratio, 1.35; 95% confidence interval, 1.02-1.79; number needed to harm, 35) when compared with control subjects who were matched for presurgery body mass index. There were no differences in rates of preeclampsia, neonatal intensive care unit admissions, stillbirths, malformations, and neonatal death. Malabsorptive surgeries resulted in a greater increase in small-for-gestational-age infants (P=.0466) and a greater decrease in large-for-gestational-age infants (P<.0001) compared with restrictive surgeries. There were no differences in outcomes when we used administrative databases vs clinical charts. Although bariatric surgery is associated with a reduction in the risk of several adverse obstetric outcomes, there is a potential for an increased risk of other important outcomes that should be considered when bariatric surgery is discussed with reproductive-age women. Copyright © 2018 Elsevier Inc. 
All rights reserved.

  13. Monthly water quality forecasting and uncertainty assessment via bootstrapped wavelet neural networks under missing data for Harbin, China.

    PubMed

    Wang, Yi; Zheng, Tong; Zhao, Ying; Jiang, Jiping; Wang, Yuanyuan; Guo, Liang; Wang, Peng

    2013-12-01

    In this paper, a bootstrapped wavelet neural network (BWNN) was developed for predicting monthly ammonia nitrogen (NH₄⁺-N) and dissolved oxygen (DO) in the Harbin region, northeast China. The Morlet wavelet basis function (WBF) was employed as the nonlinear activation function of a traditional three-layer artificial neural network (ANN) structure. Prediction intervals (PI) were constructed according to the calculated uncertainties from the model structure and data noise. The performance of the BWNN model was also compared with that of four other models: traditional ANN, WNN, bootstrapped ANN, and an autoregressive integrated moving average model. The results showed that BWNN could handle the severely fluctuating and non-seasonal time series data of water quality, and it produced better performance than the other four models. The uncertainty from data noise was smaller than that from the model structure for NH₄⁺-N; conversely, the uncertainty from data noise was larger for the DO series. In addition, total uncertainties were largest in the low-flow period due to complicated processes during the freeze-up period of the Songhua River. Further, a data missing-refilling scheme was designed, and BWNNs performed better for structural data missing (SD) than for incidental data missing (ID). For both ID and SD, the temporal method was satisfactory for filling the NH₄⁺-N series, whereas spatial imputation was suitable for the DO series. This filling BWNN forecasting method was applied to other areas suffering "real" data missing, and the results demonstrated its efficiency. Thus, the methods introduced here will help managers to obtain informed decisions.
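    The decomposition this record uses, prediction intervals built from model-structure uncertainty (spread of an ensemble of bootstrap refits) plus data-noise uncertainty (residual variance), can be sketched with a simple linear trend model standing in for the wavelet network. Everything below is a hypothetical stand-in, not the BWNN.

    ```python
    import numpy as np

    def bootstrap_prediction_interval(x, y, x_new, n_boot=300, seed=0):
        """Bootstrap-ensemble prediction interval: model uncertainty from
        the spread of refitted models on resampled data, noise
        uncertainty from the residual variance of the full fit."""
        rng = np.random.default_rng(seed)
        x, y = np.asarray(x, float), np.asarray(y, float)
        preds = np.empty((n_boot, len(x_new)))
        for b in range(n_boot):
            idx = rng.integers(0, x.size, x.size)
            preds[b] = np.polyval(np.polyfit(x[idx], y[idx], 1), x_new)
        model_var = preds.var(axis=0)                 # structure uncertainty
        resid = y - np.polyval(np.polyfit(x, y, 1), x)
        noise_var = np.mean(resid ** 2)               # data-noise uncertainty
        center = preds.mean(axis=0)
        half = 1.96 * np.sqrt(model_var + noise_var)
        return center, center - half, center + half

    # noisy linear series standing in for a water-quality record
    rng = np.random.default_rng(5)
    x = np.linspace(0.0, 10.0, 80)
    y = 0.5 * x + rng.normal(0.0, 0.2, x.size)
    x_new = np.array([2.0, 5.0, 8.0])
    center, lo, hi = bootstrap_prediction_interval(x, y, x_new)
    ```

    In the paper's setting the refitted model is the wavelet network rather than a line, and the two variance components are reported separately, which is how the authors can say which source dominates for each series.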

  14. Uncertainty in positive matrix factorization solutions for PAHs in surface sediments of the Yangtze River Estuary in different seasons.

    PubMed

    Liu, Ruimin; Men, Cong; Yu, Wenwen; Xu, Fei; Wang, Qingrui; Shen, Zhenyao

    2018-01-01

    To examine the variabilities of source contributions in the Yangtze River Estuary (YRE), uncertainty analysis based on positive matrix factorization (PMF) was applied to the source apportionment of the 16 priority PAHs in 120 surface sediment samples from four seasons. Based on the signal-to-noise ratios, the PAHs categorized as "Bad" were excluded from the bootstrap estimation. Next, the spatial variability of residuals was applied to determine which species with non-normal curves should be excluded. The median values from the bootstrapped solutions were chosen as the best estimate of the true factor contributions, and the intervals from the 5th to the 95th percentile represent the variability in each sample factor contribution. Based on the results, the median factor contributions of wood grass combustion and coke plant emissions were highly correlated with the variability (R² = 0.6797-0.9937) in every season. Meanwhile, the factor of coal and gasoline combustion had large variability with lower R² values in every season, especially in summer (0.4784) and winter (0.2785). The coefficient of variation (CV) values based on the Bootstrap (BS) simulations were applied to indicate the uncertainties of PAHs in every factor of each season. Acy, NaP and BgP always showed higher CV values, which suggested higher uncertainties in the BS simulations, and the PAH with the lowest concentration among all PAHs usually became the species with higher uncertainties. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. WASP (Write a Scientific Paper) using Excel - 6: Standard error and confidence interval.

    PubMed

    Grech, Victor

    2018-03-01

    The calculation of descriptive statistics includes the calculation of standard error and confidence interval, an inevitable component of data analysis in inferential statistics. This paper provides pointers as to how to do this in Microsoft Excel™. Copyright © 2018 Elsevier B.V. All rights reserved.
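    The quantities this record walks through in Excel, the standard error of the mean and the t-based confidence interval, reduce to a few lines in any language. A sketch of the same computation (the pieces Excel's AVERAGE, STDEV.S/SQRT(COUNT), and CONFIDENCE.T combine), on a small made-up sample:

    ```python
    import numpy as np
    from scipy import stats

    def mean_se_ci(x, conf=0.95):
        """Sample mean, its standard error, and the t-based confidence
        interval mean +/- t(n-1) * s / sqrt(n)."""
        x = np.asarray(x, float)
        n = x.size
        m = x.mean()
        se = x.std(ddof=1) / np.sqrt(n)          # standard error of the mean
        half = stats.t.ppf(0.5 + conf / 2.0, n - 1) * se
        return m, se, (m - half, m + half)

    # hypothetical replicate measurements
    m, se, (lo, hi) = mean_se_ci([4.9, 5.1, 5.3, 4.7, 5.0, 5.2, 4.8, 5.4])
    ```

    For small n the t quantile matters: with 8 observations the multiplier is about 2.36 rather than 1.96, so a z-based interval would be noticeably too narrow.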

  16. Robust Confidence Interval for a Ratio of Standard Deviations

    ERIC Educational Resources Information Center

    Bonett, Douglas G.

    2006-01-01

    Comparing variability of test scores across alternate forms, test conditions, or subpopulations is a fundamental problem in psychometrics. A confidence interval for a ratio of standard deviations is proposed that performs as well as the classic method with normal distributions and performs dramatically better with nonnormal distributions. A simple…

  17. The microcomputer scientific software series 2: general linear model--regression.

    Treesearch

    Harold M. Rauscher

    1983-01-01

    The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...

  18. Toward Using Confidence Intervals to Compare Correlations

    ERIC Educational Resources Information Center

    Zou, Guang Yong

    2007-01-01

    Confidence intervals are widely accepted as a preferred way to present study results. They encompass significance tests and provide an estimate of the magnitude of the effect. However, comparisons of correlations still rely heavily on significance testing. The persistence of this practice is caused primarily by the lack of simple yet accurate…

  19. Women seeking treatment for advanced pelvic organ prolapse have decreased body image and quality of life.

    PubMed

    Jelovsek, J Eric; Barber, Matthew D

    2006-05-01

    Women who seek treatment for pelvic organ prolapse strive for an improvement in quality of life. Body image has been shown to be an important component of differences in quality of life. To date, there are no data on body image in patients with advanced pelvic organ prolapse. Our objective was to compare body image and quality of life in women with advanced pelvic organ prolapse with normal controls. We used a case-control study design. Cases were defined as subjects who presented to a tertiary urogynecology clinic with advanced pelvic organ prolapse (stage 3 or 4). Controls were defined as subjects who presented to a tertiary care gynecology or women's health clinic for an annual visit with normal pelvic floor support (stage 0 or 1) and without urinary incontinence. All patients completed a valid and reliable body image scale and a generalized (Short Form Health Survey) and condition-specific (Pelvic Floor Distress Inventory-20) quality-of-life scale. Linear and logistic regression analyses were performed to adjust for possible confounding variables. Forty-seven case and 51 control subjects were enrolled. After controlling for age, race, parity, previous hysterectomy, and medical comorbidities, subjects with advanced pelvic organ prolapse were more likely to feel self-conscious (adjusted odds ratio 4.7; 95% confidence interval 1.4 to 18, P = .02), less likely to feel physically attractive (adjusted odds ratio 11; 95% confidence interval 2.9 to 51, P < .001), less likely to feel feminine (adjusted odds ratio 4.0; 95% confidence interval 1.2 to 15, P = .03), and less likely to feel sexually attractive (adjusted odds ratio 4.6; 95% confidence interval 1.4 to 17, P = .02) than normal controls. The groups were similar in their feeling of dissatisfaction with appearance when dressed, difficulty looking at themselves naked, avoiding people because of appearance, and overall dissatisfaction with their body. 
Subjects with advanced pelvic organ prolapse suffered significantly lower quality of life on the physical scale of the SF-12 (mean 42; 95% confidence interval 39 to 45 versus mean 50; 95% confidence interval 47 to 53, P < .009). However, no differences between groups were noted on the mental scale of the SF-12 (mean 51; 95% confidence interval 50 to 54 versus mean 50; 95% confidence interval 47 to 52, P = .56). Additionally, subjects with advanced pelvic organ prolapse scored significantly worse on the prolapse, urinary, and colorectal scales and overall summary score of Pelvic Floor Distress Inventory-20 than normal controls (mean summary score 104; 95% confidence interval 90 to 118 versus mean 29; 95% confidence interval 16 to 43, P < .0001), indicating a decrease in condition-specific quality of life. Worsening body image correlated with lower quality of life on both the physical and mental scales of the SF-12 as well as the prolapse, urinary, and colorectal scales and overall summary score of Pelvic Floor Distress Inventory-20 in subjects with advanced pelvic organ prolapse. Women seeking treatment for advanced pelvic organ prolapse have decreased body image and overall quality of life. Body image may be a key determinant for quality of life in patients with advanced prolapse and may be an important outcome measure for treatment evaluation in clinical trials.

  20. Exercise during pregnancy in normal-weight women and risk of preterm birth: a systematic review and meta-analysis of randomized controlled trials.

    PubMed

    Di Mascio, Daniele; Magro-Malosso, Elena Rita; Saccone, Gabriele; Marhefka, Gregary D; Berghella, Vincenzo

    2016-11-01

    Preterm birth is the major cause of perinatal mortality in the United States. In the past, pregnant women have been recommended to not exercise because of presumed risks of preterm birth. Physical activity has been theoretically related to preterm birth because it increases the release of catecholamines, especially norepinephrine, which might stimulate myometrial activity. Conversely, exercise may reduce the risk of preterm birth by other mechanisms such as decreased oxidative stress or improved placenta vascularization. Therefore, the safety of exercise regarding preterm birth and its effects on gestational age at delivery remain controversial. The objective of the study was to evaluate the effects of exercise during pregnancy on the risk of preterm birth. MEDLINE, EMBASE, Web of Sciences, Scopus, ClinicalTrial.gov, OVID, and Cochrane Library were searched from the inception of each database to April 2016. Selection criteria included only randomized clinical trials of pregnant women randomized before 23 weeks to an aerobic exercise regimen or not. Types of participants included women of normal weight with uncomplicated, singleton pregnancies without any obstetric contraindication to physical activity. The summary measures were reported as relative risk or as mean difference with 95% confidence intervals. The primary outcome was the incidence of preterm birth <37 weeks. Of the 2059 women included in the meta-analysis, 1022 (49.6%) were randomized to the exercise group and 1037 (50.4%) to the control group. Aerobic exercise lasted about 35-90 minutes 3-4 times per week. Women who were randomized to aerobic exercise had a similar incidence of preterm birth of <37 weeks (4.5% vs 4.4%; relative risk, 1.01, 95% confidence interval, 0.68-1.50) and a similar mean gestational age at delivery (mean difference, 0.05 week, 95% confidence interval, -0.07 to 0.17) compared with controls. 
Women in the exercise group had a significantly higher incidence of vaginal delivery (73.6% vs 67.5%; relative risk, 1.09, 95% confidence interval, 1.04-1.15) and a significantly lower incidence of cesarean delivery (17.9% vs 22%; relative risk, 0.82, 95% confidence interval, 0.69-0.97) compared with controls. The incidence of operative vaginal delivery (12.9% vs 16.5%; relative risk, 0.78, 95% confidence interval, 0.61-1.01) was similar in both groups. Women in the exercise group had a significantly lower incidence of gestational diabetes mellitus (2.9% vs 5.6%; relative risk, 0.51, 95% confidence interval, 0.31-0.82) and a significantly lower incidence of hypertensive disorders (1.0% vs 5.6%; relative risk, 0.21, 95% confidence interval, 0.09-0.45) compared with controls. No differences between the exercise group and controls were found in low birthweight (5.2% vs 4.7%; relative risk, 1.11, 95% confidence interval, 0.72-1.73) or mean birthweight (mean difference, -10.46 g, 95% confidence interval, -47.10 to 26.21). Aerobic exercise for 35-90 minutes 3-4 times per week during pregnancy can be safely performed by normal-weight women with singleton, uncomplicated gestations, because it is not associated with an increased risk of preterm birth or with a reduction in mean gestational age at delivery. Exercise was associated with a significantly higher incidence of vaginal delivery, a significantly lower incidence of cesarean delivery, and significantly lower incidences of gestational diabetes mellitus and hypertensive disorders, and therefore should be encouraged. Copyright © 2016. Published by Elsevier Inc.
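
    Relative risks with 95% confidence intervals like those reported above are conventionally computed on the log scale (the Katz log method). A minimal sketch — the counts below are invented for illustration and are not the trial data:

```python
import math

def relative_risk_ci(a, n1, c, n2, z=1.96):
    """Relative risk (a/n1)/(c/n2) with a 95% CI on the log scale:
    exp(log RR +/- z * sqrt(1/a - 1/n1 + 1/c - 1/n2))."""
    rr = (a / n1) / (c / n2)
    se_log = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Hypothetical counts: 46 events among 1022 in one group,
# 46 among 1037 in the other (illustrative only).
rr, lo, hi = relative_risk_ci(46, 1022, 46, 1037)
print(f"RR={rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

    When the 95% interval spans 1.0, as here, the difference in risk is not statistically significant — the same reading applied to the preterm-birth result above.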

  1. Does Bootstrap Procedure Provide Biased Estimates? An Empirical Examination for a Case of Multiple Regression.

    ERIC Educational Resources Information Center

    Fan, Xitao

    This paper empirically and systematically assessed the performance of bootstrap resampling procedure as it was applied to a regression model. Parameter estimates from Monte Carlo experiments (repeated sampling from population) and bootstrap experiments (repeated resampling from one original bootstrap sample) were generated and compared. Sample…

  2. Using the Descriptive Bootstrap to Evaluate Result Replicability (Because Statistical Significance Doesn't)

    ERIC Educational Resources Information Center

    Spinella, Sarah

    2011-01-01

    As result replicability is essential to science and difficult to achieve through external replicability, the present paper notes the insufficiency of null hypothesis statistical significance testing (NHSST) and explains the bootstrap as a plausible alternative, with a heuristic example to illustrate the bootstrap method. The bootstrap relies on…

  3. Rapid and accurate taxonomic classification of insect (class Insecta) cytochrome c oxidase subunit 1 (COI) DNA barcode sequences using a naïve Bayesian classifier

    PubMed Central

    Porter, Teresita M; Gibson, Joel F; Shokralla, Shadi; Baird, Donald J; Golding, G Brian; Hajibabaei, Mehrdad

    2014-01-01

    Current methods to identify unknown insect (class Insecta) cytochrome c oxidase (COI barcode) sequences often rely on thresholds of distances that can be difficult to define, sequence similarity cut-offs, or monophyly. Some of the most commonly used metagenomic classification methods do not provide a measure of confidence for the taxonomic assignments they provide. The aim of this study was to use a naïve Bayesian classifier (Wang et al. Applied and Environmental Microbiology, 2007; 73: 5261) to automate taxonomic assignments for large batches of insect COI sequences such as data obtained from high-throughput environmental sequencing. This method provides rank-flexible taxonomic assignments with an associated bootstrap support value, and it is faster than the blast-based methods commonly used in environmental sequence surveys. We have developed and rigorously tested the performance of three different training sets using leave-one-out cross-validation, two field data sets, and targeted testing of Lepidoptera, Diptera and Mantodea sequences obtained from the Barcode of Life Data system. We found that type I error rates, incorrect taxonomic assignments with a high bootstrap support, were already relatively low but could be lowered further by ensuring that all query taxa are actually present in the reference database. Choosing bootstrap support cut-offs according to query length and summarizing taxonomic assignments to more inclusive ranks can also help to reduce error while retaining the maximum number of assignments. Additionally, we highlight gaps in the taxonomic and geographic representation of insects in public sequence databases that will require further work by taxonomists to improve the quality of assignments generated using any method.

  4. From genus to phylum: large-subunit and internal transcribed spacer rRNA operon regions show similar classification accuracies influenced by database composition.

    PubMed

    Porras-Alfaro, Andrea; Liu, Kuan-Liang; Kuske, Cheryl R; Xie, Gary

    2014-02-01

    We compared the classification accuracy of two sections of the fungal internal transcribed spacer (ITS) region, individually and combined, and the 5' section (about 600 bp) of the large-subunit rRNA (LSU), using a naive Bayesian classifier and BLASTN. A hand-curated ITS-LSU training set of 1,091 sequences and a larger training set of 8,967 ITS region sequences were used. Of the factors evaluated, database composition and quality had the largest effect on classification accuracy, followed by fragment size and use of a bootstrap cutoff to improve classification confidence. The naive Bayesian classifier and BLASTN gave similar results at higher taxonomic levels, but the classifier was faster and more accurate at the genus level when a bootstrap cutoff was used. All of the ITS and LSU sections performed well (>97.7% accuracy) at higher taxonomic ranks from kingdom to family, and differences between them were small at the genus level (within 0.66 to 1.23%). When full-length sequence sections were used, the LSU outperformed the ITS1 and ITS2 fragments at the genus level, but the ITS1 and ITS2 showed higher accuracy when smaller fragment sizes of the same length and a 50% bootstrap cutoff were used. In a comparison using the larger ITS training set, ITS1 and ITS2 showed very similar classification accuracy for fragments between 100 and 200 bp. Collectively, the results show that any of the ITS or LSU sections we tested provided comparable classification accuracy to the genus level and underscore the need for larger and more diverse classification training sets.

  5. From Genus to Phylum: Large-Subunit and Internal Transcribed Spacer rRNA Operon Regions Show Similar Classification Accuracies Influenced by Database Composition

    PubMed Central

    Liu, Kuan-Liang; Kuske, Cheryl R.

    2014-01-01

    We compared the classification accuracy of two sections of the fungal internal transcribed spacer (ITS) region, individually and combined, and the 5′ section (about 600 bp) of the large-subunit rRNA (LSU), using a naive Bayesian classifier and BLASTN. A hand-curated ITS-LSU training set of 1,091 sequences and a larger training set of 8,967 ITS region sequences were used. Of the factors evaluated, database composition and quality had the largest effect on classification accuracy, followed by fragment size and use of a bootstrap cutoff to improve classification confidence. The naive Bayesian classifier and BLASTN gave similar results at higher taxonomic levels, but the classifier was faster and more accurate at the genus level when a bootstrap cutoff was used. All of the ITS and LSU sections performed well (>97.7% accuracy) at higher taxonomic ranks from kingdom to family, and differences between them were small at the genus level (within 0.66 to 1.23%). When full-length sequence sections were used, the LSU outperformed the ITS1 and ITS2 fragments at the genus level, but the ITS1 and ITS2 showed higher accuracy when smaller fragment sizes of the same length and a 50% bootstrap cutoff were used. In a comparison using the larger ITS training set, ITS1 and ITS2 showed very similar classification accuracy for fragments between 100 and 200 bp. Collectively, the results show that any of the ITS or LSU sections we tested provided comparable classification accuracy to the genus level and underscore the need for larger and more diverse classification training sets. PMID:24242255

  6. A simple method for assessing occupational exposure via the one-way random effects model.

    PubMed

    Krishnamoorthy, K; Mathew, Thomas; Peng, Jie

    2016-11-01

    A one-way random effects model is postulated for the log-transformed shift-long personal exposure measurements, where the random effect in the model represents an effect due to the worker. Simple closed-form confidence intervals are proposed for the relevant parameters of interest using the method of variance estimates recovery (MOVER). The performance of the confidence bounds is evaluated and compared with those based on the generalized confidence interval approach. Comparison studies indicate that the proposed MOVER confidence bounds are better than the generalized confidence bounds for the overall mean exposure and an upper percentile of the exposure distribution. The proposed methods are illustrated using a few examples involving industrial hygiene data.
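
    The MOVER idea — recovering variance estimates from the separate confidence limits of each parameter and combining them — can be sketched for a difference of two parameters (a hedged illustration of Zou's general recipe, not the paper's exposure-specific formulas; the function name and inputs are invented for the example):

```python
import math

def mover_difference(est1, ci1, est2, ci2):
    """MOVER limits for est1 - est2, built from each parameter's
    own confidence interval (l, u): the distances from the point
    estimates to the relevant limits stand in for variance terms."""
    l1, u1 = ci1
    l2, u2 = ci2
    d = est1 - est2
    lower = d - math.sqrt((est1 - l1) ** 2 + (u2 - est2) ** 2)
    upper = d + math.sqrt((u1 - est1) ** 2 + (est2 - l2) ** 2)
    return lower, upper

lo, hi = mover_difference(5.0, (4.0, 6.0), 3.0, (2.0, 4.0))
print(f"CI for the difference: ({lo:.3f}, {hi:.3f})")
```

    The appeal of the approach, as the abstract notes, is that it needs only closed-form intervals for the individual parameters, yet it can respect asymmetry in those intervals.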

  7. Performance of Bootstrapping Approaches To Model Test Statistics and Parameter Standard Error Estimation in Structural Equation Modeling.

    ERIC Educational Resources Information Center

    Nevitt, Jonathan; Hancock, Gregory R.

    2001-01-01

    Evaluated the bootstrap method under varying conditions of nonnormality, sample size, model specification, and number of bootstrap samples drawn from the resampling space. Results for the bootstrap suggest the resampling-based method may be conservative in its control over model rejections, thus having an impact on the statistical power associated…

  8. Nonparametric bootstrap analysis with applications to demographic effects in demand functions.

    PubMed

    Gozalo, P L

    1997-12-01

    "A new bootstrap proposal, labeled smooth conditional moment (SCM) bootstrap, is introduced for independent but not necessarily identically distributed data, where the classical bootstrap procedure fails.... A good example of the benefits of using nonparametric and bootstrap methods is the area of empirical demand analysis. In particular, we will be concerned with their application to the study of two important topics: what are the most relevant effects of household demographic variables on demand behavior, and to what extent present parametric specifications capture these effects." excerpt

  9. Effects of magnetic islands on bootstrap current in toroidal plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, G.; Lin, Z.

    The effects of magnetic islands on electron bootstrap current in toroidal plasmas are studied using gyrokinetic simulations. The magnetic islands cause little changes of the bootstrap current level in the banana regime because of trapped electron effects. In the plateau regime, the bootstrap current is completely suppressed at the island centers due to the destruction of trapped electron orbits by collisions and the flattening of pressure profiles by the islands. In the collisional regime, small but finite bootstrap current can exist inside the islands because of the pressure gradients created by large collisional transport across the islands. Lastly, simulation results show that the bootstrap current level increases near the island separatrix due to steeper local density gradients.

  10. Effects of magnetic islands on bootstrap current in toroidal plasmas

    DOE PAGES

    Dong, G.; Lin, Z.

    2016-12-19

    The effects of magnetic islands on electron bootstrap current in toroidal plasmas are studied using gyrokinetic simulations. The magnetic islands cause little changes of the bootstrap current level in the banana regime because of trapped electron effects. In the plateau regime, the bootstrap current is completely suppressed at the island centers due to the destruction of trapped electron orbits by collisions and the flattening of pressure profiles by the islands. In the collisional regime, small but finite bootstrap current can exist inside the islands because of the pressure gradients created by large collisional transport across the islands. Lastly, simulation results show that the bootstrap current level increases near the island separatrix due to steeper local density gradients.

  11. Simulation data for an estimation of the maximum theoretical value and confidence interval for the correlation coefficient.

    PubMed

    Rocco, Paolo; Cilurzo, Francesco; Minghetti, Paola; Vistoli, Giulio; Pedretti, Alessandro

    2017-10-01

    The data presented in this article are related to the article titled "Molecular Dynamics as a tool for in silico screening of skin permeability" (Rocco et al., 2017) [1]. Knowledge of the confidence interval and maximum theoretical value of the correlation coefficient r can prove useful to estimate the reliability of developed predictive models, in particular when there is great variability in compiled experimental datasets. In this Data in Brief article, data from purposely designed numerical simulations are presented to show how much the maximum r value is worsened by increasing the data uncertainty. The corresponding confidence interval of r is determined by using the Fisher r → Z transform.
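
    The Fisher r → Z transform mentioned above yields a closed-form confidence interval for a correlation coefficient: transform r with atanh, build a normal interval with standard error 1/sqrt(n − 3), and back-transform with tanh. A minimal sketch (assuming a 95% normal quantile of 1.96; the inputs are illustrative):

```python
import math

def correlation_ci(r, n, z=1.96):
    """95% CI for Pearson's r via the Fisher r -> Z transform.
    Z = atanh(r) is approximately normal with SE = 1/sqrt(n - 3)."""
    fisher_z = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    return (math.tanh(fisher_z - z * se),
            math.tanh(fisher_z + z * se))

lo, hi = correlation_ci(0.75, 50)
print(f"95% CI for r=0.75, n=50: ({lo:.3f}, {hi:.3f})")
```

    The back-transform keeps the limits inside (−1, 1) and makes the interval asymmetric around r, which matters most for strong correlations or small n.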

  12. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Wagler, Amy E.

    2014-01-01

    Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

  13. Likelihood-Based Confidence Intervals in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Oort, Frans J.

    2011-01-01

    In exploratory or unrestricted factor analysis, all factor loadings are free to be estimated. In oblique solutions, the correlations between common factors are free to be estimated as well. The purpose of this article is to show how likelihood-based confidence intervals can be obtained for rotated factor loadings and factor correlations, by…

  14. Characterizing the Mathematics Anxiety Literature Using Confidence Intervals as a Literature Review Mechanism

    ERIC Educational Resources Information Center

    Zientek, Linda Reichwein; Yetkiner, Z. Ebrar; Thompson, Bruce

    2010-01-01

    The authors report the contextualization of effect sizes within mathematics anxiety research, and more specifically within research using the Mathematics Anxiety Rating Scale (MARS) and the MARS for Adolescents (MARS-A). The effect sizes from 45 studies were characterized by graphing confidence intervals (CIs) across studies involving (a) adults…

  15. Statistical inference for remote sensing-based estimates of net deforestation

    Treesearch

    Ronald E. McRoberts; Brian F. Walters

    2012-01-01

    Statistical inference requires expression of an estimate in probabilistic terms, usually in the form of a confidence interval. An approach to constructing confidence intervals for remote sensing-based estimates of net deforestation is illustrated. The approach is based on post-classification methods using two independent forest/non-forest classifications because...

  16. Estimating Standardized Linear Contrasts of Means with Desired Precision

    ERIC Educational Resources Information Center

    Bonett, Douglas G.

    2009-01-01

    L. Wilkinson and the Task Force on Statistical Inference (1999) recommended reporting confidence intervals for measures of effect sizes. If the sample size is too small, the confidence interval may be too wide to provide meaningful information. Recently, K. Kelley and J. R. Rausch (2006) used an iterative approach to computer-generate tables of…

  17. UNDERSTANDING SYSTEMATIC MEASUREMENT ERROR IN THERMAL-OPTICAL ANALYSIS FOR PM BLACK CARBON USING RESPONSE SURFACES AND SURFACE CONFIDENCE INTERVALS

    EPA Science Inventory

    Results from a NIST-EPA Interagency Agreement on Understanding Systematic Measurement Error in Thermal-Optical Analysis for PM Black Carbon Using Response Surfaces and Surface Confidence Intervals will be presented at the American Association for Aerosol Research (AAAR) 24th Annu...

  18. The impact of effort-reward imbalance on quality of life among Japanese working men.

    PubMed

    Watanabe, Mayumi; Tanaka, Katsutoshi; Aratake, Yutaka; Kato, Noritada; Sakata, Yumi

    2008-07-01

    Health-related quality of life (HRQL) is an important measure of health outcome in working and healthy populations. Here, we investigated the impact of effort-reward imbalance (ERI), a representative work-stress model, on HRQL of Japanese working men. The study targeted 1,096 employees from a manufacturing plant in Japan. To assess HRQL and ERI, participants were surveyed using the Japanese version of the Short-Form 8 Health Survey (SF-8) and effort-reward imbalance model. Of the 1,096 employees, 1,057 provided valid responses to the questionnaire. For physical summary scores, the adjusted effort-reward imbalance odds ratios of middle vs. bottom and top vs. bottom tertiles were 0.24 (95% confidence interval, 0.08-0.70) and 0.09 (95% confidence interval, 0.03-0.28), respectively. For mental summary scores, ratios were 0.21 (95% confidence interval, 0.07-0.63) and 0.07 (95% confidence interval, 0.02-0.25), respectively. These findings demonstrate that effort-reward imbalance is independently associated with HRQL among Japanese employees.

  19. The Success of Linear Bootstrapping Models: Decision Domain-, Expertise-, and Criterion-Specific Meta-Analysis

    PubMed Central

    Kaufmann, Esther; Wittmann, Werner W.

    2016-01-01

    The success of bootstrapping or replacing a human judge with a model (e.g., an equation) has been demonstrated in Paul Meehl’s (1954) seminal work and bolstered by the results of several meta-analyses. To date, however, analyses considering different types of meta-analyses as well as the potential dependence of bootstrapping success on the decision domain, the level of expertise of the human judge, and the criterion for what constitutes an accurate decision have been missing from the literature. In this study, we addressed these research gaps by conducting a meta-analysis of lens model studies. We compared the results of a traditional (bare-bones) meta-analysis with findings of a meta-analysis of the success of bootstrap models corrected for various methodological artifacts. In line with previous studies, we found that bootstrapping was more successful than human judgment. Furthermore, bootstrapping was more successful in studies with an objective decision criterion than in studies with subjective or test score criteria. We did not find clear evidence that the success of bootstrapping depended on the decision domain (e.g., education or medicine) or on the judge’s level of expertise (novice or expert). Correction of methodological artifacts increased the estimated success of bootstrapping, suggesting that previous analyses without artifact correction (i.e., traditional meta-analyses) may have underestimated the value of bootstrapping models. PMID:27327085

  20. Sex hormones and the risk of type 2 diabetes mellitus: A 9-year follow up among elderly men in Finland.

    PubMed

    Salminen, Marika; Vahlberg, Tero; Räihä, Ismo; Niskanen, Leo; Kivelä, Sirkka-Liisa; Irjala, Kerttu

    2015-05-01

    To analyze whether sex hormone levels predict the incidence of type 2 diabetes among elderly Finnish men. This was a prospective population-based study, with a 9-year follow-up period. The study population in the municipality of Lieto, Finland, consisted of elderly (age ≥64 years) men free of type 2 diabetes at baseline in 1998-1999 (n = 430). Body mass index and cardiovascular disease-adjusted hazard ratios and their 95% confidence intervals for type 2 diabetes predicted by testosterone, free testosterone, sex hormone-binding globulin, luteinizing hormone, and testosterone/luteinizing hormone were estimated. A total of 30 new cases of type 2 diabetes developed during the follow-up period. After adjustment, only higher levels of testosterone (hazard ratio for one-unit increase 0.93, 95% confidence interval 0.87-0.99, P = 0.020) and free testosterone (hazard ratio for 10-unit increase 0.96, 95% confidence interval 0.91-1.00, P = 0.044) were associated with a lower risk of incident type 2 diabetes during the follow-up. These associations (0.94, 95% confidence interval 0.87-1.00, P = 0.050 and 0.95, 95% confidence interval 0.90-1.00, P = 0.035, respectively) persisted even after additional adjustment for sex hormone-binding globulin. Higher levels of testosterone and free testosterone independently predicted a reduced risk of type 2 diabetes in the elderly men. © 2014 Japan Geriatrics Society.
