Sample records for confidence interval estimation

  1. On Some Confidence Intervals for Estimating the Mean of a Skewed Population

    ERIC Educational Resources Information Center

    Shi, W.; Kibria, B. M. Golam

    2007-01-01

    A number of methods are available in the literature for constructing confidence intervals. Here, confidence intervals for estimating the population mean of a skewed distribution are considered. This note proposes two alternative confidence intervals, namely Median t and Mad t, which are simple adjustments to the Student's t confidence interval. In…

  2. Publication Bias in Meta-Analysis: Confidence Intervals for Rosenthal's Fail-Safe Number.

    PubMed

    Fragkos, Konstantinos C; Tsagris, Michail; Frangos, Christos C

    2014-01-01

    The purpose of the present paper is to assess the efficacy of confidence intervals for Rosenthal's fail-safe number. Although Rosenthal's estimator is widely used by researchers, its statistical properties are largely unexplored. First, we developed statistical theory that allowed us to produce confidence intervals for Rosenthal's fail-safe number. This was done by discerning whether the number of studies analysed in a meta-analysis is fixed or random, each case yielding different variance estimators. For a given number of studies and a given distribution, we provided five variance estimators. Confidence intervals are examined with a normal approximation and a nonparametric bootstrap. The accuracy of the different confidence interval estimates was then tested by simulation under different distributional assumptions. The half normal distribution variance estimator has the best probability coverage. Finally, we provide a table of lower confidence intervals for Rosenthal's estimator.
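
    The fail-safe number at the heart of this record has a standard closed form: under Stouffer's method, N_fs = (Σz_i)²/z_α² − k null studies would lift the combined one-tailed p above α. A minimal sketch pairing that formula with a nonparametric bootstrap over studies, as the abstract mentions; the z-values are hypothetical and the percentile interval is an illustrative stand-in for the paper's variance-estimator-based intervals:

```python
import numpy as np
from statistics import NormalDist

def fail_safe_n(z, alpha=0.05):
    """Rosenthal's fail-safe number: how many unpublished null studies
    (mean z = 0) would push the Stouffer combined one-tailed p above alpha."""
    z = np.asarray(z, dtype=float)
    z_alpha = NormalDist().inv_cdf(1 - alpha)      # ~1.645 for alpha = 0.05
    return z.sum() ** 2 / z_alpha ** 2 - len(z)

def fail_safe_boot_ci(z, n_boot=5000, level=0.95, seed=0):
    """Percentile bootstrap over studies (illustrative, not the paper's
    variance-based construction)."""
    rng = np.random.default_rng(seed)
    z = np.asarray(z, dtype=float)
    reps = [fail_safe_n(rng.choice(z, size=len(z), replace=True))
            for _ in range(n_boot)]
    q = [(1 - level) / 2, (1 + level) / 2]
    return tuple(np.quantile(reps, q))

z_values = [2.1, 1.8, 2.5, 1.6, 2.9, 2.2]   # hypothetical per-study z-scores
nfs = fail_safe_n(z_values)                  # about 57 additional null studies
lo, hi = fail_safe_boot_ci(z_values)
```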

  3. Publication Bias in Meta-Analysis: Confidence Intervals for Rosenthal's Fail-Safe Number

    PubMed Central

    Fragkos, Konstantinos C.; Tsagris, Michail; Frangos, Christos C.

    2014-01-01

    The purpose of the present paper is to assess the efficacy of confidence intervals for Rosenthal's fail-safe number. Although Rosenthal's estimator is widely used by researchers, its statistical properties are largely unexplored. First, we developed statistical theory that allowed us to produce confidence intervals for Rosenthal's fail-safe number. This was done by discerning whether the number of studies analysed in a meta-analysis is fixed or random, each case yielding different variance estimators. For a given number of studies and a given distribution, we provided five variance estimators. Confidence intervals are examined with a normal approximation and a nonparametric bootstrap. The accuracy of the different confidence interval estimates was then tested by simulation under different distributional assumptions. The half normal distribution variance estimator has the best probability coverage. Finally, we provide a table of lower confidence intervals for Rosenthal's estimator. PMID:27437470

  4. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

    PubMed

    Terry, Leann; Kelley, Ken

    2012-11-01

    Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
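
    The accuracy-in-parameter-estimation logic generalizes a familiar textbook calculation. As a toy illustration only — for a simple mean with known σ, not the composite reliability coefficients the paper treats — the sample size that keeps the expected large-sample interval at or below a target width is:

```python
from math import ceil
from statistics import NormalDist

def n_for_ci_width(sigma, width, level=0.95):
    """Smallest n for which the expected large-sample CI for a mean,
    xbar +/- z * sigma / sqrt(n), is no wider than `width`."""
    z = NormalDist().inv_cdf((1 + level) / 2)
    return ceil((2 * z * sigma / width) ** 2)

print(n_for_ci_width(sigma=1.0, width=0.2))   # -> 385
```

    The paper's assurance method adds a margin on top of this so that the realized, not just the expected, width stays below the target with some stated probability (e.g., 99%).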

  5. An Introduction to Confidence Intervals for Both Statistical Estimates and Effect Sizes.

    ERIC Educational Resources Information Center

    Capraro, Mary Margaret

    This paper summarizes methods of estimating confidence intervals, including classical intervals and intervals for effect sizes. The recent American Psychological Association (APA) Task Force on Statistical Inference report suggested that confidence intervals should always be reported, and the fifth edition of the APA "Publication Manual"…

  6. Confidence Intervals for Proportion Estimates in Complex Samples. Research Report. ETS RR-06-21

    ERIC Educational Resources Information Center

    Oranje, Andreas

    2006-01-01

    Confidence intervals are an important tool to indicate uncertainty of estimates and to give an idea of probable values of an estimate if a different sample from the population was drawn or a different sample of measures was used. Standard symmetric confidence intervals for proportion estimates based on a normal approximation can yield bounds…

  7. Confidence Intervals for Effect Sizes: Applying Bootstrap Resampling

    ERIC Educational Resources Information Center

    Banjanovic, Erin S.; Osborne, Jason W.

    2016-01-01

    Confidence intervals for effect sizes (CIES) provide readers with an estimate of the strength of a reported statistic as well as the relative precision of the point estimate. These statistics offer more information and context than null hypothesis statistic testing. Although confidence intervals have been recommended by scholars for many years,…

  8. Reducing the width of confidence intervals for the difference between two population means by inverting adaptive tests.

    PubMed

    O'Gorman, Thomas W

    2018-05-01

    In the last decade, it has been shown that an adaptive testing method could be used, along with the Robbins-Monro search procedure, to obtain confidence intervals that are often narrower than traditional confidence intervals. However, these confidence interval limits require a great deal of computation and some familiarity with stochastic search methods. We propose a method for estimating the limits of confidence intervals that uses only a few tests of significance. We compare these limits to those obtained by a lengthy Robbins-Monro stochastic search and find that the proposed method is nearly as accurate as the Robbins-Monro search. Adaptive confidence intervals that are produced by the proposed method are often narrower than traditional confidence intervals when the distributions are long-tailed, skewed, or bimodal. Moreover, the proposed method of estimating confidence interval limits is easy to understand, because it is based solely on the p-values from a few tests of significance.
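
    The adaptive test itself is beyond a short sketch, but the test-inversion idea the abstract relies on is simple: the confidence interval for μx − μy is the set of shifts δ that a level-α test of H0: μx − μy = δ fails to reject. A sketch with an ordinary permutation test standing in for the adaptive test; the data, grid width, and permutation count are illustrative:

```python
import numpy as np

def perm_pvalue(x, y, n_perm=800, seed=0):
    """Two-sided permutation p-value for H0: equal means."""
    rng = np.random.default_rng(seed)
    obs = abs(x.mean() - y.mean())
    pooled = np.concatenate([x, y])
    n = len(x)
    hits = sum(
        abs(p[:n].mean() - p[n:].mean()) >= obs
        for p in (rng.permutation(pooled) for _ in range(n_perm))
    )
    return (hits + 1) / (n_perm + 1)

def inverted_ci(x, y, alpha=0.05, n_grid=41):
    """CI for mu_x - mu_y: all shifts delta not rejected at level alpha
    when testing x against y + delta."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    d_hat = x.mean() - y.mean()
    half = 4 * np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
    grid = np.linspace(d_hat - half, d_hat + half, n_grid)
    kept = [d for d in grid if perm_pvalue(x, y + d) > alpha]
    return min(kept), max(kept)

x = np.array([5.1, 4.8, 6.0, 5.5, 4.9, 5.7, 5.2, 5.4])
y = np.array([3.9, 4.2, 3.6, 4.4, 4.0, 3.8, 4.1, 4.3])
lo, hi = inverted_ci(x, y)
```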

  9. More accurate, calibrated bootstrap confidence intervals for correlating two autocorrelated climate time series

    NASA Astrophysics Data System (ADS)

    Olafsdottir, Kristin B.; Mudelsee, Manfred

    2013-04-01

    Estimation of Pearson's correlation coefficient between two time series, to evaluate the influence of one time-dependent variable on another, is one of the most frequently used statistical methods in the climate sciences. Various methods are used to estimate a confidence interval to support the correlation point estimate, but many of them make strong mathematical assumptions regarding distributional shape and serial correlation that are rarely met. More robust statistical methods are needed to increase the accuracy of the confidence intervals. Bootstrap confidence intervals are estimated in the Fortran 90 program PearsonT (Mudelsee, 2003), whose main intention was to obtain an accurate confidence interval for the correlation coefficient between two time series by taking into account the serial dependence of the process that generated the data. However, Monte Carlo experiments show that the coverage accuracy for smaller data sizes can be improved. Here we adapt the PearsonT program into a new version, called PearsonT3, by calibrating the confidence interval to increase the coverage accuracy. Calibration is a bootstrap resampling technique that essentially performs a second bootstrap loop, resampling from the bootstrap resamples. Like the non-calibrated bootstrap confidence intervals, it offers robustness against the data distribution. A pairwise moving block bootstrap is used to preserve the serial correlation of both time series. The calibration is applied to standard-error-based bootstrap Student's t confidence intervals. The performance of the calibrated confidence intervals is examined with Monte Carlo simulations and compared with that of the non-calibrated intervals, that is, PearsonT. The coverage accuracy is clearly better for the calibrated intervals, with coverage error acceptably small (within a few percentage points) already for data sizes as small as 20.
    One form of climate time series is output from numerical models that simulate the climate system. The method is applied to model data from the high-resolution ocean model INALT01, where the relationship between the Agulhas Leakage and the North Brazil Current is evaluated. Preliminary results show a significant correlation between the two variables at a 10-year lag, roughly the time it takes Agulhas Leakage water to reach the North Brazil Current. Mudelsee, M., 2003. Estimating Pearson's correlation coefficient with bootstrap confidence interval from serially dependent time series. Mathematical Geology 35, 651-665.
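
    The calibration loop (a second bootstrap nested inside the first) is omitted here for brevity; what follows is a sketch of the underlying pairwise moving-block bootstrap with a plain percentile interval, whereas PearsonT3 itself calibrates a standard-error-based Student's t interval. The series, block length, and seeds are illustrative:

```python
import numpy as np

def block_boot_corr_ci(x, y, block_len=5, n_boot=2000, level=0.95, seed=1):
    """Pairwise moving-block bootstrap CI for Pearson's r: both series are
    resampled with the same block indices, preserving serial correlation."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    starts = np.arange(n - block_len + 1)
    n_blocks = -(-n // block_len)                  # ceil(n / block_len)
    rng = np.random.default_rng(seed)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        idx = np.concatenate(
            [np.arange(s, s + block_len) for s in rng.choice(starts, n_blocks)]
        )[:n]
        reps[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    q = [(1 - level) / 2, (1 + level) / 2]
    return tuple(np.quantile(reps, q))

# two serially correlated series sharing a common signal (synthetic example)
rng = np.random.default_rng(42)
signal = np.cumsum(rng.normal(size=200)) * 0.1
x = signal + rng.normal(size=200)
y = signal + rng.normal(size=200)
lo, hi = block_boot_corr_ci(x, y)
```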

  10. Constructing Confidence Intervals for Reliability Coefficients Using Central and Noncentral Distributions.

    ERIC Educational Resources Information Center

    Weber, Deborah A.

    Greater understanding and use of confidence intervals is central to changes in statistical practice (G. Cumming and S. Finch, 2001). Reliability coefficients and confidence intervals for reliability coefficients can be computed using a variety of methods. Estimating confidence intervals includes both central and noncentral distribution approaches.…

  11. A Comparison of Methods for Estimating Confidence Intervals for Omega-Squared Effect Size

    ERIC Educational Resources Information Center

    Finch, W. Holmes; French, Brian F.

    2012-01-01

    Effect size use has been increasing in the past decade in many research areas. Confidence intervals associated with effect sizes are encouraged to be reported. Prior work has investigated the performance of confidence interval estimation with Cohen's d. This study extends this line of work to the analysis of variance case with more than two…

  12. On the appropriateness of applying chi-square distribution based confidence intervals to spectral estimates of helicopter flyover data

    NASA Technical Reports Server (NTRS)

    Rutledge, Charles K.

    1988-01-01

    The validity of applying chi-square based confidence intervals to far-field acoustic flyover spectral estimates was investigated. Simulated data, using a Kendall series and experimental acoustic data from the NASA/McDonnell Douglas 500E acoustics test, were analyzed. Statistical significance tests to determine the equality of distributions of the simulated and experimental data relative to theoretical chi-square distributions were performed. Bias and uncertainty errors associated with the spectral estimates were easily identified from the data sets. A model relating the uncertainty and bias errors to the estimates resulted, which aided in determining the appropriateness of the chi-square distribution based confidence intervals. Such confidence intervals were appropriate for nontonally associated frequencies of the experimental data but were inappropriate for tonally associated estimate distributions. The appropriateness at the tonally associated frequencies was indicated by the presence of bias error and nonconformity of the distributions to the theoretical chi-square distribution. A technique for determining appropriate confidence intervals at the tonally associated frequencies was suggested.
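
    The chi-square interval being assessed has a standard closed form: if a spectral estimate P̂ has ν equivalent degrees of freedom, then ν·P̂/χ²(1−α/2, ν) ≤ P ≤ ν·P̂/χ²(α/2, ν). A dependency-free sketch, using the Wilson-Hilferty approximation for the chi-square quantiles so no statistics library is needed:

```python
from statistics import NormalDist

def chi2_quantile(p, dof):
    """Wilson-Hilferty approximation to the chi-square p-quantile."""
    z = NormalDist().inv_cdf(p)
    c = 2.0 / (9.0 * dof)
    return dof * (1.0 - c + z * c ** 0.5) ** 3

def spectral_ci(p_hat, dof, alpha=0.05):
    """Chi-square CI for the true spectral density given estimate p_hat
    with `dof` equivalent degrees of freedom."""
    lo = dof * p_hat / chi2_quantile(1 - alpha / 2, dof)
    hi = dof * p_hat / chi2_quantile(alpha / 2, dof)
    return lo, hi

print(spectral_ci(1.0, dof=30))   # roughly (0.64, 1.79)
```

    Note the interval is multiplicative around P̂ and asymmetric, which is why it is usually drawn on a dB scale.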

  13. Expression of Proteins Involved in Epithelial-Mesenchymal Transition as Predictors of Metastasis and Survival in Breast Cancer Patients

    DTIC Science & Technology

    2013-11-01

    Unconditional logistic regression was used to estimate odds ratios (OR) and 95% confidence intervals (CI) for risk of node involvement, for risk of high-grade tumors, and for the associations between each of the seven SNPs and…

  14. Four applications of permutation methods to testing a single-mediator model.

    PubMed

    Taylor, Aaron B; MacKinnon, David P

    2012-09-01

    Four applications of permutation tests to the single-mediator model are described and evaluated in this study. Permutation tests work by rearranging data in many possible ways in order to estimate the sampling distribution for the test statistic. The four applications to mediation evaluated here are the permutation test of ab, the permutation joint significance test, and the noniterative and iterative permutation confidence intervals for ab. A Monte Carlo simulation study was used to compare these four tests with the four best available tests for mediation found in previous research: the joint significance test, the distribution of the product test, and the percentile and bias-corrected bootstrap tests. We compared the different methods on Type I error, power, and confidence interval coverage. The noniterative permutation confidence interval for ab was the best performer among the new methods. It successfully controlled Type I error, had power nearly as good as the most powerful existing methods, and had better coverage than any existing method. The iterative permutation confidence interval for ab had lower power than do some existing methods, but it performed better than any other method in terms of coverage. The permutation confidence interval methods are recommended when estimating a confidence interval is a primary concern. SPSS and SAS macros that estimate these confidence intervals are provided.
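
    Of the comparison methods, the percentile bootstrap for the indirect effect ab is the simplest to sketch. This is the comparator, not the recommended permutation intervals or the provided SPSS/SAS macros, and the data are synthetic:

```python
import numpy as np

def ab_estimate(x, m, y):
    """Indirect effect a*b from two OLS fits: m ~ x, then y ~ x + m."""
    a = np.polyfit(x, m, 1)[0]                   # slope of m on x
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]  # slope of y on m, given x
    return a * b

def ab_percentile_ci(x, m, y, n_boot=2000, level=0.95, seed=0):
    """Percentile bootstrap CI for a*b, resampling cases with replacement."""
    rng = np.random.default_rng(seed)
    n = len(x)
    reps = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)
        reps[i] = ab_estimate(x[idx], m[idx], y[idx])
    q = [(1 - level) / 2, (1 + level) / 2]
    return tuple(np.quantile(reps, q))

# synthetic single-mediator data: x -> m -> y
rng = np.random.default_rng(7)
x = rng.normal(size=100)
m = 0.5 * x + rng.normal(size=100)
y = 0.4 * m + rng.normal(size=100)
lo, hi = ab_percentile_ci(x, m, y)
```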

  15. Estimating equivalence with quantile regression

    USGS Publications Warehouse

    Cade, B.S.

    2011-01-01

    Equivalence testing and corresponding confidence interval estimates are used to provide more enlightened statistical statements about parameter estimates by relating them to intervals of effect sizes deemed to be of scientific or practical importance rather than just to an effect size of zero. Equivalence tests and confidence interval estimates are based on a null hypothesis that a parameter estimate is either outside (inequivalence hypothesis) or inside (equivalence hypothesis) an equivalence region, depending on the question of interest and assignment of risk. The former approach, often referred to as bioequivalence testing, is often used in regulatory settings because it reverses the burden of proof compared to a standard test of significance, following a precautionary principle for environmental protection. Unfortunately, many applications of equivalence testing focus on establishing average equivalence by estimating differences in means of distributions that do not have homogeneous variances. I discuss how to compare equivalence across quantiles of distributions using confidence intervals on quantile regression estimates that detect differences in heterogeneous distributions missed by focusing on means. I used one-tailed confidence intervals based on inequivalence hypotheses in a two-group treatment-control design for estimating bioequivalence of arsenic concentrations in soils at an old ammunition testing site and bioequivalence of vegetation biomass at a reclaimed mining site. Two-tailed confidence intervals based both on inequivalence and equivalence hypotheses were used to examine quantile equivalence for negligible trends over time for a continuous exponential model of amphibian abundance. © 2011 by the Ecological Society of America.

  16. The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution

    NASA Astrophysics Data System (ADS)

    Shin, H.; Heo, J.; Kim, T.; Jung, Y.

    2007-12-01

    The generalized logistic (GL) distribution has been widely used for frequency analysis. However, few studies have addressed the confidence intervals that indicate the prediction accuracy of the GL distribution. In this paper, the estimation of confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM), and the asymptotic variances of each quantile estimator are derived as functions of sample size, return period, and parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived confidence intervals of quantiles. The results show that the relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase as the return period increases and decrease as the sample size increases. PWM performs better than the other methods in terms of RRMSE when the data are nearly symmetric, while ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was applied to fit the distribution of annual maximum rainfall data. The results show little difference in the estimated quantiles between ML and PWM, but distinct differences for MOM.

  17. Commentary on Holmes et al. (2007): resolving the debate on when extinction risk is predictable.

    PubMed

    Ellner, Stephen P; Holmes, Elizabeth E

    2008-08-01

    We reconcile the findings of Holmes et al. (Ecology Letters, 10, 2007, 1182) that 95% confidence intervals for quasi-extinction risk were narrow for many vertebrates of conservation concern, with previous theory predicting wide confidence intervals. We extend previous theory, concerning the precision of quasi-extinction estimates as a function of population dynamic parameters, prediction intervals and quasi-extinction thresholds, and provide an approximation that specifies the prediction interval and threshold combinations where quasi-extinction estimates are precise (vs. imprecise). This allows PVA practitioners to define the prediction interval and threshold regions of safety (low risk with high confidence), danger (high risk with high confidence), and uncertainty.

  18. Quantitative imaging biomarkers: Effect of sample size and bias on confidence interval coverage.

    PubMed

    Obuchowski, Nancy A; Bullen, Jennifer

    2017-01-01

    Introduction: Quantitative imaging biomarkers (QIBs) are being increasingly used in medical practice and clinical trials. An essential first step in the adoption of a quantitative imaging biomarker is the characterization of its technical performance, i.e. precision and bias, through one or more performance studies. Then, given the technical performance, a confidence interval for a new patient's true biomarker value can be constructed. Estimating bias and precision can be problematic because rarely are both estimated in the same study, precision studies are usually quite small, and bias cannot be measured when there is no reference standard.
    Methods: A Monte Carlo simulation study was conducted to assess factors affecting nominal coverage of confidence intervals for a new patient's quantitative imaging biomarker measurement and for change in the quantitative imaging biomarker over time. Factors considered include sample size for estimating bias and precision, effect of fixed and non-proportional bias, clustered data, and absence of a reference standard.
    Results: Technical performance studies of a quantitative imaging biomarker should include at least 35 test-retest subjects to estimate precision and 65 cases to estimate bias. Confidence intervals for a new patient's quantitative imaging biomarker measurement constructed under the no-bias assumption provide nominal coverage as long as the fixed bias is <12%. For confidence intervals of the true change over time, linearity must hold and the slope of the regression of the measurements vs. true values should be between 0.95 and 1.05. The regression slope can be assessed adequately as long as fixed multiples of the measurand can be generated. Even small non-proportional bias greatly reduces confidence interval coverage. Multiple lesions in the same subject can be treated as independent when estimating precision.
    Conclusion: Technical performance studies of quantitative imaging biomarkers require moderate sample sizes in order to provide robust estimates of bias and precision for constructing confidence intervals for new patients. Assumptions of linearity and non-proportional bias should be assessed thoroughly.
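
    The kind of simulation the study describes can be miniaturized: check how often a no-bias normal interval for a new patient's measurement covers the truth when the device carries a fixed bias. All numbers below are illustrative, not the study's settings:

```python
import numpy as np

def coverage(fixed_bias, sigma=1.0, n_sims=20000, level_z=1.959964, seed=3):
    """Monte Carlo coverage of the no-bias interval  y +/- z*sigma  for the
    true value, when measurements are y = truth + fixed_bias + noise."""
    rng = np.random.default_rng(seed)
    truth = rng.uniform(10, 20, n_sims)
    y = truth + fixed_bias + rng.normal(0, sigma, n_sims)
    covered = np.abs(y - truth) <= level_z * sigma
    return covered.mean()

print(coverage(0.0))   # near the nominal 0.95 when there is no bias
print(coverage(1.0))   # well below nominal: fixed bias erodes coverage
```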

  19. Using an R Shiny to Enhance the Learning Experience of Confidence Intervals

    ERIC Educational Resources Information Center

    Williams, Immanuel James; Williams, Kelley Kim

    2018-01-01

    Many students find understanding confidence intervals difficult, especially because of the amalgamation of concepts such as confidence levels, standard error, point estimates and sample sizes. An R Shiny application was created to assist the learning process of confidence intervals using graphics and data from the US National Basketball…

  20. Confidence intervals for the population mean tailored to small sample sizes, with applications to survey sampling.

    PubMed

    Rosenblum, Michael A; van der Laan, Mark J

    2009-01-07

    The validity of standard confidence intervals constructed in survey sampling is based on the central limit theorem. For small sample sizes, the central limit theorem may give a poor approximation, resulting in confidence intervals that are misleading. We discuss this issue and propose methods for constructing confidence intervals for the population mean tailored to small sample sizes. We present a simple approach for constructing confidence intervals for the population mean based on tail bounds for the sample mean that are correct for all sample sizes. Bernstein's inequality provides one such tail bound. The resulting confidence intervals have guaranteed coverage probability under much weaker assumptions than are required for standard methods. A drawback of this approach, as we show, is that these confidence intervals are often quite wide. In response to this, we present a method for constructing much narrower confidence intervals, which are better suited for practical applications, and that are still more robust than confidence intervals based on standard methods, when dealing with small sample sizes. We show how to extend our approaches to much more general estimation problems than estimating the sample mean. We describe how these methods can be used to obtain more reliable confidence intervals in survey sampling. As a concrete example, we construct confidence intervals using our methods for the number of violent deaths between March 2003 and July 2006 in Iraq, based on data from the study "Mortality after the 2003 invasion of Iraq: A cross sectional cluster sample survey," by Burnham et al. (2006).
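
    Bernstein's inequality involves the (unknown) variance; a convenient finite-sample variant that plugs in the sample variance is the empirical Bernstein bound of Maurer and Pontil (2009), sketched here for a variable with known range. This is in the spirit of the paper's first, conservative approach, not its narrower refined intervals, and the data are synthetic:

```python
import numpy as np

def empirical_bernstein_ci(x, b_range, alpha=0.05):
    """Finite-sample CI for the mean of a variable with range b_range, via
    the empirical Bernstein bound. Valid for every n with no normality
    assumption -- but, as the abstract notes for such bounds, often wide."""
    x = np.asarray(x, float)
    n = len(x)
    v = x.var(ddof=1)
    log_term = np.log(2 / alpha)
    half = (np.sqrt(2 * v * log_term / n)
            + 7 * b_range * log_term / (3 * (n - 1)))
    return x.mean() - half, x.mean() + half

rng = np.random.default_rng(11)
x = rng.uniform(0, 1, 50)                  # bounded in [0, 1]
lo, hi = empirical_bernstein_ci(x, b_range=1.0)
```

    Compare the width with the usual normal-theory interval to see the price paid for guaranteed coverage at small n.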

  1. Calculating Confidence Intervals for Regional Economic Impacts of Recreation by Bootstrapping Visitor Expenditures

    Treesearch

    Donald B.K. English

    2000-01-01

    In this paper I use bootstrap procedures to develop confidence intervals for estimates of total industrial output generated per thousand tourist visits. Mean expenditures from replicated visitor expenditure data included weights to correct for response bias. Impacts were estimated with IMPLAN. Ninety percent interval endpoints were 6 to 16 percent above or below the...

  2. Association between GFR Estimated by Multiple Methods at Dialysis Commencement and Patient Survival

    PubMed Central

    Wong, Muh Geot; Pollock, Carol A.; Cooper, Bruce A.; Branley, Pauline; Collins, John F.; Craig, Jonathan C.; Kesselhut, Joan; Luxton, Grant; Pilmore, Andrew; Harris, David C.

    2014-01-01

    Background and objectives: The Initiating Dialysis Early and Late study showed that planned early or late initiation of dialysis, based on the Cockcroft and Gault estimation of GFR, was associated with identical clinical outcomes. This study examined the association of all-cause mortality with estimated GFR at dialysis commencement, which was determined using multiple formulas.
    Design, setting, participants, & measurements: Initiating Dialysis Early and Late trial participants were stratified into tertiles according to the estimated GFR measured by Cockcroft and Gault, Modification of Diet in Renal Disease, or Chronic Kidney Disease-Epidemiology Collaboration formula at dialysis commencement. Patient survival was determined using multivariable Cox proportional hazards model regression.
    Results: Only Initiating Dialysis Early and Late trial participants who commenced on dialysis were included in this study (n=768). A total of 275 patients died during the study. After adjustment for age, sex, racial origin, body mass index, diabetes, and cardiovascular disease, no significant differences in survival were observed between estimated GFR tertiles determined by Cockcroft and Gault (lowest tertile adjusted hazard ratio, 1.11; 95% confidence interval, 0.82 to 1.49; middle tertile hazard ratio, 1.29; 95% confidence interval, 0.96 to 1.74; highest tertile reference), Modification of Diet in Renal Disease (lowest tertile hazard ratio, 0.88; 95% confidence interval, 0.63 to 1.24; middle tertile hazard ratio, 1.20; 95% confidence interval, 0.90 to 1.61; highest tertile reference), and Chronic Kidney Disease-Epidemiology Collaboration equations (lowest tertile hazard ratio, 0.93; 95% confidence interval, 0.67 to 1.27; middle tertile hazard ratio, 1.15; 95% confidence interval, 0.86 to 1.54; highest tertile reference).
    Conclusions: Estimated GFR at dialysis commencement was not significantly associated with patient survival, regardless of the formula used. However, a clinically important association cannot be excluded, because the observed confidence intervals were wide. PMID:24178976

  3. Statistical inference for remote sensing-based estimates of net deforestation

    Treesearch

    Ronald E. McRoberts; Brian F. Walters

    2012-01-01

    Statistical inference requires expression of an estimate in probabilistic terms, usually in the form of a confidence interval. An approach to constructing confidence intervals for remote sensing-based estimates of net deforestation is illustrated. The approach is based on post-classification methods using two independent forest/non-forest classifications because...

  4. A confidence interval analysis of sampling effort, sequencing depth, and taxonomic resolution of fungal community ecology in the era of high-throughput sequencing.

    PubMed

    Oono, Ryoko

    2017-01-01

    High-throughput sequencing technology has helped microbial community ecologists explore ecological and evolutionary patterns at unprecedented scales. The benefits of a large sample size still typically outweigh those of greater sequencing depths per sample for accurate estimations of ecological inferences. However, excluding or not sequencing rare taxa may mislead the answers to the questions 'how and why are communities different?' This study evaluates the confidence intervals of ecological inferences from high-throughput sequencing data of foliar fungal endophytes as case studies through a range of sampling efforts, sequencing depths, and taxonomic resolutions to understand how technical and analytical practices may affect our interpretations. Increasing sample size reliably decreased confidence intervals across multiple community comparisons. However, the effects of sequencing depths on confidence intervals depended on how rare taxa influenced the dissimilarity estimates among communities and did not significantly decrease confidence intervals for all community comparisons. A comparison of simulated communities under random drift suggests that sequencing depths are important in estimating dissimilarities between microbial communities under neutral selective processes. Confidence interval analyses reveal important biases as well as biological trends in microbial community studies that otherwise may be ignored when communities are only compared for statistically significant differences.

  5. A confidence interval analysis of sampling effort, sequencing depth, and taxonomic resolution of fungal community ecology in the era of high-throughput sequencing

    PubMed Central

    2017-01-01

    High-throughput sequencing technology has helped microbial community ecologists explore ecological and evolutionary patterns at unprecedented scales. The benefits of a large sample size still typically outweigh those of greater sequencing depths per sample for accurate estimations of ecological inferences. However, excluding or not sequencing rare taxa may mislead the answers to the questions ‘how and why are communities different?’ This study evaluates the confidence intervals of ecological inferences from high-throughput sequencing data of foliar fungal endophytes as case studies through a range of sampling efforts, sequencing depths, and taxonomic resolutions to understand how technical and analytical practices may affect our interpretations. Increasing sample size reliably decreased confidence intervals across multiple community comparisons. However, the effects of sequencing depths on confidence intervals depended on how rare taxa influenced the dissimilarity estimates among communities and did not significantly decrease confidence intervals for all community comparisons. A comparison of simulated communities under random drift suggests that sequencing depths are important in estimating dissimilarities between microbial communities under neutral selective processes. Confidence interval analyses reveal important biases as well as biological trends in microbial community studies that otherwise may be ignored when communities are only compared for statistically significant differences. PMID:29253889

  6. Coefficient Alpha Bootstrap Confidence Interval under Nonnormality

    ERIC Educational Resources Information Center

    Padilla, Miguel A.; Divers, Jasmin; Newton, Matthew

    2012-01-01

    Three different bootstrap methods for estimating confidence intervals (CIs) for coefficient alpha were investigated. In addition, the bootstrap methods were compared with the most promising coefficient alpha CI estimation methods reported in the literature. The CI methods were assessed through a Monte Carlo simulation utilizing conditions…

  7. Towards the estimation of effect measures in studies using respondent-driven sampling.

    PubMed

    Rotondi, Michael A

    2014-06-01

    Respondent-driven sampling (RDS) is an increasingly common sampling technique to recruit hidden populations. Statistical methods for RDS are not straightforward due to the correlation between individual outcomes and subject weighting; thus, analyses are typically limited to estimation of population proportions. This manuscript applies the method of variance estimates recovery (MOVER) to construct confidence intervals for effect measures such as risk difference (difference of proportions) or relative risk in studies using RDS. To illustrate the approach, MOVER is used to construct confidence intervals for differences in the prevalence of demographic characteristics between an RDS study and convenience study of injection drug users. MOVER is then applied to obtain a confidence interval for the relative risk between education levels and HIV seropositivity and current infection with syphilis, respectively. This approach provides a simple method to construct confidence intervals for effect measures in RDS studies. Since it only relies on a proportion and appropriate confidence limits, it can also be applied to previously published manuscripts.
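    The MOVER construction described here has a compact closed form. As a hedged sketch (not the manuscript's code; the Wilson score interval is one common choice of component interval), a confidence interval for a difference of proportions p1 - p2 can be assembled from the two single-sample limits:

```python
import math

def wilson_ci(x, n, z=1.96):
    """Wilson score interval for a binomial proportion x/n."""
    p = x / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

def mover_diff_ci(x1, n1, x2, n2, z=1.96):
    """MOVER CI for p1 - p2, recovering variance from the two Wilson limits."""
    p1, p2 = x1 / n1, x2 / n2
    l1, u1 = wilson_ci(x1, n1, z)
    l2, u2 = wilson_ci(x2, n2, z)
    d = p1 - p2
    lower = d - math.sqrt((p1 - l1) ** 2 + (u2 - p2) ** 2)
    upper = d + math.sqrt((u1 - p1) ** 2 + (p2 - l2) ** 2)
    return lower, upper
```

    The same recovery-of-variance idea extends to relative risks by working on the log scale, which is how the manuscript obtains intervals for ratio measures.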

  8. A comparison of confidence interval methods for the intraclass correlation coefficient in community-based cluster randomization trials with a binary outcome.

    PubMed

    Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan

    2016-04-01

    Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. 
However, confidence intervals constructed using the new approach combined with Smith's method provide nominal or close to nominal coverage when the intraclass correlation coefficient is small (<0.05), as is the case in most community intervention trials. This study concludes that when a binary outcome variable is measured in a small number of large clusters, confidence intervals for the intraclass correlation coefficient may be constructed by dividing existing clusters into sub-clusters (e.g. groups of 5) and using Smith's method. The resulting confidence intervals provide nominal or close to nominal coverage across a wide range of parameters when the intraclass correlation coefficient is small (<0.05). Application of this method should provide investigators with a better understanding of the uncertainty associated with a point estimator of the intraclass correlation coefficient used for determining the sample size needed for a newly designed community-based trial. © The Author(s) 2015.
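    The sub-cluster idea in this study can be sketched directly. Below is a minimal, hypothetical illustration: it uses deterministic chunking rather than the paper's random division, and the plain one-way ANOVA estimator of the intraclass correlation rather than Smith's standard-error formula, so it shows only the shape of the approach.

```python
def anova_icc(clusters):
    """One-way ANOVA estimator of the intraclass correlation coefficient.

    clusters: list of outcome lists (binary or continuous), one per cluster.
    """
    k = len(clusters)
    sizes = [len(c) for c in clusters]
    total_n = sum(sizes)
    grand = sum(sum(c) for c in clusters) / total_n
    means = [sum(c) / len(c) for c in clusters]
    msb = sum(n * (m - grand) ** 2 for n, m in zip(sizes, means)) / (k - 1)
    msw = sum((y - m) ** 2 for c, m in zip(clusters, means) for y in c) / (total_n - k)
    # n0: the usual "average" cluster size adjustment for unequal clusters
    n0 = (total_n - sum(n * n for n in sizes) / total_n) / (k - 1)
    return (msb - msw) / (msb + (n0 - 1) * msw)

def split_into_subclusters(clusters, m=5):
    """Divide each large cluster into sub-clusters of at most m members."""
    return [c[i:i + m] for c in clusters for i in range(0, len(c), m)]
```

    Splitting a handful of community-sized clusters into groups of 5 yields many small clusters, to which existing interval methods (here represented by the ANOVA estimator) can then be applied.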

  9. Calculation of Confidence Intervals for the Maximum Magnitude of Earthquakes in Different Seismotectonic Zones of Iran

    NASA Astrophysics Data System (ADS)

    Salamat, Mona; Zare, Mehdi; Holschneider, Matthias; Zöller, Gert

    2017-03-01

    The problem of estimating the maximum possible earthquake magnitude m_max has attracted growing attention in recent years. Due to sparse data, the role of uncertainties becomes crucial. In this work, we determine the uncertainties related to the maximum magnitude in terms of confidence intervals. Using an earthquake catalog of Iran, m_max is estimated for different predefined levels of confidence in six seismotectonic zones. Assuming the doubly truncated Gutenberg-Richter distribution as a statistical model for earthquake magnitudes, confidence intervals for the maximum possible magnitude of earthquakes are calculated in each zone. While the lower limit of the confidence interval is the magnitude of the maximum observed event, the upper limit is calculated from the catalog and the statistical model. For this aim, we use both the original catalog, to which no declustering method has been applied, and a declustered version of the catalog. Based on the study by Holschneider et al. (Bull Seismol Soc Am 101(4):1649-1659, 2011), the confidence interval for m_max is frequently unbounded, especially if high levels of confidence are required. In this case, no information is gained from the data. Therefore, we elaborate for which settings finite confidence intervals are obtained. In this work, Iran is divided into six seismotectonic zones, namely Alborz, Azerbaijan, Zagros, Makran, Kopet Dagh, and Central Iran. Although calculations of the confidence interval in the Central Iran and Zagros seismotectonic zones are relatively acceptable for meaningful levels of confidence, the results in Kopet Dagh, Alborz, Azerbaijan, and Makran are less promising. The results indicate that estimating m_max from an earthquake catalog alone, at reasonable levels of confidence, is almost impossible.

  10. Applying Bootstrap Resampling to Compute Confidence Intervals for Various Statistics with R

    ERIC Educational Resources Information Center

    Dogan, C. Deha

    2017-01-01

    Background: Most of the studies in academic journals use p values to represent statistical significance. However, this is not a good indicator of practical significance. Although confidence intervals provide information about the precision of point estimation, they are, unfortunately, rarely used. The infrequent use of confidence intervals might…
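    The bootstrap procedure this record describes is statistic-agnostic: resample the data with replacement, recompute the statistic, and read off percentiles of the resulting distribution. A minimal Python sketch (the article works in R; the function name and defaults here are illustrative assumptions):

```python
import random
import statistics

def percentile_bootstrap_ci(data, statistic, n_boot=2000, level=0.95, seed=7):
    """Percentile bootstrap CI for an arbitrary statistic of one sample."""
    rng = random.Random(seed)
    n = len(data)
    reps = sorted(
        statistic([data[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_boot)
    )
    alpha = (1 - level) / 2
    return reps[int(alpha * n_boot)], reps[int((1 - alpha) * n_boot) - 1]
```

    For example, `percentile_bootstrap_ci(data, statistics.median)` gives an interval for the median; any function of the sample, including ones with no closed-form standard error, can be substituted.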

  11. Reporting Confidence Intervals and Effect Sizes: Collecting the Evidence

    ERIC Educational Resources Information Center

    Zientek, Linda Reichwein; Ozel, Z. Ebrar Yetkiner; Ozel, Serkan; Allen, Jeff

    2012-01-01

    Confidence intervals (CIs) and effect sizes are essential to encourage meta-analytic thinking and to accumulate research findings. CIs provide a range of plausible values for population parameters with a degree of confidence that the parameter is in that particular interval. CIs also give information about how precise the estimates are. Comparison…

  12. Accuracy in parameter estimation for targeted effects in structural equation modeling: sample size planning for narrow confidence intervals.

    PubMed

    Lai, Keke; Kelley, Ken

    2011-06-01

    In addition to evaluating a structural equation model (SEM) as a whole, often the model parameters are of interest and confidence intervals for those parameters are formed. Given a model with a good overall fit, it is entirely possible for the targeted effects of interest to have very wide confidence intervals, thus giving little information about the magnitude of the population targeted effects. With the goal of obtaining sufficiently narrow confidence intervals for the model parameters of interest, sample size planning methods for SEM are developed from the accuracy in parameter estimation approach. One method plans for the sample size so that the expected confidence interval width is sufficiently narrow. An extended procedure ensures that the obtained confidence interval will be no wider than desired, with some specified degree of assurance. A Monte Carlo simulation study was conducted that verified the effectiveness of the procedures in realistic situations. The methods developed have been implemented in the MBESS package in R so that they can be easily applied by researchers. © 2011 American Psychological Association

  13. Trends and racial and ethnic disparities in the prevalence of pregestational type 1 and type 2 diabetes in Northern California: 1996-2014.

    PubMed

    Peng, Tiffany Y; Ehrlich, Samantha F; Crites, Yvonne; Kitzmiller, John L; Kuzniewicz, Michael W; Hedderson, Monique M; Ferrara, Assiamira

    2017-02-01

    Despite concern for adverse perinatal outcomes in women with diabetes mellitus before pregnancy, recent data on the prevalence of pregestational type 1 and type 2 diabetes mellitus in the United States are lacking. The purpose of this study was to estimate changes in the prevalence of overall pregestational diabetes mellitus (all types) and pregestational type 1 and type 2 diabetes mellitus and to estimate whether changes varied by race-ethnicity from 1996-2014. We conducted a cohort study among 655,428 pregnancies at a Northern California integrated health delivery system from 1996-2014. Logistic regression analyses provided estimates of prevalence and trends. The age-adjusted prevalence (per 100 deliveries) of overall pregestational diabetes mellitus increased from 1996-1999 to 2012-2014 (from 0.58 [95% confidence interval, 0.54-0.63] to 1.06 [95% confidence interval, 1.00-1.12]; P trend <.0001). Significant increases occurred in all racial-ethnic groups; the largest relative increase was among Hispanic women (121.8% [95% confidence interval, 84.4-166.7]); the smallest relative increase was among non-Hispanic white women (49.6% [95% confidence interval, 27.5-75.4]). The age-adjusted prevalence of pregestational type 1 and type 2 diabetes mellitus increased from 0.14 (95% confidence interval, 0.12-0.16) to 0.23 (95% confidence interval, 0.21-0.27; P trend <.0001) and from 0.42 (95% confidence interval, 0.38-0.46) to 0.78 (95% confidence interval, 0.73-0.83; P trend <.0001), respectively. The greatest relative increase in the prevalence of type 1 diabetes mellitus was in non-Hispanic white women (118.4% [95% confidence interval, 70.0-180.5]), who had the lowest increases in the prevalence of type 2 diabetes mellitus (13.6% [95% confidence interval, -8.0 to 40.1]). 
The greatest relative increase in the prevalence of type 2 diabetes mellitus was in Hispanic women (125.2% [95% confidence interval, 84.8-174.4]), followed by African American women (102.0% [95% confidence interval, 38.3-194.3]) and Asian women (93.3% [95% confidence interval, 48.9-150.9]). The prevalence of overall pregestational diabetes mellitus and pregestational type 1 and type 2 diabetes mellitus increased from 1996-1999 to 2012-2014 and racial-ethnic disparities were observed, possibly because of differing prevalence of maternal obesity. Targeted prevention efforts, preconception care, and disease management strategies are needed to reduce the burden of diabetes mellitus and its sequelae. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Bootstrap confidence intervals and bias correction in the estimation of HIV incidence from surveillance data with testing for recent infection.

    PubMed

    Carnegie, Nicole Bohme

    2011-04-15

    The incidence of new infections is a key measure of the status of the HIV epidemic, but accurate measurement of incidence is often constrained by limited data. Karon et al. (Statist. Med. 2008; 27:4617–4633) developed a model to estimate the incidence of HIV infection from surveillance data with biologic testing for recent infection for newly diagnosed cases. This method has been implemented by public health departments across the United States and is behind the new national incidence estimates, which are about 40 per cent higher than previous estimates. We show that the delta method approximation given for the variance of the estimator is incomplete, leading to an inflated variance estimate. This contributes to the generation of overly conservative confidence intervals, potentially obscuring important differences between populations. We demonstrate via simulation that an innovative model-based bootstrap method using the specified model for the infection and surveillance process improves confidence interval coverage and adjusts for the bias in the point estimate. Confidence interval coverage is about 94–97 per cent after correction, compared with 96–99 per cent before. The simulated bias in the estimate of incidence ranges from −6.3 to +14.6 per cent under the original model but is consistently under 1 per cent after correction by the model-based bootstrap. In an application to data from King County, Washington in 2007 we observe correction of 7.2 per cent relative bias in the incidence estimate and a 66 per cent reduction in the width of the 95 per cent confidence interval using this method. We provide open-source software to implement the method that can also be extended for alternate models.

  15. Confidence intervals from single observations in forest research

    Treesearch

    Harry T. Valentine; George M. Furnival; Timothy G. Gregoire

    1991-01-01

    A procedure for constructing confidence intervals and testing hypotheses from a single trial or observation is reviewed. The procedure requires a prior, fixed estimate or guess of the outcome of an experiment or sampling. Two examples of applications are described: a confidence interval is constructed for the expected outcome of a systematic sampling of a forested tract...

  16. Confidence Intervals for the Probability of Superiority Effect Size Measure and the Area under a Receiver Operating Characteristic Curve

    ERIC Educational Resources Information Center

    Ruscio, John; Mullen, Tara

    2012-01-01

    It is good scientific practice to report an appropriate estimate of effect size and a confidence interval (CI) to indicate the precision with which a population effect was estimated. For comparisons of 2 independent groups, a probability-based effect size estimator (A) that is equal to the area under a receiver operating characteristic curve…
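    The A statistic itself is simple to compute over all between-group pairs, which is part of its appeal. A short sketch (the CI methods the article compares, such as bootstrap variants, are not reproduced here):

```python
def prob_superiority(x, y):
    """A = P(X > Y) + 0.5 * P(X = Y) over all pairs; equals the area under
    the ROC curve for classifying group membership by score."""
    pairs = [(xi, yj) for xi in x for yj in y]
    greater = sum(1 for xi, yj in pairs if xi > yj)
    ties = sum(1 for xi, yj in pairs if xi == yj)
    return (greater + 0.5 * ties) / len(pairs)
```

    A = 0.5 indicates complete overlap of the two groups, while values near 0 or 1 indicate strong separation.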

  17. Weighted regression analysis and interval estimators

    Treesearch

    Donald W. Seegrist

    1974-01-01

    A method is presented for deriving the weighted least squares estimators of the parameters of a multiple regression model. Confidence intervals for expected values and prediction intervals for the means of future samples are given.
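    For the simple-regression case, the weighted least squares estimators reduce to weighted means and cross-products. A brief sketch under the usual assumption that weights are inverse variances (an illustration, not Seegrist's derivation):

```python
def weighted_linear_fit(x, y, w):
    """Weighted least squares for y = a + b*x with weights w (e.g. 1/variance)."""
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxx = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    sxy = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
    b = sxy / sxx      # weighted slope
    a = ybar - b * xbar  # weighted intercept
    return a, b
```

    With all weights equal this reduces to ordinary least squares; interval estimators for the expected value at a given x then follow from the weighted residual variance and the t distribution.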

  18. Likelihood-Based Confidence Intervals in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Oort, Frans J.

    2011-01-01

    In exploratory or unrestricted factor analysis, all factor loadings are free to be estimated. In oblique solutions, the correlations between common factors are free to be estimated as well. The purpose of this article is to show how likelihood-based confidence intervals can be obtained for rotated factor loadings and factor correlations, by…

  19. Confidence intervals for expected moments algorithm flood quantile estimates

    USGS Publications Warehouse

    Cohn, Timothy A.; Lane, William L.; Stedinger, Jery R.

    2001-01-01

    Historical and paleoflood information can substantially improve flood frequency estimates if appropriate statistical procedures are properly applied. However, the Federal guidelines for flood frequency analysis, set forth in Bulletin 17B, rely on an inefficient “weighting” procedure that fails to take advantage of historical and paleoflood information. This has led researchers to propose several more efficient alternatives including the Expected Moments Algorithm (EMA), which is attractive because it retains Bulletin 17B's statistical structure (method of moments with the Log Pearson Type 3 distribution) and thus can be easily integrated into flood analyses employing the rest of the Bulletin 17B approach. The practical utility of EMA, however, has been limited because no closed‐form method has been available for quantifying the uncertainty of EMA‐based flood quantile estimates. This paper addresses that concern by providing analytical expressions for the asymptotic variance of EMA flood‐quantile estimators and confidence intervals for flood quantile estimates. Monte Carlo simulations demonstrate the properties of such confidence intervals for sites where a 25‐ to 100‐year streamgage record is augmented by 50 to 150 years of historical information. The experiments show that the confidence intervals, though not exact, should be acceptable for most purposes.

  20. Likelihood-based confidence intervals for estimating floods with given return periods

    NASA Astrophysics Data System (ADS)

    Martins, Eduardo Sávio P. R.; Clarke, Robin T.

    1993-06-01

    This paper discusses aspects of the calculation of likelihood-based confidence intervals for T-year floods, with particular reference to (1) the two-parameter gamma distribution; (2) the Gumbel distribution; (3) the two-parameter log-normal distribution, and other distributions related to the normal by Box-Cox transformations. Calculation of the confidence limits is straightforward using the Nelder-Mead algorithm with a constraint incorporated, although care is necessary to ensure convergence either of the Nelder-Mead algorithm, or of the Newton-Raphson calculation of maximum-likelihood estimates. Methods are illustrated using records from 18 gauging stations in the basin of the River Itajai-Acu, State of Santa Catarina, southern Brazil. A small and restricted simulation compared likelihood-based confidence limits with those given by use of the central limit theorem; for the same confidence probability, the confidence limits of the simulation were wider than those of the central limit theorem, which failed more frequently to contain the true quantile being estimated. The paper discusses possible applications of likelihood-based confidence intervals in other areas of hydrological analysis.

  1. Calculation of the confidence intervals for transformation parameters in the registration of medical images

    PubMed Central

    Bansal, Ravi; Staib, Lawrence H.; Laine, Andrew F.; Xu, Dongrong; Liu, Jun; Posecion, Lainie F.; Peterson, Bradley S.

    2010-01-01

    Images from different individuals typically cannot be registered precisely because anatomical features within the images differ across the people imaged and because the current methods for image registration have inherent technological limitations that interfere with perfect registration. Quantifying the inevitable error in image registration is therefore of crucial importance in assessing the effects that image misregistration may have on subsequent analyses in an imaging study. We have developed a mathematical framework for quantifying errors in registration by computing the confidence intervals of the estimated parameters (3 translations, 3 rotations, and 1 global scale) for the similarity transformation. The presence of noise in images and the variability in anatomy across individuals ensure that estimated registration parameters are always random variables. We assume a functional relation among intensities across voxels in the images, and we use the theory of nonlinear, least-squares estimation to show that the parameters are multivariate Gaussian distributed. We then use the covariance matrix of this distribution to compute the confidence intervals of the transformation parameters. These confidence intervals provide a quantitative assessment of the registration error across the images. Because transformation parameters are nonlinearly related to the coordinates of landmark points in the brain, we subsequently show that the coordinates of those landmark points are also multivariate Gaussian distributed. Using these distributions, we then compute the confidence intervals of the coordinates for landmark points in the image. Each of these confidence intervals in turn provides a quantitative assessment of the registration error at a particular landmark point. Because our method is computationally intensive, however, its current implementation is limited to assessing the error of the parameters in the similarity transformation across images. 
We assessed the performance of our method in computing the error in estimated similarity parameters by applying it to a real-world dataset. Our results showed that the size of the confidence intervals computed using our method decreased – i.e. our confidence in the registration of images from different individuals increased – for increasing amounts of blur in the images. Moreover, the size of the confidence intervals increased for increasing amounts of noise, misregistration, and differing anatomy. Thus, our method precisely quantified confidence in the registration of images that contain varying amounts of misregistration and varying anatomy across individuals. PMID:19138877

  2. The P Value Problem in Otolaryngology: Shifting to Effect Sizes and Confidence Intervals.

    PubMed

    Vila, Peter M; Townsend, Melanie Elizabeth; Bhatt, Neel K; Kao, W Katherine; Sinha, Parul; Neely, J Gail

    2017-06-01

    Effect sizes and confidence intervals are underreported in the current biomedical literature. The objective of this article is to present a discussion of the recent paradigm shift encouraging the reporting of effect sizes and confidence intervals. Although P values help to inform us about whether an observed effect is due to chance, effect sizes inform us about the magnitude of the effect (clinical significance), and confidence intervals inform us about the range of plausible estimates for the general population mean (precision). Reporting effect sizes and confidence intervals is a necessary addition to the biomedical literature, and these concepts are reviewed in this article.

  3. Bullying and mental health and suicidal behaviour among 14- to 15-year-olds in a representative sample of Australian children.

    PubMed

    Ford, Rebecca; King, Tania; Priest, Naomi; Kavanagh, Anne

    2017-09-01

    To provide the first Australian population-based estimates of the association between bullying and adverse mental health outcomes and suicidality among Australian adolescents. Analysis of data from 3537 adolescents, aged 14-15 years, from Wave 6 of the K-cohort of the Longitudinal Study of Australian Children was conducted. We used Poisson and linear regression to estimate associations between bullying type (none, relational-verbal, physical, both types) and role (no role, victim, bully, victim and bully), and mental health (measured by the Strengths and Difficulties Questionnaire, symptoms of anxiety and depression) and suicidality. Adolescents involved in bullying had significantly increased Strengths and Difficulties Questionnaire, depression and anxiety scores in all bullying roles and types. In terms of self-harm and suicidality, bully-victims had the highest risk of self-harm (prevalence rate ratio 4.7, 95% confidence interval [3.26, 6.83]), suicidal ideation (prevalence rate ratio 4.3, 95% confidence interval [2.83, 6.49]), suicidal plan (prevalence rate ratio 4.1, 95% confidence interval [2.54, 6.58]) and attempts (prevalence rate ratio 2.7, 95% confidence interval [1.39, 5.13]), followed by victims then bullies. The experience of both relational-verbal and physical bullying was associated with the highest risk of self-harm (prevalence rate ratio 4.6, 95% confidence interval [3.15, 6.60]), suicidal ideation or plans (prevalence rate ratio 4.6, 95% confidence interval [3.05, 6.95]; and 4.8, 95% confidence interval [3.01, 7.64], respectively) or suicide attempts (prevalence rate ratio 3.5, 95% confidence interval [1.90, 6.30]). This study presents the first national, population-based estimates of the associations between bullying by peers and mental health outcomes in Australian adolescents. 
The markedly increased risk of poor mental health outcomes, self-harm and suicidal ideation and behaviours among adolescents who experienced bullying highlights the importance of addressing bullying in school settings.

  4. Profile-likelihood Confidence Intervals in Item Response Theory Models.

    PubMed

    Chalmers, R Philip; Pek, Jolynn; Liu, Yang

    2017-01-01

    Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
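    The inversion at the heart of PL CIs is easiest to see in a one-parameter model. The sketch below profiles nothing (a single binomial proportion has no nuisance parameters) but shows the defining step: collect all parameter values whose likelihood-ratio statistic stays below the chi-square critical value. Function names and the bisection scheme are illustrative assumptions, not the article's implementation.

```python
import math

CHI2_1DF_95 = 3.841458820694124  # 95th percentile of chi-square with 1 df

def binom_loglik(p, x, n):
    """Binomial log-likelihood, with the 0*log(0) terms dropped safely."""
    ll = 0.0
    if x > 0:
        ll += x * math.log(p)
    if x < n:
        ll += (n - x) * math.log(1 - p)
    return ll

def _bisect_sign_change(f, lo, hi):
    """Bisection assuming f changes sign exactly once on [lo, hi]."""
    flo = f(lo)
    for _ in range(100):
        mid = (lo + hi) / 2
        fmid = f(mid)
        if (fmid > 0) == (flo > 0):
            lo, flo = mid, fmid
        else:
            hi = mid
    return (lo + hi) / 2

def likelihood_ratio_ci(x, n, crit=CHI2_1DF_95):
    """CI = {p : 2 * (loglik(p_hat) - loglik(p)) <= crit}."""
    p_hat = x / n
    l_max = binom_loglik(p_hat, x, n)
    g = lambda p: 2 * (l_max - binom_loglik(p, x, n)) - crit  # > 0 outside CI
    eps = 1e-12
    lower = _bisect_sign_change(g, eps, p_hat) if x > 0 else 0.0
    upper = _bisect_sign_change(g, p_hat, 1 - eps) if x < n else 1.0
    return lower, upper
```

    For x = 50 successes in n = 100 trials this gives roughly (0.40, 0.60), close to the Wald interval; the two diverge for small samples or boundary estimates, the regime where the article's simulations favour PL CIs.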

  5. Confidence intervals for population allele frequencies: the general case of sampling from a finite diploid population of any size.

    PubMed

    Fung, Tak; Keenan, Kevin

    2014-01-01

    The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥95%), a sample size of >30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.

  6. Confidence intervals and sample size calculations for the standardized mean difference effect size between two normal populations under heteroscedasticity.

    PubMed

    Shieh, G

    2013-12-01

    The use of effect sizes and associated confidence intervals in all empirical research has been strongly emphasized by journal publication guidelines. To help advance theory and practice in the social sciences, this article describes an improved procedure for constructing confidence intervals of the standardized mean difference effect size between two independent normal populations with unknown and possibly unequal variances. The presented approach has advantages over the existing formula in both theoretical justification and computational simplicity. In addition, simulation results show that the suggested one- and two-sided confidence intervals are more accurate in achieving the nominal coverage probability. The proposed estimation method provides a feasible alternative to the most commonly used measure of Cohen's d and the corresponding interval procedure when the assumption of homogeneous variances is not tenable. To further improve the potential applicability of the suggested methodology, the sample size procedures for precise interval estimation of the standardized mean difference are also delineated. The desired precision of a confidence interval is assessed with respect to the control of expected width and to the assurance probability of interval width within a designated value. Supplementary computer programs are developed to aid in the usefulness and implementation of the introduced techniques.

  7. Microcephaly Case Fatality Rate Associated with Zika Virus Infection in Brazil: Current Estimates.

    PubMed

    Cunha, Antonio José Ledo Alves da; de Magalhães-Barbosa, Maria Clara; Lima-Setta, Fernanda; Medronho, Roberto de Andrade; Prata-Barbosa, Arnaldo

    2017-05-01

    Considering the currently confirmed cases of microcephaly and related deaths associated with Zika virus in Brazil, the estimated case fatality rate is 8.3% (95% confidence interval: 7.2-9.6). However, a third of the reported cases remain under investigation. If the confirmation rates of cases and deaths are the same in the future, the estimated case fatality rate will be as high as 10.5% (95% confidence interval: 9.5-11.7).

  8. Aortic stiffness and the balance between cardiac oxygen supply and demand: the Rotterdam Study.

    PubMed

    Guelen, Ilja; Mattace-Raso, Francesco Us; van Popele, Nicole M; Westerhof, Berend E; Hofman, Albert; Witteman, Jacqueline Cm; Bos, Willem Jan W

    2008-06-01

    Aortic stiffness is an independent predictor of cardiovascular morbidity and mortality. We investigated whether aortic stiffness, estimated as aortic pulse wave velocity, is associated with decreased perfusion pressure estimated as the cardiac oxygen supply potential. Aortic stiffness and aortic pressure waves, reconstructed from finger blood pressure waves, were obtained in 2490 older adults within the framework of the Rotterdam Study, a large population-based study. Cardiac oxygen supply and demand were estimated using pulse wave analysis techniques, and related to aortic stiffness by linear regression analyses after adjustment for age, sex, mean arterial pressure and heart rate. Cardiac oxygen demand, estimated as the Systolic Pressure Time Index and the Rate Pressure Product, increased with increasing aortic stiffness [0.27 mmHg s (95% confidence interval: 0.21; 0.34)] and [42.2 mmHg/min (95% confidence interval: 34.1; 50.3)], respectively. Cardiac oxygen supply potential estimated as the Diastolic Pressure Time Index decreased [-0.70 mmHg s (95% confidence interval: -0.86; -0.54)] with aortic stiffening. Accordingly, the supply/demand ratio (Diastolic Pressure Time Index/Systolic Pressure Time Index) decreased [-0.011 (95% confidence interval: -0.014; -0.009)] with increasing aortic stiffness. Aortic stiffness is associated with estimates of increased cardiac oxygen demand and a decreased cardiac oxygen supply potential. These results may offer additional explanation for the relation between aortic stiffness and cardiovascular morbidity and mortality.

  9. Confidence Intervals for True Scores Using the Skew-Normal Distribution

    ERIC Educational Resources Information Center

    Garcia-Perez, Miguel A.

    2010-01-01

    A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…

  10. The Role of Short-Term Memory Capacity and Task Experience for Overconfidence in Judgment under Uncertainty

    ERIC Educational Resources Information Center

    Hansson, Patrik; Juslin, Peter; Winman, Anders

    2008-01-01

    Research with general knowledge items demonstrates extreme overconfidence when people estimate confidence intervals for unknown quantities, but close to zero overconfidence when the same intervals are assessed by probability judgment. In 3 experiments, the authors investigated if the overconfidence specific to confidence intervals derives from…

  11. Confidence Intervals for Weighted Composite Scores under the Compound Binomial Error Model

    ERIC Educational Resources Information Center

    Kim, Kyung Yong; Lee, Won-Chan

    2018-01-01

    Reporting confidence intervals with test scores helps test users make important decisions about examinees by providing information about the precision of test scores. Although a variety of estimation procedures based on the binomial error model are available for computing intervals for test scores, these procedures assume that items are randomly…

  12. Minimax confidence intervals in geomagnetism

    NASA Technical Reports Server (NTRS)

    Stark, Philip B.

    1992-01-01

    The present paper uses the theory of Donoho (1989) to find lower bounds on the lengths of optimally short fixed-length confidence intervals (minimax confidence intervals) for Gauss coefficients of the field of degree 1-12 using the heat flow constraint. The bounds on optimal minimax intervals are about 40 percent shorter than Backus' intervals: no procedure for producing fixed-length confidence intervals, linear or nonlinear, can give intervals shorter than about 60 percent of the length of Backus' in this problem. While both methods rigorously account for the fact that core field models are infinite-dimensional, the application of the techniques to the geomagnetic problem involves approximations and counterfactual assumptions about the data errors, and so these results are likely to be extremely optimistic estimates of the actual uncertainty in Gauss coefficients.

  13. Intervals for posttest probabilities: a comparison of 5 methods.

    PubMed

    Mossman, D; Berger, J O

    2001-01-01

    Several medical articles discuss methods of constructing confidence intervals for single proportions and the likelihood ratio, but scant attention has been given to the systematic study of intervals for the posterior odds, or the positive predictive value, of a test. The authors describe 5 methods of constructing confidence intervals for posttest probabilities when estimates of sensitivity, specificity, and the pretest probability of a disorder are derived from empirical data. They then evaluate each method to determine how well the intervals' coverage properties correspond to their nominal value. When the estimates of pretest probabilities, sensitivity, and specificity are derived from more than 80 subjects and are not close to 0 or 1, all methods generate intervals with appropriate coverage properties. When these conditions are not met, however, the best-performing method is an objective Bayesian approach implemented by a simple simulation using a spreadsheet. Physicians and investigators can generate accurate confidence intervals for posttest probabilities in small-sample situations using the objective Bayesian approach.
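
The "simple simulation" approach for posttest-probability intervals can be sketched by drawing sensitivity, specificity, and pretest probability from posterior distributions and reading off percentiles of the implied positive predictive value. The Beta(0.5, 0.5) Jeffreys priors and the counts below are illustrative assumptions, not the authors' exact implementation.

```python
import random

def ppv_interval(tp, fn, tn, fp, d, n, draws=20000, seed=1):
    """Percentile interval for positive predictive value via posterior simulation.

    tp/fn: diseased subjects testing positive/negative (sensitivity data)
    tn/fp: healthy subjects testing negative/positive (specificity data)
    d/n:   diseased count out of n, estimating the pretest probability
    Beta(0.5, 0.5) Jeffreys priors are an assumption, not the paper's choice.
    """
    rng = random.Random(seed)
    ppvs = []
    for _ in range(draws):
        se = rng.betavariate(tp + 0.5, fn + 0.5)
        sp = rng.betavariate(tn + 0.5, fp + 0.5)
        pre = rng.betavariate(d + 0.5, n - d + 0.5)
        ppvs.append(se * pre / (se * pre + (1 - sp) * (1 - pre)))
    ppvs.sort()
    return ppvs[int(0.025 * draws)], ppvs[int(0.975 * draws)]

# Illustrative counts only
lo, hi = ppv_interval(tp=45, fn=5, tn=90, fp=10, d=30, n=100)
```

Each draw applies Bayes' theorem to one joint sample of the three inputs, so the percentiles reflect all three sources of sampling uncertainty at once.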

  14. Robust Coefficients Alpha and Omega and Confidence Intervals With Outlying Observations and Missing Data: Methods and Software.

    PubMed

    Zhang, Zhiyong; Yuan, Ke-Hai

    2016-06-01

    Cronbach's coefficient alpha is a widely used reliability measure in social, behavioral, and education sciences. It is reported in nearly every study that involves measuring a construct through multiple items. With non-tau-equivalent items, McDonald's omega has been used as a popular alternative to alpha in the literature. Traditional estimation methods for alpha and omega often implicitly assume that data are complete and normally distributed. This study proposes robust procedures to estimate both alpha and omega as well as corresponding standard errors and confidence intervals from samples that may contain potential outlying observations and missing values. The influence of outlying observations and missing data on the estimates of alpha and omega is investigated through two simulation studies. Results show that the newly developed robust method yields substantially improved alpha and omega estimates as well as better coverage rates of confidence intervals than the conventional nonrobust method. An R package coefficientalpha is developed and demonstrated to obtain robust estimates of alpha and omega.

  15. Robust Coefficients Alpha and Omega and Confidence Intervals With Outlying Observations and Missing Data

    PubMed Central

    Zhang, Zhiyong; Yuan, Ke-Hai

    2015-01-01

    Cronbach’s coefficient alpha is a widely used reliability measure in social, behavioral, and education sciences. It is reported in nearly every study that involves measuring a construct through multiple items. With non-tau-equivalent items, McDonald’s omega has been used as a popular alternative to alpha in the literature. Traditional estimation methods for alpha and omega often implicitly assume that data are complete and normally distributed. This study proposes robust procedures to estimate both alpha and omega as well as corresponding standard errors and confidence intervals from samples that may contain potential outlying observations and missing values. The influence of outlying observations and missing data on the estimates of alpha and omega is investigated through two simulation studies. Results show that the newly developed robust method yields substantially improved alpha and omega estimates as well as better coverage rates of confidence intervals than the conventional nonrobust method. An R package coefficientalpha is developed and demonstrated to obtain robust estimates of alpha and omega. PMID:29795870

  16. Simulation data for an estimation of the maximum theoretical value and confidence interval for the correlation coefficient.

    PubMed

    Rocco, Paolo; Cilurzo, Francesco; Minghetti, Paola; Vistoli, Giulio; Pedretti, Alessandro

    2017-10-01

    The data presented in this article are related to the article titled "Molecular Dynamics as a tool for in silico screening of skin permeability" (Rocco et al., 2017) [1]. Knowledge of the confidence interval and maximum theoretical value of the correlation coefficient r can prove useful to estimate the reliability of developed predictive models, in particular when there is great variability in compiled experimental datasets. In this Data in Brief article, data from purposely designed numerical simulations are presented to show how much the maximum r value is worsened by increasing the data uncertainty. The corresponding confidence interval of r is determined by using the Fisher r → Z transform.
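
As a concrete illustration of the Fisher r → Z approach mentioned above, the standard recipe transforms r, builds a symmetric interval on the Z scale, and back-transforms; the sample values below are made up.

```python
import math

def fisher_ci(r, n, z=1.96):
    """95% confidence interval for a correlation r from n pairs,
    via the Fisher r -> Z transform."""
    zr = math.atanh(r)             # Fisher transform of r
    se = 1 / math.sqrt(n - 3)      # approximate standard error on the Z scale
    return math.tanh(zr - z * se), math.tanh(zr + z * se)

lo, hi = fisher_ci(0.8, 50)
```

Note the back-transformed interval is asymmetric about r, which matters most when r is close to its theoretical maximum.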

  17. Rapid Contour-based Segmentation for 18F-FDG PET Imaging of Lung Tumors by Using ITK-SNAP: Comparison to Expert-based Segmentation.

    PubMed

    Besson, Florent L; Henry, Théophraste; Meyer, Céline; Chevance, Virgile; Roblot, Victoire; Blanchet, Elise; Arnould, Victor; Grimon, Gilles; Chekroun, Malika; Mabille, Laurence; Parent, Florence; Seferian, Andrei; Bulifon, Sophie; Montani, David; Humbert, Marc; Chaumet-Riffaud, Philippe; Lebon, Vincent; Durand, Emmanuel

    2018-04-03

    Purpose To assess the performance of the ITK-SNAP software for fluorodeoxyglucose (FDG) positron emission tomography (PET) segmentation of complex-shaped lung tumors compared with an optimized, expert-based manual reference standard. Materials and Methods Seventy-six FDG PET images of thoracic lesions were retrospectively segmented by using ITK-SNAP software. Each tumor was manually segmented by six raters to generate an optimized reference standard by using the simultaneous truth and performance level estimate algorithm. Four raters segmented 76 FDG PET images of lung tumors twice by using the ITK-SNAP active contour algorithm. Accuracy of the ITK-SNAP procedure was assessed by using the Dice coefficient and Hausdorff metric. Interrater and intrarater reliability were estimated by using intraclass correlation coefficients of output volumes. Finally, the ITK-SNAP procedure was compared with currently recommended PET tumor delineation methods based on thresholding the volume of interest (VOI) at 41% (VOI41) and 50% (VOI50) of the tumor's maximal metabolism intensity. Results Accuracy estimates for the ITK-SNAP procedure indicated a Dice coefficient of 0.83 (95% confidence interval: 0.77, 0.89) and a Hausdorff distance of 12.6 mm (95% confidence interval: 9.82, 15.32). Interrater reliability was an intraclass correlation coefficient of 0.94 (95% confidence interval: 0.91, 0.96). The intrarater reliabilities were intraclass correlation coefficients above 0.97. Finally, VOI41 and VOI50 accuracy metrics were as follows: Dice coefficient, 0.48 (95% confidence interval: 0.44, 0.51) and 0.34 (95% confidence interval: 0.30, 0.38), respectively, and Hausdorff distance, 25.6 mm (95% confidence interval: 21.7, 31.4) and 31.3 mm (95% confidence interval: 26.8, 38.4), respectively. Conclusion ITK-SNAP is accurate and reliable for active-contour-based segmentation of heterogeneous thoracic PET tumors. ITK-SNAP surpassed the recommended PET methods compared with ground truth manual segmentation. © RSNA, 2018.

  18. Statistical inference for the within-device precision of quantitative measurements in assay validation.

    PubMed

    Liu, Jen-Pei; Lu, Li-Tien; Liao, C T

    2009-09-01

    Intermediate precision is one of the most important characteristics for evaluation of precision in assay validation. The current methods for evaluation of within-device precision recommended by the Clinical and Laboratory Standards Institute (CLSI) guideline EP5-A2 are based on the point estimator. On the other hand, in addition to point estimators, confidence intervals can provide a range for the within-device precision with a probability statement. Therefore, we suggest a confidence interval approach for assessment of the within-device precision. Furthermore, under the two-stage nested random-effects model recommended by the approved CLSI guideline EP5-A2, in addition to the current Satterthwaite's approximation and the modified large sample (MLS) methods, we apply the technique of generalized pivotal quantities (GPQ) to derive the confidence interval for the within-device precision. The data from the approved CLSI guideline EP5-A2 illustrate the applications of the confidence interval approach and comparison of results between the three methods. Results of a simulation study on the coverage probability and expected length of the three methods are reported. The proposed method of the GPQ-based confidence intervals is also extended to consider the between-laboratories variation for precision assessment.

  19. Uncertainty and inferred reserve estimates; the 1995 National Assessment

    USGS Publications Warehouse

    Attanasi, E.D.; Coburn, Timothy C.

    2003-01-01

    Inferred reserves are expected additions to proved reserves of oil and gas fields discovered as of a certain date. Inferred reserves accounted for 65 percent of the total oil and 34 percent of the total gas assessed in the U.S. Geological Survey's 1995 National Assessment of oil and gas in onshore and State offshore areas. The assessment predicted that over the 80-year period from 1992 through 2071, the sizes of pre-1992 discoveries in the lower 48 onshore and State offshore areas will increase by 48 billion barrels of oil (BBO) and 313 trillion cubic feet of wet gas (TCF). At that time, only point estimates were reported. This study presents a scheme to compute confidence intervals for these estimates. The recentered 90 percent confidence interval for the estimated inferred oil of 48 BBO has endpoints of 25 BBO and 82 BBO. Similarly, the endpoints of the confidence interval about the inferred gas reserve estimate of 313 TCF are 227 TCF and 439 TCF. The range of the estimates provides a basis for development of scenarios for projecting reserve additions and ultimately oil and gas production, information important to energy policy analysis.

  20. Overconfidence in Interval Estimates: What Does Expertise Buy You?

    ERIC Educational Resources Information Center

    McKenzie, Craig R. M.; Liersch, Michael J.; Yaniv, Ilan

    2008-01-01

    People's 90% subjective confidence intervals typically contain the true value about 50% of the time, indicating extreme overconfidence. Previous results have been mixed regarding whether experts are as overconfident as novices. Experiment 1 examined interval estimates from information technology (IT) professionals and UC San Diego (UCSD) students…

  1. Quantifying uncertainty on sediment loads using bootstrap confidence intervals

    NASA Astrophysics Data System (ADS)

    Slaets, Johanna I. F.; Piepho, Hans-Peter; Schmitter, Petra; Hilger, Thomas; Cadisch, Georg

    2017-01-01

    Load estimates are more informative than constituent concentrations alone, as they allow quantification of on- and off-site impacts of environmental processes concerning pollutants, nutrients and sediment, such as soil fertility loss, reservoir sedimentation and irrigation channel siltation. While statistical models used to predict constituent concentrations have been developed considerably over the last few years, measures of uncertainty on constituent loads are rarely reported. Loads are the product of two predictions, constituent concentration and discharge, integrated over a time period, which does not make it straightforward to produce a standard error or a confidence interval. In this paper, a linear mixed model is used to estimate sediment concentrations. A bootstrap method is then developed that accounts for the uncertainty in the concentration and discharge predictions, allows temporal correlation in the constituent data, and can be used when data transformations are required. The method was tested for a small watershed in Northwest Vietnam for the period 2010-2011. The results showed that confidence intervals were asymmetric, with the highest uncertainty in the upper limit, and that a load of 6262 Mg year-1 had a 95% confidence interval of (4331, 12 267) in 2010, and a load of 5543 Mg year-1 an interval of (3593, 8975) in 2011. Additionally, the approach demonstrated that direct estimates from the data were biased downwards compared to bootstrap median estimates. These results imply that constituent loads predicted from regression-type water quality models could frequently be underestimating sediment yields and their environmental impact.
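
A percentile bootstrap of the kind described here can be sketched in a few lines. This generic version resamples observations and a plug-in statistic, whereas the paper's method additionally propagates uncertainty through mixed-model predictions of concentration and discharge; the data values are invented.

```python
import random

def bootstrap_ci(data, stat, n_boot=5000, alpha=0.05, seed=7):
    """Percentile bootstrap confidence interval for an arbitrary statistic.
    A generic sketch, not the paper's mixed-model procedure."""
    rng = random.Random(seed)
    reps = sorted(
        stat([rng.choice(data) for _ in range(len(data))])  # resample with replacement
        for _ in range(n_boot)
    )
    return reps[int(alpha / 2 * n_boot)], reps[int((1 - alpha / 2) * n_boot)]

daily_loads = [12.1, 8.4, 30.2, 5.9, 14.8, 55.0, 9.7, 3.3, 21.5, 7.2]  # made-up values
lo, hi = bootstrap_ci(daily_loads, lambda xs: sum(xs) / len(xs))
```

With skewed data such as sediment loads, the percentile interval is typically asymmetric about the point estimate, matching the asymmetry the paper reports.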

  2. Alternative Confidence Interval Methods Used in the Diagnostic Accuracy Studies

    PubMed Central

    Gülhan, Orekıcı Temel

    2016-01-01

    Background/Aim. It is necessary to decide whether newly improved methods are better than the standard or reference test. To decide whether a new diagnostic test is better than the gold standard test/imperfect standard test, the differences of the estimated sensitivities/specificities are calculated with the help of information obtained from samples. However, to generalize this value to the population, it should be given with confidence intervals. The aim of this study is to evaluate, on a clinical application, the confidence interval methods developed for the differences between two dependent sensitivity/specificity values. Materials and Methods. In this study, confidence interval methods such as Asymptotic Intervals, Conditional Intervals, Unconditional Intervals, Score Intervals, and Nonparametric Methods Based on Relative Effects Intervals are used. As a clinical application, data from the diagnostic study by Dickel et al. (2010) are taken as a sample. Results. The results of the alternative confidence interval methods for Nickel Sulfate, Potassium Dichromate, and Lanolin Alcohol are given in a table. Conclusion. When choosing among confidence interval methods, researchers have to consider whether the case to be compared involves a single ratio or differences between dependent binary ratios, the correlation coefficient between the rates in the two dependent ratios, and the sample sizes. PMID:27478491

  3. Alternative Confidence Interval Methods Used in the Diagnostic Accuracy Studies.

    PubMed

    Erdoğan, Semra; Gülhan, Orekıcı Temel

    2016-01-01

    Background/Aim. It is necessary to decide whether newly improved methods are better than the standard or reference test. To decide whether a new diagnostic test is better than the gold standard test/imperfect standard test, the differences of the estimated sensitivities/specificities are calculated with the help of information obtained from samples. However, to generalize this value to the population, it should be given with confidence intervals. The aim of this study is to evaluate, on a clinical application, the confidence interval methods developed for the differences between two dependent sensitivity/specificity values. Materials and Methods. In this study, confidence interval methods such as Asymptotic Intervals, Conditional Intervals, Unconditional Intervals, Score Intervals, and Nonparametric Methods Based on Relative Effects Intervals are used. As a clinical application, data from the diagnostic study by Dickel et al. (2010) are taken as a sample. Results. The results of the alternative confidence interval methods for Nickel Sulfate, Potassium Dichromate, and Lanolin Alcohol are given in a table. Conclusion. When choosing among confidence interval methods, researchers have to consider whether the case to be compared involves a single ratio or differences between dependent binary ratios, the correlation coefficient between the rates in the two dependent ratios, and the sample sizes.

  4. Impact of Time to Treatment Initiation in Patients with Human Papillomavirus-positive and -negative Oropharyngeal Squamous Cell Carcinoma.

    PubMed

    Grønhøj, C; Jensen, D; Dehlendorff, C; Nørregaard, C; Andersen, E; Specht, L; Charabi, B; von Buchwald, C

    2018-06-01

    The distinct difference in disease phenotype of human papillomavirus-positive (HPV+) and -negative (HPV-) oropharyngeal squamous cell cancer (OPSCC) patients might also be apparent when assessing the effect of time to treatment initiation (TTI). We assessed the overall survival and progression-free survival (PFS) effect from increasing TTI for HPV+ and HPV- OPSCC patients. We examined patients who received curative-intended therapy for OPSCC in eastern Denmark between 2000 and 2014. TTI was the number of days from diagnosis to the initiation of curative treatment. Overall survival and PFS were measured from the start of treatment and estimated with the Kaplan-Meier estimator. Hazard ratios and 95% confidence intervals were estimated with Cox proportional hazards regression. At a median follow-up of 3.6 years (interquartile range 1.86-6.07 years), 1177 patients were included (59% HPV+). In the adjusted analysis for the HPV+ and HPV- patient population, TTI influenced overall survival and PFS, most evident in the HPV- group, where TTI >60 days statistically significantly influenced overall survival but not PFS (overall survival: hazard ratio 1.60; 95% confidence interval 1.04-2.45; PFS: hazard ratio 1.46; 95% confidence interval 0.96-2.22). For patients with a TTI >60 days in the HPV+ group, TTI affected overall survival and PFS similarly, with slightly lower hazard ratio estimates of 1.44 (95% confidence interval 0.83-2.51) and 1.15 (95% confidence interval 0.70-1.88), respectively. For patients treated for an HPV+ or HPV- OPSCC, TTI affects outcome, with the strongest effect for overall survival among HPV- patients. Reducing TTI is an important tool to improve the prognosis. Copyright © 2018. Published by Elsevier Ltd.

  5. Standardized likelihood ratio test for comparing several log-normal means and confidence interval for the common mean.

    PubMed

    Krishnamoorthy, K; Oral, Evrim

    2017-12-01

    Standardized likelihood ratio test (SLRT) for testing the equality of means of several log-normal distributions is proposed. The properties of the SLRT and an available modified likelihood ratio test (MLRT) and a generalized variable (GV) test are evaluated by Monte Carlo simulation and compared. Evaluation studies indicate that the SLRT is accurate even for small samples, whereas the MLRT could be quite liberal for some parameter values, and the GV test is in general conservative and less powerful than the SLRT. Furthermore, a closed-form approximate confidence interval for the common mean of several log-normal distributions is developed using the method of variance estimate recovery, and compared with the generalized confidence interval with respect to coverage probabilities and precision. Simulation studies indicate that the proposed confidence interval is accurate and better than the generalized confidence interval in terms of coverage probabilities. The methods are illustrated using two examples.

  6. Proportion of general factor variance in a hierarchical multiple-component measuring instrument: a note on a confidence interval estimation procedure.

    PubMed

    Raykov, Tenko; Zinbarg, Richard E

    2011-05-01

    A confidence interval construction procedure for the proportion of explained variance by a hierarchical, general factor in a multi-component measuring instrument is outlined. The method provides point and interval estimates for the proportion of total scale score variance that is accounted for by the general factor, which could be viewed as common to all components. The approach may also be used for testing composite (one-tailed) or simple hypotheses about this proportion, and is illustrated with a pair of examples. ©2010 The British Psychological Society.

  7. The microcomputer scientific software series 2: general linear model--regression.

    Treesearch

    Harold M. Rauscher

    1983-01-01

    The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...

  8. Toward Using Confidence Intervals to Compare Correlations

    ERIC Educational Resources Information Center

    Zou, Guang Yong

    2007-01-01

    Confidence intervals are widely accepted as a preferred way to present study results. They encompass significance tests and provide an estimate of the magnitude of the effect. However, comparisons of correlations still rely heavily on significance testing. The persistence of this practice is caused primarily by the lack of simple yet accurate…

  9. Confidence intervals for the between-study variance in random-effects meta-analysis using generalised heterogeneity statistics: should we use unequal tails?

    PubMed

    Jackson, Dan; Bowden, Jack

    2016-09-07

    Confidence intervals for the between-study variance are useful in random-effects meta-analyses because they quantify the uncertainty in the corresponding point estimates. Methods for calculating these confidence intervals have been developed that are based on inverting hypothesis tests using generalised heterogeneity statistics. Whilst these new methods furnish confidence intervals with the correct coverage under the random-effects model, the resulting intervals are usually very wide, making them uninformative. We discuss a simple strategy for obtaining 95% confidence intervals for the between-study variance with a markedly reduced width, whilst retaining the nominal coverage probability. Specifically, we consider the possibility of using methods based on generalised heterogeneity statistics with unequal tail probabilities, where the tail probability used to compute the upper bound is greater than 2.5%. This idea is assessed using four real examples and a variety of simulation studies. Supporting analytical results are also obtained. Our results provide evidence that using unequal tail probabilities can result in shorter 95% confidence intervals for the between-study variance. We also show some further results for a real example that illustrates how shorter confidence intervals for the between-study variance can be useful when performing sensitivity analyses for the average effect, which is usually the parameter of primary interest. We conclude that using unequal tail probabilities when computing 95% confidence intervals for the between-study variance, when using methods based on generalised heterogeneity statistics, can result in shorter confidence intervals. We suggest that those who find the case for using unequal tail probabilities convincing should use the '1-4% split', where greater tail probability is allocated to the upper confidence bound. The 'width-optimal' interval that we present deserves further investigation.

  10. Closed-form confidence intervals for functions of the normal mean and standard deviation.

    PubMed

    Donner, Allan; Zou, G Y

    2012-08-01

    Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.
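
The "variance estimates recovered from confidence limits" idea (often called MOVER) has a compact closed form for a difference of two parameters; this is a generic sketch with made-up numbers, not the paper's specific formulas for percentiles, limits of agreement, or effect sizes.

```python
import math

def mover_diff(est1, ci1, est2, ci2):
    """MOVER confidence interval for est1 - est2, built from the separate
    intervals ci1 = (l1, u1) and ci2 = (l2, u2). The distances from each
    estimate to its own limits serve as recovered variance estimates."""
    l1, u1 = ci1
    l2, u2 = ci2
    lower = est1 - est2 - math.sqrt((est1 - l1) ** 2 + (u2 - est2) ** 2)
    upper = est1 - est2 + math.sqrt((u1 - est1) ** 2 + (est2 - l2) ** 2)
    return lower, upper

# Made-up estimates and 95% limits for two independent parameters
lo, hi = mover_diff(10.0, (8.0, 12.0), 4.0, (3.0, 5.0))
```

Because the two recovered half-widths can differ, the resulting interval need not be symmetric, which is what lets this approach track skewed sampling distributions.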

  11. Behavior, passage, and downstream migration of juvenile Chinook salmon from Detroit Reservoir to Portland, Oregon, 2014–15

    USGS Publications Warehouse

    Kock, Tobias J.; Beeman, John W.; Hansen, Amy C.; Hansel, Hal C.; Hansen, Gabriel S.; Hatton, Tyson W.; Kofoot, Eric E.; Sholtis, Matthew D.; Sprando, Jamie M.

    2015-11-16

    A Cormack-Jolly-Seber mark-recapture model was developed to provide reach-specific survival estimates for juvenile Chinook salmon. A portion of the tagged population overwintered in the Willamette River Basin and outmigrated several months after release. As a result, survival estimates from the model would have been negatively biased by factors such as acoustic tag failure and tag loss. Data from laboratory studies were incorporated into the model to provide survival estimates that accounted for these factors. In the North Santiam River between Minto Dam and the Bennett Dam complex, a distance of 37.2 kilometers, survival was estimated to be 0.844 (95-percent confidence interval 0.795–0.893). The survival estimate for the 203.7 kilometer reach between the Bennett Dam complex and Portland, Oregon, was 0.279 (95-percent confidence interval 0.234–0.324), and included portions of the North Santiam, Santiam, and Willamette Rivers. The cumulative survival estimate in the 240.9 kilometer reach from the Minto Dam tailrace to Portland was 0.236 (95-percent confidence interval 0.197–0.275).

  12. Estimating Standardized Linear Contrasts of Means with Desired Precision

    ERIC Educational Resources Information Center

    Bonett, Douglas G.

    2009-01-01

    L. Wilkinson and the Task Force on Statistical Inference (1999) recommended reporting confidence intervals for measures of effect sizes. If the sample size is too small, the confidence interval may be too wide to provide meaningful information. Recently, K. Kelley and J. R. Rausch (2006) used an iterative approach to computer-generate tables of…

  13. Procedures for estimating confidence intervals for selected method performance parameters.

    PubMed

    McClure, F D; Lee, J K

    2001-01-01

    Procedures for estimating confidence intervals (CIs) for the repeatability variance (σr²), the reproducibility variance (σR² = σL² + σr²), the laboratory component (σL²), and their corresponding standard deviations σr, σR, and σL, respectively, are presented. In addition, CIs for the ratio of the repeatability component to the reproducibility variance (σr²/σR²) and the ratio of the laboratory component to the reproducibility variance (σL²/σR²) are also presented.

  14. A comparison of statistical methods for evaluating matching performance of a biometric identification device: a preliminary report

    NASA Astrophysics Data System (ADS)

    Schuckers, Michael E.; Hawley, Anne; Livingstone, Katie; Mramba, Nona

    2004-08-01

    Confidence intervals are an important way to assess and estimate a parameter. In the case of biometric identification devices, several approaches to confidence intervals for an error rate have been proposed. Here we evaluate six of these methods. To complete this evaluation, we simulate data from a wide variety of parameter values. These data are simulated via a correlated binary distribution. We then determine how well these methods do at what they say they do: capturing the parameter inside the confidence interval. In addition, the average widths of the various confidence intervals are recorded for each set of parameters. The complete results of this simulation are presented graphically for easy comparison. We conclude by making a recommendation regarding which method performs best.
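
A stripped-down version of such a coverage study: simulate many samples at a known error rate, apply a CI method, and record how often the interval captures the truth. Independent Bernoulli trials are used here for simplicity, whereas the paper simulates correlated binary data; the Wald method shown is just one of many candidates.

```python
import math
import random

def coverage(ci_fn, p=0.05, n=200, trials=2000, seed=3):
    """Monte Carlo estimate of how often ci_fn captures the true error rate p.
    Uses independent Bernoulli trials, not the paper's correlated binary model."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        k = sum(rng.random() < p for _ in range(n))  # simulated match errors
        lo, hi = ci_fn(k, n)
        hits += lo <= p <= hi
    return hits / trials

def wald(k, n, z=1.96):
    """Naive Wald interval for a binomial proportion."""
    ph = k / n
    half = z * math.sqrt(ph * (1 - ph) / n)
    return ph - half, ph + half

cov = coverage(wald)  # typically falls short of the nominal 0.95
```

Comparing such coverage estimates (and average interval widths) across methods is exactly the kind of evidence the paper summarizes graphically.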

  15. Refusal bias in HIV prevalence estimates from nationally representative seroprevalence surveys.

    PubMed

    Reniers, Georges; Eaton, Jeffrey

    2009-03-13

    To assess the relationship between prior knowledge of one's HIV status and the likelihood of refusing HIV testing in population-based surveys, and to explore its potential for producing bias in HIV prevalence estimates. Using longitudinal survey data from Malawi, we estimate the relationship between prior knowledge of HIV-positive status and subsequent refusal of an HIV test. We use that parameter to develop a heuristic model of refusal bias that is applied to six Demographic and Health Surveys, in which refusal by HIV status is not observed. The model only adjusts for refusal bias conditional on a completed interview. Ecologically, HIV prevalence, prior testing rates and refusal for HIV testing are highly correlated. Malawian data further suggest that amongst individuals who know their status, HIV-positive individuals are 4.62 (95% confidence interval, 2.60-8.21) times more likely to refuse testing than HIV-negative ones. On the basis of that parameter and other inputs from the Demographic and Health Surveys, our model predicts downward bias in national HIV prevalence estimates ranging from 1.5% (95% confidence interval, 0.7-2.9) for Senegal to 13.3% (95% confidence interval, 7.2-19.6) for Malawi. In absolute terms, bias in HIV prevalence estimates is negligible for Senegal but 1.6 (95% confidence interval, 0.8-2.3) percentage points for Malawi. Downward bias is more severe in urban populations. Because refusal rates are higher in men, seroprevalence surveys also tend to overestimate the female-to-male ratio of infections. Prior knowledge of HIV status informs decisions to participate in seroprevalence surveys. Informed refusals may produce bias in estimates of HIV prevalence and the sex ratio of infections.
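
A refusal-bias adjustment of this general kind can be illustrated with a toy calculation: given the prevalence observed among those tested, the overall refusal rate, and a relative risk of refusal for HIV-positive people (the paper reports 4.62 among those who know their status), solve for the true prevalence consistent with those observations. This simplified sketch applies the refusal risk ratio to the whole population, unlike the paper's model (which restricts it to those with prior knowledge of their status and conditions on a completed interview), so it overstates the bias; all inputs are illustrative.

```python
def adjusted_prevalence(p_obs, refusal_rate, refusal_rr, tol=1e-9):
    """Bisection solve for the true prevalence p implied by the prevalence
    observed among tested respondents (p_obs), the overall refusal rate,
    and the relative risk of refusal for HIV-positive people (refusal_rr).
    Toy model: the refusal risk ratio applies to the whole population."""

    def observed(p):
        # Refusal probabilities consistent with the overall refusal rate
        r_neg = refusal_rate / (p * refusal_rr + (1 - p))
        r_pos = refusal_rr * r_neg
        tested_pos = p * (1 - r_pos)        # positives who accept testing
        tested_neg = (1 - p) * (1 - r_neg)  # negatives who accept testing
        return tested_pos / (tested_pos + tested_neg)

    lo, hi = p_obs, 1.0  # differential refusal implies true prevalence > observed
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if observed(mid) < p_obs:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Illustrative inputs: 10% observed prevalence, 15% refusal, RR of 4.62
p_true = adjusted_prevalence(p_obs=0.10, refusal_rate=0.15, refusal_rr=4.62)
```

The adjusted prevalence exceeds the observed one, in the same direction as the downward bias the paper quantifies for the Demographic and Health Surveys.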

  16. Estimating statistical uncertainty of Monte Carlo efficiency-gain in the context of a correlated sampling Monte Carlo code for brachytherapy treatment planning with non-normal dose distribution.

    PubMed

    Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr

    2012-01-01

    Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency gain estimates, and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however, its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
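
    The bootstrap step, taking the shortest interval that contains 95% of the resampled statistic, can be sketched generically. This is not the paper's brachytherapy code: the efficiency-gain statistic is stood in for by a ratio of means of two skewed samples, and all names and numbers are hypothetical.

```python
import random
import statistics

def shortest_interval(draws, level=0.95):
    """Shortest interval containing `level` of the bootstrap draws
    (a highest-density interval for a unimodal distribution)."""
    xs = sorted(draws)
    m = int(round(level * len(xs)))
    best = min(range(len(xs) - m), key=lambda i: xs[i + m] - xs[i])
    return xs[best], xs[best + m]

def bootstrap_ratio(sample_a, sample_b, reps=4000, seed=7):
    """Bootstrap a ratio of means as a stand-in for an efficiency gain
    estimated from two runs with non-normal error distributions."""
    rng = random.Random(seed)
    n, m = len(sample_a), len(sample_b)
    draws = []
    for _ in range(reps):
        ra = [rng.choice(sample_a) for _ in range(n)]
        rb = [rng.choice(sample_b) for _ in range(m)]
        draws.append(statistics.fmean(ra) / statistics.fmean(rb))
    return draws

rng = random.Random(3)
a = [rng.lognormvariate(0.0, 0.6) for _ in range(60)]  # skewed, non-normal
b = [rng.lognormvariate(0.3, 0.6) for _ in range(60)]
lo, hi = shortest_interval(bootstrap_ratio(a, b))
print(f"shortest 95% CI for the ratio: ({lo:.3f}, {hi:.3f})")
```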

  17. Missing doses in the life span study of Japanese atomic bomb survivors.

    PubMed

    Richardson, David B; Wing, Steve; Cole, Stephen R

    2013-03-15

    The Life Span Study of atomic bomb survivors is an important source of risk estimates used to inform radiation protection and compensation. Interviews with survivors in the 1950s and 1960s provided information needed to estimate radiation doses for survivors proximal to ground zero. Because of a lack of interview or the complexity of shielding, doses are missing for 7,058 of the 68,119 proximal survivors. Recent analyses excluded people with missing doses, and despite the protracted collection of interview information necessary to estimate some survivors' doses, defined start of follow-up as October 1, 1950, for everyone. We describe the prevalence of missing doses and its association with mortality, distance from hypocenter, city, age, and sex. Missing doses were more common among Nagasaki residents than among Hiroshima residents (prevalence ratio = 2.05; 95% confidence interval: 1.96, 2.14), among people who were closer to ground zero than among those who were far from it, among people who were younger at enrollment than among those who were older, and among males than among females (prevalence ratio = 1.22; 95% confidence interval: 1.17, 1.28). Missing dose was associated with all-cancer and leukemia mortality, particularly during the first years of follow-up (all-cancer rate ratio = 2.16, 95% confidence interval: 1.51, 3.08; and leukemia rate ratio = 4.28, 95% confidence interval: 1.72, 10.67). Accounting for missing dose and late entry should reduce bias in estimated dose-mortality associations.

  18. Missing Doses in the Life Span Study of Japanese Atomic Bomb Survivors

    PubMed Central

    Richardson, David B.; Wing, Steve; Cole, Stephen R.

    2013-01-01

    The Life Span Study of atomic bomb survivors is an important source of risk estimates used to inform radiation protection and compensation. Interviews with survivors in the 1950s and 1960s provided information needed to estimate radiation doses for survivors proximal to ground zero. Because of a lack of interview or the complexity of shielding, doses are missing for 7,058 of the 68,119 proximal survivors. Recent analyses excluded people with missing doses, and despite the protracted collection of interview information necessary to estimate some survivors' doses, defined start of follow-up as October 1, 1950, for everyone. We describe the prevalence of missing doses and its association with mortality, distance from hypocenter, city, age, and sex. Missing doses were more common among Nagasaki residents than among Hiroshima residents (prevalence ratio = 2.05; 95% confidence interval: 1.96, 2.14), among people who were closer to ground zero than among those who were far from it, among people who were younger at enrollment than among those who were older, and among males than among females (prevalence ratio = 1.22; 95% confidence interval: 1.17, 1.28). Missing dose was associated with all-cancer and leukemia mortality, particularly during the first years of follow-up (all-cancer rate ratio = 2.16, 95% confidence interval: 1.51, 3.08; and leukemia rate ratio = 4.28, 95% confidence interval: 1.72, 10.67). Accounting for missing dose and late entry should reduce bias in estimated dose-mortality associations. PMID:23429722

  19. A methodology for airplane parameter estimation and confidence interval determination in nonlinear estimation problems. Ph.D. Thesis - George Washington Univ., Apr. 1985

    NASA Technical Reports Server (NTRS)

    Murphy, P. C.

    1986-01-01

    An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. With the fitted surface, sensitivity information can be updated at each iteration with less computational effort than that required by either a finite-difference method or integration of the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, and thus provides flexibility to use model equations in any convenient format. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. The degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels and to predict the degree of agreement between CR bounds and search estimates.

  20. Optimal and Most Exact Confidence Intervals for Person Parameters in Item Response Theory Models

    ERIC Educational Resources Information Center

    Doebler, Anna; Doebler, Philipp; Holling, Heinz

    2013-01-01

    The common way to calculate confidence intervals for item response theory models is to assume that the standardized maximum likelihood estimator for the person parameter [theta] is normally distributed. However, this approximation is often inadequate for short and medium test lengths. As a result, the coverage probabilities fall below the given…

  1. SIMREL: Software for Coefficient Alpha and Its Confidence Intervals with Monte Carlo Studies

    ERIC Educational Resources Information Center

    Yurdugul, Halil

    2009-01-01

    This article describes SIMREL, a software program designed for the simulation of alpha coefficients and the estimation of its confidence intervals. SIMREL runs on two alternatives. In the first one, if SIMREL is run for a single data file, it performs descriptive statistics, principal components analysis, and variance analysis of the item scores…

  2. Estimation of the uncertainty of analyte concentration from the measurement uncertainty.

    PubMed

    Brown, Simon; Cooke, Delwyn G; Blackwell, Leonard F

    2015-09-01

    Ligand-binding assays, such as immunoassays, are usually analysed using standard curves based on the four-parameter and five-parameter logistic models. An estimate of the uncertainty of an analyte concentration obtained from such curves is needed for confidence intervals or precision profiles. Using a numerical simulation approach, it is shown that the uncertainty of the analyte concentration estimate becomes substantial at the extremes of the concentration range and is strongly affected by the steepness of the standard curve. We also provide expressions for the coefficient of variation of the analyte concentration estimate, from which confidence intervals and the precision profile can be obtained. Using three examples, we show that the expressions perform well.
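
    As a sketch of the kind of calculation the record describes: the four-parameter logistic curve, its inverse, and a delta-method coefficient of variation for the back-calculated concentration. The curve parameters and numbers below are illustrative assumptions, not the paper's data or its exact expressions.

```python
import math

def logistic4(x, a, d, c, b):
    """Four-parameter logistic response at concentration x:
    a = response as x -> 0, d = response as x -> infinity,
    c = mid-point (EC50), b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def inverse_logistic4(y, a, d, c, b):
    """Concentration producing response y (y strictly between a and d)."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

def concentration_cv(y, sd_y, a, d, c, b, eps=1e-6):
    """Delta-method CV of the estimated concentration:
    sd(x) ~ sd(y) / |dy/dx|, with the slope taken numerically."""
    x = inverse_logistic4(y, a, d, c, b)
    dydx = (logistic4(x * (1 + eps), a, d, c, b)
            - logistic4(x * (1 - eps), a, d, c, b)) / (2 * x * eps)
    return abs(sd_y / dydx) / x

# Round trip and one precision-profile point for an illustrative curve.
a, d, c, b = 0.05, 2.0, 10.0, 1.2
y = logistic4(5.0, a, d, c, b)
x_back = inverse_logistic4(y, a, d, c, b)
print(x_back, concentration_cv(y, 0.02, a, d, c, b))
```

    Evaluating `concentration_cv` near the asymptotes a and d shows the CV blowing up at the extremes of the range, which is the behaviour reported above.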

  3. Air pollution attributable postneonatal infant mortality in U.S. metropolitan areas: a risk assessment study

    PubMed Central

    Kaiser, Reinhard; Romieu, Isabelle; Medina, Sylvia; Schwartz, Joel; Krzyzanowski, Michal; Künzli, Nino

    2004-01-01

    Background The impact of outdoor air pollution on infant mortality has not been quantified. Methods Based on exposure-response functions from a U.S. cohort study, we assessed the attributable risk of postneonatal infant mortality in 23 U.S. metropolitan areas related to particulate matter <10 μm in diameter (PM10) as a surrogate of total air pollution. Results The estimated proportion of all cause mortality, sudden infant death syndrome (normal birth weight infants only) and respiratory disease mortality (normal birth weight) attributable to PM10 above a chosen reference value of 12.0 μg/m3 PM10 was 6% (95% confidence interval 3–11%), 16% (95% confidence interval 9–23%) and 24% (95% confidence interval 7–44%), respectively. The expected number of infant deaths per year in the selected areas was 106 (95% confidence interval 53–185), 79 (95% confidence interval 46–111) and 15 (95% confidence interval 5–27), respectively. Approximately 75% of cases were from areas where the current levels are at or below the new U.S. PM2.5 standard of 15 μg/m3 (equivalent to 25 μg/m3 PM10). In a country where infant mortality rates and air pollution levels are relatively low, ambient air pollution as measured by particulate matter contributes to a substantial fraction of infant deaths, especially those due to sudden infant death syndrome and respiratory disease. Even if all counties complied with the new PM2.5 standard, the majority of the estimated burden would remain. Conclusion Given the inherent limitations of risk assessments, further studies are needed to support and quantify the relationship between infant mortality and air pollution. PMID:15128459

  4. Return on Investment of a Work-Family Intervention: Evidence From the Work, Family, and Health Network.

    PubMed

    Barbosa, Carolina; Bray, Jeremy W; Dowd, William N; Mills, Michael J; Moen, Phyllis; Wipfli, Brad; Olson, Ryan; Kelly, Erin L

    2015-09-01

    To estimate the return on investment (ROI) of a workplace initiative to reduce work-family conflict in a group-randomized 18-month field experiment in an information technology firm in the United States. Intervention resources were micro-costed; benefits included medical costs, productivity (presenteeism), and turnover. Regression models were used to estimate the ROI, and cluster-robust bootstrap was used to calculate its confidence interval. For each participant, model-adjusted costs of the intervention were $690 and company savings were $1850 (2011 prices). The ROI was 1.68 (95% confidence interval, -8.85 to 9.47) and was robust in sensitivity analyses. The positive ROI indicates that employers' investment in an intervention to reduce work-family conflict can enhance their business. Although this was the first study to present a confidence interval for the ROI, results are comparable with the literature.

  5. Intraclass Correlation Coefficients in Hierarchical Design Studies with Discrete Response Variables: A Note on a Direct Interval Estimation Procedure

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2015-01-01

    A latent variable modeling procedure that can be used to evaluate intraclass correlation coefficients in two-level settings with discrete response variables is discussed. The approach is readily applied when the purpose is to furnish confidence intervals at prespecified confidence levels for these coefficients in setups with binary or ordinal…

  6. Risk factors for low birth weight according to the multiple logistic regression model. A retrospective cohort study in José María Morelos municipality, Quintana Roo, Mexico.

    PubMed

    Franco Monsreal, José; Tun Cobos, Miriam Del Ruby; Hernández Gómez, José Ricardo; Serralta Peraza, Lidia Esther Del Socorro

    2018-01-17

    Low birth weight has been an enigma for science over time. Much research has been done on its causes and effects. Low birth weight is an indicator that predicts the probability of a child surviving. In fact, there is an exponential relationship between weight deficit, gestational age, and perinatal mortality. Multiple logistic regression is one of the most expressive and versatile statistical instruments available for the analysis of data in clinical and epidemiological settings, as well as in public health. To assess in a multivariate fashion the importance of 17 independent variables in low birth weight (dependent variable) of children born in the Mayan municipality of José María Morelos, Quintana Roo, Mexico. Analytical observational epidemiological cohort study with retrospective temporality. Births that met the inclusion criteria occurred in the "Hospital Integral Jose Maria Morelos" of the Ministry of Health, corresponding to the Maya municipality of Jose Maria Morelos, during the period from August 1, 2014 to July 31, 2015. The total number of newborns recorded was 1,147, of which 84 (7.32%) had low birth weight. To estimate the independent association between the explanatory variables (potential risk factors) and the response variable, a multiple logistic regression analysis was performed using the IBM SPSS Statistics 22 software.
In ascending numerical order, odds ratio values > 1 indicated the positive contribution of explanatory variables or possible risk factors: "unmarried" marital status (1.076, 95% confidence interval: 0.550 to 2.104); age at menarche ≤ 12 years (1.08, 95% confidence interval: 0.64 to 1.84); history of abortion(s) (1.14, 95% confidence interval: 0.44 to 2.93); maternal weight < 50 kg (1.51, 95% confidence interval: 0.83 to 2.76); number of prenatal consultations ≤ 5 (1.86, 95% confidence interval: 0.94 to 3.66); maternal age ≥ 36 years (3.5, 95% confidence interval: 0.40 to 30.47); maternal age ≤ 19 years (3.59, 95% confidence interval: 0.43 to 29.87); number of deliveries = 1 (3.86, 95% confidence interval: 0.33 to 44.85); personal pathological history (4.78, 95% confidence interval: 2.16 to 10.59); pathological obstetric history (5.01, 95% confidence interval: 1.66 to 15.18); maternal height < 150 cm (5.16, 95% confidence interval: 3.08 to 8.65); number of births ≥ 5 (5.99, 95% confidence interval: 0.51 to 69.99); and smoking (15.63, 95% confidence interval: 1.07 to 227.97). Four of the independent variables (personal pathological history, obstetric pathological history, maternal height < 150 cm, and smoking) showed a significant positive contribution and can thus be considered clear risk factors for low birth weight. The use of the logistic regression model in the Mayan municipality of José María Morelos will allow estimating the probability of low birth weight for each pregnant woman in the future, which will be useful to the health authorities of the region.
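
    A univariate odds ratio and its log-based (Woolf) confidence interval can be computed directly from a 2x2 table; this is the building block behind intervals like those quoted above, although the study's estimates come from a multivariable logistic model that adjusts for the other covariates. The counts below are invented for illustration only.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table and its Woolf (log-scale) CI:
        exposed:   cases = a, non-cases = b
        unexposed: cases = c, non-cases = d
    CI = exp(log(OR) +/- z * sqrt(1/a + 1/b + 1/c + 1/d))."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 6/20 low-birth-weight babies among smokers vs
# 78/1127 among non-smokers (NOT the study's data).
or_, lo, hi = odds_ratio_ci(6, 14, 78, 1049)
print(f"OR {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

    Note how a small exposed-group count (a = 6) inflates the standard error, which is why several intervals reported above are extremely wide.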

  7. Stability of INFIT and OUTFIT Compared to Simulated Estimates in Applied Setting.

    PubMed

    Hodge, Kari J; Morgan, Grant B

    Residual-based fit statistics are commonly used as an indication of the extent to which item response data fit the Rasch model. Fit statistic estimates are influenced by sample size, and rule-of-thumb cutoffs may lead to incorrect conclusions about the extent to which the model fits the data. Estimates obtained in this analysis were compared to 250 simulated data sets to examine the stability of the estimates. All INFIT estimates were within the rule-of-thumb range of 0.7 to 1.3. However, only 82% of the INFIT estimates fell within the 2.5th and 97.5th percentiles of the simulated items' INFIT distributions using this 95% confidence-like interval, an 18 percentage point difference in items classified as acceptable. Forty-eight percent of OUTFIT estimates fell within the 0.7 to 1.3 rule-of-thumb range, whereas 34% of OUTFIT estimates fell within the 2.5th and 97.5th percentiles of the simulated items' OUTFIT distributions, a 13 percentage point difference in items classified as acceptable. When using the rule-of-thumb ranges for fit estimates, the magnitude of misfit was smaller than with the 95% confidence interval of the simulated distribution. The findings indicate that using confidence intervals as critical values for fit statistics leads to different model-data fit conclusions than traditional rule-of-thumb critical values.
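
    The general recipe of simulation-based critical values can be illustrated generically: simulate the null distribution of a mean-square-like fit statistic 250 times and use its 2.5th and 97.5th percentiles instead of a fixed 0.7-1.3 band. The stand-in statistic below is an assumption for the sketch, not an actual Rasch INFIT/OUTFIT computation.

```python
import random
import statistics

def simulated_cutoffs(simulate_stat, reps=250, level=0.95, seed=11):
    """Empirical critical values: simulate the statistic under the model
    `reps` times and take the central `level` percentile band."""
    rng = random.Random(seed)
    draws = sorted(simulate_stat(rng) for _ in range(reps))
    alpha = (1 - level) / 2
    return draws[int(alpha * reps)], draws[int((1 - alpha) * reps) - 1]

def fake_fit_stat(rng):
    """Stand-in mean-square statistic with expectation 1 under the model
    (as INFIT/OUTFIT have): the mean of 30 squared standard normals."""
    return statistics.fmean(rng.gauss(0, 1) ** 2 for _ in range(30))

lo, hi = simulated_cutoffs(fake_fit_stat)
print(f"simulated 95% band ({lo:.2f}, {hi:.2f}) vs rule of thumb (0.7, 1.3)")
```

    The simulated band depends on the sample size feeding the statistic, which is exactly why a fixed rule-of-thumb range can misclassify items.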

  8. Estimation of aquifer scale proportion using equal area grids: assessment of regional scale groundwater quality

    USGS Publications Warehouse

    Belitz, Kenneth; Jurgens, Bryant C.; Landon, Matthew K.; Fram, Miranda S.; Johnson, Tyler D.

    2010-01-01

    The proportion of an aquifer with constituent concentrations above a specified threshold (high concentrations) is taken as a nondimensional measure of regional scale water quality. If computed on the basis of area, it can be referred to as the aquifer scale proportion. A spatially unbiased estimate of aquifer scale proportion and a confidence interval for that estimate are obtained through the use of equal area grids and the binomial distribution. Traditionally, the confidence interval for a binomial proportion is computed using either the standard interval or the exact interval. Research from the statistics literature has shown that the standard interval should not be used and that the exact interval is overly conservative. On the basis of coverage probability and interval width, the Jeffreys interval is preferred. If more than one sample per cell is available, cell declustering is used to estimate the aquifer scale proportion, and Kish's design effect may be useful for estimating an effective number of samples. The binomial distribution is also used to quantify the adequacy of a grid with a given number of cells for identifying a small target, defined as a constituent that is present at high concentrations in a small proportion of the aquifer. Case studies illustrate a consistency between approaches that use one well per grid cell and many wells per cell. The methods presented in this paper provide a quantitative basis for designing a sampling program and for utilizing existing data.
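
    The Jeffreys interval preferred above is the central 95% region of a Beta(k + 1/2, n - k + 1/2) distribution, where k of n grid cells have a high concentration. The sketch below approximates its quantiles by Monte Carlo sampling so it needs only the standard library; an exact implementation would use Beta quantiles directly. The 3-of-60 example is invented for illustration.

```python
import random

def jeffreys_interval(k, n, level=0.95, draws=200_000, seed=0):
    """Jeffreys interval for a binomial proportion, approximated by
    sampling the Beta(k + 1/2, n - k + 1/2) posterior and taking the
    central `level` percentile band."""
    rng = random.Random(seed)
    xs = sorted(rng.betavariate(k + 0.5, n - k + 0.5) for _ in range(draws))
    alpha = (1 - level) / 2
    return xs[int(alpha * draws)], xs[int((1 - alpha) * draws) - 1]

# Hypothetical survey: 3 of 60 equal-area cells above the threshold.
lo, hi = jeffreys_interval(3, 60)
print(f"aquifer-scale proportion 3/60 = 0.050, 95% CI ({lo:.3f}, {hi:.3f})")
```

    Unlike the standard (Wald) interval, this band never extends below zero for small k and has close-to-nominal coverage, which is the basis for the preference stated above.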

  9. Preconceptional and prenatal supplementary folic acid and multivitamin intake and autism spectrum disorders.

    PubMed

    Virk, Jasveer; Liew, Zeyan; Olsen, Jørn; Nohr, Ellen A; Catov, Janet M; Ritz, Beate

    2016-08-01

    To evaluate whether early folic acid supplementation during pregnancy prevents diagnosis of autism spectrum disorders in offspring. Information on autism spectrum disorder diagnosis was obtained from the National Hospital Register and the Central Psychiatric Register. We estimated risk ratios for autism spectrum disorders for children whose mothers took folate or multivitamin supplements from 4 weeks before through 8 weeks after the last menstrual period (-4 to 8 weeks), in three 4-week periods. We did not find an association between early folate or multivitamin intake and autism spectrum disorder (folic acid-adjusted risk ratio: 1.06, 95% confidence interval: 0.82-1.36; multivitamin-adjusted risk ratio: 1.00, 95% confidence interval: 0.82-1.22), autistic disorder (folic acid-adjusted risk ratio: 1.18, 95% confidence interval: 0.76-1.84; multivitamin-adjusted risk ratio: 1.22, 95% confidence interval: 0.87-1.69), Asperger's syndrome (folic acid-adjusted risk ratio: 0.85, 95% confidence interval: 0.46-1.53; multivitamin-adjusted risk ratio: 0.95, 95% confidence interval: 0.62-1.46), or pervasive developmental disorder-not otherwise specified (folic acid-adjusted risk ratio: 1.07, 95% confidence interval: 0.75-1.54; multivitamin-adjusted risk ratio: 0.87, 95% confidence interval: 0.65-1.17) compared with women reporting no supplement use in the same period. We did not find any evidence to corroborate previous reports of a reduced risk for autism spectrum disorders in offspring of women using folic acid supplements in early pregnancy. © The Author(s) 2015.

  10. An Algorithm for Efficient Maximum Likelihood Estimation and Confidence Interval Determination in Nonlinear Estimation Problems

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick Charles

    1985-01-01

    An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The algorithm was developed for airplane parameter estimation problems but is well suited for most nonlinear, multivariable, dynamic systems. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort. MNRES determines the sensitivities with less computational effort than using either a finite-difference method or integrating the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, thus eliminating algorithm reformulation with each new model and providing flexibility to use model equations in any format that is convenient. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. It is observed that the degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. The CR bounds were found to be close to the bounds determined by the search when the degree of nonlinearity was small. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels for the parameter confidence limits. The primary utility of the measure, however, was found to be in predicting the degree of agreement between Cramer-Rao bounds and search estimates.

  11. Prevalence Estimates of Complicated Syphilis.

    PubMed

    Dombrowski, Julia C; Pedersen, Rolf; Marra, Christina M; Kerani, Roxanne P; Golden, Matthew R

    2015-12-01

    We reviewed 68 cases of possible neurosyphilis among 573 syphilis cases in King County, WA, from 3rd January 2012 to 30th September 2013; 7.9% (95% confidence interval, 5.8%-10.5%) had vision or hearing changes, and 3.5% (95% confidence interval, 2.2%-5.4%) had both symptoms and objective confirmation of complicated syphilis with either abnormal cerebrospinal fluid or an abnormal ophthalmologic examination.

  12. Using known populations of pronghorn to evaluate sampling plans and estimators

    USGS Publications Warehouse

    Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.

    1995-01-01

    Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.

  13. Exact Scheffé-type confidence intervals for output from groundwater flow models: 1. Use of hydrogeologic information

    USGS Publications Warehouse

    Cooley, Richard L.

    1993-01-01

    A new method is developed to efficiently compute exact Scheffé-type confidence intervals for output (or other function of parameters) g(β) derived from a groundwater flow model. The method is general in that parameter uncertainty can be specified by any statistical distribution having a log probability density function (log pdf) that can be expanded in a Taylor series. However, for this study parameter uncertainty is specified by a statistical multivariate beta distribution that incorporates hydrogeologic information in the form of the investigator's best estimates of parameters and a grouping of random variables representing possible parameter values so that each group is defined by maximum and minimum bounds and an ordering according to increasing value. The new method forms the confidence intervals from maximum and minimum limits of g(β) on a contour of a linear combination of (1) the quadratic form for the parameters used by Cooley and Vecchia (1987) and (2) the log pdf for the multivariate beta distribution. Three example problems are used to compare characteristics of the confidence intervals for hydraulic head obtained using different weights for the linear combination. Different weights generally produced similar confidence intervals, whereas the method of Cooley and Vecchia (1987) often produced much larger confidence intervals.

  14. Confidence intervals for a difference between lognormal means in cluster randomization trials.

    PubMed

    Poirier, Julia; Zou, G Y; Koval, John

    2017-04-01

    Cluster randomization trials, in which intact social units are randomized to different interventions, have become popular in the last 25 years. Outcomes from these trials are in many cases positively skewed, following approximately lognormal distributions. When inference is focused on the difference between treatment arm arithmetic means, existing confidence interval procedures either make restrictive assumptions or are complex to implement. We approach this problem by assuming log-transformed outcomes from each treatment arm follow a one-way random effects model. The treatment arm means are functions of multiple parameters for which separate confidence intervals are readily available, suggesting that the method of variance estimates recovery may be applied to obtain closed-form confidence intervals. A simulation study showed that this simple approach performs well in small sample sizes in terms of empirical coverage, relatively balanced tail errors, and interval widths as compared to existing methods. The methods are illustrated using data arising from a cluster randomization trial investigating a critical pathway for the treatment of community acquired pneumonia.
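
    The method of variance estimates recovery (MOVER) combination itself is simple arithmetic once separate confidence limits for the component parameters are in hand; below is a sketch of Zou's difference formula. The inputs are illustrative numbers, not trial data, and the cluster-level variance components of the paper's one-way random effects model are not reproduced here.

```python
import math

def mover_difference(t1, l1, u1, t2, l2, u2):
    """MOVER confidence limits for theta1 - theta2, given point
    estimates t1, t2 and separate limits (l1, u1), (l2, u2) at the
    same confidence level (Zou's method):
      L = t1 - t2 - sqrt((t1 - l1)^2 + (u2 - t2)^2)
      U = t1 - t2 + sqrt((u1 - t1)^2 + (t2 - l2)^2)."""
    low = t1 - t2 - math.sqrt((t1 - l1) ** 2 + (u2 - t2) ** 2)
    upp = t1 - t2 + math.sqrt((u1 - t1) ** 2 + (t2 - l2) ** 2)
    return low, upp

# For a lognormal arm, t = mean(log y) + var(log y)/2 estimates the log
# of the arithmetic mean; each arm's t gets its own CI, then MOVER
# combines them. Illustrative limits for two arms:
low, upp = mover_difference(1.20, 0.95, 1.50, 0.80, 0.60, 1.05)
print(f"MOVER CI for the difference: ({low:.3f}, {upp:.3f})")
```

    When both component intervals are symmetric, the formula reduces to the familiar square-root-of-summed-squares margin; its value is that it also handles asymmetric component intervals such as those for a variance.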

  15. Precision Interval Estimation of the Response Surface by Means of an Integrated Algorithm of Neural Network and Linear Regression

    NASA Technical Reports Server (NTRS)

    Lo, Ching F.

    1999-01-01

    The integration of Radial Basis Function Networks and Back Propagation Neural Networks with Multiple Linear Regression has been accomplished to map nonlinear response surfaces over a wide range of independent variables in the process of the Modern Design of Experiments. The integrated method is capable of estimating precision intervals, including confidence and prediction intervals. The power of the innovative method has been demonstrated by applying it to a set of wind tunnel test data in constructing a response surface and estimating its precision intervals.

  16. Interval estimation and optimal design for the within-subject coefficient of variation for continuous and binary variables

    PubMed Central

    Shoukri, Mohamed M; Elkum, Nasser; Walter, Stephen D

    2006-01-01

    Background In this paper we propose the use of the within-subject coefficient of variation as an index of a measurement's reliability. For continuous variables and based on its maximum likelihood estimation we derive a variance-stabilizing transformation and discuss confidence interval construction within the framework of a one-way random effects model. We investigate sample size requirements for the within-subject coefficient of variation for continuous and binary variables. Methods We investigate the validity of the approximate normal confidence interval by Monte Carlo simulations. In designing a reliability study, a crucial issue is the balance between the number of subjects to be recruited and the number of repeated measurements per subject. We discuss efficiency of estimation and cost considerations for the optimal allocation of the sample resources. The approach is illustrated by an example on Magnetic Resonance Imaging (MRI). We also discuss the issue of sample size estimation for dichotomous responses with two examples. Results For the continuous variable, we found that the variance-stabilizing transformation improves the asymptotic coverage probabilities for the within-subject coefficient of variation. The maximum likelihood estimation and the sample size estimation based on a pre-specified confidence interval width are novel contributions to the literature for the binary variable. Conclusion Using the sample size formulas, we hope to help clinical epidemiologists and practicing statisticians to efficiently design reliability studies using the within-subject coefficient of variation, whether the variable of interest is continuous or binary. PMID:16686943

  17. Selecting Sensitive Parameter Subsets in Dynamical Models With Application to Biomechanical System Identification.

    PubMed

    Ramadan, Ahmed; Boss, Connor; Choi, Jongeun; Peter Reeves, N; Cholewicki, Jacek; Popovich, John M; Radcliffe, Clark J

    2018-07-01

    Estimating many parameters of biomechanical systems with limited data may achieve good fit but may also increase 95% confidence intervals in parameter estimates. This results in poor identifiability in the estimation problem. Therefore, we propose a novel method to select sensitive biomechanical model parameters that should be estimated, while fixing the remaining parameters to values obtained from preliminary estimation. Our method relies on identifying the parameters to which the measurement output is most sensitive. The proposed method is based on the Fisher information matrix (FIM). It was compared against the nonlinear least absolute shrinkage and selection operator (LASSO) method to guide modelers on the pros and cons of our FIM method. We present an application identifying a biomechanical parametric model of a head position-tracking task for ten human subjects. Using measured data, our method (1) reduced model complexity by only requiring five out of twelve parameters to be estimated, (2) significantly reduced parameter 95% confidence intervals by up to 89% of the original confidence interval, (3) maintained goodness of fit measured by variance accounted for (VAF) at 82%, (4) reduced computation time, where our FIM method was 164 times faster than the LASSO method, and (5) selected similar sensitive parameters to the LASSO method, where three out of five selected sensitive parameters were shared by FIM and LASSO methods.
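    The FIM-based selection idea, rank parameters by how strongly they move the measured output, can be sketched on a toy model; the model form, parameter values, and the deliberately insensitive third parameter are all hypothetical, not the biomechanical model of the paper:

```python
import numpy as np

# Build the Jacobian J of the model output w.r.t. the parameters by finite
# differences, form FIM = J^T J, and rank parameters by sensitivity.
t = np.linspace(0, 5, 100)

def model(p):
    # hypothetical 3-parameter response; p[2] barely influences the output
    return p[0] * np.exp(-p[1] * t) + 1e-4 * p[2] * t

p0 = np.array([1.0, 0.8, 1.0])
eps = 1e-6
J = np.column_stack([
    (model(p0 + eps * np.eye(3)[i]) - model(p0)) / eps   # forward differences
    for i in range(3)
])
fim = J.T @ J
sensitivity = np.sqrt(np.diag(fim))       # larger = easier to identify
order = np.argsort(sensitivity)[::-1]     # most to least sensitive
```

    Parameters at the bottom of the ranking are candidates for fixing at preliminary estimates, which shrinks the confidence intervals of the parameters that remain.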

  18. Determinants of Quality of Interview and Impact on Risk Estimates in a Case-Control Study of Bladder Cancer

    PubMed Central

    Silverman, Debra T.; Malats, Núria; Tardon, Adonina; Garcia-Closas, Reina; Serra, Consol; Carrato, Alfredo; Fortuny, Joan; Rothman, Nathaniel; Dosemeci, Mustafa; Kogevinas, Manolis

    2009-01-01

    The authors evaluated potential determinants of the quality of the interview in a case-control study of bladder cancer and assessed the effect of the interview quality on the risk estimates. The analysis included 1,219 incident bladder cancer cases and 1,271 controls recruited in Spain in 1998–2001. Information on etiologic factors for bladder cancer was collected through personal interviews, which were scored as unsatisfactory, questionable, reliable, or high quality by the interviewers. Eight percent of the interviews were unsatisfactory or questionable. Increasing age, lower socioeconomic status, and poorer self-perceived health led to higher proportions of questionable or unreliable interviews. The odds ratio for cigarette smoking, the main risk factor for bladder cancer, was 6.18 (95% confidence interval: 4.56, 8.39) overall, 3.20 (95% confidence interval: 1.13, 9.04) among unsatisfactory or questionable interviews, 6.86 (95% confidence interval: 4.80, 9.82) among reliable interviews, and 7.70 (95% confidence interval: 3.64, 16.30) among high-quality interviews. Similar trends were observed for employment in high-risk occupations, drinking water containing elevated levels of trihalomethanes, and use of analgesics. Higher quality interviews led to stronger associations compared with risk estimation that did not take the quality of interview into account. The collection of quality of interview scores and the exclusion of unreliable interviews probably reduce misclassification of exposure in observational studies. PMID:19478234
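    The odds ratios with 95% confidence intervals quoted above are of the kind obtained from a 2x2 table; a standard Woolf (log-scale) interval can be computed as follows, with hypothetical counts rather than the study's data:

```python
import math

# hypothetical 2x2 table: exposure (smoking) by case/control status
a, b = 450, 150   # cases: exposed, unexposed
c, d = 500, 600   # controls: exposed, unexposed

or_hat = (a * d) / (b * c)
se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)   # Woolf's SE on the log scale
z = 1.959963984540054                        # 97.5th percentile of N(0, 1)
lo = math.exp(math.log(or_hat) - z * se_log)
hi = math.exp(math.log(or_hat) + z * se_log)
# or_hat = 3.6; interval is asymmetric around it on the natural scale
```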

  19. Generalized additive models and Lucilia sericata growth: assessing confidence intervals and error rates in forensic entomology.

    PubMed

    Tarone, Aaron M; Foran, David R

    2008-07-01

    Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.

  20. A simple method for assessing occupational exposure via the one-way random effects model.

    PubMed

    Krishnamoorthy, K; Mathew, Thomas; Peng, Jie

    2016-11-01

    A one-way random effects model is postulated for the log-transformed shift-long personal exposure measurements, where the random effect in the model represents an effect due to the worker. Simple closed-form confidence intervals are proposed for the relevant parameters of interest using the method of variance estimates recovery (MOVER). The performance of the confidence bounds is evaluated and compared with those based on the generalized confidence interval approach. Comparison studies indicate that the proposed MOVER confidence bounds are better than the generalized confidence bounds for the overall mean exposure and an upper percentile of the exposure distribution. The proposed methods are illustrated using a few examples involving industrial hygiene data.
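    A MOVER interval in the spirit of this abstract can be sketched for the mean of a lognormal exposure distribution: on the log scale the target is theta = mu + sigma^2/2, and the Zou-Donner MOVER rule combines separate intervals for mu and sigma^2/2. The data and sample size below are hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# hypothetical shift-long exposure measurements, lognormally distributed
x = rng.lognormal(mean=0.5, sigma=0.6, size=25)
y = np.log(x)
n = y.size
ybar, s2 = y.mean(), y.var(ddof=1)

# component 95% CIs
t = stats.t.ppf(0.975, n - 1)
l1, u1 = ybar - t * np.sqrt(s2 / n), ybar + t * np.sqrt(s2 / n)   # for mu
chi_hi = stats.chi2.ppf(0.975, n - 1)
chi_lo = stats.chi2.ppf(0.025, n - 1)
l2, u2 = (n - 1) * s2 / (2 * chi_hi), (n - 1) * s2 / (2 * chi_lo)  # for sigma^2/2

# MOVER combination for theta = mu + sigma^2/2
theta = ybar + s2 / 2
L = theta - np.sqrt((ybar - l1) ** 2 + (s2 / 2 - l2) ** 2)
U = theta + np.sqrt((u1 - ybar) ** 2 + (u2 - s2 / 2) ** 2)
mean_ci = (np.exp(L), np.exp(U))   # CI for the lognormal mean exposure
```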

  1. Estimated Magnitudes and Recurrence Intervals of Peak Flows on the Mousam and Little Ossipee Rivers for the Flood of April 2007 in Southern Maine

    USGS Publications Warehouse

    Hodgkins, Glenn A.; Stewart, Gregory J.; Cohn, Timothy A.; Dudley, Robert W.

    2007-01-01

    Large amounts of rain fell on southern Maine from the afternoon of April 15, 2007, to the afternoon of April 16, 2007, causing substantial damage to houses, roads, and culverts. This report provides an estimate of the peak flows on two rivers in southern Maine--the Mousam River and the Little Ossipee River--because of their severe flooding. The April 2007 estimated peak flow of 9,230 ft3/s at the Mousam River near West Kennebunk had a recurrence interval between 100 and 500 years; 95-percent confidence limits for this flow ranged from 25 years to greater than 500 years. The April 2007 estimated peak flow of 8,220 ft3/s at the Little Ossipee River near South Limington had a recurrence interval between 100 and 500 years; 95-percent confidence limits for this flow ranged from 50 years to greater than 500 years.
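    The meaning of a recurrence interval can be made concrete with a little arithmetic: a T-year flood has annual exceedance probability 1/T, so the chance of seeing at least one such flood over n years is:

```python
# probability of at least one exceedance of a T-year flood in n years
def exceedance_prob(T, n):
    return 1.0 - (1.0 - 1.0 / T) ** n

p100_30 = exceedance_prob(100, 30)   # ~0.26 over a 30-year horizon
p500_30 = exceedance_prob(500, 30)   # ~0.06
```

    A "100-year" flood is thus far from negligible over the lifetime of a house or culvert, which is why the wide confidence limits on the recurrence intervals above matter for planning.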

  2. speed-ne: Software to simulate and estimate genetic effective population size (Ne ) from linkage disequilibrium observed in single samples.

    PubMed

    Hamilton, Matthew B; Tartakovsky, Maria; Battocletti, Amy

    2018-05-01

    The genetic effective population size, Ne, can be estimated from the average gametic disequilibrium (r̂2) between pairs of loci, but such estimates require evaluation of assumptions and currently have few methods to estimate confidence intervals. speed-ne is a suite of MATLAB computer code functions to estimate N̂e from r̂2 with a graphical user interface and a rich set of outputs that aid in understanding data patterns and comparing multiple estimators. speed-ne includes functions to either generate or input simulated genotype data to facilitate comparative studies of N̂e estimators under various population genetic scenarios. speed-ne was validated with data simulated under both time-forward and time-backward coalescent models of genetic drift. Three classes of estimators were compared with simulated data to examine several general questions: what are the impacts of microsatellite null alleles on N̂e, how should missing data be treated, and does disequilibrium contributed by reduced recombination among some loci in a sample impact N̂e. Estimators differed greatly in precision in the scenarios examined, and a widely employed N̂e estimator exhibited the largest variances among replicate data sets. speed-ne implements several jackknife approaches to estimate confidence intervals, and simulated data showed that jackknifing over loci and jackknifing over individuals provided ~95% confidence interval coverage for some estimators and should be useful for empirical studies. speed-ne provides an open-source extensible tool for estimation of N̂e from empirical genotype data and to conduct simulations of both microsatellite and single nucleotide polymorphism (SNP) data types to develop expectations and to compare N̂e estimators. © 2018 John Wiley & Sons Ltd.
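    The core LD estimator and a jackknife interval of the kind the abstract describes can be sketched in simplified form. The formula below is the Waples-style approximation E[r^2] ~ 1/(3Ne) + 1/S for unlinked loci, and the r^2 values, sample size, and noise level are all hypothetical (this is not speed-ne itself):

```python
import numpy as np

def ne_from_r2(r2_mean, S):
    # simplified LD estimator: E[r^2] ~ 1/(3 Ne) + 1/S  =>  Ne ~ 1/(3 (r2 - 1/S))
    return 1.0 / (3.0 * (r2_mean - 1.0 / S))

rng = np.random.default_rng(3)
S = 50            # individuals sampled
true_ne = 200.0
# hypothetical per-pair r^2 values scattered around the expectation
r2 = rng.normal(1 / (3 * true_ne) + 1 / S, 0.004, size=300)

est = ne_from_r2(r2.mean(), S)
# delete-one jackknife over locus pairs for a ~95% interval
n = r2.size
jack = np.array([ne_from_r2(np.delete(r2, i).mean(), S) for i in range(n)])
se = np.sqrt((n - 1) / n * ((jack - jack.mean()) ** 2).sum())
ci = (est - 1.96 * se, est + 1.96 * se)
```

    Because the estimator divides by a small difference (r2 - 1/S), modest noise in mean r^2 produces wide, asymmetric uncertainty in Ne, which is why interval estimation is emphasized.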

  3. Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient

    ERIC Educational Resources Information Center

    Krishnamoorthy, K.; Xia, Yanping

    2008-01-01

    The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…

  4. Evaluating the Impact of Guessing and Its Interactions With Other Test Characteristics on Confidence Interval Procedures for Coefficient Alpha

    PubMed Central

    Paek, Insu

    2015-01-01

    The effect of guessing on the point estimate of coefficient alpha has been studied in the literature, but the impact of guessing and its interactions with other test characteristics on the interval estimators for coefficient alpha has not been fully investigated. This study examined the impact of guessing and its interactions with other test characteristics on four confidence interval (CI) procedures for coefficient alpha in terms of coverage rate (CR), length, and the degree of asymmetry of CI estimates. In addition, interval estimates of coefficient alpha when data follow the essentially tau-equivalent condition were investigated as a supplement to the case of dichotomous data with examinee guessing. For dichotomous data with guessing, the results did not reveal salient negative effects of guessing and its interactions with other test characteristics (sample size, test length, coefficient alpha levels) on CR and the degree of asymmetry, but the effect of guessing was salient as a main effect and an interaction effect with sample size on the length of the CI estimates, making longer CI estimates as guessing increases, especially when combined with a small sample size. Other important effects (e.g., CI procedures on CR) are also discussed. PMID:29795863
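    One classical CI procedure for coefficient alpha (not necessarily among the four studied here) is Feldt's F-based interval; a sketch with hypothetical dichotomous-free test data follows, where the item scores and their common-factor structure are assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# hypothetical test data: n examinees by k items sharing a common factor
n, k = 200, 10
theta = rng.normal(0, 1, n)
scores = theta[:, None] + rng.normal(0, 1, (n, k))

# Cronbach's alpha
item_var = scores.var(axis=0, ddof=1).sum()
total_var = scores.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_var / total_var)

# Feldt's F-based 95% interval
df1, df2 = n - 1, (n - 1) * (k - 1)
lo = 1 - (1 - alpha) * stats.f.ppf(0.975, df1, df2)
hi = 1 - (1 - alpha) * stats.f.ppf(0.025, df1, df2)
```

    The interval is asymmetric around alpha, with the lower limit farther away, which matches the degree-of-asymmetry outcome examined in the study.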

  5. Inverse modeling with RZWQM2 to predict water quality

    USGS Publications Warehouse

    Nolan, Bernard T.; Malone, Robert W.; Ma, Liwang; Green, Christopher T.; Fienen, Michael N.; Jaynes, Dan B.

    2011-01-01

    This chapter presents guidelines for autocalibration of the Root Zone Water Quality Model (RZWQM2) by inverse modeling using PEST parameter estimation software (Doherty, 2010). Two sites with diverse climate and management were considered for simulation of N losses by leaching and in drain flow: an almond [Prunus dulcis (Mill.) D.A. Webb] orchard in the San Joaquin Valley, California and the Walnut Creek watershed in central Iowa, which is predominantly in corn (Zea mays L.)–soybean [Glycine max (L.) Merr.] rotation. Inverse modeling provides an objective statistical basis for calibration that involves simultaneous adjustment of model parameters and yields parameter confidence intervals and sensitivities. We describe operation of PEST in both parameter estimation and predictive analysis modes. The goal of parameter estimation is to identify a unique set of parameters that minimize a weighted least squares objective function, and the goal of predictive analysis is to construct a nonlinear confidence interval for a prediction of interest by finding a set of parameters that maximizes or minimizes the prediction while maintaining the model in a calibrated state. We also describe PEST utilities (PAR2PAR, TSPROC) for maintaining ordered relations among model parameters (e.g., soil root growth factor) and for post-processing of RZWQM2 outputs representing different cropping practices at the Iowa site. Inverse modeling provided reasonable fits to observed water and N fluxes and directly benefitted the modeling through: (i) simultaneous adjustment of multiple parameters versus one-at-a-time adjustment in manual approaches; (ii) clear indication by convergence criteria of when calibration is complete; (iii) straightforward detection of nonunique and insensitive parameters, which can affect the stability of PEST and RZWQM2; and (iv) generation of confidence intervals for uncertainty analysis of parameters and model predictions. 
Composite scaled sensitivities, which reflect the total information provided by the observations for a parameter, indicated that most of the RZWQM2 parameters at the California study site (CA) and Iowa study site (IA) could be reliably estimated by regression. Correlations obtained in the CA case indicated that all model parameters could be uniquely estimated by inverse modeling. Although water content at field capacity was highly correlated with bulk density (−0.94), the correlation is less than the threshold for nonuniqueness (0.95, absolute value basis). Additionally, we used truncated singular value decomposition (SVD) at CA to mitigate potential problems with highly correlated and insensitive parameters. Singular value decomposition estimates linear combinations (eigenvectors) of the original process-model parameters. Parameter confidence intervals (CIs) at CA indicated that parameters were reliably estimated with the possible exception of an organic pool transfer coefficient (R45), which had a comparatively wide CI. However, the 95% confidence interval for R45 (0.03–0.35) is mostly within the range of values reported for this parameter. Predictive analysis at CA generated confidence intervals that were compared with independently measured annual water flux (groundwater recharge) and median nitrate concentration in a collocated monitoring well as part of model evaluation. Both the observed recharge (42.3 cm yr−1) and nitrate concentration (24.3 mg L−1) were within their respective 90% confidence intervals, indicating that overall model error was within acceptable limits.
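    Linearized parameter confidence intervals of the kind PEST reports can be sketched generically from the Jacobian of a nonlinear least-squares fit; the decay model, noise level, and starting values below are a toy assumption, not RZWQM2:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(5)
# hypothetical calibration target: first-order decay observed with noise
t_obs = np.linspace(0, 10, 25)
true = (8.0, 0.4)                                  # (amplitude, rate)
y_obs = true[0] * np.exp(-true[1] * t_obs) + rng.normal(0, 0.2, t_obs.size)

def residuals(p):
    return y_obs - p[0] * np.exp(-p[1] * t_obs)

fit = optimize.least_squares(residuals, x0=[1.0, 1.0])
n, npar = t_obs.size, fit.x.size
s2 = fit.fun @ fit.fun / (n - npar)                # residual variance
J = fit.jac
cov = s2 * np.linalg.inv(J.T @ J)                  # linearized covariance
se = np.sqrt(np.diag(cov))
t_crit = stats.t.ppf(0.975, n - npar)
ci = [(fit.x[i] - t_crit * se[i], fit.x[i] + t_crit * se[i])
      for i in range(npar)]
```

    A wide interval relative to the parameter value, as for the R45 coefficient above, flags a parameter the observations constrain only weakly.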

  6. EXACT DISTRIBUTIONS OF INTRACLASS CORRELATION AND CRONBACH'S ALPHA WITH GAUSSIAN DATA AND GENERAL COVARIANCE.

    PubMed

    Kistner, Emily O; Muller, Keith E

    2004-09-01

    Intraclass correlation and Cronbach's alpha are widely used to describe reliability of tests and measurements. Even with Gaussian data, exact distributions are known only for compound symmetric covariance (equal variances and equal correlations). Recently, large sample Gaussian approximations were derived for the distribution functions. New exact results allow calculating the exact distribution function and other properties of intraclass correlation and Cronbach's alpha, for Gaussian data with any covariance pattern, not just compound symmetry. Probabilities are computed in terms of the distribution function of a weighted sum of independent chi-square random variables. New F approximations for the distribution functions of intraclass correlation and Cronbach's alpha are much simpler and faster to compute than the exact forms. Assuming the covariance matrix is known, the approximations typically provide sufficient accuracy, even with as few as ten observations. Either the exact or approximate distributions may be used to create confidence intervals around an estimate of reliability. Monte Carlo simulations led to a number of conclusions. Correctly assuming that the covariance matrix is compound symmetric leads to accurate confidence intervals, as was expected from previously known results. However, assuming and estimating a general covariance matrix produces somewhat optimistically narrow confidence intervals with 10 observations. Increasing sample size to 100 gives essentially unbiased coverage. Incorrectly assuming compound symmetry leads to pessimistically large confidence intervals, with pessimism increasing with sample size. In contrast, incorrectly assuming general covariance introduces only a modest optimistic bias in small samples. Hence the new methods seem preferable for creating confidence intervals, except when compound symmetry definitely holds.
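    The distribution function of a weighted sum of independent chi-square variables, the building block the abstract mentions, is easy to approximate by Monte Carlo when an exact or F-approximate form is not at hand; the weights and degrees of freedom below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(10)
# CDF of sum_i w_i * chi2(df_i), approximated by simulation
weights = np.array([0.6, 0.3, 0.1])
dfs = np.array([1, 1, 1])

draws = sum(w * rng.chisquare(df, 500_000) for w, df in zip(weights, dfs))

def cdf(q):
    # empirical distribution function of the weighted sum
    return np.mean(draws <= q)
```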

  7. Maternal and neonatal outcomes of antenatal anemia in a Scottish population: a retrospective cohort study.

    PubMed

    Rukuni, Ruramayi; Bhattacharya, Sohinee; Murphy, Michael F; Roberts, David; Stanworth, Simon J; Knight, Marian

    2016-05-01

    Antenatal anemia is a major public health problem in the UK, yet there is limited high quality evidence for associated poor clinical outcomes. The objectives of this study were to estimate the incidence and clinical outcomes of antenatal anemia in a Scottish population. A retrospective cohort study of 80 422 singleton pregnancies was conducted using data from the Aberdeen Maternal and Neonatal Databank between 1995 and 2012. Antenatal anemia was defined as haemoglobin ≤ 10 g/dl during pregnancy. Incidence was calculated with 95% confidence intervals and compared over time using a chi-squared test for trend. Multivariable logistic regression was used to adjust for confounding variables. Results are presented as adjusted odds ratios with 95% confidence interval. The overall incidence of antenatal anemia was 9.3 cases/100 singleton pregnancies (95% confidence interval 9.1-9.5), decreasing from 16.9/100 to 4.1/100 singleton pregnancies between 1995 and 2012 (p < 0.001). Maternal anemia was associated with antepartum hemorrhage (adjusted odds ratio 1.26, 95% confidence interval 1.17-1.36), postpartum infection (adjusted odds ratio 1.89, 95% confidence interval 1.39-2.57), transfusion (adjusted odds ratio 1.87, 95% confidence interval 1.65-2.13) and stillbirth (adjusted odds ratio 1.42, 95% confidence interval 1.04-1.94), reduced odds of postpartum hemorrhage (adjusted odds ratio 0.92, 95% confidence interval 0.86-0.98) and low birthweight (adjusted odds ratio 0.77, 95% confidence interval 0.69-0.86). No other outcomes were statistically significant. This study shows the incidence of antenatal anemia is decreasing steadily within this Scottish population. However, given that anemia is a readily correctable risk factor for major causes of morbidity and mortality in the UK, further work is required to investigate appropriate preventive measures. © 2016 Nordic Federation of Societies of Obstetrics and Gynecology.

  8. Uncertainty analysis for absorbed dose from a brain receptor imaging agent

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aydogan, B.; Miller, L.F.; Sparks, R.B.

    Absorbed dose estimates are known to contain uncertainties. A recent literature search indicates that prior to this study no rigorous investigation of uncertainty associated with absorbed dose had been undertaken. A method of uncertainty analysis for absorbed dose calculations has been developed and implemented for the brain receptor imaging agent {sup 123}I-IPT. The two major sources of uncertainty considered were the uncertainty associated with the determination of residence time and that associated with the determination of the S values. There are many sources of uncertainty in the determination of the S values, but only the inter-patient organ mass variation was considered in this work. The absorbed dose uncertainties were determined for lung, liver, heart and brain. Ninety-five percent confidence intervals of the organ absorbed dose distributions for each patient and for a seven-patient population group were determined by the "Latin Hypercube Sampling" method. For an individual patient, the upper bound of the 95% confidence interval of the absorbed dose was found to be about 2.5 times larger than the estimated mean absorbed dose. For the seven-patient population the upper bound of the 95% confidence interval of the absorbed dose distribution was around 45% more than the estimated population mean. For example, the 95% confidence interval of the population liver dose distribution was found to be between 1.49E+07 Gy/MBq and 4.65E+07 Gy/MBq with a mean of 2.52E+07 Gy/MBq. This study concluded that patients in a population receiving {sup 123}I-IPT could receive absorbed doses as much as twice as large as the standard estimated absorbed dose due to these uncertainties.
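    Latin Hypercube propagation of the two uncertainty sources named in the abstract can be sketched with scipy's qmc module; the input distributions, their parameters, and the units below are illustrative assumptions, not the study's values:

```python
import numpy as np
from scipy import stats
from scipy.stats import qmc

# hypothetical uncertain inputs: residence time and S value, both lognormal
sampler = qmc.LatinHypercube(d=2, seed=6)
u = sampler.random(n=10_000)                        # stratified uniforms
res_time = stats.lognorm.ppf(u[:, 0], s=0.3, scale=2.0)    # hours
s_value = stats.lognorm.ppf(u[:, 1], s=0.5, scale=1e-5)    # Gy/(MBq*h)
dose = res_time * s_value                           # dose per unit activity

mean_dose = dose.mean()
lo, hi = np.percentile(dose, [2.5, 97.5])           # 95% interval
```

    With skewed inputs the upper bound sits much farther above the mean than the lower bound sits below it, mirroring the ~2.5x upper bounds reported above.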

  9. A method of bias correction for maximal reliability with dichotomous measures.

    PubMed

    Penev, Spiridon; Raykov, Tenko

    2010-02-01

    This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.

  10. Statin therapy in lower limb peripheral arterial disease: Systematic review and meta-analysis.

    PubMed

    Antoniou, George A; Fisher, Robert K; Georgiadis, George S; Antoniou, Stavros A; Torella, Francesco

    2014-11-01

    To investigate and analyse the existing evidence supporting statin therapy in patients with lower limb atherosclerotic arterial disease. A systematic search of electronic information sources was undertaken to identify studies comparing cardiovascular outcomes in patients with lower limb peripheral arterial disease treated with a statin and those not receiving a statin. Estimates were combined by applying fixed- or random-effects models. Twelve observational cohort studies and two randomised trials reporting 19,368 patients were selected. Statin therapy was associated with reduced all-cause mortality (odds ratio 0.60, 95% confidence interval 0.46-0.78) and incidence of stroke (odds ratio 0.77, 95% confidence interval 0.67-0.89). A trend towards improved cardiovascular mortality (odds ratio 0.62, 95% confidence interval 0.35-1.11), myocardial infarction (odds ratio 0.62, 95% confidence interval 0.38-1.01), and the composite of death/myocardial infarction/stroke (odds ratio 0.91, 95% confidence interval 0.81-1.03) was identified. Meta-analyses of studies performing adjustments showed decreased all-cause mortality in statin users (hazard ratio 0.77, 95% confidence interval 0.68-0.86). Evidence supporting statins' protective role in patients with lower limb peripheral arterial disease is insufficient. Statin therapy seems to be effective in reducing all-cause mortality and the incidence of cerebrovascular events in patients diagnosed with peripheral arterial disease. Copyright © 2014 Elsevier Inc. All rights reserved.
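    Pooling odds ratios as in this meta-analysis is typically done by inverse-variance weighting on the log scale; a fixed-effect sketch with hypothetical per-study results (not the studies reviewed here) follows:

```python
import math

# hypothetical per-study odds ratios with 95% CIs: (OR, lower, upper)
studies = [(0.55, 0.40, 0.76), (0.70, 0.50, 0.98), (0.62, 0.41, 0.94)]

z = 1.959963984540054
wsum = wx = 0.0
for or_i, lo_i, hi_i in studies:
    log_or = math.log(or_i)
    se = (math.log(hi_i) - math.log(lo_i)) / (2 * z)  # SE backed out of the CI
    w = 1.0 / se**2                                   # inverse-variance weight
    wsum += w
    wx += w * log_or

pooled = math.exp(wx / wsum)
se_pooled = math.sqrt(1.0 / wsum)
ci = (math.exp(wx / wsum - z * se_pooled),
      math.exp(wx / wsum + z * se_pooled))
```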

  11. Time series models on analysing mortality rates and acute childhood lymphoid leukaemia.

    PubMed

    Kis, Maria

    2005-01-01

    In this paper we demonstrate the application of time series models to medical research. Hungarian mortality rates were analysed by autoregressive integrated moving average (ARIMA) models, and seasonal time series models were used to examine data on acute childhood lymphoid leukaemia. Mortality data may be analysed by time series methods such as ARIMA modelling. This method is demonstrated by two examples: analysis of the mortality rates of ischemic heart diseases and analysis of the mortality rates of cancer of the digestive system. Mathematical expressions are given for the results of the analysis. The relationships between time series of mortality rates were studied with ARIMA models. Confidence intervals for the autoregressive parameters were calculated by three methods: the standard normal approximation, an estimation based on White's theory, and a continuous-time estimation. Analysing the confidence intervals of the first-order autoregressive parameters, we may conclude that the intervals obtained with the continuous-time estimation model were much smaller than those from the other estimations. We also present a new approach to analysing the occurrence of acute childhood lymphoid leukaemia, decomposing the time series into components. The periodicity of acute childhood lymphoid leukaemia in Hungary was examined using the seasonal decomposition time series method. The cyclic trend of the dates of diagnosis revealed that a higher percentage of the peaks fell within the winter months than in the other seasons. This demonstrates the seasonal occurrence of childhood leukaemia in Hungary.
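    The standard normal-approximation confidence interval for a first-order autoregressive parameter, the baseline method compared above, can be sketched as follows; the series is simulated, not the Hungarian mortality data:

```python
import numpy as np

rng = np.random.default_rng(7)
# simulate an AR(1) series: x_t = phi * x_{t-1} + e_t
phi_true, n = 0.6, 2000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.normal()

# lag-1 autocorrelation as the AR(1) parameter estimate
xc = x - x.mean()
phi_hat = (xc[1:] @ xc[:-1]) / (xc @ xc)
# asymptotic variance of phi_hat is approximately (1 - phi^2) / n
se = np.sqrt((1 - phi_hat**2) / n)
ci = (phi_hat - 1.96 * se, phi_hat + 1.96 * se)
```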

  12. The Distribution of the Product Explains Normal Theory Mediation Confidence Interval Estimation.

    PubMed

    Kisbu-Sakarya, Yasemin; MacKinnon, David P; Miočević, Milica

    2014-05-01

    The distribution of the product has several useful applications. One of these applications is its use to form confidence intervals for the indirect effect as the product of 2 regression coefficients. The purpose of this article is to investigate how the moments of the distribution of the product explain normal theory mediation confidence interval coverage and imbalance. Values of the critical ratio for each random variable are used to demonstrate how the moments of the distribution of the product change across values of the critical ratio observed in research studies. Results of the simulation study showed that as skewness in absolute value increases, coverage decreases. And as skewness in absolute value and kurtosis increases, imbalance increases. The difference between testing the significance of the indirect effect using the normal theory versus the asymmetric distribution of the product is further illustrated with a real data example. This article is the first study to show the direct link between the distribution of the product and indirect effect confidence intervals and clarifies the results of previous simulation studies by showing why normal theory confidence intervals for indirect effects are often less accurate than those obtained from the asymmetric distribution of the product or from resampling methods.
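    The contrast between the symmetric normal-theory (Sobel) interval and an interval from the asymmetric distribution of the product can be sketched by Monte Carlo; the coefficient values and standard errors below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(8)
# indirect effect a*b, with estimates and SEs of the two coefficients
a, se_a = 0.30, 0.10
b, se_b = 0.25, 0.10
ab = a * b

# normal-theory (Sobel) interval: symmetric around a*b
se_ab = np.sqrt(a**2 * se_b**2 + b**2 * se_a**2)
sobel = (ab - 1.96 * se_ab, ab + 1.96 * se_ab)

# interval from the distribution of the product, by Monte Carlo
prod = rng.normal(a, se_a, 200_000) * rng.normal(b, se_b, 200_000)
mc = tuple(np.percentile(prod, [2.5, 97.5]))
```

    The Monte Carlo limits are not equidistant from a*b, reflecting the skewness and kurtosis effects on coverage that the article quantifies.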

  13. Uncertainty of exploitation estimates made from tag returns

    USGS Publications Warehouse

    Miranda, L.E.; Brock, R.E.; Dorr, B.S.

    2002-01-01

    Over 6,000 crappies Pomoxis spp. were tagged in five water bodies to estimate exploitation rates by anglers. Exploitation rates were computed as the percentage of tags returned after adjustment for three sources of uncertainty: postrelease mortality due to the tagging process, tag loss, and the reporting rate of tagged fish. Confidence intervals around exploitation rates were estimated by resampling from the probability distributions of tagging mortality, tag loss, and reporting rate. Estimates of exploitation rates ranged from 17% to 54% among the five study systems. Uncertainty around estimates of tagging mortality, tag loss, and reporting resulted in 90% confidence intervals around the median exploitation rate as narrow as 15 percentage points and as broad as 46 percentage points. The greatest source of estimation error was uncertainty about tag reporting. Because the large investments required by tagging and reward operations produce imprecise estimates of the exploitation rate, it may be worth considering other approaches to estimating it or simply circumventing the exploitation question altogether.
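    The resampling scheme described, adjust tag returns for three uncertain factors and read off percentile intervals, can be sketched as follows; the counts and the Beta distributions assumed for the adjustment factors are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(9)
tagged, returned = 1200, 300            # hypothetical tagging study

# uncertain adjustment factors (assumed distributions, for illustration)
n_sim = 20_000
tag_mortality = rng.beta(10, 90, n_sim)  # ~10% die from tagging
tag_loss = rng.beta(5, 95, n_sim)        # ~5% shed tags
reporting = rng.beta(30, 20, n_sim)      # ~60% of recovered tags reported

# exploitation = returns / (tags still at large * reporting rate)
at_large = tagged * (1 - tag_mortality) * (1 - tag_loss)
u = returned / (at_large * reporting)
median_u = np.median(u)
lo, hi = np.percentile(u, [5, 95])       # 90% interval, as in the abstract
```

    Widening the reporting-rate distribution widens the interval far more than the other two factors, consistent with reporting being the dominant source of error.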

  14. Estimating numbers of greater prairie-chickens using mark-resight techniques

    USGS Publications Warehouse

    Clifton, A.M.; Krementz, D.G.

    2006-01-01

    Current monitoring efforts for greater prairie-chicken (Tympanuchus cupido pinnatus) populations indicate that populations are declining across their range. Monitoring the population status of greater prairie-chickens is based on traditional lek surveys (TLS) that provide an index without considering detectability. Estimators, such as the immigration-emigration joint maximum-likelihood estimator from a hypergeometric distribution (IEJHE), can account for detectability and provide reliable population estimates based on resightings. We evaluated the use of mark-resight methods using radiotelemetry to estimate population size and density of greater prairie-chickens on 2 sites at a tallgrass prairie in the Flint Hills of Kansas, USA. We used average distances traveled from lek of capture to estimate density. Population estimates and confidence intervals at the 2 sites were 54 (CI 50-59) on 52.9 km2 and 87 (CI 82-94) on 73.6 km2. The TLS performed at the same sites resulted in population ranges of 7-34 and 36-63 and always produced a lower population index than the mark-resight population estimate with a larger range. Mark-resight simulations with varying male:female ratios of marks indicated that this ratio was important in designing a population study on prairie-chickens. Confidence intervals for estimates when no marks were placed on females at the 2 sites (CI 46-50, 76-84) did not overlap confidence intervals when 40% of marks were placed on females (CI 54-64, 91-109). Population estimates derived using this mark-resight technique were apparently more accurate than traditional methods and would be more effective in detecting changes in prairie-chicken populations. Our technique could improve prairie-chicken management by providing wildlife biologists and land managers with a tool to estimate the population size and trends of lekking bird species, such as greater prairie-chickens.
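    A much simpler mark-resight estimator than the IEJHE used above, Chapman's modification of the Lincoln-Petersen estimator, illustrates how resightings of marked birds yield an abundance estimate with a confidence interval; the counts are hypothetical:

```python
import math

# Chapman's modified Lincoln-Petersen estimator (a simpler sketch than IEJHE)
M = 40    # marked (radio-tagged) birds
C = 60    # birds counted on a resight survey
R = 25    # of those counted, how many were marked

N = (M + 1) * (C + 1) / (R + 1) - 1
var_N = ((M + 1) * (C + 1) * (M - R) * (C - R)
         / ((R + 1) ** 2 * (R + 2)))
se = math.sqrt(var_N)
ci = (N - 1.96 * se, N + 1.96 * se)
```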

  15. Neural network based load and price forecasting and confidence interval estimation in deregulated power markets

    NASA Astrophysics Data System (ADS)

    Zhang, Li

    With the deregulation of the electric power market in New England, an independent system operator (ISO) has been separated from the New England Power Pool (NEPOOL). The ISO provides a regional spot market, with bids on various electricity-related products and services submitted by utilities and independent power producers. A utility can bid on the spot market and buy or sell electricity via bilateral transactions. Good estimation of market clearing prices (MCP) will help utilities and independent power producers determine bidding and transaction strategies with low risk, which is crucial for competing in the deregulated environment. MCP prediction, however, is difficult since bidding strategies used by participants are complicated and MCP is a non-stationary process. The main objective of this research is to provide efficient short-term load and MCP forecasting and corresponding confidence interval estimation methodologies. In this research, the complexity of load and MCP with other factors is investigated, and neural networks are used to model the complex relationship between input and output. With an improved learning algorithm and on-line update features for load forecasting, a neural network based load forecaster was developed, and has been in daily industry use since summer 1998 with good performance. MCP is volatile because of the complexity of market behaviors. In practice, neural network based MCP predictors usually have a cascaded structure, as several key input factors need to be estimated first. In this research, the uncertainties involved in a cascaded neural network structure for MCP prediction are analyzed, and a prediction distribution under the Bayesian framework is developed. A fast algorithm to evaluate the confidence intervals by using the memoryless Quasi-Newton method is also developed. The traditional back-propagation algorithm for neural network learning needs to be improved since MCP is a non-stationary process.
The extended Kalman filter (EKF) can be used as an integrated adaptive learning and confidence interval estimation algorithm for neural networks, with fast convergence and small confidence intervals. However, EKF learning is computationally expensive because it involves high-dimensional matrix manipulations. A modified U-D factorization within the decoupled EKF (DEKF-UD) framework is developed in this research, significantly improving computational efficiency and numerical stability.

  16. Dietary acid, age, and serum bicarbonate levels among adults in the United States.

    PubMed

    Amodu, Afolarin; Abramowitz, Matthew K

    2013-12-01

    Greater dietary acid has been associated with lower serum bicarbonate levels in patients with CKD. Whether this association extends to the general population and if it is modified by age are unknown. This study examined the association of the dietary acid load, estimated by net endogenous acid production, with serum bicarbonate levels in adult participants in the National Health and Nutrition Examination Survey 1999-2004. The mean serum bicarbonate was 24.9 mEq/L (SEM=0.1), and the mean estimated net endogenous acid production was 57.4 mEq/d (SEM=0.4). Serum bicarbonate was linearly associated with age, such that the oldest participants had the highest serum bicarbonate levels. After multivariable adjustment, participants in the highest quartile of net endogenous acid production had 0.40 mEq/L (95% confidence interval, -0.55 to -0.26) lower serum bicarbonate and a 33% (95% confidence interval, 3 to 72) higher likelihood of acidosis compared with those participants in the lowest quartile. There was a significant interaction by age of the association of net endogenous acid production with serum bicarbonate (P=0.005). Among participants 20-39, 40-59, and ≥60 years old, those participants in the highest net endogenous acid production quartile had 0.26 (95% confidence interval, -0.49 to -0.03), 0.60 (95% confidence interval, -0.92 to -0.29), and 0.49 (95% confidence interval, -0.84 to -0.14) mEq/L lower serum bicarbonate, respectively, compared with participants in the lowest quartile. Greater dietary acid is associated with lower serum bicarbonate in the general US population, and the magnitude of this association is greater among middle-aged and elderly persons than younger adults.

  17. Measuring coverage in MNCH: total survey error and the interpretation of intervention coverage estimates from household surveys.

    PubMed

    Eisele, Thomas P; Rhoda, Dale A; Cutts, Felicity T; Keating, Joseph; Ren, Ruilin; Barros, Aluisio J D; Arnold, Fred

    2013-01-01

    Nationally representative household surveys are increasingly relied upon to measure maternal, newborn, and child health (MNCH) intervention coverage at the population level in low- and middle-income countries. Surveys are the best tool we have for this purpose and are central to national and global decision making. However, all survey point estimates have a certain level of error (total survey error) comprising sampling and non-sampling error, both of which must be considered when interpreting survey results for decision making. In this review, we discuss the importance of considering these errors when interpreting MNCH intervention coverage estimates derived from household surveys, using relevant examples from national surveys to provide context. Sampling error is usually thought of as the precision of a point estimate and is represented by 95% confidence intervals, which are measurable. Confidence intervals can inform judgments about whether estimated parameters are likely to be different from the real value of a parameter. We recommend, therefore, that confidence intervals for key coverage indicators should always be provided in survey reports. By contrast, the direction and magnitude of non-sampling error is almost always unmeasurable, and therefore unknown. Information error and bias are the most common sources of non-sampling error in household survey estimates and we recommend that they should always be carefully considered when interpreting MNCH intervention coverage based on survey data. Overall, we recommend that future research on measuring MNCH intervention coverage should focus on refining and improving survey-based coverage estimates to develop a better understanding of how results should be interpreted and used.
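    As a concrete illustration of the sampling-error component, a 95% confidence interval for a coverage proportion can be sketched as below. This is a minimal, hypothetical example using the Wilson score interval for a simple random sample; it deliberately ignores the design effects (clustering, stratification, weights) that confidence intervals from a real complex household survey must account for.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score 95% interval for a proportion (no survey design effects)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Hypothetical coverage indicator: 412 of 600 sampled children vaccinated.
lo, hi = wilson_ci(412, 600)
print(f"coverage {412/600:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")
# → coverage 68.7%, 95% CI (64.8%, 72.2%)
```

    In a complex survey, the same interval would be widened by the design effect before being reported.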

  18. Measuring Coverage in MNCH: Total Survey Error and the Interpretation of Intervention Coverage Estimates from Household Surveys

    PubMed Central

    Eisele, Thomas P.; Rhoda, Dale A.; Cutts, Felicity T.; Keating, Joseph; Ren, Ruilin; Barros, Aluisio J. D.; Arnold, Fred

    2013-01-01

    Nationally representative household surveys are increasingly relied upon to measure maternal, newborn, and child health (MNCH) intervention coverage at the population level in low- and middle-income countries. Surveys are the best tool we have for this purpose and are central to national and global decision making. However, all survey point estimates have a certain level of error (total survey error) comprising sampling and non-sampling error, both of which must be considered when interpreting survey results for decision making. In this review, we discuss the importance of considering these errors when interpreting MNCH intervention coverage estimates derived from household surveys, using relevant examples from national surveys to provide context. Sampling error is usually thought of as the precision of a point estimate and is represented by 95% confidence intervals, which are measurable. Confidence intervals can inform judgments about whether estimated parameters are likely to be different from the real value of a parameter. We recommend, therefore, that confidence intervals for key coverage indicators should always be provided in survey reports. By contrast, the direction and magnitude of non-sampling error is almost always unmeasurable, and therefore unknown. Information error and bias are the most common sources of non-sampling error in household survey estimates and we recommend that they should always be carefully considered when interpreting MNCH intervention coverage based on survey data. Overall, we recommend that future research on measuring MNCH intervention coverage should focus on refining and improving survey-based coverage estimates to develop a better understanding of how results should be interpreted and used. PMID:23667331

  19. Evaluating the Impact of Guessing and Its Interactions with Other Test Characteristics on Confidence Interval Procedures for Coefficient Alpha

    ERIC Educational Resources Information Center

    Paek, Insu

    2016-01-01

    The effect of guessing on the point estimate of coefficient alpha has been studied in the literature, but the impact of guessing and its interactions with other test characteristics on the interval estimators for coefficient alpha has not been fully investigated. This study examined the impact of guessing and its interactions with other test…

  20. How Much Confidence Can We Have in EU-SILC? Complex Sample Designs and the Standard Error of the Europe 2020 Poverty Indicators

    ERIC Educational Resources Information Center

    Goedeme, Tim

    2013-01-01

    If estimates are based on samples, they should be accompanied by appropriate standard errors and confidence intervals. This is true for scientific research in general, and is even more important if estimates are used to inform and evaluate policy measures such as those aimed at attaining the Europe 2020 poverty reduction target. In this article I…

  1. Uncertainty of chromatic dispersion estimation from transmitted waveforms in direct detection systems

    NASA Astrophysics Data System (ADS)

    Lach, Zbigniew T.

    2017-08-01

    A possibility is shown of a non-disruptive estimation of chromatic dispersion in a fiber of an intensity modulation communication line under work conditions. Uncertainty of the chromatic dispersion estimates is analyzed and quantified with the use of confidence intervals.

  2. Efficient bootstrap estimates for tail statistics

    NASA Astrophysics Data System (ADS)

    Breivik, Øyvind; Aarnes, Ole Johan

    2017-03-01

    Bootstrap resamples can be used to investigate the tail of empirical distributions as well as return value estimates from the extremal behaviour of the sample. Specifically, the confidence intervals on return value estimates or bounds on in-sample tail statistics can be obtained using bootstrap techniques. However, non-parametric bootstrapping from the entire sample is expensive. It is shown here that it suffices to bootstrap from a small subset consisting of the highest entries in the sequence to make estimates that are essentially identical to bootstraps from the entire sample. Similarly, bootstrap estimates of confidence intervals of threshold return estimates are found to be well approximated by using a subset consisting of the highest entries. This has practical consequences in fields such as meteorology, oceanography and hydrology where return values are calculated from very large gridded model integrations spanning decades at high temporal resolution or from large ensembles of independent and identically distributed model fields. In such cases the computational savings are substantial.
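    The key idea, that a high empirical quantile of a bootstrap resample depends only on the largest entries of the sample, can be sketched as follows. This is a simplified illustration with synthetic exponential data and a type-1 empirical quantile, not the authors' implementation: censoring every resampled value below the retained tail subset at the subset's threshold leaves the resampled 99th percentile unchanged, so only the tail entries matter.

```python
import math
import random

def empirical_quantile(xs, q):
    """Type-1 empirical quantile: the ceil(n*q)-th order statistic."""
    s = sorted(xs)
    return s[math.ceil(len(s) * q) - 1]

random.seed(42)
sample = [random.expovariate(1.0) for _ in range(10_000)]
m = 500                               # retain only the 500 largest entries
threshold = sorted(sample)[-m]

# One bootstrap resample of the full sample ...
resample = [random.choice(sample) for _ in range(len(sample))]
q99_full = empirical_quantile(resample, 0.99)

# ... yields the same 99th percentile when every value below the retained
# tail is censored at the threshold, so it suffices to bootstrap from the
# tail subset alone.
censored = [x if x >= threshold else threshold for x in resample]
q99_subset = empirical_quantile(censored, 0.99)
```

    Repeating the resampling step and taking percentiles of the replicated quantiles then gives the confidence interval, at a fraction of the cost of resampling the full gridded data set.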

  3. Hazard ratio estimation and inference in clinical trials with many tied event times.

    PubMed

    Mehrotra, Devan V; Zhang, Yiwei

    2018-06-13

    The medical literature contains numerous examples of randomized clinical trials with time-to-event endpoints in which large numbers of events accrued over relatively short follow-up periods, resulting in many tied event times. A generally common feature across such examples was that the logrank test was used for hypothesis testing and the Cox proportional hazards model was used for hazard ratio estimation. We caution that this common practice is particularly risky in the setting of many tied event times for two reasons. First, the estimator of the hazard ratio can be severely biased if the Breslow tie-handling approximation for the Cox model (the default in SAS and Stata software) is used. Second, the 95% confidence interval for the hazard ratio can include one even when the corresponding logrank test p-value is less than 0.05. To help establish a better practice, with applicability for both superiority and noninferiority trials, we use theory and simulations to contrast Wald and score tests based on well-known tie-handling approximations for the Cox model. Our recommendation is to report the Wald test p-value and corresponding confidence interval based on the Efron approximation. The recommended test is essentially as powerful as the logrank test, the accompanying point and interval estimates of the hazard ratio have excellent statistical properties even in settings with many tied event times, inferential alignment between the p-value and confidence interval is guaranteed, and implementation is straightforward using commonly used software. Copyright © 2018 John Wiley & Sons, Ltd.
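    The guaranteed alignment between the Wald p-value and the Wald confidence interval follows from both being built from the same point estimate and standard error, which is not true when a logrank p-value is paired with a Cox-model interval. A minimal sketch, using a hypothetical log hazard ratio and standard error rather than data from the paper:

```python
import math

def wald_hr_summary(log_hr: float, se: float):
    """Two-sided Wald z-test and 95% CI for a hazard ratio,
    from the log-scale estimate and its standard error."""
    z = abs(log_hr) / se
    p = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))  # two-sided normal p-value
    lo = math.exp(log_hr - 1.96 * se)
    hi = math.exp(log_hr + 1.96 * se)
    return p, lo, hi

# Hypothetical estimate: log HR = -0.25 with SE = 0.12.
p, lo, hi = wald_hr_summary(-0.25, 0.12)
# p < 0.05 exactly when the CI excludes HR = 1, so the two never disagree.
```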

  4. Study design and sampling intensity for demographic analyses of bear populations

    USGS Publications Warehouse

    Harris, R.B.; Schwartz, C.C.; Mace, R.D.; Haroldson, M.A.

    2011-01-01

    The rate of population change through time (λ) is a fundamental element of a wildlife population's conservation status, yet estimating it with acceptable precision for bears is difficult. For studies that follow known (usually marked) bears, λ can be estimated during some defined time by applying either life-table or matrix projection methods to estimates of individual vital rates. Usually however, confidence intervals surrounding the estimate are broader than one would like. Using an estimator suggested by Doak et al. (2005), we explored the precision to be expected in λ from demographic analyses of typical grizzly (Ursus arctos) and American black (U. americanus) bear data sets. We also evaluated some trade-offs among vital rates in sampling strategies. Confidence intervals around λ were more sensitive to adding to the duration of a short (e.g., 3 yrs) than a long (e.g., 10 yrs) study, and more sensitive to adding additional bears to studies with small (e.g., 10 adult females/yr) than large (e.g., 30 adult females/yr) sample sizes. Confidence intervals of λ projected using process-only variance of vital rates were only slightly smaller than those projected using total variances of vital rates. Under sampling constraints typical of most bear studies, it may be more efficient to invest additional resources into monitoring recruitment and juvenile survival rates of females already a part of the study, than to simply increase the sample size of study females. © 2011 International Association for Bear Research and Management.

  5. Using Stochastic Approximation Techniques to Efficiently Construct Confidence Intervals for Heritability.

    PubMed

    Schweiger, Regev; Fisher, Eyal; Rahmani, Elior; Shenhav, Liat; Rosset, Saharon; Halperin, Eran

    2018-06-22

    Estimation of heritability is an important task in genetics. The use of linear mixed models (LMMs) to determine narrow-sense single-nucleotide polymorphism (SNP)-heritability and related quantities has received much recent attention, due to its ability to account for variants with small effect sizes. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. The common way to report the uncertainty in REML estimation uses standard errors (SEs), which rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals (CIs). In addition, for larger data sets (e.g., tens of thousands of individuals), the construction of SEs itself may require considerable time, as it requires expensive matrix inversions and multiplications. Here, we present FIESTA (Fast confidence IntErvals using STochastic Approximation), a method for constructing accurate CIs. FIESTA is based on parametric bootstrap sampling and therefore avoids unjustified assumptions on the distribution of the heritability estimator. FIESTA uses stochastic approximation techniques, which accelerate the construction of CIs by several orders of magnitude compared with previous approaches and with the analytical approximation used by SEs. FIESTA builds accurate CIs rapidly, requiring, for example, only several seconds for data sets of tens of thousands of individuals, making it a very fast solution to the problem of building accurate CIs for heritability for all data set sizes.
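    The parametric-bootstrap principle behind FIESTA can be illustrated in miniature: simulate repeatedly from the fitted model, re-estimate on each simulated data set, and take percentiles of the re-estimates. The sketch below applies the idea to a normal mean, not to the LMM heritability estimator that FIESTA actually targets:

```python
import random
import statistics

def parametric_bootstrap_ci(data, reps=2000, alpha=0.05, seed=1):
    """Percentile CI for the mean under a fitted normal model.

    Generic parametric-bootstrap sketch, not the FIESTA LMM machinery:
    fit, simulate from the fit, re-estimate, take percentiles."""
    rng = random.Random(seed)
    mu, sigma = statistics.mean(data), statistics.stdev(data)
    boots = []
    for _ in range(reps):
        sim = [rng.gauss(mu, sigma) for _ in data]  # simulate from fitted model
        boots.append(statistics.mean(sim))          # re-estimate on simulated data
    boots.sort()
    return boots[int(reps * alpha / 2)], boots[int(reps * (1 - alpha / 2)) - 1]

rng = random.Random(0)
sample = [rng.gauss(10.0, 2.0) for _ in range(200)]
lo, hi = parametric_bootstrap_ci(sample)
```

    Because the replicates are drawn from the fitted model rather than from an asymptotic normal approximation, the resulting interval respects constraints such as a bounded parameter space.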

  6. Associations between maternal periconceptional exposure to secondhand tobacco smoke and major birth defects.

    PubMed

    Hoyt, Adrienne T; Canfield, Mark A; Romitti, Paul A; Botto, Lorenzo D; Anderka, Marlene T; Krikov, Sergey V; Tarpey, Morgan K; Feldkamp, Marcia L

    2016-11-01

    While associations between secondhand smoke and a few birth defects (namely, oral clefts and neural tube defects) have been noted in the scientific literature, to our knowledge, there is no single or comprehensive source of population-based information on its associations with a range of birth defects among nonsmoking mothers. We utilized data from the National Birth Defects Prevention Study, a large population-based multisite case-control study, to examine associations between maternal reports of periconceptional exposure to secondhand smoke in the household or workplace/school and major birth defects. The multisite National Birth Defects Prevention Study is the largest case-control study of birth defects to date in the United States. We selected cases from birth defect groups having >100 total cases, as well as all nonmalformed controls (10,200), from delivery years 1997 through 2009; 44 birth defects were examined. After excluding cases and controls from multiple births, as well as those whose mothers reported active smoking or pregestational diabetes, we analyzed data on periconceptional secondhand smoke exposure, encompassing the period from 1 month prior to conception through the first trimester. For craniosynostosis, we additionally examined the effect of exposure in the second and third trimesters, given this defect's potential sensitivity to teratogens throughout pregnancy. Covariates included in all final models of birth defects with ≥5 exposed mothers were study site, previous live births, time between estimated date of delivery and interview date, maternal age at estimated date of delivery, race/ethnicity, education, body mass index, nativity, household income divided by number of people supported by this income, periconceptional alcohol consumption, and folic acid supplementation.
For each birth defect examined, we used logistic regression analyses to estimate both crude and adjusted odds ratios and 95% confidence intervals for both isolated and total case groups for various sources of exposure (household only; workplace/school only; household and workplace/school; household or workplace/school). The prevalence of secondhand smoke exposure only across all sources ranged from 12.9-27.8% for cases and 14.5-15.8% for controls. The adjusted odds ratios for any vs no secondhand smoke exposure in the household or workplace/school and isolated birth defects were significantly elevated for neural tube defects (anencephaly: adjusted odds ratio, 1.66; 95% confidence interval, 1.22-2.25; and spina bifida: adjusted odds ratio, 1.49; 95% confidence interval, 1.20-1.86); orofacial clefts (cleft lip without cleft palate: adjusted odds ratio, 1.41; 95% confidence interval, 1.10-1.81; cleft lip with or without cleft palate: adjusted odds ratio, 1.24; 95% confidence interval, 1.05-1.46; cleft palate alone: adjusted odds ratio, 1.31; 95% confidence interval, 1.06-1.63); bilateral renal agenesis (adjusted odds ratio, 1.99; 95% confidence interval, 1.05-3.75); amniotic band syndrome-limb body wall complex (adjusted odds ratio, 1.66; 95% confidence interval, 1.10-2.51); and atrial septal defects, secundum (adjusted odds ratio, 1.37; 95% confidence interval, 1.09-1.72). There were no significant inverse associations observed. Additional studies replicating the findings are needed to better understand the moderate positive associations observed between periconceptional secondhand smoke and several birth defects in this analysis. Increased odds ratios resulting from chance (eg, multiple comparisons) or recall bias cannot be ruled out. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Knowledge level of effect size statistics, confidence intervals and meta-analysis in Spanish academic psychologists.

    PubMed

    Badenes-Ribera, Laura; Frias-Navarro, Dolores; Pascual-Soler, Marcos; Monterde-I-Bort, Héctor

    2016-11-01

    The statistical reform movement and the American Psychological Association (APA) defend the use of effect-size estimators and their confidence intervals, as well as the interpretation of the clinical significance of the findings. A survey was conducted in which academic psychologists were asked about their behavior in designing and carrying out their studies. The sample was composed of 472 participants (45.8% men). The mean number of years as a university professor was 13.56 (SD = 9.27). The use of effect-size estimators is becoming generalized, as is the consideration of meta-analytic studies. However, several inadequate practices persist: a traditional model of methodological behavior based on statistical significance tests is maintained, marked by the predominance of Cohen’s d and the unadjusted R²/η² (which are not robust to outliers, departures from normality, or violations of statistical assumptions) and by the under-reporting of confidence intervals for effect-size statistics. The paper concludes with recommendations for improving statistical practice.

  8. The Logic of Summative Confidence

    ERIC Educational Resources Information Center

    Gugiu, P. Cristian

    2007-01-01

    The constraints of conducting evaluations in real-world settings often necessitate the implementation of less than ideal designs. Unfortunately, the standard method for estimating the precision of a result (i.e., confidence intervals [CI]) cannot be used for evaluative conclusions that are derived from multiple indicators, measures, and data…

  9. Pregnancy and birth outcomes in couples with infertility with and without assisted reproductive technology: with an emphasis on US population-based studies.

    PubMed

    Luke, Barbara

    2017-09-01

    Infertility, defined as the inability to conceive within 1 year of unprotected intercourse, affects an estimated 80 million individuals worldwide, or 10-15% of couples of reproductive age. Assisted reproductive technology includes all infertility treatments to achieve conception; in vitro fertilization is the process by which an oocyte is fertilized by semen outside the body; non-in vitro fertilization assisted reproductive technology treatments include ovulation induction, artificial insemination, and intrauterine insemination. Use of assisted reproductive technology has risen steadily in the United States during the past 2 decades due to several reasons, including childbearing at older maternal ages and increasing insurance coverage. The number of in vitro fertilization cycles in the United States has nearly doubled from 2000 through 2013 and currently 1.7% of all live births in the United States are the result of this technology. Since the birth of the first child from in vitro fertilization >35 years ago, >5 million babies have been born from in vitro fertilization, half within the past 6 years. It is estimated that 1% of singletons, 19% of twins, and 25% of triplet or higher multiples are due to in vitro fertilization, and 4%, 21%, and 52%, respectively, are due to non-in vitro fertilization assisted reproductive technology. Higher plurality at birth results in a >10-fold increase in the risks for prematurity and low birthweight in twins vs singletons (adjusted odds ratio, 11.84; 95% confidence interval, 10.56-13.27 and adjusted odds ratio, 10.68; 95% confidence interval, 9.45-12.08, respectively). The use of donor oocytes is associated with increased risks for pregnancy-induced hypertension (adjusted odds ratio, 1.43; 95% confidence interval, 1.14-1.78) and prematurity (adjusted odds ratio, 1.43; 95% confidence interval, 1.11-1.83). 
The use of thawed embryos is associated with higher risks for pregnancy-induced hypertension (adjusted odds ratio, 1.30; 95% confidence interval, 1.08-1.57) and large-for-gestation birthweight (adjusted odds ratio, 1.74; 95% confidence interval, 1.45-2.08). Among singletons, in vitro fertilization is associated with increased risk of severe maternal morbidity compared with fertile deliveries (vaginal: adjusted odds ratio, 2.27; 95% confidence interval, 1.78-2.88; cesarean: adjusted odds ratio, 1.67; 95% confidence interval, 1.40-1.98, respectively) and subfertile deliveries (vaginal: adjusted odds ratio, 1.97; 95% confidence interval, 1.30-3.00; cesarean: adjusted odds ratio, 1.75; 95% confidence interval, 1.30-2.35, respectively). Among twins, cesarean in vitro fertilization deliveries have significantly greater severe maternal morbidity compared to cesarean fertile deliveries (adjusted odds ratio, 1.48; 95% confidence interval, 1.14-1.93). Subfertility, with or without in vitro fertilization or non-in vitro fertilization infertility treatments to achieve a pregnancy, is associated with increased risks of adverse maternal and perinatal outcomes. The major risk from in vitro fertilization treatments of multiple births (and the associated excess of perinatal morbidity) has been reduced over time, with fewer and better-quality embryos being transferred. Copyright © 2017. Published by Elsevier Inc.

  10. Estimation of parameters of dose volume models and their confidence limits

    NASA Astrophysics Data System (ADS)

    van Luijk, P.; Delvigne, T. C.; Schilstra, C.; Schippers, J. M.

    2003-07-01

    Predictions of the normal-tissue complication probability (NTCP) for the ranking of treatment plans are based on fits of dose-volume models to clinical and/or experimental data. In the literature several different fit methods are used. In this work, frequently used methods and techniques for fitting NTCP models to dose-response data to establish dose-volume effects are discussed. The techniques are tested for their usability with dose-volume data and NTCP models. Different methods to estimate the confidence intervals of the model parameters are part of this study. From a critical-volume (CV) model with biologically realistic parameters, a primary dataset was generated that serves as the reference for this study and is describable by the NTCP model. The CV model was fitted to this dataset. From the resulting parameters and the CV model, 1000 secondary datasets were generated by Monte Carlo simulation. All secondary datasets were fitted to obtain 1000 parameter sets of the CV model. Thus the 'real' spread in fit results due to statistical spreading in the data was obtained and compared with estimates of the confidence intervals obtained by different methods applied to the primary dataset. The confidence limits of the parameters of one dataset were estimated using several methods: the covariance matrix, the jackknife method, and direct inspection of the likelihood landscape. These results were compared with the spread of the parameters obtained from the secondary parameter sets. For the estimation of confidence intervals on NTCP predictions, three methods were tested. Firstly, propagation of errors using the covariance matrix was used. Secondly, the width of a bundle of curves generated from parameter sets within the one-standard-deviation region of the likelihood space was investigated. Thirdly, many parameter sets and their likelihoods were used to create a likelihood-weighted probability distribution of the NTCP.
It is concluded that for the type of dose-response data used here, only a full likelihood analysis will produce reliable results. The often-used approximations, such as the use of the covariance matrix, produce inconsistent confidence limits on both the parameter sets and the resulting NTCP values.
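    A one-parameter analogue of the full likelihood analysis recommended here is the likelihood-ratio confidence interval: keep every parameter value whose log-likelihood lies within half the chi-squared critical value of the maximum. A minimal sketch for a single binomial response probability, with hypothetical counts:

```python
import math

def binom_loglik(p, k, n):
    """Binomial log-likelihood (dropping the constant binomial coefficient)."""
    return k * math.log(p) + (n - k) * math.log(1 - p)

def likelihood_ratio_ci(k, n, crit=3.84, grid=10_000):
    """All p with 2*(logL(p_hat) - logL(p)) <= crit, a 95% CI when crit
    is the 0.95 quantile of chi-squared with 1 degree of freedom."""
    l_max = binom_loglik(k / n, k, n)
    inside = [i / grid for i in range(1, grid)
              if 2 * (l_max - binom_loglik(i / grid, k, n)) <= crit]
    return min(inside), max(inside)

# Hypothetical dose bin: 14 complications observed among 40 subjects.
lo, hi = likelihood_ratio_ci(14, 40)
```

    Unlike a covariance-matrix interval, this interval is asymmetric around the estimate and can never extend outside the valid parameter range.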

  11. Experimental optimization of the number of blocks by means of algorithms parameterized by confidence interval in popcorn breeding.

    PubMed

    Paula, T O M; Marinho, C D; Amaral Júnior, A T; Peternelli, L A; Gonçalves, L S A

    2013-06-27

    The objective of this study was to determine the optimal number of repetitions to be used in competition trials of popcorn traits related to production and quality, including grain yield and expansion capacity. The experiments were conducted in 3 environments representative of the north and northwest regions of the State of Rio de Janeiro with 10 Brazilian genotypes of popcorn, consisting of 4 commercial hybrids (IAC 112, IAC 125, Zélia, and Jade), 4 improved varieties (BRS Ângela, UFVM-2 Barão de Viçosa, Beija-flor, and Viçosa), and 2 experimental populations (UNB2U-C3 and UNB2U-C4). The experimental design utilized was a randomized complete block design with 7 repetitions. The bootstrap method was employed to obtain samples of all of the possible combinations within the 7 blocks. Subsequently, the confidence intervals of the parameters of interest were calculated for all simulated data sets. The optimal number of repetitions for each trait was taken to be the smallest number for which all of the estimates of the parameters in question fell within the confidence interval. The estimated number of repetitions varied with the parameter estimated, the variable evaluated, and the environment, ranging from 2 to 7. Only the expansion capacity trait in the Colégio Agrícola environment (for residual variance and coefficient of variation) and the number of ears per plot in the Itaocara environment (for coefficient of variation) required 7 repetitions to fall within the confidence interval. Thus, for the 3 studies conducted, we conclude that 6 repetitions are optimal for obtaining high experimental precision.

  12. Prediction of the distillation temperatures of crude oils using ¹H NMR and support vector regression with estimated confidence intervals.

    PubMed

    Filgueiras, Paulo R; Terra, Luciana A; Castro, Eustáquio V R; Oliveira, Lize M S L; Dias, Júlio C M; Poppi, Ronei J

    2015-09-01

    This paper aims to estimate the temperature equivalent to 10% (T10%), 50% (T50%) and 90% (T90%) of distilled volume in crude oils using ¹H NMR and support vector regression (SVR). Confidence intervals for the predicted values were calculated using a boosting-type ensemble method in a procedure called ensemble support vector regression (eSVR). The estimated confidence intervals obtained by eSVR were compared with previously accepted calculations from partial least squares (PLS) models and a boosting-type ensemble applied in the PLS method (ePLS). By using the proposed boosting strategy, it was possible to identify outliers in the T10% property dataset. The eSVR procedure improved the accuracy of the distillation temperature predictions in relation to standard PLS, ePLS and SVR. For T10%, a root mean square error of prediction (RMSEP) of 11.6°C was obtained in comparison with 15.6°C for PLS, 15.1°C for ePLS and 28.4°C for SVR. The RMSEPs for T50% were 24.2°C, 23.4°C, 22.8°C and 14.4°C for PLS, ePLS, SVR and eSVR, respectively. For T90%, the values of RMSEP were 39.0°C, 39.9°C and 39.9°C for PLS, ePLS, SVR and eSVR, respectively. The confidence intervals calculated by the proposed boosting methodology presented acceptable values for the three properties analyzed; however, they were lower than those calculated by the standard methodology for PLS. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Usual Physical Activity and Hip Fracture in Older Men: An Application of Semiparametric Methods to Observational Data

    PubMed Central

    Mackey, Dawn C.; Hubbard, Alan E.; Cawthon, Peggy M.; Cauley, Jane A.; Cummings, Steven R.; Tager, Ira B.

    2011-01-01

    Few studies have examined the relation between usual physical activity level and rate of hip fracture in older men or applied semiparametric methods from the causal inference literature that estimate associations without assuming a particular parametric model. Using the Physical Activity Scale for the Elderly, the authors measured usual physical activity level at baseline (2000–2002) in 5,682 US men ≥65 years of age who were enrolled in the Osteoporotic Fractures in Men Study. Physical activity levels were classified as low (bottom quartile of Physical Activity Scale for the Elderly score), moderate (middle quartiles), or high (top quartile). Hip fractures were confirmed by central review. Marginal associations between physical activity and hip fracture were estimated with 3 estimation methods: inverse probability-of-treatment weighting, G-computation, and doubly robust targeted maximum likelihood estimation. During 6.5 years of follow-up, 95 men (1.7%) experienced a hip fracture. The unadjusted risk of hip fracture was lower in men with a high physical activity level versus those with a low physical activity level (relative risk = 0.51, 95% confidence interval: 0.28, 0.92). In semiparametric analyses that controlled confounding, hip fracture risk was not lower with moderate (e.g., targeted maximum likelihood estimation relative risk = 0.92, 95% confidence interval: 0.62, 1.44) or high (e.g., targeted maximum likelihood estimation relative risk = 0.88, 95% confidence interval: 0.53, 2.03) physical activity relative to low. This study does not support a protective effect of usual physical activity on hip fracture in older men. PMID:21303805
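    The inverse probability-of-treatment weighting (IPTW) estimator used above can be sketched on a toy data set with a single binary confounder. The counts below are invented so that the stratum-specific risk ratio is exactly 0.5 while treatment assignment depends on the confounder, making the crude ratio (≈0.65) confounded:

```python
# Toy data set: (confounder W, treatment A, outcome Y, count). Counts are
# invented so that the risk ratio is 0.5 within both strata of W, while
# treatment assignment depends on W (so the crude RR ≈ 0.65 is biased).
data = [
    (0, 1, 1, 3), (0, 1, 0, 27), (0, 0, 1, 14), (0, 0, 0, 56),
    (1, 1, 1, 14), (1, 1, 0, 56), (1, 0, 1, 12), (1, 0, 0, 18),
]

def propensity(w):
    """Empirical P(A=1 | W=w), the propensity score within a stratum."""
    treated = sum(c for (wi, a, _, c) in data if wi == w and a == 1)
    total = sum(c for (wi, _, _, c) in data if wi == w)
    return treated / total

def iptw_risk(arm):
    """Weighted outcome risk in one arm; weights = 1 / P(observed arm | W)."""
    num = den = 0.0
    for w, a, y, c in data:
        if a != arm:
            continue
        p = propensity(w)
        wt = 1 / p if arm == 1 else 1 / (1 - p)
        num += c * wt * y
        den += c * wt
    return num / den

rr_iptw = iptw_risk(1) / iptw_risk(0)  # recovers the unconfounded RR of 0.5
```

    G-computation and targeted maximum likelihood estimation address the same marginal quantity by modeling the outcome rather than (or in addition to) the treatment mechanism.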

  14. Accuracy of Cameriere's cut-off value for third molar in assessing 18 years of age.

    PubMed

    De Luca, S; Biagi, R; Begnoni, G; Farronato, G; Cingolani, M; Merelli, V; Ferrante, L; Cameriere, R

    2014-02-01

    Due to increasingly numerous international migrations, estimating the age of unaccompanied minors is becoming of enormous significance for forensic professionals who are required to deliver expert opinions. The third molar tooth is one of the few anatomical sites available for estimating the age of individuals in late adolescence. This study verifies the accuracy of Cameriere's cut-off value of the third molar index (I3M) in assessing 18 years of age. For this purpose, a sample of orthopantomographs (OPTs) of 397 living subjects aged between 13 and 22 years (192 female and 205 male) was analyzed. Age distribution gradually decreases as I3M increases in both males and females. The results show that the sensitivity of the test was 86.6%, with a 95% confidence interval of (80.8%, 91.1%), and its specificity was 95.7%, with a 95% confidence interval of (92.1%, 98%). The proportion of correctly classified individuals was 91.4%. Estimated post-test probability, p was 95.6%, with a 95% confidence interval of (92%, 98%). Hence, the probability that a subject positive on the test (i.e., I3M<0.08) was 18 years of age or older was 95.6%. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
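    The sensitivity and specificity quoted above are binomial proportions, so confidence intervals for them can be reproduced from a 2x2 table. The counts below are hypothetical, chosen only so that they total 397 subjects and reproduce the reported 86.6% and 95.7% rates (the actual table is not given in the abstract); the interval shown is the Wilson score interval, one common choice and not necessarily the study's exact method.

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a binomial proportion."""
    p = k / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

# Hypothetical 2x2 counts (rows: adults vs. minors; columns: I3M < 0.08 or not).
tp, fn = 181, 28   # adults testing positive / negative
tn, fp = 180, 8    # minors testing negative / positive

sens = tp / (tp + fn)              # 181/209 = 86.6%
spec = tn / (tn + fp)              # 180/188 = 95.7%
sens_ci = wilson_ci(tp, tp + fn)
spec_ci = wilson_ci(tn, tn + fp)
```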

  15. Large Sample Confidence Intervals for Item Response Theory Reliability Coefficients

    ERIC Educational Resources Information Center

    Andersson, Björn; Xin, Tao

    2018-01-01

    In applications of item response theory (IRT), an estimate of the reliability of the ability estimates or sum scores is often reported. However, analytical expressions for the standard errors of the estimators of the reliability coefficients are not available in the literature and therefore the variability associated with the estimated reliability…

  16. Efficiently estimating salmon escapement uncertainty using systematically sampled data

    USGS Publications Warehouse

    Reynolds, Joel H.; Woody, Carol Ann; Gove, Nancy E.; Fair, Lowell F.

    2007-01-01

    Fish escapement is generally monitored using nonreplicated systematic sampling designs (e.g., via visual counts from towers or hydroacoustic counts). These sampling designs support a variety of methods for estimating the variance of the total escapement. Unfortunately, all the methods give biased results, with the magnitude of the bias being determined by the underlying process patterns. Fish escapement commonly exhibits positive autocorrelation and nonlinear patterns, such as diurnal and seasonal patterns. For these patterns, poor choice of variance estimator can needlessly increase the uncertainty managers have to deal with in sustaining fish populations. We illustrate the effect of sampling design and variance estimator choice on variance estimates of total escapement for anadromous salmonids from systematic samples of fish passage. Using simulated tower counts of sockeye salmon Oncorhynchus nerka escapement on the Kvichak River, Alaska, five variance estimators for nonreplicated systematic samples were compared to determine the least biased. Using the least biased variance estimator, four confidence interval estimators were compared for expected coverage and mean interval width. Finally, five systematic sampling designs were compared to determine the design giving the smallest average variance estimate for total annual escapement. For nonreplicated systematic samples of fish escapement, all variance estimators were positively biased. Compared to the other estimators, the least biased estimator reduced bias by 12% to 98% on average. All confidence intervals gave effectively identical results. Replicated systematic sampling designs consistently provided the smallest average estimated variance among those compared.
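    The contrast above between variance estimators for systematic samples can be illustrated with a simulated diurnal passage series: the ordinary SRS variance formula treats the smooth trend as sampling noise, while a successive-difference estimator largely removes it. Everything below (series, sampling interval, estimator forms) is a textbook-style sketch, not the paper's simulation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated hourly fish passage over a 10-day run with a diurnal cycle.
N = 240
t = np.arange(N)
passage = 100 + 80 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 10, N)

# Nonreplicated systematic sample: one count every 4 hours.
k = 4
sample = passage[::k]
n = len(sample)
f = n / N   # sampling fraction

def var_srs(y):
    """Variance of the estimated total, (wrongly) treating the sample as SRS."""
    return N ** 2 * (1 - f) * y.var(ddof=1) / n

def var_succdiff(y):
    """Successive-difference estimator, which discounts the smooth trend."""
    sd2 = np.sum(np.diff(y) ** 2) / (2 * (n - 1))
    return N ** 2 * (1 - f) * sd2 / n
```

    On this trending series the SRS formula roughly doubles the estimated variance relative to the successive-difference form.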

  17. Testing 40 Predictions from the Transtheoretical Model Again, with Confidence

    ERIC Educational Resources Information Center

    Velicer, Wayne F.; Brick, Leslie Ann D.; Fava, Joseph L.; Prochaska, James O.

    2013-01-01

    Testing Theory-based Quantitative Predictions (TTQP) represents an alternative to traditional Null Hypothesis Significance Testing (NHST) procedures and is more appropriate for theory testing. The theory generates explicit effect size predictions and these effect size estimates, with related confidence intervals, are used to test the predictions.…

  18. Confidence intervals for the first crossing point of two hazard functions.

    PubMed

    Cheng, Ming-Yen; Qiu, Peihua; Tan, Xianming; Tu, Dongsheng

    2009-12-01

    The phenomenon of crossing hazard rates is common in clinical trials with time to event endpoints. Many methods have been proposed for testing equality of hazard functions against a crossing hazards alternative. However, relatively few approaches are available in the literature for point or interval estimation of the crossing time point. The problem of constructing confidence intervals for the first crossing time point of two hazard functions is considered in this paper. After reviewing a recent procedure based on Cox proportional hazards modeling with a Box-Cox transformation of the time to event, a nonparametric procedure using a kernel smoothing estimate of the hazard ratio is proposed. Both procedures are evaluated by Monte Carlo simulations and applied to two clinical trial datasets.

  19. Estimating age at a specified length from the von Bertalanffy growth function

    USGS Publications Warehouse

    Ogle, Derek H.; Isermann, Daniel A.

    2017-01-01

    Estimating the time required (i.e., age) for fish in a population to reach a specific length (e.g., legal harvest length) is useful for understanding population dynamics and simulating the potential effects of length-based harvest regulations. The age at which a population reaches a specific mean length is typically estimated by fitting a von Bertalanffy growth function to length-at-age data and then rearranging the best-fit equation to solve for age at the specified length. This process precludes the use of standard frequentist methods to compute confidence intervals and compare estimates of age at the specified length among populations. We provide a parameterization of the von Bertalanffy growth function that has age at a specified length as a parameter. With this parameterization, age at a specified length is directly estimated, and standard methods can be used to construct confidence intervals and make among-group comparisons for this parameter. We demonstrate use of the new parameterization with two data sets.
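    The rearrangement described above is elementary: solving L = Linf * (1 - exp(-K * (t - t0))) for t gives t = t0 - ln(1 - L/Linf) / K. A minimal sketch, with hypothetical parameter values (the abstract reports none):

```python
from math import exp, log

def vbgf_length(t, Linf, K, t0):
    """von Bertalanffy growth function: mean length at age t."""
    return Linf * (1 - exp(-K * (t - t0)))

def age_at_length(L, Linf, K, t0):
    """Rearranged VBGF: age at which mean length reaches L (requires L < Linf)."""
    return t0 - log(1 - L / Linf) / K

# Hypothetical parameter values for illustration.
Linf, K, t0 = 600.0, 0.25, -0.5
age_legal = age_at_length(450.0, Linf, K, t0)   # age at a 450 mm harvest length
```

    The paper's contribution is to make this age a fitted parameter of the model itself, so standard errors and confidence intervals come out of the fit directly rather than from the rearranged point estimate above.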

  20. Improved Margin of Error Estimates for Proportions in Business: An Educational Example

    ERIC Educational Resources Information Center

    Arzumanyan, George; Halcoussis, Dennis; Phillips, G. Michael

    2015-01-01

    This paper presents the Agresti & Coull "Adjusted Wald" method for computing confidence intervals and margins of error for common proportion estimates. The presented method is easily implementable by business students and practitioners and provides more accurate estimates of proportions particularly in extreme samples and small…
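    The Agresti & Coull adjustment mentioned above is simple enough to state directly: add z^2/2 pseudo-successes and z^2/2 pseudo-failures (about 2 of each at 95% confidence), then compute the ordinary Wald interval on the adjusted proportion. A minimal sketch:

```python
from math import sqrt

def agresti_coull(successes, n, z=1.96):
    """Agresti-Coull 'adjusted Wald': add z^2/2 successes and z^2/2
    failures, apply the usual Wald formula, and clip to [0, 1]."""
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    half = z * sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - half), min(1.0, p_adj + half)

# Extreme sample where the plain Wald interval degenerates to [0, 0].
lo, hi = agresti_coull(0, 20)
```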

  1. Population-attributable fraction of tubal factor infertility associated with chlamydia.

    PubMed

    Gorwitz, Rachel J; Wiesenfeld, Harold C; Chen, Pai-Lien; Hammond, Karen R; Sereday, Karen A; Haggerty, Catherine L; Johnson, Robert E; Papp, John R; Kissin, Dmitry M; Henning, Tara C; Hook, Edward W; Steinkampf, Michael P; Markowitz, Lauri E; Geisler, William M

    2017-09-01

    Chlamydia trachomatis infection is highly prevalent among young women in the United States. Prevention of long-term sequelae of infection, including tubal factor infertility, is a primary goal of chlamydia screening and treatment activities. However, the population-attributable fraction of tubal factor infertility associated with chlamydia is unclear, and optimal measures for assessing tubal factor infertility and prior chlamydia in epidemiological studies have not been established. Black women have increased rates of chlamydia and tubal factor infertility compared with White women but have been underrepresented in prior studies of the association of chlamydia and tubal factor infertility. The objectives of the study were to estimate the population-attributable fraction of tubal factor infertility associated with Chlamydia trachomatis infection by race (Black, non-Black) and assess how different definitions of Chlamydia trachomatis seropositivity and tubal factor infertility affect population-attributable fraction estimates. We conducted a case-control study, enrolling infertile women attending infertility practices in Birmingham, AL, and Pittsburgh, PA, during October 2012 through June 2015. Tubal factor infertility case status was primarily defined by unilateral or bilateral fallopian tube occlusion (cases) or bilateral fallopian tube patency (controls) on hysterosalpingogram. Alternate tubal factor infertility definitions incorporated history suggestive of tubal damage or were based on laparoscopic evidence of tubal damage. We aimed to enroll all eligible women, with an expected ratio of 1 and 3 controls per case for Black and non-Black women, respectively. We assessed Chlamydia trachomatis seropositivity with a commercial assay and a more sensitive research assay; our primary measure of seropositivity was defined as positivity on either assay. 
We estimated Chlamydia trachomatis seropositivity and calculated Chlamydia trachomatis-tubal factor infertility odds ratios and population-attributable fraction, stratified by race. We enrolled 107 Black women (47 cases, 60 controls) and 620 non-Black women (140 cases, 480 controls). Chlamydia trachomatis seropositivity by either assay was 81% (95% confidence interval, 73-89%) among Black and 31% (95% confidence interval, 28-35%) among non-Black participants (P < .001). Using the primary Chlamydia trachomatis seropositivity and tubal factor infertility definitions, no significant association was detected between chlamydia and tubal factor infertility among Blacks (odds ratio, 1.22, 95% confidence interval, 0.45-3.28) or non-Blacks (odds ratio, 1.41, 95% confidence interval, 0.95-2.09), and the estimated population-attributable fraction was 15% (95% confidence interval, -97% to 68%) among Blacks and 11% (95% confidence interval, -3% to 23%) among non-Blacks. Use of alternate serological measures and tubal factor infertility definitions had an impact on the magnitude of the chlamydia-tubal factor infertility association and resulted in a significant association among non-Blacks. Low population-attributable fraction estimates suggest factors in addition to chlamydia contribute to tubal factor infertility in the study population. However, high background Chlamydia trachomatis seropositivity among controls, most striking among Black participants, could have obscured an association with tubal factor infertility and resulted in a population-attributable fraction that underestimates the true etiological role of chlamydia. Choice of chlamydia and tubal factor infertility definitions also has an impact on the odds ratio and population-attributable fraction estimates. Published by Elsevier Inc.
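    For a case-control design like the one above, a common route to the population-attributable fraction is Miettinen's case-based formula, PAF = p_e * (RR - 1) / RR, with p_e the exposure prevalence among cases and the OR standing in for the RR. The sketch below plugs in the Black-stratum OR from the abstract together with an assumed case seropositivity (not reported above) chosen to land near the reported 15%:

```python
def paf_miettinen(p_exposed_cases, odds_ratio):
    """Miettinen's case-based population-attributable fraction, using the
    odds ratio as a rare-disease approximation to the relative risk."""
    return p_exposed_cases * (odds_ratio - 1) / odds_ratio

# OR = 1.22 is the Black-stratum estimate from the abstract; the 0.83 case
# seropositivity is an assumption (the abstract reports 81% for all Black
# participants, cases and controls combined).
paf_black = paf_miettinen(0.83, 1.22)   # close to the reported 15%
```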

  2. An approach for sample size determination of average bioequivalence based on interval estimation.

    PubMed

    Chiang, Chieh; Hsiao, Chin-Fu

    2017-03-30

    In 1992, the US Food and Drug Administration declared that two drugs demonstrate average bioequivalence (ABE) if the log-transformed mean difference of pharmacokinetic responses lies in (-0.223, 0.223). The most widely used approach for assessing ABE is the two one-sided tests procedure. More specifically, ABE is concluded when a 100(1 - 2α)% confidence interval for the mean difference falls within (-0.223, 0.223). Bioequivalence studies are usually conducted with a crossover design; however, when a drug has a long half-life, a parallel design may be preferred. In this study, a two-sided interval estimation - such as Satterthwaite's, Cochran-Cox's, or Howe's approximation - is used for assessing parallel ABE. We show that the asymptotic joint distribution of the lower and upper confidence limits is bivariate normal, and thus the sample size can be calculated based on the asymptotic power so that the confidence interval falls within (-0.223, 0.223). Simulation studies also show that the proposed method achieves sufficient empirical power. A real example is provided to illustrate the proposed method. Copyright © 2017 John Wiley & Sons, Ltd.
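    The CI-inclusion rule described above (conclude ABE iff the 90% CI sits inside (-0.223, 0.223)) can be sketched for a parallel design. This sketch uses a Satterthwaite-style standard error with a normal quantile in place of the t quantile (a large-sample simplification) and simulated log-PK data; all numbers are illustrative.

```python
import math
import random

Z90 = 1.6449   # normal quantile standing in for the t quantile (large samples)

def mean_var(xs):
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((xi - m) ** 2 for xi in xs) / (n - 1)
    return m, s2, n

def abe_by_ci(test, ref, limit=0.223):
    """Conclude ABE iff the 90% CI for the log-mean difference lies in
    (-limit, limit); Satterthwaite-style SE for a parallel design."""
    mt, vt, nt = mean_var(test)
    mr, vr, nr = mean_var(ref)
    se = math.sqrt(vt / nt + vr / nr)
    diff = mt - mr
    lo, hi = diff - Z90 * se, diff + Z90 * se
    return lo, hi, (-limit < lo and hi < limit)

# Simulated log-transformed PK responses for two parallel groups.
random.seed(3)
test_arm = [random.gauss(0.02, 0.15) for _ in range(60)]
ref_arm = [random.gauss(0.00, 0.15) for _ in range(60)]
lo, hi, bioequivalent = abe_by_ci(test_arm, ref_arm)
```

    The paper's sample-size method works backwards from this rule: choose n so that, asymptotically, both limits fall inside (-0.223, 0.223) with the desired power.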

  3. Comparative Diagnostic Performance of Ultrasonography and 99mTc-Sestamibi Scintigraphy for Parathyroid Adenoma in Primary Hyperparathyroidism; Systematic Review and Meta-Analysis

    PubMed

    Nafisi Moghadam, Reza; Amlelshahbaz, Amir Pasha; Namiranian, Nasim; Sobhan-Ardekani, Mohammad; Emami-Meybodi, Mahmood; Dehghan, Ali; Rahmanian, Masoud; Razavi-Ratki, Seid Kazem

    2017-12-28

    Objective: Ultrasonography (US) and parathyroid scintigraphy (PS) with 99mTc-MIBI are common methods for preoperative localization of parathyroid adenomas, but discrepancies exist with regard to their diagnostic accuracy. The aim of the study was to compare PS and US for localization of parathyroid adenoma through a systematic review and meta-analysis of the literature. Methods: PubMed, Scopus (EMbase), Web of Science and the reference lists of all included studies were searched up to 1st January 2016. The search strategy followed the PICO framework. Heterogeneity between the studies was assessed, with P < 0.1 indicating significance. Pooled estimates of sensitivity, specificity and positive predictive value of SPECT and ultrasonography, with 99% confidence intervals (CIs), were obtained by pooling the available data. Data analysis was performed using Meta-DiSc software (version 1.4). Results: Among 188 studies, after deletion of duplicated studies (75), a total of 113 titles and abstracts were screened; from these, 12 studies were selected. The meta-analysis determined a pooled sensitivity for scintigraphy of 83% [99% confidence interval (CI) 96.358-97.412] and for ultrasonography of 80% [99% confidence interval (CI) 76-83]. Similar results for specificity were also obtained for both approaches. Conclusion: According to this meta-analysis, there were no significant differences between the two methods in terms of sensitivity and specificity; their 99% confidence intervals overlapped, and the overall characteristics of the two methods are similar. Creative Commons Attribution License

  4. Peaks Over Threshold (POT): A methodology for automatic threshold estimation using goodness of fit p-value

    NASA Astrophysics Data System (ADS)

    Solari, Sebastián; Egüen, Marta; Polo, María José; Losada, Miguel A.

    2017-04-01

    Threshold estimation in the Peaks Over Threshold (POT) method and the impact of the estimation method on the calculation of high return period quantiles and their uncertainty (or confidence intervals) are issues that are still unresolved. In the past, methods based on goodness of fit tests and EDF-statistics have yielded satisfactory results, but their use has not yet been systematized. This paper proposes a methodology for automatic threshold estimation, based on the Anderson-Darling EDF-statistic and goodness of fit test. When combined with bootstrapping techniques, this methodology can be used to quantify both the uncertainty of threshold estimation and its impact on the uncertainty of high return period quantiles. This methodology was applied to several simulated series and to four precipitation/river flow data series. The results obtained confirmed its robustness. For the measured series, the estimated thresholds corresponded to those obtained by nonautomatic methods. Moreover, even though the uncertainty of the threshold estimation was high, this did not have a significant effect on the width of the confidence intervals of high return period quantiles.
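    A simplified version of the threshold scan described above: fit a generalized Pareto distribution (GPD) to the excesses over each candidate threshold and score the fit with the Anderson-Darling statistic. This sketch departs from the paper in two labeled ways: it uses a method-of-moments GPD fit rather than maximum likelihood, and it selects the minimum AD statistic rather than the goodness-of-fit p-value.

```python
import numpy as np

rng = np.random.default_rng(4)
data = 10.0 * rng.pareto(3.0, 5000)   # heavy-tailed toy "daily rainfall" series

def gpd_mom(excesses):
    """Method-of-moments GPD fit (a simpler stand-in for maximum likelihood)."""
    m, v = excesses.mean(), excesses.var(ddof=1)
    xi = 0.5 * (1 - m * m / v)
    sigma = 0.5 * m * (m * m / v + 1)
    return xi, sigma

def ad_statistic(excesses, xi, sigma):
    """Anderson-Darling statistic of the excesses against the fitted GPD."""
    xs = np.sort(excesses)
    if abs(xi) < 1e-9:
        cdf = 1 - np.exp(-xs / sigma)
    else:
        cdf = 1 - np.maximum(1 + xi * xs / sigma, 1e-12) ** (-1.0 / xi)
    cdf = np.clip(cdf, 1e-12, 1 - 1e-12)
    n = len(xs)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(cdf) + np.log(1 - cdf[::-1])))

# Scan candidate thresholds; keep the one whose GPD fit scores best.
candidates = np.quantile(data, [0.80, 0.85, 0.90, 0.95])
scores = []
for u in candidates:
    exc = data[data > u] - u
    scores.append(ad_statistic(exc, *gpd_mom(exc)))
best_threshold = candidates[int(np.argmin(scores))]
```

    Wrapping the scan in a bootstrap loop, as the paper does, yields the uncertainty of the threshold itself and its knock-on effect on high return period quantiles.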

  5. Comparing interval estimates for small sample ordinal CFA models

    PubMed Central

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. 
The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002

  6. Comparing interval estimates for small sample ordinal CFA models.

    PubMed

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. 
The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research.

  7. Optimal Measurement Interval for Emergency Department Crowding Estimation Tools.

    PubMed

    Wang, Hao; Ojha, Rohit P; Robinson, Richard D; Jackson, Bradford E; Shaikh, Sajid A; Cowden, Chad D; Shyamanand, Rath; Leuck, JoAnna; Schrader, Chet D; Zenarosa, Nestor R

    2017-11-01

    Emergency department (ED) crowding is a barrier to timely care. Several crowding estimation tools have been developed to facilitate early identification of and intervention for crowding. Nevertheless, the ideal frequency at which to measure ED crowding with these tools is unclear. Short intervals may be resource intensive, whereas long ones may not be suitable for early identification. Therefore, we aim to assess whether outcomes vary by measurement interval for 4 crowding estimation tools. Our eligible population included all patients between July 1, 2015, and June 30, 2016, who were admitted to the JPS Health Network ED, which serves an urban population. We generated 1-, 2-, 3-, and 4-hour ED crowding scores for each patient, using 4 crowding estimation tools (National Emergency Department Overcrowding Scale [NEDOCS], Severely Overcrowded, Overcrowded, and Not Overcrowded Estimation Tool [SONET], Emergency Department Work Index [EDWIN], and ED Occupancy Rate). Our outcomes of interest included ED length of stay (minutes) and leaving without being seen or eloping within 4 hours. We used accelerated failure time models to estimate interval-specific time ratios and corresponding 95% confidence limits for length of stay, in which the 1-hour interval was the reference. In addition, we used binomial regression with a log link to estimate risk ratios (RRs) and corresponding confidence limits for leaving without being seen. Our study population comprised 117,442 patients. The time ratios for length of stay were similar across intervals for each crowding estimation tool (time ratio=1.37 to 1.30 for NEDOCS, 1.44 to 1.37 for SONET, 1.32 to 1.27 for EDWIN, and 1.28 to 1.23 for ED Occupancy Rate). The RRs of leaving without being seen were also similar across intervals for each tool (RR=2.92 to 2.56 for NEDOCS, 3.61 to 3.36 for SONET, 2.65 to 2.40 for EDWIN, and 2.44 to 2.14 for ED Occupancy Rate). 
    Our findings suggest limited variation in length of stay or leaving without being seen between intervals (1 to 4 hours), regardless of which of the 4 crowding estimation tools was used. Consequently, 4 hours may be a reasonable interval for assessing crowding with these tools, which could substantially reduce the burden on ED personnel by requiring less frequent assessment of crowding. Copyright © 2017 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.

  8. On Latent Change Model Choice in Longitudinal Studies

    ERIC Educational Resources Information Center

    Raykov, Tenko; Zajacova, Anna

    2012-01-01

    An interval estimation procedure for proportion of explained observed variance in latent curve analysis is discussed, which can be used as an aid in the process of choosing between linear and nonlinear models. The method allows obtaining confidence intervals for the R[squared] indexes associated with repeatedly followed measures in longitudinal…

  9. Estimating the Distribution of the Incubation Periods of Human Avian Influenza A(H7N9) Virus Infections.

    PubMed

    Virlogeux, Victor; Li, Ming; Tsang, Tim K; Feng, Luzhao; Fang, Vicky J; Jiang, Hui; Wu, Peng; Zheng, Jiandong; Lau, Eric H Y; Cao, Yu; Qin, Ying; Liao, Qiaohong; Yu, Hongjie; Cowling, Benjamin J

    2015-10-15

    A novel avian influenza virus, influenza A(H7N9), emerged in China in early 2013 and caused severe disease in humans, with infections occurring most frequently after recent exposure to live poultry. The distribution of A(H7N9) incubation periods is of interest to epidemiologists and public health officials, but estimation of the distribution is complicated by interval censoring of exposures. Imputation of the midpoint of intervals was used in some early studies, resulting in estimated mean incubation times of approximately 5 days. In this study, we estimated the incubation period distribution of human influenza A(H7N9) infections using exposure data available for 229 patients with laboratory-confirmed A(H7N9) infection from mainland China. A nonparametric model (Turnbull) and several parametric models accounting for the interval censoring in some exposures were fitted to the data. For the best-fitting parametric model (Weibull), the mean incubation period was 3.4 days (95% confidence interval: 3.0, 3.7) and the variance was 2.9 days; results were very similar for the nonparametric Turnbull estimate. Under the Weibull model, the 95th percentile of the incubation period distribution was 6.5 days (95% confidence interval: 5.9, 7.1). The midpoint approximation for interval-censored exposures led to overestimation of the mean incubation period. Public health observation of potentially exposed persons for 7 days after exposure would be appropriate. © The Author 2015. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
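    The interval-censoring issue above amounts to maximizing a likelihood built from interval probabilities, the sum over cases of log[F(b) - F(a)], rather than from density values at imputed midpoints. A minimal Weibull sketch with invented interval data and a crude grid search in place of a real optimizer:

```python
from math import exp, gamma, log

def weibull_cdf(t, shape, scale):
    return 1 - exp(-((t / scale) ** shape)) if t > 0 else 0.0

# Invented interval-censored incubation data: each case's incubation time
# lies somewhere in (a, b] days.
intervals = [(2, 4), (1, 3), (3, 6), (2, 5), (0, 2), (3, 7), (2, 4), (1, 4)]

def loglik(shape, scale):
    """Interval-censored log-likelihood: sum of log P(a < T <= b)."""
    return sum(
        log(max(weibull_cdf(b, shape, scale) - weibull_cdf(a, shape, scale), 1e-300))
        for a, b in intervals
    )

# Crude grid search standing in for a proper optimizer.
_, shape_hat, scale_hat = max(
    (loglik(k, lam), k, lam)
    for k in (1 + 0.1 * i for i in range(1, 30))
    for lam in (1 + 0.1 * j for j in range(1, 60))
)
mean_incubation = scale_hat * gamma(1 + 1 / shape_hat)   # Weibull mean
```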

  10. A modified Wald interval for the area under the ROC curve (AUC) in diagnostic case-control studies

    PubMed Central

    2014-01-01

    Background The area under the receiver operating characteristic (ROC) curve, referred to as the AUC, is an appropriate measure for describing the overall accuracy of a diagnostic test or a biomarker in early phase trials without having to choose a threshold. There are many approaches for estimating the confidence interval for the AUC. However, all are relatively complicated to implement. Furthermore, many approaches perform poorly for large AUC values or small sample sizes. Methods The AUC is actually a probability. So we propose a modified Wald interval for a single proportion, which can be calculated on a pocket calculator. We performed a simulation study to compare this modified Wald interval (without and with continuity correction) with other intervals regarding coverage probability and statistical power. Results The main result is that the proposed modified Wald intervals maintain and exploit the type I error much better than the intervals of Agresti-Coull, Wilson, and Clopper-Pearson. The interval suggested by Bamber, the Mann-Whitney interval without transformation and also the interval of the binormal AUC are very liberal. For small sample sizes the Wald interval with continuity has a comparable coverage probability as the LT interval and higher power. For large sample sizes the results of the LT interval and of the Wald interval without continuity correction are comparable. Conclusions If individual patient data is not available, but only the estimated AUC and the total sample size, the modified Wald intervals can be recommended as confidence intervals for the AUC. For small sample sizes the continuity correction should be used. PMID:24552686
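    The idea above, treating the AUC as a single proportion, makes the interval computable from just the point estimate and the total sample size. The sketch below is the plain Wald-on-a-proportion version with an optional continuity correction; the paper's modified interval differs in detail (e.g., in its effective sample size), so this illustrates the approach rather than its exact formula.

```python
from math import sqrt

def wald_auc_ci(auc, n, z=1.96, continuity=False):
    """Wald-type interval treating the AUC as a single proportion; needs
    only the point estimate and the total sample size."""
    se = sqrt(auc * (1 - auc) / n)
    cc = 1 / (2 * n) if continuity else 0.0
    return max(0.0, auc - z * se - cc), min(1.0, auc + z * se + cc)

lo, hi = wald_auc_ci(0.85, 80, continuity=True)   # small study: use the correction
```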

  11. A modified Wald interval for the area under the ROC curve (AUC) in diagnostic case-control studies.

    PubMed

    Kottas, Martina; Kuss, Oliver; Zapf, Antonia

    2014-02-19

    The area under the receiver operating characteristic (ROC) curve, referred to as the AUC, is an appropriate measure for describing the overall accuracy of a diagnostic test or a biomarker in early phase trials without having to choose a threshold. There are many approaches for estimating the confidence interval for the AUC. However, all are relatively complicated to implement. Furthermore, many approaches perform poorly for large AUC values or small sample sizes. The AUC is actually a probability. So we propose a modified Wald interval for a single proportion, which can be calculated on a pocket calculator. We performed a simulation study to compare this modified Wald interval (without and with continuity correction) with other intervals regarding coverage probability and statistical power. The main result is that the proposed modified Wald intervals maintain and exploit the type I error much better than the intervals of Agresti-Coull, Wilson, and Clopper-Pearson. The interval suggested by Bamber, the Mann-Whitney interval without transformation and also the interval of the binormal AUC are very liberal. For small sample sizes the Wald interval with continuity has a comparable coverage probability as the LT interval and higher power. For large sample sizes the results of the LT interval and of the Wald interval without continuity correction are comparable. If individual patient data is not available, but only the estimated AUC and the total sample size, the modified Wald intervals can be recommended as confidence intervals for the AUC. For small sample sizes the continuity correction should be used.

  12. Approximate Confidence Intervals for Moment-Based Estimators of the Between-Study Variance in Random Effects Meta-Analysis

    ERIC Educational Resources Information Center

    Jackson, Dan; Bowden, Jack; Baker, Rose

    2015-01-01

    Moment-based estimators of the between-study variance are very popular when performing random effects meta-analyses. This type of estimation has many advantages including computational and conceptual simplicity. Furthermore, by using these estimators in large samples, valid meta-analyses can be performed without the assumption that the treatment…

  13. Uncertainty estimation of the self-thinning process by Maximum-Entropy Principle

    Treesearch

    Shoufan Fang; George Z. Gertner

    2000-01-01

    When available information is scarce, the Maximum-Entropy Principle can estimate the distributions of parameters. In our case study, we estimated the distributions of the parameters of the forest self-thinning process based on literature information, and we derived the conditional distribution functions and estimated the 95 percent confidence interval (CI) of the self-...

  14. Marginal Structural Models for Case-Cohort Study Designs to Estimate the Association of Antiretroviral Therapy Initiation With Incident AIDS or Death

    PubMed Central

    Cole, Stephen R.; Hudgens, Michael G.; Tien, Phyllis C.; Anastos, Kathryn; Kingsley, Lawrence; Chmiel, Joan S.; Jacobson, Lisa P.

    2012-01-01

    To estimate the association of antiretroviral therapy initiation with incident acquired immunodeficiency syndrome (AIDS) or death while accounting for time-varying confounding in a cost-efficient manner, the authors combined a case-cohort study design with inverse probability-weighted estimation of a marginal structural Cox proportional hazards model. A total of 950 adults who were positive for human immunodeficiency virus type 1 were followed in 2 US cohort studies between 1995 and 2007. In the full cohort, 211 AIDS cases or deaths occurred during 4,456 person-years. In an illustrative 20% random subcohort of 190 participants, 41 AIDS cases or deaths occurred during 861 person-years. Accounting for measured confounders and determinants of dropout by inverse probability weighting, the full cohort hazard ratio was 0.41 (95% confidence interval: 0.26, 0.65) and the case-cohort hazard ratio was 0.47 (95% confidence interval: 0.26, 0.83). Standard multivariable-adjusted hazard ratios were closer to the null, regardless of study design. The precision lost with the case-cohort design was modest given the cost savings. Results from Monte Carlo simulations demonstrated that the proposed approach yields approximately unbiased estimates of the hazard ratio with appropriate confidence interval coverage. Marginal structural model analysis of case-cohort study designs provides a cost-efficient design coupled with an accurate analytic method for research settings in which there is time-varying confounding. PMID:22302074

  15. Estimating high-risk cannabis and opiate use in Ankara, Istanbul and Izmir.

    PubMed

    Kraus, Ludwig; Hay, Gordon; Richardson, Clive; Yargic, Ilhan; Ilhan, Mustafa Necmi; Ay, Pinar; Karasahin, Füsun; Pinarci, Mustafa; Tuncoglu, Tolga; Piontek, Daniela; Schulte, Bernd

    2017-09-01

    Information on high-risk drug use in Turkey, particularly at the regional level, is lacking. The present analysis aims at estimating high-risk cannabis use (HRCU) and high-risk opiate use (HROU) in the cities of Ankara, Istanbul and Izmir. Capture-recapture and multiplier methods were applied based on treatment and police data stratified by age and gender in the years 2009 and 2010. Case definitions refer to ICD-10 cannabis (F.12) and opiate (F.11) disorder diagnoses from outpatient and inpatient treatment records and illegal possession of these drugs as recorded by the police. High-risk cannabis use was estimated at 28 500 (8.5 per 1000; 95% confidence interval 7.3-10.3) and 33 400 (11.9 per 1000; 95% confidence interval 10.7-13.5) in Ankara and Izmir, respectively. Using multipliers based on capture-recapture estimates for Izmir, HRCU in Istanbul was estimated up to 166 000 (18.0 per 1000; range: 2.8-18.0). Capture-recapture estimates of HROU resulted in 4800 (1.4 per 1000; 95% confidence interval 0.9-1.9) in Ankara and multipliers based on these gave estimates up to 20 000 (2.2 per 1000; range: 0.9-2.2) in Istanbul. HROU in Izmir was not estimated due to the low absolute numbers of opiate users. While HRCU prevalence in both Ankara and Izmir was considerably lower in comparison to an estimate for Berlin, the rate for Istanbul was only slightly lower. Compared with the majority of European cities, HROU in these three Turkish cities may be considered rather low. [Kraus L, Hay G, Richardson C, Yargic I, Ilhan N M, Ay P, Karasahin F, Pinarci M, Tuncoglu T, Piontek D, Schulte B Estimating high-risk cannabis and opiate use in Ankara, Istanbul and Izmir Drug Alcohol Rev 2016;00:000-000]. © 2016 Australasian Professional Society on Alcohol and other Drugs.
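The multiplier logic described above — scaling a benchmark count, such as treatment records, by the inverse of the proportion of high-risk users estimated (e.g., via capture-recapture in a reference city) to appear in that benchmark source — can be sketched as follows. All numbers are purely illustrative, not the study's data:

```python
# Hypothetical sketch of the benchmark-multiplier method: the total hidden
# population is the benchmark count divided by the proportion of the
# population that the benchmark source is believed to cover.

def multiplier_estimate(benchmark_count, capture_recapture_total, benchmark_in_cr_city):
    """Scale a benchmark count by the inverse of the in-source proportion."""
    p_known = benchmark_in_cr_city / capture_recapture_total  # coverage proportion
    return benchmark_count / p_known

# Illustrative numbers: 5,000 treatment records in the target city; in the
# reference city, capture-recapture estimated 33,400 users, 6,680 in treatment.
est = multiplier_estimate(5000, 33400, 6680)
print(round(est))  # 25000
```

The precision of such an estimate inherits the uncertainty of both the benchmark count and the capture-recapture coverage proportion, which is why the abstract reports ranges rather than tight intervals for the extrapolated cities.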

  16. Assessing equity of healthcare utilization in rural China: results from nationally representative surveys from 1993 to 2008

    PubMed Central

    2013-01-01

Background The phenomenon of inequitable healthcare utilization in rural China interests policymakers and researchers; however, the inequity has not actually been measured to present the magnitude and trend using nationally representative data. Methods Based on the National Health Service Survey (NHSS) in 1993, 1998, 2003, and 2008, the Probit model with the probability of outpatient visit and the probability of inpatient visit as the dependent variables is applied to estimate need-predicted healthcare utilization. Furthermore, need-standardized healthcare utilization is assessed through the indirect standardization method. The concentration index is measured to reflect income-related inequity of healthcare utilization. Results The concentration index of need-standardized outpatient utilization is 0.0486 [95% confidence interval (0.0399, 0.0574)], 0.0310 [95% confidence interval (0.0229, 0.0390)], 0.0167 [95% confidence interval (0.0069, 0.0264)] and −0.0108 [95% confidence interval (−0.0213, −0.0004)] in 1993, 1998, 2003 and 2008, respectively. For inpatient service, the concentration index is 0.0529 [95% confidence interval (0.0349, 0.0709)], 0.1543 [95% confidence interval (0.1356, 0.1730)], 0.2325 [95% confidence interval (0.2132, 0.2518)] and 0.1313 [95% confidence interval (0.1174, 0.1451)] in 1993, 1998, 2003 and 2008, respectively. Conclusions Utilization of both outpatient and inpatient services was pro-rich in rural China with the exception of outpatient service in 2008. With the same needs for healthcare, rich rural residents utilized more healthcare service than poor rural residents. Compared to utilization of outpatient service, utilization of inpatient service was more inequitable. Inequity of utilization of outpatient service reduced gradually from 1993 to 2008; meanwhile, inequity of inpatient service utilization increased dramatically from 1993 to 2003 and decreased significantly from 2003 to 2008.
Recent attempts in China to increase coverage of insurance and primary healthcare could be a contributing factor to counteract the inequity of outpatient utilization, but better benefit packages and delivery strategies still need to be tested and scaled up to reduce future inequity in inpatient utilization in rural China. PMID:23688260

  17. Methods for estimating confidence intervals in interrupted time series analyses of health interventions.

    PubMed

    Zhang, Fang; Wagner, Anita K; Soumerai, Stephen B; Ross-Degnan, Dennis

    2009-02-01

    Interrupted time series (ITS) is a strong quasi-experimental research design, which is increasingly applied to estimate the effects of health services and policy interventions. We describe and illustrate two methods for estimating confidence intervals (CIs) around absolute and relative changes in outcomes calculated from segmented regression parameter estimates. We used multivariate delta and bootstrapping methods (BMs) to construct CIs around relative changes in level and trend, and around absolute changes in outcome based on segmented linear regression analyses of time series data corrected for autocorrelated errors. Using previously published time series data, we estimated CIs around the effect of prescription alerts for interacting medications with warfarin on the rate of prescriptions per 10,000 warfarin users per month. Both the multivariate delta method (MDM) and the BM produced similar results. BM is preferred for calculating CIs of relative changes in outcomes of time series studies, because it does not require large sample sizes when parameter estimates are obtained correctly from the model. Caution is needed when sample size is small.
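A minimal sketch of the bootstrap idea for a relative level change in segmented regression, using simulated data. This ignores the autocorrelation correction the authors apply (residuals are resampled i.i.d.), and every series value below is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly series: 24 pre- and 24 post-intervention points.
n_pre = 24
t = np.arange(2 * n_pre)
post = (t >= n_pre).astype(float)
t_post = np.where(t >= n_pre, t - n_pre + 1, 0.0)
y = 50 + 0.2 * t - 8 * post - 0.1 * t_post + rng.normal(0, 2, t.size)

# Design matrix of the standard segmented (interrupted) regression.
X = np.column_stack([np.ones(t.size), t, post, t_post])

def relative_level_change(y_vec):
    """Level change at the interruption divided by the counterfactual level."""
    b = np.linalg.lstsq(X, y_vec, rcond=None)[0]
    counterfactual = b[0] + b[1] * n_pre  # pre-trend projected to interruption
    return b[2] / counterfactual

est = relative_level_change(y)

# Nonparametric bootstrap: resample residuals around the fitted values,
# refit, and take percentile limits of the relative change.
b = np.linalg.lstsq(X, y, rcond=None)[0]
fitted, resid = X @ b, y - X @ b
boot = [relative_level_change(fitted + rng.choice(resid, resid.size))
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"relative level change {est:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

The same resampling loop yields CIs for relative trend changes or absolute effects by swapping the statistic computed from the refitted coefficients.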

  18. Accounting for dropout bias using mixed-effects models.

    PubMed

    Mallinckrodt, C H; Clark, W S; David, S R

    2001-01-01

    Treatment effects are often evaluated by comparing change over time in outcome measures. However, valid analyses of longitudinal data can be problematic when subjects discontinue (dropout) prior to completing the study. This study assessed the merits of likelihood-based repeated measures analyses (MMRM) compared with fixed-effects analysis of variance where missing values were imputed using the last observation carried forward approach (LOCF) in accounting for dropout bias. Comparisons were made in simulated data and in data from a randomized clinical trial. Subject dropout was introduced in the simulated data to generate ignorable and nonignorable missingness. Estimates of treatment group differences in mean change from baseline to endpoint from MMRM were, on average, markedly closer to the true value than estimates from LOCF in every scenario simulated. Standard errors and confidence intervals from MMRM accurately reflected the uncertainty of the estimates, whereas standard errors and confidence intervals from LOCF underestimated uncertainty.

  19. Evaluating the utility of hexapod species for calculating a confidence interval about a succession based postmortem interval estimate.

    PubMed

    Perez, Anne E; Haskell, Neal H; Wells, Jeffrey D

    2014-08-01

Carrion insect succession patterns have long been used to estimate the postmortem interval (PMI) during a death investigation. However, no published carrion succession study included sufficient replication to calculate a confidence interval about a PMI estimate based on occurrence data. We exposed 53 pig carcasses (16±2.5 kg), near the likely minimum needed for such statistical analysis, at a site in north-central Indiana, USA, over three consecutive summer seasons. Insects and Collembola were sampled daily from each carcass for a total of 14 days, by which time each was skeletonized. The criteria for judging a life stage of a given species to be potentially useful for succession-based PMI estimation were (1) nonreoccurrence (observed during a single period of presence on a corpse), and (2) occurrence in a sufficiently large proportion of carcasses to support a PMI confidence interval. For this data set that proportion threshold is 45/53. Of the 266 species collected and identified, none was nonreoccurring, in that each showed at least a gap of one day on a single carcass. If the definition of nonreoccurrence is relaxed to allow such a single one-day gap, the larval forms of Necrophila americana, Fannia scalaris, Cochliomyia macellaria, Phormia regina, and Lucilia illustris satisfied these two criteria. Adults of Creophilus maxillosus, Necrobia ruficollis, and Necrodes surinamensis were common and showed only a few single-day gaps in occurrence. C. maxillosus, P. regina, and L. illustris displayed exceptional forensic utility in that they were observed on every carcass. Although these observations were made at a single site during one season of the year, the species we found to be useful have large geographic ranges. We suggest that future carrion insect succession research focus only on a limited set of species with high potential forensic utility, so as to reduce sampling effort per carcass and thereby enable increased experimental replication. 
Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  20. Fluorine 18 fluorodeoxyglucose PET/CT volume-based indices in locally advanced non-small cell lung cancer: prediction of residual viable tumor after induction chemotherapy.

    PubMed

    Soussan, Michael; Cyrta, Joanna; Pouliquen, Christelle; Chouahnia, Kader; Orlhac, Fanny; Martinod, Emmanuel; Eder, Véronique; Morère, Jean-François; Buvat, Irène

    2014-09-01

To study whether volume-based indices from fluorine 18 fluorodeoxyglucose positron emission tomographic (PET)/computed tomographic (CT) imaging are an accurate tool to predict the amount of residual viable tumor after induction chemotherapy in patients with locally advanced non-small cell lung cancer (NSCLC). This study was approved by the institutional review board with waivers of informed consent. Twenty-two patients with locally advanced NSCLC underwent surgery after induction chemotherapy. All had pre- and posttreatment FDG PET/CT scans. CT largest diameter, CT volume, maximum standardized uptake value (SUVmax), mean SUV (SUVmean), metabolic tumor volume (TV), and total lesion glycolysis of the primary tumor were calculated. Changes in tumor measurements were determined by dividing the follow-up by the baseline measurement (ratio index). Amounts of residual viable tumor, necrosis, fibrous tissue, and inflammatory infiltrate, as well as the Ki-67 proliferative index, were estimated on the resected tumor. Correlations between imaging indices and histologic parameters were estimated by using Spearman correlation coefficients or Mann-Whitney tests. No single baseline or posttreatment index correlated with the percentage of residual viable tumor. Among the ratio indices, the TV ratio was the only one that correlated with the percentage of residual viable tumor (r = 0.61 [95% confidence interval: 0.24, 0.81]; P = .003). Conversely, the SUVmax and SUVmean ratios were the only indices correlated with Ki-67 (r = 0.62 [95% confidence interval: 0.24, 0.82]; P = .003; and r = 0.60 [95% confidence interval: 0.21, 0.81]; P = .004, respectively). The total lesion glycolysis ratio was moderately correlated with residual viable tumor (r = 0.53 [95% confidence interval: 0.13, 0.78]; P = .01) and with Ki-67 (r = 0.57 [95% confidence interval: 0.18, 0.80]; P = .006). No ratios were correlated with the presence of inflammatory infiltrate or foamy macrophages. 
TV and total lesion glycolysis ratios were the only indices correlated with residual viable tumor after induction chemotherapy in locally advanced NSCLC.

  1. Dietary Acid, Age, and Serum Bicarbonate Levels among Adults in the United States

    PubMed Central

    Amodu, Afolarin

    2013-01-01

    Summary Background and objectives Greater dietary acid has been associated with lower serum bicarbonate levels in patients with CKD. Whether this association extends to the general population and if it is modified by age are unknown. Design, setting, participants, & measurements This study examined the association of the dietary acid load, estimated by net endogenous acid production, with serum bicarbonate levels in adult participants in the National Health and Nutrition Examination Survey 1999–2004. Results The mean serum bicarbonate was 24.9 mEq/L (SEM=0.1), and the mean estimated net endogenous acid production was 57.4 mEq/d (SEM=0.4). Serum bicarbonate was linearly associated with age, such that the oldest participants had the highest serum bicarbonate levels. After multivariable adjustment, participants in the highest quartile of net endogenous acid production had 0.40 mEq/L (95% confidence interval, −0.55 to −0.26) lower serum bicarbonate and a 33% (95% confidence interval, 3 to 72) higher likelihood of acidosis compared with those participants in the lowest quartile. There was a significant interaction by age of the association of net endogenous acid production with serum bicarbonate (P=0.005). Among participants 20–39, 40–59, and ≥60 years old, those participants in the highest net endogenous acid production quartile had 0.26 (95% confidence interval, −0.49 to −0.03), 0.60 (95% confidence interval, −0.92 to −0.29), and 0.49 (95% confidence interval, −0.84 to −0.14) mEq/L lower serum bicarbonate, respectively, compared with participants in the lowest quartile. Conclusion Greater dietary acid is associated with lower serum bicarbonate in the general US population, and the magnitude of this association is greater among middle-aged and elderly persons than younger adults. PMID:24052219

  2. Variation in polyp size estimation among endoscopists and impact on surveillance intervals.

    PubMed

    Chaptini, Louis; Chaaya, Adib; Depalma, Fedele; Hunter, Krystal; Peikin, Steven; Laine, Loren

    2014-10-01

Accurate estimation of polyp size is important because it is used to determine the surveillance interval after polypectomy. To evaluate the variation and accuracy in polyp size estimation among endoscopists and the impact on surveillance intervals after polypectomy. Web-based survey. A total of 873 members of the American Society for Gastrointestinal Endoscopy. Participants watched video recordings of 4 polypectomies and were asked to estimate the polyp sizes. Proportion of participants with polyp size estimates within 20% of the correct measurement and the frequency of incorrect surveillance intervals based on inaccurate size estimates. Polyp size estimates were within 20% of the correct value for 1362 (48%) of 2812 estimates (range 39%-59% for the 4 polyps). Polyp size was overestimated by >20% in 889 estimates (32%, range 15%-49%) and underestimated by >20% in 561 (20%, range 4%-46%) estimates. Incorrect surveillance intervals because of overestimation or underestimation occurred in 272 (10%) of the 2812 estimates (range 5%-14%). Participants in a private practice setting overestimated the size of 3 or all 4 polyps by >20% more often than participants in an academic setting (difference = 7%; 95% confidence interval, 1%-11%). Survey design with the use of video clips. Substantial overestimation and underestimation of polyp size occur with visual estimation, leading to incorrect surveillance intervals in 10% of cases. Our findings support routine use of measurement tools to improve polyp size estimates. Copyright © 2014 American Society for Gastrointestinal Endoscopy. Published by Elsevier Inc. All rights reserved.

  3. New Approaches to Robust Confidence Intervals for Location: A Simulation Study.

    DTIC Science & Technology

    1984-06-01

    obtain a denominator for the test statistic. Those statistics based on location estimates derived from Hampel’s redescending influence function or v...defined an influence function for a test in terms of the behavior of its P-values when the data are sampled from a model distribution modified by point...proposal could be used for interval estimation as well as hypothesis testing, the extension is immediate. Once an influence function has been defined

  4. Demography and population status of polar bears in western Hudson Bay

    USGS Publications Warehouse

    Lunn, Nicholas J.; Regher, Eric V; Servanty, Sabrina; Converse, Sarah J.; Richardson, Evan S.; Stirling, Ian

    2013-01-01

    The 2011 abundance estimate from this analysis was 806 bears with a 95% Bayesian credible interval of 653-984. This is lower than, but broadly consistent with, the abundance estimate of 1,030 (95% confidence interval = 745-1406) from a 2011 aerial survey (Stapleton et al. 2014). The capture-recapture and aerial survey approaches have different spatial and temporal coverage of the WH subpopulation and, consequently, the effective study population considered by each approach is different.

  5. Application of an Individual-Based Transmission Hazard Model for Estimation of Influenza Vaccine Effectiveness in a Household Cohort.

    PubMed

    Petrie, Joshua G; Eisenberg, Marisa C; Ng, Sophia; Malosh, Ryan E; Lee, Kyu Han; Ohmit, Suzanne E; Monto, Arnold S

    2017-12-15

    Household cohort studies are an important design for the study of respiratory virus transmission. Inferences from these studies can be improved through the use of mechanistic models to account for household structure and risk as an alternative to traditional regression models. We adapted a previously described individual-based transmission hazard (TH) model and assessed its utility for analyzing data from a household cohort maintained in part for study of influenza vaccine effectiveness (VE). Households with ≥4 individuals, including ≥2 children <18 years of age, were enrolled and followed during the 2010-2011 influenza season. VE was estimated in both TH and Cox proportional hazards (PH) models. For each individual, TH models estimated hazards of infection from the community and each infected household contact. Influenza A(H3N2) infection was laboratory-confirmed in 58 (4%) subjects. VE estimates from both models were similarly low overall (Cox PH: 20%, 95% confidence interval: -57, 59; TH: 27%, 95% credible interval: -23, 58) and highest for children <9 years of age (Cox PH: 40%, 95% confidence interval: -49, 76; TH: 52%, 95% credible interval: 7, 75). VE estimates were robust to model choice, although the ability of the TH model to accurately describe transmission of influenza presents continued opportunity for analyses. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  6. Uncertainty Quantification and Statistical Convergence Guidelines for PIV Data

    NASA Astrophysics Data System (ADS)

    Stegmeir, Matthew; Kassen, Dan

    2016-11-01

As Particle Image Velocimetry (PIV) has continued to mature, it has developed into a robust and flexible velocimetry technique used by expert and non-expert users alike. While historical estimates of PIV accuracy have typically relied heavily on "rules of thumb" and analysis of idealized synthetic images, increased emphasis has recently been placed on quantifying real-world PIV measurement uncertainty. Multiple techniques have been developed to provide per-vector instantaneous uncertainty estimates for PIV measurements. Real-world experimental conditions often complicate the collection of "optimal" data, and the effect of these conditions is important to consider when planning an experimental campaign. The current work builds on PIV uncertainty quantification techniques to develop a framework in which users apply estimated PIV confidence intervals to compute reliable data convergence criteria for optimal sampling of flow statistics. Results are compared using experimental and synthetic data, and recommended guidelines and procedures that leverage estimated PIV confidence intervals for efficient sampling toward converged statistics are provided.
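In its simplest form, a CI-based convergence criterion means sampling until the confidence interval half-width of the running statistic falls below a tolerance. A stdlib-only sketch under that simplification (the tolerance, distribution, and statistic are hypothetical, and per-vector PIV uncertainty is not modeled here):

```python
import math
import random

def converged(samples, tol, z=1.96, n_min=30):
    """True when the 95% CI half-width of the sample mean is at most tol.
    A minimum sample count guards against spuriously small early variance."""
    n = len(samples)
    if n < n_min:
        return False
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return z * math.sqrt(var / n) <= tol

random.seed(1)
samples = []
while not converged(samples, tol=0.05):
    samples.append(random.gauss(10.0, 1.0))  # stand-in for one velocity sample
print(len(samples), "samples for a +/-0.05 mean estimate")
```

For a standard deviation of 1, the half-width criterion implies roughly (1.96/0.05)^2 ≈ 1,500 independent samples; correlated samples or per-vector uncertainty would raise this figure.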

  7. Overlap between treatment and control distributions as an effect size measure in experiments.

    PubMed

    Hedges, Larry V; Olkin, Ingram

    2016-03-01

The proportion π of treatment group observations that exceed the control group mean has been proposed as an effect size measure for experiments that randomly assign independent units into 2 groups. We give the exact distribution of a simple estimator of π based on the standardized mean difference and use it to study the small sample bias of this estimator. We also give the minimum variance unbiased estimator of π under 2 models, one in which the variance of the mean difference is known and one in which the variance is unknown. We show how to use the relation between the standardized mean difference and the overlap measure to compute confidence intervals for π and show that these results can be used to obtain unbiased estimators, large sample variances, and confidence intervals for 3 related effect size measures based on the overlap. Finally, we show how the effect size π can be used in a meta-analysis. (c) 2016 APA, all rights reserved.
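Under normality, π = Φ(δ), where δ is the population standardized mean difference, so a naive plug-in estimate is Φ(d) with a CI obtained by transforming a large-sample Wald interval for d through Φ. This sketch shows only that naive estimator, not the paper's exact-distribution or minimum variance unbiased results, and the numbers are illustrative:

```python
from math import erf, sqrt

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def overlap_pi(d, n1, n2, z=1.96):
    """Plug-in estimate pi = Phi(d) with a CI from the Wald interval for d."""
    se_d = sqrt((n1 + n2) / (n1 * n2) + d * d / (2 * (n1 + n2)))
    return Phi(d), Phi(d - z * se_d), Phi(d + z * se_d)

# Illustrative: d = 0.5 with 40 observations per group.
pi_hat, lo, hi = overlap_pi(0.5, 40, 40)
print(f"pi = {pi_hat:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

Because Φ is monotone, transforming the endpoints of the interval for d preserves the nominal coverage of the interval for π.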

  8. Estimating degradation in real time and accelerated stability tests with random lot-to-lot variation: a simulation study.

    PubMed

    Magari, Robert T

    2002-03-01

The effect of different lot-to-lot variability levels on the prediction of stability is studied based on two statistical models for estimating degradation in real-time and accelerated stability tests. Lot-to-lot variability is considered random in both models and is attributed to two sources: variability at time zero and variability of the degradation rate. Real-time stability tests are modeled as a function of time, while accelerated stability tests are modeled as a function of time and temperature. Several data sets were simulated, and a maximum likelihood approach was used for estimation. The 95% confidence intervals for the degradation rate depend on the amount of lot-to-lot variability. When lot-to-lot degradation rate variability is relatively large (CV ≥ 8%), the estimated confidence intervals do not represent the trend for individual lots. In such cases it is recommended to analyze each lot individually. Copyright 2002 Wiley-Liss, Inc. and the American Pharmaceutical Association. J Pharm Sci 91: 893-899, 2002.

  9. Earth before life.

    PubMed

    Marzban, Caren; Viswanathan, Raju; Yurtsever, Ulvi

    2014-01-09

    A recent study argued, based on data on functional genome size of major phyla, that there is evidence life may have originated significantly prior to the formation of the Earth. Here a more refined regression analysis is performed in which 1) measurement error is systematically taken into account, and 2) interval estimates (e.g., confidence or prediction intervals) are produced. It is shown that such models for which the interval estimate for the time origin of the genome includes the age of the Earth are consistent with observed data. The appearance of life after the formation of the Earth is consistent with the data set under examination.

  10. The Use of One-Sample Prediction Intervals for Estimating CO2 Scrubber Canister Durations

    DTIC Science & Technology

    2012-10-01

Grade and 812 D-Grade Sofnolime.3 Definitions: According to Devore,4 a CI (confidence interval) refers to a parameter, or population characteristic, whose value is fixed but unknown to us. In contrast, a future value of Y is not a parameter but instead a random variable; for this
  11. Fasting glucose levels, incident diabetes, subclinical atherosclerosis and cardiovascular events in apparently healthy adults: A 12-year longitudinal study.

    PubMed

    Sitnik, Debora; Santos, Itamar S; Goulart, Alessandra C; Staniak, Henrique L; Manson, JoAnn E; Lotufo, Paulo A; Bensenor, Isabela M

    2016-11-01

We aimed to study the association between fasting plasma glucose, diabetes incidence and cardiovascular burden after 10-12 years. We evaluated the incidence of diabetes and cardiovascular events, carotid intima-media thickness, and coronary artery calcium scores at the baseline examination (2008-2010) of ELSA-Brasil (the Brazilian Longitudinal Study of Adult Health) in 1536 adults without diabetes in 1998. We used regression models to estimate associations with carotid intima-media thickness (in mm), coronary artery calcium scores (in Agatston points) and cardiovascular events according to fasting plasma glucose in 1998. The adjusted diabetes incidence rate was 9.8/1000 person-years (95% confidence interval: 7.7-13.6/1000 person-years). Incident diabetes was positively associated with higher fasting plasma glucose. Fasting plasma glucose levels of 110-125 mg/dL were associated with higher carotid intima-media thickness (β = 0.028; 95% confidence interval: 0.003-0.053). Excluding those with incident diabetes, there was a borderline association between higher carotid intima-media thickness and fasting plasma glucose 110-125 mg/dL (β = 0.030; 95% confidence interval: -0.005 to 0.065). Incident diabetes was associated with higher carotid intima-media thickness (β = 0.034; 95% confidence interval: 0.015-0.053), coronary artery calcium scores ⩾400 (odds ratio = 2.84; 95% confidence interval: 1.17-6.91) and the combined outcome of a coronary artery calcium score ⩾400 or incident cardiovascular event (odds ratio = 3.50; 95% confidence interval: 1.60-7.65). In conclusion, fasting plasma glucose in 1998 and incident diabetes were associated with higher cardiovascular burden. © The Author(s) 2016.

  12. Effect of Lowering the Dialysate Temperature in Chronic Hemodialysis: A Systematic Review and Meta-Analysis

    PubMed Central

    Bdair, Fadi; Akl, Elie A.; Garg, Amit X.; Thiessen-Philbrook, Heather; Salameh, Hassan; Kisra, Sood; Nesrallah, Gihad; Al-Jaishi, Ahmad; Patel, Parth; Patel, Payal; Mustafa, Ahmad A.; Schünemann, Holger J.

    2016-01-01

    Background and objectives Lowering the dialysate temperature may improve outcomes for patients undergoing chronic hemodialysis. We reviewed the reported benefits and harms of lower temperature dialysis. Design, setting, participants, & measurements We searched the Cochrane Central Register, OVID MEDLINE, EMBASE, and Pubmed until April 15, 2015. We reviewed the reference lists of relevant reviews, registered trials, and relevant conference proceedings. We included all randomized, controlled trials that evaluated the effect of reduced temperature dialysis versus standard temperature dialysis in adult patients receiving chronic hemodialysis. We followed the Grading of Recommendations Assessment, Development and Evaluation approach to assess confidence in the estimates of effect (i.e., the quality of evidence). We conducted meta-analyses using random effects models. Results Twenty-six trials were included, consisting of a total of 484 patients. Compared with standard temperature dialysis, reduced temperature dialysis significantly reduced the rate of intradialytic hypotension by 70% (95% confidence interval, 49% to 89%) and significantly increased intradialytic mean arterial pressure by 12 mmHg (95% confidence interval, 8 to 16 mmHg). Symptoms of discomfort occurred 2.95 (95% confidence interval, 0.88 to 9.82) times more often with reduced temperature compared with standard temperature dialysis. The effect on dialysis adequacy was not significantly different, with a Kt/V mean difference of −0.05 (95% confidence interval, −0.09 to 0.01). Small sample sizes, loss to follow-up, and a lack of appropriate blinding in some trials reduced confidence in the estimates of effect. None of the trials reported long-term outcomes. Conclusions In patients receiving chronic hemodialysis, reduced temperature dialysis may reduce the rate of intradialytic hypotension and increase intradialytic mean arterial pressure. 
High-quality, large, multicenter, randomized trials are needed to determine whether reduced temperature dialysis affects patient mortality and major adverse cardiovascular events. PMID:26712807
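The random effects pooling used in meta-analyses of this kind can be sketched with the DerSimonian-Laird estimator (assuming that is the random effects method intended; the per-trial mean differences and variances below are invented for illustration, loosely echoing the mean arterial pressure outcome):

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate and 95% CI via DerSimonian-Laird tau^2."""
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                  # between-trial variance
    w_star = [1 / (v + tau2) for v in variances]   # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Hypothetical per-trial mean differences in MAP (mmHg) and their variances:
pooled, lo, hi = dersimonian_laird([10, 14, 9, 13], [4.0, 6.0, 5.0, 3.0])
print(f"pooled = {pooled:.1f} mmHg, 95% CI ({lo:.1f}, {hi:.1f})")
```

When the between-trial heterogeneity tau^2 is zero the weights collapse to the fixed-effect inverse variances; larger heterogeneity pulls the weights toward equality and widens the interval.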

  13. Statistical evaluation of time-dependent metabolite concentrations: estimation of post-mortem intervals based on in situ 1H-MRS of the brain.

    PubMed

    Scheurer, Eva; Ith, Michael; Dietrich, Daniel; Kreis, Roland; Hüsler, Jürg; Dirnhofer, Richard; Boesch, Chris

    2005-05-01

    Knowledge of the time interval from death (post-mortem interval, PMI) has an enormous legal, criminological and psychological impact. Aiming to find an objective method for the determination of PMIs in forensic medicine, 1H-MR spectroscopy (1H-MRS) was used in a sheep head model to follow changes in brain metabolite concentrations after death. Following the characterization of newly observed metabolites (Ith et al., Magn. Reson. Med. 2002; 5: 915-920), the full set of acquired spectra was analyzed statistically to provide a quantitative estimation of PMIs with their respective confidence limits. In a first step, analytical mathematical functions are proposed to describe the time courses of 10 metabolites in the decomposing brain up to 3 weeks post-mortem. Subsequently, the inverted functions are used to predict PMIs based on the measured metabolite concentrations. Individual PMIs calculated from five different metabolites are then pooled, being weighted by their inverse variances. The predicted PMIs from all individual examinations in the sheep model are compared with known true times. In addition, four human cases with forensically estimated PMIs are compared with predictions based on single in situ MRS measurements. Interpretation of the individual sheep examinations gave a good correlation up to 250 h post-mortem, demonstrating that the predicted PMIs are consistent with the data used to generate the model. Comparison of the estimated PMIs with the forensically determined PMIs in the four human cases shows an adequate correlation. Current PMI estimations based on forensic methods typically suffer from uncertainties in the order of days to weeks without mathematically defined confidence information. In turn, a single 1H-MRS measurement of brain tissue in situ results in PMIs with defined and favorable confidence intervals in the range of hours, thus offering a quantitative and objective method for the determination of PMIs. 
Copyright 2004 John Wiley & Sons, Ltd.
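The pooling step described above — combining the PMIs predicted from several metabolites, each weighted by its inverse variance — can be sketched as follows; the estimates and variances are hypothetical, not values from the study:

```python
def pool_inverse_variance(estimates, variances):
    """Combine independent estimates, weighting each by 1/variance."""
    w = [1 / v for v in variances]
    pooled = sum(wi * t for wi, t in zip(w, estimates)) / sum(w)
    var = 1 / sum(w)  # variance of the pooled estimate
    return pooled, var

# Hypothetical PMIs (hours) from five metabolites, with their variances:
pmi, var = pool_inverse_variance([40, 46, 38, 44, 42], [16, 25, 36, 9, 12])
print(f"pooled PMI = {pmi:.1f} h, SE = {var ** 0.5:.1f} h")
```

The pooled variance is smaller than any single metabolite's variance, which is how combining several imprecise metabolite-based PMIs yields the favorable confidence intervals the abstract describes.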

  14. How to Avoid Errors in Error Propagation: Prediction Intervals and Confidence Intervals in Forest Biomass

    NASA Astrophysics Data System (ADS)

    Lilly, P.; Yanai, R. D.; Buckley, H. L.; Case, B. S.; Woollons, R. C.; Holdaway, R. J.; Johnson, J.

    2016-12-01

Calculations of forest biomass and elemental content require many measurements and models, each contributing uncertainty to the final estimates. While sampling error is commonly reported, based on replicate plots, error due to uncertainty in the regression used to estimate biomass from tree diameter is usually not quantified. Some published estimates of uncertainty due to the regression models have used the uncertainty in the prediction of individuals, ignoring uncertainty in the mean, while others have propagated uncertainty in the mean while ignoring individual variation. Using the simple case of the calcium concentration of sugar maple leaves, we compare the variation among individuals (the standard deviation) to the uncertainty in the mean (the standard error) and illustrate the declining importance in the prediction of individual concentrations as the number of individuals increases. For allometric models, the analogous statistics are the prediction interval (or the residual variation in the model fit) and the confidence interval (describing the uncertainty in the best fit model). The effect of propagating these two sources of error is illustrated using the mass of sugar maple foliage. The uncertainty in individual tree predictions was large for plots with few trees; for plots with 30 trees or more, the uncertainty in individuals was less important than the uncertainty in the mean. Authors of previously published analyses have reanalyzed their data to show the magnitude of these two sources of uncertainty at scales ranging from experimental plots to entire countries. The most correct analysis will take both sources of uncertainty into account, but for practical purposes, country-level reports of uncertainty in carbon stocks, as required by the IPCC, can ignore the uncertainty in individuals. Ignoring the uncertainty in the mean will lead to exaggerated estimates of confidence in estimates of forest biomass and carbon and nutrient contents.
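The distinction can be illustrated numerically for a simple linear fit: the confidence interval at a point x0 scales with the leverage h of that point, while the prediction interval adds the full residual variance, so it is always wider. A sketch with simulated data (the allometric setting is only mimicked; all values are synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for an allometric relation: y = 2 + 0.8*x + noise.
x = rng.uniform(5, 50, 40)
y = 2.0 + 0.8 * x + rng.normal(0, 3, x.size)

n = x.size
slope, intercept = np.polyfit(x, y, 1)
yhat = intercept + slope * x
s2 = np.sum((y - yhat) ** 2) / (n - 2)       # residual variance
sxx = np.sum((x - x.mean()) ** 2)

x0 = 30.0                                    # point at which to form intervals
h = 1 / n + (x0 - x.mean()) ** 2 / sxx       # leverage of x0
se_mean = np.sqrt(s2 * h)                    # CI: uncertainty in the mean only
se_pred = np.sqrt(s2 * (1 + h))              # PI: adds individual variation
t_crit = 2.024                               # approx. t quantile for 38 df
print(f"CI half-width {t_crit * se_mean:.2f}, PI half-width {t_crit * se_pred:.2f}")
```

As n grows, h shrinks toward zero, so the CI half-width vanishes while the PI half-width approaches the residual standard deviation, mirroring the abstract's point that individual variation dominates only for small numbers of trees.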

  15. Monte Carlo simulation of parameter confidence intervals for non-linear regression analysis of biological data using Microsoft Excel.

    PubMed

    Lambert, Ronald J W; Mytilinaios, Ioannis; Maitland, Luke; Brown, Angus M

    2012-08-01

    This study describes a method to obtain parameter confidence intervals from the fitting of non-linear functions to experimental data, using the SOLVER and Analysis ToolPak Add-In of the Microsoft Excel spreadsheet. Previously we have shown that Excel can fit complex multiple functions to biological data, obtaining values equivalent to those returned by more specialized statistical or mathematical software. However, a disadvantage of using the Excel method was the inability to return confidence intervals for the computed parameters or the correlations between them. Using a simple Monte Carlo procedure within the Excel spreadsheet (without recourse to programming), SOLVER can provide parameter estimates (up to 200 at a time) for multiple 'virtual' data sets, from which the required confidence intervals and correlation coefficients can be obtained. The general utility of the method is exemplified by applying it to the analysis of the growth of Listeria monocytogenes, the growth inhibition of Pseudomonas aeruginosa by chlorhexidine and the further analysis of the electrophysiological data from the compound action potential of the rodent optic nerve. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
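    The same Monte Carlo idea — refit the model to many 'virtual' data sets built from the fitted curve plus resampled noise — can be sketched outside Excel. This illustrative version uses SciPy in place of SOLVER and a made-up exponential-decay model rather than the growth models analysed in the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def model(x, a, b):
    # Illustrative non-linear model: exponential decay.
    return a * np.exp(-b * x)

# Synthetic "experimental" data.
x = np.linspace(0.1, 4.0, 30)
y = model(x, 3.0, 1.2) + rng.normal(0.0, 0.1, x.size)

# Step 1: best fit to the observed data (the role SOLVER plays in Excel).
p_hat, _ = curve_fit(model, x, y, p0=(1.0, 1.0))
resid_sd = np.std(y - model(x, *p_hat), ddof=2)  # 2 fitted parameters

# Step 2: refit to virtual data sets = fitted curve + simulated noise.
draws = []
for _ in range(200):  # 200 virtual sets, matching the paper's batch size
    y_sim = model(x, *p_hat) + rng.normal(0.0, resid_sd, x.size)
    p_sim, _ = curve_fit(model, x, y_sim, p0=p_hat)
    draws.append(p_sim)
draws = np.array(draws)

# Step 3: percentile confidence intervals and parameter correlation.
ci = np.percentile(draws, [2.5, 97.5], axis=0)
corr = np.corrcoef(draws.T)[0, 1]
print("a:", p_hat[0], ci[:, 0], " b:", p_hat[1], ci[:, 1], " corr:", corr)
```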

  16. Potential confounding in the association between short birth intervals and increased neonatal, infant, and child mortality

    PubMed Central

    Perin, Jamie; Walker, Neff

    2015-01-01

    Background Recent steep declines in child mortality have been attributed in part to increased use of contraceptives and the resulting change in fertility behaviour, including an increase in the time between births. Previous observational studies have documented strong associations between short birth spacing and an increase in the risk of neonatal, infant, and under-five mortality, compared to births with longer preceding birth intervals. In this analysis, we compare two methods to estimate the association between short birth intervals and mortality risk to better inform modelling efforts linking family planning and mortality in children. Objectives Our goal was to estimate the mortality risk for neonates, infants, and young children by preceding birth space using household survey data, controlling for mother-level factors and to compare the results to those from previous analyses with survey data. Design We assessed the potential for confounding when estimating the relative mortality risk by preceding birth interval and estimated mortality risk by birth interval in four categories: less than 18 months, 18–23 months, 24–35 months, and 36 months or longer. We estimated the relative risks among women who were 35 and older at the time of the survey with two methods: in a Cox proportional hazards regression adjusting for potential confounders and also by stratifying Cox regression by mother, to control for all factors that remain constant over a woman's childbearing years. We estimated the overall effects for birth spacing in a meta-analysis with random survey effects. Results We identified several factors known for their associations with neonatal, infant, and child mortality that are also associated with preceding birth interval. When estimating the effect of birth spacing on mortality, we found that regression adjustment for these factors does not substantially change the risk ratio for short birth intervals compared to an unadjusted mortality ratio. 
For birth intervals less than 18 months, standard regression adjustment for confounding factors estimated a risk ratio for neonatal mortality of 2.28 (95% confidence interval: 2.18–2.37). This same effect estimated within mother is 1.57 (95% confidence interval: 1.52–1.63), a decline of almost one-third in the effect on neonatal mortality. Conclusions Neonatal, infant, and child mortality are strongly and significantly related to preceding birth interval, where births within a short interval of time after the previous birth have increased mortality. Previous analyses have demonstrated this relationship on average across all births; however, women who have short spaces between births are different from women with long spaces. Among women 35 years and older where a comparison of birth spaces within mother is possible, we find a much reduced although still significant effect of short birth spaces on child mortality. PMID:26562139

  18. Methods for calculating confidence and credible intervals for the residual between-study variance in random effects meta-regression models

    PubMed Central

    2014-01-01

    Background Meta-regression is becoming increasingly used to model study level covariate effects. However this type of statistical analysis presents many difficulties and challenges. Here two methods for calculating confidence intervals for the magnitude of the residual between-study variance in random effects meta-regression models are developed. A further suggestion for calculating credible intervals using informative prior distributions for the residual between-study variance is presented. Methods Two recently proposed and, under the assumptions of the random effects model, exact methods for constructing confidence intervals for the between-study variance in random effects meta-analyses are extended to the meta-regression setting. The use of Generalised Cochran heterogeneity statistics is extended to the meta-regression setting and a Newton-Raphson procedure is developed to implement the Q profile method for meta-analysis and meta-regression. WinBUGS is used to implement informative priors for the residual between-study variance in the context of Bayesian meta-regressions. Results Results are obtained for two contrasting examples, where the first example involves a binary covariate and the second involves a continuous covariate. Intervals for the residual between-study variance are wide for both examples. Conclusions Statistical methods, and R computer software, are available to compute exact confidence intervals for the residual between-study variance under the random effects model for meta-regression. These frequentist methods are almost as easily implemented as their established counterparts for meta-analysis. Bayesian meta-regressions are also easily performed by analysts who are comfortable using WinBUGS. Estimates of the residual between-study variance in random effects meta-regressions should be routinely reported and accompanied by some measure of their uncertainty. Confidence and/or credible intervals are well-suited to this purpose. PMID:25196829
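    The Q-profile idea behind these intervals — invert a generalised Q statistic against chi-square quantiles to bound the between-study variance — can be sketched for the plain meta-analysis case. The effect estimates below are invented for illustration, and a simple bisection stands in for the paper's Newton-Raphson procedure:

```python
# Illustrative effect estimates and within-study variances (k = 6 studies).
y = [0.2, 0.5, -0.1, 0.8, 1.1, 0.4]
v = [0.04, 0.05, 0.03, 0.06, 0.04, 0.05]
k = len(y)

def q_stat(tau2):
    """Generalised Cochran Q at a candidate between-study variance tau2."""
    w = [1.0 / (vi + tau2) for vi in v]
    mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    return sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, y))

def solve_q(target, lo=0.0, hi=100.0, iters=200):
    """Bisection for the tau2 with q_stat(tau2) == target (Q decreases in tau2)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if q_stat(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Chi-square quantiles for df = k - 1 = 5 (values from standard tables).
chi2_upper, chi2_lower = 12.8325, 0.83121   # 97.5th and 2.5th percentiles

tau2_lo = solve_q(chi2_upper) if q_stat(0.0) > chi2_upper else 0.0
tau2_hi = solve_q(chi2_lower)
print(f"95% CI for tau^2: ({tau2_lo:.4f}, {tau2_hi:.4f})")
```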

  19. Uncertainty in Population Growth Rates: Determining Confidence Intervals from Point Estimates of Parameters

    PubMed Central

    Devenish Nelson, Eleanor S.; Harris, Stephen; Soulsbury, Carl D.; Richards, Shane A.; Stephens, Philip A.

    2010-01-01

    Background Demographic models are widely used in conservation and management, and their parameterisation often relies on data collected for other purposes. When underlying data lack clear indications of associated uncertainty, modellers often fail to account for that uncertainty in model outputs, such as estimates of population growth. Methodology/Principal Findings We applied a likelihood approach to infer uncertainty retrospectively from point estimates of vital rates. Combining this with resampling techniques and projection modelling, we show that confidence intervals for population growth estimates are easy to derive. We used similar techniques to examine the effects of sample size on uncertainty. Our approach is illustrated using data on the red fox, Vulpes vulpes, a predator of ecological and cultural importance, and the most widespread extant terrestrial mammal. We show that uncertainty surrounding estimated population growth rates can be high, even for relatively well-studied populations. Halving that uncertainty typically requires a quadrupling of sampling effort. Conclusions/Significance Our results compel caution when comparing demographic trends between populations without accounting for uncertainty. Our methods will be widely applicable to demographic studies of many species. PMID:21049049
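    The resampling-plus-projection recipe can be sketched with a toy two-stage Leslie matrix. All vital rates and standard errors below are invented for illustration; the paper's contribution is inferring those uncertainties from bare point estimates via a likelihood approach, which this sketch simply assumes has been done:

```python
import math
import random

random.seed(1)

# Invented point estimates and standard errors for the vital rates.
f1, f1_se = 0.5, 0.05   # fecundity, age class 1
f2, f2_se = 1.2, 0.10   # fecundity, age class 2
s, s_se = 0.6, 0.04     # survival from class 1 to class 2

def growth_rate(f1, f2, s):
    """Dominant eigenvalue of the 2x2 Leslie matrix [[f1, f2], [s, 0]]."""
    return 0.5 * (f1 + math.sqrt(f1 * f1 + 4.0 * s * f2))

lam_hat = growth_rate(f1, f2, s)

# Resample vital rates from their (assumed normal) sampling distributions
# and project, giving a distribution of population growth rates.
draws = sorted(
    growth_rate(random.gauss(f1, f1_se),
                random.gauss(f2, f2_se),
                max(1e-9, random.gauss(s, s_se)))
    for _ in range(2000)
)
lo, hi = draws[49], draws[1949]  # ~2.5th and 97.5th percentiles
print(f"lambda = {lam_hat:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```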

  20. Epidemiology of 1.6 million pediatric soccer-related injuries presenting to US emergency departments from 1990 to 2003.

    PubMed

    Leininger, Robert E; Knox, Christy L; Comstock, R Dawn

    2007-02-01

    As soccer participation in the United States increases, so does the number of children at risk for injury. To examine pediatric soccer-related injuries presenting to US emergency departments from 1990 to 2003. Descriptive epidemiology study. A descriptive analysis of nationally representative, pediatric, soccer-related injury data from the US Consumer Product Safety Commission's National Electronic Injury Surveillance System. Among those 2 to 18 years of age, a nationally estimated 1,597,528 soccer-related injuries presented to US emergency departments from 1990 to 2003. Mean age was 13.2 years (range, 2-18 years); 58.6% were male. From 1990 to 2003, there was an increase in the absolute number of injuries among girls (P < .0001). The wrist/finger/hand (20.3%), ankle (18.2%), and knee (11.4%) were the most commonly injured body parts. The most common diagnoses were sprain/strain (35.9%), contusion/abrasion (24.1%), and fracture (23.2%). Boys were more likely to have face and head/neck injuries (17.7%; relative risk, 1.40; 95% confidence interval, 1.32-1.49; P < .01) and lacerations/punctures (7.5%; relative risk, 3.31; 95% confidence interval, 2.93-3.74; P < .01) than were girls (12.7% and 2.3%, respectively). Girls were more likely to have ankle injuries (21.8%; relative risk, 1.38; 95% confidence interval, 1.33-1.45; P < .01) and knee injuries (12.9%; relative risk, 1.25; 95% confidence interval, 1.15-1.35; P < .01) than were boys (15.7% and 10.4%, respectively). Girls were more likely to have sprains or strains (42.4%) than were boys (31.3%; relative risk, 1.36; 95% confidence interval, 1.31-1.40; P < .01). Children 2 to 4 years old sustained a higher proportion of face and head/neck injuries (41.0%) than did older children (15.5%; relative risk, 2.65; 95% confidence interval, 2.09-3.36; P < .01). 
When comparing these data to available national statistics that estimate participation in youth soccer, true injury rates may actually be decreasing for boys and girls. Young children should be closely supervised because of risk of head injuries and rate of hospitalization. The establishment of a national database of soccer participation and injury data is needed to better identify injury risks.
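    The relative risks quoted above are ratios of injury proportions between groups, with a standard log-scale Wald interval. The counts below are invented for illustration, not taken from the study:

```python
import math

def relative_risk_ci(a, n1, c, n2, z=1.96):
    """Relative risk of group 1 vs group 2 with a 95% Wald CI on the log scale.

    a/n1: events/total in group 1; c/n2: events/total in group 2.
    """
    rr = (a / n1) / (c / n2)
    se_log = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Invented counts: 30/100 injured in one group vs 15/100 in the other.
rr, lo, hi = relative_risk_ci(30, 100, 15, 100)
print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")  # → RR = 2.00, 95% CI (1.15, 3.48)
```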

  1. Assessing Interval Estimation Methods for Hill Model ...

    EPA Pesticide Factsheets

    The Hill model of concentration-response is ubiquitous in toxicology, perhaps because its parameters directly relate to biologically significant metrics of toxicity such as efficacy and potency. Point estimates of these parameters obtained through least squares regression or maximum likelihood are commonly used in high-throughput risk assessment, but such estimates typically fail to include reliable information concerning confidence in (or precision of) the estimates. To address this issue, we examined methods for assessing uncertainty in Hill model parameter estimates derived from concentration-response data. In particular, using a sample of ToxCast concentration-response data sets, we applied four methods for obtaining interval estimates that are based on asymptotic theory, bootstrapping (two varieties), and Bayesian parameter estimation, and then compared the results. These interval estimation methods generally did not agree, so we devised a simulation study to assess their relative performance. We generated simulated data by constructing four statistical error models capable of producing concentration-response data sets comparable to those observed in ToxCast. We then applied the four interval estimation methods to the simulated data and compared the actual coverage of the interval estimates to the nominal coverage (e.g., 95%) in order to quantify performance of each of the methods in a variety of cases (i.e., different values of the true Hill model parameters).
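    The first of the four approaches named here, the asymptotic (Wald) interval, can be sketched on synthetic concentration-response data. The Hill parameter values, noise level, and SciPy fitting machinery below are all illustrative assumptions, not the factsheet's actual analysis:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(42)

def hill(c, top, ec50, n):
    """Hill concentration-response curve: efficacy `top`, potency `ec50`."""
    return top / (1.0 + (ec50 / c) ** n)

# Synthetic data from a known curve plus noise.
conc = np.geomspace(0.01, 100.0, 12)
resp = hill(conc, 100.0, 1.0, 1.5) + rng.normal(0.0, 5.0, conc.size)

# Least-squares fit; pcov is the asymptotic parameter covariance matrix.
p_hat, pcov = curve_fit(hill, conc, resp, p0=(80.0, 0.5, 1.0),
                        bounds=(0.0, np.inf))
se = np.sqrt(np.diag(pcov))

# Asymptotic 95% Wald intervals: estimate +/- 1.96 * standard error.
for name, est, s in zip(("top", "ec50", "n"), p_hat, se):
    print(f"{name}: {est:.2f}  95% CI ({est - 1.96 * s:.2f}, {est + 1.96 * s:.2f})")
```

    As the abstract notes, such asymptotic intervals need not agree with bootstrap or Bayesian intervals, which is exactly what the simulation study was designed to probe.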

  2. Sex hormones and the risk of type 2 diabetes mellitus: A 9-year follow up among elderly men in Finland.

    PubMed

    Salminen, Marika; Vahlberg, Tero; Räihä, Ismo; Niskanen, Leo; Kivelä, Sirkka-Liisa; Irjala, Kerttu

    2015-05-01

    To analyze whether sex hormone levels predict the incidence of type 2 diabetes among elderly Finnish men. This was a prospective population-based study, with a 9-year follow-up period. The study population in the municipality of Lieto, Finland, consisted of elderly (age ≥64 years) men free of type 2 diabetes at baseline in 1998-1999 (n = 430). Body mass index and cardiovascular disease-adjusted hazard ratios and their 95% confidence intervals for type 2 diabetes predicted by testosterone, free testosterone, sex hormone-binding globulin, luteinizing hormone, and testosterone/luteinizing hormone were estimated. A total of 30 new cases of type 2 diabetes developed during the follow-up period. After adjustment, only higher levels of testosterone (hazard ratio for one-unit increase 0.93, 95% confidence interval 0.87-0.99, P = 0.020) and free testosterone (hazard ratio for 10-unit increase 0.96, 95% confidence interval 0.91-1.00, P = 0.044) were associated with a lower risk of incident type 2 diabetes during the follow-up. These associations (0.94, 95% confidence interval 0.87-1.00, P = 0.050 and 0.95, 95% confidence interval 0.90-1.00, P = 0.035, respectively) persisted even after additional adjustment for sex hormone-binding globulin. Higher levels of testosterone and free testosterone independently predicted a reduced risk of type 2 diabetes in the elderly men. © 2014 Japan Geriatrics Society.

  3. N-acetyltransferase 2 gene polymorphism as a biomarker for susceptibility to bladder cancer in Bangladeshi population.

    PubMed

    Hosen, Md Bayejid; Islam, Jahidul; Salam, Md Abdus; Islam, Md Fakhrul; Hawlader, M Zakir Hossain; Kabir, Yearul

    2015-03-01

    To investigate the association between the three most common single nucleotide polymorphisms of the N-acetyltransferase 2 gene together with cigarette smoking and the risk of developing bladder cancer and its aggressiveness. A case-control study on 102 bladder cancer patients and 140 control subjects was conducted. The genomic DNA was extracted from peripheral white blood cells and N-acetyltransferase 2 alleles were differentiated by polymerase chain reaction-based restriction fragment length polymorphism methods. Bladder cancer risk was estimated as odds ratio and 95% confidence interval using binary logistic regression models adjusting for age and gender. Overall, N-acetyltransferase 2 slow genotypes were associated with bladder cancer risk (odds ratio=4.45; 95% confidence interval=2.26-8.77). The cigarette smokers with slow genotypes were found to have a sixfold increased risk of developing bladder cancer (odds ratio=6.05; 95% confidence interval=2.23-15.82). Patients with slow acetylating genotypes were more prone to develop high-grade (odds ratio=6.63; 95% confidence interval=1.15-38.13; P<0.05) and invasive (odds ratio=10.6; 95% confidence interval=1.00-111.5; P=0.05) tumors. N-acetyltransferase 2 slow genotype together with tobacco smoking increases bladder cancer risk. Patients with N-acetyltransferase 2 slow genotypes were more likely to develop a high-grade and invasive tumor. The N-acetyltransferase 2 slow genotype is an important genetic determinant for bladder cancer in the Bangladeshi population. © 2014 Wiley Publishing Asia Pty Ltd.

  4. Accuracy of cited “facts” in medical research articles: A review of study methodology and recalculation of quotation error rate

    PubMed Central

    2017-01-01

    Previous reviews estimated that approximately 20 to 25% of assertions cited from original research articles, or “facts,” are inaccurately quoted in the medical literature. These reviews noted that the original studies were dissimilar and only began to compare the methods of the original studies. The aim of this review is to examine the methods of the original studies and provide a more specific rate of incorrectly cited assertions, or quotation errors, in original research articles published in medical journals. Additionally, the estimate of quotation errors calculated here is based on the ratio of quotation errors to quotations examined (a percent) rather than the more prevalent and weighted metric of quotation errors to the references selected. Overall, this resulted in a lower estimate of the quotation error rate in original medical research articles. A total of 15 studies met the criteria for inclusion in the primary quantitative analysis. Quotation errors were divided into two categories: content ("factual") or source (improper indirect citation) errors. Content errors were further subdivided into major and minor errors depending on the degree that the assertion differed from the original source. The rate of quotation errors recalculated here is 14.5% (10.5% to 18.6% at a 95% confidence interval). These content errors are predominantly, 64.8% (56.1% to 73.5% at a 95% confidence interval), major errors or cited assertions in which the referenced source either fails to substantiate, is unrelated to, or contradicts the assertion. Minor errors, which are an oversimplification, overgeneralization, or trivial inaccuracies, are 35.2% (26.5% to 43.9% at a 95% confidence interval). Additionally, improper secondary (or indirect) citations, which are distinguished from calculations of quotation accuracy, occur at a rate of 10.4% (3.4% to 17.5% at a 95% confidence interval). PMID:28910404
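    Intervals like the 10.5% to 18.6% quoted around the 14.5% error rate are confidence intervals on a proportion; the Wilson score interval is one common way to compute such an interval. The count below is invented to match the quoted rate, since the review's exact denominator is not given here:

```python
import math

def wilson_interval(k, n, z=1.96):
    """Wilson score 95% confidence interval for a proportion k/n."""
    p = k / n
    z2 = z * z
    center = (p + z2 / (2 * n)) / (1 + z2 / n)
    half = z * math.sqrt(p * (1 - p) / n + z2 / (4 * n * n)) / (1 + z2 / n)
    return center - half, center + half

# Invented example: 29 quotation errors out of 200 quotations (14.5%).
lo, hi = wilson_interval(29, 200)
print(f"14.5% (95% CI {100 * lo:.1f}% to {100 * hi:.1f}%)")  # → 14.5% (95% CI 10.3% to 20.0%)
```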

  5. Study of FibroTest and hyaluronic acid biological variation in healthy volunteers and comparison of serum hyaluronic acid biological variation between chronic liver diseases of different etiology and fibrotic stage using confidence intervals.

    PubMed

    Istaces, Nicolas; Gulbis, Béatrice

    2015-07-01

    Personalized ranges of liver fibrosis serum biomarkers such as FibroTest or hyaluronic acid could be used for early detection of fibrotic changes in patients with progressive chronic liver disease. Our aim was to generate reliable biological variation estimates for these two biomarkers with confidence intervals for within-subject biological variation and reference change value. Nine fasting healthy volunteers and 66 chronic liver disease patients were included. Biological variation estimates were calculated for FibroTest in healthy volunteers, and for hyaluronic acid in healthy volunteers and chronic liver disease patients stratified by etiology and liver fibrosis stage. In healthy volunteers, within-subject biological coefficient of variation (with 95% confidence intervals) and index of individuality were 20% (16%-28%) and 0.6 for FibroTest and 34% (27%-47%) and 0.79 for hyaluronic acid, respectively. Overall hyaluronic acid within-subject biological coefficient of variation was similar among non-alcoholic fatty liver disease and chronic hepatitis C with 41% (34%-52%) and 45% (39%-55%), respectively, in contrast to chronic hepatitis B with 170% (140%-215%). Hyaluronic acid within-subject biological coefficients of variation were similar between F0-F1, F2 and F3 liver fibrosis stages in non-alcoholic fatty liver disease with 34% (25%-49%), 41% (31%-59%) and 34% (23%-62%), respectively, and in chronic hepatitis C with 34% (27%-47%), 33% (26%-45%) and 38% (27%-65%), respectively. However, corresponding hyaluronic acid indexes of individuality were lower in the higher fibrosis stages. Non-overlapping confidence intervals of biological variation estimates allowed us to detect significant differences regarding hyaluronic acid biological variation between chronic liver disease subgroups. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  6. Association of Race With Mortality and Cardiovascular Events in a Large Cohort of US Veterans.

    PubMed

    Kovesdy, Csaba P; Norris, Keith C; Boulware, L Ebony; Lu, Jun L; Ma, Jennie Z; Streja, Elani; Molnar, Miklos Z; Kalantar-Zadeh, Kamyar

    2015-10-20

    In the general population, blacks experience higher mortality than their white peers, attributed in part to their lower socioeconomic status, reduced access to care, and possibly intrinsic biological factors. Patients with kidney disease are a notable exception, among whom blacks experience lower mortality. It is unclear if similar differences affecting outcomes exist in patients with no kidney disease but with equal or similar access to health care. We compared all-cause mortality, incident coronary heart disease, and incident ischemic stroke using multivariable-adjusted Cox models in a nationwide cohort of 547 441 black and 2 525 525 white patients with baseline estimated glomerular filtration rate ≥ 60 mL·min⁻¹·1.73 m⁻² receiving care from the US Veterans Health Administration. In parallel analyses, we compared outcomes in black versus white individuals in the National Health and Nutrition Examination Survey (NHANES) 1999 to 2004. After multivariable adjustments in veterans, black race was associated with 24% lower all-cause mortality (adjusted hazard ratio, 0.76; 95% confidence interval, 0.75-0.77; P<0.001) and 37% lower incidence of coronary heart disease (adjusted hazard ratio, 0.63; 95% confidence interval, 0.62-0.65; P<0.001) but a similar incidence of ischemic stroke (adjusted hazard ratio, 0.99; 95% confidence interval, 0.97-1.01; P=0.3). Black race was associated with a 42% higher adjusted mortality among individuals with estimated glomerular filtration rate ≥ 60 mL·min⁻¹·1.73 m⁻² in NHANES (adjusted hazard ratio, 1.42; 95% confidence interval, 1.09-1.87). Black veterans with normal estimated glomerular filtration rate and equal access to healthcare have lower all-cause mortality and incidence of coronary heart disease and a similar incidence of ischemic stroke. These associations are in contrast to the higher mortality experienced by black individuals in the general US population. © 2015 American Heart Association, Inc.

  7. Accuracy of cited "facts" in medical research articles: A review of study methodology and recalculation of quotation error rate.

    PubMed

    Mogull, Scott A

    2017-01-01

    Previous reviews estimated that approximately 20 to 25% of assertions cited from original research articles, or "facts," are inaccurately quoted in the medical literature. These reviews noted that the original studies were dissimilar and only began to compare the methods of the original studies. The aim of this review is to examine the methods of the original studies and provide a more specific rate of incorrectly cited assertions, or quotation errors, in original research articles published in medical journals. Additionally, the estimate of quotation errors calculated here is based on the ratio of quotation errors to quotations examined (a percent) rather than the more prevalent and weighted metric of quotation errors to the references selected. Overall, this resulted in a lower estimate of the quotation error rate in original medical research articles. A total of 15 studies met the criteria for inclusion in the primary quantitative analysis. Quotation errors were divided into two categories: content ("factual") or source (improper indirect citation) errors. Content errors were further subdivided into major and minor errors depending on the degree that the assertion differed from the original source. The rate of quotation errors recalculated here is 14.5% (10.5% to 18.6% at a 95% confidence interval). These content errors are predominantly, 64.8% (56.1% to 73.5% at a 95% confidence interval), major errors or cited assertions in which the referenced source either fails to substantiate, is unrelated to, or contradicts the assertion. Minor errors, which are an oversimplification, overgeneralization, or trivial inaccuracies, are 35.2% (26.5% to 43.9% at a 95% confidence interval). Additionally, improper secondary (or indirect) citations, which are distinguished from calculations of quotation accuracy, occur at a rate of 10.4% (3.4% to 17.5% at a 95% confidence interval).

  8. Fluconazole use and birth defects in the National Birth Defects Prevention Study.

    PubMed

    Howley, Meredith M; Carter, Tonia C; Browne, Marilyn L; Romitti, Paul A; Cunniff, Christopher M; Druschel, Charlotte M

    2016-05-01

    Low-dose fluconazole is used commonly to treat vulvovaginal candidiasis, a condition occurring frequently during pregnancy. Conflicting information exists on the association between low-dose fluconazole use among pregnant women and the risk of major birth defects. We used data from the National Birth Defects Prevention Study to examine this association. The National Birth Defects Prevention Study is a multisite, population-based, case-control study that includes pregnancies with estimated delivery dates from 1997 to 2011. Information on fluconazole use in early pregnancy was collected by self-report from 31,645 mothers of birth defect cases and 11,612 mothers of unaffected controls. Adjusted odds ratios and 95% confidence intervals were estimated for birth defects with 5 or more exposed cases; crude odds ratios and exact 95% confidence intervals were estimated for birth defects with 3-4 exposed cases. Of the 43,257 mothers analyzed, 44 case mothers and 6 control mothers reported using fluconazole. Six exposed infants had cleft lip with cleft palate, 4 had an atrial septal defect, and each of the following defects had 3 exposed cases: hypospadias, tetralogy of Fallot, d-transposition of the great arteries, and pulmonary valve stenosis. Fluconazole use was associated with cleft lip with cleft palate (odds ratio = 5.53; confidence interval = 1.68-18.24) and d-transposition of the great arteries (odds ratio = 7.56; confidence interval = 1.22-35.45). The associations between fluconazole and both cleft lip with cleft palate and d-transposition of the great arteries are consistent with earlier published case reports but not recent epidemiologic studies. Despite the larger sample size of the National Birth Defects Prevention Study, fluconazole use was rare. Further investigation is needed in large studies, with particular emphasis on oral clefts and conotruncal heart defects. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. The Quality of Reporting of Measures of Precision in Animal Experiments in Implant Dentistry: A Methodological Study.

    PubMed

    Faggion, Clovis Mariano; Aranda, Luisiana; Diaz, Karla Tatiana; Shih, Ming-Chieh; Tu, Yu-Kang; Alarcón, Marco Antonio

    2016-01-01

    Information on precision of treatment-effect estimates is pivotal for understanding research findings. In animal experiments, which provide important information for supporting clinical trials in implant dentistry, inaccurate information may lead to biased clinical trials. The aim of this methodological study was to determine whether sample size calculation, standard errors, and confidence intervals for treatment-effect estimates are reported accurately in publications describing animal experiments in implant dentistry. MEDLINE (via PubMed), Scopus, and SciELO databases were searched to identify reports involving animal experiments with dental implants published from September 2010 to March 2015. Data from publications were extracted into a standardized form with nine items related to precision of treatment estimates and experiment characteristics. Data selection and extraction were performed independently and in duplicate, with disagreements resolved by discussion-based consensus. The chi-square and Fisher exact tests were used to assess differences in reporting according to study sponsorship type and impact factor of the journal of publication. The sample comprised reports of 161 animal experiments. Sample size calculation was reported in five (2%) publications. P values and confidence intervals were reported in 152 (94%) and 13 (8%) of these publications, respectively. Standard errors were reported in 19 (12%) publications. Confidence intervals were better reported in publications describing industry-supported animal experiments (P = .03) and with a higher impact factor (P = .02). Information on precision of estimates is rarely reported in publications describing animal experiments in implant dentistry. This lack of information makes it difficult to evaluate whether the translation of animal research findings to clinical trials is adequate.

  10. Effect Sizes and their Intervals: The Two-Level Repeated Measures Case

    ERIC Educational Resources Information Center

    Algina, James; Keselman, H. J.; Penfield, Randall D.

    2005-01-01

    Probability coverage for eight different confidence intervals (CIs) of measures of effect size (ES) in a two-level repeated measures design was investigated. The CIs and measures of ES differed with regard to whether they used least squares or robust estimates of central tendency and variability, whether the end critical points of the interval…

  11. Estimation and confidence intervals for empirical mixing distributions

    USGS Publications Warehouse

    Link, W.A.; Sauer, J.R.

    1995-01-01

    Questions regarding collections of parameter estimates can frequently be expressed in terms of an empirical mixing distribution (EMD). This report discusses empirical Bayes estimation of an EMD, with emphasis on the construction of interval estimates. Estimation of the EMD is accomplished by substitution of estimates of prior parameters in the posterior mean of the EMD. This procedure is examined in a parametric model (the normal-normal mixture) and in a semi-parametric model. In both cases, the empirical Bayes bootstrap of Laird and Louis (1987, Journal of the American Statistical Association 82, 739-757) is used to assess the variability of the estimated EMD arising from the estimation of prior parameters. The proposed methods are applied to a meta-analysis of population trend estimates for groups of birds.

  12. Statistical inferences with jointly type-II censored samples from two Pareto distributions

    NASA Astrophysics Data System (ADS)

    Abu-Zinadah, Hanaa H.

    2017-08-01

    In several industries, products come from more than one production line, which calls for comparative life tests. Such tests require sampling from the different production lines, giving rise to a joint censoring scheme. In this article we consider the Pareto lifetime distribution under a joint type-II censoring scheme. The maximum likelihood estimators (MLEs) and the corresponding approximate confidence intervals, as well as bootstrap confidence intervals, of the model parameters are obtained. Bayesian point estimates and credible intervals of the model parameters are also presented. A lifetime data set is analyzed for illustrative purposes, and Monte Carlo simulation results are presented to assess the performance of the proposed methods.
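
    The joint type-II censoring scheme studied in the paper is more involved than a single complete sample, but two of its basic ingredients — a Pareto maximum likelihood estimate and a bootstrap confidence interval — can be sketched minimally. The sample size, seed, and parameter values below are illustrative assumptions, not taken from the paper:

    ```python
    import math
    import random

    def pareto_mle_shape(xs, xm):
        # MLE of the Pareto shape parameter when the scale xm is known:
        # alpha_hat = n / sum(log(x_i / xm))
        return len(xs) / sum(math.log(x / xm) for x in xs)

    def bootstrap_ci(xs, xm, n_boot=2000, level=0.95, seed=1):
        # Percentile bootstrap interval for the shape parameter:
        # resample with replacement, recompute the MLE, take quantiles.
        rng = random.Random(seed)
        stats = sorted(
            pareto_mle_shape([rng.choice(xs) for _ in xs], xm)
            for _ in range(n_boot)
        )
        lo = stats[int((1 - level) / 2 * n_boot)]
        hi = stats[int((1 + level) / 2 * n_boot) - 1]
        return lo, hi

    # Simulate a Pareto(alpha=3, xm=1) sample via the inverse CDF:
    # x = xm * u**(-1/alpha) for u ~ Uniform(0, 1).
    rng = random.Random(0)
    sample = [1.0 * rng.random() ** (-1 / 3.0) for _ in range(200)]
    alpha_hat = pareto_mle_shape(sample, 1.0)
    lo, hi = bootstrap_ci(sample, 1.0)
    ```

    With 200 observations the MLE lands close to the true shape of 3, and the percentile interval brackets it without any normality assumption.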

  13. A Comparison of Various Stress Rupture Life Models for Orbiter Composite Pressure Vessels and Confidence Intervals

    NASA Technical Reports Server (NTRS)

    Grimes-Ledesma, Lorie; Murthy, Pappu L. N.; Phoenix, S. Leigh; Glaser, Ronald

    2007-01-01

    In conjunction with a recent NASA Engineering and Safety Center (NESC) investigation of flight worthiness of Kevlar Overwrapped Composite Pressure Vessels (COPVs) on board the Orbiter, two stress rupture life prediction models were proposed independently by Phoenix and by Glaser. In this paper, the use of these models to determine the system reliability of 24 COPVs currently in service on board the Orbiter is discussed. The models are briefly described, compared to each other, and model parameters and parameter uncertainties are also reviewed to understand confidence in reliability estimation as well as the sensitivities of these parameters in influencing overall predicted reliability levels. Differences and similarities in the various models will be compared via stress rupture reliability curves (stress ratio vs. lifetime plots). Also outlined will be the differences in the underlying model premises, and predictive outcomes. Sources of error and sensitivities in the models will be examined and discussed based on sensitivity analysis and confidence interval determination. Confidence interval results and their implications will be discussed for the models by Phoenix and Glaser.

  14. Standard errors and confidence intervals for variable importance in random forest regression, classification, and survival.

    PubMed

    Ishwaran, Hemant; Lu, Min

    2018-06-04

    Random forests are a popular nonparametric tree ensemble procedure with broad applications to data analysis. While its widespread popularity stems from its prediction performance, an equally important feature is that it provides a fully nonparametric measure of variable importance (VIMP). A current limitation of VIMP, however, is that no systematic method exists for estimating its variance. As a solution, we propose a subsampling approach that can be used to estimate the variance of VIMP and for constructing confidence intervals. The method is general enough that it can be applied to many useful settings, including regression, classification, and survival problems. Using extensive simulations, we demonstrate the effectiveness of the subsampling estimator and in particular find that the delete-d jackknife variance estimator, a close cousin, is especially effective under low subsampling rates due to its bias correction properties. These 2 estimators are highly competitive when compared with the .164 bootstrap estimator, a modified bootstrap procedure designed to deal with ties in out-of-sample data. Most importantly, subsampling is computationally fast, thus making it especially attractive for big data settings. Copyright © 2018 John Wiley & Sons, Ltd.
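
    The subsampling idea behind the proposed variance estimator can be illustrated generically: compute the statistic on many size-b subsamples drawn without replacement and rescale the variance of those replicates by b/n. This sketch uses the sample mean instead of the paper's VIMP measure, and all sizes and seeds are assumed for illustration only:

    ```python
    import random
    import statistics

    def subsample_variance(data, stat, b, n_rep=500, seed=0):
        # Subsampling variance estimator: evaluate the statistic on n_rep
        # subsamples of size b (without replacement), then rescale the
        # empirical variance of the replicates by b/n.
        rng = random.Random(seed)
        n = len(data)
        reps = [stat(rng.sample(data, b)) for _ in range(n_rep)]
        return (b / n) * statistics.pvariance(reps)

    rng = random.Random(1)
    data = [rng.gauss(0, 1) for _ in range(400)]
    v = subsample_variance(data, statistics.mean, b=80)
    # For the sample mean of 400 N(0,1) draws, the target variance is
    # sigma^2 / n = 1/400 = 0.0025, so v should be of that order.
    ```

    A 95% confidence interval then follows as estimate ± 1.96·sqrt(v); the same recipe applies to any statistic cheap enough to recompute on subsamples.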

  15. The Precision of Effect Size Estimation From Published Psychological Research: Surveying Confidence Intervals.

    PubMed

    Brand, Andrew; Bradley, Michael T

    2016-02-01

    Confidence interval (CI) widths were calculated for reported Cohen's d standardized effect sizes and examined in two automated surveys of published psychological literature. The first survey reviewed 1,902 articles from Psychological Science. The second survey reviewed a total of 5,169 articles from across the following four APA journals: Journal of Abnormal Psychology, Journal of Applied Psychology, Journal of Experimental Psychology: Human Perception and Performance, and Developmental Psychology. The median CI width for d was greater than 1 in both surveys. Hence, CI widths were, as Cohen (1994) speculated, embarrassingly large. Additional exploratory analyses revealed that CI widths varied across psychological research areas and that CI widths were not discernibly decreasing over time. The theoretical implications of these findings are discussed along with ways of reducing the CI widths and thus improving the precision of effect size estimation.
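
    Why the surveyed widths exceed 1 is easy to see from the standard large-sample approximation for the standard error of Cohen's d. The group sizes below are hypothetical, and the formula is the common Hedges-Olkin-style approximation, not necessarily the exact procedure the surveys used:

    ```python
    import math

    def cohen_d_ci(d, n1, n2, z=1.96):
        # Approximate large-sample 95% CI for Cohen's d:
        # se = sqrt((n1 + n2)/(n1 * n2) + d^2 / (2 * (n1 + n2)))
        se = math.sqrt((n1 + n2) / (n1 * n2) + d * d / (2 * (n1 + n2)))
        return d - z * se, d + z * se

    # Two groups of 20 — a fairly typical psychology cell size.
    lo, hi = cohen_d_ci(0.5, 20, 20)
    width = hi - lo
    # Even for a medium effect (d = 0.5), the interval is about 1.26
    # units wide and includes zero.
    ```

    Halving the width requires roughly quadrupling the sample size, which is the precision argument the article develops.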

  16. Effects of aerodynamic heating and TPS thermal performance uncertainties on the Shuttle Orbiter

    NASA Technical Reports Server (NTRS)

    Goodrich, W. D.; Derry, S. M.; Maraia, R. J.

    1980-01-01

    A procedure for estimating uncertainties in the aerodynamic-heating and thermal protection system (TPS) thermal-performance methodologies developed for the Shuttle Orbiter is presented. This procedure is used in predicting uncertainty bands around expected or nominal TPS thermal responses for the Orbiter during entry. Individual flowfield and TPS parameters that make major contributions to these uncertainty bands are identified and, by statistical considerations, combined in a manner suitable for making engineering estimates of the TPS thermal confidence intervals and temperature margins relative to design limits. Thus, for a fixed TPS design, entry trajectories for future Orbiter missions can be shaped subject to both the thermal-margin and confidence-interval requirements. This procedure is illustrated by assessing the thermal margins offered by selected areas of the existing Orbiter TPS design for an entry trajectory typifying early flight test missions.

  17. The Population Size of the Lesser Bandicoot (Bandicota bengalensis) in Three Markets in Penang, Malaysia

    PubMed Central

    Khairuddin, Nurul Liyana; Raghazli, Razlina; Sah, Shahrul Anuar Mohd; Shafie, Nur Juliani; Azman, Nur Munira

    2011-01-01

    A study of the population size of Bandicota bengalensis rats in three markets in Penang was conducted from April 2004 through May 2005. Taman Tun Sardon Market (TTS), Batu Lanchang Market (BTLG) and Bayan Lepas Market (BYNLP) were surveyed. Six sampling sessions were conducted in each market for four consecutive nights per session. The total captures of B. bengalensis in TTS, BTLG and BYNLP were 92%, 73% and 89% respectively. The total population of B. bengalensis in TTS was estimated as 265.4 (with a 95% confidence interval of 180.9–424.2). The total population at BTLG was estimated as 69.9 (with a 95% confidence interval of 35.5–148.9). At BYNLP, the total population was estimated as 134.7 (with a 95% confidence interval of 77.8–278.4). In general, adult male rats were captured most frequently at each site (55.19%), followed by adult females (31.69%), juvenile males (9.84%) and juvenile females (3.27%). The results showed that the number of rats captured at each site differed significantly according to sex ratio and maturity (χ2 = 121.45, df = 3, p<0.01). Our results suggest that the population sizes found by the study may not represent the actual population size in each market owing to the low numbers of rats recaptured. This finding might have resulted from the variety of foods available in the markets. PMID:24575219
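
    The abstract reports capture-based population estimates with asymmetric 95% intervals but does not spell out the estimator, so the following is only a generic mark-recapture sketch: Chapman's bias-corrected Lincoln-Petersen estimator with a crude normal-approximation interval. All counts are invented for illustration and the study's own (asymmetric) intervals were presumably derived differently:

    ```python
    import math

    def chapman_estimate(M, C, R):
        # Chapman's bias-corrected Lincoln-Petersen estimator:
        # M animals marked in session 1, C caught in session 2,
        # R of which were recaptures (already marked).
        N = (M + 1) * (C + 1) / (R + 1) - 1
        var = ((M + 1) * (C + 1) * (M - R) * (C - R)) / ((R + 1) ** 2 * (R + 2))
        half = 1.96 * math.sqrt(var)
        # Rough symmetric 95% CI, floored at the number of known animals.
        return N, max(R, N - half), N + half

    N, lo, hi = chapman_estimate(M=60, C=55, R=12)
    ```

    With few recaptures the variance term is large, which mirrors the authors' caveat that low recapture numbers make the estimates imprecise.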

  18. Generalized Bootstrap Method for Assessment of Uncertainty in Semivariogram Inference

    USGS Publications Warehouse

    Olea, R.A.; Pardo-Iguzquiza, E.

    2011-01-01

    The semivariogram and its related function, the covariance, play a central role in classical geostatistics for modeling the average continuity of spatially correlated attributes. Whereas all methods are formulated in terms of the true semivariogram, in practice what can be used are estimated semivariograms and models based on samples. A generalized form of the bootstrap method to properly model spatially correlated data is used to advance knowledge about the reliability of empirical semivariograms and semivariogram models based on a single sample. Among several methods available to generate spatially correlated resamples, we selected a method based on the LU decomposition and used several examples to illustrate the approach. The first is a synthetic, isotropic, exhaustive sample following a normal distribution; the second is also synthetic but follows a non-Gaussian random field; and a third, empirical sample consists of actual raingauge measurements. Results show wider confidence intervals than those found previously by others with inadequate application of the bootstrap. Also, even for the Gaussian example, distributions for estimated semivariogram values and model parameters are positively skewed. In this sense, bootstrap percentile confidence intervals, which are not centered around the empirical semivariogram and do not require distributional assumptions for their construction, provide an achieved coverage similar to the nominal coverage. The latter cannot be achieved by symmetrical confidence intervals based on the standard error, regardless of whether the standard error is estimated from a parametric equation or from the bootstrap. © 2010 International Association for Mathematical Geosciences.
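
    The percentile bootstrap the authors favor is simple to state: resample with replacement, recompute the statistic, and read off empirical quantiles, with no symmetry or distributional assumption. A minimal generic sketch on a skewed statistic (the sample variance of exponential data; sample sizes and seeds are illustrative, and no spatial correlation is modeled here, unlike the paper's generalized bootstrap):

    ```python
    import random
    import statistics

    def percentile_bootstrap_ci(data, stat, n_boot=2000, level=0.95, seed=2):
        # Percentile bootstrap CI: quantiles of the bootstrap replicates.
        rng = random.Random(seed)
        reps = sorted(
            stat([rng.choice(data) for _ in data]) for _ in range(n_boot)
        )
        alpha = (1 - level) / 2
        return reps[int(alpha * n_boot)], reps[int((1 - alpha) * n_boot) - 1]

    rng = random.Random(3)
    data = [rng.expovariate(1.0) for _ in range(100)]  # right-skewed sample
    est = statistics.variance(data)
    lo, hi = percentile_bootstrap_ci(data, statistics.variance)
    # The interval need not be symmetric about est: for skewed sampling
    # distributions, (hi - est) and (est - lo) generally differ.
    ```

    This asymmetry is exactly why the paper finds percentile intervals achieve closer-to-nominal coverage than symmetric standard-error intervals for semivariogram parameters.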

  19. Accurate estimation of normal incidence absorption coefficients with confidence intervals using a scanning laser Doppler vibrometer

    NASA Astrophysics Data System (ADS)

    Vuye, Cedric; Vanlanduit, Steve; Guillaume, Patrick

    2009-06-01

    When optical measurements of the sound field inside a glass tube, near the material under test, are used to estimate the reflection and absorption coefficients, confidence intervals for these acoustical parameters can be determined as well. The sound fields are visualized using a scanning laser Doppler vibrometer (SLDV). In this paper the influence of different test signals on the quality of the results obtained with this technique is examined. The amount of data gathered during one measurement scan makes a thorough statistical analysis possible, leading to knowledge of confidence intervals. The use of a multi-sine, constructed on the resonance frequencies of the test tube, proves to be a very good alternative to the traditional periodic chirp. This signal offers the ability to obtain data for multiple frequencies in one measurement, without the danger of a low signal-to-noise ratio. The variability analysis in this paper clearly shows the advantages of the proposed multi-sine compared to the periodic chirp. The measurement procedure and the statistical analysis are validated by measuring the reflection ratio at a closed end and comparing the results with the theoretical value. Results of the testing of two building materials (an acoustic ceiling tile and linoleum) are presented and compared to supplier data.

  20. New estimates of elasticity of demand for healthcare in rural China.

    PubMed

    Zhou, Zhongliang; Su, Yanfang; Gao, Jianmin; Xu, Ling; Zhang, Yaoguang

    2011-12-01

    Few empirical studies have reported the own-price elasticity of demand for health care in rural China, and neither the income elasticity of demand for health care nor the cross-price elasticity of demand for inpatient versus outpatient services has been reported. However, elasticity of demand is informative to evaluate current policy and to guide further policy making. Our study contributes to the literature by estimating three elasticities (i.e., own-price elasticity, cross-price elasticity, and income elasticity of demand for health care) based on nationally representative data. We aim to answer three empirical questions with regard to health expenditure in rural China: (1) Which service is more sensitive to price change, outpatient or inpatient service? (2) Is outpatient service a substitute or complement to inpatient service? and (3) Does demand for inpatient services grow faster than demand for outpatient services with income growth? Based on data from a National Health Services Survey, a Probit regression model with probability of outpatient visit and probability of inpatient visit as dependent variables and a zero-truncated negative binomial regression model with outpatient visits as dependent variable were constructed to isolate the effects of price and income on demand for health care. Both pooled and separated regressions for 2003 and 2008 were conducted with tests of robustness. Own-price elasticities of demand for first outpatient visit, outpatient visits among users and first inpatient visit are -0.519 [95% confidence interval (-0.703, -0.336)], -0.547 [95% confidence interval (-0.747, -0.347)] and -0.372 [95% confidence interval (-0.517, -0.226)], respectively. 
Cross-price elasticities of demand for first outpatient visit, outpatient visits among users and first inpatient visit are 0.073 [95% confidence interval (-0.176, 0.322)], 0.308 [95% confidence interval (0.087, 0.528)], and 0.059 [95% confidence interval (-0.085, 0.204)], respectively. Income elasticities of demand for first outpatient visit, outpatient visits among users and first inpatient visit are 0.098 [95% confidence interval (0.018, 0.178)], 0.136 [95% confidence interval (0.028, 0.245)] and 0.521 [95% confidence interval (0.438, 0.605)], respectively. The aforementioned results are in 2008, which hold similar pattern as results in 2003 as well as results from pooled data of two periods. First, no significant difference is detected between sensitivity of outpatient services and sensitivity of inpatient services, responding to own-price change. Second, inpatient services are substitutes to outpatient services. Third, the growth of inpatient services is faster than the growth in outpatient services in response to income growth. The major findings from this paper suggest refining insurance policy in rural China. First, from a cost-effectiveness perspective, changing outpatient price is at least as effective as changing inpatient price to adjust demand of health care. Second, the current national guideline of healthcare reform to increase the reimbursement rate for inpatient services will crowd out outpatient services; however, we have no evidence about the change in demand for inpatient service if insurance covers outpatient services. Third, a referral system and gate-keeping system should be established to guide rural patients to utilize outpatient service. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  1. Pancreatic β-Cell Function and Prognosis of Nondiabetic Patients With Ischemic Stroke.

    PubMed

    Pan, Yuesong; Chen, Weiqi; Jing, Jing; Zheng, Huaguang; Jia, Qian; Li, Hao; Zhao, Xingquan; Liu, Liping; Wang, Yongjun; He, Yan; Wang, Yilong

    2017-11-01

    Pancreatic β-cell dysfunction is an important factor in the development of type 2 diabetes mellitus. This study aimed to estimate the association between β-cell dysfunction and prognosis of nondiabetic patients with ischemic stroke. Patients with ischemic stroke without a history of diabetes mellitus in the ACROSS-China (Abnormal Glucose Regulation in Patients with Acute Stroke across China) registry were included. Disposition index was estimated as computer-based model of homeostatic model assessment 2-β%/homeostatic model assessment 2-insulin resistance based on fasting C-peptide level. Outcomes included stroke recurrence, all-cause death, and dependency (modified Rankin Scale, 3-5) at 12 months after onset. Among 1171 patients, 37.2% were women with a mean age of 62.4 years. At 12 months, 167 (14.8%) patients had recurrent stroke, 110 (9.4%) died, and 184 (16.0%) had a dependency. The first quartile of the disposition index was associated with an increased risk of stroke recurrence (adjusted hazard ratio, 3.57; 95% confidence interval, 2.13-5.99) and dependency (adjusted hazard ratio, 2.30; 95% confidence interval, 1.21-4.38); both the first and second quartiles of the disposition index were associated with an increased risk of death (adjusted hazard ratio, 5.09; 95% confidence interval, 2.51-10.33; adjusted hazard ratio, 2.42; 95% confidence interval, 1.17-5.03) compared with the fourth quartile. Using a multivariable regression model with restricted cubic spline, we observed an L-shaped association between the disposition index and the risk of each end point. In this large-scale registry, β-cell dysfunction was associated with an increased risk of 12-month poor prognosis in nondiabetic patients with ischemic stroke. © 2017 American Heart Association, Inc.

  2. Nationwide Estimates of 30-Day Readmission in Patients With Ischemic Stroke.

    PubMed

    Vahidy, Farhaan S; Donnelly, John P; McCullough, Louise D; Tyson, Jon E; Miller, Charles C; Boehme, Amelia K; Savitz, Sean I; Albright, Karen C

    2017-05-01

    Readmission within 30 days of hospital discharge for ischemic stroke is an important quality of care metric. We aimed to provide nationwide estimates of 30-day readmission in the United States, to describe important reasons for readmission, and to explore factors associated with 30-day readmission, particularly the association with recanalization therapy. We conducted a weighted analysis of the 2013 Nationwide Readmission Database to represent all US hospitalizations. Adult patients with acute ischemic stroke, including those who received intravenous tissue-type plasminogen activator and intra-arterial therapy, were identified using International Classification of Diseases, Ninth Revision codes. Readmissions were defined as any readmission during the 30-day post-index hospitalization discharge period for the eligible patient population. Proportions and 95% confidence intervals for overall 30-day readmissions and for unplanned and potentially preventable readmissions are reported. Survey design logistic regression models were fit to determine crude and adjusted odds ratios and 95% confidence intervals for the association between recanalization therapy and 30-day readmission. Of the 319 317 patients with acute ischemic stroke, 12.1% (95% confidence interval, 11.9-12.3) were readmitted. Of these, 89.6% were unplanned and 12.9% were potentially preventable. More than 20% of all readmissions were attributable to acute cerebrovascular disease. Readmitted patients were older and had a higher comorbidity burden. After controlling for age, sex, insurance status, and comorbidities, patients who underwent recanalization therapy had significantly lower odds of 30-day readmission (odds ratio, 0.82; 95% confidence interval, 0.77-0.89). Up to 12% of patients with ischemic stroke are readmitted within the 30-day post-discharge period, and recanalization therapy is associated with 11% to 23% lower odds of 30-day readmission. © 2017 American Heart Association, Inc.
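
    The study fits survey-design logistic regressions, but the mechanics of an odds ratio and its Wald 95% confidence interval can be sketched from a plain 2x2 table. The counts below are hypothetical and not from the Nationwide Readmission Database:

    ```python
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        # Wald CI for an odds ratio from a 2x2 table:
        # exposed:   a events, b non-events
        # unexposed: c events, d non-events
        # se(log OR) = sqrt(1/a + 1/b + 1/c + 1/d)
        or_ = (a * d) / (b * c)
        se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        return (or_,
                math.exp(math.log(or_) - z * se_log),
                math.exp(math.log(or_) + z * se_log))

    # Hypothetical counts: readmissions among treated vs. untreated patients.
    or_, lo, hi = odds_ratio_ci(40, 160, 55, 145)
    ```

    The interval is built on the log scale and exponentiated, which is why published odds-ratio intervals (like the 0.77-0.89 above) are asymmetric about the point estimate.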

  3. Disparities in access to preventive health care services among insured children in a cross sectional study.

    PubMed

    King, Christian

    2016-07-01

    Children with insurance have better access to care and health outcomes if their parents also have insurance. However, little is known about whether the type of parental insurance matters. This study attempts to determine whether the type of parental insurance affects the access to health care services of children. I used data from the 2009-2013 Medical Expenditure Panel Survey and estimated multivariate logistic regressions (N = 26,152). I estimated how family insurance coverage affects the probability that children have a usual source of care, well-child visits in the past year, unmet medical and prescription needs, less than 1 dental visit per year, and unmet dental needs. Children in families with mixed insurance (child publicly insured and parent privately insured) were less likely to have a well-child visit than children in privately insured families (odds ratio = 0.86, 95% confidence interval 0.76-0.98). When restricting the sample to publicly insured children, children with privately insured parents were less likely to have a well-child visit (odds ratio = 0.82, 95% confidence interval 0.73-0.92), less likely to have a usual source of care (odds ratio = 0.79, 95% confidence interval 0.67-0.94), and more likely to have unmet dental needs (odds ratio = 1.68, 95% confidence interval 1.10-2.58). Children in families with mixed insurance tend to fare poorly compared to children in publicly insured families. This may indicate that children in these families may be underinsured. Expanding parental eligibility for public insurance or subsidizing private insurance for children would potentially improve their access to preventive care.

  4. Cardiac rehabilitation after percutaneous coronary intervention: Results from a nationwide survey.

    PubMed

    Olsen, Siv Js; Schirmer, Henrik; Bønaa, Kaare H; Hanssen, Tove A

    2018-03-01

    The purpose of this study was to estimate the proportion of Norwegian coronary heart disease patients participating in cardiac rehabilitation programmes after percutaneous coronary intervention, and to determine predictors of cardiac rehabilitation participation. Participants were patients enrolled in the Norwegian Coronary Stent Trial. We assessed cardiac rehabilitation participation in 9013 of these patients who had undergone their first percutaneous coronary intervention during 2008-2011. Of these, 7068 patients (82%) completed a self-administered questionnaire on cardiac rehabilitation participation within three years after their percutaneous coronary intervention. Twenty-eight per cent of the participants reported engaging in cardiac rehabilitation. The participation rate differed among the four regional health authorities in Norway, varying from 20% to 31%. Patients undergoing percutaneous coronary intervention for an acute coronary syndrome were more likely to participate in cardiac rehabilitation than patients with stable angina (odds ratio 3.2; 95% confidence interval 2.74-3.76). A multivariate statistical model revealed that men had a 28% lower probability (p < 0.001) of participating in cardiac rehabilitation, and the odds of attending cardiac rehabilitation decreased with increasing age (p < 0.001). Contributors to higher odds of cardiac rehabilitation participation were educational level >12 years (odds ratio 1.50; 95% confidence interval 1.32-1.71) and body mass index >25 (odds ratio 1.19; 95% confidence interval 1.05-1.36). Prior coronary artery bypass graft was associated with lower odds of cardiac rehabilitation participation (odds ratio 0.47; 95% confidence interval 0.32-0.70). Conclusion: The estimated cardiac rehabilitation participation rate among patients undergoing first-time percutaneous coronary intervention is low in Norway. The typical participant is young, overweight, well-educated, and had an acute coronary event. These results varied by geographical region.

  5. A Comparison of Regional and Site-Specific Volume Estimation Equations

    Treesearch

    Joe P. McClure; Jana Anderson; Hans T. Schreuder

    1987-01-01

    Regression equations for volume by region and site class were examined for loblolly pine. The regressions for the Coastal Plain and Piedmont regions had significantly different slopes. The results showed important practical differences in the percentage of confidence intervals containing the true total volume and in the percentage of estimates within a specific proportion of...

  6. Lower Plasma Fetuin-A Levels Are Associated With a Higher Mortality Risk in Patients With Coronary Artery Disease.

    PubMed

    Chen, Xuechen; Zhang, Yuan; Chen, Qian; Li, Qing; Li, Yanping; Ling, Wenhua

    2017-11-01

    The present study was designed to evaluate the association of circulating fetuin-A with cardiovascular disease (CVD) and all-cause mortality. We measured plasma fetuin-A in 1620 patients using an enzyme-linked immunosorbent assay kit. The patients were members of the Guangdong coronary artery disease cohort and were recruited between October 2008 and December 2011. Cox regression models were used to estimate the association between plasma fetuin-A and the risk of mortality. A total of 206 deaths were recorded during a median follow-up of 5.9 years, 146 of which were due to CVD. The hazard ratios for the second and third tertiles of the fetuin-A levels (using the first tertile as a reference) were 0.65 (95% confidence interval, 0.44-0.96) and 0.51 (95% confidence interval, 0.33-0.78) for CVD mortality (P = 0.005) and 0.65 (95% confidence interval, 0.47-0.91) and 0.48 (95% confidence interval, 0.33-0.70) for all-cause mortality (P < 0.001), respectively. Lower plasma fetuin-A levels were associated with an increased risk of all-cause and CVD mortality in patients with coronary artery disease independently of traditional CVD risk factors. © 2017 American Heart Association, Inc.

  7. A prospective observational cohort study to assess the incidence of acute otitis media among children 0-5 years of age in Southern Brazil.

    PubMed

    Lanzieri, Tatiana M; Cunha, Clóvis Arns da; Cunha, Rejane B; Arguello, D Fermin; Devadiga, Raghavendra; Sanchez, Nervo; Barria, Eduardo Ortega

    To estimate acute otitis media incidence among young children and impact on quality of life of parents/caregivers in a southern Brazilian city. Prospective cohort study including children 0-5 years of age registered at a private pediatric practice. Acute otitis media episodes diagnosed by a pediatrician and impact on quality of life of parents/caregivers were assessed during a 12-month follow-up. During September 2008-March 2010, of 1,136 children enrolled in the study, 1074 (95%) were followed: 55.0% were ≤2 years of age, 52.3% males, 94.7% white, and 69.2% had previously received pneumococcal vaccine in private clinics. Acute otitis media incidence per 1000 person-years was 95.7 (95% confidence interval: 77.2-117.4) overall, 105.5 (95% confidence interval: 78.3-139.0) in children ≤2 years of age and 63.6 (95% confidence interval: 43.2-90.3) in children 3-5 years of age. Acute otitis media incidence per 1000 person-years was 86.3 (95% confidence interval: 65.5-111.5) and 117.1 (95% confidence interval: 80.1-165.3) among vaccinated and unvaccinated children, respectively. Nearly 68.9% of parents reported worsening of their overall quality of life. Acute otitis media incidence among unvaccinated children in our study may be useful as baseline data to assess impact of pneumococcal vaccine introduction in the Brazilian National Immunization Program in April 2010. Copyright © 2017 Sociedade Brasileira de Infectologia. Published by Elsevier Editora Ltda. All rights reserved.

  8. Performance of the European System for Cardiac Operative Risk Evaluation II: a meta-analysis of 22 studies involving 145,592 cardiac surgery procedures.

    PubMed

    Guida, Pietro; Mastro, Florinda; Scrascia, Giuseppe; Whitlock, Richard; Paparella, Domenico

    2014-12-01

    A systematic review of the European System for Cardiac Operative Risk Evaluation (euroSCORE) II performance for prediction of operative mortality after cardiac surgery has not been performed. We conducted a meta-analysis of studies based on the predictive accuracy of the euroSCORE II. We searched the Embase and PubMed databases for all English-only articles reporting performance characteristics of the euroSCORE II. The area under the receiver operating characteristic curve, the observed/expected mortality ratio, and observed-expected mortality difference with their 95% confidence intervals were analyzed. Twenty-two articles were selected, including 145,592 procedures. Operative mortality occurred in 4293 (2.95%), whereas the expected events according to euroSCORE II were 4802 (3.30%). Meta-analysis of these studies provided an area under the receiver operating characteristic curve of 0.792 (95% confidence interval, 0.773-0.811), an estimated observed/expected ratio of 1.019 (95% confidence interval, 0.899-1.139), and observed-expected difference of 0.125 (95% confidence interval, -0.269 to 0.519). Statistical heterogeneity was detected among retrospective studies including less recent procedures. Subgroups analysis confirmed the robustness of combined estimates for isolated valve procedures and those combined with revascularization surgery. A significant overestimation of the euroSCORE II with an observed/expected ratio of 0.829 (95% confidence interval, 0.677-0.982) was observed in isolated coronary artery bypass grafting and a slight underestimation of predictions in high-risk patients (observed/expected ratio 1.253 and observed-expected difference 1.859). Despite the heterogeneity, the results from this meta-analysis show a good overall performance of the euroSCORE II in terms of discrimination and accuracy of model predictions for operative mortality. 
Validation of the euroSCORE II in prospective populations needs to be further studied for a continuous improvement of patients' risk stratification before cardiac surgery. Copyright © 2014 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.
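
    As a crude arithmetic check on the pooled totals, an observed/expected mortality ratio with a Poisson-based log-scale interval can be computed directly from the aggregate counts. Note that the meta-analysis's estimate of 1.019 comes from combining per-study ratios under a random-effects model, so this naive single-stratum calculation differs by design:

    ```python
    import math

    def oe_ratio_ci(observed, expected, z=1.96):
        # Naive Poisson-based 95% CI for an observed/expected ratio:
        # se(log(O/E)) ~= 1/sqrt(O); exponentiate the log-scale bounds.
        ratio = observed / expected
        se_log = 1 / math.sqrt(observed)
        return (ratio,
                ratio * math.exp(-z * se_log),
                ratio * math.exp(z * se_log))

    # Aggregate counts from the abstract: 4293 observed deaths,
    # 4802 expected by euroSCORE II.
    ratio, lo, hi = oe_ratio_ci(4293, 4802)
    # The crude pooled ratio is about 0.89, i.e., fewer deaths than predicted.
    ```

    The gap between this crude ratio and the meta-analytic 1.019 illustrates why the pooling method (and between-study heterogeneity) matters when summarizing calibration.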

  9. Simplified Estimation and Testing in Unbalanced Repeated Measures Designs.

    PubMed

    Spiess, Martin; Jordan, Pascal; Wendt, Mike

    2018-05-07

    In this paper we propose a simple estimator for unbalanced repeated measures design models where each unit is observed at least once in each cell of the experimental design. The estimator does not require a model of the error covariance structure. Thus, circularity of the error covariance matrix and estimation of correlation parameters and variances are not necessary. Together with a weak assumption about the reason for the varying number of observations, the proposed estimator and its variance estimator are unbiased. As an alternative to confidence intervals based on the normality assumption, a bias-corrected and accelerated bootstrap technique is considered. We also propose the naive percentile bootstrap for Wald-type tests where the standard Wald test may break down when the number of observations is small relative to the number of parameters to be estimated. In a simulation study we illustrate the properties of the estimator and the bootstrap techniques to calculate confidence intervals and conduct hypothesis tests in small and large samples under normality and non-normality of the errors. The results imply that the simple estimator is only slightly less efficient than an estimator that correctly assumes a block structure of the error correlation matrix, a special case of which is an equi-correlation matrix. Application of the estimator and the bootstrap technique is illustrated using data from a task switch experiment based on an experimental within design with 32 cells and 33 participants.

  10. Risks of Large Portfolios

    PubMed Central

    Fan, Jianqing; Liao, Yuan; Shi, Xiaofeng

    2014-01-01

    The risk of a large portfolio is often estimated by substituting a good estimator of the volatility matrix. However, the accuracy of such a risk estimator is largely unknown. We study factor-based risk estimators with a large number of assets and introduce a high-confidence level upper bound (H-CLUB) to assess the estimation. The H-CLUB is constructed using the confidence interval of risk estimators with either known or unknown factors. We derive the limiting distribution of the estimated risks in high dimensionality. We find that when the dimension is large, the factor-based risk estimators have the same asymptotic variance whether or not the factors are known, and this variance is slightly smaller than that of the sample covariance-based estimator. Numerically, H-CLUB outperforms the traditional crude bounds and provides an insightful risk assessment. In addition, our simulated results quantify the relative error in the risk estimation, which is usually negligible using 3-month daily data. PMID:26195851

  11. Obtaining appropriate interval estimates for age when multiple indicators are used: evaluation of an ad-hoc procedure.

    PubMed

    Fieuws, Steffen; Willems, Guy; Larsen-Tangmose, Sara; Lynnerup, Niels; Boldsen, Jesper; Thevissen, Patrick

    2016-03-01

    When an estimate of age is needed, typically multiple indicators are present, as found in skeletal or dental information. There exists a vast literature on approaches to estimate age from such multivariate data. Application of Bayes' rule has been proposed to overcome drawbacks of classical regression models but becomes less trivial as soon as the number of indicators increases. Each of the age indicators can lead to a different point estimate ("the most plausible value for age") and a prediction interval ("the range of possible values"). The major challenge in the combination of multiple indicators is not the calculation of a combined point estimate for age but the construction of an appropriate prediction interval. Ignoring the correlation between the age indicators results in intervals that are too small. Boldsen et al. (2002) presented an ad-hoc procedure to construct an approximate confidence interval without the need to model the multivariate correlation structure between the indicators. The aim of the present paper is to draw attention to this pragmatic approach and to evaluate its performance in a practical setting. This is all the more needed since recent publications ignore the need for interval estimation. To illustrate and evaluate the method, the third molar scores of Köhler et al. (1995) are used to estimate age in a dataset of 3200 male subjects in the juvenile age range.

  12. Association of Pulse Wave Velocity With Chronic Kidney Disease Progression and Mortality: Findings From the CRIC Study (Chronic Renal Insufficiency Cohort).

    PubMed

    Townsend, Raymond R; Anderson, Amanda Hyre; Chirinos, Julio A; Feldman, Harold I; Grunwald, Juan E; Nessel, Lisa; Roy, Jason; Weir, Matthew R; Wright, Jackson T; Bansal, Nisha; Hsu, Chi-Yuan

    2018-06-01

    Patients with chronic kidney diseases (CKDs) are at risk for further loss of kidney function and death, which occur despite reasonable blood pressure treatment. To determine whether arterial stiffness influences CKD progression and death, independent of blood pressure, we conducted a prospective cohort study of CKD patients enrolled in the CRIC study (Chronic Renal Insufficiency Cohort). Using carotid-femoral pulse wave velocity (PWV), we examined the relationship between PWV and end-stage renal disease (ESRD), ESRD or halving of estimated glomerular filtration rate, or death from any cause. The 2795 participants we enrolled had a mean age of 60 years, 56.4% were men, 47.3% had diabetes mellitus, and the average estimated glomerular filtration rate at entry was 44.4 mL/min per 1.73 m². During follow-up, there were 504 ESRD events, 628 ESRD or halving of estimated glomerular filtration rate events, and 394 deaths. Patients in the highest tertile of PWV (>10.3 m/s) were at higher risk for ESRD (hazard ratio [95% confidence interval], 1.37 [1.05-1.80]), ESRD or 50% decline in estimated glomerular filtration rate (hazard ratio [95% confidence interval], 1.25 [0.98-1.58]), or death (hazard ratio [95% confidence interval], 1.72 [1.24-2.38]). PWV is a significant predictor of CKD progression and death in people with impaired kidney function. Incorporation of PWV measurements may help better define the risks for these important health outcomes in patients with CKDs. Interventions that reduce aortic stiffness deserve study in people with CKD. © 2018 American Heart Association, Inc.

  13. The Safety and Efficacy of Approaches to Liver Resection: A Meta-Analysis

    PubMed Central

    Hauch, Adam; Hu, Tian; Buell, Joseph F.; Slakey, Douglas P.; Kandil, Emad

    2015-01-01

    Background: The aim of this study is to compare the safety and efficacy of conventional laparotomy with those of robotic and laparoscopic approaches to hepatectomy. Database: Independent reviewers conducted a systematic review of publications in PubMed and Embase, with searches limited to comparative articles of laparoscopic hepatectomy with either conventional or robotic liver approaches. Outcomes included total operative time, estimated blood loss, length of hospitalization, resection margins, postoperative complications, perioperative mortality rates, and cost measures. Outcome comparisons were calculated using random-effects models to pool estimates of mean net differences or of the relative risk between group outcomes. Forty-nine articles, representing 3702 patients, comprise this analysis: 1901 (51.35%) underwent a laparoscopic approach, 1741 (47.03%) underwent an open approach, and 60 (1.62%) underwent a robotic approach. There was no difference in total operative times, surgical margins, or perioperative mortality rates among groups. Across all outcome measures, laparoscopic and robotic approaches showed no difference. As compared with the minimally invasive groups, patients undergoing laparotomy had a greater estimated blood loss (pooled mean net change, 152.0 mL; 95% confidence interval, 103.3–200.8 mL), a longer length of hospital stay (pooled mean difference, 2.22 days; 95% confidence interval, 1.78–2.66 days), and a higher total complication rate (odds ratio, 0.5; 95% confidence interval, 0.42–0.57). Conclusion: Minimally invasive approaches to liver resection are as safe as conventional laparotomy, affording less estimated blood loss, shorter lengths of hospitalization, lower perioperative complication rates, and comparable oncologic integrity and postoperative mortality rates. There was no proven advantage of robotic approaches compared with laparoscopic approaches. PMID:25848191

  14. Variable impact on mortality of AIDS-defining events diagnosed during combination antiretroviral therapy: not all AIDS-defining conditions are created equal.

    PubMed

    Mocroft, Amanda; Sterne, Jonathan A C; Egger, Matthias; May, Margaret; Grabar, Sophie; Furrer, Hansjakob; Sabin, Caroline; Fatkenheuer, Gerd; Justice, Amy; Reiss, Peter; d'Arminio Monforte, Antonella; Gill, John; Hogg, Robert; Bonnet, Fabrice; Kitahata, Mari; Staszewski, Schlomo; Casabona, Jordi; Harris, Ross; Saag, Michael

    2009-04-15

    The extent to which mortality differs following individual acquired immunodeficiency syndrome (AIDS)-defining events (ADEs) has not been assessed among patients initiating combination antiretroviral therapy. We analyzed data from 31,620 patients with no prior ADEs who started combination antiretroviral therapy. Cox proportional hazards models were used to estimate mortality hazard ratios for each ADE that occurred in >50 patients, after stratification by cohort and adjustment for sex, HIV transmission group, number of antiretroviral drugs initiated, regimen, age, date of starting combination antiretroviral therapy, and CD4+ cell count and HIV RNA load at initiation of combination antiretroviral therapy. ADEs that occurred in <50 patients were grouped together to form a "rare ADEs" category. During a median follow-up period of 43 months (interquartile range, 19-70 months), 2880 ADEs were diagnosed in 2262 patients; 1146 patients died. The most common ADEs were esophageal candidiasis (in 360 patients), Pneumocystis jiroveci pneumonia (320 patients), and Kaposi sarcoma (308 patients). The greatest mortality hazard ratio was associated with non-Hodgkin's lymphoma (hazard ratio, 17.59; 95% confidence interval, 13.84-22.35) and progressive multifocal leukoencephalopathy (hazard ratio, 10.0; 95% confidence interval, 6.70-14.92). Three groups of ADEs were identified on the basis of the ranked hazard ratios with bootstrapped confidence intervals: severe (non-Hodgkin's lymphoma and progressive multifocal leukoencephalopathy [hazard ratio, 7.26; 95% confidence interval, 5.55-9.48]), moderate (cryptococcosis, cerebral toxoplasmosis, AIDS dementia complex, disseminated Mycobacterium avium complex, and rare ADEs [hazard ratio, 2.35; 95% confidence interval, 1.76-3.13]), and mild (all other ADEs [hazard ratio, 1.47; 95% confidence interval, 1.08-2.00]). In the combination antiretroviral therapy era, mortality rates subsequent to an ADE depend on the specific diagnosis. 
The proposed classification of ADEs may be useful in clinical end point trials, prognostic studies, and patient management.

  15. Out-of-range INR values and outcomes among new warfarin patients with non-valvular atrial fibrillation.

    PubMed

    Nelson, Winnie W; Wang, Li; Baser, Onur; Damaraju, Chandrasekharrao V; Schein, Jeffrey R

    2015-02-01

    Although warfarin is efficacious for stroke prevention in non-valvular atrial fibrillation, many warfarin patients are sub-optimally managed. Our objective was to evaluate the association of international normalized ratio control and clinical outcomes among new warfarin patients with non-valvular atrial fibrillation. Adult non-valvular atrial fibrillation patients (≥18 years) initiating warfarin treatment were selected from the US Veterans Health Administration dataset between 10/2007 and 9/2012. Valid international normalized ratio values were examined from the warfarin initiation date through the earliest of the first clinical outcome, end of warfarin exposure, or death. Each patient contributed multiple in-range and out-of-range time periods. The relative risk ratios of clinical outcomes associated with international normalized ratio control were estimated. 34,346 patients were included for analysis. During the warfarin exposure period, the incidence of events per 100 person-years was highest when patients had an international normalized ratio <2: 13.66 for acute coronary syndrome; 10.30 for ischemic stroke; 2.93 for transient ischemic attack; 1.81 for systemic embolism; and 4.55 for major bleeding. Poisson regression confirmed that during periods with international normalized ratio <2, patients were at increased risk of developing acute coronary syndrome (relative risk ratio: 7.9; 95% confidence interval 6.9-9.1), ischemic stroke (relative risk ratio: 7.6; 95% confidence interval 6.5-8.9), transient ischemic attack (relative risk ratio: 8.2; 95% confidence interval 6.1-11.2), systemic embolism (relative risk ratio: 6.3; 95% confidence interval 4.4-8.9), and major bleeding (relative risk ratio: 2.6; 95% confidence interval 2.2-3.0). During time periods with international normalized ratio >3, patients had a significantly increased risk of major bleeding (relative risk ratio: 1.5; 95% confidence interval 1.2-2.0). 
In a Veterans Health Administration non-valvular atrial fibrillation population, exposure to out-of-range international normalized ratio values was associated with significantly increased risk of adverse clinical outcomes.

  16. A comment on "Novel scavenger removal trials increase wind turbine-caused avian fatality estimates"

    USGS Publications Warehouse

    Huso, Manuela M.P.; Erickson, Wallace P.

    2013-01-01

    In a recent paper, Smallwood et al. (2010) conducted a study to compare their “novel” approach to conducting carcass removal trials with what they term the “conventional” approach and to evaluate the effects of the different methods on estimated avian fatality at a wind power facility in California. A quick glance at Table 3, which succinctly summarizes their results and provides estimated fatality rates and 80% confidence intervals calculated using the 2 methods, reveals a surprising result. The confidence intervals of all of their novel estimates and most of the conventional estimates extend below 0. These results imply that wind turbines may have the capacity to create live birds. But a more likely interpretation is that a serious error occurred in the calculation of either the average fatality rate or its standard error or both. Further evaluation of their methods reveals that the scientific basis for concluding that “many estimates of scavenger removal rates prior to [their] study were likely biased low due to scavenger swamping” and that “previously reported estimates of avian fatality rates … should be adjusted upwards” was not evident in their analysis and results. Their comparison to conventional approaches was not applicable, their statistical models were questionable, and the conclusions they drew were unsupported.

  17. TARGETED SEQUENTIAL DESIGN FOR TARGETED LEARNING INFERENCE OF THE OPTIMAL TREATMENT RULE AND ITS MEAN REWARD.

    PubMed

    Chambaz, Antoine; Zheng, Wenjing; van der Laan, Mark J

    2017-01-01

    This article studies the targeted sequential inference of an optimal treatment rule (TR) and its mean reward in the non-exceptional case, i.e., assuming that there is no stratum of the baseline covariates where treatment is neither beneficial nor harmful, and under a companion margin assumption. Our pivotal estimator, whose definition hinges on the targeted minimum loss estimation (TMLE) principle, actually infers the mean reward under the current estimate of the optimal TR. This data-adaptive statistical parameter is worthy of interest on its own. Our main result is a central limit theorem which enables the construction of confidence intervals on both mean rewards, under the current estimate of the optimal TR and under the optimal TR itself. The asymptotic variance of the estimator takes the form of the variance of an efficient influence curve at a limiting distribution, allowing us to discuss the efficiency of inference. As a by-product, we also derive confidence intervals on two cumulated pseudo-regrets, a key notion in the study of bandit problems. A simulation study illustrates the procedure. One of the cornerstones of the theoretical study is a new maximal inequality for martingales with respect to the uniform entropy integral.

  18. Poisson and negative binomial item count techniques for surveys with sensitive question.

    PubMed

    Tian, Guo-Liang; Tang, Man-Lai; Wu, Qin; Liu, Yin

    2017-04-01

    Although the item count technique is useful in surveys with sensitive questions, the privacy of those respondents who possess the sensitive characteristic of interest may not be well protected due to a defect in its original design. In this article, we propose two new survey designs (namely the Poisson item count technique and the negative binomial item count technique) which replace the several independent Bernoulli random variables required by the original item count technique with a single Poisson or negative binomial random variable, respectively. The proposed models not only provide a closed-form variance estimate and a confidence interval within [0, 1] for the sensitive proportion, but also simplify the survey design of the original item count technique. Most importantly, the new designs do not leak respondents' privacy. Empirical results show that the proposed techniques perform satisfactorily in the sense that they yield accurate parameter estimates and confidence intervals.
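    For intuition, the idea behind such designs can be sketched as a small simulation: the control group reports only an innocuous Poisson count, the treated group adds 1 when the respondent carries the sensitive trait, and the difference in group means recovers the sensitive proportion. The parameters and the difference-in-means estimator below are illustrative assumptions, not the authors' exact formulas:

```python
import math
import random

rng = random.Random(42)
LAM, P, N = 3.0, 0.30, 20000  # Poisson rate, true sensitive proportion, group size

def poisson(rng, lam):
    # Knuth's multiplication method (adequate for small lambda)
    threshold, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= threshold:
            return k
        k += 1

# Control respondents report only the innocuous Poisson count; treated
# respondents add 1 if they carry the sensitive trait.
control = [poisson(rng, LAM) for _ in range(N)]
treated = [poisson(rng, LAM) + (rng.random() < P) for _ in range(N)]
p_hat = sum(treated) / N - sum(control) / N  # moment estimate of P
print(round(p_hat, 2))
```

    Because each reported value is a single count, no individual answer reveals whether the respondent possesses the sensitive characteristic.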

  19. Estimation of treatment effect in a subpopulation: An empirical Bayes approach.

    PubMed

    Shen, Changyu; Li, Xiaochun; Jeong, Jaesik

    2016-01-01

    It is well recognized that the benefit of a medical intervention may not be distributed evenly in the target population due to patient heterogeneity, and conclusions based on conventional randomized clinical trials may not apply to every person. Given the increasing cost of randomized trials and difficulties in recruiting patients, there is a strong need to develop analytical approaches to estimate treatment effect in subpopulations. In particular, due to limited sample size for subpopulations and the need for multiple comparisons, standard analysis tends to yield wide confidence intervals of the treatment effect that are often noninformative. We propose an empirical Bayes approach to combine both information embedded in a target subpopulation and information from other subjects to construct confidence intervals of the treatment effect. The method is appealing in its simplicity and tangibility in characterizing the uncertainty about the true treatment effect. Simulation studies and a real data analysis are presented.

  20. Alternative methods to evaluate trial level surrogacy.

    PubMed

    Abrahantes, Josè Cortiñas; Shkedy, Ziv; Molenberghs, Geert

    2008-01-01

    The evaluation and validation of surrogate endpoints have been extensively studied in the last decade. Prentice [1] and Freedman, Graubard and Schatzkin [2] laid the foundations for the evaluation of surrogate endpoints in randomized clinical trials. Later, Buyse et al. [5] proposed a meta-analytic methodology, producing different methods for different settings, which was further studied by Alonso and Molenberghs [9] in their unifying approach based on information theory. In this article, we focus our attention on trial-level surrogacy and propose alternative procedures to evaluate this surrogacy measure which do not pre-specify the type of association. A promising correction based on cross-validation is investigated, as well as the construction of confidence intervals for this measure. To avoid assumptions about the type of relationship between the treatment effects and its distribution, a collection of alternative methods based on regression trees, bagging, random forests, and support vector machines, combined with bootstrap-based confidence intervals and, should one wish, a cross-validation-based correction, is proposed and applied. We apply the various strategies to data from three clinical studies: in ophthalmology, in advanced colorectal cancer, and in schizophrenia. The results obtained for the three case studies are compared; they indicate that random forest and bagging models produce larger estimated values for the surrogacy measure, which are in general more stable and have narrower confidence intervals than those from linear regression and support vector regression. For the advanced colorectal cancer studies, we even found that the trial-level surrogacy is considerably different from what has been reported. In general, the alternative methods are more computationally demanding, and the calculation of the confidence intervals in particular requires more computational time than the delta-method counterpart. 
Three conclusions follow. First, more flexible modeling techniques can be used, allowing for other types of association. Second, when no cross-validation-based correction is applied, overly optimistic trial-level surrogacy estimates will be found; hence cross-validation is highly recommended. Third, the use of the delta method to calculate confidence intervals is not recommended, since it makes assumptions that are valid only in very large samples and may produce range-violating limits. We therefore recommend bootstrap methods as an alternative. The information-theoretic approach produces results comparable to the bagging and random forest approaches when the cross-validation correction is applied. It is also important to observe that, even in cases where the linear model might be a good option, bagging methods perform well and their confidence intervals are narrower.

  1. Quantile regression models of animal habitat relationships

    USGS Publications Warehouse

    Cade, Brian S.

    2003-01-01

    Typically, not all factors that limit an organism are measured and included in statistical models used to investigate relationships with their environment. If important unmeasured variables interact multiplicatively with the measured variables, the statistical models often will have heterogeneous response distributions with unequal variances. Quantile regression is an approach for estimating the conditional quantiles of a response variable distribution in the linear model, providing a more complete view of possible causal relationships between variables in ecological processes. Chapter 1 introduces quantile regression and discusses the ordering characteristics, interval nature, sampling variation, weighting, and interpretation of estimates for homogeneous and heterogeneous regression models. Chapter 2 evaluates the performance of quantile rankscore tests used for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1). A permutation F test maintained better Type I errors than the Chi-square T test for models with smaller n, greater number of parameters p, and more extreme quantiles τ. Both versions of the test required weighting to maintain correct Type I errors when there was heterogeneity under the alternative model. An example application related trout densities to stream channel width:depth ratios. Chapter 3 evaluates a drop-in-dispersion, F-ratio-like permutation test for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1). Chapter 4 simulates from a large (N = 10,000) finite population representing grid areas on a landscape to demonstrate various forms of hidden bias that might occur when the effect of a measured habitat variable on some animal is confounded with the effect of another unmeasured variable (spatially and not spatially structured). 
Depending on whether interactions of the measured habitat and unmeasured variable were negative (interference interactions) or positive (facilitation interactions), either upper (τ > 0.5) or lower (τ < 0.5) quantile regression parameters were less biased than mean rate parameters. Sampling (n = 20 - 300) simulations demonstrated that confidence intervals constructed by inverting rankscore tests provided valid coverage of these biased parameters. Quantile regression was used to estimate effects of physical habitat resources on a bivalve mussel (Macomona liliana) in a New Zealand harbor by modeling the spatial trend surface as a cubic polynomial of location coordinates.
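    The quantile regression estimates discussed above minimize the Koenker-Bassett check (pinball) loss. As a minimal pure-Python illustration on hypothetical data, the τ-th sample quantile is the constant that minimizes this loss (a full regression fit solves the same objective over intercept and slopes, typically by linear programming):

```python
def check_loss(residual, tau):
    """Koenker-Bassett check (pinball) loss for one residual."""
    return residual * (tau - (residual < 0))

def quantile_by_loss(ys, tau):
    # The tau-th sample quantile minimizes total check loss;
    # for a constant-only model the minimizer lies among the observed values.
    return min(ys, key=lambda c: sum(check_loss(y - c, tau) for y in ys))

ys = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]  # hypothetical responses
q50 = quantile_by_loss(ys, 0.5)  # median-type estimate
q90 = quantile_by_loss(ys, 0.9)  # upper-quantile estimate
print(q50, q90)
```

    Weighting the loss terms, as required by the rankscore tests above under heteroscedasticity, amounts to multiplying each residual's contribution by its weight.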

  2. Using a Nonparametric Bootstrap to Obtain a Confidence Interval for Pearson's "r" with Cluster Randomized Data: A Case Study

    ERIC Educational Resources Information Center

    Wagstaff, David A.; Elek, Elvira; Kulis, Stephen; Marsiglia, Flavio

    2009-01-01

    A nonparametric bootstrap was used to obtain an interval estimate of Pearson's "r," and test the null hypothesis that there was no association between 5th grade students' positive substance use expectancies and their intentions to not use substances. The students were participating in a substance use prevention program in which the unit of…
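    With cluster randomized data, a nonparametric bootstrap for Pearson's r typically resamples whole clusters rather than individual students, so that the within-cluster dependence is preserved in each replicate. A minimal sketch on hypothetical clustered data:

```python
import math
import random

def pearson_r(xs, ys):
    """Plain Pearson product-moment correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical data: cluster id -> list of (expectancy, intention) pairs
clusters = {
    0: [(1, 2), (2, 3), (3, 3)],
    1: [(2, 1), (4, 4), (5, 6)],
    2: [(1, 1), (3, 4)],
    3: [(2, 2), (5, 5), (6, 5)],
}

rng = random.Random(0)
ids = list(clusters)
reps = []
for _ in range(1000):
    # Resample clusters with replacement, then pool their members.
    sample = [pt for cid in rng.choices(ids, k=len(ids)) for pt in clusters[cid]]
    xs, ys = zip(*sample)
    reps.append(pearson_r(xs, ys))
reps.sort()
lo, hi = reps[24], reps[974]  # approximate 2.5th and 97.5th percentiles
print(round(lo, 2), round(hi, 2))
```

    The percentile interval (lo, hi) then serves as the interval estimate of r; the null hypothesis of no association is rejected when the interval excludes 0.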

  3. Hypertensive disorders during pregnancy and risk of type 2 diabetes in later life: a systematic review and meta-analysis.

    PubMed

    Wang, Zengfang; Wang, Zengyan; Wang, Luang; Qiu, Mingyue; Wang, Yangang; Hou, Xu; Guo, Zhong; Wang, Bin

    2017-03-01

    Many studies have assessed the association between hypertensive disorders during pregnancy and risk of type 2 diabetes mellitus in later life, but contradictory findings were reported. A systematic review and meta-analysis was carried out to elucidate type 2 diabetes mellitus risk in women with hypertensive disorders during pregnancy. Pubmed, Embase, and Web of Science were searched for cohort or case-control studies on the association between hypertensive disorders during pregnancy and subsequent type 2 diabetes mellitus. A random-effects model was used to pool risk estimates. Bayesian meta-analysis was carried out to further estimate the type 2 diabetes mellitus risk associated with hypertensive disorders during pregnancy. Seventeen cohort or prospective matched case-control studies were finally included. Those 17 studies involved 2,984,634 women and 46,732 type 2 diabetes mellitus cases. Overall, hypertensive disorders during pregnancy were significantly correlated with type 2 diabetes mellitus risk (relative risk = 1.56, 95% confidence interval 1.21-2.01, P = 0.001). Preeclampsia was significantly and independently correlated with type 2 diabetes mellitus risk (relative risk = 2.25, 95% confidence interval 1.73-2.90, P < 0.001). In addition, gestational hypertension was also significantly and independently correlated with subsequent type 2 diabetes mellitus risk (relative risk = 2.06, 95% confidence interval 1.57-2.69, P < 0.001). The pooled estimates were not significantly altered in the subgroup analyses of studies on preeclampsia or gestational hypertension. Bayesian meta-analysis showed the relative risks of type 2 diabetes mellitus for individuals with hypertensive disorders during pregnancy, preeclampsia, and gestational hypertension were 1.59 (95% credibility interval: 1.11-2.32), 2.27 (95% credibility interval: 1.67-2.97), and 2.06 (95% credibility interval: 1.41-2.84), respectively. Publication bias was not evident in the meta-analysis. 
Preeclampsia and gestational hypertension are independently associated with substantially elevated risk of type 2 diabetes mellitus in later life.

  4. Landslide susceptibility near highways is increased by one order of magnitude in the Andes of southern Ecuador, Loja province

    NASA Astrophysics Data System (ADS)

    Brenning, A.; Schwinn, M.; Ruiz-Páez, A. P.; Muenchow, J.

    2014-03-01

    Mountain roads in developing countries are known to increase landslide occurrence due to often inadequate drainage systems and mechanical destabilization of hillslopes by undercutting and overloading. This study empirically investigates landslide initiation frequency along two paved interurban highways in the tropical Andes of southern Ecuador across different climatic regimes. Generalized additive models (GAM) and generalized linear models (GLM) were used to analyze the relationship between mapped landslide initiation points and distance to highway while accounting for topographic, climatic and geological predictors as possible confounders. A spatial block bootstrap was used to obtain non-parametric confidence intervals for the odds ratio of landslide occurrence near the highways (25 m distance) compared to a 200 m distance. The estimated odds ratio was 18-21 with lower 95% confidence bounds > 13 in all analyses. Spatial bootstrap estimation using the GAM supports the higher odds ratio estimate of 21.2 (95% confidence interval: 15.5-25.3). The highway-related effects were observed to fade at about 150 m distance. Road effects appear to be enhanced in geological units characterized by Holocene gravels and Laramide andesite/basalt. Overall, landslide susceptibility was found to be more than one order of magnitude higher in close proximity to paved interurban highways in the Andes of southern Ecuador.

  5. Landslide susceptibility near highways is increased by 1 order of magnitude in the Andes of southern Ecuador, Loja province

    NASA Astrophysics Data System (ADS)

    Brenning, A.; Schwinn, M.; Ruiz-Páez, A. P.; Muenchow, J.

    2015-01-01

    Mountain roads in developing countries are known to increase landslide occurrence due to often inadequate drainage systems and mechanical destabilization of hillslopes by undercutting and overloading. This study empirically investigates landslide initiation frequency along two paved interurban highways in the tropical Andes of southern Ecuador across different climatic regimes. Generalized additive models (GAM) and generalized linear models (GLM) were used to analyze the relationship between mapped landslide initiation points and distance to highway while accounting for topographic, climatic, and geological predictors as possible confounders. A spatial block bootstrap was used to obtain nonparametric confidence intervals for the odds ratio of landslide occurrence near the highways (25 m distance) compared to a 200 m distance. The estimated odds ratio was 18-21, with lower 95% confidence bounds >13 in all analyses. Spatial bootstrap estimation using the GAM supports the higher odds ratio estimate of 21.2 (95% confidence interval: 15.5-25.3). The highway-related effects were observed to fade at about 150 m distance. Road effects appear to be enhanced in geological units characterized by Holocene gravels and Laramide andesite/basalt. Overall, landslide susceptibility was found to be more than 1 order of magnitude higher in close proximity to paved interurban highways in the Andes of southern Ecuador.

  6. Sampling Theory and Confidence Intervals for Effect Sizes: Using ESCI To Illustrate "Bouncing"; Confidence Intervals.

    ERIC Educational Resources Information Center

    Du, Yunfei

    This paper discusses the impact of sampling error on the construction of confidence intervals around effect sizes. Sampling error affects the location and precision of confidence intervals. Meta-analytic resampling demonstrates that confidence intervals can haphazardly bounce around the true population parameter. Special software with graphical…

  7. An MLE method for finding LKB NTCP model parameters using Monte Carlo uncertainty estimates

    NASA Astrophysics Data System (ADS)

    Carolan, Martin; Oborn, Brad; Foo, Kerwyn; Haworth, Annette; Gulliford, Sarah; Ebert, Martin

    2014-03-01

    The aims of this work were to establish a program to fit NTCP models to clinical data with multiple toxicity endpoints, to test the method using a realistic test dataset, to compare three methods for estimating confidence intervals for the fitted parameters and to characterise the speed and performance of the program.

  8. Model-assisted forest yield estimation with light detection and ranging

    Treesearch

    Jacob L. Strunk; Stephen E. Reutebuch; Hans-Erik Andersen; Peter J. Gould; Robert J. McGaughey

    2012-01-01

    Previous studies have demonstrated that light detection and ranging (LiDAR)-derived variables can be used to model forest yield variables, such as biomass, volume, and number of stems. However, the next step is underrepresented in the literature: estimation of forest yield with appropriate confidence intervals. It is of great importance that the procedures required for...

  9. Methods to estimate the between‐study variance and its uncertainty in meta‐analysis†

    PubMed Central

    Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian PT; Langan, Dean; Salanti, Georgia

    2015-01-01

    Meta‐analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between‐study variability, which is typically modelled using a between‐study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between‐study variance, has long been challenged. Our aim is to identify known methods for estimation of the between‐study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between‐study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel and, for continuous data, the restricted maximum likelihood estimator are better alternatives for estimating the between‐study variance. Based on the scenarios and results presented in the published studies, we recommend the Q‐profile method and the alternative approach based on a ‘generalised Cochran between‐study variance statistic’ to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence‐based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd. PMID:26332144
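    For reference, the DerSimonian and Laird estimator discussed above is a simple moment estimator: with weights w_i = 1/v_i and Cochran's Q, it sets tau^2 = max(0, (Q - (k - 1)) / c), where c = sum(w_i) - sum(w_i^2)/sum(w_i). A sketch on hypothetical effect sizes:

```python
def dersimonian_laird_tau2(effects, variances):
    """DerSimonian-Laird moment estimator of the between-study variance tau^2."""
    w = [1.0 / v for v in variances]             # inverse-variance weights
    sw = sum(w)
    ybar = sum(wi * yi for wi, yi in zip(w, effects)) / sw  # fixed-effect mean
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    k = len(effects)
    c = sw - sum(wi ** 2 for wi in w) / sw
    return max(0.0, (q - (k - 1)) / c)           # truncate at zero

# Hypothetical study-level effects and within-study variances
effects = [0.10, 0.80, -0.40, 0.90, 0.30]
variances = [0.04, 0.09, 0.05, 0.12, 0.07]
tau2 = dersimonian_laird_tau2(effects, variances)
print(round(tau2, 4))
```

    The estimators recommended in the abstract (Paule-Mandel, REML) solve related but different estimating equations; the code above only fixes ideas about what is being estimated.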

  10. Estimated preejection period (PEP) based on the detection of the R-wave and dZ/dt-min peaks in ECG and ICG

    NASA Astrophysics Data System (ADS)

    van Lien, René; Schutte, Nienke M.; Meijer, Jan H.; de Geus, Eco J. C.

    2013-04-01

    The validity of estimating the PEP from a fixed value for the Q-wave onset to the R-wave peak (QR) interval and from the R-wave peak to the dZ/dt-min peak (ISTI) interval is evaluated. Ninety-one subjects participated in a laboratory experiment in which a variety of physical and mental stressors were presented and 31 further subjects participated in a sequence of structured ambulatory activities in which large variation in posture and physical activity was induced. PEP, QR interval, and ISTI were scored. Across the diverse laboratory and ambulatory conditions the QR interval could be approximated by a fixed interval of 40 ms but 95% confidence intervals were large (25 to 54 ms). Multilevel analysis showed that 79% to 81% of the within and between-subject variation in the RB interval could be predicted by the ISTI. However, the optimal intercept and slope values varied significantly across subjects and study setting. Bland-Altman plots revealed a large discrepancy between the estimated PEP and the actual PEP based on the Q-wave onset and B-point. It is concluded that the estimated PEP can be a useful tool but cannot replace the actual PEP to index cardiac sympathetic control.

  11. Impact of human papillomavirus (HPV) 16 and 18 vaccination on prevalent infections and rates of cervical lesions after excisional treatment

    PubMed Central

    Hildesheim, Allan; Gonzalez, Paula; Kreimer, Aimee R.; Wacholder, Sholom; Schussler, John; Rodriguez, Ana C.; Porras, Carolina; Schiffman, Mark; Sidawy, Mary; Schiller, John T.; Lowy, Douglas R.; Herrero, Rolando

    2016-01-01

    BACKGROUND Human papillomavirus (HPV) vaccines prevent HPV infection and cervical precancers. The impact of vaccinating women with a current infection or after treatment for an HPV-associated lesion is not fully understood. OBJECTIVES To determine whether HPV-16/18 vaccination influences the outcome of infections present at vaccination and the rate of infection and disease after treatment of lesions. STUDY DESIGN We included 1711 women (18–25 years) with carcinogenic human papillomavirus infection and 311 women of similar age who underwent treatment for cervical precancer and who participated in a community-based trial of the AS04-adjuvanted HPV-16/18 virus-like particle vaccine. Participants were randomized (human papillomavirus or hepatitis A vaccine) and offered 3 vaccinations over 6 months. Follow-up included annual visits (more frequently if clinically indicated), referral to colposcopy of high-grade and persistent low-grade lesions, treatment by loop electrosurgical excisional procedure when clinically indicated, and cytologic and virologic follow-up after treatment. Among women with human papillomavirus infection at the time of vaccination, we considered type-specific viral clearance, and development of cytologic (squamous intraepithelial lesions) and histologic (cervical intraepithelial neoplasia) lesions. Among treated women, we considered single-time and persistent human papillomavirus infection, squamous intraepithelial lesions, and cervical intraepithelial neoplasia 2+. Outcomes associated with infections absent before treatment also were evaluated. Infection-level analyses were performed and vaccine efficacy estimated. RESULTS Median follow-up was 56.7 months (women with human papillomavirus infection) and 27.3 months (treated women). 
There was no evidence of vaccine efficacy to increase clearance of human papillomavirus infections or decrease incidence of cytologic/histologic abnormalities associated with human papillomavirus types present at enrollment. Vaccine efficacy for human papillomavirus 16/18 clearance and against human papillomavirus 16/18 progression from infection to cervical intraepithelial neoplasia 2+ were −5.4% (95% confidence interval −19,10) and 0.3% (95% confidence interval −69,41), respectively. Among treated women, 34.1% had oncogenic infection and 1.6% had cervical intraepithelial neoplasia 2+ detected after treatment, respectively, and of these 69.8% and 20.0% were the result of new infections. We observed no significant effect of vaccination on rates of infection/lesions after treatment. Vaccine efficacy estimates for human papillomavirus 16/18 associated persistent infection and cervical intraepithelial neoplasia 2+ after treatment were 34.7% (95% confidence interval −131, 82) and −211% (95% confidence interval −2901, 68), respectively. We observed evidence for a partial and nonsignificant protective effect of vaccination against new infections absent before treatment. For incident human papillomavirus 16/18, human papillomavirus 31/33/45, and oncogenic human papillomavirus infections post-treatment, vaccine efficacy estimates were 57.9% (95% confidence interval −44, 88), 72.9% (95% confidence interval 29, 90), and 36.7% (95% confidence interval 1.5, 59), respectively. CONCLUSION We find no evidence for a vaccine effect on the fate of detectable human papillomavirus infections. We show that vaccination does not protect against infections/lesions after treatment. Evaluation of vaccine protection against new infections and resultant lesions warrants further consideration in future studies. PMID:26892991

  12. Long-Term Effects of Ambient PM2.5 on Hypertension and Blood Pressure and Attributable Risk Among Older Chinese Adults.

    PubMed

    Lin, Hualiang; Guo, Yanfei; Zheng, Yang; Di, Qian; Liu, Tao; Xiao, Jianpeng; Li, Xing; Zeng, Weilin; Cummings-Vaughn, Lenise A; Howard, Steven W; Vaughn, Michael G; Qian, Zhengmin Min; Ma, Wenjun; Wu, Fan

    2017-05-01

Long-term exposure to ambient fine particulate pollution (PM2.5) has been associated with cardiovascular diseases. Hypertension, a major risk factor for cardiovascular diseases, has also been hypothesized to be linked to PM2.5. However, epidemiological evidence has been mixed. We examined the long-term association between ambient PM2.5 and hypertension and blood pressure. We interviewed 12 665 participants aged 50 years and older and measured their blood pressures. Annual average PM2.5 concentrations were estimated for each community using satellite data. We applied 2-level logistic regression models to examine the associations and estimated the hypertension burden attributable to ambient PM2.5. For each 10 μg/m3 increase in ambient PM2.5, the adjusted odds ratio of hypertension was 1.14 (95% confidence interval, 1.07-1.22). Stratified analyses found that overweight and obesity could enhance the association, and consumption of fruit was associated with lower risk. We further estimated that 11.75% (95% confidence interval, 5.82%-18.53%) of the hypertension cases (corresponding to 914 cases; 95% confidence interval, 453-1442) could be attributed to ambient PM2.5 in the study population. These findings suggest that long-term exposure to ambient PM2.5 might be an important risk factor for hypertension and is responsible for a significant hypertension burden among adults in China. A higher consumption of fruit may mitigate, whereas overweight and obesity could enhance, this effect. © 2017 American Heart Association, Inc.
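Attributable-fraction estimates like the 11.75% quoted above follow the general logic of Levin's population attributable fraction. The sketch below is a generic illustration only: the exposure prevalence of 0.60 is hypothetical, and the study itself used a more elaborate exposure-response approach rather than a single binary exposure.

```python
def levin_paf(p_exposed, rr):
    """Levin's population attributable fraction:
    PAF = p(RR - 1) / (p(RR - 1) + 1),
    where p is the exposure prevalence and RR the relative risk."""
    excess = p_exposed * (rr - 1.0)
    return excess / (excess + 1.0)

# Hypothetical illustration: 60% of a population exposed, RR = 1.14
paf = levin_paf(0.60, 1.14)
```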

  13. Generation and Validation of Spatial Distribution of Hourly Wind Speed Time-Series using Machine Learning

    NASA Astrophysics Data System (ADS)

    Veronesi, F.; Grassi, S.

    2016-09-01

Wind resource assessment is a key aspect of wind farm planning since it allows estimation of the long-term electricity production. Moreover, wind speed time-series at high resolution are helpful to estimate the temporal changes of the electricity generation and indispensable to design stand-alone systems, which are affected by the mismatch of supply and demand. In this work, we present a new generalized statistical methodology to generate the spatial distribution of wind speed time-series, using Switzerland as a case study. This research is based upon a machine learning model and demonstrates that statistical wind resource assessment can successfully be used for estimating wind speed time-series. In fact, this method is able to obtain reliable wind speed estimates and propagate all the sources of uncertainty (from the measurements to the mapping process) in an efficient way, i.e. minimizing computational time and load. This allows not only accurate estimation but also the creation of precise confidence intervals to map the stochasticity of the wind resource for a particular site. The validation shows that machine learning can minimize the bias of the hourly wind speed estimates. Moreover, for each mapped location this method delivers not only the mean wind speed but also its confidence interval, which are crucial data for planners.
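One generic way to attach a confidence interval to a site-level mean wind speed is a percentile bootstrap over the hourly estimates. This is a stand-in sketch, not the paper's method (which propagates uncertainty through the full mapping process); all names and parameters here are illustrative.

```python
import numpy as np

def bootstrap_mean_ci(values, level=0.95, n_boot=2000, seed=0):
    """Percentile-bootstrap confidence interval for the mean of a
    sample of hourly wind speed estimates (illustrative stand-in)."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, float)
    # Resample with replacement and collect the resampled means
    means = np.array([rng.choice(values, size=len(values)).mean()
                      for _ in range(n_boot)])
    alpha = 1.0 - level
    return np.quantile(means, [alpha / 2, 1 - alpha / 2])
```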

  14. Significance testing - are we ready yet to abandon its use?

    PubMed

    The, Bertram

    2011-11-01

Understanding of the damaging effects of significance testing has steadily grown. Reporting p values without dichotomizing the result as significant or not is not the solution. Confidence intervals are better, but are troubled by a non-intuitive interpretation, and are often misused merely to check whether the null value lies within the interval. Bayesian statistics provide an alternative which solves most of these problems. Although criticized for relying on subjective models, the interpretation of a Bayesian posterior probability is more intuitive than the interpretation of a p value, and seems to be closest to intuitive patterns of human decision making. Another alternative could be using confidence interval functions (or p value functions) to display a continuum of intervals at different levels of confidence around a point estimate. Thus, better alternatives to significance testing exist. The reluctance to abandon this practice might reflect both a preference for clinging to old habits and unfamiliarity with better methods. Authors might question whether using less commonly exercised, though superior, techniques will be well received by the editors, reviewers and the readership. A joint effort will be needed to abandon significance testing in clinical research in the future.
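The p value function mentioned above can be sketched numerically: for each candidate parameter value, compute the two-sided p value against that value; plotting p against the candidate traces out a continuum of confidence intervals. This is a generic normal-approximation illustration, not code from the article.

```python
import math

def p_value_function(estimate, se, theta_grid):
    """Two-sided p value for each candidate parameter value theta,
    using a normal approximation. The curve equals 1 at the point
    estimate and 0.05 at the limits of the 95% confidence interval."""
    def norm_cdf(z):
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return [2.0 * (1.0 - norm_cdf(abs(estimate - t) / se))
            for t in theta_grid]

# Evaluate at the point estimate and at the upper 95% limit
curve = p_value_function(1.2, 0.4, [1.2, 1.2 + 1.959964 * 0.4])
```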

  15. Forecasting overhaul or replacement intervals based on estimated system failure intensity

    NASA Astrophysics Data System (ADS)

    Gannon, James M.

    1994-12-01

System reliability can be expressed in terms of the pattern of failure events over time. Assuming a nonhomogeneous Poisson process and a Weibull intensity function for complex repairable system failures, the degree of system deterioration can be approximated. Maximum likelihood estimators (MLEs) for the system Rate of Occurrence of Failure (ROCOF) function are presented. Evaluating the integral of the ROCOF over annual usage intervals yields the expected number of annual system failures. By associating a cost of failure with the expected number of failures, budget and program policy decisions can be made based on expected future maintenance costs. Monte Carlo simulation is used to estimate the range and the distribution of the net present value and internal rate of return of alternative cash flows, based on the distributions of the cost inputs and the confidence intervals of the MLEs.
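For the power-law (Crow-AMSAA) form of the Weibull intensity, u(t) = λβt^(β−1), the MLEs and the ROCOF integral have simple closed forms, which makes the "expected annual failures" step easy to illustrate. This is a minimal sketch under the failure-truncated case, not the paper's implementation; names are illustrative.

```python
import math

def power_law_nhpp_mle(times, T):
    """MLEs for a power-law NHPP intensity u(t) = lam*beta*t**(beta-1),
    given failure times `times` observed up to time T:
    beta_hat = n / sum(ln(T/t_i)),  lam_hat = n / T**beta_hat."""
    n = len(times)
    beta = n / sum(math.log(T / t) for t in times)
    lam = n / T ** beta
    return lam, beta

def expected_failures(lam, beta, a, b):
    """Integral of the ROCOF over [a, b]: expected failure count
    lam * (b**beta - a**beta)."""
    return lam * (b ** beta - a ** beta)
```

By construction, integrating the fitted ROCOF over the whole observation window recovers the observed failure count.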

  16. Education and the risk for Alzheimer's disease: sex makes a difference. EURODEM pooled analyses. EURODEM Incidence Research Group.

    PubMed

    Letenneur, L; Launer, L J; Andersen, K; Dewey, M E; Ott, A; Copeland, J R; Dartigues, J F; Kragh-Sorensen, P; Baldereschi, M; Brayne, C; Lobo, A; Martinez-Lage, J M; Stijnen, T; Hofman, A

    2000-06-01

    The hypothesis that a low educational level increases the risk for Alzheimer's disease remains controversial. The authors studied the association of years of schooling with the risk for incident dementia and Alzheimer's disease by using pooled data from four European population-based follow-up studies. Dementia cases were identified in a two-stage procedure that included a detailed diagnostic assessment of screen-positive subjects. Dementia and Alzheimer's disease were diagnosed by using international research criteria. Educational level was categorized by years of schooling as low (< or =7), middle (8-11), or high (> or =12). Relative risks (95% confidence intervals) were estimated by using Poisson regression, adjusting for age, sex, study center, smoking status, and self-reported myocardial infarction and stroke. There were 493 (328) incident cases of dementia (Alzheimer's disease) and 28,061 (27,839) person-years of follow-up. Compared with women with a high level of education, those with low and middle levels of education had 4.3 (95% confidence interval: 1.5, 11.9) and 2.6 (95% confidence interval: 1.0, 7.1) times increased risks, respectively, for Alzheimer's disease. The risk estimates for men were close to 1.0. Finding an association of education with Alzheimer's disease for women only raises the possibility that unmeasured confounding explains the previously reported increased risk for Alzheimer's disease for persons with low levels of education.

  17. Longitudinal Associations of Smoke-Free Policies and Incident Cardiovascular Disease: CARDIA Study.

    PubMed

    Mayne, Stephanie L; Widome, Rachel; Carroll, Allison J; Schreiner, Pamela J; Gordon-Larsen, Penny; Jacobs, David R; Kershaw, Kiarri N

    2018-05-07

Background: Smoke-free legislation has been associated with lower rates of cardiovascular disease hospital admissions in ecological studies. However, prior studies lacked detailed information on individual-level factors (eg, sociodemographic and clinical characteristics) that could potentially confound associations. Our objective was to estimate associations of smoke-free policies with incident cardiovascular disease in a longitudinal cohort after controlling for sociodemographics, cardiovascular disease risk factors, and policy covariates. Methods: Longitudinal data from 3783 black and white adults in the CARDIA study (Coronary Artery Risk Development in Young Adults; 1995-2015) were linked to state, county, and local 100% smoke-free policies in bars, restaurants, and nonhospitality workplaces by Census tract. Extended Cox regression estimated hazard ratios (HRs) of incident cardiovascular disease associated with time-dependent smoke-free policy exposures. Models were adjusted for sociodemographic characteristics, cardiovascular disease risk factors, state cigarette tax, participant-reported presence of a smoking ban at their workplace, field center, and metropolitan statistical area poverty. Results: During a median follow-up of 20 years (68 332 total person-years), 172 participants had an incident cardiovascular disease event (2.5 per 1000 person-years). Over the follow-up period, 80% of participants lived in areas with smoke-free policies in restaurants, 67% in bars, and 65% in nonhospitality workplaces. 
In fully adjusted models, participants living in an area with a restaurant, bar, or workplace smoke-free policy had a lower risk of incident cardiovascular disease compared with those in areas without smoke-free policies (HR, 0.75, 95% confidence interval, 0.49-1.15; HR, 0.76, 95% confidence interval, 0.47-1.24; HR, 0.54, 95% confidence interval, 0.34-0.86, respectively; HR, 0.58, 95% confidence interval, 0.33-1.00 for living in an area with all 3 types of policies compared with none). The estimated preventive fraction was 25% for restaurant policies, 24% for bar policies, and 46% for workplace policies. Conclusions: Consistent with prior ecological studies, these individual-based data add to the evidence that 100% smoke-free policies are associated with lower risk of cardiovascular disease among middle-aged adults.

  18. Intrauterine fetal death and risk of shoulder dystocia at delivery.

    PubMed

    Larsen, Sandra; Dobbin, Joanna; McCallion, Oliver; Eskild, Anne

    2016-12-01

    Vaginal delivery is recommended after intrauterine fetal death. However, little is known about the risk of shoulder dystocia in these deliveries. We studied whether intrauterine fetal death increases the risk of shoulder dystocia at delivery. In this population-based register study using the Medical Birth Registry of Norway, we included all singleton pregnancies with vaginal delivery of offspring in cephalic presentation in Norway during the period 1967-2012 (n = 2 266 118). Risk of shoulder dystocia was estimated as absolute risk (%) and odds ratio with 95% confidence interval. Adjustment was made for offspring birthweight (in grams). We performed sub-analyses within categories of birthweight (<4000 and ≥4000 g) and in pregnancies with maternal diabetes. Shoulder dystocia occurred in 1.1% of pregnancies with intrauterine fetal death and in 0.8% of pregnancies without intrauterine fetal death (p < 0.0001) (crude odds ratio 1.5, 95% confidence interval 1.2-4.9). After adjustment for birthweight, the odds ratio was 5.9 (95% confidence interval 4.7-7.4). In pregnancies with birthweight ≥4000 g, shoulder dystocia occurred in 14.6% of pregnancies with intrauterine fetal death and in 2.8% of pregnancies without intrauterine fetal death (p < 0.001) (crude odds ratio 5.9, 95% confidence interval 4.5-7.9). In pregnancies with birthweight ≥4000 g and concurrent maternal diabetes, shoulder dystocia occurred in 57.1% of pregnancies with intrauterine fetal death and 9.6% of pregnancies without intrauterine fetal death (p < 0.001) (crude odds ratio 12.6, 95% confidence interval 5.9-26.9). Intrauterine fetal death increased the risk of shoulder dystocia at delivery, and the absolute risk of shoulder dystocia was particularly high if offspring birthweight was high and the mother had diabetes. © 2016 Nordic Federation of Societies of Obstetrics and Gynecology.
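The crude odds ratios and 95% confidence intervals quoted above are of the form computed from a 2x2 table with a log-scale (Woolf) standard error. The sketch below uses hypothetical counts, not the registry data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.959964):
    """Odds ratio from a 2x2 table with a Woolf (log-scale) 95% CI.
    a, b = exposed cases / non-cases; c, d = unexposed cases / non-cases.
    SE of ln(OR) = sqrt(1/a + 1/b + 1/c + 1/d)."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical table: 10/90 events among exposed, 5/95 among unexposed
or_, lo, hi = odds_ratio_ci(10, 90, 5, 95)
```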

  19. High definition versus standard definition white light endoscopy for detecting dysplasia in patients with Barrett's esophagus.

    PubMed

    Sami, S S; Subramanian, V; Butt, W M; Bejkar, G; Coleman, J; Mannath, J; Ragunath, K

    2015-01-01

High-definition endoscopy systems provide superior image resolution. The aim of this study was to assess the utility of a high-definition compared with a standard definition endoscopy system for detecting dysplastic lesions in patients with Barrett's esophagus. A retrospective cohort study of patients with non-dysplastic Barrett's esophagus undergoing routine surveillance was performed. Data were retrieved from the central hospital electronic database. Procedures performed for non-surveillance indications, Barrett's esophagus Prague C0M1 classification with no specialized intestinal metaplasia on histology, patients diagnosed with any dysplasia or cancer on index endoscopy, and procedures using advanced imaging techniques were excluded. Logistic regression models were constructed to estimate adjusted odds ratios and 95% confidence intervals comparing outcomes with the standard definition and high-definition systems. The high-definition system was superior to the standard definition system in targeted detection of all dysplastic lesions (odds ratio 3.27, 95% confidence interval 1.27-8.40) as well as overall dysplasia detected on both random and target biopsies (odds ratio 2.36, 95% confidence interval 1.50-3.72). More non-dysplastic lesions were detected with the high-definition system (odds ratio 1.16, 95% confidence interval 1.01-1.33). There was no difference between high-definition and standard definition endoscopy in the overall (random and target) high-grade dysplasia or cancers detected (odds ratio 0.93, 95% confidence interval 0.83-1.04). Trainee endoscopists, number of biopsies taken, and male sex were all significantly associated with a higher yield for dysplastic lesions. The use of the high-definition endoscopy system is associated with better targeted detection of any dysplasia during routine Barrett's esophagus surveillance. However, high-definition endoscopy cannot replace random biopsies at the present time. © 2014 International Society for Diseases of the Esophagus.

  20. Educational achievement among long-term survivors of congenital heart defects: a Danish population-based follow-up study.

    PubMed

    Olsen, Morten; Hjortdal, Vibeke E; Mortensen, Laust H; Christensen, Thomas D; Sørensen, Henrik T; Pedersen, Lars

    2011-04-01

Congenital heart defect patients may experience neurodevelopmental impairment. We investigated their educational attainments from basic schooling to higher education. Using administrative databases, we identified all Danish patients with a cardiac defect diagnosis born from 1 January, 1977 to 1 January, 1991 and alive at age 13 years. As a comparison cohort, we randomly sampled 10 persons per patient. We obtained information on educational attainment from Denmark's Database for Labour Market Research. The study population was followed until achievement of educational levels, death, emigration, or 1 January, 2006. We estimated the hazard ratio of attaining given educational levels, conditional on completing preceding levels, using discrete-time Cox regression and adjusting for socio-economic factors. Analyses were repeated for a sub-cohort of patients and controls born at term and without extracardiac defects or chromosomal anomalies. We identified 2986 patients. Their probability of completing compulsory basic schooling was approximately 10% lower than that of control individuals (adjusted hazard ratio = 0.79; 95% confidence interval: 0.75-0.82). Their subsequent probability of completing secondary school was lower than that of the controls, both for all patients (adjusted hazard ratio = 0.74; 95% confidence interval: 0.69-0.80) and for the sub-cohort (adjusted hazard ratio = 0.80; 95% confidence interval: 0.73-0.86). The probability of attaining a higher degree, conditional on completion of youth education, was affected both for all patients (adjusted hazard ratio = 0.88; 95% confidence interval: 0.76-1.01) and for the sub-cohort (adjusted hazard ratio = 0.92; 95% confidence interval: 0.79-1.07). The probability of educational attainment was reduced among long-term congenital heart defect survivors.

  1. Confidence intervals in Flow Forecasting by using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Panagoulia, Dionysia; Tsekouras, George

    2014-05-01

One of the major inadequacies in the implementation of Artificial Neural Networks (ANNs) for flow forecasting is the development of confidence intervals, because the relevant estimation cannot be implemented directly, in contrast to classical forecasting methods. The variation in the ANN output is a measure of uncertainty in the model predictions based on the training data set. Different methods for uncertainty analysis, such as bootstrap, Bayesian, and Monte Carlo, have already been proposed for hydrologic and geophysical models, while methods for confidence intervals, such as error output, re-sampling, and multi-linear regression adapted to ANNs, have been used for power load forecasting [1-2]. The aim of this paper is to present the re-sampling method for ANN prediction models and to develop it for next-day flow forecasting. The re-sampling method is based on the ascending sorting of the errors between real and predicted values for all input vectors. The cumulative sample distribution function of the prediction errors is calculated and the confidence intervals are estimated by keeping the intermediate values, rejecting the extreme values according to the desired confidence levels, and holding the intervals symmetrical in probability. To apply the confidence interval method, input vectors are taken from the Mesochora catchment in western-central Greece. The ANN's training algorithm is the stochastic back-propagation training process with decreasing functions of learning rate and momentum term, for which an optimization process is conducted over the crucial parameter values, such as the number of neurons, the kind of activation functions, the initial values and time parameters of the learning rate and momentum term, etc. 
Input variables are historical data of previous days, such as flows, nonlinearly weather related temperatures and nonlinearly weather related rainfalls, based on correlation analysis between the flow under prediction and each implicit input variable of different ANN structures [3]. The performance of each ANN structure is evaluated by voting analysis based on eleven criteria, which are the root mean square error (RMSE), the correlation index (R), the mean absolute percentage error (MAPE), the mean percentage error (MPE), the mean error (ME), the percentage volume in errors (VE), the percentage error in peak (MF), the normalized mean bias error (NMBE), the normalized root mean square error (NRMSE), the Nash-Sutcliffe model efficiency coefficient (E) and the modified Nash-Sutcliffe model efficiency coefficient (E1). The next-day flow for the test set is calculated using the best ANN structure's model. Consequently, the confidence intervals of various confidence levels for the training, evaluation and test sets are compared in order to explore the generalisation dynamics of confidence intervals from the training and evaluation sets. [1] H.S. Hippert, C.E. Pedreira, R.C. Souza, "Neural networks for short-term load forecasting: A review and evaluation," IEEE Trans. on Power Systems, vol. 16, no. 1, 2001, pp. 44-55. [2] G. J. Tsekouras, N.E. Mastorakis, F.D. Kanellos, V.T. Kontargyri, C.D. Tsirekis, I.S. Karanasiou, Ch.N. Elias, A.D. Salis, P.A. Kontaxis, A.A. Gialketsi: "Short term load forecasting in Greek interconnected power system using ANN: Confidence Interval using a novel re-sampling technique with corrective Factor", WSEAS International Conference on Circuits, Systems, Electronics, Control & Signal Processing, (CSECS '10), Vouliagmeni, Athens, Greece, December 29-31, 2010. [3] D. Panagoulia, I. Trichakis, G. J. 
Tsekouras: "Flow Forecasting via Artificial Neural Networks - A Study for Input Variables conditioned on atmospheric circulation", European Geosciences Union, General Assembly 2012 (NH1.1 / AS1.16 - Extreme meteorological and hydrological events induced by severe weather and climate change), Vienna, Austria, 22-27 April 2012.
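The re-sampling procedure described in this record (ascending sort of prediction errors, empirical cumulative distribution, cut-offs symmetric in probability) can be sketched as follows; the function names are illustrative, not the authors' code.

```python
import numpy as np

def empirical_error_interval(errors, level=0.95):
    """Symmetric-in-probability interval from the empirical distribution
    of prediction errors: keep the central `level` mass, rejecting equal
    probability in each tail."""
    alpha = 1.0 - level
    lo, hi = np.quantile(np.asarray(errors, float),
                         [alpha / 2, 1 - alpha / 2])
    return lo, hi

def prediction_interval(point_forecast, errors, level=0.95):
    """Attach the empirical error interval to a point forecast."""
    lo, hi = empirical_error_interval(errors, level)
    return point_forecast + lo, point_forecast + hi
```

In practice the error sample would come from the training or evaluation set, and the resulting interval is then applied to each next-day forecast.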

  2. Visualization and curve-parameter estimation strategies for efficient exploration of phenotype microarray kinetics.

    PubMed

    Vaas, Lea A I; Sikorski, Johannes; Michael, Victoria; Göker, Markus; Klenk, Hans-Peter

    2012-01-01

    The Phenotype MicroArray (OmniLog® PM) system is able to simultaneously capture a large number of phenotypes by recording an organism's respiration over time on distinct substrates. This technique targets the object of natural selection itself, the phenotype, whereas previously addressed '-omics' techniques merely study components that finally contribute to it. The recording of respiration over time, however, adds a longitudinal dimension to the data. To optimally exploit this information, it must be extracted from the shapes of the recorded curves and displayed in analogy to conventional growth curves. The free software environment R was explored for both visualizing and fitting of PM respiration curves. Approaches using either a model fit (and commonly applied growth models) or a smoothing spline were evaluated. Their reliability in inferring curve parameters and confidence intervals was compared to the native OmniLog® PM analysis software. We consider the post-processing of the estimated parameters, the optimal classification of curve shapes and the detection of significant differences between them, as well as practically relevant questions such as detecting the impact of cultivation times and the minimum required number of experimental repeats. We provide a comprehensive framework for data visualization and parameter estimation according to user choices. A flexible graphical representation strategy for displaying the results is proposed, including 95% confidence intervals for the estimated parameters. The spline approach is less prone to irregular curve shapes than fitting any of the considered models or using the native PM software for calculating both point estimates and confidence intervals. These can serve as a starting point for the automated post-processing of PM data, providing much more information than the strict dichotomization into positive and negative reactions. 
Our results form the basis for a freely available R package for the analysis of PM data.
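Growth-curve-style parameters of the kind estimated from the fitted curves above (maximum height and maximum slope) can be illustrated with a crude finite-difference sketch. This stands in for, and is much simpler than, the smoothing-spline fitting the record describes; the function name and parameter labels are assumptions.

```python
import numpy as np

def curve_parameters(t, y):
    """Estimate the maximum height (A) and maximum slope (mu) of a
    kinetic curve from finite differences; an illustrative stand-in
    for spline-based parameter extraction."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    slopes = np.diff(y) / np.diff(t)  # secant slopes between samples
    return {"A": float(y.max()), "mu": float(slopes.max())}
```

On a noiseless logistic curve with unit rate, the true maximum slope is 0.25, which the finite-difference estimate recovers closely.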

  3. Visualization and Curve-Parameter Estimation Strategies for Efficient Exploration of Phenotype Microarray Kinetics

    PubMed Central

    Vaas, Lea A. I.; Sikorski, Johannes; Michael, Victoria; Göker, Markus; Klenk, Hans-Peter

    2012-01-01

    Background The Phenotype MicroArray (OmniLog® PM) system is able to simultaneously capture a large number of phenotypes by recording an organism's respiration over time on distinct substrates. This technique targets the object of natural selection itself, the phenotype, whereas previously addressed ‘-omics’ techniques merely study components that finally contribute to it. The recording of respiration over time, however, adds a longitudinal dimension to the data. To optimally exploit this information, it must be extracted from the shapes of the recorded curves and displayed in analogy to conventional growth curves. Methodology The free software environment R was explored for both visualizing and fitting of PM respiration curves. Approaches using either a model fit (and commonly applied growth models) or a smoothing spline were evaluated. Their reliability in inferring curve parameters and confidence intervals was compared to the native OmniLog® PM analysis software. We consider the post-processing of the estimated parameters, the optimal classification of curve shapes and the detection of significant differences between them, as well as practically relevant questions such as detecting the impact of cultivation times and the minimum required number of experimental repeats. Conclusions We provide a comprehensive framework for data visualization and parameter estimation according to user choices. A flexible graphical representation strategy for displaying the results is proposed, including 95% confidence intervals for the estimated parameters. The spline approach is less prone to irregular curve shapes than fitting any of the considered models or using the native PM software for calculating both point estimates and confidence intervals. These can serve as a starting point for the automated post-processing of PM data, providing much more information than the strict dichotomization into positive and negative reactions. 
Our results form the basis for a freely available R package for the analysis of PM data. PMID:22536335

  4. Prevalence of tics in schoolchildren in central Spain: a population-based study.

    PubMed

    Cubo, Esther; Gabriel y Galán, José María Trejo; Villaverde, Vanesa Ausín; Velasco, Sara Sáez; Benito, Vanesa Delgado; Macarrón, Jesús Vicente; Guevara, José Cordero; Louis, Elan D; Benito-León, Julián

    2011-08-01

Tic disorders constitute a neurodevelopmental disorder of childhood. This study sought to determine the prevalence of tic disorders in a school-based sample. A randomized sample of 1158 schoolchildren, based on clusters (classrooms) in the province of Burgos (Spain), was identified on a stratified sampling frame combining types of educational center and setting (mainstream schools and special education), using a two-phase approach (screening and diagnosis ascertainment by a neurologist). Tics with/without the impairment criterion were diagnosed according to Diagnostic and Statistical Manual of Mental Disorders criteria. In mainstream schools, tics were observed in 125/741 students (16.86%; 95% confidence interval, 14.10-19.63), and were more frequent in boys (87/448, 19.42%; 95% confidence interval, 15.64-23.19) compared with girls (38/293, 12.96%; 95% confidence interval, 8.95-16.98; P = 0.03). In special education centers, tic disorders were observed in 11/54 children (20.37%; 95% confidence interval, 8.70-32.03). Overall, tics with impairment criteria were less frequent than tics without impairment criteria (4.65% vs 11.85%, P < 0.0001). The most frequent diagnoses involved chronic motor tics (6.07%) and Tourette syndrome (5.26%). Tic disorders are common in childhood, and the use or nonuse of impairment criteria exerts a significant impact on tic prevalence estimates. Copyright © 2011 Elsevier Inc. All rights reserved.
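The prevalence intervals above are of the standard normal-approximation (Wald) form for a proportion. A minimal sketch follows; the published intervals may additionally reflect the cluster/stratified sampling design, so exact agreement with the numbers above is not expected.

```python
import math

def wald_proportion_ci(x, n, z=1.959964):
    """Normal-approximation (Wald) confidence interval for a
    prevalence x/n: p +/- z * sqrt(p(1-p)/n)."""
    p = x / n
    se = math.sqrt(p * (1.0 - p) / n)
    return p, p - z * se, p + z * se

# Example with the counts reported above: 125 of 741 students
p, lo, hi = wald_proportion_ci(125, 741)
```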

  5. Estimating the Time Interval Between Exposure to the World Trade Center Disaster and Incident Diagnoses of Obstructive Airway Disease

    PubMed Central

    Glaser, Michelle S.; Webber, Mayris P.; Zeig-Owens, Rachel; Weakley, Jessica; Liu, Xiaoxue; Ye, Fen; Cohen, Hillel W.; Aldrich, Thomas K.; Kelly, Kerry J.; Nolan, Anna; Weiden, Michael D.; Prezant, David J.; Hall, Charles B.

    2014-01-01

    Respiratory disorders are associated with occupational and environmental exposures. The latency period between exposure and disease onset remains uncertain. The World Trade Center (WTC) disaster presents a unique opportunity to describe the latency period for obstructive airway disease (OAD) diagnoses. This prospective cohort study of New York City firefighters compared the timing and incidence of physician-diagnosed OAD relative to WTC exposure. Exposure was categorized by WTC arrival time as high (on the morning of September 11, 2001), moderate (after noon on September 11, 2001, or on September 12, 2001), or low (during September 13–24, 2001). We modeled relative rates and 95% confidence intervals of OAD incidence by exposure over the first 5 years after September 11, 2001, estimating the times of change in the relative rate with change point models. We observed a change point at 15 months after September 11, 2001. Before 15 months, the relative rate for the high- versus low-exposure group was 3.96 (95% confidence interval: 2.51, 6.26) and thereafter, it was 1.76 (95% confidence interval: 1.26, 2.46). Incident OAD was associated with WTC exposure for at least 5 years after September 11, 2001. There were higher rates of new-onset OAD among the high-exposure group during the first 15 months and, to a lesser extent, throughout follow-up. This difference in relative rate by exposure occurred despite full and free access to health care for all WTC-exposed firefighters, demonstrating the persistence of WTC-associated OAD risk. PMID:24980522

  6. Abstract: Inference and Interval Estimation for Indirect Effects With Latent Variable Models.

    PubMed

    Falk, Carl F; Biesanz, Jeremy C

    2011-11-30

    Models specifying indirect effects (or mediation) and structural equation modeling are both popular in the social sciences. Yet relatively little research has compared methods that test for indirect effects among latent variables and provided precise estimates of the effectiveness of different methods. This simulation study provides an extensive comparison of methods for constructing confidence intervals and for making inferences about indirect effects with latent variables. We compared the percentile (PC) bootstrap, bias-corrected (BC) bootstrap, bias-corrected accelerated (BC a ) bootstrap, likelihood-based confidence intervals (Neale & Miller, 1997), partial posterior predictive (Biesanz, Falk, and Savalei, 2010), and joint significance tests based on Wald tests or likelihood ratio tests. All models included three reflective latent variables representing the independent, dependent, and mediating variables. The design included the following fully crossed conditions: (a) sample size: 100, 200, and 500; (b) number of indicators per latent variable: 3 versus 5; (c) reliability per set of indicators: .7 versus .9; (d) and 16 different path combinations for the indirect effect (α = 0, .14, .39, or .59; and β = 0, .14, .39, or .59). Simulations were performed using a WestGrid cluster of 1680 3.06GHz Intel Xeon processors running R and OpenMx. Results based on 1,000 replications per cell and 2,000 resamples per bootstrap method indicated that the BC and BC a bootstrap methods have inflated Type I error rates. Likelihood-based confidence intervals and the PC bootstrap emerged as methods that adequately control Type I error and have good coverage rates.
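
    The percentile (PC) bootstrap that performed well in this study is straightforward to sketch for observed (rather than latent) variables. The toy example below (simulated data; the latent-variable case would additionally require an SEM package such as OpenMx, and a direct x→y path of zero is assumed) bootstraps the indirect effect a·b:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated observed-variable mediation data; true paths a = 0.39, b = 0.39
n = 200
x = rng.normal(size=n)
m = 0.39 * x + rng.normal(size=n)
y = 0.39 * m + rng.normal(size=n)

def slope(u, v):
    """OLS slope of v on u."""
    return np.cov(u, v)[0, 1] / np.var(u, ddof=1)

def indirect(idx):
    """Indirect effect a*b computed on the rows selected by idx."""
    return slope(x[idx], m[idx]) * slope(m[idx], y[idx])

# Percentile bootstrap: resample cases, recompute a*b, take 2.5/97.5 percentiles
boot = np.array([indirect(rng.integers(0, n, n)) for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
est = indirect(np.arange(n))
print(est, lo, hi)
```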

  7. Precision of systematic and random sampling in clustered populations: habitat patches and aggregating organisms.

    PubMed

    McGarvey, Richard; Burch, Paul; Matthews, Janet M

    2016-01-01

    Natural populations of plants and animals spatially cluster because (1) suitable habitat is patchy, and (2) within suitable habitat, individuals aggregate further into clusters of higher density. First, we compare the precision of random and systematic field sampling survey designs under these two processes of species clustering. Second, we evaluate the performance of 13 estimators for the variance of the sample mean from a systematic survey. Replicated simulated surveys, as counts from 100 transects, allocated either randomly or systematically within the study region, were used to estimate population density in six spatial point populations including habitat patches and Matérn circular clustered aggregations of organisms, together and in combination. The standard one-start aligned systematic survey design, a uniform 10 × 10 grid of transects, was much more precise than the random design. Variances of the 10 000 replicated systematic survey mean densities were one-third to one-fifth of those from randomly allocated transects, implying transect sample sizes giving equivalent precision by random survey would need to be three to five times larger. Organisms being restricted to patches of habitat was alone sufficient to yield this precision advantage for the systematic design. But this improved precision for systematic sampling in clustered populations is underestimated by standard variance estimators used to compute confidence intervals. True variance for the survey sample mean was computed from the variance of 10 000 simulated survey mean estimates. Testing 10 published and three newly proposed variance estimators, the two variance estimators (v) that corrected for inter-transect correlation (ν₈ and ν(W)) were the most accurate and also the most precise in clustered populations. These greatly outperformed the two "post-stratification" variance estimators (ν₂ and ν₃) that are now more commonly applied in systematic surveys. 
Similar variance estimator performance rankings were found with a second differently generated set of spatial point populations, ν₈ and ν(W) again being the best performers in the longer-range autocorrelated populations. However, no systematic variance estimators tested were free from bias. On balance, systematic designs bring narrower confidence intervals in clustered populations, while random designs permit unbiased estimates of (often wider) confidence intervals. The search continues for better estimators of sampling variance for the systematic survey mean.

  8. Reducing aluminum: an occupation possibly associated with bladder cancer.

    PubMed Central

    Thériault, G; De Guire, L; Cordier, S

    1981-01-01

    A case-control study, undertaken to identify reasons for the exceptionally high incidence of bladder cancer among men in the Chicoutimi census division of the province of Quebec, revealed an increased risk associated with employment in the electrolysis department of an aluminum reduction plant. The estimated relative risk was 2.83 (95% confidence interval: 1.06 to 7.54). An interaction was found between such employment and cigarette smoking, resulting in a combined relative risk of 5.70 (95% confidence interval: 2.00 to 12.30). These findings suggest that employment in an aluminum reduction plant accounts for part of the excess of bladder cancer in the region studied. PMID:7214271

  9. Statistical inference based on the nonparametric maximum likelihood estimator under double-truncation.

    PubMed

    Emura, Takeshi; Konno, Yoshihiko; Michimae, Hirofumi

    2015-07-01

    Doubly truncated data consist of samples whose observed values fall between the right- and left-truncation limits. With such samples, the distribution function of interest is estimated using the nonparametric maximum likelihood estimator (NPMLE) that is obtained through a self-consistency algorithm. Owing to the complicated asymptotic distribution of the NPMLE, the bootstrap method has been suggested for statistical inference. This paper proposes a closed-form estimator for the asymptotic covariance function of the NPMLE, which is a computationally attractive alternative to bootstrapping. Furthermore, we develop various statistical inference procedures, such as confidence intervals, goodness-of-fit tests, and confidence bands, to demonstrate the usefulness of the proposed covariance estimator. Simulations are performed to compare the proposed method with both the bootstrap and jackknife methods. The methods are illustrated using the childhood cancer dataset.

  10. Correct Effect Size Estimates for Strength of Association Statistics: Comment on Odgaard and Fowler (2010)

    ERIC Educational Resources Information Center

    Lerner, Matthew D.; Mikami, Amori Yee

    2013-01-01

    Odgaard and Fowler (2010) articulated the importance of reporting confidence intervals (CIs) on effect size estimates, and they provided useful formulas for doing so. However, one of their reported formulas, pertaining to the calculation of CIs on strength of association effect sizes (e.g., R² or η²), is erroneous. This comment…

  11. An empirical Bayes method for updating inferences in analysis of quantitative trait loci using information from related genome scans.

    PubMed

    Zhang, Kui; Wiener, Howard; Beasley, Mark; George, Varghese; Amos, Christopher I; Allison, David B

    2006-08-01

    Individual genome scans for quantitative trait loci (QTL) mapping often suffer from low statistical power and imprecise estimates of QTL location and effect. This lack of precision yields large confidence intervals for QTL location, which are problematic for subsequent fine mapping and positional cloning. In prioritizing areas for follow-up after an initial genome scan and in evaluating the credibility of apparent linkage signals, investigators typically examine the results of other genome scans of the same phenotype and informally update their beliefs about which linkage signals in their scan most merit confidence and follow-up via a subjective-intuitive integration approach. A method that acknowledges the wisdom of this general paradigm but formally borrows information from other scans to increase objectivity would be beneficial. We developed an empirical Bayes analytic method to integrate information from multiple genome scans. The linkage statistic obtained from a single genome scan study is updated by incorporating statistics from other genome scans as prior information. This technique does not require that all studies have an identical marker map or a common estimated QTL effect. The updated linkage statistic can then be used for the estimation of QTL location and effect. We evaluate the performance of our method by using extensive simulations based on actual marker spacing and allele frequencies from available data. Results indicate that the empirical Bayes method can account for between-study heterogeneity, estimate the QTL location and effect more precisely, and provide narrower confidence intervals than results from any single individual study. We also compared the empirical Bayes method with a method originally developed for meta-analysis (a closely related but distinct purpose). In the face of marked heterogeneity among studies, the empirical Bayes method outperforms the comparator.

  12. Point and interval estimation of pollinator importance: a study using pollination data of Silene caroliniana.

    PubMed

    Reynolds, Richard J; Fenster, Charles B

    2008-05-01

    Pollinator importance, the product of visitation rate and pollinator effectiveness, is a descriptive parameter of the ecology and evolution of plant-pollinator interactions. Naturally, sources of its variation should be investigated, but the SE of pollinator importance has never been properly reported. Here, a Monte Carlo simulation study and a result from mathematical statistics on the variance of the product of two random variables are used to estimate the mean and confidence limits of pollinator importance for three visitor species of the wildflower, Silene caroliniana. Both methods provided similar estimates of mean pollinator importance and its interval when the sample sizes of the visitation and effectiveness datasets were comparatively large. These approaches allowed us to determine that bumblebee importance was significantly greater than that of the clearwing hawkmoth, which in turn was significantly greater than that of the beefly. The methods could be used to statistically quantify temporal and spatial variation in pollinator importance of particular visitor species. The approaches may be extended for estimating the variance of more than two random variables. However, unless the distribution function of the resulting statistic is known, the simulation approach is preferable for calculating the parameter's confidence limits.
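
    The mathematical-statistics result alluded to is presumably the classical formula for the variance of a product of two independent random variables, Var(XY) = μX²σY² + μY²σX² + σX²σY². A quick numeric check of that formula against simulation (the moments here are hypothetical, not the Silene caroliniana data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative means/SDs for visitation rate (X) and effectiveness (Y)
mx, sx = 5.0, 1.0
my, sy = 0.4, 0.1

# Exact variance of a product of two independent random variables
var_formula = mx**2 * sy**2 + my**2 * sx**2 + sx**2 * sy**2

# Monte Carlo check with a million draws
x = rng.normal(mx, sx, 1_000_000)
y = rng.normal(my, sy, 1_000_000)
var_mc = (x * y).var()

print(var_formula, var_mc)  # the two values should agree closely
```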

  13. Decline in lung function among cement production workers: a meta-analysis.

    PubMed

    Moghadam, Somayeh Rahimi; Abedi, Siavosh; Afshari, Mahdi; Abedini, Ehsan; Moosazadeh, Mahmood

    2017-12-20

    Several studies with different results have been performed regarding cement dust exposure and its pathogenic outcomes during the previous years. This study aims to combine these results to obtain a reliable estimate of the effect of exposure to cement dust. PubMed and other data banks were searched to identify required electronic articles. The search was extended by interviewing relevant experts and contacting research centers. Point and pooled estimates of the outcomes, with 95% confidence intervals, were calculated. Participants were 5371 exposed and 2650 unexposed persons. Total mean differences (95% confidence intervals) were estimated as of -0.48 (-0.71 to -0.25) L for forced vital capacity (FVC), -0.7 (-0.92 to -0.47) L for forced expiratory volume in the first second (FEV1), -0.43 (-0.68 to -0.19) L for FEV1/FVC%, -0.73 (-1.15 to -0.30) L/min for PEFR and -0.36 (-0.51 to -0.21) L/s for FEF25-75. Our meta-analysis showed that cement dust has significant impact on lung function and reduces the indicators of FVC, FEV1, FEV1/FVC, PEFR and FEF25-75.
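
    The abstract does not name the pooling model, but pooled mean differences of this kind are commonly produced with the DerSimonian-Laird random-effects estimator. A self-contained sketch with hypothetical study results (not the studies pooled above):

```python
import numpy as np

def dersimonian_laird(effects, ses):
    """Random-effects pooled estimate and 95% CI (DerSimonian-Laird)."""
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    w = 1.0 / ses**2                                   # fixed-effect weights
    theta_fe = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - theta_fe) ** 2)          # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                      # between-study variance
    w_re = 1.0 / (ses**2 + tau2)                       # random-effects weights
    theta = np.sum(w_re * effects) / np.sum(w_re)
    se = 1.0 / np.sqrt(np.sum(w_re))
    return theta, theta - 1.96 * se, theta + 1.96 * se

# Hypothetical FVC mean differences (L) and standard errors from three studies
print(dersimonian_laird([-0.30, -0.55, -0.60], [0.10, 0.12, 0.15]))
```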

  14. Serum uric acid and cancer mortality and incidence: a systematic review and meta-analysis.

    PubMed

    Dovell, Frances; Boffetta, Paolo

    2018-07-01

    Elevated serum uric acid (SUA) is a marker of chronic inflammation and has been suggested to be associated with increased risk of cancer, but its antioxidant capacity would justify an anticancer effect. Previous meta-analyses did not include all available results. We conducted a systematic review of prospective studies on SUA level and risk of all cancers and specific cancers, and conducted a meta-analysis based on random-effects models for high versus low SUA level as well as for an increase in 1 mg/dl SUA. The relative risk of all cancers for high versus low SUA level was 1.11 (95% confidence interval: 0.94-1.27; 11 risk estimates); that for a 1 mg/dl increase in SUA level was 1.03 (95% confidence interval: 0.99-1.07). Similar results were obtained for lung cancer (six risk estimates) and colon cancer (four risk estimates). Results for other cancers were sparse. Elevated SUA levels appear to be associated with a modest increase in overall cancer risk, although the combined risk estimate did not reach the formal level of statistical significance. Results for specific cancers were limited and mainly negative.

  15. Confidence estimation for quantitative photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Gröhl, Janek; Kirchner, Thomas; Maier-Hein, Lena

    2018-02-01

    Quantification of photoacoustic (PA) images is one of the major challenges currently being addressed in PA research. Tissue properties can be quantified by correcting the recorded PA signal with an estimation of the corresponding fluence. Fluence estimation itself, however, is an ill-posed inverse problem which usually needs simplifying assumptions to be solved with state-of-the-art methods. These simplifications, as well as noise and artifacts in PA images reduce the accuracy of quantitative PA imaging (PAI). This reduction in accuracy is often localized to image regions where the assumptions do not hold true. This impedes the reconstruction of functional parameters when averaging over entire regions of interest (ROI). Averaging over a subset of voxels with a high accuracy would lead to an improved estimation of such parameters. To achieve this, we propose a novel approach to the local estimation of confidence in quantitative reconstructions of PA images. It makes use of conditional probability densities to estimate confidence intervals alongside the actual quantification. It encapsulates an estimation of the errors introduced by fluence estimation as well as signal noise. We validate the approach using Monte Carlo generated data in combination with a recently introduced machine learning-based approach to quantitative PAI. Our experiments show at least a two-fold improvement in quantification accuracy when evaluating on voxels with high confidence instead of thresholding signal intensity.

  16. Estimation of the size of the female sex worker population in Rwanda using three different methods

    PubMed Central

    Kayitesi, Catherine; Gwiza, Aimé; Ruton, Hinda; Koleros, Andrew; Gupta, Neil; Balisanga, Helene; Riedel, David J; Nsanzimana, Sabin

    2014-01-01

    HIV prevalence is disproportionately high among female sex workers compared to the general population. Many African countries lack useful data on the size of female sex worker populations to inform national HIV programmes. A female sex worker size estimation exercise using three different venue-based methodologies was conducted among female sex workers in all provinces of Rwanda in August 2010. The female sex worker national population size was estimated using capture–recapture and enumeration methods, and the multiplier method was used to estimate the size of the female sex worker population in Kigali. A structured questionnaire was also used to supplement the data. The estimated number of female sex workers by the capture–recapture method was 3205 (95% confidence interval: 2998–3412). The female sex worker size was estimated at 3348 using the enumeration method. In Kigali, the female sex worker size was estimated at 2253 (95% confidence interval: 1916–2524) using the multiplier method. Nearly 80% of all female sex workers in Rwanda were found to be based in the capital, Kigali. This study provided a first-time estimate of the female sex worker population size in Rwanda using capture–recapture, enumeration, and multiplier methods. The capture–recapture and enumeration methods provided similar estimates of the female sex worker population size in Rwanda. Combination of such size estimation methods is feasible and productive in low-resource settings and should be considered vital to inform national HIV programmes. PMID:25336306
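
    The abstract reports neither the raw capture tallies nor the exact capture-recapture estimator used; a common choice for a two-sample design is Chapman's version of the Lincoln-Petersen estimator. A sketch with hypothetical counts (chosen only to land near the reported magnitude, not taken from the study):

```python
from math import sqrt

def chapman_estimate(n1, n2, m):
    """Chapman's nearly unbiased two-sample capture-recapture estimator
    with a normal-approximation 95% CI. n1: first-round captures,
    n2: second-round captures, m: individuals seen in both rounds."""
    n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)) / ((m + 1) ** 2 * (m + 2))
    half = 1.96 * sqrt(var)
    return n_hat, n_hat - half, n_hat + half

# Hypothetical capture counts for illustration
print(chapman_estimate(1200, 1100, 410))
```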

  17. Estimation of the size of the female sex worker population in Rwanda using three different methods.

    PubMed

    Mutagoma, Mwumvaneza; Kayitesi, Catherine; Gwiza, Aimé; Ruton, Hinda; Koleros, Andrew; Gupta, Neil; Balisanga, Helene; Riedel, David J; Nsanzimana, Sabin

    2015-10-01

    HIV prevalence is disproportionately high among female sex workers compared to the general population. Many African countries lack useful data on the size of female sex worker populations to inform national HIV programmes. A female sex worker size estimation exercise using three different venue-based methodologies was conducted among female sex workers in all provinces of Rwanda in August 2010. The female sex worker national population size was estimated using capture-recapture and enumeration methods, and the multiplier method was used to estimate the size of the female sex worker population in Kigali. A structured questionnaire was also used to supplement the data. The estimated number of female sex workers by the capture-recapture method was 3205 (95% confidence interval: 2998-3412). The female sex worker size was estimated at 3348 using the enumeration method. In Kigali, the female sex worker size was estimated at 2253 (95% confidence interval: 1916-2524) using the multiplier method. Nearly 80% of all female sex workers in Rwanda were found to be based in the capital, Kigali. This study provided a first-time estimate of the female sex worker population size in Rwanda using capture-recapture, enumeration, and multiplier methods. The capture-recapture and enumeration methods provided similar estimates of the female sex worker population size in Rwanda. Combination of such size estimation methods is feasible and productive in low-resource settings and should be considered vital to inform national HIV programmes. © The Author(s) 2015.

  18. An empirical Bayes approach to analyzing recurring animal surveys

    USGS Publications Warehouse

    Johnson, D.H.

    1989-01-01

    Recurring estimates of the size of animal populations are often required by biologists or wildlife managers. Because of cost or other constraints, estimates frequently lack the accuracy desired but cannot readily be improved by additional sampling. This report proposes a statistical method employing empirical Bayes (EB) estimators as alternatives to those customarily used to estimate population size, and evaluates them by a subsampling experiment on waterfowl surveys. EB estimates, especially a simple limited-translation version, were more accurate and provided shorter confidence intervals with greater coverage probabilities than customary estimates.
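
    The "limited-translation" idea mentioned above caps how far empirical Bayes shrinkage may move any single estimate. A generic sketch of that scheme (method-of-moments prior variance, hypothetical survey counts; not the paper's exact estimator):

```python
import numpy as np

def eb_shrink(x, se, limit=1.0):
    """Empirical Bayes shrinkage of survey estimates toward their grand
    mean, with a 'limited translation' cap: no estimate is moved by more
    than `limit` standard errors from its raw value."""
    x = np.asarray(x, float)
    grand = x.mean()
    tau2 = max(0.0, x.var(ddof=1) - se**2)   # method-of-moments prior variance
    b = se**2 / (se**2 + tau2)               # shrinkage factor toward grand mean
    eb = grand + (1 - b) * (x - grand)
    shift = np.clip(eb - x, -limit * se, limit * se)
    return x + shift

# Hypothetical waterfowl survey estimates with a common standard error of 20
counts = np.array([120.0, 95.0, 200.0, 150.0, 70.0])
print(eb_shrink(counts, se=20.0))
```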

  19. Healthy Worker Survivor Bias in the Colorado Plateau Uranium Miners Cohort

    PubMed Central

    Keil, Alexander P.; Richardson, David B.; Troester, Melissa A.

    2015-01-01

    Cohort mortality studies of underground miners have been used to estimate the number of lung cancer deaths attributable to radon exposure. However, previous studies of the radon–lung cancer association among underground miners may have been subject to healthy worker survivor bias, a type of time-varying confounding by employment status. We examined radon-mortality associations in a study of 4,124 male uranium miners from the Colorado Plateau who were followed from 1950 through 2005. We estimated the time ratio (relative change in median survival time) per 100 working level months (radon exposure averaging 130,000 mega-electron volts of potential α energy per liter of air, per working month) using G-estimation of structural nested models. After controlling for healthy worker survivor bias, the time ratio for lung cancer per 100 working level months was 1.168 (95% confidence interval: 1.152, 1.174). In an unadjusted model, the estimate was 1.102 (95% confidence interval: 1.099, 1.112), an excess 39% smaller. Controlling for this bias, we estimated that among 617 lung cancer deaths, 6,071 person-years of life were lost due to occupational radon exposure during follow-up. Our analysis suggests that healthy worker survivor bias in miner cohort studies can be substantial, warranting reexamination of current estimates of radon's impact on lung cancer mortality. PMID:25837305

  20. Methods for the accurate estimation of confidence intervals on protein folding ϕ-values

    PubMed Central

    Ruczinski, Ingo; Sosnick, Tobin R.; Plaxco, Kevin W.

    2006-01-01

    ϕ-Values provide an important benchmark for the comparison of experimental protein folding studies to computer simulations and theories of the folding process. Despite the growing importance of ϕ measurements, however, formulas to quantify the precision with which ϕ is measured have seen little significant discussion. Moreover, a commonly employed method for the determination of standard errors on ϕ estimates assumes that estimates of the changes in free energy of the transition and folded states are independent. Here we demonstrate that this assumption is usually incorrect and that this typically leads to the underestimation of ϕ precision. We derive an analytical expression for the precision of ϕ estimates (assuming linear chevron behavior) that explicitly takes this dependence into account. We also describe an alternative method that implicitly corrects for the effect. By simulating experimental chevron data, we show that both methods accurately estimate ϕ confidence intervals. We also explore the effects of the commonly employed techniques of calculating ϕ from kinetics estimated at non-zero denaturant concentrations and via the assumption of parallel chevron arms. We find that these approaches can produce significantly different estimates for ϕ (again, even for truly linear chevron behavior), indicating that they are not equivalent, interchangeable measures of transition state structure. Lastly, we describe a Web-based implementation of the above algorithms for general use by the protein folding community. PMID:17008714

  1. Automated semantic indexing of figure captions to improve radiology image retrieval.

    PubMed

    Kahn, Charles E; Rubin, Daniel L

    2009-01-01

    We explored automated concept-based indexing of unstructured figure captions to improve retrieval of images from radiology journals. The MetaMap Transfer program (MMTx) was used to map the text of 84,846 figure captions from 9,004 peer-reviewed, English-language articles to concepts in three controlled vocabularies from the UMLS Metathesaurus, version 2006AA. Sampling procedures were used to estimate the standard information-retrieval metrics of precision and recall, and to evaluate the degree to which concept-based retrieval improved image retrieval. Precision was estimated based on a sample of 250 concepts. Recall was estimated based on a sample of 40 concepts. We measured the degree to which concept-based retrieval improved upon keyword-based retrieval in a random sample of 10,000 search queries issued by users of a radiology image search engine. Estimated precision was 0.897 (95% confidence interval, 0.857-0.937). Estimated recall was 0.930 (95% confidence interval, 0.838-1.000). In 5,535 of 10,000 search queries (55%), concept-based retrieval found results not identified by simple keyword matching; in 2,086 searches (21%), more than 75% of the results were found by concept-based search alone. Concept-based indexing of radiology journal figure captions achieved very high precision and recall, and significantly improved image retrieval.

  2. Exposure to traffic-related air pollution during pregnancy and term low birth weight: estimation of causal associations in a semiparametric model.

    PubMed

    Padula, Amy M; Mortimer, Kathleen; Hubbard, Alan; Lurmann, Frederick; Jerrett, Michael; Tager, Ira B

    2012-11-01

    Traffic-related air pollution is recognized as an important contributor to health problems. Epidemiologic analyses suggest that prenatal exposure to traffic-related air pollutants may be associated with adverse birth outcomes; however, there is insufficient evidence to conclude that the relation is causal. The Study of Air Pollution, Genetics and Early Life Events comprises all births to women living in 4 counties in California's San Joaquin Valley during the years 2000-2006. The probability of low birth weight among full-term infants in the population was estimated using machine learning and targeted maximum likelihood estimation for each quartile of traffic exposure during pregnancy. If everyone lived near high-volume freeways (approximated as the fourth quartile of traffic density), the estimated probability of term low birth weight would be 2.27% (95% confidence interval: 2.16, 2.38) as compared with 2.02% (95% confidence interval: 1.90, 2.12) if everyone lived near smaller local roads (first quartile of traffic density). Assessment of potentially causal associations, in the absence of arbitrary model assumptions applied to the data, should result in relatively unbiased estimates. The current results support findings from previous studies that prenatal exposure to traffic-related air pollution may adversely affect birth weight among full-term infants.

  3. Correcting for bias in the selection and validation of informative diagnostic tests.

    PubMed

    Robertson, David S; Prevost, A Toby; Bowden, Jack

    2015-04-15

    When developing a new diagnostic test for a disease, there are often multiple candidate classifiers to choose from, and it is unclear if any will offer an improvement in performance compared with current technology. A two-stage design can be used to select a promising classifier (if one exists) in stage one for definitive validation in stage two. However, estimating the true properties of the chosen classifier is complicated by the first stage selection rules. In particular, the usual maximum likelihood estimator (MLE) that combines data from both stages will be biased high. Consequently, confidence intervals and p-values flowing from the MLE will also be incorrect. Building on the results of Pepe et al. (SIM 28:762-779), we derive the most efficient conditionally unbiased estimator and exact confidence intervals for a classifier's sensitivity in a two-stage design with arbitrary selection rules; the condition being that the trial proceeds to the validation stage. We apply our estimation strategy to data from a recent family history screening tool validation study by Walter et al. (BJGP 63:393-400) and are able to identify and successfully adjust for bias in the tool's estimated sensitivity to detect those at higher risk of breast cancer. © 2015 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

  4. Fall in hematocrit per 1000 parasites cleared from peripheral blood: a simple method for estimating drug-related fall in hematocrit after treatment of malaria infections.

    PubMed

    Gbotosho, Grace Olusola; Okuboyejo, Titilope; Happi, Christian Tientcha; Sowunmi, Akintunde

    2014-01-01

    A simple method to estimate antimalarial drug-related fall in hematocrit (FIH) after treatment of Plasmodium falciparum infections in the field is described. The method involves numeric estimation of the relative difference between hematocrit at baseline (pretreatment) and in the first 1 or 2 days after treatment began as the numerator, and the corresponding relative difference in parasitemia as the denominator, expressed per 1000 parasites cleared from peripheral blood. Using the method showed that FIH/1000 parasites cleared from peripheral blood (cpb) at 24 or 48 hours were similar in artemether-lumefantrine and artesunate-amodiaquine-treated children (0.09; 95% confidence interval, 0.052-0.138 vs 0.10; 95% confidence interval, 0.069-0.139; P = 0.75). FIH/1000 parasites cpb in patients with higher parasitemias were significantly (P < 0.0001) and five- to 10-fold lower than in patients with lower parasitemias, suggesting conservation of hematocrit or red cells in patients with higher parasitemias treated with artesunate-amodiaquine or artemether-lumefantrine. FIH/1000 parasites cpb were similar in anemic and nonanemic children. Estimation of FIH/1000 parasites cpb is simple, allows estimation of relatively conserved hematocrit during treatment, and can be used in both observational studies and clinical trials involving antimalarial drugs.

  5. Program for Weibull Analysis of Fatigue Data

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    2005-01-01

    A Fortran computer program has been written for performing statistical analyses of fatigue-test data that are assumed to be adequately represented by a two-parameter Weibull distribution. This program calculates the following: (1) Maximum-likelihood estimates of the Weibull distribution; (2) Data for contour plots of relative likelihood for two parameters; (3) Data for contour plots of joint confidence regions; (4) Data for the profile likelihood of the Weibull-distribution parameters; (5) Data for the profile likelihood of any percentile of the distribution; and (6) Likelihood-based confidence intervals for parameters and/or percentiles of the distribution. The program can account for tests that are suspended without failure (the statistical term for such suspension of tests is "censoring"). The analytical approach followed in this program is valid for type-I censoring, which is the removal of unfailed units at pre-specified times. Confidence regions and intervals are calculated by use of the likelihood-ratio method.
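
    The program itself is Fortran, but the core two-parameter maximum-likelihood fit it performs can be sketched briefly. Below is an uncensored-data version in Python using the standard profile-likelihood fixed-point iteration for the Weibull shape parameter (type-I censoring, which the program supports, is omitted for brevity; the simulated data are illustrative):

```python
import numpy as np

def weibull_mle(x, tol=1e-10):
    """Two-parameter Weibull MLE via the standard profile-likelihood
    fixed-point iteration for the shape parameter k."""
    x = np.asarray(x, float)
    logx = np.log(x)
    k = 1.0
    for _ in range(200):
        xk = x ** k
        k_new = 1.0 / (np.sum(xk * logx) / np.sum(xk) - logx.mean())
        if abs(k_new - k) < tol:
            k = k_new
            break
        k = k_new
    scale = np.mean(x ** k) ** (1.0 / k)     # MLE of the scale given k
    return k, scale

rng = np.random.default_rng(42)
lives = 1e5 * rng.weibull(2.0, size=500)     # simulated fatigue lives (cycles)
print(weibull_mle(lives))                    # estimates near (2.0, 1e5)
```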

  6. Maternal and neonatal outcomes after bariatric surgery; a systematic review and meta-analysis: do the benefits outweigh the risks?

    PubMed

    Kwong, Wilson; Tomlinson, George; Feig, Denice S

    2018-02-15

    Obesity during pregnancy is associated with a number of adverse obstetric outcomes that include gestational diabetes mellitus, macrosomia, and preeclampsia. Increasing evidence shows that bariatric surgery may decrease the risk of these outcomes. Our aim was to evaluate the benefits and risks of bariatric surgery in obese women in terms of obstetric outcomes. We performed a systematic literature search using MEDLINE, Embase, Cochrane, Web of Science, and PubMed from inception up to December 12, 2016. Studies were included if they evaluated patients who underwent bariatric surgery, reported subsequent pregnancy outcomes, and compared these outcomes with those of a control group. Two reviewers extracted study outcomes independently, and risk of bias was assessed with the use of the Newcastle-Ottawa Quality Assessment Scale. Pooled odds ratios for each outcome were estimated with the DerSimonian and Laird random effects model. After a review of 2616 abstracts, 20 cohort studies and approximately 2.8 million subjects (8364 of whom had bariatric surgery) were included in the meta-analysis.
    In our primary analysis, patients who underwent bariatric surgery showed reduced rates of gestational diabetes mellitus (odds ratio, 0.20; 95% confidence interval, 0.11-0.37; number needed to benefit, 5), large-for-gestational-age infants (odds ratio, 0.31; 95% confidence interval, 0.17-0.59; number needed to benefit, 6), gestational hypertension (odds ratio, 0.38; 95% confidence interval, 0.19-0.76; number needed to benefit, 11), all hypertensive disorders (odds ratio, 0.38; 95% confidence interval, 0.27-0.53; number needed to benefit, 8), postpartum hemorrhage (odds ratio, 0.32; 95% confidence interval, 0.08-1.37; number needed to benefit, 21), and caesarean delivery (odds ratio, 0.50; 95% confidence interval, 0.38-0.67; number needed to benefit, 9); however, this group of patients showed an increase in small-for-gestational-age infants (odds ratio, 2.16; 95% confidence interval, 1.34-3.48; number needed to harm, 21), intrauterine growth restriction (odds ratio, 2.16; 95% confidence interval, 1.34-3.48; number needed to harm, 66), and preterm deliveries (odds ratio, 1.35; 95% confidence interval, 1.02-1.79; number needed to harm, 35) when compared with control subjects who were matched for presurgery body mass index. There were no differences in rates of preeclampsia, neonatal intensive care unit admissions, stillbirths, malformations, and neonatal death. Malabsorptive surgeries resulted in a greater increase in small-for-gestational-age infants (P=.0466) and a greater decrease in large-for-gestational-age infants (P<.0001) compared with restrictive surgeries. There were no differences in outcomes when we used administrative databases vs clinical charts. Although bariatric surgery is associated with a reduction in the risk of several adverse obstetric outcomes, there is a potential for an increased risk of other important outcomes that should be considered when bariatric surgery is discussed with reproductive-age women. Copyright © 2018 Elsevier Inc.
All rights reserved.
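The "number needed to benefit" figures above can be derived from an odds ratio once a baseline (control-group) event rate is assumed; the 25% baseline risk in the sketch below is hypothetical, since the abstract does not report the control rates:

```python
def nnt_from_or(odds_ratio, control_risk):
    """Number needed to treat (benefit/harm) implied by an odds ratio
    at an assumed control-group event rate. The baseline rate is a
    hypothetical input, not a value reported in the abstract."""
    control_odds = control_risk / (1.0 - control_risk)
    treated_odds = control_odds * odds_ratio
    treated_risk = treated_odds / (1.0 + treated_odds)
    # absolute risk difference -> number needed to treat
    return 1.0 / abs(control_risk - treated_risk)

# e.g. gestational diabetes: OR 0.20 at an assumed 25% baseline risk
nnt_gdm = nnt_from_or(0.20, 0.25)
```

With that assumed baseline, the computation reproduces a number needed to benefit of about 5, matching the abstract's figure for gestational diabetes.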

  7. Inverse modeling with RZWQM2 to predict water quality

    USDA-ARS's Scientific Manuscript database

    Agricultural systems models such as RZWQM2 are complex and have numerous parameters that are unknown and difficult to estimate. Inverse modeling provides an objective statistical basis for calibration that involves simultaneous adjustment of model parameters and yields parameter confidence intervals...

  8. PMICALC: an R code-based software for estimating post-mortem interval (PMI) compatible with Windows, Mac and Linux operating systems.

    PubMed

    Muñoz-Barús, José I; Rodríguez-Calvo, María Sol; Suárez-Peñaranda, José M; Vieira, Duarte N; Cadarso-Suárez, Carmen; Febrero-Bande, Manuel

    2010-01-30

    In legal medicine the correct determination of the time of death is of utmost importance. Recent advances in estimating post-mortem interval (PMI) have made use of vitreous humour chemistry in conjunction with Linear Regression, but the results are questionable. In this paper we present PMICALC, an R code-based freeware package which estimates PMI in cadavers of recent death by measuring the concentrations of potassium ([K+]), hypoxanthine ([Hx]) and urea ([U]) in the vitreous humor using two different regression models: Additive Models (AM) and Support Vector Machine (SVM), which offer more flexibility than the previously used Linear Regression. The results from both models are better than those published to date and can give numerical expression of PMI with confidence intervals and graphic support within 20 min. The program also takes into account the cause of death. 2009 Elsevier Ireland Ltd. All rights reserved.

  9. Sample variance in weak lensing: How many simulations are required?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petri, Andrea; May, Morgan; Haiman, Zoltan

    Constraining cosmology using weak gravitational lensing consists of comparing a measured feature vector of dimension N_b with its simulated counterpart. An accurate estimate of the N_b × N_b feature covariance matrix C is essential to obtain accurate parameter confidence intervals. When C is measured from a set of simulations, an important question is how large this set should be. To answer this question, we construct different ensembles of N_r realizations of the shear field, using a common randomization procedure that recycles the outputs from a smaller number N_s ≤ N_r of independent ray-tracing N-body simulations. We study parameter confidence intervals as a function of (N_s, N_r) in the range 1 ≤ N_s ≤ 200 and 1 ≤ N_r ≲ 10^5. Previous work [S. Dodelson and M. D. Schneider, Phys. Rev. D 88, 063537 (2013)] has shown that Gaussian noise in the feature vectors (from which the covariance is estimated) leads, at quadratic order, to an O(1/N_r) degradation of the parameter confidence intervals. Using a variety of lensing features measured in our simulations, including shear-shear power spectra and peak counts, we show that cubic and quartic covariance fluctuations lead to additional O(1/N_r^2) error degradation that is not negligible when N_r is only a factor of a few larger than N_b. We study the large-N_r limit, and find that a single, 240 Mpc/h sized 512^3-particle N-body simulation (N_s = 1) can be repeatedly recycled to produce as many as N_r = a few × 10^4 shear maps whose power spectra and high-significance peak counts can be treated as statistically independent. Lastly, a small number of simulations (N_s = 1 or 2) is sufficient to forecast parameter confidence intervals at percent accuracy.
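The reason a finite simulation ensemble degrades confidence intervals can be seen already in one dimension: the inverse of a variance estimated from N_r Gaussian realizations overestimates the true precision by a factor of (N_r - 1)/(N_r - N_b - 2) (the Hartlap correction, with N_b = 1 here). A small Monte Carlo sketch, not tied to the paper's lensing data:

```python
import random
import statistics

random.seed(1)
n_r, trials = 10, 20000   # realizations per covariance estimate; MC trials
inv_var = []
for _ in range(trials):
    # n_r Gaussian "realizations" with true variance 1.0
    draws = [random.gauss(0.0, 1.0) for _ in range(n_r)]
    # invert the unbiased sample variance, as one would invert C
    inv_var.append(1.0 / statistics.variance(draws))

mean_inv = sum(inv_var) / trials
# predicted bias factor (N_r - 1) / (N_r - N_b - 2) with N_b = 1
hartlap = (n_r - 1) / (n_r - 1 - 2)
```

The simulated mean inverse variance lands near the predicted factor of 9/7 rather than the true precision of 1.0, which is the one-dimensional analogue of the confidence-interval degradation studied above.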

  10. Sample variance in weak lensing: How many simulations are required?

    DOE PAGES

    Petri, Andrea; May, Morgan; Haiman, Zoltan

    2016-03-24

    Constraining cosmology using weak gravitational lensing consists of comparing a measured feature vector of dimension N_b with its simulated counterpart. An accurate estimate of the N_b × N_b feature covariance matrix C is essential to obtain accurate parameter confidence intervals. When C is measured from a set of simulations, an important question is how large this set should be. To answer this question, we construct different ensembles of N_r realizations of the shear field, using a common randomization procedure that recycles the outputs from a smaller number N_s ≤ N_r of independent ray-tracing N-body simulations. We study parameter confidence intervals as a function of (N_s, N_r) in the range 1 ≤ N_s ≤ 200 and 1 ≤ N_r ≲ 10^5. Previous work [S. Dodelson and M. D. Schneider, Phys. Rev. D 88, 063537 (2013)] has shown that Gaussian noise in the feature vectors (from which the covariance is estimated) leads, at quadratic order, to an O(1/N_r) degradation of the parameter confidence intervals. Using a variety of lensing features measured in our simulations, including shear-shear power spectra and peak counts, we show that cubic and quartic covariance fluctuations lead to additional O(1/N_r^2) error degradation that is not negligible when N_r is only a factor of a few larger than N_b. We study the large-N_r limit, and find that a single, 240 Mpc/h sized 512^3-particle N-body simulation (N_s = 1) can be repeatedly recycled to produce as many as N_r = a few × 10^4 shear maps whose power spectra and high-significance peak counts can be treated as statistically independent. Lastly, a small number of simulations (N_s = 1 or 2) is sufficient to forecast parameter confidence intervals at percent accuracy.

  11. Bias and uncertainty in regression-calibrated models of groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Cooley, R.L.; Christensen, S.

    2006-01-01

    Groundwater models need to account for detailed but generally unknown spatial variability (heterogeneity) of the hydrogeologic model inputs. To address this problem, we replace the large, m-dimensional stochastic vector β that reflects both small and large scales of heterogeneity in the inputs by a lumped or smoothed m-dimensional approximation Xβ*, where X is an interpolation matrix and β* is a stochastic vector of parameters. Vector β* has small enough dimension to allow its estimation with the available data. The consequence of the replacement is that the model function f(Xβ*) written in terms of the approximate inputs is in error with respect to the same model function written in terms of β, f(β), which is assumed to be nearly exact. The difference f(β) - f(Xβ*), termed model error, is spatially correlated, generates prediction biases, and causes standard confidence and prediction intervals to be too small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate β* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear regression methods are extended to analyze the revised method. The analysis develops analytical expressions for bias terms reflecting the interaction of model nonlinearity and model error, for correction factors needed to adjust the sizes of confidence and prediction intervals for this interaction, and for correction factors needed to adjust the sizes of confidence and prediction intervals for possible use of a diagonal weight matrix in place of the correct one. If terms expressing the degree of intrinsic nonlinearity for f(β) and f(Xβ*) are small, then most of the biases are small and the correction factors are reduced in magnitude.
    Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is large to test robustness of the methodology. Numerical results conform with the theoretical analysis. © 2005 Elsevier Ltd. All rights reserved.

  12. Daily mean temperature estimate at the US SURFRAD stations as an average of the maximum and minimum temperatures

    DOE PAGES

    Chylek, Petr; Augustine, John A.; Klett, James D.; ...

    2017-09-30

    At thousands of stations worldwide, the mean daily surface air temperature is estimated as the mean of the daily maximum (T_max) and minimum (T_min) temperatures. In this paper, we use the NOAA Surface Radiation Budget Network (SURFRAD) of seven US stations with surface air temperature recorded each minute to assess the accuracy of the mean daily temperature estimate as an average of the daily maximum and minimum temperatures and to investigate how the accuracy of the estimate increases with an increasing number of daily temperature observations. We find the average difference between the estimate based on an average of the maximum and minimum temperatures and the average of 1440 1-min daily observations to be -0.05 ± 1.56 °C, based on analyses of a sample of 238 days of temperature observations. Considering determination of the daily mean temperature based on 3, 4, 6, 12, or 24 daily temperature observations, we find that 3, 4, or 6 daily observations do not significantly reduce the uncertainty of the daily mean temperature. A statistically significant (95% confidence level) reduction in bias occurs only with 12 or 24 daily observations. The daily mean temperature determination based on 24 hourly observations reduces the sample daily temperature uncertainty to -0.01 ± 0.20 °C. Finally, estimating the parameters of the population of all SURFRAD observations, the 95% confidence interval based on 24 hourly measurements is from -0.025 to 0.004 °C, compared with a confidence interval from -0.15 to 0.05 °C based on the mean of T_max and T_min.
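The bias of the min-max estimator depends on the shape of the diurnal cycle, which a synthetic example makes concrete (the temperature curve below is an assumed sinusoid plus an asymmetric harmonic, not SURFRAD data): 24 hourly samples recover the true daily mean exactly for low-order harmonics, while (T_max + T_min)/2 is biased whenever the cycle is asymmetric.

```python
import math

# one synthetic day of 1440 one-minute temperatures: a diurnal sinusoid
# plus an asymmetric second harmonic (hypothetical, not SURFRAD data)
temps = [15.0 + 8.0 * math.sin(2 * math.pi * m / 1440)
         + 2.0 * math.cos(4 * math.pi * m / 1440) for m in range(1440)]

true_mean = sum(temps) / len(temps)            # all 1440 observations
hourly_mean = sum(temps[::60]) / 24            # 24 hourly observations
minmax_mean = 0.5 * (max(temps) + min(temps))  # (T_max + T_min) / 2
```

Here the true and hourly means both equal 15 °C, while the min-max estimate is 13 °C, illustrating why only denser daily sampling reduces the bias in a guaranteed way.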

  13. Daily mean temperature estimate at the US SURFRAD stations as an average of the maximum and minimum temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chylek, Petr; Augustine, John A.; Klett, James D.

    At thousands of stations worldwide, the mean daily surface air temperature is estimated as the mean of the daily maximum (T_max) and minimum (T_min) temperatures. In this paper, we use the NOAA Surface Radiation Budget Network (SURFRAD) of seven US stations with surface air temperature recorded each minute to assess the accuracy of the mean daily temperature estimate as an average of the daily maximum and minimum temperatures and to investigate how the accuracy of the estimate increases with an increasing number of daily temperature observations. We find the average difference between the estimate based on an average of the maximum and minimum temperatures and the average of 1440 1-min daily observations to be -0.05 ± 1.56 °C, based on analyses of a sample of 238 days of temperature observations. Considering determination of the daily mean temperature based on 3, 4, 6, 12, or 24 daily temperature observations, we find that 3, 4, or 6 daily observations do not significantly reduce the uncertainty of the daily mean temperature. A statistically significant (95% confidence level) reduction in bias occurs only with 12 or 24 daily observations. The daily mean temperature determination based on 24 hourly observations reduces the sample daily temperature uncertainty to -0.01 ± 0.20 °C. Finally, estimating the parameters of the population of all SURFRAD observations, the 95% confidence interval based on 24 hourly measurements is from -0.025 to 0.004 °C, compared with a confidence interval from -0.15 to 0.05 °C based on the mean of T_max and T_min.

  14. Comparing facility-level methane emission rate estimates at natural gas gathering and boosting stations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaughn, Timothy L.; Bell, Clay S.; Yacovitch, Tara I.

    Coordinated dual-tracer, aircraft-based, and direct component-level measurements were made at midstream natural gas gathering and boosting stations in the Fayetteville shale (Arkansas, USA). On-site component-level measurements were combined with engineering estimates to generate comprehensive facility-level methane emission rate estimates ('study on-site estimates' (SOE)) comparable to tracer and aircraft measurements. Combustion slip (unburned fuel entrained in compressor engine exhaust), which was calculated based on 111 recent measurements of representative compressor engines, accounts for an estimated 75% of cumulative SOEs at gathering stations included in comparisons. Measured methane emissions from regenerator vents on glycol dehydrator units were substantially larger than predicted by modelling software; the contribution of dehydrator regenerator vents to the cumulative SOE would increase from 1% to 10% if based on direct measurements. Concurrent measurements at 14 normally operating facilities show relative agreement between tracer and SOE, but indicate that tracer measurements estimate lower emissions (regression of tracer to SOE = 0.91 (95% CI = 0.83-0.99), R^2 = 0.89). Tracer and SOE 95% confidence intervals overlap at 11/14 facilities. Contemporaneous measurements at six facilities suggest that aircraft measurements estimate higher emissions than SOE. Aircraft and study on-site estimate 95% confidence intervals overlap at 3/6 facilities. The average facility-level emission rate (FLER) estimated by tracer measurements in this study is 17-73% higher than in a prior national study by Marchese et al.

  15. Comparing facility-level methane emission rate estimates at natural gas gathering and boosting stations

    DOE PAGES

    Vaughn, Timothy L.; Bell, Clay S.; Yacovitch, Tara I.; ...

    2017-02-09

    Coordinated dual-tracer, aircraft-based, and direct component-level measurements were made at midstream natural gas gathering and boosting stations in the Fayetteville shale (Arkansas, USA). On-site component-level measurements were combined with engineering estimates to generate comprehensive facility-level methane emission rate estimates ('study on-site estimates' (SOE)) comparable to tracer and aircraft measurements. Combustion slip (unburned fuel entrained in compressor engine exhaust), which was calculated based on 111 recent measurements of representative compressor engines, accounts for an estimated 75% of cumulative SOEs at gathering stations included in comparisons. Measured methane emissions from regenerator vents on glycol dehydrator units were substantially larger than predicted by modelling software; the contribution of dehydrator regenerator vents to the cumulative SOE would increase from 1% to 10% if based on direct measurements. Concurrent measurements at 14 normally operating facilities show relative agreement between tracer and SOE, but indicate that tracer measurements estimate lower emissions (regression of tracer to SOE = 0.91 (95% CI = 0.83-0.99), R^2 = 0.89). Tracer and SOE 95% confidence intervals overlap at 11/14 facilities. Contemporaneous measurements at six facilities suggest that aircraft measurements estimate higher emissions than SOE. Aircraft and study on-site estimate 95% confidence intervals overlap at 3/6 facilities. The average facility-level emission rate (FLER) estimated by tracer measurements in this study is 17-73% higher than in a prior national study by Marchese et al.

  16. CREATION OF A MODEL TO PREDICT SURVIVAL IN PATIENTS WITH REFRACTORY COELIAC DISEASE USING A MULTINATIONAL REGISTRY

    PubMed Central

    Rubio-Tapia, Alberto; Malamut, Georgia; Verbeek, Wieke H.M.; van Wanrooij, Roy L.J.; Leffler, Daniel A.; Niveloni, Sonia I.; Arguelles-Grande, Carolina; Lahr, Brian D.; Zinsmeister, Alan R.; Murray, Joseph A.; Kelly, Ciaran P.; Bai, Julio C.; Green, Peter H.; Daum, Severin; Mulder, Chris J.J.; Cellier, Christophe

    2016-01-01

    Background Refractory coeliac disease is a severe complication of coeliac disease with heterogeneous outcome. Aim To create a prognostic model to estimate survival of patients with refractory coeliac disease. Methods We evaluated predictors of 5-year mortality using Cox proportional hazards regression on subjects from a multinational registry. Bootstrap re-sampling was used to internally validate the individual factors and overall model performance. The mean of the estimated regression coefficients from 400 bootstrap models was used to derive a risk score for 5-year mortality. Results The multinational cohort was composed of 232 patients diagnosed with refractory coeliac disease across 7 centers (range of 11–63 cases per center). The median age was 53 years and 150 (64%) were women. A total of 51 subjects died during 5-year follow-up (cumulative 5-year all-cause mortality = 30%). From a multiple variable Cox proportional hazards model, the following variables were significantly associated with 5-year mortality: age at refractory coeliac disease diagnosis (per 20 year increase, hazard ratio = 2.21; 95% confidence interval: 1.38, 3.55), abnormal intraepithelial lymphocytes (hazard ratio = 2.85; 95% confidence interval: 1.22, 6.62), and albumin (per 0.5 unit increase, hazard ratio = 0.72; 95% confidence interval: 0.61, 0.85). A simple weighted 3-factor risk score was created to estimate 5-year survival. Conclusions Using data from a multinational registry and previously-reported risk factors, we created a prognostic model to predict 5-year mortality among patients with refractory coeliac disease. This new model may help clinicians to guide treatment and follow-up. PMID:27485029
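A weighted risk score of the kind described can be sketched by reusing the reported hazard ratios as log-scale weights; this is an illustration only, not the paper's published 3-factor point system:

```python
import math

def risk_score(age_years, abnormal_iel, albumin_g_dl):
    """Illustrative linear predictor built from the abstract's hazard
    ratios (hypothetical weights; the paper's actual point system is
    not given in the abstract)."""
    score = math.log(2.21) * (age_years / 20.0)        # HR per 20-year increase
    score += math.log(2.85) * (1.0 if abnormal_iel else 0.0)
    score += math.log(0.72) * (albumin_g_dl / 0.5)     # HR per 0.5-unit increase
    return score
```

A higher score corresponds to higher predicted 5-year mortality: older age and abnormal intraepithelial lymphocytes raise it, while higher albumin lowers it.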

  17. Likelihood ratio meta-analysis: New motivation and approach for an old method.

    PubMed

    Dormuth, Colin R; Filion, Kristian B; Platt, Robert W

    2016-03-01

    A 95% confidence interval (CI) in an updated meta-analysis may not have the expected 95% coverage. If a meta-analysis is simply updated with additional data, then the resulting 95% CI will be wrong because it will not have accounted for the fact that the earlier meta-analysis failed or succeeded in excluding the null. This situation can be avoided by using the likelihood ratio (LR) as a measure of evidence that does not depend on type-I error. We show how an LR-based approach, first advanced by Goodman, can be used in a meta-analysis to pool data from separate studies to quantitatively assess where the total evidence points. The method works by estimating the log-likelihood ratio (LogLR) function from each study. Those functions are then summed to obtain a combined function, which is then used to retrieve the total effect estimate, and a corresponding 'intrinsic' confidence interval. Using as illustrations the CAPRIE trial of clopidogrel versus aspirin in the prevention of ischemic events, and our own meta-analysis of higher potency statins and the risk of acute kidney injury, we show that the LR-based method yields the same point estimate as the traditional analysis, but with an intrinsic confidence interval that is appropriately wider than the traditional 95% CI. The LR-based method can be used to conduct both fixed effect and random effects meta-analyses, it can be applied to old and new meta-analyses alike, and results can be presented in a format that is familiar to a meta-analytic audience. Copyright © 2016 Elsevier Inc. All rights reserved.
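The pooling step described above can be sketched with normal approximations to each study's likelihood; the study estimates and standard errors below are hypothetical, and the 1/32 support cutoff is one conventional likelihood-ratio threshold, not necessarily the paper's choice:

```python
import math

# three hypothetical studies: (log-odds-ratio estimate, standard error)
studies = [(-0.30, 0.12), (-0.10, 0.20), (-0.25, 0.15)]

def log_lik(theta):
    # summed per-study normal log-likelihoods (constants dropped),
    # i.e. the combined log-likelihood-ratio curve up to a constant
    return sum(-0.5 * ((theta - est) / se) ** 2 for est, se in studies)

grid = [i / 10000.0 for i in range(-8000, 8001)]
theta_hat = max(grid, key=log_lik)          # pooled point estimate

# 1/32 support interval: all theta whose LR against the maximum exceeds 1/32
cutoff = log_lik(theta_hat) - math.log(32.0)
support = [t for t in grid if log_lik(t) >= cutoff]
interval = (min(support), max(support))
```

With normal likelihoods the pooled maximum coincides with the inverse-variance weighted mean, so the point estimate matches a traditional fixed-effect analysis while the support interval is wider than the usual 95% CI.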

  18. Population-based estimates of the prevalence of FMR1 expansion mutations in women with early menopause and primary ovarian insufficiency.

    PubMed

    Murray, Anna; Schoemaker, Minouk J; Bennett, Claire E; Ennis, Sarah; Macpherson, James N; Jones, Michael; Morris, Danielle H; Orr, Nick; Ashworth, Alan; Jacobs, Patricia A; Swerdlow, Anthony J

    2014-01-01

    Primary ovarian insufficiency before the age of 40 years affects 1% of the female population and is characterized by permanent cessation of menstruation. Genetic causes include FMR1 expansion mutations. Previous studies have estimated mutation prevalence in clinical referrals for primary ovarian insufficiency, but these are likely to be biased as compared with cases in the general population. The prevalence of FMR1 expansion mutations in early menopause (between the ages of 40 and 45 years) has not been published. We studied FMR1 CGG repeat number in more than 2,000 women from the Breakthrough Generations Study who underwent menopause before the age of 46 years. We determined the prevalence of premutation (55-200 CGG repeats) and intermediate (45-54 CGG repeats) alleles in women with primary ovarian insufficiency (n = 254) and early menopause (n = 1,881). The prevalence of the premutation was 2.0% in primary ovarian insufficiency, 0.7% in early menopause, and 0.4% in controls, corresponding to odds ratios of 5.4 (95% confidence interval = 1.7-17.4; P = 0.004) for primary ovarian insufficiency and 2.0 (95% confidence interval = 0.8-5.1; P = 0.12) for early menopause. Combining primary ovarian insufficiency and early menopause gave an odds ratio of 2.4 (95% confidence interval = 1.02-5.8; P = 0.04). Intermediate alleles were not significant risk factors for either early menopause or primary ovarian insufficiency. FMR1 premutations are not as prevalent in women with ovarian insufficiency as previous estimates have suggested, but they still represent a substantial cause of primary ovarian insufficiency and early menopause.
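The reported odds ratios can be approximately recovered from the prevalences alone (a sketch; the paper's 5.4 and 2.0 come from the exact allele counts, so rounding of the quoted prevalences shifts the result):

```python
def odds_ratio(p_exposed, p_control):
    """Odds ratio implied by two prevalences; approximate, since the
    abstract's prevalences are rounded to one decimal place."""
    return (p_exposed / (1.0 - p_exposed)) / (p_control / (1.0 - p_control))

or_poi = odds_ratio(0.020, 0.004)  # premutation: POI vs controls
or_em = odds_ratio(0.007, 0.004)   # premutation: early menopause vs controls
```

The rounded prevalences give odds ratios of roughly 5.1 and 1.8, close to the reported 5.4 and 2.0.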

  19. Skin autofluorescence associates with vascular calcification in chronic kidney disease.

    PubMed

    Wang, Angela Yee-Moon; Wong, Chun-Kwok; Yau, Yat-Yin; Wong, Sharon; Chan, Iris Hiu-Shuen; Lam, Christopher Wai-Kei

    2014-08-01

    This study aims to evaluate the relationship between tissue advanced glycation end products, as reflected by skin autofluorescence, and vascular calcification in chronic kidney disease. Three hundred patients with stage 3 to 5 chronic kidney disease underwent multislice computed tomography to estimate total coronary artery calcium score (CACS) and had tissue advanced glycation end products assessed using a skin autofluorescence reader. Intact parathyroid hormone (P<0.001) displaced estimated glomerular filtration rate as the third most significant factor associated with skin autofluorescence after age (P<0.001) and diabetes mellitus (P<0.001) in multiple regression analysis. On univariate multinomial logistic regression analysis, every 1-U increase in skin autofluorescence was associated with a 7.43-fold (95% confidence interval, 3.59-15.37; P<0.001) increased odds of having CACS ≥400 compared with those with zero CACS. Skin autofluorescence retained significance in predicting CACS ≥400 (odds ratio, 3.63; 95% confidence interval, 1.44-9.18; P=0.006) when adjusting for age, sex, serum calcium, phosphate, albumin, C-reactive protein, lipids, blood pressure, estimated glomerular filtration rate, and intact parathyroid hormone, but marginally lost significance when additionally adjusting for diabetes mellitus (odds ratio, 2.23; 95% confidence interval, 0.81-6.14; P=0.1). The combination of diabetes mellitus and higher intact parathyroid hormone was associated with greater skin autofluorescence and CACS versus those without diabetes mellitus and with lower intact parathyroid hormone. Tissue advanced glycation end product, as reflected by skin autofluorescence, showed a significant novel association with vascular calcification in chronic kidney disease. These data suggest that increased tissue advanced glycation end products may contribute to vascular calcification in chronic kidney disease and diabetes mellitus and warrant further experimental investigation.
© 2014 American Heart Association, Inc.

  20. Incidence of skin tears in the extremities among elderly patients at a long-term medical facility in Japan: A prospective cohort study.

    PubMed

    Sanada, Hiromi; Nakagami, Gojiro; Koyano, Yuiko; Iizaka, Shinji; Sugama, Junko

    2015-08-01

    There is a lack of data from cohort studies on the incidence of skin tears among elderly populations in Asian countries. We estimated the cumulative incidence of skin tears and identified their risk factors. The present prospective cohort study was carried out at a long-term medical facility in Japan. Participants included patients (n = 368) aged 65 years or older receiving hospital care. The 3-month cumulative incidence of skin tears was estimated by identifying them using direct inspection of the extremities. To find risk factors for skin tear incidence, odds ratios and their 95% confidence intervals for skin tear development in association with these factors were estimated using logistic regression analyses. A total of 14 patients developed skin tears, a cumulative incidence of 3.8%. No patient with skin tears developed multiple wounds on their extremities. Half of the skin tears occurred on the outside of the right forearm, and just three skin tears were found on the lower legs. Multiple logistic analyses showed that pre-existing skin tears (odds ratio 15.42, 95% confidence interval 3.53-67.43, P < 0.001) and a 6-point decrease in the total score of the Braden Scale (odds ratio 0.10, 95% confidence interval 0.01-0.83, P = 0.033) were significantly associated with skin tear development. Patients with pre-existing skin tears and a low Braden Scale score have a higher risk of skin tear development over 3 months. These factors could be used to identify patients requiring preventive care for skin tears. © 2014 Japan Geriatrics Society.

  1. Economic Analysis of Anatomic Plating Versus Tubular Plating for the Treatment of Fibula Fractures.

    PubMed

    Chang, Gerard; Bhat, Suneel B; Raikin, Steven M; Kane, Justin M; Kay, Andrew; Ahmad, Jamal; Pedowitz, David I; Krieg, James

    2018-03-01

    Ankle fractures are among the most common injuries requiring operative management. Implant choices include one-third tubular plates and anatomically precontoured plates. Although cadaveric studies have not revealed biomechanical differences between various plate constructs, there are substantial cost differences. This study sought to characterize the economic implications of implant choice. A retrospective review was undertaken of 201 consecutive patients with operatively treated OTA type 44B and 44C ankle fractures. A Nationwide Inpatient Sample query was performed to estimate the incidence of ankle fractures requiring fibular plating, and a Monte Carlo simulation was conducted with the estimated at-risk US population for associated plate-specific costs. The authors estimated an annual incidence of operatively treated ankle fractures in the United States of 59,029. The average cost was $90.86 (95% confidence interval, $90.84-$90.87) for a one-third tubular plate vs $746.97 (95% confidence interval, $746.55-$747.39) for an anatomic plate. Across the United States, use of only one-third tubular plating over anatomic plating would result in statistically significant savings of $38,729,517 (95% confidence interval, $38,704,773-$38,754,261; P<.0001). General use of one-third tubular plating instead of anatomic plating whenever possible for fibula fractures could result in cost savings of up to nearly $40 million annually in the United States. Unless clinically justifiable on a per-case basis, or until the advent of studies showing substantial clinical benefit, there currently is no reason for the increased expense from widespread use of anatomic plating for fractures amenable to one-third tubular plating. [Orthopedics. 2018; 41(2):e252-e256.]. Copyright 2018, SLACK Incorporated.
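The Monte Carlo cost comparison can be sketched as follows; the 1% relative cost noise and the simulation details are assumptions, since the abstract reports only the mean costs and the resulting savings:

```python
import random

random.seed(0)
ANNUAL_CASES = 59029                 # estimated US incidence (from the abstract)
TUBULAR, ANATOMIC = 90.86, 746.97    # mean per-plate costs in USD

def simulate_savings(trials=20000):
    """Monte Carlo distribution of national savings from universal
    one-third tubular plating; the 1% relative cost noise is a
    hypothetical stand-in for the paper's sampled cost distributions."""
    draws = []
    for _ in range(trials):
        t = random.gauss(TUBULAR, 0.01 * TUBULAR)
        a = random.gauss(ANATOMIC, 0.01 * ANATOMIC)
        draws.append(ANNUAL_CASES * (a - t))
    draws.sort()
    mean = sum(draws) / trials
    # 2.5th and 97.5th percentiles as a 95% interval
    return mean, (draws[int(0.025 * trials)], draws[int(0.975 * trials)])

mean_savings, ci95 = simulate_savings()
```

The simulated mean lands near $38.7 million, matching the deterministic product 59,029 × ($746.97 − $90.86); the width of the interval depends entirely on the assumed cost noise.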

  2. Pubic Hair Grooming Injuries Presenting to U.S. Emergency Departments

    PubMed Central

    Glass, Allison S.; Bagga, Herman S.; Tasian, Gregory E.; Fisher, Patrick B.; McCulloch, Charles E.; Blaschko, Sarah D.; McAninch, Jack W.; Breyer, Benjamin N.

    2013-01-01

    OBJECTIVE To describe the demographics and mechanism of genitourinary (GU) injuries related to pubic hair grooming in patients who present to U.S. emergency departments (EDs). MATERIALS AND METHODS The National Electronic Injury Surveillance System contains prospectively collected data from patients who present to EDs with consumer product-related injuries. The National Electronic Injury Surveillance System is a stratified probability sample, validated to provide national estimates of all patients who present to U.S. EDs with an injury. We reviewed the National Electronic Injury Surveillance System to identify incidents of GU injury related to pubic hair grooming for 2002–2010. The variables reviewed included age, race, gender, injury type, location (organ) of injury, hospital disposition, and grooming product. RESULTS From 2002 to 2010, an observed 335 actual ED visits for GU injury related to grooming products provided an estimated 11,704 incidents (95% confidence interval 8430–15,004). The number of incidents increased fivefold during that period, amounting to an estimated increase of 247 incidents annually (95% confidence interval 110–384, P = .001). Of the cohort, 56.7% were women. The mean age was 30.8 years (95% confidence interval 28.8–32.9). Shaving razors were implicated in 83% of the injuries. Laceration was the most common type of injury (36.6%). The most common site of injury was the external female genitalia (36.0%). Most injuries (97.3%) were treated within the ED, with subsequent patient discharge. CONCLUSION Most GU injuries that result from the use of grooming products are minor and involve the use of razors. The demographics of patients with GU injuries from grooming products largely paralleled observations about cultural grooming trends in the United States. PMID:23040729

  3. Pubic hair grooming injuries presenting to U.S. emergency departments.

    PubMed

    Glass, Allison S; Bagga, Herman S; Tasian, Gregory E; Fisher, Patrick B; McCulloch, Charles E; Blaschko, Sarah D; McAninch, Jack W; Breyer, Benjamin N

    2012-12-01

    To describe the demographics and mechanism of genitourinary (GU) injuries related to pubic hair grooming in patients who present to U.S. emergency departments (EDs). The National Electronic Injury Surveillance System contains prospectively collected data from patients who present to EDs with consumer product-related injuries. The National Electronic Injury Surveillance System is a stratified probability sample, validated to provide national estimates of all patients who present to U.S. EDs with an injury. We reviewed the National Electronic Injury Surveillance System to identify incidents of GU injury related to pubic hair grooming for 2002-2010. The variables reviewed included age, race, gender, injury type, location (organ) of injury, hospital disposition, and grooming product. From 2002 to 2010, 335 observed ED visits for GU injury related to grooming products yielded an estimated 11,704 incidents (95% confidence interval 8430-15,004). The number of incidents increased fivefold during that period, amounting to an estimated increase of 247 incidents annually (95% confidence interval 110-384, P = .001). Of the cohort, 56.7% were women. The mean age was 30.8 years (95% confidence interval 28.8-32.9). Shaving razors were implicated in 83% of the injuries. Laceration was the most common type of injury (36.6%). The most common site of injury was the external female genitalia (36.0%). Most injuries (97.3%) were treated within the ED, with subsequent patient discharge. Most GU injuries that result from the use of grooming products are minor and involve the use of razors. The demographics of patients with GU injuries from grooming products largely paralleled observations about cultural grooming trends in the United States. Published by Elsevier Inc.

  4. Provider use of a participatory decision-making style with youth and caregivers and satisfaction with pediatric asthma visits.

    PubMed

    Sleath, Betsy; Carpenter, Delesha M; Coyne, Imelda; Davis, Scott A; Hayes Watson, Claire; Loughlin, Ceila E; Garcia, Nacire; Reuland, Daniel S; Tudor, Gail E

    2018-01-01

    We conducted a randomized controlled trial to test the effectiveness of an asthma question prompt list with video intervention to engage the youth during clinic visits. We examined whether the intervention was associated with 1) providers including youth and caregiver inputs more into asthma treatment regimens, 2) youth and caregivers rating providers as using more of a participatory decision-making style, and 3) youth and caregivers being more satisfied with visits. English- or Spanish-speaking youth aged 11-17 years with persistent asthma and their caregivers were recruited from four pediatric clinics and randomized to the intervention or usual care groups. The youth in the intervention group watched the video with their caregivers on an iPad and completed a one-page asthma question prompt list before their clinic visits. All visits were audiotaped. Generalized estimating equations were used to analyze the data. Forty providers and their patients (n=359) participated in this study. Providers included youth input into the asthma management treatment regimens during 2.5% of visits and caregiver input during 3.3% of visits. The youth in the intervention group were significantly more likely to rate their providers as using more of a participatory decision-making style (odds ratio=1.7, 95% confidence interval=1.1, 2.5). White caregivers were significantly more likely to rate the providers as more participatory (odds ratio=2.3, 95% confidence interval=1.2, 4.4). Youth (beta=4.9, 95% confidence interval=3.3, 6.5) and caregivers (beta=7.5, 95% confidence interval=3.1, 12.0) who rated their providers as being more participatory were significantly more satisfied with their visits. Youth (beta=-1.9, 95% confidence interval=-3.4, -0.4) and caregivers (beta=-8.8, 95% confidence interval=-16.2, -1.3) who spoke Spanish at home were less satisfied with visits. The intervention did not increase the inclusion of youth and caregiver inputs into asthma treatment regimens. 
However, it did increase youths' perception of their providers' participatory decision-making style, which in turn was associated with greater satisfaction with the visits.

  5. Association of CKD with Outcomes Among Patients Undergoing Transcatheter Aortic Valve Implantation

    PubMed Central

    Kaier, Klaus; Kaleschke, Gerrit; Gebauer, Katrin; Meyborg, Matthias; Malyar, Nasser M.; Freisinger, Eva; Baumgartner, Helmut; Reinecke, Holger; Reinöhl, Jochen

    2017-01-01

    Background and objectives Despite the multiple depicted associations of CKD with reduced cardiovascular and overall prognoses, the association of CKD with outcome of patients undergoing transcatheter aortic valve implantation has still not been well described. Design, setting, participants, & measurements Data from all hospitalized patients who underwent transcatheter aortic valve implantation procedures between January 1, 2010 and December 31, 2013 in Germany were evaluated regarding influence of CKD, even in the earlier stages, on morbidity, in-hospital outcomes, and costs. Results A total of 28,716 patients were treated with transcatheter aortic valve implantation. A total of 11,189 (39.0%) suffered from CKD. Patients with CKD were predominantly women; had higher rates of comorbidities, such as coronary artery disease, heart failure at New York Heart Association 3/4, peripheral artery disease, and diabetes; and had a 1.3-fold higher estimated logistic European System for Cardiac Operative Risk Evaluation value. In-hospital mortality was independently associated with CKD stage ≥3 (up to odds ratio, 1.71; 95% confidence interval, 1.35 to 2.17; P<0.05), bleeding was independently associated with CKD stage ≥4 (up to odds ratio, 1.82; 95% confidence interval, 1.47 to 2.24; P<0.001), and AKI was independently associated with CKD stages 3 (odds ratio, 1.83; 95% confidence interval, 1.62 to 2.06) and 4 (odds ratio, 2.33; 95% confidence interval, 1.92 to 2.83; both P<0.001). The stroke risk, in contrast, was lower for patients with CKD stages 4 (odds ratio, 0.23; 95% confidence interval, 0.16 to 0.33) and 5 (odds ratio, 0.24; 95% confidence interval, 0.15 to 0.39; both P<0.001). Lengths of hospital stay were, on average, 1.2-fold longer, whereas reimbursements were, on average, only 1.03-fold higher in patients who suffered from CKD. 
Conclusions This analysis illustrates for the first time on a nationwide basis the association of CKD with adverse outcomes in patients who underwent transcatheter aortic valve implantation. Thus, classification of CKD stages before transcatheter aortic valve implantation is important for appropriate risk stratification. PMID:28289067

  6. Association of CKD with Outcomes Among Patients Undergoing Transcatheter Aortic Valve Implantation.

    PubMed

    Lüders, Florian; Kaier, Klaus; Kaleschke, Gerrit; Gebauer, Katrin; Meyborg, Matthias; Malyar, Nasser M; Freisinger, Eva; Baumgartner, Helmut; Reinecke, Holger; Reinöhl, Jochen

    2017-05-08

    Despite the multiple depicted associations of CKD with reduced cardiovascular and overall prognoses, the association of CKD with outcome of patients undergoing transcatheter aortic valve implantation has still not been well described. Data from all hospitalized patients who underwent transcatheter aortic valve implantation procedures between January 1, 2010 and December 31, 2013 in Germany were evaluated regarding influence of CKD, even in the earlier stages, on morbidity, in-hospital outcomes, and costs. A total of 28,716 patients were treated with transcatheter aortic valve implantation. A total of 11,189 (39.0%) suffered from CKD. Patients with CKD were predominantly women; had higher rates of comorbidities, such as coronary artery disease, heart failure at New York Heart Association 3/4, peripheral artery disease, and diabetes; and had a 1.3-fold higher estimated logistic European System for Cardiac Operative Risk Evaluation value. In-hospital mortality was independently associated with CKD stage ≥3 (up to odds ratio, 1.71; 95% confidence interval, 1.35 to 2.17; P <0.05), bleeding was independently associated with CKD stage ≥4 (up to odds ratio, 1.82; 95% confidence interval, 1.47 to 2.24; P <0.001), and AKI was independently associated with CKD stages 3 (odds ratio, 1.83; 95% confidence interval, 1.62 to 2.06) and 4 (odds ratio, 2.33; 95% confidence interval, 1.92 to 2.83; both P <0.001). The stroke risk, in contrast, was lower for patients with CKD stages 4 (odds ratio, 0.23; 95% confidence interval, 0.16 to 0.33) and 5 (odds ratio, 0.24; 95% confidence interval, 0.15 to 0.39; both P <0.001). Lengths of hospital stay were, on average, 1.2-fold longer, whereas reimbursements were, on average, only 1.03-fold higher in patients who suffered from CKD. This analysis illustrates for the first time on a nationwide basis the association of CKD with adverse outcomes in patients who underwent transcatheter aortic valve implantation. 
Thus, classification of CKD stages before transcatheter aortic valve implantation is important for appropriate risk stratification. Copyright © 2017 by the American Society of Nephrology.

  7. Empirical likelihood-based confidence intervals for mean medical cost with censored data.

    PubMed

    Jeyarajah, Jenny; Qin, Gengsheng

    2017-11-10

    In this paper, we propose empirical likelihood methods based on influence function and jackknife techniques for constructing confidence intervals for mean medical cost with censored data. We conduct a simulation study to compare the coverage probabilities and interval lengths of our proposed confidence intervals with that of the existing normal approximation-based confidence intervals and bootstrap confidence intervals. The proposed methods have better finite-sample performances than existing methods. Finally, we illustrate our proposed methods with a relevant example. Copyright © 2017 John Wiley & Sons, Ltd.
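    The full influence-function empirical likelihood machinery is beyond an abstract, but the jackknife ingredient it draws on can be sketched for the simplest uncensored case, a mean. The function name and data are illustrative, not from the paper:

    ```python
    import math

    def jackknife_ci_mean(x, z=1.96):
        """Approximate 95% CI for the sample mean via the jackknife:
        recompute the mean with each observation left out, then apply
        var_jack = (n-1)/n * sum((loo_i - loo_bar)**2)."""
        n = len(x)
        total = sum(x)
        theta = total / n
        loo = [(total - xi) / (n - 1) for xi in x]  # leave-one-out means
        loo_bar = sum(loo) / n
        var_jack = (n - 1) / n * sum((l - loo_bar) ** 2 for l in loo)
        se = math.sqrt(var_jack)
        return theta - z * se, theta + z * se
    ```

    For the mean, the jackknife variance reduces to the familiar s²/n, so this interval coincides with the normal-approximation CI; the jackknife's value is that the same recipe applies to statistics with no closed-form variance.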

  8. Classical and Bayesian Seismic Yield Estimation: The 1998 Indian and Pakistani Tests

    NASA Astrophysics Data System (ADS)

    Shumway, R. H.

    2001-10-01

    The nuclear tests in May, 1998, in India and Pakistan have stimulated a renewed interest in yield estimation, based on limited data from uncalibrated test sites. We study here the problem of estimating yields using classical and Bayesian methods developed by Shumway (1992), utilizing calibration data from the Semipalatinsk test site and measured magnitudes for the 1998 Indian and Pakistani tests given by Murphy (1998). Calibration is done using multivariate classical or Bayesian linear regression, depending on the availability of measured magnitude-yield data and prior information. Confidence intervals for the classical approach are derived applying an extension of Fieller's method suggested by Brown (1982). In the case where prior information is available, the posterior predictive magnitude densities are inverted to give posterior intervals for yield. Intervals obtained using the joint distribution of magnitudes are comparable to the single-magnitude estimates produced by Murphy (1998) and reinforce the conclusion that the announced yields of the Indian and Pakistani tests were too high.

  9. Classical and Bayesian Seismic Yield Estimation: The 1998 Indian and Pakistani Tests

    NASA Astrophysics Data System (ADS)

    Shumway, R. H.

    The nuclear tests in May, 1998, in India and Pakistan have stimulated a renewed interest in yield estimation, based on limited data from uncalibrated test sites. We study here the problem of estimating yields using classical and Bayesian methods developed by Shumway (1992), utilizing calibration data from the Semipalatinsk test site and measured magnitudes for the 1998 Indian and Pakistani tests given by Murphy (1998). Calibration is done using multivariate classical or Bayesian linear regression, depending on the availability of measured magnitude-yield data and prior information. Confidence intervals for the classical approach are derived applying an extension of Fieller's method suggested by Brown (1982). In the case where prior information is available, the posterior predictive magnitude densities are inverted to give posterior intervals for yield. Intervals obtained using the joint distribution of magnitudes are comparable to the single-magnitude estimates produced by Murphy (1998) and reinforce the conclusion that the announced yields of the Indian and Pakistani tests were too high.

  10. Systemic antifungal therapy for tinea capitis in children: An abridged Cochrane Review.

    PubMed

    Chen, Xiaomei; Jiang, Xia; Yang, Ming; Bennett, Cathy; González, Urbà; Lin, Xiufang; Hua, Xia; Xue, Siliang; Zhang, Min

    2017-02-01

    The comparative efficacy and safety profiles of systemic antifungal drugs for tinea capitis in children remain unclear. We sought to assess the effects of systemic antifungal drugs for tinea capitis in children. We used standard Cochrane methodological procedures. We included 25 randomized controlled trials with 4449 participants. Terbinafine and griseofulvin had similar effects for children with mixed Trichophyton and Microsporum infections (risk ratio 1.08, 95% confidence interval 0.94-1.24). Terbinafine was better than griseofulvin for complete cure of T tonsurans infections (risk ratio 1.47, 95% confidence interval 1.22-1.77); griseofulvin was better than terbinafine for complete cure of infections caused solely by Microsporum species (risk ratio 0.68, 95% confidence interval 0.53-0.86). Compared with griseofulvin or terbinafine, itraconazole and fluconazole had similar effects against Trichophyton infections. All included studies were at unclear or high risk of bias. Lower quality evidence resulted in a lower confidence in the estimate of effect. Significant clinical heterogeneity existed across studies. Griseofulvin or terbinafine are both effective; terbinafine is more effective for T tonsurans and griseofulvin for M canis infections. Itraconazole and fluconazole are alternative but not optimal choices for Trichophyton infections. Optimal regimens of antifungal agents need further studies. Copyright © 2016 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.

  11. Advances in the meta-analysis of heterogeneous clinical trials II: The quality effects model.

    PubMed

    Doi, Suhail A R; Barendregt, Jan J; Khan, Shahjahan; Thalib, Lukman; Williams, Gail M

    2015-11-01

    This article examines the performance of the updated quality effects (QE) estimator for meta-analysis of heterogeneous studies. It is shown that this approach leads to a decreased mean squared error (MSE) of the estimator while maintaining the nominal level of coverage probability of the confidence interval. Extensive simulation studies confirm that this approach leads to the maintenance of the correct coverage probability of the confidence interval, regardless of the level of heterogeneity, as well as a lower observed variance compared to the random effects (RE) model. The QE model is robust to subjectivity in quality assessment down to completely random entry, in which case its MSE equals that of the RE estimator. When the proposed QE method is applied to a meta-analysis of magnesium for myocardial infarction data, the pooled mortality odds ratio (OR) becomes 0.81 (95% CI 0.61-1.08) which favors the larger studies but also reflects the increased uncertainty around the pooled estimate. In comparison, under the RE model, the pooled mortality OR is 0.71 (95% CI 0.57-0.89) which is less conservative than that of the QE results. The new estimation method has been implemented into the free meta-analysis software MetaXL which allows comparison of alternative estimators and can be downloaded from www.epigear.com. Copyright © 2015 Elsevier Inc. All rights reserved.
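    The quality effects estimator reweights studies by quality scores that the abstract does not fully specify, so only the standard inverse-variance pool it modifies can be sketched here as a baseline. The function name and inputs (log-scale variances per study) are illustrative:

    ```python
    import math

    def pooled_or_fixed(ors, variances, z=1.96):
        """Plain fixed-effect inverse-variance pooling of odds ratios on
        the log scale, with a 95% CI. The QE model of the paper adjusts
        these inverse-variance weights by study quality, which is not
        reproduced here."""
        w = [1.0 / v for v in variances]
        log_pool = sum(wi * math.log(o) for wi, o in zip(w, ors)) / sum(w)
        se = math.sqrt(1.0 / sum(w))
        return (math.exp(log_pool),
                math.exp(log_pool - z * se),
                math.exp(log_pool + z * se))
    ```

    The RE model widens these weights by adding a between-study variance term; the QE model instead redistributes weight toward higher-quality studies, which is how it stays closer to the larger studies in the magnesium example.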

  12. Estimation of the Standardized Risk Difference and Ratio in a Competing Risks Framework: Application to Injection Drug Use and Progression to AIDS After Initiation of Antiretroviral Therapy

    PubMed Central

    Cole, Stephen R.; Lau, Bryan; Eron, Joseph J.; Brookhart, M. Alan; Kitahata, Mari M.; Martin, Jeffrey N.; Mathews, William C.; Mugavero, Michael J.; Cole, Stephen R.; Brookhart, M. Alan; Lau, Bryan; Eron, Joseph J.; Kitahata, Mari M.; Martin, Jeffrey N.; Mathews, William C.; Mugavero, Michael J.

    2015-01-01

    There are few published examples of absolute risk estimated from epidemiologic data subject to censoring and competing risks with adjustment for multiple confounders. We present an example estimating the effect of injection drug use on 6-year risk of acquired immunodeficiency syndrome (AIDS) after initiation of combination antiretroviral therapy between 1998 and 2012 in an 8-site US cohort study with death before AIDS as a competing risk. We estimate the risk standardized to the total study sample by combining inverse probability weights with the cumulative incidence function; estimates of precision are obtained by bootstrap. In 7,182 patients (83% male, 33% African American, median age of 38 years), we observed 6-year standardized AIDS risks of 16.75% among 1,143 injection drug users and 12.08% among 6,039 nonusers, yielding a standardized risk difference of 4.68 (95% confidence interval: 1.27, 8.08) and a standardized risk ratio of 1.39 (95% confidence interval: 1.12, 1.72). Results may be sensitive to the assumptions of exposure-version irrelevance, no measurement bias, and no unmeasured confounding. These limitations suggest that results be replicated with refined measurements of injection drug use. Nevertheless, estimating the standardized risk difference and ratio is straightforward, and injection drug use appears to increase the risk of AIDS. PMID:24966220
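    For intuition about the two reported contrasts, crude (unstandardized) risk difference and risk ratio CIs from a 2×2 table can be sketched in closed form. This is deliberately not the paper's method, which standardizes via inverse probability weights and obtains intervals by bootstrap; the function and its counts are hypothetical:

    ```python
    import math

    def risk_diff_ratio_ci(a, n1, b, n0, z=1.96):
        """Crude risk difference and risk ratio with Wald-type 95% CIs,
        from a events among n1 exposed and b events among n0 unexposed.
        Illustrative only; does not handle censoring, competing risks,
        or confounding as the study's IPW estimator does."""
        p1, p0 = a / n1, b / n0
        rd = p1 - p0
        se_rd = math.sqrt(p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0)
        rr = p1 / p0
        se_log_rr = math.sqrt((1 - p1) / a + (1 - p0) / b)
        return ((rd - z * se_rd, rd + z * se_rd),
                (rr * math.exp(-z * se_log_rr), rr * math.exp(z * se_log_rr)))
    ```

    Note the risk ratio interval is built on the log scale and back-transformed, which is why such intervals (like the reported 1.12 to 1.72) are asymmetric around the point estimate.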

  13. Can insertion length for a double-lumen endobronchial tube be predicted?

    PubMed

    Dyer, R A; Heijke, S A; Russell, W J; Bloch, M B; James, M F

    2000-12-01

    It has been suggested that the appropriate length of insertion for double-lumen tubes can be estimated by external measurement. This study examined the accuracy of external measurement in estimating the actual length of insertion required in 130 patients. It also examined the relationship between the length inserted and the patient's height in 126 patients and their weight in 125 patients. Although there was a fair correlation between the measured external length and the final inserted length (r = 0.61), the 95% confidence intervals of slope and intercept allowed a large variation and the prediction was too wide to be clinically useful. Height was reasonably well correlated with the final length (r = 0.51) but an equally wide 95% confidence interval rendered it of little clinical value. There was no correlation between weight and final tube length. It is concluded that external measurement alone is not adequate to predict a clinically acceptable position of the double-lumen tube.
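    The study reports CIs for the regression slope and intercept, which require the raw data; a standard way to attach an interval to the reported correlation itself is the Fisher z-transform (whether the authors used it is an assumption, and the function name is illustrative):

    ```python
    import math

    def fisher_r_ci(r, n, z=1.96):
        """Approximate 95% CI for a Pearson correlation r from n pairs,
        via the Fisher z-transform: atanh(r) is roughly normal with
        standard error 1/sqrt(n-3)."""
        zr = math.atanh(r)
        se = 1.0 / math.sqrt(n - 3)
        return math.tanh(zr - z * se), math.tanh(zr + z * se)
    ```

    For r = 0.61 with n = 130 this gives roughly (0.49, 0.71): a clearly nonzero correlation whose predictive interval for an individual patient is nonetheless wide, consistent with the authors' conclusion.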

  14. Can 3-dimensional power Doppler indices improve the prenatal diagnosis of a potentially morbidly adherent placenta in patients with placenta previa?

    PubMed

    Haidar, Ziad A; Papanna, Ramesha; Sibai, Baha M; Tatevian, Nina; Viteri, Oscar A; Vowels, Patricia C; Blackwell, Sean C; Moise, Kenneth J

    2017-08-01

    Traditionally, 2-dimensional ultrasound parameters have been used for the diagnosis of a suspected morbidly adherent placenta previa. More objective techniques have not yet been well studied. The objective of the study was to determine the ability of prenatal 3-dimensional power Doppler analysis of flow and vascular indices to predict the morbidly adherent placenta objectively. A prospective cohort study was performed in women between 28 and 32 gestational weeks with known placenta previa. Patients underwent a two-dimensional gray-scale ultrasound that determined management decisions. 3-Dimensional power Doppler volumes were obtained during the same examination and vascular, flow, and vascular flow indices were calculated after manual tracing of the viewed placenta in the sweep; the obstetricians were blinded to these data. Morbidly adherent placenta was confirmed by histology. Severe morbidly adherent placenta was defined as increta/percreta on histology, blood loss >2000 mL, and >2 units of PRBC transfused. Sensitivities, specificities, predictive values, and likelihood ratios were calculated. Student t and χ2 tests, logistic regression, receiver-operating characteristic curves, and intra- and interrater agreements using Kappa statistics were performed. 
The following results were found: (1) 50 women were studied: 23 had morbidly adherent placenta, of which 12 (52.2%) were severe morbidly adherent placenta; (2) 2-dimensional parameters diagnosed morbidly adherent placenta with a sensitivity of 82.6% (95% confidence interval, 60.4-94.2), a specificity of 88.9% (95% confidence interval, 69.7-97.1), a positive predictive value of 86.3% (95% confidence interval, 64.0-96.4), a negative predictive value of 85.7% (95% confidence interval, 66.4-95.3), a positive likelihood ratio of 7.4 (95% confidence interval, 2.5-21.9), and a negative likelihood ratio of 0.2 (95% confidence interval, 0.08-0.48); (3) mean values of the vascular index (32.8 ± 7.4) and the vascular flow index (14.2 ± 3.8) were higher in morbidly adherent placenta (P < .001); (4) area under the receiver-operating characteristic curve for the vascular and vascular flow indices were 0.99 and 0.97, respectively; (5) the vascular index ≥21 predicted morbidly adherent placenta with a sensitivity and a specificity of 95% (95% confidence interval, 88.2-96.9) and 91%, respectively (95% confidence interval, 87.5-92.4), 92% positive predictive value (95% confidence interval, 85.5-94.3), 90% negative predictive value (95% confidence interval, 79.9-95.3), positive likelihood ratio of 10.55 (95% confidence interval, 7.06-12.75), and negative likelihood ratio of 0.05 (95% confidence interval, 0.03-0.13); and (6) for the severe morbidly adherent placenta, 2-dimensional ultrasound had a sensitivity of 33.3% (95% confidence interval, 11.3-64.6), a specificity of 81.8% (95% confidence interval, 47.8-96.8), a positive predictive value of 66.7% (95% confidence interval, 24.1-94.1), a negative predictive value of 52.9% (95% confidence interval, 28.5-76.1), a positive likelihood ratio of 1.83 (95% confidence interval, 0.41-8.11), and a negative likelihood ratio of 0.81 (95% confidence interval, 0.52-1.26). 
A vascular index ≥31 predicted the diagnosis of a severe morbidly adherent placenta with a 100% sensitivity (95% confidence interval, 72-100), a 90% specificity (95% confidence interval, 81.7-93.8), an 88% positive predictive value (95% confidence interval, 55.0-91.3), a 100% negative predictive value (95% confidence interval, 90.9-100), a positive likelihood ratio of 10.0 (95% confidence interval, 3.93-16.13), and a negative likelihood ratio of 0 (95% confidence interval, 0-0.34). Intrarater and interrater agreements were 94% (P < .001) and 93% (P < .001), respectively. The vascular index accurately predicts the morbidly adherent placenta in patients with placenta previa. In addition, 3-dimensional power Doppler vascular and vascular flow indices were more predictive of severe cases of morbidly adherent placenta compared with 2-dimensional ultrasound. This objective technique may limit the variations in diagnosing morbidly adherent placenta because of the subjectivity of 2-dimensional ultrasound interpretations. Copyright © 2017 Elsevier Inc. All rights reserved.
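    The battery of test metrics above all derive from one 2×2 table. A sketch of the point estimates, with counts reconstructed from the reported percentages (23 women with and 27 without morbidly adherent placenta; sensitivity 82.6% implies 19/23, specificity 88.9% implies 24/27 — a reconstruction, not stated counts):

    ```python
    def diagnostic_metrics(tp, fp, fn, tn):
        """Point estimates of sensitivity, specificity, predictive
        values, and likelihood ratios from a 2x2 diagnostic table.
        CIs are omitted; the study reports them but does not state the
        interval method."""
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        return {
            "sensitivity": sens,
            "specificity": spec,
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
            "lr_pos": sens / (1 - spec),
            "lr_neg": (1 - sens) / spec,
        }
    ```

    With the reconstructed counts, `diagnostic_metrics(19, 3, 4, 24)` reproduces the abstract's 2-dimensional ultrasound figures (sensitivity 82.6%, specificity 88.9%, PPV 86.3%, LR+ 7.4), which is a useful consistency check on the reported numbers.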

  15. Characterization and Uncertainty Analysis of a Reference Pressure Measurement System for Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Amer, Tahani; Tripp, John; Tcheng, Ping; Burkett, Cecil; Sealey, Bradley

    2004-01-01

    This paper presents the calibration results and uncertainty analysis of a high-precision reference pressure measurement system currently used in wind tunnels at the NASA Langley Research Center (LaRC). Sensors, calibration standards, and measurement instruments are subject to errors due to aging, drift with time, environmental effects, transportation, the mathematical model, the calibration experimental design, and other factors. Errors occur at every link in the chain of measurements and data reduction from the sensor to the final computed results. At each link of the chain, bias and precision uncertainties must be separately estimated for facility use, and are combined to produce overall calibration and prediction confidence intervals for the instrument, typically at a 95% confidence level. The uncertainty analysis and calibration experimental designs used herein, based on techniques developed at LaRC, employ replicated experimental designs for efficiency, separate estimation of bias and precision uncertainties, and detection of significant parameter drift with time. Final results are presented, including calibration confidence intervals and prediction intervals given as functions of the applied inputs rather than as a fixed percentage of the full-scale value. System uncertainties are propagated from the initial reference pressure standard to the calibrated instrument as a working standard in the facility. Among the several parameters that can affect the overall results are operating temperature, atmospheric pressure, humidity, and facility vibration. Effects of factors such as initial zeroing and temperature are investigated. The effects of the identified parameters on system performance and accuracy are discussed.
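    The combination step described above — separately estimated bias and precision uncertainties merged into an overall interval — is commonly a root-sum-square with a coverage factor near 2 for ~95% confidence. A generic sketch only; LaRC's actual procedure, with replicated designs and drift terms, is not reproduced:

    ```python
    import math

    def combined_uncertainty(bias, precision, k=2.0):
        """Root-sum-square combination of a bias uncertainty and a
        precision uncertainty, expanded by coverage factor k (k ~ 2
        corresponds to roughly 95% confidence for a normal error)."""
        return k * math.sqrt(bias ** 2 + precision ** 2)
    ```

    For example, bias and precision components of 3 and 4 units combine to a standard uncertainty of 5 and an expanded (k = 2) uncertainty of 10.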

  16. A new framework of statistical inferences based on the valid joint sampling distribution of the observed counts in an incomplete contingency table.

    PubMed

    Tian, Guo-Liang; Li, Hui-Qiong

    2017-08-01

    Some existing confidence interval methods and hypothesis testing methods in the analysis of a contingency table with incomplete observations in both margins entirely depend on an underlying assumption that the sampling distribution of the observed counts is a product of independent multinomial/binomial distributions for complete and incomplete counts. However, it can be shown that this independency assumption is incorrect and can result in unreliable conclusions because of the under-estimation of the uncertainty. Therefore, the first objective of this paper is to derive the valid joint sampling distribution of the observed counts in a contingency table with incomplete observations in both margins. The second objective is to provide a new framework for analyzing incomplete contingency tables based on the derived joint sampling distribution of the observed counts by developing a Fisher scoring algorithm to calculate maximum likelihood estimates of parameters of interest, the bootstrap confidence interval methods, and the bootstrap testing hypothesis methods. We compare the differences between the valid sampling distribution and the sampling distribution under the independency assumption. Simulation studies showed that average/expected confidence-interval widths of parameters based on the sampling distribution under the independency assumption are shorter than those based on the new sampling distribution, yielding unrealistic results. A real data set is analyzed to illustrate the application of the new sampling distribution for incomplete contingency tables and the analysis results again confirm the conclusions obtained from the simulation studies.
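    The bootstrap interval idea the paper builds on can be sketched in its simplest nonparametric percentile form. This is a simplification: the paper's version resamples from the derived joint sampling distribution of the observed counts rather than from raw data, and the function name is illustrative:

    ```python
    import random

    def percentile_bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
        """Percentile-bootstrap CI: resample the data with replacement,
        recompute the statistic each time, and take empirical quantiles
        of the bootstrap replicates."""
        rng = random.Random(seed)
        n = len(data)
        reps = sorted(stat([rng.choice(data) for _ in range(n)])
                      for _ in range(n_boot))
        lo = reps[int((alpha / 2) * n_boot)]
        hi = reps[int((1 - alpha / 2) * n_boot) - 1]
        return lo, hi
    ```

    The paper's point is that what you resample from matters: resampling under the incorrect independency assumption produces intervals that are systematically too short.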

  17. Age and fecundability in a North American preconception cohort study.

    PubMed

    Wesselink, Amelia K; Rothman, Kenneth J; Hatch, Elizabeth E; Mikkelsen, Ellen M; Sørensen, Henrik T; Wise, Lauren A

    2017-12-01

    There is a well-documented decline in fertility treatment success with increasing female age; however, there are few preconception cohort studies that have examined female age and natural fertility. In addition, data on male age and fertility are inconsistent. Given the increasing number of couples who are attempting conception at older ages, a more detailed characterization of age-related fecundability in the general population is of great clinical utility. The purpose of this study was to examine the association between female and male age with fecundability. We conducted a web-based preconception cohort study of pregnancy planners from the United States and Canada. Participants were enrolled between June 2013 and July 2017. Eligible participants were 21-45 years old (female) or ≥21 years old (male) and had not been using fertility treatments. Couples were followed until pregnancy or for up to 12 menstrual cycles. We analyzed data from 2962 couples who had been trying to conceive for ≤3 cycles at study entry and reported no history of infertility. We used life-table methods to estimate the unadjusted cumulative pregnancy proportion at 6 and 12 cycles by female and male age. We used proportional probabilities regression models to estimate fecundability ratios, the per-cycle probability of conception for each age category relative to the referent (21-24 years old), and 95% confidence intervals. Among female patients, the unadjusted cumulative pregnancy proportion at 6 cycles of attempt time ranged from 62.0% (age 28-30 years) to 27.6% (age 40-45 years); the cumulative pregnancy proportion at 12 cycles of attempt time ranged from 79.3% (age 25-27 years old) to 55.5% (age 40-45 years old). Similar patterns were observed among male patients, although differences between age groups were smaller. 
After adjusting for potential confounders, we observed a nearly monotonic decline in fecundability with increasing female age, with the exception of 28-33 years, at which point fecundability was relatively stable. Fecundability ratios were 0.91 (95% confidence interval, 0.74-1.11) for ages 25-27, 0.88 (95% confidence interval, 0.72-1.08) for ages 28-30, 0.87 (95% confidence interval, 0.70-1.08) for ages 31-33, 0.82 (95% confidence interval, 0.64-1.05) for ages 34-36, 0.60 (95% confidence interval, 0.44-0.81) for ages 37-39, and 0.40 (95% confidence interval, 0.22-0.73) for ages 40-45, compared with the reference group (age, 21-24 years). The association was stronger among nulligravid women. Male age was not associated appreciably with fecundability after adjustment for female age, although the number of men >45 years old was small (n=37). In this preconception cohort study of North American pregnancy planners, increasing female age was associated with an approximately linear decline in fecundability. Although we found little association between male age and fecundability, the small number of men in our study >45 years old limited our ability to draw conclusions on fecundability in older men. Copyright © 2017 Elsevier Inc. All rights reserved.
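    The life-table cumulative pregnancy proportion used above can be sketched for the idealized case without dropout: one minus the product of per-cycle non-conception probabilities. The study's estimator also handles censoring, which this illustrative function ignores:

    ```python
    def cumulative_pregnancy(per_cycle_probs):
        """Cumulative proportion pregnant after the given cycles, from
        per-cycle conception probabilities: 1 - prod(1 - p_i).
        No censoring is handled, unlike a true life-table estimator."""
        surviving = 1.0  # proportion still not pregnant
        for p in per_cycle_probs:
            surviving *= 1.0 - p
        return 1.0 - surviving
    ```

    For instance, a constant 20% per-cycle probability yields about a 74% cumulative proportion by 6 cycles, illustrating how modest per-cycle fecundability compounds over attempt time.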

  18. Variance estimation when using inverse probability of treatment weighting (IPTW) with survival analysis.

    PubMed

    Austin, Peter C

    2016-12-30

    Propensity score methods are used to reduce the effects of observed confounding when using observational data to estimate the effects of treatments or exposures. A popular method of using the propensity score is inverse probability of treatment weighting (IPTW). When using this method, a weight is calculated for each subject that is equal to the inverse of the probability of receiving the treatment that was actually received. These weights are then incorporated into the analyses to minimize the effects of observed confounding. Previous research has found that these methods result in unbiased estimation when estimating the effect of treatment on survival outcomes. However, conventional methods of variance estimation were shown to result in biased estimates of standard error. In this study, we conducted an extensive set of Monte Carlo simulations to examine different methods of variance estimation when using a weighted Cox proportional hazards model to estimate the effect of treatment. We considered three variance estimation methods: (i) a naïve model-based variance estimator; (ii) a robust sandwich-type variance estimator; and (iii) a bootstrap variance estimator. We considered estimation of both the average treatment effect and the average treatment effect in the treated. We found that the use of a bootstrap estimator resulted in approximately correct estimates of standard errors and confidence intervals with the correct coverage rates. The other estimators resulted in biased estimates of standard errors and confidence intervals with incorrect coverage rates. Our simulations were informed by a case study examining the effect of statin prescribing on mortality. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
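    The bootstrap variance estimator the simulations favour can be sketched outside the Cox setting. Assuming a simplified continuous outcome and records `(treated, outcome, ps)` with a known propensity score `ps` (all hypothetical names), the average treatment effect is an IPT-weighted mean difference and its standard error is the spread of that statistic over resamples:

    ```python
    import random
    import statistics

    def iptw_ate(sample):
        """IPT-weighted difference in mean outcome (average treatment
        effect) from records (treated, outcome, ps)."""
        num1 = den1 = num0 = den0 = 0.0
        for t, y, ps in sample:
            if t == 1:
                w = 1.0 / ps          # inverse probability of treatment
                num1 += w * y
                den1 += w
            else:
                w = 1.0 / (1.0 - ps)  # inverse probability of no treatment
                num0 += w * y
                den0 += w
        return num1 / den1 - num0 / den0

    def bootstrap_se(data, stat, n_boot=500, seed=0):
        """Bootstrap standard error: resample subjects with replacement
        and take the standard deviation of the recomputed statistic."""
        rng = random.Random(seed)
        n = len(data)
        reps = [stat([rng.choice(data) for _ in range(n)])
                for _ in range(n_boot)]
        return statistics.stdev(reps)
    ```

    Resampling whole subjects is what lets the bootstrap capture the estimation of the weights themselves, which the naïve model-based variance ignores — the mechanism behind the biased standard errors reported above.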

  19. Identifying types of domestic violence and its associated risk factors in a pregnant population in Kerman hospitals, Iran Republic.

    PubMed

    Salari, Zohreh; Nakhaee, Nouzar

    2008-01-01

    The objective of this study was to estimate the prevalence of different kinds of physical and emotional violence in an Iranian pregnant population and to examine its associated risk factors. This cross-sectional study was done from March through July 2005 in the 4 main hospitals of Kerman, Iran, which had maternity units. In total, 416 out of 460 women who were asked to participate agreed to be interviewed, a 90.4% response rate. All respondents were interviewed privately during the first 48 hours after delivery. The mean age (± SD) was 28.0 ± 5.6 years, and all were married. Most of the women were urban residents (89.2%), and the majority of them were multiparous (78.8%). Nearly 16% of mothers said the pregnancies were unintended. In total, 35% (95% confidence interval: 30%-40%) of women had experienced 1 or more episodes of emotional violence during the pregnancy inflicted by their husbands, and 106 women (25%; 95% confidence interval: 21%-30%) had experienced at least 1 episode of physical violence. The highest odds of domestic violence during pregnancy were associated with unintended pregnancies (odds ratio: 7.66; 95% confidence interval: 3.45-16.99) and multiparous pregnancies (odds ratio: 6.88; 95% confidence interval: 3.46-13.68). Considering the high prevalence of different types of domestic violence during pregnancy, it should be regarded as a priority for health policy experts in Kerman and possibly Iran.

  20. Each procedure matters: threshold for surgeon volume to minimize complications and decrease cost associated with adrenalectomy.

    PubMed

    Anderson, Kevin L; Thomas, Samantha M; Adam, Mohamed A; Pontius, Lauren N; Stang, Michael T; Scheri, Randall P; Roman, Sanziana A; Sosa, Julie A

    2018-01-01

    An association has been suggested between increasing surgeon volume and improved patient outcomes, but a threshold has not been defined for what constitutes a "high-volume" adrenal surgeon. Adult patients who underwent adrenalectomy by an identifiable surgeon between 1998-2009 were selected from the Healthcare Cost and Utilization Project National Inpatient Sample. Logistic regression modeling with restricted cubic splines was utilized to estimate the association between annual surgeon volume and complication rates in order to identify a volume threshold. A total of 3,496 surgeons performed adrenalectomies on 6,712 patients; median annual surgeon volume was 1 case. After adjustment, the likelihood of experiencing a complication decreased with increasing annual surgeon volume up to 5.6 cases (95% confidence interval, 3.27-5.96). After adjustment, patients undergoing resection by low-volume surgeons (<6 cases/year) were more likely to experience complications (odds ratio 1.71, 95% confidence interval, 1.27-2.31, P = .005), have a longer hospital stay (relative risk 1.46, 95% confidence interval, 1.25-1.70, P = .003), and incur higher costs (+26.2%, 95% confidence interval, 12.6-39.9, P = .02). This study suggests that an annual threshold of surgeon volume (≥6 cases/year) is associated with improved patient outcomes and decreased hospital cost. This volume threshold has implications for quality improvement, surgical referral and reimbursement, and surgical training. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Pigmentation Traits, Sun Exposure, and Risk of Incident Vitiligo in Women.

    PubMed

    Dunlap, Rachel; Wu, Shaowei; Wilmer, Erin; Cho, Eunyoung; Li, Wen-Qing; Lajevardi, Newsha; Qureshi, Abrar

    2017-06-01

    Vitiligo is the most common cutaneous depigmentation disorder worldwide, yet little is known about specific risk factors for disease development. Using data from the Nurses' Health Study, a prospective cohort study of 51,337 white women, we examined the associations between (i) pigmentary traits and (ii) reactions to sun exposure and risk of incident vitiligo. Nurses' Health Study participants responded to a question about clinician-diagnosed vitiligo and year of diagnosis (2001 or before, 2002-2005, 2006-2009, 2010-2011, or 2012+). We used Cox proportional hazards regression models to estimate the multivariate-adjusted hazard ratios and 95% confidence intervals of incident vitiligo associated with exposure variables, adjusting for potential confounders. We documented 271 cases of incident vitiligo over 835,594 person-years. Vitiligo risk was higher in women who had at least one mole larger than 3 mm in diameter on their left arms (hazard ratio = 1.37, 95% confidence interval = 1.02-1.83). Additionally, vitiligo risk was higher among women with better tanning ability (hazard ratio = 2.59, 95% confidence interval = 1.21-5.54) and in women who experienced at least one blistering sunburn (hazard ratio = 2.17, 95% confidence interval = 1.15-4.10). In this study, upper extremity moles, a higher ability to achieve a tan, and history of a blistering sunburn were associated with a higher risk of developing vitiligo in a population of white women. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  2. Local setting influences the quantity of household food waste in mid-sized South African towns.

    PubMed

    Chakona, Gamuchirai; Shackleton, Charlie M

    2017-01-01

    The world faces a food security challenge with approximately 868 million people undernourished and about two billion people suffering from the negative health consequences of micronutrient deficiencies. Yet, it is believed that at least 33% of food produced for human consumption is lost or wasted along the food chain. As food waste has a negative effect on food security, the present study sought to quantify household food waste along the rural-urban continuum in three South African mid-sized towns situated along an agro-ecological gradient. We quantified the types of foods and drinks that households threw away in the previous 48 hours and identified the causes of household food waste in the three sites. More households wasted prepared food (27%) than unprepared food (15%) and drinks (8%). However, households threw away greater quantities of unprepared food in the 48-hour recall period (268.6±610.1 g, 90% confidence interval: 175.5 to 361.7 g) compared to prepared food (121.0±132.4 g, 90% confidence interval: 100.8 to 141.3 g) and drinks (77.0±192.5 ml, 90% confidence interval: 47.7 to 106.4 ml). The estimated per capita food waste (5-10 kg of unprepared food waste, 3-4 kg of prepared food waste and 1-3 litres of drinks waste per person per year) overlaps with that estimated for other developing countries but is lower than in most developed countries. However, the estimated average amount of food waste per person per year for this study (12.35 kg) was higher than that estimated for developing countries (8.5 kg per person per year). Household food waste was mainly a result of consumer behavior concerning food preparation and storage. Integrated approaches are required to address this developmental issue affecting South African societies, which include promoting sound food management to decrease household food waste. Increased awareness and educational campaigns for household food waste reduction interventions are also discussed.

  3. Is motivation influenced by geomagnetic activity?

    PubMed

    Starbuck, S; Cornélissen, G; Halberg, F

    2002-01-01

    To eventually build a scientific bridge to religion by examining whether non-photic, non-thermic solar effects may influence (religious) motivation, invaluable yearly worldwide data on activities from 1950 to 1999 by Jehovah's Witnesses on behalf of their church were analyzed chronobiologically. The time structure (chronome) of these archives, insofar as it can be evaluated in yearly means for up to half a century, was assessed. Least squares spectra in a frequency range from one cycle in 42 to one in 2.1 years of data on the average number of hours per month spent in work for the church, available from 103 different geographic locations, as well as grand totals also including other sites, revealed a large peak at one cycle in about 21 years. The non-linear least squares fit of a model consisting of a linear trend and a cosine curve with a trial period of 21.0 years, numerically approximating that of the Hale cycle, validated the about 21.0-year component in about 70% of the data series, with the non-overlap of zero by the 95% confidence interval of the amplitude estimate. Estimates of MESOR (midline-estimating statistic of rhythm, a rhythm (or chronome) adjusted mean), amplitude and period were further regressed with geomagnetic latitude. The period estimate did not depend on geomagnetic latitude. The about 21.0-year amplitude tends to be larger at low and middle than at higher latitudes and the resolution of the about 21.0-year cycle, gauged by the width of 95% confidence intervals for the period and amplitude, is higher (the 95% confidence intervals are statistically significantly smaller) at higher than at lower latitudes. 
Near-matches of periods in solar activity and human motivation hint that the former may influence the latter, while the dependence on latitude constitutes evidence that geomagnetic activity may affect certain brain areas involved in motivation, just as it was earlier found that it is associated with effects on the electrocardiogram and anthropometry.
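
    The model described above (a linear trend plus a fitted cosine) is a standard cosinor-type least-squares fit. A minimal sketch, assuming a known trial period and using hypothetical names; the amplitude confidence intervals discussed in the abstract would require an additional variance calculation not shown here.

```python
import numpy as np

def cosinor_fit(t, y, period):
    """Least-squares fit of y ~ MESOR + trend*t + amplitude*cos(2*pi*t/period + phase).

    Linearised as M + c*t + A*cos(wt) + B*sin(wt), which makes the fit an
    ordinary linear least-squares problem; amplitude = hypot(A, B).
    """
    t, y = np.asarray(t, float), np.asarray(y, float)
    w = 2 * np.pi * t / period
    # Design matrix: intercept (MESOR), linear trend, cosine, sine.
    X = np.column_stack([np.ones_like(t), t, np.cos(w), np.sin(w)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    mesor, trend, a, b = beta
    return mesor, trend, np.hypot(a, b)
```

    With the period fixed, the fit is linear; the abstract's non-linear fit additionally treats the period as a free parameter.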

  4. Confidence intervals for distinguishing ordinal and disordinal interactions in multiple regression.

    PubMed

    Lee, Sunbok; Lei, Man-Kit; Brody, Gene H

    2015-06-01

    Distinguishing between ordinal and disordinal interaction in multiple regression is useful in testing many interesting theoretical hypotheses. Because the distinction is made based on the location of a crossover point of 2 simple regression lines, confidence intervals of the crossover point can be used to distinguish ordinal and disordinal interactions. This study examined 2 factors that need to be considered in constructing confidence intervals of the crossover point: (a) the assumption about the sampling distribution of the crossover point, and (b) the possibility of abnormally wide confidence intervals for the crossover point. A Monte Carlo simulation study was conducted to compare 6 different methods for constructing confidence intervals of the crossover point in terms of the coverage rate, the proportion of true values that fall to the left or right of the confidence intervals, and the average width of the confidence intervals. The methods include the reparameterization, delta, Fieller, basic bootstrap, percentile bootstrap, and bias-corrected accelerated bootstrap methods. The results of our Monte Carlo simulation study suggest that statistical inference using confidence intervals to distinguish ordinal and disordinal interaction requires sample sizes of more than 500 to provide sufficiently narrow confidence intervals to identify the location of the crossover point. (c) 2015 APA, all rights reserved.
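
    The crossover point itself has a closed form: for y = b0 + b1*x + b2*z + b3*x*z, the two simple regression lines (z = 0 and z = 1) intersect where b2 + b3*x = 0, i.e. at x* = -b2/b3. A minimal sketch of one of the six methods compared, the percentile bootstrap, with hypothetical names (the other five methods are not shown):

```python
import numpy as np

def crossover_ci(x, z, y, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for the crossover point in y = b0 + b1*x + b2*z + b3*x*z.

    The simple regression lines for z = 0 and z = 1 intersect at
    x* = -b2 / b3; the interaction is disordinal within the data range
    when the CI for x* overlaps the observed range of x.
    """
    rng = np.random.default_rng(seed)
    x, z, y = map(np.asarray, (x, z, y))
    n = len(y)

    def xstar(idx):
        X = np.column_stack([np.ones(len(idx)), x[idx], z[idx], x[idx] * z[idx]])
        b = np.linalg.lstsq(X, y[idx], rcond=None)[0]
        return -b[2] / b[3]

    boots = np.sort([xstar(rng.integers(0, n, n)) for _ in range(n_boot)])
    lo = boots[int(np.floor(alpha / 2 * n_boot))]
    hi = boots[int(np.ceil((1 - alpha / 2) * n_boot)) - 1]
    return xstar(np.arange(n)), (lo, hi)
```

    Because x* is a ratio of coefficients, its sampling distribution can be heavy-tailed when b3 is near zero, which is the source of the abnormally wide intervals the study warns about.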

  5. A hierarchical Bayesian GEV model for improving local and regional flood quantile estimates

    NASA Astrophysics Data System (ADS)

    Lima, Carlos H. R.; Lall, Upmanu; Troy, Tara; Devineni, Naresh

    2016-10-01

    We estimate local and regional Generalized Extreme Value (GEV) distribution parameters for flood frequency analysis in a multilevel, hierarchical Bayesian framework, to explicitly model and reduce uncertainties. As prior information for the model, we assume that the GEV location and scale parameters for each site come from independent log-normal distributions, whose mean parameter scales with the drainage area. From empirical and theoretical arguments, the shape parameter for each site is shrunk towards a common mean. Non-informative prior distributions are assumed for the hyperparameters and the MCMC method is used to sample from the joint posterior distribution. The model is tested using annual maximum series from 20 streamflow gauges located in an 83,000 km2 flood prone basin in Southeast Brazil. The results show a significant reduction in the uncertainty of flood quantile estimates relative to the traditional GEV model, particularly for sites with shorter records. For return periods within the range of the data (around 50 years), the Bayesian credible intervals for the flood quantiles tend to be narrower than the classical confidence limits based on the delta method. As the return period increases beyond the range of the data, the confidence limits from the delta method become unreliable and the Bayesian credible intervals provide a way to estimate satisfactory confidence bands for the flood quantiles considering parameter uncertainties and regional information. In order to evaluate the applicability of the proposed hierarchical Bayesian model for regional flood frequency analysis, we estimate flood quantiles for three randomly chosen out-of-sample sites and compare with classical estimates using the index flood method. 
The posterior distributions of the scaling law coefficients are used to define the predictive distributions of the GEV location and scale parameters for the out-of-sample sites given only their drainage areas and the posterior distribution of the average shape parameter is taken as the regional predictive distribution for this parameter. While the index flood method does not provide a straightforward way to consider the uncertainties in the index flood and in the regional parameters, the results obtained here show that the proposed Bayesian method is able to produce adequate credible intervals for flood quantiles that are in accordance with empirical estimates.
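
    The flood quantile (return level) for a return period T follows directly from the fitted GEV parameters via the standard GEV quantile function. A minimal sketch of that formula (the hierarchical Bayesian machinery itself is not reproduced here):

```python
import math

def gev_quantile(mu, sigma, xi, T):
    """Return level z_T for return period T years from GEV parameters.

    z_T = mu + (sigma/xi) * ((-ln(1 - 1/T))**(-xi) - 1) for xi != 0;
    the Gumbel limit mu - sigma*ln(-ln(1 - 1/T)) applies as xi -> 0.
    """
    y = -math.log(1.0 - 1.0 / T)   # -ln of the annual non-exceedance probability
    if abs(xi) < 1e-9:
        return mu - sigma * math.log(y)
    return mu + sigma / xi * (y ** (-xi) - 1.0)
```

    In the Bayesian setting, applying this function to each posterior draw of (mu, sigma, xi) yields a posterior distribution of z_T, whose percentiles give the credible intervals the abstract describes.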

  6. Effect of High Intensity Interval Training on Cardiac Function in Children with Obesity: A Randomised Controlled Trial.

    PubMed

    Ingul, Charlotte B; Dias, Katrin A; Tjonna, Arnt E; Follestad, Turid; Hosseini, Mansoureh S; Timilsina, Anita S; Hollekim-Strand, Siri M; Ro, Torstein B; Davies, Peter S W; Cain, Peter A; Leong, Gary M; Coombes, Jeff S

    2018-02-13

    High intensity interval training (HIIT) confers superior cardiovascular health benefits to moderate intensity continuous training (MICT) in adults and may be efficacious for improving diminished cardiac function in obese children. The aim of this study was to compare the effects of HIIT, MICT and nutrition advice interventions on resting left ventricular (LV) peak systolic tissue velocity (S') in obese children. Ninety-nine obese children were randomised into one of three 12-week interventions, 1) HIIT [n = 33, 4 × 4 min bouts at 85-95% maximum heart rate (HRmax), 3 times/week] and nutrition advice, 2) MICT [n = 32, 44 min at 60-70% HRmax, 3 times/week] and nutrition advice, and 3) nutrition advice only (nutrition) [n = 34]. Twelve weeks of HIIT and MICT were equally efficacious, but superior to nutrition, for normalising resting LV S' in children with obesity (estimated mean difference 1.0 cm/s, 95% confidence interval 0.5 to 1.6 cm/s, P < 0.001; estimated mean difference 0.7 cm/s, 95% confidence interval 0.2 to 1.3 cm/s, P = 0.010, respectively). Twelve weeks of HIIT and MICT were superior to nutrition advice only for improving resting LV systolic function in obese children. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Introduction to Sample Size Choice for Confidence Intervals Based on "t" Statistics

    ERIC Educational Resources Information Center

    Liu, Xiaofeng Steven; Loudermilk, Brandon; Simpson, Thomas

    2014-01-01

    Sample size can be chosen to achieve a specified width in a confidence interval. The probability of obtaining a narrow width given that the confidence interval includes the population parameter is defined as the power of the confidence interval, a concept unfamiliar to many practitioners. This article shows how to utilize the Statistical Analysis…
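
    For the simpler z-based (normal) approximation, the sample size needed to achieve a target interval width can be solved in closed form. A minimal sketch under that assumption (the article's t-based calculation, and its notion of the power of a confidence interval, would give somewhat larger n for small samples):

```python
import math
from statistics import NormalDist

def n_for_ci_width(sd, width, conf=0.95):
    """Smallest n giving a z-based CI for the mean no wider than `width`.

    Full width = 2 * z * sd / sqrt(n)  =>  n >= (2 * z * sd / width)**2.
    Uses the normal critical value z; with the exact t critical value
    the required n is slightly larger for small samples.
    """
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return math.ceil((2 * z * sd / width) ** 2)
```

    For example, with sd = 10 and a target full width of 2 at 95% confidence, the formula gives n = 385.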

  8. Bayesian forecasting and uncertainty quantifying of stream flows using Metropolis-Hastings Markov Chain Monte Carlo algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Hongrui; Wang, Cheng; Wang, Ying; Gao, Xiong; Yu, Chen

    2017-06-01

    This paper presents a Bayesian approach using the Metropolis-Hastings Markov Chain Monte Carlo algorithm and applies this method to daily river flow rate forecasting and uncertainty quantification for Zhujiachuan River, using data collected from Qiaotoubao Gage Station and 13 other gage stations in the Zhujiachuan watershed in China. The proposed method is also compared with conventional maximum likelihood estimation (MLE) for parameter estimation and quantification of associated uncertainties. While the Bayesian method performs similarly in estimating the mean value of daily flow rate, it outperforms the conventional MLE method in uncertainty quantification, providing a narrower reliable interval than the MLE confidence interval and thus a more precise estimate by using the related information from regional gage stations. The Bayesian MCMC method might therefore be more favorable for uncertainty analysis and risk management.
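
    A random-walk Metropolis-Hastings sampler of the kind described can be sketched compactly. This toy version samples a one-dimensional posterior for a mean flow rate with made-up data and a hypothetical conjugate-style prior; the paper's actual model and data are not reproduced.

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_iter=20000, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings sampler for a 1-D posterior.

    Proposes x' ~ Normal(x, step) and accepts with probability
    min(1, exp(log_post(x') - log_post(x))).
    """
    rng = np.random.default_rng(seed)
    x, out = x0, np.empty(n_iter)
    lp = log_post(x)
    for i in range(n_iter):
        prop = x + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis acceptance rule
            x, lp = prop, lp_prop
        out[i] = x
    return out

# Toy example: posterior of a mean flow rate with a Normal(0, sd=10) prior
# and a Normal(mu, 1) likelihood for five hypothetical daily flows.
flows = np.array([3.1, 2.7, 3.4, 2.9, 3.3])
logp = lambda m: -m**2 / 200 - 0.5 * np.sum((flows - m) ** 2)
draws = metropolis_hastings(logp, x0=0.0)[5000:]   # drop burn-in
lo, hi = np.percentile(draws, [2.5, 97.5])         # 95% credible interval
```

    The percentile band of the retained draws is the "reliable interval" the abstract compares against the MLE confidence interval.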

  9. Pulmonary disease in cystic fibrosis: assessment with chest CT at chest radiography dose levels.

    PubMed

    Ernst, Caroline W; Basten, Ines A; Ilsen, Bart; Buls, Nico; Van Gompel, Gert; De Wachter, Elke; Nieboer, Koenraad H; Verhelle, Filip; Malfroot, Anne; Coomans, Danny; De Maeseneer, Michel; de Mey, Johan

    2014-11-01

    To investigate a computed tomographic (CT) protocol with iterative reconstruction at conventional radiography dose levels for the assessment of structural lung abnormalities in patients with cystic fibrosis (CF). In this institutional review board-approved study, 38 patients with CF (age range, 6-58 years; 21 patients <18 years and 17 patients >18 years) underwent investigative CT (at minimal exposure settings combined with iterative reconstruction) as a replacement of yearly follow-up posteroanterior chest radiography. Verbal informed consent was obtained from all patients or their parents. CT images were randomized and rated independently by two radiologists with use of the Bhalla scoring system. In addition, mosaic perfusion was evaluated. As reference, the previous available conventional chest CT scan was used. Differences in Bhalla scores were assessed with the χ² test and intraclass correlation coefficients (ICCs). Radiation doses for CT and radiography were assessed for adults (>18 years) and children (<18 years) separately by using technical dose descriptors and estimated effective dose. Differences in dose were assessed with the Mann-Whitney U test. The median effective dose for the investigative protocol was 0.04 mSv (95% confidence interval [CI]: 0.034 mSv, 0.10 mSv) for children and 0.05 mSv (95% CI: 0.04 mSv, 0.08 mSv) for adults. These doses were much lower than those with conventional CT (median: 0.52 mSv [95% CI: 0.31 mSv, 3.90 mSv] for children and 1.12 mSv [95% CI: 0.57 mSv, 3.15 mSv] for adults) and of the same order of magnitude as those for conventional radiography (median: 0.012 mSv [95% CI: 0.006 mSv, 0.022 mSv] for children and 0.012 mSv [95% CI: 0.005 mSv, 0.031 mSv] for adults). All images were rated at least as diagnostically acceptable. Very good agreement was found in overall Bhalla score (ICC, 0.96) with regard to the severity of bronchiectasis (ICC, 0.87) and sacculations and abscesses (ICC, 0.84). Interobserver agreement was excellent (ICC, 0.86-1). For patients with CF, a dedicated chest CT protocol can replace the two yearly follow-up chest radiographic examinations without major dose penalty and with similar diagnostic quality compared with conventional CT.

  10. Evaluation of confidence intervals for a steady-state leaky aquifer model

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    1999-01-01

    The fact that dependent variables of groundwater models are generally nonlinear functions of model parameters is shown to be a potentially significant factor in calculating accurate confidence intervals for both model parameters and functions of the parameters, such as the values of dependent variables calculated by the model. The Lagrangian method of Vecchia and Cooley [Vecchia, A.V. and Cooley, R.L., Water Resources Research, 1987, 23(7), 1237-1250] was used to calculate nonlinear Scheffe-type confidence intervals for the parameters and the simulated heads of a steady-state groundwater flow model covering 450 km2 of a leaky aquifer. The nonlinear confidence intervals are compared to corresponding linear intervals. As suggested by the significant nonlinearity of the regression model, linear confidence intervals are often not accurate. The commonly made assumption that widths of linear confidence intervals always underestimate the actual (nonlinear) widths was not correct. Results show that nonlinear effects can cause the nonlinear intervals to be asymmetric and either larger or smaller than the linear approximations. Prior information on transmissivities helps reduce the size of the confidence intervals, with the most notable effects occurring for the parameters on which there is prior information and for head values in parameter zones for which there is prior information on the parameters.

  11. Short-Term Estimates of Growth Using Curriculum-Based Measurement of Oral Reading Fluency: Estimating Standard Error of the Slope to Construct Confidence Intervals

    ERIC Educational Resources Information Center

    Christ, Theodore J.

    2006-01-01

    Curriculum-based measurement of oral reading fluency (CBM-R) is an established procedure used to index the level and trend of student growth. A substantial literature base exists regarding best practices in the administration and interpretation of CBM-R; however, research has yet to adequately address the potential influence of measurement error.…

  12. Estimated loss of juvenile salmonids to predation by northern squawfish, walleyes, and smallmouth bass in John Day Reservoir, Columbia River

    USGS Publications Warehouse

    Rieman, Bruce E.; Beamesderfer, Raymond C.; Vigg, Steven; Poe, Thomas P.

    1991-01-01

    We estimated the loss of juvenile salmonids Oncorhynchus spp. to predation by northern squawfish Ptychocheilus oregonensis, walleyes Stizostedion vitreum, and smallmouth bass Micropterus dolomieu in John Day Reservoir during 1983–1986. Our estimates were based on measures of daily prey consumption, predator numbers, and numbers of juvenile salmonids entering the reservoir during the April–August period of migration. We estimated the mean annual loss was 2.7 million juvenile salmonids (95% confidence interval, 1.9–3.3 million). Northern squawfish were responsible for 78% of the total loss; walleyes accounted for 13% and smallmouth bass for 9%. Twenty-one percent of the loss occurred in a small area immediately below McNary Dam at the head of John Day Reservoir. We estimated that the three predator species consumed 14% (95% confidence interval, 9–19%) of all juvenile salmonids that entered the reservoir. Mortality varied by month and increased late in the migration season. Monthly mortality estimates ranged from 7% in June to 61% in August. Mortality from predation was highest for chinook salmon O. tshawytscha, which migrated in July and August. Despite uncertainties in the estimates, it is clear that predation by resident fish predators can easily account for previously unexplained mortality of out-migrating juvenile salmonids. Alteration of the Columbia River by dams and a decline in the number of salmonids could have increased the fraction of mortality caused by predation over what it was in the past.

  13. UK population norms for the modified dental anxiety scale with percentile calculator: adult dental health survey 2009 results

    PubMed Central

    2013-01-01

    Background A recent UK population survey of oral health included questions to assess dental anxiety to provide mean and prevalence estimates of this important psychological construct. Methods A two-stage cluster sample was used for the survey across England, Wales, and Northern Ireland. The survey took place between October-December 2009, and January-April 2010. All interviewers were trained on survey procedures. Within the 7,233 households sampled there were 13,509 adults who were asked to participate in the survey and 11,382 participated (84%). Results The scale was reliable and showed some evidence of unidimensionality. The estimated proportion of participants with high dental anxiety (cut-off score = 19) was 11.6%. Percentiles and confidence intervals were presented and can be estimated for individual patients across various age ranges and gender using an on-line tool. Conclusions The largest reported data set on the MDAS from a representative UK sample was presented. The scale’s psychometric properties support the routine assessment of patient dental anxiety and comparison against a number of major demographic groups categorised by age and sex. Practitioners within the UK have a resource to estimate the rarity of a particular patient’s level of dental anxiety, with confidence intervals, when using the on-line percentile calculator. PMID:23799962

  14. Estimation procedures to measure and monitor failure rates of components during thermal-vacuum testing

    NASA Technical Reports Server (NTRS)

    Williams, R. E.; Kruger, R.

    1980-01-01

    Estimation procedures are described for measuring component failure rates, for comparing the failure rates of two different groups of components, and for formulating confidence intervals for testing hypotheses (based on failure rates) that the two groups perform similarly or differently. Appendix A contains an example of an analysis in which these methods are applied to investigate the characteristics of two groups of spacecraft components. The estimation procedures are adaptable to system level testing and to monitoring failure characteristics in orbit.
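
    A minimal sketch of one such estimation procedure, assuming Poisson failure counts over accumulated test time (a common model for this setting; the report's exact procedure is not reproduced, and the names are hypothetical):

```python
import math
from statistics import NormalDist

def failure_rate_ci(failures, total_time, conf=0.95):
    """Point estimate and approximate CI for a constant failure rate.

    Assumes failure counts are Poisson over the accumulated test time,
    so lambda_hat = failures / total_time with standard error
    sqrt(failures) / total_time (normal approximation; exact limits
    would instead use chi-square quantiles).
    """
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    rate = failures / total_time
    se = math.sqrt(failures) / total_time
    return rate, (max(0.0, rate - z * se), rate + z * se)
```

    Comparing two component groups then reduces to checking whether their rate confidence intervals (or the interval for the rate ratio) overlap meaningfully.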

  15. Using Landsat data to estimate evapotranspiration of winter wheat

    NASA Technical Reports Server (NTRS)

    Kanemasu, E. T.; Heilman, J. L.; Bagley, J. O.; Powers, W. L.

    1977-01-01

    Results obtained from an evapotranspiration model as applied to Kansas winter wheatfields were compared with results determined by a weighing lysimeter, and the standard deviation was found to be less than 0.5 mm/day (however, the 95% confidence interval was within ±0.2 mm/day). Model inputs are solar radiation, temperature, precipitation, and leaf area index; an equation was developed to estimate the leaf area index from Landsat data. The model provides estimates of transpiration, evaporation, and soil moisture.

  16. Trends in hospitalizations of pregnant HIV-infected women in the United States: 2004 through 2011.

    PubMed

    Ewing, Alexander C; Datwani, Hema M; Flowers, Lisa M; Ellington, Sascha R; Jamieson, Denise J; Kourtis, Athena P

    2016-10-01

    With the development and widespread use of combination antiretroviral therapy, HIV-infected women live longer, healthier lives. Previous research has shown that, since the adoption of combination antiretroviral therapy in the United States, rates of morbidity and adverse obstetric outcomes remained higher for HIV-infected pregnant women compared with HIV-uninfected pregnant women. Monitoring trends in the outcomes these women experience is essential, as recommendations for this special population continue to evolve with the progress of HIV treatment and prevention options. We conducted an analysis comparing rates of hospitalizations and associated outcomes among HIV-infected and HIV-uninfected pregnant women in the United States from 2004 through 2011. We used cross-sectional hospital discharge data for girls and women age 15-49 from the 2004, 2007, and 2011 Nationwide Inpatient Sample, a nationally representative sample of US hospital discharges. Demographic characteristics, morbidity outcomes, and time trends were compared using χ(2) tests and multivariate logistic regression. Analyses were weighted to produce national estimates. In 2011, there were 4751 estimated pregnancy hospitalizations and 3855 delivery hospitalizations for HIV-infected pregnant women; neither increased since 2004. Compared with those of HIV-uninfected women, pregnancy hospitalizations of HIV-infected women were more likely to be longer, be in the South and Northeast, be covered by public insurance, and incur higher charges (all P < .005). Hospitalizations among pregnant women with HIV infection had higher rates for many adverse outcomes. 
Compared to 2004, hospitalizations of HIV-infected pregnant women in 2011 had higher odds of gestational diabetes (adjusted odds ratio, 1.81; 95% confidence interval, 1.16-2.84), preeclampsia/hypertensive disorders of pregnancy (adjusted odds ratio, 1.58; 95% confidence interval, 1.12-2.24), viral/mycotic/parasitic infections (adjusted odds ratio, 1.90; 95% confidence interval, 1.69-2.14), and bacterial infections (adjusted odds ratio, 2.54; 95% confidence interval, 1.53-4.20). Bacterial infections did not increase among hospitalizations of HIV-uninfected pregnant women. The numbers of hospitalizations during pregnancy and delivery have not increased for HIV-infected women since 2004, a departure from previously estimated trends. Pregnancy hospitalizations of HIV-infected women remain more medically complex than those of HIV-uninfected women. An increasing trend in infections among the delivery hospitalizations of HIV-infected pregnant women warrants further attention. Published by Elsevier Inc.

  17. Estimating times of extinction in the fossil record

    PubMed Central

    Marshall, Charles R.

    2016-01-01

    Because the fossil record is incomplete, the last fossil of a taxon is a biased estimate of its true time of extinction. Numerous methods have been developed in the palaeontology literature for estimating the true time of extinction using ages of fossil specimens. These methods, which typically give a confidence interval for estimating the true time of extinction, differ in the assumptions they make and the nature and amount of data they require. We review the literature on such methods and make some recommendations for future directions. PMID:27122005

  18. Estimating times of extinction in the fossil record.

    PubMed

    Wang, Steve C; Marshall, Charles R

    2016-04-01

    Because the fossil record is incomplete, the last fossil of a taxon is a biased estimate of its true time of extinction. Numerous methods have been developed in the palaeontology literature for estimating the true time of extinction using ages of fossil specimens. These methods, which typically give a confidence interval for estimating the true time of extinction, differ in the assumptions they make and the nature and amount of data they require. We review the literature on such methods and make some recommendations for future directions. © 2016 The Author(s).

  19. Assessment of NHTSA’s Report “Relationships Between Fatality Risk, Mass, and Footprint in Model Year 2003-2010 Passenger Cars and LTVs”

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wenzel, Tom

    NHTSA recently completed a logistic regression analysis updating its 2003, 2010, and 2012 studies of the relationship between vehicle mass and US fatality risk per vehicle mile traveled (VMT; Kahane 2010, Kahane 2012, Puckett 2016). The new study updates the 2012 analysis using FARS data from 2005 to 2011 for model year 2003 to 2010. Using the updated databases, NHTSA estimates that reducing vehicle mass by 100 pounds while holding footprint fixed would increase fatality risk per VMT by 1.49% for lighter-than-average cars and by 0.50% for heavier-than-average cars, but reduce risk by 0.10% for lighter-than-average light-duty trucks, by 0.71% for heavier-than-average light-duty trucks, and by 0.99% for CUVs/minivans. Using a jack-knife method to estimate the statistical uncertainty of these point estimates, NHTSA finds that none of these estimates are statistically significant at the 95% confidence level; however, the 1.49% increase in risk associated with mass reduction in lighter-than-average cars, and the 0.71% and 0.99% decreases in risk associated with mass reduction in heavier-than-average light trucks and CUVs/minivans, are statistically significant at the 90% confidence level. The effect of mass reduction on risk that NHTSA estimated in 2016 is more beneficial than in its 2012 study, particularly for light trucks and CUVs/minivans. The 2016 NHTSA analysis estimates that reducing vehicle footprint by one square foot while holding mass constant would increase fatality risk per VMT by 0.28% in cars, by 0.38% in light trucks, and by 1.18% in CUVs and minivans. This report replicates the 2016 NHTSA analysis, and reproduces their main results. This report uses the confidence intervals output by the logistic regression models, which are smaller than the intervals NHTSA estimated using a jack-knife technique that accounts for the sampling error in the FARS fatality and state crash data.
In addition to reproducing the NHTSA results, this report also examines the NHTSA data in slightly different ways to get a deeper understanding of the relationship between vehicle weight, footprint, and safety. The NHTSA baseline results and these alternative analyses are summarized in Table ES.1; statistically significant estimates, based on the confidence intervals output by the logistic regression models, are shown in red in the tables. We found that NHTSA’s reasonable assumption in its baseline regression model that all vehicles will have ESC installed by 2017 slightly increases the estimated increase in risk from mass reduction in cars, but substantially decreases the estimated increase in risk from footprint reduction in all three vehicle types (Alternative 1 in Table ES.1; explained in more detail in Section 2.1 of this report). This is because NHTSA projects ESC to substantially reduce the number of fatalities in rollovers and crashes with stationary objects, and mass reduction appears to reduce risk, while footprint reduction appears to increase risk, in these types of crashes, particularly in cars and CUVs/minivans. A single regression model including all crash types results in slightly different estimates of the relationship between decreasing mass and risk, as shown in Alternative 2 in Table ES.1.

  20. Simulated effect of tobacco tax variation on population health in California.

    PubMed

    Kaplan, R M; Ake, C F; Emery, S L; Navarro, A M

    2001-02-01

    This study simulated the effects of tobacco excise tax increases on population health. Five simulations were used to estimate health outcomes associated with tobacco tax policies: (1) the effects of price on smoking prevalence; (2) the effects of tobacco use on years of potential life lost; (3) the effect of tobacco use on quality of life (morbidity); (4) the integration of prevalence, mortality, and morbidity into a model of quality adjusted life years (QALYs); and (5) the development of confidence intervals around these estimates. Effects were estimated for 1 year after the tax's initiation and 75 years into the future. In California, a $0.50 tax increase and price elasticity of -0.40 would result in about 8389 QALYs (95% confidence interval [CI] = 4629, 12,113) saved the first year. Greater benefits would accrue each year until a steady state was reached after 75 years, when 52,136 QALYs (95% CI = 38,297, 66,262) would accrue each year. Higher taxes would produce even greater health benefits. A tobacco excise tax may be among a few policy options that will enhance a population's health status while making revenues available to government.
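The first simulation step above (translating a price elasticity of -0.40 into a change in smoking prevalence) can be sketched as simple arithmetic. This is an illustrative reconstruction, not the study's actual model: the baseline pack price and prevalence below are assumed values, and the elasticity is applied to the relative price change.

```python
# Hedged sketch of step (1) of the simulation: price elasticity -> prevalence.
# The baseline price ($3.00/pack) and prevalence (18%) are illustrative
# assumptions, not figures from the study.

def prevalence_after_tax(base_prevalence, base_price, tax_increase, elasticity=-0.40):
    """Approximate smoking prevalence after an excise tax increase, assuming
    the elasticity applies proportionally to the relative price change."""
    relative_price_change = tax_increase / base_price
    relative_prevalence_change = elasticity * relative_price_change
    return base_prevalence * (1.0 + relative_prevalence_change)

# A $0.50 tax on a $3.00 pack is a 16.7% price rise; with elasticity -0.40,
# prevalence falls by about 6.7% of its baseline value.
new_prev = prevalence_after_tax(base_prevalence=0.18, base_price=3.00, tax_increase=0.50)
print(round(new_prev, 4))  # 0.168
```

Larger tax increases scale the prevalence reduction proportionally under this linearized model, which is consistent with the abstract's observation that higher taxes would produce greater health benefits.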

  1. Quantile-based bias correction and uncertainty quantification of extreme event attribution statements

    DOE PAGES

    Jeon, Soyoung; Paciorek, Christopher J.; Wehner, Michael F.

    2016-02-16

    Extreme event attribution characterizes how anthropogenic climate change may have influenced the probability and magnitude of selected individual extreme weather and climate events. Attribution statements often involve quantification of the fraction of attributable risk (FAR) or the risk ratio (RR) and associated confidence intervals. Many such analyses use climate model output to characterize extreme event behavior with and without anthropogenic influence. However, such climate models may have biases in their representation of extreme events. To account for discrepancies in the probabilities of extreme events between observational datasets and model datasets, we demonstrate an appropriate rescaling of the model output based on the quantiles of the datasets to estimate an adjusted risk ratio. Our methodology accounts for various components of uncertainty in estimation of the risk ratio. In particular, we present an approach to construct a one-sided confidence interval on the lower bound of the risk ratio when the estimated risk ratio is infinity. We demonstrate the methodology using the summer 2011 central US heatwave and output from the Community Earth System Model. In this example, we find that the lower bound of the risk ratio is relatively insensitive to the magnitude and probability of the actual event.
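The quantile-based rescaling idea can be illustrated with a toy empirical-quantile mapping. This is a simplified sketch, not the authors' exact procedure: the synthetic temperature samples and the single-threshold mapping are assumptions made for illustration.

```python
import numpy as np

# Hedged sketch of quantile-based bias adjustment for a risk ratio: map the
# observed event's exceedance probability onto the model's own climatology
# before comparing all-forcings and natural-forcings runs. All data are toy
# Gaussian samples; the paper's actual datasets and method are more involved.

rng = np.random.default_rng(0)
obs = rng.normal(30.0, 2.0, 1000)        # observed summer temperatures (toy)
model_nat = rng.normal(29.0, 2.0, 1000)  # model, natural forcings only
model_all = rng.normal(30.0, 2.0, 1000)  # model, all forcings

event_threshold = 33.0                   # magnitude of the observed extreme event

# Exceedance probability of the event in observations, mapped to the model's
# quantiles to obtain a bias-adjusted threshold for the model world.
p_obs = np.mean(obs >= event_threshold)
adjusted_threshold = np.quantile(model_all, 1.0 - p_obs)

p_all = np.mean(model_all >= adjusted_threshold)
p_nat = np.mean(model_nat >= adjusted_threshold)
risk_ratio = p_all / p_nat if p_nat > 0 else np.inf
print(p_all, p_nat, risk_ratio)
```

When `p_nat` is estimated as zero, the risk ratio is infinite, which is exactly the situation motivating the paper's one-sided lower confidence bound.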

  2. R package to estimate intracluster correlation coefficient with confidence interval for binary data.

    PubMed

    Chakraborty, Hrishikesh; Hossain, Akhtar

    2018-03-01

    The Intracluster Correlation Coefficient (ICC) is a major parameter of interest in cluster randomized trials that measures the degree to which responses within the same cluster are correlated. Several types of ICC estimators and their confidence intervals (CI) have been suggested in the literature for binary data. Studies have compared the relative weaknesses and advantages of ICC estimators and CIs for binary data and identified situations where each is advantageous in practical research. Commonly used statistical computing systems currently facilitate estimation of only a few variants of the ICC and its CI. To address the limitations of current statistical packages, we developed an R package, ICCbin, to facilitate estimating the ICC and its CI for binary responses using different methods. The ICCbin package is designed to provide estimates of the ICC in 16 different ways, including analysis of variance methods, moments-based estimation, direct probabilistic methods, correlation-based estimation, and resampling methods. The CI of the ICC is estimated using 5 different methods. The package also generates clustered binary data with an exchangeable correlation structure. ICCbin provides two functions for users: rcbin() generates clustered binary data, and iccbin() estimates the ICC and its CI. Users can choose the appropriate ICC and CI estimates from the wide selection in the outputs. The R package ICCbin presents flexible and easy-to-use ways to generate clustered binary data and to estimate the ICC and its CI for binary responses using different methods. The package ICCbin is freely available for use with R from the CRAN repository (https://cran.r-project.org/package=ICCbin). We believe that this package can be a very useful tool for researchers designing cluster randomized trials with binary outcomes. Copyright © 2017 Elsevier B.V. All rights reserved.
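As a flavor of the analysis-of-variance family of ICC estimators the abstract mentions, here is a from-scratch sketch of the classic one-way ANOVA estimator applied to binary clustered data. This is illustrative only and is not code from the ICCbin package; the toy dataset is an assumption.

```python
import numpy as np

# Hedged sketch: the one-way ANOVA estimator of the intracluster correlation,
# ICC = (MSB - MSW) / (MSB + (n0 - 1) * MSW), applied to 0/1 responses.
# Not taken from ICCbin; the data below are made up for illustration.

def icc_anova(clusters):
    """clusters: list of 1-D numpy arrays of 0/1 responses, one per cluster."""
    k = len(clusters)
    sizes = np.array([len(c) for c in clusters], dtype=float)
    n = sizes.sum()
    means = np.array([c.mean() for c in clusters])
    grand = sum(c.sum() for c in clusters) / n

    msb = np.sum(sizes * (means - grand) ** 2) / (k - 1)                  # between-cluster
    msw = sum(((c - m) ** 2).sum() for c, m in zip(clusters, means)) / (n - k)  # within
    n0 = (n - (sizes ** 2).sum() / n) / (k - 1)                           # adjusted mean cluster size
    return (msb - msw) / (msb + (n0 - 1) * msw)

# Three clusters of four binary responses; strong within-cluster agreement.
data = [np.array([1, 1, 1, 1]), np.array([0, 0, 0, 0]), np.array([1, 1, 0, 0])]
print(round(icc_anova(data), 3))  # 0.667
```

With two perfectly homogeneous clusters and one mixed cluster, the estimator returns 2/3, reflecting substantial within-cluster correlation.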

  3. Effect of increased exercise in school children on physical fitness and endothelial progenitor cells: a prospective randomized trial.

    PubMed

    Walther, Claudia; Gaede, Luise; Adams, Volker; Gelbrich, Götz; Leichtle, Alexander; Erbs, Sandra; Sonnabend, Melanie; Fikenzer, Kati; Körner, Antje; Kiess, Wieland; Bruegel, Mathias; Thiery, Joachim; Schuler, Gerhard

    2009-12-01

    The aim of this prospective, randomized study was to examine whether additional school exercise lessons would result in improved peak oxygen uptake (primary end point) and body mass index-standard deviation score, motor and coordinative abilities, circulating progenitor cells, and high-density lipoprotein cholesterol (major secondary end points). Seven sixth-grade classes (182 children, aged 11.1+/-0.7 years) were randomized to an intervention group (4 classes with 109 students) with daily school exercise lessons for 1 year and a control group (3 classes with 73 students) with regular school sports twice weekly. The significant effects of intervention estimated from ANCOVA adjusted for intraclass correlation were the following: increase of peak VO2 (3.7 mL/kg per minute; 95% confidence interval, 0.3 to 7.2) and increase of circulating progenitor cells evaluated by flow cytometry (97 cells per 1 x 10(6) leukocytes; 95% confidence interval, 13 to 181). No significant difference was seen for body mass index-standard deviation score (-0.08; 95% confidence interval, -0.28 to 0.13); however, there was a trend to reduction of the prevalence of overweight and obese children in the intervention group (from 12.8% to 7.3%). No treatment effect was seen for motor and coordinative abilities (4; 95% confidence interval, -1 to 8) and high-density lipoprotein cholesterol (0.03 mmol/L; 95% confidence interval, -0.08 to 0.14). Regular physical activity by means of daily school exercise lessons has a significant positive effect on physical fitness (VO2max). Furthermore, the number of circulating progenitor cells can be increased, and there is a positive trend in body mass index-standard deviation score reduction and motor ability improvement. Therefore, we conclude that primary prevention by means of increasing physical activity should start in childhood. URL: http://www.clinicaltrials.gov. Identifier: NCT00176371.

  4. Evaluation of some random effects methodology applicable to bird ringing data

    USGS Publications Warehouse

    Burnham, K.P.; White, Gary C.

    2002-01-01

    Existing models for ring recovery and recapture data analysis treat temporal variations in annual survival probability (S) as fixed effects. Often there is no explainable structure to the temporal variation in S1,..., Sk; random effects can then be a useful model: Si = E(S) + εi. Here, the temporal variation in survival probability is treated as random with variance E(εi²) = σ². This random effects model can now be fit in program MARK. Resultant inferences include point and interval estimation for process variation, σ², and estimation of E(S) and var(Ê(S)), where the latter includes a component for σ² as well as the traditional sampling-variance component. Furthermore, the random effects model leads to shrinkage estimates, S̃i, as improved (in mean square error) estimators of Si compared to the MLE, Ŝi, from the unrestricted time-effects model. Appropriate confidence intervals based on the S̃i are also provided. In addition, AIC has been generalized to random effects models. This paper presents results of a Monte Carlo evaluation of inference performance under the simple random effects model. Examined by simulation, under the simple one-group Cormack-Jolly-Seber (CJS) model, are issues such as bias of σ̂², confidence interval coverage on σ², coverage and mean square error comparisons for inference about Si based on shrinkage versus maximum likelihood estimators, and performance of AIC model selection over three models: Si ≡ S (no effects), Si = E(S) + εi (random effects), and S1,..., Sk (fixed effects). For the cases simulated, the random effects methods performed well and were uniformly better than fixed effects MLE for the Si.

  5. Automated Semantic Indexing of Figure Captions to Improve Radiology Image Retrieval

    PubMed Central

    Kahn, Charles E.; Rubin, Daniel L.

    2009-01-01

    Objective We explored automated concept-based indexing of unstructured figure captions to improve retrieval of images from radiology journals. Design The MetaMap Transfer program (MMTx) was used to map the text of 84,846 figure captions from 9,004 peer-reviewed, English-language articles to concepts in three controlled vocabularies from the UMLS Metathesaurus, version 2006AA. Sampling procedures were used to estimate the standard information-retrieval metrics of precision and recall, and to evaluate the degree to which concept-based retrieval improved image retrieval. Measurements Precision was estimated based on a sample of 250 concepts. Recall was estimated based on a sample of 40 concepts. The authors measured the impact of concept-based retrieval to improve upon keyword-based retrieval in a random sample of 10,000 search queries issued by users of a radiology image search engine. Results Estimated precision was 0.897 (95% confidence interval, 0.857–0.937). Estimated recall was 0.930 (95% confidence interval, 0.838–1.000). In 5,535 of 10,000 search queries (55%), concept-based retrieval found results not identified by simple keyword matching; in 2,086 searches (21%), more than 75% of the results were found by concept-based search alone. Conclusion Concept-based indexing of radiology journal figure captions achieved very high precision and recall, and significantly improved image retrieval. PMID:19261938

  6. Exposure to Traffic-related Air Pollution During Pregnancy and Term Low Birth Weight: Estimation of Causal Associations in a Semiparametric Model

    PubMed Central

    Padula, Amy M.; Mortimer, Kathleen; Hubbard, Alan; Lurmann, Frederick; Jerrett, Michael; Tager, Ira B.

    2012-01-01

    Traffic-related air pollution is recognized as an important contributor to health problems. Epidemiologic analyses suggest that prenatal exposure to traffic-related air pollutants may be associated with adverse birth outcomes; however, there is insufficient evidence to conclude that the relation is causal. The Study of Air Pollution, Genetics and Early Life Events comprises all births to women living in 4 counties in California's San Joaquin Valley during the years 2000–2006. The probability of low birth weight among full-term infants in the population was estimated using machine learning and targeted maximum likelihood estimation for each quartile of traffic exposure during pregnancy. If everyone lived near high-volume freeways (approximated as the fourth quartile of traffic density), the estimated probability of term low birth weight would be 2.27% (95% confidence interval: 2.16, 2.38) as compared with 2.02% (95% confidence interval: 1.90, 2.12) if everyone lived near smaller local roads (first quartile of traffic density). Assessment of potentially causal associations, in the absence of arbitrary model assumptions applied to the data, should result in relatively unbiased estimates. The current results support findings from previous studies that prenatal exposure to traffic-related air pollution may adversely affect birth weight among full-term infants. PMID:23045474

  7. The regression discontinuity design showed to be a valid alternative to a randomized controlled trial for estimating treatment effects.

    PubMed

    Maas, Iris L; Nolte, Sandra; Walter, Otto B; Berger, Thomas; Hautzinger, Martin; Hohagen, Fritz; Lutz, Wolfgang; Meyer, Björn; Schröder, Johanna; Späth, Christina; Klein, Jan Philipp; Moritz, Steffen; Rose, Matthias

    2017-02-01

    To compare treatment effect estimates obtained from a regression discontinuity (RD) design with results from an actual randomized controlled trial (RCT). Data from an RCT (EVIDENT), which studied the effect of an Internet intervention on depressive symptoms measured with the Patient Health Questionnaire (PHQ-9), were used to perform an RD analysis, in which treatment allocation was determined by a cutoff value at baseline (PHQ-9 = 10). A linear regression model was fitted to the data, selecting participants above the cutoff who had received the intervention (n = 317) and control participants below the cutoff (n = 187). Outcome was PHQ-9 sum score 12 weeks after baseline. Robustness of the effect estimate was studied; the estimate was compared with the RCT treatment effect. The final regression model showed a regression coefficient of -2.29 [95% confidence interval (CI): -3.72 to -0.85] compared with a treatment effect found in the RCT of -1.57 (95% CI: -2.07 to -1.07). Although the estimates obtained from the two designs are not equal, their confidence intervals overlap, suggesting that an RD design can be a valid alternative to an RCT. This finding is particularly important for situations where an RCT may not be feasible or ethical, as is often the case in clinical research settings. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Improved confidence intervals when the sample is counted an integer times longer than the blank.

    PubMed

    Potter, William Edward; Strzelczyk, Jadwiga Jodi

    2011-05-01

    Past computer solutions for confidence intervals in paired counting are extended to the case where the ratio of the sample count time to the blank count time is taken to be an integer, IRR. Previously, confidence intervals have been named Neyman-Pearson confidence intervals; more correctly they should have been named Neyman confidence intervals or simply confidence intervals. The technique utilized mimics a technique used by Pearson and Hartley to tabulate confidence intervals for the expected value of the discrete Poisson and Binomial distributions. The blank count and the contribution of the sample to the gross count are assumed to be Poisson distributed. The expected value of the blank count, in the sample count time, is assumed known. The net count, OC, is taken to be the gross count minus the product of IRR with the blank count. The probability density function (PDF) for the net count can be determined in a straightforward manner.
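The distribution of the net count described above can be written down directly: with the gross count G and blank count B as independent Poisson variables, the net count OC = G - IRR·B has pmf P(OC = n) = Σ_b P(G = n + IRR·b) P(B = b). The sketch below evaluates that sum numerically; the rate parameters are illustrative assumptions, not values from the paper.

```python
from math import exp, log, lgamma

# Hedged sketch of the net-count pmf from the abstract: OC = G - IRR * B with
# G ~ Poisson(mu_gross) and B ~ Poisson(mu_blank) independent. The rates used
# below (mu_gross = 12, mu_blank = 2, IRR = 3) are illustrative assumptions.

def poisson_pmf(k, mu):
    """Poisson pmf in log space (avoids overflow for large k); 0 for k < 0."""
    if k < 0:
        return 0.0
    return exp(k * log(mu) - mu - lgamma(k + 1))

def net_count_pmf(n, mu_gross, mu_blank, irr, b_max=200):
    """P(OC = n), summing over blank counts b, with G = n + irr * b."""
    return sum(poisson_pmf(n + irr * b, mu_gross) * poisson_pmf(b, mu_blank)
               for b in range(b_max + 1))

# Sanity check: the pmf over a sufficiently wide support should sum to ~1.
irr, mu_gross, mu_blank = 3, 12.0, 2.0
total = sum(net_count_pmf(n, mu_gross, mu_blank, irr) for n in range(-60, 80))
print(total)
```

Note that the support of OC is not the nonnegative integers: the net count can be negative whenever the scaled blank count exceeds the gross count, which is why confidence interval construction for the underlying sample activity is nontrivial.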

  9. Confidence Intervals for a Semiparametric Approach to Modeling Nonlinear Relations among Latent Variables

    ERIC Educational Resources Information Center

    Pek, Jolynn; Losardo, Diane; Bauer, Daniel J.

    2011-01-01

    Compared to parametric models, nonparametric and semiparametric approaches to modeling nonlinearity between latent variables have the advantage of recovering global relationships of unknown functional form. Bauer (2005) proposed an indirect application of finite mixtures of structural equation models where latent components are estimated in the…

  10. Tennis Rackets and the Parallel Axis Theorem

    ERIC Educational Resources Information Center

    Christie, Derek

    2014-01-01

    This simple experiment uses an unusual graph straightening exercise to confirm the parallel axis theorem for an irregular object. Along the way, it estimates experimental values for g and the moment of inertia of a tennis racket. We use Excel to find a 95% confidence interval for the true values.

  11. Strategic Use of Random Subsample Replication and a Coefficient of Factor Replicability

    ERIC Educational Resources Information Center

    Katzenmeyer, William G.; Stenner, A. Jackson

    1975-01-01

    The problem of demonstrating replicability of factor structure across random variables is addressed. Procedures are outlined which combine the use of random subsample replication strategies with the correlations between factor score estimates across replicate pairs to generate a coefficient of replicability and confidence intervals associated with…

  12. Site index prediction tables for black, scarlet and white oaks in southeastern Missouri.

    Treesearch

    Robert A. McQuilkin

    1974-01-01

    Site index prediction tables for black, scarlet, and white oaks for southeastern Missouri are presented based on site index/height regressions of data from 741 sectioned trees. Formulae for site index conversion between species and confidence intervals for mean stand site index estimates are also presented.

  13. Two-year analysis for predicting renal function and contralateral hypertrophy after robot-assisted partial nephrectomy: A three-dimensional segmentation technology study.

    PubMed

    Kim, Dae Keun; Jang, Yujin; Lee, Jaeseon; Hong, Helen; Kim, Ki Hong; Shin, Tae Young; Jung, Dae Chul; Choi, Young Deuk; Rha, Koon Ho

    2015-12-01

    To analyze long-term changes in both kidneys, and to predict renal function and contralateral hypertrophy after robot-assisted partial nephrectomy. A total of 62 patients underwent robot-assisted partial nephrectomy, and renal parenchymal volume was calculated using three-dimensional semi-automatic segmentation technology. Patients were evaluated within 1 month preoperatively, and postoperatively at 6 months, 1 year, and continued up to 2-year follow-up. Linear regression models were used to identify the factors predicting variables that correlated with estimated glomerular filtration rate changes and contralateral hypertrophy 2 years after robot-assisted partial nephrectomy. The median global estimated glomerular filtration rate changes were -10.4%, -11.9%, and -2.4% at 6 months, 1 and 2 years post-robot-assisted partial nephrectomy, respectively. The ipsilateral kidney median parenchymal volume changes were -24%, -24.4%, and -21% at 6 months, 1 and 2 years post-robot-assisted partial nephrectomy, respectively. The contralateral renal volume changes were 2.3%, 9.6% and 12.9%, respectively. On multivariable linear analysis, preoperative estimated glomerular filtration rate was the best predictive factor for global estimated glomerular filtration rate change at 2 years post-robot-assisted partial nephrectomy (B -0.452; 95% confidence interval -0.84 to -0.14; P = 0.021), whereas the parenchymal volume loss rate (B -0.43; 95% confidence interval -0.89 to -0.15; P = 0.017) and tumor size (B 5.154; 95% confidence interval -0.11 to 9.98; P = 0.041) were the significant predictive factors for the degree of contralateral renal hypertrophy at 2 years post-robot-assisted partial nephrectomy. Preoperative estimated glomerular filtration rate significantly affects post-robot-assisted partial nephrectomy renal function. Renal mass size and renal parenchyma volume loss correlate with compensatory hypertrophy of the contralateral kidney.
Contralateral hypertrophy of the renal parenchyma compensates for the functional loss of the ipsilateral kidney. © 2015 The Japanese Urological Association.

  14. Genetic determinants of antithyroid drug-induced agranulocytosis by human leukocyte antigen genotyping and genome-wide association study

    PubMed Central

    Chen, Pei-Lung; Shih, Shyang-Rong; Wang, Pei-Wen; Lin, Ying-Chao; Chu, Chen-Chung; Lin, Jung-Hsin; Chen, Szu-Chi; Chang, Ching-Chung; Huang, Tien-Shang; Tsai, Keh Sung; Tseng, Fen-Yu; Wang, Chih-Yuan; Lu, Jin-Ying; Chiu, Wei-Yih; Chang, Chien-Ching; Chen, Yu-Hsuan; Chen, Yuan-Tsong; Fann, Cathy Shen-Jang; Yang, Wei-Shiung; Chang, Tien-Chun

    2015-01-01

    Graves' disease is the leading cause of hyperthyroidism affecting 1.0–1.6% of the population. Antithyroid drugs are the treatment cornerstone, but may cause life-threatening agranulocytosis. Here we conduct a two-stage association study on two separate subject sets (in total 42 agranulocytosis cases and 1,208 Graves' disease controls), using direct human leukocyte antigen genotyping and SNP-based genome-wide association study. We demonstrate HLA-B*38:02 (Armitage trend Pcombined=6.75 × 10⁻³²) and HLA-DRB1*08:03 (Pcombined=1.83 × 10⁻⁹) as independent susceptibility loci. The genome-wide association study identifies the same signals. Estimated odds ratios for these two loci comparing effective allele carriers to non-carriers are 21.48 (95% confidence interval=11.13–41.48) and 6.13 (95% confidence interval=3.28–11.46), respectively. Carrying both HLA-B*38:02 and HLA-DRB1*08:03 increases odds ratio to 48.41 (Pcombined=3.32 × 10⁻²¹, 95% confidence interval=21.66–108.22). Our results could be useful for antithyroid-induced agranulocytosis and potentially for agranulocytosis caused by other chemicals. PMID:26151496

  15. Genetic determinants of antithyroid drug-induced agranulocytosis by human leukocyte antigen genotyping and genome-wide association study.

    PubMed

    Chen, Pei-Lung; Shih, Shyang-Rong; Wang, Pei-Wen; Lin, Ying-Chao; Chu, Chen-Chung; Lin, Jung-Hsin; Chen, Szu-Chi; Chang, Ching-Chung; Huang, Tien-Shang; Tsai, Keh Sung; Tseng, Fen-Yu; Wang, Chih-Yuan; Lu, Jin-Ying; Chiu, Wei-Yih; Chang, Chien-Ching; Chen, Yu-Hsuan; Chen, Yuan-Tsong; Fann, Cathy Shen-Jang; Yang, Wei-Shiung; Chang, Tien-Chun

    2015-07-07

    Graves' disease is the leading cause of hyperthyroidism affecting 1.0-1.6% of the population. Antithyroid drugs are the treatment cornerstone, but may cause life-threatening agranulocytosis. Here we conduct a two-stage association study on two separate subject sets (in total 42 agranulocytosis cases and 1,208 Graves' disease controls), using direct human leukocyte antigen genotyping and SNP-based genome-wide association study. We demonstrate HLA-B*38:02 (Armitage trend Pcombined=6.75 × 10(-32)) and HLA-DRB1*08:03 (Pcombined=1.83 × 10(-9)) as independent susceptibility loci. The genome-wide association study identifies the same signals. Estimated odds ratios for these two loci comparing effective allele carriers to non-carriers are 21.48 (95% confidence interval=11.13-41.48) and 6.13 (95% confidence interval=3.28-11.46), respectively. Carrying both HLA-B*38:02 and HLA-DRB1*08:03 increases odds ratio to 48.41 (Pcombined=3.32 × 10(-21), 95% confidence interval=21.66-108.22). Our results could be useful for antithyroid-induced agranulocytosis and potentially for agranulocytosis caused by other chemicals.

  16. Quantifying Safety Margin Using the Risk-Informed Safety Margin Characterization (RISMC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabaskas, David; Bucknor, Matthew; Brunett, Acacia

    2015-04-26

    The Risk-Informed Safety Margin Characterization (RISMC), developed by Idaho National Laboratory as part of the Light-Water Reactor Sustainability Project, utilizes a probabilistic safety margin comparison between a load and capacity distribution, rather than a deterministic comparison between two values, as is usually done in best-estimate plus uncertainty analyses. The goal is to determine the failure probability, or in other words, the probability of the system load equaling or exceeding the system capacity. While this method has been used in pilot studies, there has been little work conducted investigating the statistical significance of the resulting failure probability. In particular, it is difficult to determine how many simulations are necessary to properly characterize the failure probability. This work uses classical (frequentist) statistics and confidence intervals to examine the impact in statistical accuracy when the number of simulations is varied. Two methods are proposed to establish confidence intervals related to the failure probability established using a RISMC analysis. The confidence interval provides information about the statistical accuracy of the method utilized to explore the uncertainty space, and offers a quantitative method to gauge the increase in statistical accuracy due to performing additional simulations.
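The core question above, how interval width shrinks as simulations are added, can be illustrated with a standard binomial confidence interval on a simulated failure fraction. The report's own two proposed methods are not reproduced here; this is a generic Wilson score interval under assumed counts.

```python
from math import sqrt

# Hedged sketch: a Wilson score confidence interval for a failure probability
# estimated as (number of load >= capacity outcomes) / (number of simulations).
# The counts are illustrative assumptions; the RISMC report may construct its
# intervals differently.

def wilson_interval(failures, n, z=1.96):
    """95% (by default) Wilson score interval for a binomial proportion."""
    p = failures / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

# Doubling the simulation count at the same failure fraction narrows the interval,
# quantifying the statistical-accuracy gain from additional simulations.
lo1, hi1 = wilson_interval(5, 1000)
lo2, hi2 = wilson_interval(10, 2000)
print(hi1 - lo1, hi2 - lo2)
```

The Wilson interval is preferred over the naive normal ("Wald") interval for the very small failure probabilities typical of safety analyses, where the Wald interval can extend below zero.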

  17. Exact Scheffé-type confidence intervals for output from groundwater flow models: 2. Combined use of hydrogeologic information and calibration data

    USGS Publications Warehouse

    Cooley, Richard L.

    1993-01-01

    Calibration data (observed values corresponding to model-computed values of dependent variables) are incorporated into a general method of computing exact Scheffé-type confidence intervals analogous to the confidence intervals developed in part 1 (Cooley, this issue) for a function of parameters derived from a groundwater flow model. Parameter uncertainty is specified by a distribution of parameters conditioned on the calibration data. This distribution was obtained as a posterior distribution by applying Bayes' theorem to the hydrogeologically derived prior distribution of parameters from part 1 and a distribution of differences between the calibration data and corresponding model-computed dependent variables. Tests show that the new confidence intervals can be much smaller than the intervals of part 1 because the prior parameter variance-covariance structure is altered so that combinations of parameters that give poor model fit to the data are unlikely. The confidence intervals of part 1 and the new confidence intervals can be effectively employed in a sequential method of model construction whereby new information is used to reduce confidence interval widths at each stage.

  18. Graphing within-subjects confidence intervals using SPSS and S-Plus.

    PubMed

    Wright, Daniel B

    2007-02-01

    Within-subjects confidence intervals are often appropriate to report and to display. Loftus and Masson (1994) have reported methods to calculate these, and their use is becoming common. In the present article, procedures for calculating within-subjects confidence intervals in SPSS and S-Plus are presented (an R version is on the accompanying Web site). The procedure in S-Plus allows the user to report the bias corrected and adjusted bootstrap confidence intervals as well as the standard confidence intervals based on traditional methods. The presented code can be easily altered to fit the individual user's needs.
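Although this document is not tied to SPSS or S-Plus, the normalization step behind Loftus and Masson (1994) within-subjects intervals is easy to sketch: remove each subject's mean, add back the grand mean, then compute a conventional interval per condition. The data below are made up, and this per-condition variant is a simplification of the original pooled-error procedure.

```python
import numpy as np

# Hedged sketch of the Loftus & Masson (1994) normalization for within-subjects
# confidence intervals. Illustrative data; a simplified per-condition variant
# rather than the original pooled-MSE construction, and not the article's
# SPSS/S-Plus code.

def within_subject_normalize(data):
    """data: subjects x conditions array. Removes between-subject variability."""
    subject_means = data.mean(axis=1, keepdims=True)
    return data - subject_means + data.mean()

data = np.array([[10.0, 13.0],
                 [14.0, 18.0],
                 [ 8.0, 11.0],
                 [12.0, 15.0]])   # 4 subjects x 2 conditions (toy)

norm = within_subject_normalize(data)
sem = norm.std(axis=0, ddof=1) / np.sqrt(norm.shape[0])
half_width = 3.182 * sem          # two-tailed t critical value with df = 3
print(norm.mean(axis=0), half_width)
```

Because each subject's overall level has been subtracted out, the normalized standard errors reflect only the condition-by-subject variability, which is what makes the resulting error bars appropriate for within-subjects comparisons.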

  19. Estimating the Richness of a Population When the Maximum Number of Classes Is Fixed: A Nonparametric Solution to an Archaeological Problem

    PubMed Central

    Eren, Metin I.; Chao, Anne; Hwang, Wen-Han; Colwell, Robert K.

    2012-01-01

    Background Estimating assemblage species or class richness from samples remains a challenging, but essential, goal. Though a variety of statistical tools for estimating species or class richness have been developed, they are all singly-bounded: assuming only a lower bound of species or classes. Nevertheless there are numerous situations, particularly in the cultural realm, where the maximum number of classes is fixed. For this reason, a new method is needed to estimate richness when both upper and lower bounds are known. Methodology/Principal Findings Here, we introduce a new method for estimating class richness: doubly-bounded confidence intervals (both lower and upper bounds are known). We specifically illustrate our new method using the Chao1 estimator, rarefaction, and extrapolation, although any estimator of asymptotic richness can be used in our method. Using a case study of Clovis stone tools from the North American Lower Great Lakes region, we demonstrate that singly-bounded richness estimators can yield confidence intervals with upper bound estimates larger than the possible maximum number of classes, while our new method provides estimates that make empirical sense. Conclusions/Significance Application of the new method for constructing doubly-bound richness estimates of Clovis stone tools permitted conclusions to be drawn that were not otherwise possible with singly-bounded richness estimates, namely, that Lower Great Lakes Clovis Paleoindians utilized a settlement pattern that was probably more logistical in nature than residential. However, our new method is not limited to archaeological applications. It can be applied to any set of data for which there is a fixed maximum number of classes, whether that be site occupancy models, commercial products (e.g. athletic shoes), or census information (e.g. nationality, religion, age, race). PMID:22666316

  20. Credible occurrence probabilities for extreme geophysical events: earthquakes, volcanic eruptions, magnetic storms

    USGS Publications Warehouse

    Love, Jeffrey J.

    2012-01-01

    Statistical analysis is made of rare, extreme geophysical events recorded in historical data -- counting the number of events $k$ with sizes that exceed chosen thresholds during specific durations of time $\tau$. Under transformations that stabilize data and model-parameter variances, the most likely Poisson-event occurrence rate, $k/\tau$, applies for frequentist inference and, also, for Bayesian inference with a Jeffreys prior that ensures posterior invariance under changes of variables. Frequentist confidence intervals and Bayesian (Jeffreys) credibility intervals are approximately the same and easy to calculate: $(1/\tau)[(\sqrt{k} - z/2)^{2},(\sqrt{k} + z/2)^{2}]$, where $z$ is a parameter that specifies the width, $z=1$ ($z=2$) corresponding to $1\sigma$, $68.3\%$ ($2\sigma$, $95.4\%$). If only a few events have been observed, as is usually the case for extreme events, then these "error-bar" intervals might be considered to be relatively wide. From historical records, we estimate most likely long-term occurrence rates, 10-yr occurrence probabilities, and intervals of frequentist confidence and Bayesian credibility for large earthquakes, explosive volcanic eruptions, and magnetic storms.
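The quoted interval is simple enough to compute directly. A minimal sketch; the lower limit is clamped at zero for the small-$k$ case, which the closed form does not handle on its own:

```python
from math import sqrt

def poisson_rate_interval(k, tau, z=1.0):
    """Approximate frequentist/Jeffreys interval for a Poisson
    occurrence rate k/tau: (1/tau)[(sqrt(k)-z/2)^2, (sqrt(k)+z/2)^2].
    z=1 -> ~68.3% (1 sigma); z=2 -> ~95.4% (2 sigma)."""
    lo = max(sqrt(k) - z / 2, 0.0) ** 2 / tau
    hi = (sqrt(k) + z / 2) ** 2 / tau
    return lo, hi
```

For example, k=4 threshold exceedances over tau=10 time units with z=2 gives a rate interval of (0.1, 0.9) events per unit time.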

  1. Relative ratios of collagen composition of periarticular tissue of joints of the upper limb.

    PubMed

    Cheah, A; Harris, A; Le, W; Huang, Y; Yao, J

    2017-07-01

    We investigated the relative ratios of collagen composition of periarticular tissue of the elbow, wrist, metacarpophalangeal, proximal and distal interphalangeal joints. Periarticular tissue, which we defined as the ligaments, palmar plate and capsule, was harvested from ten fresh-frozen cadaveric upper limbs, yielding 50 samples. The mean paired differences (95% confidence interval) of the relative ratios of collagen between the five different joints were estimated using mRNA expression of collagen in the periarticular tissue. We found that the relative collagen composition of the elbow was not significantly different from that of the proximal interphalangeal joint, nor was that of the proximal interphalangeal joint significantly different from the distal interphalangeal joint, whereas the differences in collagen composition between all the other paired comparisons of the joints had confidence intervals that did not include zero.

  2. A Simple Method for Deriving the Confidence Regions for the Penalized Cox’s Model via the Minimand Perturbation†

    PubMed Central

    Lin, Chen-Yen; Halabi, Susan

    2017-01-01

    We propose a minimand perturbation method to derive the confidence regions for the regularized estimators for Cox’s proportional hazards model. Although the regularized estimation procedure produces a more stable point estimate, it remains challenging to provide an interval estimator or an analytic variance estimator for the associated point estimate. Based on the sandwich formula, the current variance estimator provides a simple approximation, but its finite sample performance is not entirely satisfactory. Moreover, the sandwich formula can only provide variance estimates for the non-zero coefficients. In this article, we present a generic description of the perturbation method and then introduce a computation algorithm using the adaptive least absolute shrinkage and selection operator (LASSO) penalty. Through simulation studies, we demonstrate that our method better approximates the limiting distribution of the adaptive LASSO estimator and produces more accurate inference than the sandwich formula. The simulation results also indicate the possibility of extending the applications to the adaptive elastic-net penalty. We further demonstrate our method using data from a phase III clinical trial in prostate cancer. PMID:29326496

  3. A Simple Method for Deriving the Confidence Regions for the Penalized Cox's Model via the Minimand Perturbation.

    PubMed

    Lin, Chen-Yen; Halabi, Susan

    2017-01-01

    We propose a minimand perturbation method to derive the confidence regions for the regularized estimators for Cox's proportional hazards model. Although the regularized estimation procedure produces a more stable point estimate, it remains challenging to provide an interval estimator or an analytic variance estimator for the associated point estimate. Based on the sandwich formula, the current variance estimator provides a simple approximation, but its finite sample performance is not entirely satisfactory. Moreover, the sandwich formula can only provide variance estimates for the non-zero coefficients. In this article, we present a generic description of the perturbation method and then introduce a computation algorithm using the adaptive least absolute shrinkage and selection operator (LASSO) penalty. Through simulation studies, we demonstrate that our method better approximates the limiting distribution of the adaptive LASSO estimator and produces more accurate inference than the sandwich formula. The simulation results also indicate the possibility of extending the applications to the adaptive elastic-net penalty. We further demonstrate our method using data from a phase III clinical trial in prostate cancer.

  4. Myocardial perfusion magnetic resonance imaging using sliding-window conjugate-gradient highly constrained back-projection reconstruction for detection of coronary artery disease.

    PubMed

    Ma, Heng; Yang, Jun; Liu, Jing; Ge, Lan; An, Jing; Tang, Qing; Li, Han; Zhang, Yu; Chen, David; Wang, Yong; Liu, Jiabin; Liang, Zhigang; Lin, Kai; Jin, Lixin; Bi, Xiaoming; Li, Kuncheng; Li, Debiao

    2012-04-15

    Myocardial perfusion magnetic resonance imaging (MRI) with sliding-window conjugate-gradient highly constrained back-projection reconstruction (SW-CG-HYPR) allows whole left ventricular coverage, improved temporal and spatial resolution and signal/noise ratio, and reduced cardiac motion-related image artifacts. The accuracy of this technique for detecting coronary artery disease (CAD) has not been determined in a large number of patients. We prospectively evaluated the diagnostic performance of myocardial perfusion MRI with SW-CG-HYPR in patients with suspected CAD. A total of 50 consecutive patients who were scheduled for coronary angiography with suspected CAD underwent myocardial perfusion MRI with SW-CG-HYPR at 3.0 T. The perfusion defects were interpreted qualitatively by 2 blinded observers and were correlated with x-ray angiographic stenoses ≥50%. The prevalence of CAD was 56%. In the per-patient analysis, the sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of SW-CG-HYPR were 96% (95% confidence interval 82% to 100%), 82% (95% confidence interval 60% to 95%), 87% (95% confidence interval 70% to 96%), 95% (95% confidence interval 74% to 100%), and 90% (95% confidence interval 82% to 98%), respectively. In the per-vessel analysis, the corresponding values were 98% (95% confidence interval 91% to 100%), 89% (95% confidence interval 80% to 94%), 86% (95% confidence interval 76% to 93%), 99% (95% confidence interval 93% to 100%), and 93% (95% confidence interval 89% to 97%), respectively. In conclusion, myocardial perfusion MRI using SW-CG-HYPR allows whole left ventricular coverage and high resolution and has high diagnostic accuracy in patients with suspected CAD. Copyright © 2012 Elsevier Inc. All rights reserved.
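Per-patient figures like these are binomial proportions with confidence intervals. A generic sketch using the Wilson score interval (the study's exact interval method is not stated, and the counts in the test are hypothetical):

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV, each with a Wilson 95% CI."""
    return {
        "sensitivity": (tp / (tp + fn), wilson_ci(tp, tp + fn)),
        "specificity": (tn / (tn + fp), wilson_ci(tn, tn + fp)),
        "ppv": (tp / (tp + fp), wilson_ci(tp, tp + fp)),
        "npv": (tn / (tn + fn), wilson_ci(tn, tn + fn)),
    }
```

The Wilson interval stays inside [0, 1] even for proportions near 100%, which matters for counts as small as the per-patient cells here.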

  5. Multivitamin use and risk of stroke mortality: the Japan collaborative cohort study.

    PubMed

    Dong, Jia-Yi; Iso, Hiroyasu; Kitamura, Akihiko; Tamakoshi, Akiko

    2015-05-01

    The effect of multivitamin supplements on stroke risk is uncertain. We aimed to examine the association between multivitamin use and risk of death from stroke and its subtypes. A total of 72 180 Japanese men and women free from cardiovascular diseases and cancers at baseline in 1988 to 1990 were followed up until December 31, 2009. Lifestyle data, including multivitamin use, were collected using self-administered questionnaires. Cox proportional hazards regression models were used to estimate hazard ratios (HRs) of total stroke and its subtypes in relation to multivitamin use. During a median follow-up of 19.1 years, we identified 2087 deaths from stroke, including 1148 ischemic strokes and 877 hemorrhagic strokes. After adjustment for potential confounders, multivitamin use was associated with a lower but borderline-significant risk of death from total stroke (HR, 0.87; 95% confidence interval, 0.76-1.01), primarily ischemic stroke (HR, 0.80; 95% confidence interval, 0.63-1.01), but not hemorrhagic stroke (HR, 0.96; 95% confidence interval, 0.78-1.18). In a subgroup analysis, there was a significant association between multivitamin use and lower risk of mortality from total stroke among people with fruit and vegetable intake <3 times/d (HR, 0.80; 95% confidence interval, 0.65-0.98). That association seemed to be more evident among regular users than casual users. Similar results were found for ischemic stroke. Multivitamin use, particularly frequent use, was associated with reduced risk of total and ischemic stroke mortality among Japanese people with lower intake of fruits and vegetables. © 2015 American Heart Association, Inc.
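Hazard ratios in cohort abstracts like this one come from fitted Cox models; once a log-hazard coefficient and its standard error are in hand, the HR and its 95% CI follow by exponentiation. A sketch of that last step only (not the model fitting, and the numbers in the test are illustrative, not the study's):

```python
from math import exp, log

def hazard_ratio_ci(beta, se, z=1.96):
    """Hazard ratio and 95% CI from a Cox log-hazard coefficient
    and its standard error: HR = exp(beta), CI = exp(beta +/- z*se)."""
    return exp(beta), (exp(beta - z * se), exp(beta + z * se))

def se_from_ci(lo, hi, z=1.96):
    """Recover the implied log-scale standard error from a published
    ratio-type confidence interval."""
    return (log(hi) - log(lo)) / (2 * z)
```

`se_from_ci` inverts a published interval to recover the implied standard error, a common trick in meta-analysis when only the CI is reported.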

  6. CYP17 genetic polymorphism, breast cancer, and breast cancer risk factors.

    PubMed

    Ambrosone, Christine B; Moysich, Kirsten B; Furberg, Helena; Freudenheim, Jo L; Bowman, Elise D; Ahmed, Sabrina; Graham, Saxon; Vena, John E; Shields, Peter G

    2003-01-01

    Findings from previous studies regarding the association between the CYP17 genotype and breast cancer are inconsistent. We investigated the role of the MspAI genetic polymorphism in the 5' region of CYP17 on risk of breast cancer and as a modifier of reproductive risk factors. Questionnaire and genotyping data were obtained from a population-based, case-control study of premenopausal (n = 182) and postmenopausal (n = 214) European-American Caucasian women in western New York. Cases and controls were frequency matched by age and by county of residence. Odds ratios and 95% confidence intervals were used to estimate relative risks. The CYP17 genotype was not associated with breast cancer risk; however, controls with the A2/A2 genotype (associated with higher estrogens) had earlier menarche and earlier first full-term pregnancy. Premenopausal women with A1/A1 genotypes, but not with A2 alleles, were at significantly decreased risk with late age at menarche (odds ratio = 0.37, 95% confidence interval = 0.14-0.99), and at increased risk with late age at first full-term pregnancy (odds ratio = 4.30, 95% confidence interval = 1.46-12.67) and with use of oral contraceptives (odds ratio = 3.24, 95% confidence interval = 1.08-9.73). Associations were weaker among postmenopausal women. These results suggest that the effects of factors that may alter breast cancer risk through a hormonal mechanism may be less important among premenopausal women with putative higher lifetime exposures to circulating estrogens related to the CYP17 A2 allele.
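Case-control estimates like the ones above are odds ratios from 2x2 tables. A minimal sketch with the Woolf log-interval; the cell counts in the test are hypothetical, not data from this study:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio from a 2x2 table with a Woolf (log-scale) 95% CI.
    a: exposed cases, b: exposed controls,
    c: unexposed cases, d: unexposed controls."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    return or_, (exp(log(or_) - z * se), exp(log(or_) + z * se))
```

An interval excluding 1.0 corresponds to the "significantly decreased/increased risk" language used in the abstract.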

  7. Toward earlier detection of choroidal neovascularization secondary to age-related macular degeneration: multicenter evaluation of a preferential hyperacuity perimeter designed as a home device.

    PubMed

    Loewenstein, Anat; Ferencz, Joseph R; Lang, Yaron; Yeshurun, Itamar; Pollack, Ayala; Siegal, Ruth; Lifshitz, Tova; Karp, Joseph; Roth, Daniel; Bronner, Guri; Brown, Justin; Mansour, Sam; Friedman, Scott; Michels, Mark; Johnston, Richards; Rapp, Moshe; Havilio, Moshe; Rafaeli, Omer; Manor, Yair

    2010-01-01

    The primary purpose of this study was to evaluate the ability of a home device preferential hyperacuity perimeter to discriminate between patients with choroidal neovascularization (CNV) and intermediate age-related macular degeneration (AMD), and the secondary purpose was to investigate the dependence of sensitivity on lesion characteristics. All participants were tested with the home device in an unsupervised mode. The first part of this work was retrospective using tests performed by patients with intermediate AMD and newly diagnosed CNV. In the second part, the classifier was prospectively challenged with tests performed by patients with intermediate AMD and newly diagnosed CNV. The dependence of sensitivity on lesion characteristics was estimated with tests performed by patients with CNV of both parts. In 66 eyes with CNV and 65 eyes with intermediate AMD, both sensitivity and specificity were 0.85. In the retrospective part (34 CNV and 43 intermediate AMD), sensitivity and specificity were 0.85 +/- 0.12 (95% confidence interval) and 0.84 +/- 0.11 (95% confidence interval), respectively. In the prospective part (32 CNV and 22 intermediate AMD), sensitivity and specificity were 0.84 +/- 0.13 (95% confidence interval) and 0.86 +/- 0.14 (95% confidence interval), respectively. Chi-square analysis showed no dependence of sensitivity on type (P = 0.44), location (P = 0.243), or size (P = 0.73) of the CNV lesions. A home device preferential hyperacuity perimeter has good sensitivity and specificity in discriminating between patients with newly diagnosed CNV and intermediate AMD. Sensitivity is not dependent on lesion characteristics.

  8. Allopurinol and Cardiovascular Outcomes in Adults With Hypertension.

    PubMed

    MacIsaac, Rachael L; Salatzki, Janek; Higgins, Peter; Walters, Matthew R; Padmanabhan, Sandosh; Dominiczak, Anna F; Touyz, Rhian M; Dawson, Jesse

    2016-03-01

    Allopurinol lowers blood pressure in adolescents and has other vasoprotective effects. Whether similar benefits occur in older individuals remains unclear. We hypothesized that allopurinol is associated with improved cardiovascular outcomes in older adults with hypertension. Data from the United Kingdom Clinical Research Practice Datalink were used. Multivariate Cox-proportional hazard models were applied to estimate hazard ratios for stroke and cardiac events (defined as myocardial infarction or acute coronary syndrome) associated with allopurinol use over a 10-year period in adults aged >65 years with hypertension. A propensity-matched design was used to reduce potential for confounding. Allopurinol exposure was a time-dependent variable and was defined as any exposure and then as high (≥300 mg daily) or low-dose exposure. A total of 2032 allopurinol-exposed patients and 2032 matched nonexposed patients were studied. Allopurinol use was associated with a significantly lower risk of both stroke (hazard ratio, 0.50; 95% confidence interval, 0.32-0.80) and cardiac events (hazard ratio, 0.61; 95% confidence interval, 0.43-0.87) than nonexposed control patients. In exposed patients, high-dose treatment with allopurinol (n=1052) was associated with a significantly lower risk of both stroke (hazard ratio, 0.58; 95% confidence interval, 0.36-0.94) and cardiac events (hazard ratio, 0.65; 95% confidence interval, 0.46-0.93) than low-dose treatment (n=980). Allopurinol use is associated with lower rates of stroke and cardiac events in older adults with hypertension, particularly at higher doses. Prospective clinical trials are needed to evaluate whether allopurinol improves cardiovascular outcomes in adults with hypertension. © 2016 American Heart Association, Inc.

  9. Immediate Vascular Imaging Needed for Efficient Triage of Patients With Acute Ischemic Stroke Initially Admitted to Nonthrombectomy Centers.

    PubMed

    Boulouis, Gregoire; Siddiqui, Khawja-Ahmeruddin; Lauer, Arne; Charidimou, Andreas; Regenhardt, Robert W; Viswanathan, Anand; Leslie-Mazwi, Thabele M; Rost, Natalia; Schwamm, Lee H

    2017-08-01

    Current guidelines for endovascular thrombectomy (EVT) used to select patients for transfer to thrombectomy-capable stroke centers (TSC) may result in unnecessary transfers. We sought to determine the impact of simulated baseline vascular imaging on reducing unnecessary transfers and the clinical-imaging factors associated with receiving EVT after transfer. We identified patients with stroke transferred for EVT from 30 referring hospitals between 2010 and 2016 who had a referring hospital brain computed tomography scan and repeat imaging on TSC arrival available for review. Initial Alberta Stroke Program Early CT scores and TSC vascular occlusion level were assessed. The main outcome variable was receiving EVT at the TSC. Models were simulated to derive optimal triaging parameters for EVT. A total of 508 patients were included in the analysis (mean age, 69±14 years; 42% women). Application at referring hospitals of current guidelines for EVT yielded a sensitivity of 92% (95% confidence interval, 0.84-0.96) and specificity of 53% (95% confidence interval, 0.48-0.57) for receiving EVT at the TSC. Repeated simulations identified optimal selection criteria for transfer as National Institutes of Health Stroke Scale >8 plus baseline vascular imaging (sensitivity=91%; 95% confidence interval, 0.83-0.95; and specificity=80%; 95% confidence interval, 0.75-0.83). Our findings provide quantitative estimates of the claim that implementing vascular imaging at the referring hospitals would result in significantly fewer futile transfers for EVT and a data-driven framework to inform transfer policies. © 2017 American Heart Association, Inc.

  10. Treatment selection in a randomized clinical trial via covariate-specific treatment effect curves.

    PubMed

    Ma, Yunbei; Zhou, Xiao-Hua

    2017-02-01

    For time-to-event data in a randomized clinical trial, we proposed two new methods for selecting an optimal treatment for a patient based on the covariate-specific treatment effect curve, which is used to represent the clinical utility of a predictive biomarker. To select an optimal treatment for a patient with a specific biomarker value, we proposed pointwise confidence intervals for each covariate-specific treatment effect curve and the difference between covariate-specific treatment effect curves of two treatments. Furthermore, to select an optimal treatment for a future biomarker-defined subpopulation of patients, we proposed confidence bands for each covariate-specific treatment effect curve and the difference between each pair of covariate-specific treatment effect curves over a fixed interval of biomarker values. We constructed the confidence bands based on a resampling technique. We also conducted simulation studies to evaluate finite-sample properties of the proposed estimation methods. Finally, we illustrated the application of the proposed method in a real-world data set.
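The abstract constructs its bands "based on a resampling technique"; a percentile bootstrap for a single pointwise interval conveys the flavor. This is a generic sketch, not the authors' band construction, which must additionally control coverage across the whole biomarker interval:

```python
import random

def bootstrap_percentile_ci(data, statistic, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap CI for an arbitrary statistic.

    data: list of observations; statistic: callable list -> float.
    """
    rng = random.Random(seed)
    n = len(data)
    stats = sorted(
        statistic([data[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_boot)
    )
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

Simultaneous bands are obtained by widening pointwise intervals until the resampled curves fall inside them jointly, which is the extra step the paper's method handles.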

  11. Estimation of the ARNO model baseflow parameters using daily streamflow data

    NASA Astrophysics Data System (ADS)

    Abdulla, F. A.; Lettenmaier, D. P.; Liang, Xu

    1999-09-01

    An approach is described for estimation of baseflow parameters of the ARNO model, using historical baseflow recession sequences extracted from daily streamflow records. This approach allows four of the model parameters to be estimated without rainfall data, and effectively facilitates partitioning of the parameter estimation procedure so that parsimonious search procedures can be used to estimate the remaining storm response parameters separately. Three methods of optimization are evaluated for estimation of the four baseflow parameters: the downhill Simplex (S), Simulated Annealing combined with the Simplex method (SA), and Shuffled Complex Evolution (SCE). These estimation procedures are explored in conjunction with four objective functions: (1) ordinary least squares; (2) ordinary least squares with Box-Cox transformation; (3) ordinary least squares on prewhitened residuals; (4) ordinary least squares applied to prewhitened residuals with Box-Cox transformation. The effects of changing the seed of the random generator for both the SA and SCE methods are also explored, as are the effects of the bounds of the parameters. Although all schemes converge to the same values of the objective function, the SCE method was found to be less sensitive to these issues than both the SA and Simplex schemes. Parameter uncertainty and interactions are investigated through estimation of the variance-covariance matrix and confidence intervals. As expected, the parameters were found to be correlated and the covariance matrix was found not to be diagonal. Furthermore, the linearized confidence interval theory failed for about one-fourth of the catchments, while the maximum likelihood theory did not fail for any of the catchments.
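Objective (2) above, ordinary least squares after a Box-Cox transformation, is compact enough to sketch. The exponent `lam=0.3` below is an arbitrary illustrative choice, not a value from the paper:

```python
from math import log

def box_cox(y, lam):
    """Box-Cox transform, used here to stabilise residual variance:
    (y^lam - 1)/lam, with the log limit at lam = 0."""
    return log(y) if lam == 0 else (y ** lam - 1) / lam

def transformed_sse(observed, simulated, lam=0.3):
    """Sum of squared errors on Box-Cox transformed flows
    (a sketch of objective function (2) from the abstract)."""
    return sum((box_cox(o, lam) - box_cox(s, lam)) ** 2
               for o, s in zip(observed, simulated))
```

Any of the optimizers named in the abstract (Simplex, SA, SCE) would then minimise `transformed_sse` over the four baseflow parameters driving the simulated series.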

  12. Transmissibility of the Ice Bucket Challenge among globally influential celebrities: retrospective cohort study.

    PubMed

    Ni, Michael Y; Chan, Brandford H Y; Leung, Gabriel M; Lau, Eric H Y; Pang, Herbert

    2014-12-16

    To estimate the transmissibility of the Ice Bucket Challenge among globally influential celebrities and to identify associated risk factors. Retrospective cohort study. Social media (YouTube, Facebook, Twitter, Instagram). David Beckham, Cristiano Ronaldo, Benedict Cumberbatch, Stephen Hawking, Mark Zuckerberg, Oprah Winfrey, Homer Simpson, and Kermit the Frog were defined as index cases. We included contacts up to the fifth generation seeded from each index case and enrolled a total of 99 participants into the cohort. Basic reproduction number R0, serial interval of accepting the challenge, and odds ratios of associated risk factors based on fully observed nomination chains; R0 is a measure of transmissibility and is defined as the number of secondary cases generated by a single index in a fully susceptible population. Serial interval is the duration between onset of a primary case and onset of its secondary cases. Based on the empirical data and assuming a branching process we estimated a mean R0 of 1.43 (95% confidence interval 1.23 to 1.65) and a mean serial interval for accepting the challenge of 2.1 days (median 1 day). Higher log (base 10) net worth of the participants was positively associated with transmission (odds ratio 1.63, 95% confidence interval 1.06 to 2.50), adjusting for age and sex. The Ice Bucket Challenge was moderately transmissible among a group of globally influential celebrities, in the range of the pandemic A/H1N1 2009 influenza. The challenge was more likely to be spread by richer celebrities, perhaps in part reflecting greater social influence. © Ni et al 2014.
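With fully observed nomination chains, the simplest estimator of R0 is the empirical mean number of secondary cases per case. A sketch of that naive version only; the paper's branching-process estimate with a confidence interval requires more machinery, and the chains in the test are invented:

```python
def estimate_r0(nomination_chains):
    """Mean number of secondary cases per case across fully observed
    chains. Each chain is a dict mapping a case to the list of
    secondary cases it generated (empty list = chain terminates)."""
    offspring = [len(children)
                 for chain in nomination_chains
                 for children in chain.values()]
    return sum(offspring) / len(offspring)
```

Counting terminal cases (those with zero nominations accepted) is essential; dropping them would bias R0 upward.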

  13. Verification of a model for the detection of intrauterine growth restriction (IUGR) by receiver operating characteristics (ROC)

    NASA Astrophysics Data System (ADS)

    Liu, Pengbo; Mongelli, Max; Mondry, Adrian

    2004-07-01

    The purpose of this study is to verify by Receiver Operating Characteristics (ROC) a mathematical model supporting the hypothesis that IUGR can be diagnosed by estimating growth velocity. The ROC compare computerized simulation results with clinical data from 325 pregnant British women. Each patient had 6 consecutive ultrasound examinations for fetal abdominal circumference (fac). Customized and un-customized fetal weights were calculated according to Hadlock"s formula. IUGR was diagnosed by the clinical standard, i.e. estimated weight below the tenth percentile. Growth velocity was estimated by calculating the changes of fac (Dzfac/dt) at various time intervals from 3 to 10 weeks. Finally, ROC was used to compare the methods. At 3~4 weeks scan interval, the area under the ROC curve is 0.68 for customized data and 0.66 for the uncustomized data with 95% confidence interval. Comparison between simulation data and real pregnancies verified that the model is clinically acceptable.

  14. Degrees of Freedom for Allan Deviation Estimates of Multiple Clocks

    DTIC Science & Technology

    2016-04-01

    Allan deviation will be represented by σ and standard deviation will be represented by δ. In practice, when the Allan deviation of a... the Allan deviation of standard noise types. Once the number of degrees of freedom is known, an approximate confidence interval can be assigned by... measurement errors from paired difference data. We extend this approach by using the Allan deviation to estimate the error in a frequency standard

  15. Familial risk of epilepsy: a population-based study

    PubMed Central

    Peljto, Anna L.; Barker-Cummings, Christie; Vasoli, Vincent M.; Leibson, Cynthia L.; Hauser, W. Allen; Buchhalter, Jeffrey R.

    2014-01-01

    Almost all previous studies of familial risk of epilepsy have had potentially serious methodological limitations. Our goal was to address these limitations and provide more rigorous estimates of familial risk in a population-based study. We used the unique resources of the Rochester Epidemiology Project to identify all 660 Rochester, Minnesota residents born in 1920 or later with incidence of epilepsy from 1935–94 (probands) and their 2439 first-degree relatives who resided in Olmsted County. We assessed incidence of epilepsy in relatives by comprehensive review of the relatives’ medical records, and estimated age-specific cumulative incidence and standardized incidence ratios for epilepsy in relatives compared with the general population, according to proband and relative characteristics. Among relatives of all probands, cumulative incidence of epilepsy to age 40 was 4.7%, and risk was increased 3.3-fold (95% confidence interval 2.75–5.99) compared with population incidence. Risk was increased to the greatest extent in relatives of probands with idiopathic generalized epilepsies (standardized incidence ratio 6.0) and epilepsies associated with intellectual or motor disability presumed present from birth, which we denoted ‘prenatal/developmental cause’ (standardized incidence ratio 4.3). Among relatives of probands with epilepsy without identified cause (including epilepsies classified as ‘idiopathic’ or ‘unknown cause’), risk was significantly increased for epilepsy of prenatal/developmental cause (standardized incidence ratio 4.1). Similarly, among relatives of probands with prenatal/developmental cause, risk was significantly increased for epilepsies without identified cause (standardized incidence ratio 3.8). In relatives of probands with generalized epilepsy, standardized incidence ratios were 8.3 (95% confidence interval 2.93–15.31) for generalized epilepsy and 2.5 (95% confidence interval 0.92–4.00) for focal epilepsy. 
In relatives of probands with focal epilepsy, standardized incidence ratios were 1.0 (95% confidence interval 0.00–2.19) for generalized epilepsy and 2.6 (95% confidence interval 1.19–4.26) for focal epilepsy. Epilepsy incidence was greater in offspring of female probands than in offspring of male probands, and this maternal effect was restricted to offspring of probands with focal epilepsy. The results suggest that risks for epilepsies of unknown and prenatal/developmental cause may be influenced by shared genetic mechanisms. They also suggest that some of the genetic influences on generalized and focal epilepsies are distinct. However, the similar increase in risk for focal epilepsy among relatives of probands with either generalized (2.5-fold) or focal epilepsy (2.6-fold) may reflect some coexisting shared genetic influences. PMID:24468822
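The standardized incidence ratios above are observed-over-expected event counts. A generic sketch with a log-scale normal approximation for the interval; exact Poisson limits are preferable when the observed count is small, as in several cells of this study, and the counts in the test are hypothetical:

```python
from math import exp, sqrt

def sir_ci(observed, expected, z=1.96):
    """Standardized incidence ratio O/E with an approximate 95% CI.
    Treats O as Poisson, so SE of log(SIR) is 1/sqrt(O)."""
    sir = observed / expected
    se_log = 1 / sqrt(observed)
    return sir, (sir * exp(-z * se_log), sir * exp(z * se_log))
```

`expected` is the count implied by applying population incidence rates to the relatives' person-time, which is how such comparisons to "population incidence" are made.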

  16. Explorations in Statistics: Confidence Intervals

    ERIC Educational Resources Information Center

    Curran-Everett, Douglas

    2009-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This third installment of "Explorations in Statistics" investigates confidence intervals. A confidence interval is a range that we expect, with some level of confidence, to include the true value of a population parameter…
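The classic interval that an exploration like this builds up to is the Student's t interval for a population mean. A minimal sketch; since the Python standard library has no t quantile function, the critical value is passed in by the caller:

```python
from math import sqrt
from statistics import mean, stdev

def t_interval(sample, t_crit):
    """Two-sided t confidence interval for the population mean.
    t_crit: critical value for df = n - 1 at the desired level
    (e.g. 2.262 for 95% with n = 10), supplied by the caller."""
    n = len(sample)
    m = mean(sample)
    se = stdev(sample) / sqrt(n)
    return m - t_crit * se, m + t_crit * se
```

Repeating this over many simulated samples and counting how often the interval covers the true mean is exactly the kind of active exploration the column describes.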

  17. Exposure to Ambient Fine Particulate Air Pollution in Utero as a Risk Factor for Child Stunting in Bangladesh

    PubMed Central

    Canning, David

    2017-01-01

    Pregnant mothers in Bangladesh are exposed to very high and worsening levels of ambient air pollution. Maternal exposure to fine particulate matter has been associated with low birth weight at much lower levels of exposure, leading us to suspect the potentially large effects of air pollution on stunting in children in Bangladesh. We estimate the relationship between exposure to air pollution in utero and child stunting by pooling outcome data from four waves of the nationally representative Bangladesh Demographic and Health Survey conducted between 2004 and 2014, and calculating children’s exposure to ambient fine particulate matter in utero using high resolution satellite data. We find significant increases in the relative risk of child stunting, wasting, and underweight with higher levels of in utero exposure to air pollution, after controlling for other factors that have been found to contribute to child anthropometric failure. We estimate the relative risk of stunting in the second, third, and fourth quartiles of exposure as 1.074 (95% confidence interval: 1.014–1.138), 1.150 (95% confidence interval: 1.069–1.237), and 1.132 (95% confidence interval: 1.031–1.243), respectively. Over half of all children in Bangladesh in our sample were exposed to an annual ambient fine particulate matter level in excess of 46 µg/m³; these children had a relative risk of stunting over 1.13 times that of children in the lowest quartile of exposure. Reducing air pollution in Bangladesh could significantly contribute to the Sustainable Development Goal of reducing child stunting. PMID:29295507

  18. Incidence of Mastitis in the Neonatal Period in a Traditional Breastfeeding Society: Results of a Cohort Study.

    PubMed

    Khanal, Vishnu; Scott, Jane A; Lee, Andy H; Binns, Colin W

    2015-12-01

    Mastitis is a painful problem experienced by breastfeeding women, especially in the first few weeks postpartum. There have been limited studies of the incidence of mastitis from traditionally breastfeeding societies in South Asia. This study investigated the incidence, determinants, and management of mastitis in the first month postpartum, as well as its association with breastfeeding outcomes at 4 and 6 months postpartum, in western Nepal. Subjects were a subsample of 338 mothers participating in a larger prospective cohort study conducted in 2014 in western Nepal. Mothers were interviewed during the first month postpartum and again at 4 and 6 months to obtain information on breastfeeding practices. The association of mastitis and determinant variables was investigated using multivariable logistic regression, and the association with breastfeeding duration was examined using Kaplan-Meier estimation. The incidence of mastitis was 8.0% (95% confidence interval, 5.1%, 10.8%) in the first month postpartum. Prelacteal feeding (adjusted odds ratio = 2.76; 95% confidence interval, 1.03, 7.40) and cesarean section (adjusted odds ratio = 3.52; 95% confidence interval, 1.09, 11.42) were associated with a higher likelihood of mastitis. Kaplan-Meier estimation showed no significant difference in the duration of exclusive breastfeeding among the mothers who experienced an episode of mastitis and those who did not. Roughly one in 10 (8.0%) women experienced mastitis in the first month postpartum, and there appeared to be little effect of mastitis on breastfeeding outcomes. Traditional breastfeeding practices should be encouraged, and the management of mastitis should be included as a part of lactation promotion.
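The 8.0% (5.1%, 10.8%) incidence figure is a binomial proportion with a normal-approximation interval. A sketch that roughly reproduces it, assuming about 27 cases among the 338 mothers; the exact case count is not given in the abstract:

```python
from math import sqrt

def wald_proportion_ci(events, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for an incidence proportion,
    e.g. mastitis cases in the first month postpartum."""
    p = events / n
    se = sqrt(p * (1 - p) / n)
    return p, (p - z * se, p + z * se)
```

With the assumed counts, `wald_proportion_ci(27, 338)` gives approximately 0.080 (0.051, 0.109), in line with the reported interval.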

  19. Comparison of reporting of seat belt use by police and crash investigators: variation in agreement by injury severity.

    PubMed

    Schiff, Melissa A; Cummings, Peter

    2004-11-01

    To evaluate agreement between police and trained investigators regarding seat belt use by crash victims, according to injury severity. We used data from the National Accident Sampling System Crashworthiness Data System (CDS) for front seat occupants, 16 years and older, in crashes during 1993-2000. Crashworthiness Data System investigators determined belt use from vehicle inspection, interviews, and medical record information; their assessment was considered the gold standard for this analysis. Occupant severity of injury was categorized in five levels from no injuries to death. We estimated the sensitivity, specificity, and area under receiver operating characteristic curves for police reports of belt use. Among 48,858 occupants, sensitivity of a police report that a belt was used was 95.8% overall and varied only modestly by injury severity. Specificity of a police report that a belt was not used was 69.1% overall; it was lowest among the uninjured (53.2%) and greatest among the dead (90.4%). The area under the curve was 0.82 (95% confidence interval 0.82-0.83) overall; it was lowest among those not injured (0.75, 95% confidence interval 0.74-0.76) and increased with injury severity to 0.91 (95% confidence interval 0.90-0.93) among those who died. Police usually classify belted crash victims as belted, regardless of injury severity. However, they often classify unbelted survivors as belted. This misclassification may result in exaggerated estimates of seat belt effectiveness in some studies.
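The area under the ROC curve reported above has a useful probabilistic reading: the chance that a randomly chosen positive case scores higher than a randomly chosen negative one. A brute-force sketch of that Mann-Whitney identity (fine for illustration; rank-based formulas are preferred for samples as large as this study's):

```python
def roc_auc(scores_pos, scores_neg):
    """AUC as the Mann-Whitney probability that a random positive
    outranks a random negative; ties count one half."""
    wins = sum((p > n_) + 0.5 * (p == n_)
               for p in scores_pos
               for n_ in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.5 means the classifier is no better than chance; the 0.82 overall figure above indicates good but imperfect agreement.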

  20. Herpes zoster correlates with pyogenic liver abscesses in Taiwan.

    PubMed

    Mei-Ling, Shen; Kuan-Fu, Liao; Sung-Mao, Tsai; Cheng-Li, Lin Ms; Shih-Wei, Lai

    2016-12-01

    The purpose of the paper was to explore the relationship between herpes zoster and pyogenic liver abscesses in Taiwan. This was a nationwide cohort study. Using the database of the Taiwan National Health Insurance Program, 33,049 subjects aged 20-84 years who were newly diagnosed with herpes zoster from 1998 to 2010 were selected as the herpes zoster group, and 131,707 randomly selected subjects without herpes zoster formed the non-herpes zoster group. Both groups were matched by sex, age, other comorbidities, and the index year of the herpes zoster diagnosis. The incidence of pyogenic liver abscesses at the end of 2011 was then estimated. A multivariable Cox proportional hazards regression model was used to estimate the hazard ratio and 95% confidence interval for pyogenic liver abscesses associated with herpes zoster and other comorbidities. The overall incidence rate was 1.38-fold higher in the herpes zoster group than in the non-herpes zoster group (4.47 vs. 3.25 per 10,000 person-years; 95% confidence interval 1.32, 1.44). After controlling for potential confounding factors, the adjusted hazard ratio of pyogenic liver abscesses was 1.34 in the herpes zoster group (95% confidence interval 1.05, 1.72) compared with the non-herpes zoster group. Male sex, age, presence of biliary stones, chronic kidney disease, chronic liver disease, cancer, and diabetes mellitus were also significantly associated with pyogenic liver abscesses. Herpes zoster is associated with an increased hazard of developing pyogenic liver abscesses.

  1. Exposure to Ambient Fine Particulate Air Pollution in Utero as a Risk Factor for Child Stunting in Bangladesh.

    PubMed

    Goyal, Nihit; Canning, David

    2017-12-23

    Pregnant mothers in Bangladesh are exposed to very high and worsening levels of ambient air pollution. Maternal exposure to fine particulate matter has been associated with low birth weight at much lower levels of exposure, suggesting potentially large effects of air pollution on stunting among children in Bangladesh. We estimate the relationship between exposure to air pollution in utero and child stunting by pooling outcome data from four waves of the nationally representative Bangladesh Demographic and Health Survey conducted between 2004 and 2014, and calculating children's exposure to ambient fine particulate matter in utero using high resolution satellite data. We find significant increases in the relative risk of child stunting, wasting, and underweight with higher levels of in utero exposure to air pollution, after controlling for other factors that have been found to contribute to child anthropometric failure. We estimate the relative risk of stunting in the second, third, and fourth quartiles of exposure as 1.074 (95% confidence interval: 1.014-1.138), 1.150 (95% confidence interval: 1.069-1.237), and 1.132 (95% confidence interval: 1.031-1.243), respectively. Over half of all children in Bangladesh in our sample were exposed to an annual ambient fine particulate matter level in excess of 46 µg/m³; these children had a relative risk of stunting over 1.13 times that of children in the lowest quartile of exposure. Reducing air pollution in Bangladesh could significantly contribute to the Sustainable Development Goal of reducing child stunting.

  2. Recolonization of group B Streptococcus (GBS) in women with prior GBS genital colonization in pregnancy.

    PubMed

    Tam, Teresa; Bilinski, Ewa; Lombard, Emily

    2012-10-01

    The purpose of the study is to evaluate the incidence of women with prior GBS genital colonization who have recolonization in subsequent pregnancies. This is a retrospective cohort study of patients with a prior GBS genital colonization in pregnancy and a subsequent pregnancy with a recorded GBS culture result, from January 2000 through June 2007. GBS status was documented through GBS culture performed between 35 and 37 weeks' gestation. Exclusion criteria included pregnancies with unknown GBS status, patients with GBS bacteriuria, women with a previous neonate with GBS disease, and GBS findings prior to 35 weeks. Data were analyzed using SPSS 15.0. The sample proportion of subjects with GBS genital colonization and its confidence interval were computed to estimate the incidence rate. Logistic regression was performed to assess potential determinants of GBS colonization. Regression coefficients, odds ratios with associated confidence intervals, and p-values were reported. There were 371 pregnancies that met the test criteria. There were 151 subsequent pregnancies with GBS genital colonization and 220 without GBS recolonization. The incidence of GBS recolonization in patients with prior GBS genital colonization was 40.7% (95% confidence interval 35.7-45.69%). The incidence rate for the sample was significantly larger than 30% (p < .001), which is the estimated incidence rate for all pregnant women who are GBS carriers regardless of prior history. These results suggest that patients with a history of GBS are at a significantly higher risk of GBS recolonization in subsequent pregnancies.
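
    The reported incidence and its interval follow directly from the stated counts (151 recolonized of 371 pregnancies), and the comparison against the 30% background carriage rate can be sketched as a one-sample z-test:

```python
import math

# 151 of 371 subsequent pregnancies showed GBS recolonization
events, n = 151, 371
p = events / n
se = math.sqrt(p * (1 - p) / n)
lo, hi = p - 1.96 * se, p + 1.96 * se

# One-sample z-test against the 30% carriage rate for all pregnant women
p0 = 0.30
z = (p - p0) / math.sqrt(p0 * (1 - p0) / n)
print(f"incidence {p:.1%} (95% CI {lo:.1%}, {hi:.1%}), z = {z:.2f}")
```

    This reproduces 40.7% (35.7%, 45.7%); z ≈ 4.5 corresponds to the reported p < .001.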

  3. Quantitative Myocardial Perfusion Imaging Versus Visual Analysis in Diagnosing Myocardial Ischemia: A CE-MARC Substudy.

    PubMed

    Biglands, John D; Ibraheem, Montasir; Magee, Derek R; Radjenovic, Aleksandra; Plein, Sven; Greenwood, John P

    2018-05-01

    This study sought to compare the diagnostic accuracy of visual and quantitative analyses of myocardial perfusion cardiovascular magnetic resonance against a reference standard of quantitative coronary angiography. Visual analysis of perfusion cardiovascular magnetic resonance studies for assessing myocardial perfusion has been shown to have high diagnostic accuracy for coronary artery disease. However, only a few small studies have assessed the diagnostic accuracy of quantitative myocardial perfusion. This retrospective study included 128 patients randomly selected from the CE-MARC (Clinical Evaluation of Magnetic Resonance Imaging in Coronary Heart Disease) study population such that the distribution of risk factors and disease status was proportionate to the full population. Visual analysis results of cardiovascular magnetic resonance perfusion images, by consensus of 2 expert readers, were taken from the original study reports. Quantitative myocardial blood flow estimates were obtained using Fermi-constrained deconvolution. The reference standard for myocardial ischemia was a quantitative coronary x-ray angiogram stenosis severity of ≥70% diameter in any coronary artery of >2 mm diameter, or ≥50% in the left main stem. Diagnostic performance was calculated using receiver-operating characteristic curve analysis. The area under the curve for visual analysis was 0.88 (95% confidence interval: 0.81 to 0.95) with a sensitivity of 81.0% (95% confidence interval: 69.1% to 92.8%) and specificity of 86.0% (95% confidence interval: 78.7% to 93.4%). For quantitative stress myocardial blood flow the area under the curve was 0.89 (95% confidence interval: 0.83 to 0.96) with a sensitivity of 87.5% (95% confidence interval: 77.3% to 97.7%) and specificity of 84.5% (95% confidence interval: 76.8% to 92.3%). There was no statistically significant difference between the diagnostic performance of quantitative and visual analyses (p = 0.72). 
Incorporating rest myocardial blood flow values to generate a myocardial perfusion reserve did not significantly increase the quantitative analysis area under the curve (p = 0.79). Quantitative perfusion has a high diagnostic accuracy for detecting coronary artery disease but is not superior to visual analysis. The incorporation of rest perfusion imaging does not improve diagnostic accuracy in quantitative perfusion analysis.

  4. Provider use of a participatory decision-making style with youth and caregivers and satisfaction with pediatric asthma visits

    PubMed Central

    Sleath, Betsy; Carpenter, Delesha M; Coyne, Imelda; Davis, Scott A; Hayes Watson, Claire; Loughlin, Ceila E; Garcia, Nacire; Reuland, Daniel S; Tudor, Gail E

    2018-01-01

    Background We conducted a randomized controlled trial to test the effectiveness of an asthma question prompt list with video intervention to engage the youth during clinic visits. We examined whether the intervention was associated with 1) providers including youth and caregiver inputs more into asthma treatment regimens, 2) youth and caregivers rating providers as using more of a participatory decision-making style, and 3) youth and caregivers being more satisfied with visits. Methods English- or Spanish-speaking youth aged 11–17 years with persistent asthma and their caregivers were recruited from four pediatric clinics and randomized to the intervention or usual care groups. The youth in the intervention group watched the video with their caregivers on an iPad and completed a one-page asthma question prompt list before their clinic visits. All visits were audiotaped. Generalized estimating equations were used to analyze the data. Results Forty providers and their patients (n=359) participated in this study. Providers included youth input into the asthma management treatment regimens during 2.5% of visits and caregiver input during 3.3% of visits. The youth in the intervention group were significantly more likely to rate their providers as using more of a participatory decision-making style (odds ratio=1.7, 95% confidence interval=1.1, 2.5). White caregivers were significantly more likely to rate the providers as more participatory (odds ratio=2.3, 95% confidence interval=1.2, 4.4). Youth (beta=4.9, 95% confidence interval=3.3, 6.5) and caregivers (beta=7.5, 95% confidence interval=3.1, 12.0) who rated their providers as being more participatory were significantly more satisfied with their visits. Youth (beta=−1.9, 95% confidence interval=−3.4, −0.4) and caregivers (beta=−8.8, 95% confidence interval=−16.2, −1.3) who spoke Spanish at home were less satisfied with visits. 
Conclusion The intervention did not increase the inclusion of youth and caregiver inputs into asthma treatment regimens. However, it did increase the youth’s perception of participatory decision-making style of the providers, and this in turn was associated with greater satisfaction. PMID:29785146

  5. Effects of weight loss interventions for adults who are obese on mortality, cardiovascular disease, and cancer: systematic review and meta-analysis.

    PubMed

    Ma, Chenhan; Avenell, Alison; Bolland, Mark; Hudson, Jemma; Stewart, Fiona; Robertson, Clare; Sharma, Pawana; Fraser, Cynthia; MacLennan, Graeme

    2017-11-14

    Objective  To assess whether weight loss interventions for adults with obesity affect all cause, cardiovascular, and cancer mortality, cardiovascular disease, cancer, and body weight. Design  Systematic review and meta-analysis of randomised controlled trials (RCTs) using random effects, estimating risk ratios and mean differences. Heterogeneity investigated using Cochran's Q and I² statistics. Quality of evidence assessed by GRADE criteria. Data sources  Medline, Embase, the Cochrane Central Register of Controlled Trials, and full texts in our trials' registry for data not evident in databases. Authors were contacted for unpublished data. Eligibility criteria for selecting studies  RCTs of dietary interventions targeting weight loss, with or without exercise advice or programmes, for adults with obesity and follow-up ≥1 year. Results  54 RCTs with 30 206 participants were identified. All but one trial evaluated low fat, weight reducing diets. For the primary outcome, high quality evidence showed that weight loss interventions decrease all cause mortality (34 trials, 685 events; risk ratio 0.82, 95% confidence interval 0.71 to 0.95), with six fewer deaths per 1000 participants (95% confidence interval two to 10). For other primary outcomes moderate quality evidence showed an effect on cardiovascular mortality (eight trials, 134 events; risk ratio 0.93, 95% confidence interval 0.67 to 1.31), and very low quality evidence showed an effect on cancer mortality (eight trials, 34 events; risk ratio 0.58, 95% confidence interval 0.30 to 1.11). Twenty four trials (15 176 participants) reported high quality evidence on participants developing new cardiovascular events (1043 events; risk ratio 0.93, 95% confidence interval 0.83 to 1.04). Nineteen trials (6330 participants) provided very low quality evidence on participants developing new cancers (103 events; risk ratio 0.92, 95% confidence interval 0.63 to 1.36). 
Conclusions  Weight reducing diets, usually low in fat and saturated fat, with or without exercise advice or programmes, may reduce premature all cause mortality in adults with obesity. Systematic review registration  PROSPERO CRD42016033217.
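
    A random-effects pooling of risk ratios of the kind reported above is commonly done with the DerSimonian-Laird estimator. The sketch below uses invented per-trial log risk ratios and standard errors (the abstract reports only pooled results), not data from this review:

```python
import math

def dersimonian_laird(log_rrs, ses, z=1.96):
    """Random-effects pooled risk ratio (DerSimonian-Laird) on the log scale."""
    w = [1 / s**2 for s in ses]                      # inverse-variance weights
    fixed = sum(wi * y for wi, y in zip(w, log_rrs)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rrs))  # Cochran's Q
    df = len(log_rrs) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-study variance
    w_re = [1 / (s**2 + tau2) for s in ses]
    pooled = sum(wi * y for wi, y in zip(w_re, log_rrs)) / sum(w_re)
    se = 1 / math.sqrt(sum(w_re))
    return (math.exp(pooled),
            math.exp(pooled - z * se),
            math.exp(pooled + z * se))

# Hypothetical per-trial log risk ratios and standard errors
rr, lo, hi = dersimonian_laird(
    [-0.22, -0.11, -0.36, -0.05], [0.10, 0.15, 0.20, 0.12])
print(f"pooled RR {rr:.2f} (95% CI {lo:.2f}, {hi:.2f})")
```

    When Q falls below its degrees of freedom, the between-study variance estimate is truncated at zero and the model reduces to the fixed-effect pooled estimate.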

  6. Prospective study of physical activity and risk of postmenopausal breast cancer

    PubMed Central

    Leitzmann, Michael F; Moore, Steven C; Peters, Tricia M; Lacey, James V; Schatzkin, Arthur; Schairer, Catherine; Brinton, Louise A; Albanes, Demetrius

    2008-01-01

    Introduction To prospectively examine the relation of total, vigorous and non-vigorous physical activity to postmenopausal breast cancer risk. Methods We studied 32,269 women enrolled in the Breast Cancer Detection Demonstration Project Follow-up Study. Usual physical activity (including household, occupational and leisure activities) throughout the previous year was assessed at baseline using a self-administered questionnaire. Postmenopausal breast cancer cases were identified through self-reports, death certificates and linkage to state cancer registries. A Cox proportional hazards regression was used to estimate the relative risk and 95% confidence intervals of postmenopausal breast cancer associated with physical activity. Results During 269,792 person-years of follow-up from 1987 to 1998, 1506 new incident cases of postmenopausal breast cancer were ascertained. After adjusting for potential risk factors of breast cancer, a weak inverse association between total physical activity and postmenopausal breast cancer was suggested (relative risk comparing extreme quintiles = 0.87; 95% confidence interval = 0.74 to 1.02; p for trend = 0.21). That relation was almost entirely contributed by vigorous activity (relative risk comparing extreme categories = 0.87; 95% confidence interval = 0.74 to 1.02; p for trend = 0.08). The inverse association with vigorous activity was limited to women who were lean (ie, body mass index <25.0 kg/m2: relative risk = 0.68; 95% confidence interval = 0.54 to 0.85). In contrast, no association with vigorous activity was noted among women who were overweight or obese (ie, body mass index ≥ 25.0 kg/m2: relative risk = 1.18; 95% confidence interval = 0.93 to 1.49; p for interaction = 0.008). Non-vigorous activity showed no relation to breast cancer (relative risk comparing extreme quintiles = 1.02; 95% confidence interval = 0.87 to 1.19; p for trend = 0.86). 
The physical activity and breast cancer relation was not specific to a certain hormone receptor subtype. Conclusions In this cohort of postmenopausal women, breast cancer risk reduction appeared to be limited to vigorous forms of activity; it was apparent among normal weight women but not overweight women, and the relation did not vary by hormone receptor status. Our findings suggest that physical activity acts through underlying biological mechanisms that are independent of body weight control. PMID:18976449

  7. Bayesian forecasting and uncertainty quantifying of stream flows using Metropolis–Hastings Markov Chain Monte Carlo algorithm

    DOE PAGES

    Wang, Hongrui; Wang, Cheng; Wang, Ying; ...

    2017-04-05

    This paper presents a Bayesian approach using the Metropolis-Hastings Markov Chain Monte Carlo algorithm and applies it to daily river flow rate forecasting and uncertainty quantification for the Zhujiachuan River, using data collected from Qiaotoubao Gage Station and 13 other gage stations in the Zhujiachuan watershed in China. The proposed method is also compared with conventional maximum likelihood estimation (MLE) for parameter estimation and quantification of associated uncertainties. While the Bayesian method performs similarly in estimating the mean value of daily flow rate, it outperforms the conventional MLE method in uncertainty quantification, providing a relatively narrower reliable interval than the MLE confidence interval and thus a more precise estimate, by using the related information from regional gage stations. As a result, the Bayesian MCMC method might be more favorable for uncertainty analysis and risk management.

  8. What’s Driving Uncertainty? The Model or the Model Parameters (What’s Driving Uncertainty? The influences of model and model parameters in data analysis)

    DOE PAGES

    Anderson-Cook, Christine Michaela

    2017-03-01

    Here, one of the substantial improvements to the practice of data analysis in recent decades is the change from reporting just a point estimate for a parameter or characteristic, to now including a summary of uncertainty for that estimate. Understanding the precision of the estimate for the quantity of interest provides better understanding of what to expect and how well we are able to predict future behavior from the process. For example, when we report a sample average as an estimate of the population mean, it is good practice to also provide a confidence interval (or credible interval, if you are doing a Bayesian analysis) to accompany that summary. This helps to calibrate what ranges of values are reasonable given the variability observed in the sample and the amount of data that were included in producing the summary.
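
    The practice described here, reporting a sample average together with an interval, can be sketched with a large-sample normal-approximation interval (for small samples a t quantile would be the better choice); the sample values are invented for illustration:

```python
import statistics

def mean_ci(sample, confidence=0.95):
    """Large-sample (normal-approximation) confidence interval for the mean."""
    n = len(sample)
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / n ** 0.5        # standard error of the mean
    z = statistics.NormalDist().inv_cdf(0.5 + confidence / 2)
    return mean, mean - z * se, mean + z * se

sample = [9.8, 10.2, 10.1, 9.9, 10.4, 9.7, 10.0, 10.3, 9.6, 10.0]
mean, lo, hi = mean_ci(sample)
print(f"mean {mean:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

    Reporting the pair (10.00, interval 9.84 to 10.16) rather than the bare average is exactly the calibration the author recommends.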

  9. A cross-country Exchange Market Pressure (EMP) dataset.

    PubMed

    Desai, Mohit; Patnaik, Ila; Felman, Joshua; Shah, Ajay

    2017-06-01

    The data presented in this article are related to the research article titled - "An exchange market pressure measure for cross country analysis" (Patnaik et al. [1]). In this article, we present the dataset for Exchange Market Pressure values (EMP) for 139 countries along with their conversion factors, ρ (rho). Exchange Market Pressure, expressed in percentage change in exchange rate, measures the change in exchange rate that would have taken place had the central bank not intervened. The conversion factor ρ can be interpreted as the change in exchange rate associated with $1 billion of intervention. Estimates of the conversion factor ρ allow us to calculate a monthly time series of EMP for 139 countries. Additionally, the dataset contains the 68% confidence interval (high and low values) for the point estimates of ρ's. Using the standard errors of the estimates of ρ's, we obtain one-sigma intervals around the mean estimates of EMP values. These values are also reported in the dataset.

  10. Estimating the Size of the Methamphetamine-Using Population in New York City Using Network Sampling Techniques.

    PubMed

    Dombrowski, Kirk; Khan, Bilal; Wendel, Travis; McLean, Katherine; Misshula, Evan; Curtis, Ric

    2012-12-01

    As part of a recent study of the dynamics of the retail market for methamphetamine use in New York City, we used network sampling methods to estimate the size of the total networked population. This process involved sampling from respondents' lists of co-use contacts, which in turn became the basis for capture-recapture estimation. Recapture sampling was based on links to other respondents derived from demographic and "telefunken" matching procedures, the latter being an anonymized version of telephone number matching. This paper describes the matching process used to discover the links between the solicited contacts and project respondents, the capture-recapture calculation, the estimation of "false matches", and the development of confidence intervals for the final population estimates. A final population of 12,229 was estimated, with a range of 8,235-23,750. The techniques described here have the special virtue of deriving an estimate for a hidden population while retaining respondent anonymity and the anonymity of network alters, but likely require a larger sample size than the 132 persons interviewed to attain acceptable confidence levels for the estimate.
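
    The capture-recapture step can be illustrated with the classic two-sample Chapman estimator and its normal-approximation interval. This is a simplification of the paper's network-based procedure, and the counts below are hypothetical:

```python
import math

def chapman_estimate(n1, n2, m, z=1.96):
    """Chapman's bias-corrected two-sample capture-recapture estimator
    with a normal-approximation confidence interval.
    n1, n2: sizes of the two samples; m: individuals seen in both."""
    n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)
           / ((m + 1) ** 2 * (m + 2)))
    se = math.sqrt(var)
    return n_hat, n_hat - z * se, n_hat + z * se

# Hypothetical counts: 132 respondents, 200 solicited contacts, 4 matches
n_hat, lo, hi = chapman_estimate(132, 200, 4)
print(f"estimated population {n_hat:.0f} (95% CI {lo:.0f}, {hi:.0f})")
```

    With so few matches the point estimate (about 5,346) carries a very wide interval (roughly 1,200 to 9,500), which illustrates the authors' caution that a larger sample is needed for acceptable confidence levels.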

  11. Evaluation of methods to estimate lake herring spawner abundance in Lake Superior

    USGS Publications Warehouse

    Yule, D.L.; Stockwell, J.D.; Cholwek, G.A.; Evrard, L.M.; Schram, S.; Seider, M.; Symbal, M.

    2006-01-01

    Historically, commercial fishers harvested Lake Superior lake herring Coregonus artedi for their flesh, but recently operators have targeted lake herring for roe. Because no surveys have estimated spawning female abundance, direct estimates of fishing mortality are lacking. The primary objective of this study was to determine the feasibility of using acoustic techniques in combination with midwater trawling to estimate spawning female lake herring densities in a Lake Superior statistical grid (i.e., a 10′ latitude × 10′ longitude area over which annual commercial harvest statistics are compiled). Midwater trawling showed that mature female lake herring were largely pelagic during the night in late November, accounting for 94.5% of all fish caught exceeding 250 mm total length. When calculating acoustic estimates of mature female lake herring, we excluded backscattering from smaller pelagic fishes like immature lake herring and rainbow smelt Osmerus mordax by applying an empirically derived threshold of −35.6 dB. We estimated the average density of mature females in statistical grid 1409 at 13.3 fish/ha and the total number of spawning females at 227,600 (95% confidence interval = 172,500–282,700). Using information on mature female densities, size structure, and fecundity, we estimate that females deposited 3.027 billion (10⁹) eggs in grid 1409 (95% confidence interval = 2.356–3.778 billion). The relative estimation error of the mature female density estimate derived using a geostatistical model-based approach was low (12.3%), suggesting that the employed method was robust. Fishing mortality rates of all mature females and their eggs were estimated at 2.3% and 3.8%, respectively. The techniques described for enumerating spawning female lake herring could be used to develop a more accurate stock–recruitment model for Lake Superior lake herring.

  12. Red and processed meat consumption and risk of incident coronary heart disease, stroke, and diabetes mellitus: a systematic review and meta-analysis.

    PubMed

    Micha, Renata; Wallace, Sarah K; Mozaffarian, Dariush

    2010-06-01

    Meat consumption is inconsistently associated with development of coronary heart disease (CHD), stroke, and diabetes mellitus, limiting quantitative recommendations for consumption levels. Effects of meat intake on these different outcomes, as well as of red versus processed meat, may also vary. We performed a systematic review and meta-analysis of evidence for relationships of red (unprocessed), processed, and total meat consumption with incident CHD, stroke, and diabetes mellitus. We searched for any cohort study, case-control study, or randomized trial that assessed these exposures and outcomes in generally healthy adults. Of 1598 identified abstracts, 20 studies met inclusion criteria, including 17 prospective cohorts and 3 case-control studies. All data were abstracted independently in duplicate. Random-effects generalized least squares models for trend estimation were used to derive pooled dose-response estimates. The 20 studies included 1 218 380 individuals and 23 889 CHD, 2280 stroke, and 10 797 diabetes mellitus cases. Red meat intake was not associated with CHD (n=4 studies; relative risk per 100-g serving per day=1.00; 95% confidence interval, 0.81 to 1.23; P for heterogeneity=0.36) or diabetes mellitus (n=5; relative risk=1.16; 95% confidence interval, 0.92 to 1.46; P=0.25). Conversely, processed meat intake was associated with 42% higher risk of CHD (n=5; relative risk per 50-g serving per day=1.42; 95% confidence interval, 1.07 to 1.89; P=0.04) and 19% higher risk of diabetes mellitus (n=7; relative risk=1.19; 95% confidence interval, 1.11 to 1.27; P<0.001). Associations were intermediate for total meat intake. Consumption of red and processed meat was not associated with stroke, but only 3 studies evaluated these relationships. Consumption of processed meats, but not red meats, is associated with higher incidence of CHD and diabetes mellitus. 
These results highlight the need for better understanding of potential mechanisms of effects and for particular focus on processed meats for dietary and policy recommendations.

  13. Modified Confidence Intervals for the Mean of an Autoregressive Process.

    DTIC Science & Technology

    1985-08-01

    The first...of standard confidence intervals. There are several standard methods of setting confidence intervals in simulations, including the regenerative method, batch means, and time series methods. We focus on improved confidence intervals for the mean of an autoregressive process, and as such our
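
    Of the standard simulation methods mentioned, batch means is the simplest to sketch: split the correlated output series into batches and treat the batch averages as approximately independent. A minimal illustration on a simulated AR(1) series (z = 1.96 is used in place of the exact t quantile for 19 degrees of freedom):

```python
import math
import random
import statistics

def batch_means_ci(series, n_batches=20, z=1.96):
    """Batch-means confidence interval for the mean of a correlated series."""
    b = len(series) // n_batches
    batch_avgs = [statistics.fmean(series[i*b:(i+1)*b]) for i in range(n_batches)]
    mean = statistics.fmean(batch_avgs)
    se = statistics.stdev(batch_avgs) / math.sqrt(n_batches)
    return mean, mean - z * se, mean + z * se

# AR(1) process x_t = 0.8 x_{t-1} + noise, whose true mean is 0
random.seed(42)
x, series = 0.0, []
for _ in range(20000):
    x = 0.8 * x + random.gauss(0, 1)
    series.append(x)
mean, lo, hi = batch_means_ci(series)
print(f"mean {mean:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

    Batching widens the interval relative to a naive i.i.d. formula, which would badly understate the uncertainty for a positively autocorrelated process like this one.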

  14. Variance estimates and confidence intervals for the Kappa measure of classification accuracy

    Treesearch

    M. A. Kalkhan; R. M. Reich; R. L. Czaplewski

    1997-01-01

    The Kappa statistic is frequently used to characterize the results of an accuracy assessment used to evaluate land use and land cover classifications obtained by remotely sensed data. This statistic allows comparisons of alternative sampling designs, classification algorithms, photo-interpreters, and so forth. In order to make these comparisons, it is...

  15. Modeling Spatial and Temporal Variability of Residential Air Exchange Rates for the Near-Road Exposures and Effects of Urban Air Pollutants Study (NEXUS)

    EPA Science Inventory

    Air pollution health studies often use outdoor concentrations as exposure surrogates. Failure to account for variability of residential infiltration of outdoor pollutants can induce exposure errors and lead to bias and incorrect confidence intervals in health effect estimates. Th...

  16. Monitoring Human Development Goals: A Straightforward (Bayesian) Methodology for Cross-National Indices

    ERIC Educational Resources Information Center

    Abayomi, Kobi; Pizarro, Gonzalo

    2013-01-01

    We offer a straightforward framework for measurement of progress, across many dimensions, using cross-national social indices, which we classify as linear combinations of multivariate country level data onto a univariate score. We suggest a Bayesian approach which yields probabilistic (confidence type) intervals for the point estimates of country…

  17. A randomized controlled non-inferiority study comparing the antiemetic effect between intravenous granisetron and oral azasetron based on estimated 5-HT3 receptor occupancy.

    PubMed

    Endo, Junki; Iihara, Hirotoshi; Yamada, Maya; Yanase, Koumei; Kamiya, Fumihiko; Ito, Fumitaka; Funaguchi, Norihiko; Ohno, Yasushi; Minatoguchi, Shinya; Itoh, Yoshinori

    2012-09-01

    The acute antiemetic effect was compared between oral azasetron and intravenous granisetron based on the 5-hydroxytryptamine(3) (5-HT(3)) receptor occupancy theory. Receptor occupancy was estimated from reported data on plasma concentrations and affinity constants for the 5-HT(3) receptor. A randomized non-inferiority study comparing acute antiemetic effects between oral azasetron and intravenous granisetron was performed in 105 patients receiving the first course of carboplatin-based chemotherapy for lung cancer. Azasetron exhibited the highest 5-HT(3) receptor occupancy among various first-generation 5-HT(3) antagonists. The complete response rate with oral azasetron was shown to be non-inferior to that with intravenous granisetron; the risk difference was 0.0004 (95% confidence interval: -0.0519 to 0.0527). The lower limit of the confidence interval did not fall below the non-inferiority margin (-0.1). The complete response rate during the overall period did not differ (68% versus 67%). Oral azasetron was found to be non-inferior to intravenous granisetron in its acute antiemetic effect against moderately emetogenic chemotherapy.
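
    The non-inferiority test works by checking whether the lower confidence limit of the risk difference stays above the margin. The arm-level counts below are hypothetical (the abstract reports only the pooled risk difference), chosen to give an interval of similar width:

```python
import math

def risk_diff_ci(e1, n1, e2, n2, z=1.96):
    """Wald confidence interval for a difference of two proportions."""
    p1, p2 = e1 / n1, e2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Hypothetical arm counts: 50/53 complete responses vs 49/52
diff, lo, hi = risk_diff_ci(50, 53, 49, 52)
margin = -0.10   # non-inferiority margin from the abstract
print(f"risk difference {diff:.4f} (95% CI {lo:.4f}, {hi:.4f})")
print("non-inferior" if lo > margin else "inconclusive")
```

    With these assumed counts the lower limit (about -0.088) stays above -0.1, so the sketch declares non-inferiority, mirroring the study's logic.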

  18. A method to deconvolve stellar rotational velocities II. The probability distribution function via Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Christen, Alejandra; Escarate, Pedro; Curé, Michel; Rial, Diego F.; Cassetti, Julia

    2016-10-01

    Aims: Knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. Because we measure the projected rotational speed v sin I, we need to solve an ill-posed problem given by a Fredholm integral of the first kind to recover the "true" rotational velocity distribution. Methods: After discretization of the Fredholm integral we apply the Tikhonov regularization method to obtain directly the probability distribution function for stellar rotational velocities. We propose a simple and straightforward procedure to determine the Tikhonov parameter. We applied Monte Carlo simulations to prove that the Tikhonov method is a consistent estimator and asymptotically unbiased. Results: This method is applied to a sample of cluster stars. We obtain confidence intervals using a bootstrap method. Our results are in close agreement with those obtained using the Lucy method for recovering the probability density distribution of rotational velocities. Furthermore, Lucy estimation lies inside our confidence interval. Conclusions: Tikhonov regularization is a highly robust method that deconvolves the rotational velocity probability density function from a sample of v sin I data directly without the need for any convergence criteria.
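
    The bootstrap confidence intervals used here can be sketched with a simple percentile bootstrap. The synthetic v sin i sample below is invented for illustration (not the cluster data), and the mean stands in for the Tikhonov-regularized density estimate the authors actually resample:

```python
import random
import statistics

def percentile_bootstrap_ci(data, stat=statistics.fmean, n_boot=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for an arbitrary statistic."""
    stats = sorted(
        stat(random.choices(data, k=len(data)))   # resample with replacement
        for _ in range(n_boot)
    )
    lo = stats[int(n_boot * alpha / 2)]
    hi = stats[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

random.seed(0)
vsini = [random.gauss(150, 40) for _ in range(80)]  # synthetic v sin i (km/s)
lo, hi = percentile_bootstrap_ci(vsini)
print(f"95% bootstrap CI for the mean: ({lo:.1f}, {hi:.1f})")
```

    Checking that an alternative estimate (here, the Lucy deconvolution result) lies inside such an interval is the consistency check the authors report.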

  19. Confidence interval estimation of the difference between two sensitivities to the early disease stage.

    PubMed

    Dong, Tuochuan; Kang, Le; Hutson, Alan; Xiong, Chengjie; Tian, Lili

    2014-03-01

    Although most of the statistical methods for diagnostic studies focus on disease processes with binary disease status, many diseases can be naturally classified into three ordinal diagnostic categories, that is, normal, early stage, and fully diseased. For such diseases, the volume under the ROC surface (VUS) is the most commonly used index of diagnostic accuracy. Because the early disease stage is most likely the optimal time window for therapeutic intervention, the sensitivity to the early diseased stage has been suggested as another diagnostic measure. For the purpose of comparing the diagnostic abilities of two markers for early disease detection, it is of interest to estimate the confidence interval of the difference between sensitivities to the early diseased stage. In this paper, we present both parametric and non-parametric methods for this purpose. An extensive simulation study is carried out across a variety of settings to evaluate and compare the performance of the proposed methods. A real example of Alzheimer's disease (AD) is analyzed using the proposed approaches.

  20. Multipollutant measurement error in air pollution epidemiology studies arising from predicting exposures with penalized regression splines

    PubMed Central

    Bergen, Silas; Sheppard, Lianne; Kaufman, Joel D.; Szpiro, Adam A.

    2016-01-01

    Summary Air pollution epidemiology studies are trending towards a multi-pollutant approach. In these studies, exposures at subject locations are unobserved and must be predicted using observed exposures at misaligned monitoring locations. This induces measurement error, which can bias the estimated health effects and affect standard error estimates. We characterize this measurement error and develop an analytic bias correction when using penalized regression splines to predict exposure. Our simulations show bias from multi-pollutant measurement error can be severe, and in opposite directions or simultaneously positive or negative. Our analytic bias correction combined with a non-parametric bootstrap yields accurate coverage of 95% confidence intervals. We apply our methodology to analyze the association of systolic blood pressure with PM2.5 and NO2 in the NIEHS Sister Study. We find that NO2 confounds the association of systolic blood pressure with PM2.5 and vice versa. Elevated systolic blood pressure was significantly associated with increased PM2.5 and decreased NO2. Correcting for measurement error bias strengthened these associations and widened 95% confidence intervals. PMID:27789915

  1. Estimating Gestational Age With Sonography: Regression-Derived Formula Versus the Fetal Biometric Average.

    PubMed

    Cawyer, Chase R; Anderson, Sarah B; Szychowski, Jeff M; Neely, Cherry; Owen, John

    2018-03-01

    To compare the accuracy of a new regression-derived formula developed from the National Fetal Growth Studies data to the common alternative method that uses the average of the gestational ages (GAs) calculated for each fetal biometric measurement (biparietal diameter, head circumference, abdominal circumference, and femur length). This retrospective cross-sectional study identified nonanomalous singleton pregnancies that had a crown-rump length plus at least 1 additional sonographic examination with complete fetal biometric measurements. With the use of the crown-rump length to establish the referent estimated date of delivery, the error at every examination was computed for each method (National Institute of Child Health and Human Development regression versus Hadlock average [Radiology 1984; 152:497-501]). Error, defined as the difference between the crown-rump length-derived GA and each method's predicted GA (weeks), was compared in 3 GA intervals: 1 (14 weeks-20 weeks 6 days), 2 (21 weeks-28 weeks 6 days), and 3 (≥29 weeks). In addition, the proportion of each method's examinations that had errors outside prespecified (±) day ranges was computed by using odds ratios. A total of 16,904 sonograms were identified. The overall and prespecified GA range subset mean errors were significantly smaller for the regression compared to the average (P < .01), and the regression had significantly lower odds of observing examinations outside the specified range of error in GA intervals 2 (odds ratio, 1.15; 95% confidence interval, 1.01-1.31) and 3 (odds ratio, 1.24; 95% confidence interval, 1.17-1.32) than the average method. In a contemporary unselected population of women dated by a crown-rump length-derived GA, the National Institute of Child Health and Human Development regression formula produced fewer estimates outside a prespecified margin of error than the commonly used Hadlock average; the differences were most pronounced for GA estimates at 29 weeks and later. © 2017 by the American Institute of Ultrasound in Medicine.

  2. A probabilistic method for testing and estimating selection differences between populations

    PubMed Central

    He, Yungang; Wang, Minxian; Huang, Xin; Li, Ran; Xu, Hongyang; Xu, Shuhua; Jin, Li

    2015-01-01

    Human populations around the world encounter various environmental challenges and, consequently, develop genetic adaptations to different selection forces. Identifying the differences in natural selection between populations is critical for understanding the roles of specific genetic variants in evolutionary adaptation. Although numerous methods have been developed to detect genetic loci under recent directional selection, a probabilistic solution for testing and quantifying selection differences between populations is lacking. Here we report the development of such a probabilistic method. Using a probabilistic model of genetic drift and selection, we showed that log odds ratios of allele frequencies provide estimates of the differences in selection coefficients between populations. The estimates are approximately normally distributed, and their variance can be estimated using genome-wide variants. This allows us to quantify differences in selection coefficients and to determine the confidence intervals of the estimate. Our work also reveals the link between genetic association testing and hypothesis testing of selection differences, thereby supplying a solution for the latter. The method was applied to a genome-wide data analysis of Han and Tibetan populations. The results confirmed that both the EPAS1 and EGLN1 genes are under statistically different selection in Han and Tibetan populations. We further estimated differences in the selection coefficients for genetic variants involved in melanin formation and determined their confidence intervals between continental population groups. Application of the method to empirical data demonstrated the outstanding capability of this novel approach for testing and quantifying differences in natural selection. PMID:26463656
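    The central quantity above, a log odds ratio of allele frequencies, can be sketched with a normal-approximation confidence interval. The allele counts below are hypothetical, and the paper estimates the variance from genome-wide variants rather than from the simple Wald 1/count formula used here:

```python
import math

def log_or_ci(a1, n1, a2, n2, z=1.96):
    """Log odds ratio of allele counts between two populations with a
    Wald-style 95% confidence interval. a = derived-allele count,
    n = total alleles sampled (hypothetical data)."""
    b1, b2 = n1 - a1, n2 - a2
    log_or = math.log((a1 / b1) / (a2 / b2))
    se = math.sqrt(1 / a1 + 1 / b1 + 1 / a2 + 1 / b2)
    return log_or, (log_or - z * se, log_or + z * se)

# Example: allele seen 720/1000 times in population 1, 430/1000 in population 2.
log_or, ci = log_or_ci(720, 1000, 430, 1000)
```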

  3. Proposed method to estimate the liquid-vapor accommodation coefficient based on experimental sonoluminescence data.

    PubMed

    Puente, Gabriela F; Bonetto, Fabián J

    2005-05-01

    We used the temporal evolution of the bubble radius in single-bubble sonoluminescence to estimate the water liquid-vapor accommodation coefficient. The rapid changes in the bubble radius that occur during the bubble collapse and rebounds are a function of the actual value of the accommodation coefficient. We selected bubble radius measurements obtained from two different experimental techniques in conjunction with a robust parameter estimation strategy and we obtained that for water at room temperature the mass accommodation coefficient is in the confidence interval [0.217,0.329].

  4. Accuracy and reliability testing of two methods to measure internal rotation of the glenohumeral joint.

    PubMed

    Hall, Justin M; Azar, Frederick M; Miller, Robert H; Smith, Richard; Throckmorton, Thomas W

    2014-09-01

    We compared accuracy and reliability of a traditional method of measurement (most cephalad vertebral spinous process that can be reached by a patient with the extended thumb) to estimates made with the shoulder in abduction to determine if there were differences between the two methods. Six physicians with fellowship training in sports medicine or shoulder surgery estimated measurements in 48 healthy volunteers. Three were randomly chosen to make estimates of both internal rotation measurements for each volunteer. An independent observer made objective measurements on lateral scoliosis films (spinous process method) or with a goniometer (abduction method). Examiners were blinded to objective measurements as well as to previous estimates. Intraclass coefficients for interobserver reliability for the traditional method averaged 0.75, indicating good agreement among observers. The difference in vertebral level estimated by the examiner and the actual radiographic level averaged 1.8 levels. The intraclass coefficient for interobserver reliability for the abduction method averaged 0.81 for all examiners, indicating near-perfect agreement. Confidence intervals indicated that estimates were an average of 8° different from the objective goniometer measurements. Pearson correlation coefficients of intraobserver reliability for the abduction method averaged 0.94, indicating near-perfect agreement within observers. Confidence intervals demonstrated repeated estimates between 5° and 10° of the original. Internal rotation estimates made with the shoulder abducted demonstrated interobserver reliability superior to that of spinous process estimates, and reproducibility was high. On the basis of this finding, we now take glenohumeral internal rotation measurements with the shoulder in abduction and use a goniometer to maximize accuracy and objectivity. Copyright © 2014 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Mosby, Inc. All rights reserved.

  5. Endogenous pain modulation in chronic orofacial pain: a systematic review and meta-analysis.

    PubMed

    Moana-Filho, Estephan J; Herrero Babiloni, Alberto; Theis-Mahon, Nicole R

    2018-06-15

    Abnormal endogenous pain modulation was suggested as a potential mechanism for chronic pain, i.e., increased pain facilitation and/or impaired pain inhibition underlying symptom manifestation. Endogenous pain modulation function can be tested using psychophysical methods such as temporal summation of pain (TSP) and conditioned pain modulation (CPM), which assess pain facilitation and inhibition, respectively. Several studies have investigated endogenous pain modulation function in patients with nonparoxysmal orofacial pain (OFP) and reported mixed results. This study aimed to provide, through a qualitative and quantitative synthesis of the available literature, overall estimates for TSP/CPM responses in patients with OFP relative to controls. MEDLINE, Embase, and the Cochrane databases were searched, and references were screened independently by 2 raters. Twenty-six studies were included for qualitative review, and 22 studies were included for meta-analysis. Traditional meta-analysis and robust variance estimation were used to synthesize overall estimates for the standardized mean difference. The overall standardized estimate for TSP was 0.30 (95% confidence interval: 0.11-0.49; P = 0.002), with moderate between-study heterogeneity (Q [df = 17] = 41.8, P = 0.001; I² = 70.2%). Conditioned pain modulation's estimated overall effect size was large but did not reach statistical significance (estimate = 1.36; 95% confidence interval: -0.09 to 2.81; P = 0.066), with very large heterogeneity (Q [df = 8] = 108.3, P < 0.001; I² = 98.0%). Sensitivity analyses did not affect the overall estimate for TSP; for CPM, the overall estimate became significant if specific random-effect models were used or if the most influential study was removed. Publication bias was not present for TSP studies, whereas it substantially influenced CPM's overall estimate. These results suggest increased pain facilitation and a trend toward impaired pain inhibition in patients with nonparoxysmal OFP.
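    The pooled estimates and heterogeneity statistics quoted above (Q, I²) are the standard outputs of a random-effects meta-analysis. A sketch using the DerSimonian-Laird estimator with made-up study effects; the abstract does not name this particular estimator, and the paper's robust variance estimation is not shown:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate (DerSimonian-Laird) with a 95% CI,
    plus Cochran's Q and the I-squared heterogeneity percentage."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v
    fixed = np.sum(w * y) / np.sum(w)          # fixed-effect pooled mean
    q = np.sum(w * (y - fixed) ** 2)           # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)              # between-study variance
    w_star = 1.0 / (v + tau2)
    mu = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return mu, (mu - 1.96 * se, mu + 1.96 * se), q, i2

# Hypothetical standardized mean differences and their variances.
mu, ci, q, i2 = dersimonian_laird([0.41, 0.18, 0.35, 0.22],
                                  [0.020, 0.030, 0.015, 0.025])
```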

  6. The measurement of linear frequency drift in oscillators

    NASA Astrophysics Data System (ADS)

    Barnes, J. A.

    1985-04-01

    A linear drift in frequency is an important element in most stochastic models of oscillator performance. Quartz crystal oscillators often have drifts in excess of a part in 10^10 per day. Even commercial cesium beam devices often show drifts of a few parts in 10^13 per year. There are many ways to estimate the drift rate from data samples (e.g., regress the phase on a quadratic; regress the frequency on a straight line; compute the simple mean of the first difference of frequency; use Kalman filters with a drift term as one element in the state vector; and others). Although most of these estimators are unbiased, they vary in efficiency (i.e., in the width of their confidence intervals). Further, the estimation of confidence intervals using the standard analysis of variance (typically associated with the specific estimating technique) can give misleadingly optimistic results. The source of these problems is not an error in, say, the regression techniques; rather, the problems arise from correlations within the residuals. That is, the oscillator model is often not consistent with constraints on the analysis technique or, in other words, some specific analysis techniques are often inappropriate for the task at hand. The appropriateness of a specific analysis technique is critically dependent on the oscillator model and can often be checked with a simple whiteness test on the residuals.
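    The simplest estimator listed above, a linear regression of frequency on time, and the closing whiteness check can be sketched as follows. The oscillator data are simulated white frequency noise plus drift; real oscillator noise is rarely this well behaved, which is exactly the abstract's point:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(200.0)                 # time in days (toy record)
drift = 1e-13                        # fractional frequency drift per day (toy value)
y = drift * t + 1e-12 * rng.standard_normal(t.size)  # white-FM toy oscillator

# Least-squares regression of frequency on time gives the drift estimate.
slope, intercept = np.polyfit(t, y, 1)
resid = y - (slope * t + intercept)

# Simple whiteness check: the lag-1 autocorrelation of the residuals should
# be near zero for white noise; a strong correlation signals the
# model/analysis mismatch discussed in the abstract.
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
```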

  7. On the mass of the local group

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    González, Roberto E.; Kravtsov, Andrey V.; Gnedin, Nickolay Y., E-mail: regonzar@astro.puc.cl

    2014-10-01

    We use recent proper motion measurements of the tangential velocity of M31, along with its radial velocity and distance, to derive the likelihood of the sum of halo masses of the Milky Way and M31. This is done using a sample of halo pairs in the Bolshoi cosmological simulation of ΛCDM cosmology selected to match the properties and the environment of the Local Group. The resulting likelihood gives an estimate of the sum of the masses of M_MW,200c + M_M31,200c = 2.40^{+1.95}_{-1.05} × 10^12 M_⊙ (90% confidence interval). This estimate is consistent with individual mass estimates for the Milky Way and M31 and is consistent, albeit somewhat on the low side, with the mass estimated using the timing argument. We show that although the timing argument is unbiased on average for all pairs, for pairs constrained to have radial and tangential velocities similar to those of the Local Group the argument overestimates the sum of masses by a factor of 1.6. Using a similar technique, we estimate the total dark matter mass enclosed within 1 Mpc of the Local Group barycenter to be M_LG(r < 1 Mpc) = 4.2^{+3.4}_{-2.0} × 10^12 M_⊙ (90% confidence interval).

  8. Prey Selection by an Apex Predator: The Importance of Sampling Uncertainty

    PubMed Central

    Davis, Miranda L.; Stephens, Philip A.; Willis, Stephen G.; Bassi, Elena; Marcon, Andrea; Donaggio, Emanuela; Capitani, Claudia; Apollonio, Marco

    2012-01-01

    The impact of predation on prey populations has long been a focus of ecologists, but a firm understanding of the factors influencing prey selection, a key predictor of that impact, remains elusive. High levels of variability observed in prey selection may reflect true differences in the ecology of different communities but might also reflect a failure to deal adequately with uncertainties in the underlying data. Indeed, our review showed that less than 10% of studies of European wolf predation accounted for sampling uncertainty. Here, we relate annual variability in wolf diet to prey availability and examine temporal patterns in prey selection; in particular, we identify how considering uncertainty alters conclusions regarding prey selection. Over nine years, we collected 1,974 wolf scats and conducted drive censuses of ungulates in Alpe di Catenaia, Italy. We bootstrapped scat and census data within years to construct confidence intervals around estimates of prey use, availability and selection. Wolf diet was dominated by boar (61.5±3.90 [SE] % of biomass eaten) and roe deer (33.7±3.61%). Temporal patterns of prey densities revealed that the proportion of roe deer in wolf diet peaked when boar densities were low, not when roe deer densities were highest. Considering only the two dominant prey types, Manly's standardized selection index using all data across years indicated selection for boar (mean = 0.73±0.023). However, sampling error resulted in wide confidence intervals around estimates of prey selection. Thus, despite considerable variation in yearly estimates, confidence intervals for all years overlapped. Failing to consider such uncertainty could lead erroneously to the assumption of differences in prey selection among years. This study highlights the importance of considering temporal variation in relative prey availability and accounting for sampling uncertainty when interpreting the results of dietary studies. PMID:23110122
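    Manly's standardized selection index for two prey types, with a bootstrap confidence interval over the scat sample, can be sketched as follows. The scat counts and availabilities below are hypothetical, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: prey items identified in scats, and proportional
# availability from drive censuses.
used = np.array([120, 60])       # boar, roe deer items in scats (assumed)
avail = np.array([0.35, 0.65])   # proportional availability (assumed)

def manly_index(used_counts, avail_props):
    """Manly's standardized selection index: use/availability ratios,
    normalized to sum to 1. Values above 1/n_types indicate selection."""
    ratios = (used_counts / used_counts.sum()) / avail_props
    return ratios / ratios.sum()

boar_index = manly_index(used, avail)[0]

# Bootstrap the scat sample to put a confidence interval on the boar index.
n = used.sum()
p_used = used / n
boot = [manly_index(rng.multinomial(n, p_used), avail)[0] for _ in range(2000)]
ci = np.percentile(boot, [2.5, 97.5])
```

    As the abstract stresses, the width of `ci` (not just the point estimate) determines whether apparent year-to-year differences in selection are meaningful.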

  9. What are the factors that influence the attainment of satisfactory energy intake in pediatric intensive care unit patients receiving enteral or parenteral nutrition?

    PubMed

    de Menezes, Fernanda Souza; Leite, Heitor Pons; Nogueira, Paulo Cesar Koch

    2013-01-01

    Children admitted to the intensive care unit (ICU) are at risk of inadequate energy intake. Although studies have identified factors contributing to an inadequate energy supply in critically ill children, they did not take into consideration the length of time during which patients received their estimated energy requirements after having achieved a satisfactory energy intake. This study aimed to identify factors associated with the non-attainment of estimated energy requirements and consider the time this energy intake is maintained. This was a prospective study involving 207 children hospitalized in the ICU who were receiving enteral and/or parenteral nutrition. The outcome variable studied was whether 90% of the estimated basal metabolic rate was maintained for at least half of the ICU stay (satisfactory energy intake). The exposure variables for outcome were gender, age, diagnosis, use of vasopressors, malnutrition, route of nutritional support, and Pediatric Index of Mortality and Pediatric Logistic Organ Dysfunction scores. Satisfactory energy intake was attained by 20.8% of the patients, within a mean time of 5.07 ± 2.48 d. In a multivariable analysis, a diagnosis of heart disease (odds ratio 3.62, 95% confidence interval 1.03-12.68, P = 0.045) increased the risk of insufficient energy intake, whereas malnutrition (odds ratio 0.43, 95% confidence interval 0.20-0.92, P = 0.030) and the use of parenteral nutrition (odds ratio 0.34, 95% confidence interval 0.15-0.77, P = 0.001) were protective factors against this outcome. A satisfactory energy intake was reached by a small proportion of patients during their ICU stay. Heart disease was an independent risk factor for the non-attainment of satisfactory energy intake, whereas malnutrition and the use of parenteral nutrition were protective factors against this outcome. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. Sleep problems and suicide associated with mood instability in the Adult Psychiatric Morbidity Survey, 2007

    PubMed Central

    McDonald, Keltie C; Saunders, Kate EA; Geddes, John R

    2018-01-01

    Objective Mood instability is common in the general population. Mood instability is a precursor to mental illness and associated with a range of negative health outcomes. Sleep disturbance appears to be closely linked with mood instability. This study assesses the association between mood instability and sleep disturbance and the link with suicidal ideation and behaviour in a general population sample in England. Method The Adult Psychiatric Morbidity Survey, 2007 collected detailed information about mental health symptoms and correlates in a representative sample of adult household residents living in England (n = 7303). Mood instability was assessed using the Structured Clinical Interview for DSM-IV Axis-II. Sleep problems were defined as sleeping more than usual or less than usual during the past month. Other dependent variables included medication use and suicidal ideation and behaviour (response rate 57%). Generalized linear modelling was used to estimate the prevalence of mood instability and sleep problems. Logistic regression was used to estimate odds ratios. All estimates were weighted. Results The prevalence of mood instability was 14.7% (95% confidence interval [13.6%, 15.7%]). Sleep problems occurred in 69.8% (95% confidence interval: [66.6%, 73.1%]) of those with mood instability versus 37.6% (95% confidence interval: [36.2%, 39.1%]) of those without mood instability. The use of sedating and non-sedating medications did not influence the association. Sleep problems were significantly associated with suicidal ideation and behaviour even after adjusting for mood instability. Conclusion Sleep problems are highly prevalent in the general population, particularly among those with mood instability. Sleep problems are strongly associated with suicidal ideation and behaviour. 
Treatments that target risk and maintenance factors that transcend diagnostic boundaries, such as therapies that target sleep disturbance, may be particularly valuable for preventing and addressing complications related to mood instability such as suicide. PMID:28095702

  11. Annual Incidence of Nephrolithiasis among Children and Adults in South Carolina from 1997 to 2012

    PubMed Central

    Ross, Michelle E.; Song, Lihai; Sas, David J.; Keren, Ron; Denburg, Michelle R.; Chu, David I.; Copelovitch, Lawrence; Saigal, Christopher S.; Furth, Susan L.

    2016-01-01

    Background and objectives The prevalence of nephrolithiasis in the United States has increased substantially, but recent changes in incidence with respect to age, sex, and race are not well characterized. This study examined temporal trends in the annual incidence and cumulative risk of nephrolithiasis among children and adults living in South Carolina over a 16-year period. Design, setting, participants, & measurements We performed a population–based, repeated cross–sectional study using the US Census and South Carolina Medical Encounter data, which capture all emergency department visits, surgeries, and admissions in the state. The annual incidence of nephrolithiasis in South Carolina from 1997 to 2012 was estimated, and linear mixed models were used to estimate incidence rate ratios for age, sex, and racial groups. The cumulative risk of nephrolithiasis during childhood and over the lifetime was estimated for males and females in 1997 and 2012. Results Among an at-risk population of 4,625,364 people, 152,925 unique patients received emergency, inpatient, or surgical care for nephrolithiasis. Between 1997 and 2012, the mean annual incidence of nephrolithiasis increased 1% annually from 206 to 239 per 100,000 persons. Among age groups, the greatest increase was observed among 15–19 year olds, in whom incidence increased 26% per 5 years (incidence rate ratio, 1.26; 95% confidence interval, 1.22 to 1.29). Adjusting for age and race, incidence increased 15% per 5 years among females (incidence rate ratio, 1.15; 95% confidence interval, 1.14 to 1.16) but remained stable for males. The incidence among blacks increased 15% more per 5 years compared with whites (incidence rate ratio, 1.15; 95% confidence interval, 1.14 to 1.17). These changes in incidence resulted in doubling of the risk of nephrolithiasis during childhood and a 45% increase in the lifetime risk of nephrolithiasis for women over the study period. 
Conclusions The incidence of kidney stones has increased among young patients, particularly women, and blacks. PMID:26769765

  12. Confidence Intervals for Error Rates Observed in Coded Communications Systems

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2015-05-01

    We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful, if any, codeword errors are able to be simulated. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the lesser studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system.
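    The error-free-simulation question above has a closed-form answer from the exact binomial bound; at 95% confidence it reduces to the familiar "rule of 3", n ≈ 3/p. A sketch assuming independent codeword errors (the CWER case; as the abstract notes, bit errors in coded systems are not independent, so this does not carry over to the BER):

```python
import math

def errorfree_run_length(p_max, confidence=0.95):
    """Number of error-free codeword trials needed to certify
    CWER <= p_max at the given confidence (exact binomial bound)."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_max))

def cwer_upper_bound(n_errors, n_trials, confidence=0.95):
    """One-sided Clopper-Pearson upper bound, error-free case only."""
    assert n_errors == 0, "sketch handles only the error-free case"
    return 1 - (1 - confidence) ** (1 / n_trials)

# To certify CWER <= 1e-6 at 95% confidence: roughly 3/p ≈ 3 million
# error-free codewords.
n_needed = errorfree_run_length(1e-6)
```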

  13. Occupation, Gender, Race, and Lung Cancer

    PubMed Central

    Amr, Sania; Wolpert, Beverly; Loffredo, Christopher A.; Zheng, Yun-Ling; Shields, Peter G.; Jones, Raymond; Harris, Curtis C.

    2008-01-01

    Objective To examine associations between occupation and lung cancer by gender and race. Methods We used data from the Maryland Lung Cancer Study of nonsmall cell lung carcinoma (NSCLC), a multicenter case-control study, to estimate odds ratios (ORs) of NSCLC in different occupations. Results After adjusting for smoking, environmental tobacco smoke, and other covariates, NSCLC ORs among women but not men were elevated in clerical-sales, service, and transportation-material handling occupations; ORs were significantly increased in all three categories (OR [95% confidence interval]: 4.07 [1.44 to 11.48]; 5.15 [1.62 to 16.34]; 7.82 [1.08 to 56.25], respectively) among black women, but only in transportation-material handling occupations (OR [95% confidence interval]: 3.43 [1.02 to 11.50]) among white women. Conclusions Women, especially black women, in certain occupations had increased NSCLC ORs. PMID:18849762

  14. Test Statistics and Confidence Intervals to Establish Noninferiority between Treatments with Ordinal Categorical Data.

    PubMed

    Zhang, Fanghong; Miyaoka, Etsuo; Huang, Fuping; Tanaka, Yutaka

    2015-01-01

    The problem of establishing noninferiority of a new treatment relative to a standard (control) treatment is discussed for ordinal categorical data. A measure of treatment effect is used, and a method of specifying the noninferiority margin for this measure is provided. Two Z-type test statistics are proposed, in which the estimate of the variance is constructed under the shifted null hypothesis using U-statistics. Furthermore, a confidence interval and a sample size formula are given based on the proposed test statistics. The proposed procedure is applied to a dataset from a clinical trial. A simulation study is conducted to compare the performance of the proposed test statistics with that of existing ones; the results show that the proposed test statistics are better in terms of deviation from the nominal level and power.

  15. Survey of ungulate abundance on Santa Rosa Island, Channel Islands National Park, California, March 2009

    USGS Publications Warehouse

    Griffin, Paul C.; Schoenecker, Kate A.; Gogan, Peter J.; Lubow, Bruce C.

    2009-01-01

    Reliable estimates of elk (Cervus elaphus) and deer (Odocoileus hemionus) abundance on Santa Rosa Island, Channel Islands National Park, California, are required to assess the success of management actions directed at these species. We conducted a double-observer aerial survey of elk on a large portion of Santa Rosa Island on March 19, 2009. All four persons on the helicopter were treated as observers. We used two analytical approaches: (1) with three capture occasions corresponding to three possible observers, pooling the observations from the two rear-seat observers, and (2) with four capture occasions treating each observer separately. Approach 1 resulted in an estimate of 483 elk in the survey zone with a 95-percent confidence interval of 479 to 524 elk. Approach 2 resulted in an estimate of 489 elk in the survey zone with a 95-percent confidence interval of 471 to 535 elk. Approximately 5 percent of the elk groups that were estimated to have been present in the survey area were not seen by any observer. Fog prevented us from collecting double-observer observations for deer as intended on March 20. However, we did count 434 deer during the double-observer counts of elk on March 19. Both the calculated number of elk and the observed number of deer are minimal estimates of numbers of each ungulate species on Santa Rosa Island as weather conditions precluded us from surveying the entire island.
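    A minimal two-observer version of the double-observer idea is the Chapman-modified Lincoln-Petersen estimator; the survey itself modeled three and four capture occasions, so the counts below are purely illustrative:

```python
def chapman_estimate(n1, n2, both):
    """Chapman-modified Lincoln-Petersen estimate of total abundance from
    groups detected by observer 1 (n1), observer 2 (n2), and both (both)."""
    return (n1 + 1) * (n2 + 1) / (both + 1) - 1

# Hypothetical counts of elk groups detected by two independent observers.
N_hat = chapman_estimate(52, 48, 45)   # groups seen at all: 52 + 48 - 45 = 55
```

    The gap between `N_hat` and the number of groups actually seen plays the same role as the roughly 5 percent of undetected elk groups estimated in the survey.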

  16. Estimating duration of central venous catheter at time of insertion: Clinician judgment and clinical predictors.

    PubMed

    Holmberg, Mathias J; Andersen, Lars W; Graver, Amanda; Wright, Sharon B; Yassa, David; Howell, Michael D; Donnino, Michael W; Cocchi, Michael N

    2015-12-01

    The aim of this study was to investigate whether clinicians can estimate the length of time a central venous catheter (CVC) will remain in place and to identify variables that may predict CVC duration. We conducted a prospective study of patients admitted to the intensive care unit over a 1-year period. Clinicians estimated the anticipated CVC duration at time of insertion. We collected demographics, medical history, type of intensive care unit, anatomical site of CVC placement, vital signs, laboratory values, Sequential Organ Failure Assessment score, mechanical ventilation, and use of vasopressors. Pearson correlation coefficient was used to assess the correlation between estimated and actual CVC time. We performed multivariable logistic regression to identify predictors of long duration (>5 days). We enrolled 200 patients; median age was 65 years (quartiles 52, 75); 91 (46%) were female; and mortality was 24%. Correlation between estimated and actual CVC time was low (r=0.26; r²=0.07; P<.001). Mechanical ventilation (odds ratio, 2.20; 95% confidence interval, 1.22-3.97; P=.009) at time of insertion and a medical history of cancer (odds ratio, 0.35; 95% confidence interval, 0.16-0.75; P=.007) were significantly associated with long duration. Our results suggest a low correlation between clinician prediction and actual CVC duration. We did not find any strong predictors of long CVC duration identifiable at the time of insertion. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Robust power spectral estimation for EEG data

    PubMed Central

    Melman, Tamar; Victor, Jonathan D.

    2016-01-01

    Background Typical electroencephalogram (EEG) recordings often contain substantial artifact. These artifacts, often large and intermittent, can interfere with quantification of the EEG via its power spectrum. To reduce the impact of artifact, EEG records are typically cleaned by a preprocessing stage that removes individual segments or components of the recording. However, such preprocessing can introduce bias, discard available signal, and be labor-intensive. With this motivation, we present a method that uses robust statistics to reduce dependence on preprocessing by minimizing the effect of large intermittent outliers on the spectral estimates. New method Using the multitaper method[1] as a starting point, we replaced the final step of the standard power spectrum calculation with a quantile-based estimator, and the Jackknife approach to confidence intervals with a Bayesian approach. The method is implemented in provided MATLAB modules, which extend the widely used Chronux toolbox. Results Using both simulated and human data, we show that in the presence of large intermittent outliers, the robust method produces improved estimates of the power spectrum, and that the Bayesian confidence intervals yield close-to-veridical coverage factors. Comparison to existing method The robust method, as compared to the standard method, is less affected by artifact: inclusion of outliers produces fewer changes in the shape of the power spectrum as well as in the coverage factor. Conclusion In the presence of large intermittent outliers, the robust method can reduce dependence on data preprocessing as compared to standard methods of spectral estimation. PMID:27102041

  18. Robust power spectral estimation for EEG data.

    PubMed

    Melman, Tamar; Victor, Jonathan D

    2016-08-01

    Typical electroencephalogram (EEG) recordings often contain substantial artifact. These artifacts, often large and intermittent, can interfere with quantification of the EEG via its power spectrum. To reduce the impact of artifact, EEG records are typically cleaned by a preprocessing stage that removes individual segments or components of the recording. However, such preprocessing can introduce bias, discard available signal, and be labor-intensive. With this motivation, we present a method that uses robust statistics to reduce dependence on preprocessing by minimizing the effect of large intermittent outliers on the spectral estimates. Using the multitaper method (Thomson, 1982) as a starting point, we replaced the final step of the standard power spectrum calculation with a quantile-based estimator, and the Jackknife approach to confidence intervals with a Bayesian approach. The method is implemented in provided MATLAB modules, which extend the widely used Chronux toolbox. Using both simulated and human data, we show that in the presence of large intermittent outliers, the robust method produces improved estimates of the power spectrum, and that the Bayesian confidence intervals yield close-to-veridical coverage factors. The robust method, as compared to the standard method, is less affected by artifact: inclusion of outliers produces fewer changes in the shape of the power spectrum as well as in the coverage factor. In the presence of large intermittent outliers, the robust method can reduce dependence on data preprocessing as compared to standard methods of spectral estimation. Copyright © 2016 Elsevier B.V. All rights reserved.
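The core idea of the robust estimator, replacing the mean across tapered windows with a debiased quantile, can be sketched in miniature. This is a schematic of the principle only, not the authors' MATLAB/Chronux implementation; the exponential model for per-window power and the 5% artifact rate are assumptions for illustration:

```python
import math
import random

random.seed(0)
true_power = 2.0
n_windows = 500

# Per-window power estimates at one frequency: for Gaussian data, each
# periodogram value is approximately Exponential with mean = true power.
est = [random.expovariate(1 / true_power) for _ in range(n_windows)]

# Contaminate 5% of windows with large intermittent artifacts.
for i in range(0, n_windows, 20):
    est[i] += 100.0

mean_est = sum(est) / n_windows      # standard estimator: pulled up by outliers
median = sorted(est)[n_windows // 2]
robust_est = median / math.log(2)    # median of an Exponential = mean * ln 2

# robust_est stays near 2.0, while mean_est is inflated toward roughly 7.
```

The division by ln 2 is the debiasing step: for exponential data, the raw median systematically underestimates the mean, so the quantile must be rescaled before it can replace the mean in the spectrum.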

  19. Exercise and insulin resistance in youth: a meta-analysis.

    PubMed

    Fedewa, Michael V; Gist, Nicholas H; Evans, Ellen M; Dishman, Rod K

    2014-01-01

    The prevalence of obesity and diabetes is increasing among children, adolescents, and adults. Although estimates of the efficacy of exercise training on fasting insulin and insulin resistance have been provided for adults, similar estimates have not been provided for youth. This systematic review and meta-analysis provides a quantitative estimate of the effectiveness of exercise training on fasting insulin and insulin resistance in children and adolescents. Potential sources were limited to peer-reviewed articles published before June 25, 2013, and gathered from the PubMed, SPORTDiscus, Physical Education Index, and Web of Science online databases. Analysis was limited to randomized controlled trials by using combinations of the terms adolescent, child, pediatric, youth, exercise training, physical activity, diabetes, insulin, randomized trial, and randomized controlled trial. The authors assessed 546 sources, of which 4.4% (24 studies) were eligible for inclusion. Thirty-two effects were used to estimate the effect of exercise training on fasting insulin, with 15 effects measuring the effect on insulin resistance. Estimated effects were independently calculated by multiple authors, and conflicts were resolved before calculating the overall effect. Based on the cumulative results from these studies, exercise training had a small to moderate effect in lowering fasting insulin and improving insulin resistance in youth (Hedges' d effect size = 0.48 [95% confidence interval: 0.22-0.74], P < .001 and 0.31 [95% confidence interval: 0.06-0.56], P < .05, respectively). These results support the use of exercise training in the prevention and treatment of type 2 diabetes.
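The kind of inverse-variance pooling that underlies summary estimates such as Hedges' d = 0.48 (95% confidence interval: 0.22-0.74) can be sketched generically. The per-study effects and standard errors below are invented for illustration and are not the 32 effects analysed in this meta-analysis:

```python
# Hypothetical per-study standardized effects and their standard errors.
effects = [0.6, 0.3, 0.5, 0.4]
ses = [0.20, 0.15, 0.25, 0.10]

weights = [1 / s**2 for s in ses]    # inverse-variance (fixed-effect) weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
se_pooled = (1 / sum(weights)) ** 0.5
ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
# pooled ≈ 0.41 with 95% CI ≈ (0.27, 0.56)
```

A random-effects model, which such meta-analyses often prefer, adds a between-study variance component to each weight but follows the same weighted-average logic.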

  20. A global analysis of soil microbial biomass carbon, nitrogen and phosphorus in terrestrial ecosystems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Xiaofeng; Thornton, Peter E; Post, Wilfred M

    2013-01-01

    Soil microbes play a pivotal role in regulating land-atmosphere interactions; soil microbial biomass carbon (C), nitrogen (N), phosphorus (P) and C:N:P stoichiometry are important regulators of soil biogeochemical processes; however, current knowledge of the magnitude, stoichiometry, storage, and spatial distribution of global soil microbial biomass C, N, and P is limited. In this study, 3087 pairs of data points were retrieved from 281 published papers and used to summarize the magnitudes and stoichiometries of C, N, and P in soils and soil microbial biomass at global and biome levels. Finally, the global stock and spatial distribution of microbial biomass C and N in the 0-30 cm and 0-100 cm soil profiles were estimated. The results show that C, N, and P in soils and soil microbial biomass vary substantially across biomes; the fractions of soil C, N, and P held in soil microbial biomass are 1.6% (95% confidence interval: 1.5%-1.6%), 2.9% (95% confidence interval: 2.8%-3.0%), and 4.4% (95% confidence interval: 3.9%-5.0%), respectively. The best estimates of C:N:P stoichiometry for soil nutrients and soil microbial biomass are 153:11:1 and 47:6:1, respectively, at the global scale, and they vary over a wide range among biomes. The vertical distribution of soil microbial biomass follows the distribution of roots up to 1 m depth. The global stocks of soil microbial biomass C and N were estimated to be 15.2 Pg C and 2.3 Pg N in the 0-30 cm soil profiles, and 21.2 Pg C and 3.2 Pg N in the 0-100 cm soil profiles. We did not estimate P in soil microbial biomass due to data shortage and its insignificant correlation with soil total P and climate variables. The spatial patterns of soil microbial biomass C and N were consistent with those of soil organic C and total N, i.e., high density in the northern high latitudes, and low density in the low latitudes and the southern hemisphere.

  1. Local setting influences the quantity of household food waste in mid-sized South African towns

    PubMed Central

    Shackleton, Charlie M.

    2017-01-01

    The world faces a food security challenge with approximately 868 million people undernourished and about two billion people suffering from the negative health consequences of micronutrient deficiencies. Yet, it is believed that at least 33% of food produced for human consumption is lost or wasted along the food chain. As food waste has a negative effect on food security, the present study sought to quantify household food waste along the rural-urban continuum in three South African mid-sized towns situated along an agro-ecological gradient. We quantified the types of foods and drinks that households threw away in the previous 48 hours and identified the causes of household food waste in the three sites. More households wasted prepared food (27%) than unprepared food (15%) and drinks (8%). However, households threw away greater quantities of unprepared food in the 48-hour recall period (268.6±610.1 g, 90% confidence interval: 175.5 to 361.7 g) compared to prepared food (121.0±132.4 g, 90% confidence interval: 100.8 to 141.3 g) and drinks (77.0±192.5 ml, 90% confidence interval: 47.7 to 106.4 ml). The estimated per capita food waste (5–10 kg of unprepared food waste, 3–4 kg of prepared food waste and 1–3 litres of drinks waste per person per year) overlaps with that estimated for other developing countries, but is lower than that of most developed countries. However, the estimated average amount of food waste per person per year in this study (12.35 kg) was higher than that estimated for developing countries (8.5 kg per person per year). Household food waste was mainly a result of consumer behavior concerning food preparation and storage. Integrated approaches are required to address this developmental issue affecting South African societies, including the promotion of sound food management to decrease household food waste. Increased awareness and educational campaigns as household food waste reduction interventions are also discussed. PMID:29232709

  2. Pneumothorax size measurements on digital chest radiographs: Intra- and inter- rater reliability.

    PubMed

    Thelle, Andreas; Gjerdevik, Miriam; Grydeland, Thomas; Skorge, Trude D; Wentzel-Larsen, Tore; Bakke, Per S

    2015-10-01

    Detailed and reliable methods may be important for discussions on the importance of pneumothorax size in clinical decision-making. Rhea's method is widely used to estimate pneumothorax size in percent from three measurement points on chest X-rays (CXRs); Choi's addendum is used for anteroposterior projections. The aim of this study was to examine the intrarater and interrater reliability of the Rhea and Choi method using digital CXRs on ward-based PACS monitors. Three physicians examined a retrospective series of 80 digital CXRs showing pneumothorax using Rhea and Choi's method, and repeated the measurements in a random order two weeks later. We used the analysis of variance technique of Eliasziw et al. to assess intrarater and interrater reliability across altogether 480 estimations of pneumothorax size. Estimated pneumothorax sizes ranged between 5% and 100%. The intrarater reliability coefficient was 0.98 (95% one-sided lower-limit confidence interval 0.96), and the interrater reliability coefficient was 0.95 (95% one-sided lower-limit confidence interval 0.93). This study has shown that the Rhea and Choi method for calculating pneumothorax size has high intrarater and interrater reliability. These results are valid across gender, side of pneumothorax and whether the patient is diagnosed with primary or secondary pneumothorax. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  3. Breastfeeding and intelligence: a systematic review and meta-analysis.

    PubMed

    Horta, Bernardo L; Loret de Mola, Christian; Victora, Cesar G

    2015-12-01

    This study was aimed at systematically reviewing evidence of the association between breastfeeding and performance in intelligence tests. Two independent searches were carried out using Medline, LILACS, SCIELO and Web of Science. Studies restricted to infants and those where estimates were not adjusted for stimulation or interaction at home were excluded. Fixed- and random-effects models were used to pool the effect estimates, and a random-effects regression was used to assess potential sources of heterogeneity. We included 17 studies with 18 estimates of the relationship between breastfeeding and performance in intelligence tests. In a random-effects model, breastfed subjects achieved a higher IQ [mean difference: 3.44 points (95% confidence interval: 2.30; 4.58)]. We found no evidence of publication bias. Studies that controlled for maternal IQ showed a smaller benefit from breastfeeding [mean difference 2.62 points (95% confidence interval: 1.25; 3.98)]. In the meta-regression, none of the study characteristics explained the heterogeneity among the studies. Breastfeeding is related to improved performance in intelligence tests. A positive effect of breastfeeding on cognition was also observed in a randomised trial. This suggests that the association is causal. ©2015 The Authors. Acta Paediatrica published by John Wiley & Sons Ltd on behalf of Foundation Acta Paediatrica.

  4. Estimating Counterfactual Risk Under Hypothetical Interventions in the Presence of Competing Events: Crystalline Silica Exposure and Mortality From two Causes of Death.

    PubMed

    Neophytou, Andreas M; Picciotto, Sally; Brown, Daniel M; Gallagher, Lisa E; Checkoway, Harvey; Eisen, Ellen A; Costello, Sadie

    2018-04-03

    Exposure to silica has been linked to excess risk of lung cancer and non-malignant respiratory disease mortality. In this study we estimated risk for both of these outcomes in relation to occupational silica exposure, as well as the reduction in risk that would result from hypothetical interventions on exposure, in a cohort of exposed workers. Analyses were carried out in an all-male study population consisting of 2342 California diatomaceous earth workers regularly exposed to crystalline silica, followed between 1942 and 2011. We estimated subdistribution risk for each event under the natural course and the interventions of interest using the parametric g-formula to adjust for healthy worker survivor bias. The risk ratio for lung cancer mortality comparing an intervention in which a theoretical maximum exposure limit was set at 0.05 mg/m³ (the current U.S. regulatory limit) to the observed exposure concentrations was 0.86 (95% confidence interval: 0.63, 1.22). The corresponding risk ratio for non-malignant respiratory disease mortality was 0.69 (95% confidence interval: 0.52, 0.93). Our findings suggest that risks for both outcomes would have been considerably lower if historical silica exposures in this cohort had not exceeded current regulatory limits.

  5. Testing for clustering at many ranges inflates family-wise error rate (FWE).

    PubMed

    Loop, Matthew Shane; McClure, Leslie A

    2015-01-15

    Testing for clustering at multiple ranges within a single dataset is a common practice in spatial epidemiology. It is not documented whether this approach has an impact on the type I error rate. We estimated the family-wise error rate (FWE) for the difference in Ripley's K functions test when testing at an increasing number of ranges at an alpha-level of 0.05. Case and control locations were generated from a Cox process on a square area the size of the continental US (≈3,000,000 mi²). Two thousand Monte Carlo replicates were used to estimate the FWE with 95% confidence intervals when testing for clustering at one range, as well as at 10, 50, and 100 equidistant ranges. The estimated FWE and 95% confidence intervals when testing 10, 50, and 100 ranges were 0.22 (0.20-0.24), 0.34 (0.31-0.36), and 0.36 (0.34-0.38), respectively. Testing for clustering at multiple ranges within a single dataset inflated the FWE above the nominal level of 0.05. Investigators should construct simultaneous critical envelopes (available in the spatstat package in R), or use a test statistic that integrates the test statistics from each range, as suggested by the creators of the difference in Ripley's K functions test.
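The inflation the authors report can be reproduced in miniature with a generic Monte Carlo sketch. Treating the k range-specific tests as independent level-0.05 tests is an assumption for illustration only; real Ripley's K statistics at nearby ranges are positively correlated, which is why the observed FWE of 0.22 at 10 ranges sits below the independent-test bound:

```python
import random

random.seed(1)
alpha, k, reps = 0.05, 10, 2000   # 10 ranges, 2000 Monte Carlo replicates

# Under the null, a family-wise error occurs when any of the k tests rejects.
hits = sum(any(random.random() < alpha for _ in range(k)) for _ in range(reps))
fwe = hits / reps
se = (fwe * (1 - fwe) / reps) ** 0.5
ci = (fwe - 1.96 * se, fwe + 1.96 * se)
# With independent tests, fwe ≈ 1 - 0.95**10 ≈ 0.40, far above the nominal 0.05.
```

The binomial standard error on the last two lines mirrors how the paper attaches 95% confidence intervals to its simulated FWE estimates.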

  6. Estimate of overdiagnosis of breast cancer due to mammography after adjustment for lead time. A service screening study in Italy

    PubMed Central

    Paci, Eugenio; Miccinesi, Guido; Puliti, Donella; Baldazzi, Paola; De Lisi, Vincenzo; Falcini, Fabio; Cirilli, Claudia; Ferretti, Stefano; Mangone, Lucia; Finarelli, Alba Carola; Rosso, Stefano; Segnan, Nereo; Stracci, Fabrizio; Traina, Adele; Tumino, Rosario; Zorzi, Manuel

    2006-01-01

    Introduction: An excess of incidence rates is the expected consequence of service screening. The aim of this paper is to estimate the quota attributable to overdiagnosis in the breast cancer screening programmes in Northern and Central Italy. Methods: All patients with breast cancer diagnosed between 50 and 74 years of age who were resident in screening areas in the six years before and five years after the start of the screening programme were included. We calculated a corrected-for-lead-time number of observed cases for each calendar year. The number of observed incident cases was reduced by the number of screen-detected cases in that year and incremented by the estimated number of screen-detected cases that would have arisen clinically in that year. Results: In total we included 13,519 and 13,999 breast cancer cases diagnosed in the pre-screening and screening years, respectively. In total, the excess ratio of observed to predicted in situ and invasive cases was 36.2%. After correction for lead time the excess ratio was 4.6% (95% confidence interval 2 to 7%) and for invasive cases only it was 3.2% (95% confidence interval 1 to 6%). Conclusion: The remaining excess of cancers after individual correction for lead time was lower than 5%. PMID:17147789

  7. Ensemble-Based Source Apportionment of Fine Particulate Matter and Emergency Department Visits for Pediatric Asthma

    PubMed Central

    Gass, Katherine; Balachandran, Sivaraman; Chang, Howard H.; Russell, Armistead G.; Strickland, Matthew J.

    2015-01-01

    Epidemiologic studies utilizing source apportionment (SA) of fine particulate matter have shown that particles from certain sources might be more detrimental to health than others; however, it is difficult to quantify the uncertainty associated with a given SA approach. In the present study, we examined associations between source contributions of fine particulate matter and emergency department visits for pediatric asthma in Atlanta, Georgia (2002–2010) using a novel ensemble-based SA technique. Six daily source contributions from 4 SA approaches were combined into an ensemble source contribution. To better account for exposure uncertainty, 10 source profiles were sampled from their posterior distributions, resulting in 10 time series with daily SA concentrations. For each of these time series, Poisson generalized linear models with varying lag structures were used to estimate the health associations for the 6 sources. The rate ratios for the source-specific health associations from the 10 imputed source contribution time series were combined, resulting in health associations with inflated confidence intervals to better account for exposure uncertainty. Adverse associations with pediatric asthma were observed for 8-day exposure to particles generated from diesel-fueled vehicles (rate ratio = 1.06, 95% confidence interval: 1.01, 1.10) and gasoline-fueled vehicles (rate ratio = 1.10, 95% confidence interval: 1.04, 1.17). PMID:25776011

  8. Exposure to diesel and gasoline engine emissions and the risk of lung cancer.

    PubMed

    Parent, Marie-Elise; Rousseau, Marie-Claude; Boffetta, Paolo; Cohen, Aaron; Siemiatycki, Jack

    2007-01-01

    Pollution from motor vehicles constitutes a major environmental health problem. The present paper describes associations between diesel and gasoline engine emissions and lung cancer, as evidenced in a 1979-1985 population-based case-control study in Montreal, Canada. Cases were 857 male lung cancer patients. Controls were 533 population controls and 1,349 patients with other cancer types. Subjects were interviewed to obtain a detailed lifetime job history and relevant data on potential confounders. Industrial hygienists translated each job description into indices of exposure to several agents, including engine emissions. There was no evidence of excess risks of lung cancer with exposure to gasoline exhaust. For diesel engine emissions, results differed by control group. When cancer controls were considered, there was no excess risk. When population controls were studied, the odds ratios, after adjustments for potential confounders, were 1.2 (95% confidence interval: 0.8, 1.8) for any exposure and 1.6 (95% confidence interval: 0.9, 2.8) for substantial exposure. Confidence intervals between risk estimates derived from the two control groups overlapped considerably. These results provide some limited support for the hypothesis of an excess lung cancer risk due to diesel exhaust but no support for an increase in risk due to gasoline exhaust.

  9. Lung Cancer and Occupation in a Population-based Case-Control Study

    PubMed Central

    Consonni, Dario; De Matteis, Sara; Lubin, Jay H.; Wacholder, Sholom; Tucker, Margaret; Pesatori, Angela Cecilia; Caporaso, Neil E.; Bertazzi, Pier Alberto; Landi, Maria Teresa

    2010-01-01

    The authors examined the relation between occupation and lung cancer in the large, population-based Environment And Genetics in Lung cancer Etiology (EAGLE) case-control study. In 2002–2005 in the Lombardy region of northern Italy, 2,100 incident lung cancer cases and 2,120 randomly selected population controls were enrolled. Lifetime occupational histories (industry and job title) were coded by using standard international classifications and were translated into occupations known (list A) or suspected (list B) to be associated with lung cancer. Smoking-adjusted odds ratios and 95% confidence intervals were calculated with logistic regression. For men, an increased risk was found for list A (177 exposed cases and 100 controls; odds ratio = 1.74, 95% confidence interval: 1.27, 2.38) and most occupations therein. No overall excess was found for list B with the exception of filling station attendants and bus and truck drivers (men) and launderers and dry cleaners (women). The authors estimated that 4.9% (95% confidence interval: 2.0, 7.8) of lung cancers in men were attributable to occupation. Among those in other occupations, risk excesses were found for metal workers, barbers and hairdressers, and other motor vehicle drivers. These results indicate that past exposure to occupational carcinogens remains an important determinant of lung cancer occurrence. PMID:20047975

  10. Outcomes After Direct Thrombectomy or Combined Intravenous and Endovascular Treatment Are Not Different.

    PubMed

    Abilleira, Sònia; Ribera, Aida; Cardona, Pedro; Rubiera, Marta; López-Cancio, Elena; Amaro, Sergi; Rodríguez-Campello, Ana; Camps-Renom, Pol; Cánovas, David; de Miquel, Maria Angels; Tomasello, Alejandro; Remollo, Sebastian; López-Rueda, Antonio; Vivas, Elio; Perendreu, Joan; Gallofré, Miquel

    2017-02-01

    Whether intravenous thrombolysis adds a further benefit when given before endovascular thrombectomy (EVT) is unknown. Furthermore, intravenous thrombolysis delays time to groin puncture, mainly among drip and ship patients. Using region-wide registry data, we selected cases that received direct EVT or combined intravenous thrombolysis+EVT for anterior circulation strokes between January 2011 and October 2015. Treatment effect was estimated by stratification on a propensity score. The average odds ratios for the association of treatment with good outcome and death at 3 months and symptomatic bleedings at 24 hours were calculated with the Mantel-Haenszel test statistic. We included 599 direct EVT patients and 567 patients with combined treatment. Stratification through propensity score achieved balance of baseline characteristics across treatment groups. There was no association between treatment modality and good outcome (odds ratio, 0.97; 95% confidence interval, 0.74-1.27), death (odds ratio, 1.07; 95% confidence interval, 0.74-1.54), or symptomatic bleedings (odds ratio, 0.56; 95% confidence interval, 0.25-1.27). This observational study suggests that outcomes after direct EVT or combined intravenous thrombolysis+EVT are not different. If confirmed by a randomized controlled trial, it may have a significant impact on organization of stroke systems of care. © 2017 American Heart Association, Inc.
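The Mantel-Haenszel statistic the authors use to average odds ratios across propensity-score strata can be sketched generically. The 2×2 counts below are invented for illustration and do not come from the registry:

```python
# One (a, b, c, d) table per propensity-score stratum:
# a = treated, good outcome; b = treated, poor outcome;
# c = control, good outcome; d = control, poor outcome.
strata = [(40, 60, 35, 65), (55, 45, 50, 50), (70, 30, 72, 28)]

num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
or_mh = num / den   # Mantel-Haenszel pooled odds ratio across strata
# or_mh ≈ 1.12 here; values near 1 indicate no detectable treatment effect.
```

Weighting each stratum's cross-products by its total keeps strata comparable in size, which is what lets the pooled odds ratio adjust for the propensity-score balance achieved by stratification.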

  11. A method for meta-analysis of epidemiological studies.

    PubMed

    Einarson, T R; Leeder, J S; Koren, G

    1988-10-01

    This article presents a stepwise approach for conducting a meta-analysis of epidemiological studies based on proposed guidelines. This systematic method is recommended for practitioners evaluating epidemiological studies in the literature to arrive at an overall quantitative estimate of the impact of a treatment. Bendectin is used as an illustrative example. Meta-analysts should establish a priori the purpose of the analysis and a complete protocol. This protocol should be adhered to, and all steps performed should be recorded in detail. To aid in developing such a protocol, we present methods the researcher can use to perform each of 22 steps in six major areas. The illustrative meta-analysis confirmed previous traditional narrative literature reviews in finding that Bendectin is not related to teratogenic outcomes in humans. The overall summary odds ratio was 1.01 (χ² = 0.05, p = 0.815) with a 95 percent confidence interval of 0.66-1.55. When the studies were separated according to study type, the summary odds ratio for cohort studies was 0.95 with a 95 percent confidence interval of 0.62-1.45. For case-control studies, the summary odds ratio was 1.27 with a 95 percent confidence interval of 0.83-1.94. The corresponding chi-square values were not statistically significant at the p = 0.05 level.

  12. Antenatal Steroid Therapy for Fetal Lung Maturation and the Subsequent Risk of Childhood Asthma: A Longitudinal Analysis

    PubMed Central

    Pole, Jason D.; Mustard, Cameron A.; To, Teresa; Beyene, Joseph; Allen, Alexander C.

    2010-01-01

    This study was designed to test the hypothesis that fetal exposure to corticosteroids in the antenatal period is an independent risk factor for the development of asthma in early childhood with little or no effect in later childhood. A population-based cohort study of all pregnant women who resided in Nova Scotia, Canada, and gave birth to a singleton fetus between 1989 and 1998 was undertaken. After a priori specified exclusions, 80,448 infants were available for analysis. Using linked health care utilization records, incident asthma cases developed after 36 months of age were identified. Extended Cox proportional hazards models were used to estimate hazard ratios while controlling for confounders. Exposure to corticosteroids during pregnancy was associated with a risk of asthma in childhood between 3–5 years of age: adjusted hazard ratio of 1.19 (95% confidence interval: 1.03, 1.39), with no association noted after 5 years of age: adjusted hazard ratio for 5–7 years was 1.06 (95% confidence interval: 0.86, 1.30) and for 8 or greater years was 0.74 (95% confidence interval: 0.54, 1.03). Antenatal steroid therapy appears to be an independent risk factor for the development of asthma between 3 and 5 years of age. PMID:21490744

  13. Racism, segregation, and risk of obesity in the Black Women's Health Study.

    PubMed

    Cozier, Yvette C; Yu, Jeffrey; Coogan, Patricia F; Bethea, Traci N; Rosenberg, Lynn; Palmer, Julie R

    2014-04-01

    We assessed the relation of experiences of racism to the incidence of obesity and the modifying impact of residential racial segregation in the Black Women's Health Study, a follow-up study of US black women. Racism scores were created from 8 questions asked in 1997 and 2009 about the frequency of "everyday" racism (e.g., "people act as if you are dishonest") and of "lifetime" racism (e.g., unfair treatment on the job). Residential segregation was measured by linking participant addresses to 2000 and 2010 US Census block group data on the percent of black residents. We used Cox proportional hazard models to estimate incidence rate ratios and 95% confidence intervals. Based on 4,315 incident cases of obesity identified from 1997 through 2009, both everyday racism and lifetime racism were positively associated with increased incidence. The incidence rate ratios for women who were in the highest category of everyday racism or lifetime racism in both 1997 and 2009, relative to those in the lowest category, were 1.69 (95% confidence interval: 1.45, 1.96; Ptrend < 0.01) and 1.38 (95% confidence interval: 1.15, 1.66; Ptrend < 0.01), respectively. These associations were not modified by residential segregation. These results suggest that racism contributes to the higher incidence of obesity among African American women.

  14. Adverse childhood experiences and risk of type 2 diabetes: A systematic review and meta-analysis.

    PubMed

    Huang, Hao; Yan, Peipei; Shan, Zhilei; Chen, Sijing; Li, Moying; Luo, Cheng; Gao, Hui; Hao, Liping; Liu, Liegang

    2015-11-01

    It is evident that adverse childhood experiences (ACEs) can influence health status in adult life, but few large-scale studies have assessed the relation of ACEs with type 2 diabetes. This meta-analysis aimed to summarize existing evidence on the link between ACEs and type 2 diabetes in adults. We searched all published studies in PubMed and EMBASE before August 2015 using keywords such as adverse childhood experiences and diabetes, and scanned the references of relevant original articles. We included studies that reported risk estimates for diabetes by ACEs and matched our inclusion criteria. We examined the overall relationship between ACEs and diabetes, and stratified the analyses by type of childhood adversity, study design and outcome measure, respectively. Seven articles fulfilled the inclusion criteria for this meta-analysis, comprising 4 cohort and 3 cross-sectional studies. A total of 87,251 participants and 5879 incident cases of type 2 diabetes were reported in these studies. Exposure to ACEs was positively associated with the risk of diabetes, with a combined odds ratio of 1.32 (95% confidence interval 1.16 to 1.51) in the total participants. The influence of neglect was most prominent (pooled odds ratio 1.92, 95% confidence interval 1.43 to 2.57) while the effect of physical abuse was least strong (pooled odds ratio 1.30, 95% confidence interval 1.19 to 1.42). The pooled odds ratio associated with sexual abuse was 1.39, with a 95% confidence interval from 1.28 to 1.52. The results support a significant association of adverse childhood experiences with an elevated risk of type 2 diabetes in adulthood. Copyright © 2015 Elsevier Inc. All rights reserved.
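When a study reports only an odds ratio and its 95% confidence interval, the standard error of the log odds ratio needed for pooling can be recovered from the interval width. A minimal sketch using the pooled estimate reported above (1.32, 95% confidence interval 1.16 to 1.51); the recovery formula is standard meta-analysis practice, not a step described in this abstract:

```python
import math

# Reported pooled odds ratio and its 95% confidence limits.
or_est, lo, hi = 1.32, 1.16, 1.51

# On the log scale the CI is symmetric: log(hi) - log(lo) spans 2 * 1.96 SE.
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
z = math.log(or_est) / se   # approximate z-statistic for the pooled effect
# se ≈ 0.067, z ≈ 4.1: consistent with a strongly significant association.
```

This back-calculation is how per-study confidence intervals, the only uncertainty measure many epidemiological papers publish, are converted into the inverse-variance weights a meta-analysis needs.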

  15. Association of Parental Hypertension With Arterial Stiffness in Nonhypertensive Offspring: The Framingham Heart Study.

    PubMed

    Andersson, Charlotte; Quiroz, Rene; Enserro, Danielle; Larson, Martin G; Hamburg, Naomi M; Vita, Joseph A; Levy, Daniel; Benjamin, Emelia J; Mitchell, Gary F; Vasan, Ramachandran S

    2016-09-01

    High arterial stiffness seems to be causally involved in the pathogenesis of hypertension. We tested the hypothesis that offspring of parents with hypertension may display higher arterial stiffness before clinically manifest hypertension, given that hypertension is a heritable condition. We compared arterial tonometry measures in a sample of 1564 nonhypertensive Framingham Heart Study third-generation cohort participants (mean age: 38 years; 55% women) whose parents were enrolled in the Framingham Offspring Study. A total of 468, 715, and 381 participants had 0 (referent), 1, and 2 parents with hypertension. Parental hypertension was associated with greater offspring mean arterial pressure (multivariable-adjusted estimate=2.9 mm Hg; 95% confidence interval, 1.9-3.9, and 4.2 mm Hg; 95% confidence interval, 2.9-5.5, for 1 and 2 parents with hypertension, respectively; P<0.001 for both) and with greater forward pressure wave amplitude (1.6 mm Hg; 95% confidence interval, 0.6-2.7, and 1.9 mm Hg; 95% confidence interval, 0.6-3.2, for 1 and 2 parents with hypertension, respectively; P=0.003 for both). Carotid-femoral pulse wave velocity and augmentation index displayed similar dose-dependent relations with parental hypertension in sex-, age-, and height-adjusted models, but associations were attenuated on further adjustment. Offspring with at least 1 parent in the upper quartile of augmentation index and carotid-femoral pulse wave velocity had significantly higher values themselves (P≤0.02). In conclusion, in this community-based sample of young, nonhypertensive adults, we observed greater arterial stiffness in offspring of parents with hypertension. These observations are consistent with higher vascular stiffness at an early stage in the pathogenesis of hypertension. © 2016 American Heart Association, Inc.

  16. Abnormal P-Wave Axis and Ischemic Stroke: The ARIC Study (Atherosclerosis Risk In Communities).

    PubMed

    Maheshwari, Ankit; Norby, Faye L; Soliman, Elsayed Z; Koene, Ryan J; Rooney, Mary R; O'Neal, Wesley T; Alonso, Alvaro; Chen, Lin Y

    2017-08-01

    Abnormal P-wave axis (aPWA) has been linked to incident atrial fibrillation and mortality; however, the relationship between aPWA and stroke has not been reported. We hypothesized that aPWA is associated with ischemic stroke independent of atrial fibrillation and other stroke risk factors and tested our hypothesis in the ARIC study (Atherosclerosis Risk In Communities), a community-based prospective cohort study. We included 15 102 participants (aged 54.2±5.7 years; 55.2% women; 26.5% blacks) who attended the baseline examination (1987-1989) and without prevalent stroke. We defined aPWA as any value outside 0 to 75° using 12-lead ECGs obtained during study visits. Each case of incident ischemic stroke was classified in accordance with criteria from the National Survey of Stroke by a computer algorithm and adjudicated by physician review. Multivariable Cox regression was used to estimate hazard ratios and 95% confidence intervals for the association of aPWA with stroke. During a mean follow-up of 20.2 years, there were 657 incident ischemic stroke cases. aPWA was independently associated with a 1.50-fold (95% confidence interval, 1.22-1.85) increased risk of ischemic stroke in the multivariable model that included atrial fibrillation. When subtyped, aPWA was associated with a 2.04-fold (95% confidence interval, 1.42-2.95) increased risk of cardioembolic stroke and a 1.32-fold (95% confidence interval, 1.03-1.71) increased risk of thrombotic stroke. aPWA is independently associated with ischemic stroke. This association seems to be stronger for cardioembolic strokes. Collectively, our findings suggest that alterations in atrial electric activation may predispose to cardiac thromboembolism independent of atrial fibrillation. © 2017 American Heart Association, Inc.

  17. Association of Air Pollution Exposures With High-Density Lipoprotein Cholesterol and Particle Number: The Multi-Ethnic Study of Atherosclerosis.

    PubMed

    Bell, Griffith; Mora, Samia; Greenland, Philip; Tsai, Michael; Gill, Ed; Kaufman, Joel D

    2017-05-01

    The relationship between air pollution and cardiovascular disease may be explained by changes in high-density lipoprotein (HDL). We examined the cross-sectional relationship between air pollution and both HDL cholesterol and HDL particle number in the MESA Air study (Multi-Ethnic Study of Atherosclerosis Air Pollution). Study participants were 6654 white, black, Hispanic, and Chinese men and women aged 45 to 84 years. We estimated individual residential ambient fine particulate pollution exposure (PM2.5) and black carbon concentrations using a fine-scale likelihood-based spatiotemporal model and cohort-specific monitoring. Exposure periods were averaged to 12 months, 3 months, and 2 weeks prior to examination. HDL cholesterol and HDL particle number were measured in the year 2000 using the cholesterol oxidase method and nuclear magnetic resonance spectroscopy, respectively. We used multivariable linear regression to examine the relationship between air pollution exposure and HDL measures. A 0.7×10⁻⁶ m⁻¹ higher exposure to black carbon (a marker of traffic-related pollution) averaged over a 1-year period was significantly associated with a lower HDL cholesterol (-1.68 mg/dL; 95% confidence interval, -2.86 to -0.50) and approached significance with HDL particle number (-0.55 μmol/L; 95% confidence interval, -1.13 to 0.03). In the 3-month averaging time period, a 5 μg/m³ higher PM2.5 was associated with lower HDL particle number (-0.64 μmol/L; 95% confidence interval, -1.01 to -0.26), but not HDL cholesterol (-0.05 mg/dL; 95% confidence interval, -0.82 to 0.71). These data are consistent with the hypothesis that exposure to air pollution is adversely associated with measures of HDL. © 2017 American Heart Association, Inc.

  18. Determining the Number of Ischemic Strokes Potentially Eligible for Endovascular Thrombectomy: A Population-Based Study.

    PubMed

    Chia, Nicholas H; Leyden, James M; Newbury, Jonathan; Jannes, Jim; Kleinig, Timothy J

    2016-05-01

    Endovascular thrombectomy (ET) is standard-of-care for ischemic stroke patients with large vessel occlusion, but estimates of potentially eligible patients from population-based studies have not been published. Such data are urgently needed to rationally plan hyperacute services. Retrospective analysis determined the incidence of ET-eligible ischemic strokes in a comprehensive population-based stroke study (Adelaide, Australia 2009-2010). Stroke patients were stratified via a prespecified eligibility algorithm derived from recent ET trials comprising stroke subtype, pathogenesis, severity, premorbid modified Rankin Score, presentation delay, large vessel occlusion, and target mismatch penumbra. Recognizing that centers may interpret recent ET trials either loosely or rigidly, 2 eligibility algorithms were applied: restrictive (key criteria modified Rankin Scale score 0-1, presentation delay <3.5 hours, and target mismatch penumbra) and permissive (modified Rankin Scale score 0-3 and presentation delay <5 hours). In a population of 148 027 people, 318 strokes occurred in the 1-year study period (crude attack rate 215 [192-240] per 100 000 person-years). The number of ischemic strokes eligible by restrictive criteria was 17/258 (7%; 95% confidence intervals 4%-10%) and by permissive criteria, an additional 16 were identified, total 33/258 (13%; 95% confidence intervals 9%-18%). Two of 17 patients (and 6/33 permissive patients) had thrombolysis contraindications. Using the restrictive algorithm, there were 11 (95% confidence intervals 4-18) potential ET cases per 100 000 person-years or 22 (95% confidence intervals 13-31) using the permissive algorithm. In this cohort, ≈7% of ischemic strokes were potentially eligible for ET (13% with permissive criteria). In similar populations, the permissive criteria predict that ≤22 strokes per 100 000 person-years may be eligible for ET. © 2016 American Heart Association, Inc.
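
    The eligibility percentages above are binomial proportions; a sketch of the normal-approximation (Wald) interval behind figures like 17/258 = 7% (4%-10%), assuming that is the method used:

```python
import math

def proportion_ci(k, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a binomial proportion
    with k successes out of n trials."""
    p = k / n
    se = math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - z * se), p + z * se

# Restrictive criteria: 17 of 258 ischemic strokes eligible.
p, lo, hi = proportion_ci(17, 258)
print(f"{p:.0%} ({lo:.0%}-{hi:.0%})")
```

    With small counts, an exact or Wilson interval would be preferable, but the Wald form reproduces the rounded figures reported here.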

  19. Neighborhood socioeconomic status at the age of 40 years and ischemic stroke before the age of 50 years: A nationwide cohort study from Sweden.

    PubMed

    Carlsson, Axel C; Li, Xinjun; Holzmann, Martin J; Ärnlöv, Johan; Wändell, Per; Gasevic, Danijela; Sundquist, Jan; Sundquist, Kristina

    2017-10-01

    Objective We aimed to study the association between neighborhood socioeconomic status at the age of 40 years and risk of ischemic stroke before the age of 50 years. Methods All individuals in Sweden were included if their 40th birthday occurred between 1998 and 2010. National registers were used to categorize neighborhood socioeconomic status into high, middle, and low and to retrieve information on incident ischemic strokes. Hazard ratios and their 95% confidence intervals were estimated. Results A total of 1,153,451 adults (women 48.9%) were followed for a mean of 5.5 years (SD 3.5 years), during which 1777 (0.30%) strokes among men and 1374 (0.24%) strokes among women were recorded. After adjustment for sex, marital status, education level, immigrant status, region of residence, and neighborhood services, there was a lower risk of stroke in residents from high-socioeconomic status neighborhoods (hazard ratio 0.87, 95% confidence interval 0.78-0.96), and an increased risk of stroke in adults from low-socioeconomic status neighborhoods (hazard ratio 1.16, 95% confidence interval 1.06-1.27), compared to their counterparts living in middle-socioeconomic status neighborhoods. After further adjustment for hospital diagnoses of hypertension, diabetes, heart failure, and atrial fibrillation prior to the age of 40, the higher risk in neighborhoods with low socioeconomic status was attenuated, but remained significant (hazard ratio 1.12, 95% confidence interval 1.02-1.23). Conclusions In a nationwide study of individuals between 40 and 50 years, we found that the risk of ischemic stroke differed depending on neighborhood socioeconomic status, which calls for increased efforts to prevent cardiovascular diseases in low socioeconomic status neighborhoods.

  20. Transmissibility of the Ice Bucket Challenge among globally influential celebrities: retrospective cohort study

    PubMed Central

    Chan, Brandford H Y; Leung, Gabriel M; Lau, Eric H Y; Pang, Herbert

    2014-01-01

    Objectives To estimate the transmissibility of the Ice Bucket Challenge among globally influential celebrities and to identify associated risk factors. Design Retrospective cohort study. Setting Social media (YouTube, Facebook, Twitter, Instagram). Participants David Beckham, Cristiano Ronaldo, Benedict Cumberbatch, Stephen Hawking, Mark Zuckerberg, Oprah Winfrey, Homer Simpson, and Kermit the Frog were defined as index cases. We included contacts up to the fifth generation seeded from each index case and enrolled a total of 99 participants into the cohort. Main outcome measures Basic reproduction number R0, serial interval of accepting the challenge, and odds ratios of associated risk factors based on fully observed nomination chains; R0 is a measure of transmissibility and is defined as the number of secondary cases generated by a single index in a fully susceptible population. Serial interval is the duration between onset of a primary case and onset of its secondary cases. Results Based on the empirical data and assuming a branching process we estimated a mean R0 of 1.43 (95% confidence interval 1.23 to 1.65) and a mean serial interval for accepting the challenge of 2.1 days (median 1 day). Higher log (base 10) net worth of the participants was positively associated with transmission (odds ratio 1.63, 95% confidence interval 1.06 to 2.50), adjusting for age and sex. Conclusions The Ice Bucket Challenge was moderately transmissible among a group of globally influential celebrities, in the range of the pandemic A/H1N1 2009 influenza. The challenge was more likely to be spread by richer celebrities, perhaps in part reflecting greater social influence. PMID:25514905
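
    Under the branching-process assumption described above, R0 is simply the mean number of secondary acceptances per case. A toy sketch with invented nomination counts (not the study's data):

```python
from statistics import mean

# Hypothetical nomination chains: each entry maps a challenger to the
# number of nominees who went on to accept the challenge.
secondary_cases = {
    "A": 2, "B": 1, "C": 3, "D": 0, "E": 2, "F": 1, "G": 1,
}

# Under a simple branching process, R0 is estimated as the mean
# number of secondary cases per observed case.
r0_hat = mean(secondary_cases.values())
print(round(r0_hat, 2))
```

    A fuller analysis would also account for censoring of chains still in progress, which the study handles by restricting to fully observed nomination chains.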

  1. Experimental congruence of interval scale production from paired comparisons and ranking for image evaluation

    NASA Astrophysics Data System (ADS)

    Handley, John C.; Babcock, Jason S.; Pelz, Jeff B.

    2003-12-01

    Image evaluation tasks are often conducted using paired comparisons or ranking. To elicit interval scales, both methods rely on Thurstone's Law of Comparative Judgment in which objects closer in psychological space are more often confused in preference comparisons by a putative discriminal random process. It is often debated whether paired comparisons and ranking yield the same interval scales. An experiment was conducted to assess scale production using paired comparisons and ranking. For this experiment a Pioneer Plasma Display and Apple Cinema Display were used for stimulus presentation. Observers performed rank order and paired comparisons tasks on both displays. For each of five scenes, six images were created by manipulating attributes such as lightness, chroma, and hue using six different settings. The intention was to simulate the variability from a set of digital cameras or scanners. Nineteen subjects (5 females, 14 males), ranging from 19 to 51 years of age, participated in this experiment. Using a paired comparison model and a ranking model, scales were estimated for each display and image combination yielding ten scale pairs, ostensibly measuring the same psychological scale. The Bradley-Terry model was used for the paired comparisons data and the Bradley-Terry-Mallows model was used for the ranking data. Each model was fit using maximum likelihood estimation and assessed using likelihood ratio tests. Approximate 95% confidence intervals were also constructed using likelihood ratios. Model fits for paired comparisons were satisfactory for all scales except those from two image/display pairs; the ranking model fit uniformly well on all data sets. Arguing from overlapping confidence intervals, we conclude that paired comparisons and ranking produce no conflicting decisions regarding ultimate ordering of treatment preferences, but paired comparisons yield greater precision at the expense of lack-of-fit.
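
    A minimal illustration of fitting the Bradley-Terry model named above, using the classical MM (Zermelo) iteration; the preference counts are invented, not the experiment's data:

```python
def bradley_terry(wins, n_iter=200):
    """Fit Bradley-Terry strengths by the classical MM (Zermelo) iteration.
    wins[i][j] = number of times item i was preferred to item j."""
    n = len(wins)
    p = [1.0] * n
    for _ in range(n_iter):
        new_p = []
        for i in range(n):
            w_i = sum(wins[i])  # total wins of item i
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            new_p.append(w_i / denom if denom > 0 else p[i])
        total = sum(new_p)
        p = [v / total for v in new_p]  # normalize for identifiability
    return p

# Three images, pairwise preference counts from a toy comparison task.
wins = [[0, 8, 9],
        [2, 0, 7],
        [1, 3, 0]]
scores = bradley_terry(wins)
print([round(s, 3) for s in scores])
```

    The fitted strengths are only identified up to a scale factor, which is why the iteration renormalizes each pass; interval-scale values are obtained from their logarithms.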

  2. Estimation of the standardized risk difference and ratio in a competing risks framework: application to injection drug use and progression to AIDS after initiation of antiretroviral therapy.

    PubMed

    Cole, Stephen R; Lau, Bryan; Eron, Joseph J; Brookhart, M Alan; Kitahata, Mari M; Martin, Jeffrey N; Mathews, William C; Mugavero, Michael J

    2015-02-15

    There are few published examples of absolute risk estimated from epidemiologic data subject to censoring and competing risks with adjustment for multiple confounders. We present an example estimating the effect of injection drug use on 6-year risk of acquired immunodeficiency syndrome (AIDS) after initiation of combination antiretroviral therapy between 1998 and 2012 in an 8-site US cohort study with death before AIDS as a competing risk. We estimate the risk standardized to the total study sample by combining inverse probability weights with the cumulative incidence function; estimates of precision are obtained by bootstrap. In 7,182 patients (83% male, 33% African American, median age of 38 years), we observed 6-year standardized AIDS risks of 16.75% among 1,143 injection drug users and 12.08% among 6,039 nonusers, yielding a standardized risk difference of 4.68 (95% confidence interval: 1.27, 8.08) and a standardized risk ratio of 1.39 (95% confidence interval: 1.12, 1.72). Results may be sensitive to the assumptions of exposure-version irrelevance, no measurement bias, and no unmeasured confounding. These limitations suggest that results be replicated with refined measurements of injection drug use. Nevertheless, estimating the standardized risk difference and ratio is straightforward, and injection drug use appears to increase the risk of AIDS. © The Author 2014. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
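
    The bootstrap precision estimate mentioned above can be sketched as a percentile bootstrap of a crude risk difference on synthetic 0/1 outcomes; the study additionally applies inverse probability weights and a competing-risks cumulative incidence function, both omitted here:

```python
import random

def bootstrap_rd_ci(exposed, unexposed, n_boot=2000, seed=42):
    """Percentile-bootstrap 95% CI for a risk difference.
    exposed/unexposed are lists of 0/1 outcomes; data are synthetic."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        e = [rng.choice(exposed) for _ in exposed]    # resample with replacement
        u = [rng.choice(unexposed) for _ in unexposed]
        diffs.append(sum(e) / len(e) - sum(u) / len(u))
    diffs.sort()
    return diffs[int(0.025 * n_boot)], diffs[int(0.975 * n_boot) - 1]

# Synthetic cohort loosely echoing the abstract: ~17% 6-year risk among
# 300 injection drug users, ~12% among 1500 nonusers.
rng = random.Random(0)
users = [1 if rng.random() < 0.17 else 0 for _ in range(300)]
nonusers = [1 if rng.random() < 0.12 else 0 for _ in range(1500)]
lo, hi = bootstrap_rd_ci(users, nonusers)
print(round(lo, 3), round(hi, 3))
```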

  3. Aro: a machine learning approach to identifying single molecules and estimating classification error in fluorescence microscopy images.

    PubMed

    Wu, Allison Chia-Yi; Rifkin, Scott A

    2015-03-27

    Recent techniques for tagging and visualizing single molecules in fixed or living organisms and cell lines have been revolutionizing our understanding of the spatial and temporal dynamics of fundamental biological processes. However, fluorescence microscopy images are often noisy, and it can be difficult to distinguish a fluorescently labeled single molecule from background speckle. We present a computational pipeline to distinguish the true signal of fluorescently labeled molecules from background fluorescence and noise. We test our technique using the challenging case of wide-field, epifluorescence microscope image stacks from single molecule fluorescence in situ experiments on nematode embryos where there can be substantial out-of-focus light and structured noise. The software recognizes and classifies individual mRNA spots by measuring several features of local intensity maxima and classifying them with a supervised random forest classifier. A key innovation of this software is that, by estimating the probability that each local maximum is a true spot in a statistically principled way, it makes it possible to estimate the error introduced by image classification. This can be used to assess the quality of the data and to estimate a confidence interval for the molecule count estimate, all of which are important for quantitative interpretations of the results of single-molecule experiments. The software classifies spots in these images well, with >95% AUROC on realistic artificial data and outperforms other commonly used techniques on challenging real data. Its interval estimates provide a unique measure of the quality of an image and confidence in the classification.
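
    The count-with-error idea can be sketched as follows: if each local maximum carries an estimated probability of being a true spot, the expected molecule count is the sum of those probabilities, and a normal-approximation interval follows from the Bernoulli variances. This is an illustration of the principle, not Aro's implementation, and the probabilities below are invented:

```python
import math

def count_with_ci(probs, z=1.96):
    """Molecule-count estimate from per-spot 'true spot' probabilities:
    expected count = sum of probabilities; the CI uses the sum of
    Bernoulli variances p*(1-p) under an independence assumption."""
    n = sum(probs)
    var = sum(p * (1 - p) for p in probs)
    half = z * math.sqrt(var)
    return n, n - half, n + half

est, lo, hi = count_with_ci([0.99, 0.95, 0.8, 0.5, 0.1])
print(round(est, 2), round(lo, 2), round(hi, 2))
```

    Images dominated by confident classifications (probabilities near 0 or 1) yield tight intervals, which is what makes the interval width a useful per-image quality measure.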

  4. Incidence of Mastitis in the Neonatal Period in a Traditional Breastfeeding Society: Results of a Cohort Study

    PubMed Central

    Scott, Jane A.; Lee, Andy H.; Binns, Colin W.

    2015-01-01

    Background: Mastitis is a painful problem experienced by breastfeeding women, especially in the first few weeks postpartum. There have been limited studies of the incidence of mastitis from traditionally breastfeeding societies in South Asia. This study investigated the incidence, determinants, and management of mastitis in the first month postpartum, as well as its association with breastfeeding outcomes at 4 and 6 months postpartum, in western Nepal. Subjects and Methods: Subjects were a subsample of 338 mothers participating in a larger prospective cohort study conducted in 2014 in western Nepal. Mothers were interviewed during the first month postpartum and again at 4 and 6 months to obtain information on breastfeeding practices. The association of mastitis and determinant variables was investigated using multivariable logistic regression, and the association with breastfeeding duration was examined using Kaplan–Meier estimation. Results: The incidence of mastitis was 8.0% (95% confidence interval, 5.1%, 10.8%) in the first month postpartum. Prelacteal feeding (adjusted odds ratio = 2.76; 95% confidence interval, 1.03, 7.40) and cesarean section (adjusted odds ratio = 3.52; 95% confidence interval, 1.09, 11.42) were associated with a higher likelihood of mastitis. Kaplan–Meier estimation showed no significant difference in the duration of exclusive breastfeeding among the mothers who experienced an episode of mastitis and those who did not. Conclusions: Roughly one in 10 (8.0%) women experienced mastitis in the first month postpartum, and there appeared to be little effect of mastitis on breastfeeding outcomes. Traditional breastfeeding practices should be encouraged, and the management of mastitis should be included as a part of lactation promotion. PMID:26488802
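
    A bare-bones version of the Kaplan–Meier estimator used for the breastfeeding-duration comparison above, run on made-up follow-up times:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.
    times: follow-up (e.g., weeks); events: 1 = event observed, 0 = censored.
    The demo data below are invented for illustration."""
    survival = 1.0
    curve = []
    for t in sorted(set(times)):
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        n = sum(1 for ti in times if ti >= t)  # number still at risk at t
        if d:
            survival *= 1 - d / n
        curve.append((t, survival))
    return curve

curve = kaplan_meier([1, 2, 2, 3, 4], [1, 0, 1, 1, 0])
print(curve)
```

    Censored observations (events = 0) leave the curve unchanged at their time but shrink the risk set afterward, which is the estimator's central idea.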

  5. Ambient PM2.5 and Stroke: Effect Modifiers and Population Attributable Risk in Six Low- and Middle-Income Countries.

    PubMed

    Lin, Hualiang; Guo, Yanfei; Di, Qian; Zheng, Yang; Kowal, Paul; Xiao, Jianpeng; Liu, Tao; Li, Xing; Zeng, Weilin; Howard, Steven W; Nelson, Erik J; Qian, Zhengmin; Ma, Wenjun; Wu, Fan

    2017-05-01

    Short-term exposure to ambient fine particulate pollution (PM2.5) has been linked to increased stroke. Few studies, however, have examined the effects of long-term exposure. A total of 45 625 participants were interviewed and included in this study; participants came from the Study on Global Ageing and Adult Health, a prospective cohort in 6 low- and middle-income countries. Ambient PM2.5 levels were estimated for participants' communities using satellite data. A multilevel logistic regression model was used to examine the association between long-term PM2.5 exposure and stroke. Potential effect modification by physical activity and consumption of fruit and vegetables was assessed. The odds of stroke were 1.13 (95% confidence interval, 1.04-1.22) for each 10 μg/m³ increase in PM2.5. This effect remained after adjustment for confounding factors including age, sex, smoking, and indoor air pollution (adjusted odds ratio=1.12; 95% confidence interval, 1.04-1.21). Further stratified analyses suggested that participants with higher levels of physical activity had greater odds of stroke, whereas those with higher consumption of fruit and vegetables had lower odds of stroke. These effects remained robust in sensitivity analyses. We further estimated that 6.55% (95% confidence interval, 1.97%-12.01%) of the stroke cases could be attributable to ambient PM2.5 in the study population. This study suggests that ambient PM2.5 may increase the risk of stroke and may be responsible for the astounding stroke burden in low- and middle-income countries. In addition, greater physical activity may enhance, whereas greater consumption of fruit and vegetables may mitigate the effect. © 2017 American Heart Association, Inc.
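
    Population attributable fractions like the 6.55% above are commonly computed with Levin's formula; the abstract does not state the study's exact estimator, so the sketch below gives only the flavor, with illustrative inputs:

```python
def levin_paf(p_exposed, rr):
    """Levin's population attributable fraction: the share of cases that
    would not occur if the exposure were removed, given exposure
    prevalence and relative risk."""
    excess = p_exposed * (rr - 1)
    return excess / (1 + excess)

# Illustrative numbers only: 50% of the population exposed above the
# reference level, with the adjusted odds ratio 1.12 treated as a
# relative risk (a common approximation for rare outcomes).
print(round(levin_paf(0.5, 1.12), 4))
```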

  6. Determinants of preterm birth rates in Canada from 1981 through 1983 and from 1992 through 1994.

    PubMed

    Joseph, K S; Kramer, M S; Marcoux, S; Ohlsson, A; Wen, S W; Allen, A; Platt, R

    1998-11-12

    The rates of preterm birth have increased in many countries, including Canada, over the past 20 years. However, the factors underlying the increase are poorly understood. We used data from the Statistics Canada live-birth and stillbirth data bases to determine the effects of changes in the frequency of multiple births, registration of births occurring very early in gestation, patterns of obstetrical intervention, and use of ultrasonographic dating of gestational age on the rates of preterm birth in Canada from 1981 through 1983 and from 1992 through 1994. All births in 9 of the 12 provinces and territories of Canada were included. Logistic-regression analysis and Poisson regression analysis were used to estimate changes between the two three-year periods, after adjustment for the above-mentioned determinants of the likelihood of preterm births. Preterm births increased from 6.3 percent of live births in 1981 through 1983 to 6.8 percent in 1992 through 1994, a relative increase of 9 percent (95 percent confidence interval, 7 to 10 percent). Among singleton births, preterm births increased by 5 percent (95 percent confidence interval, 3 to 6 percent). Multiple births increased from 1.9 percent to 2.1 percent of all live births; the rates of preterm birth among live births resulting from multiple gestations increased by 25 percent (95 percent confidence interval, 21 to 28 percent). Adjustment for the determinants of the likelihood of preterm birth reduced the increase in the rate of preterm birth to 3 percent among all live births and 1 percent among singleton births. The recent increase in preterm births in Canada is largely attributable to changes in the frequency of multiple births, obstetrical intervention, and the use of ultrasound-based estimates of gestational age.

  7. Nonalcoholic fatty liver disease: MR imaging of liver proton density fat fraction to assess hepatic steatosis.

    PubMed

    Tang, An; Tan, Justin; Sun, Mark; Hamilton, Gavin; Bydder, Mark; Wolfson, Tanya; Gamst, Anthony C; Middleton, Michael; Brunt, Elizabeth M; Loomba, Rohit; Lavine, Joel E; Schwimmer, Jeffrey B; Sirlin, Claude B

    2013-05-01

    To evaluate the diagnostic performance of magnetic resonance (MR) imaging-estimated proton density fat fraction (PDFF) for assessing hepatic steatosis in nonalcoholic fatty liver disease (NAFLD) by using centrally scored histopathologic validation as the reference standard. This prospectively designed, cross-sectional, institutional review board-approved, HIPAA-compliant study was conducted in 77 patients who had NAFLD and liver biopsy. MR imaging-PDFF was estimated from magnitude-based low flip angle multiecho gradient-recalled echo images after T2* correction and multifrequency fat modeling. Histopathologic scoring was obtained by consensus of the Nonalcoholic Steatohepatitis (NASH) Clinical Research Network Pathology Committee. Spearman correlation, additivity and variance stabilization for regression for exploring the effect of a number of potential confounders, and receiver operating characteristic analyses were performed. Liver MR imaging-PDFF was systematically higher, with higher histologic steatosis grade (P < .001), and was significantly correlated with histologic steatosis grade (ρ = 0.69, P < .001). The correlation was not confounded by age, sex, lobular inflammation, hepatocellular ballooning, NASH diagnosis, fibrosis, or magnetic field strength (P = .65). Area under the receiver operating characteristic curves was 0.989 (95% confidence interval: 0.968, 1.000) for distinguishing patients with steatosis grade 0 (n = 5) from those with grade 1 or higher (n = 72), 0.825 (95% confidence interval: 0.734, 0.915) to distinguish those with grade 1 or lower (n = 31) from those with grade 2 or higher (n = 46), and 0.893 (95% confidence interval: 0.809, 0.977) to distinguish those with grade 2 or lower (n = 58) from those with grade 3 (n = 19). MR imaging-PDFF showed promise for assessment of hepatic steatosis grade in patients with NAFLD. For validation, further studies with larger sample sizes are needed. © RSNA, 2013.
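
    Areas under the ROC curve such as the 0.989 above have a direct probabilistic reading: the chance that a randomly chosen higher-grade case gets a higher MR imaging-PDFF than a randomly chosen lower-grade case. A minimal Mann-Whitney computation with toy scores, not study data:

```python
def auroc(pos_scores, neg_scores):
    """AUROC via the Mann-Whitney statistic: the probability that a
    random positive outranks a random negative, ties counted as half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Toy PDFF-like scores for three higher-grade and three lower-grade cases.
print(auroc([0.30, 0.25, 0.18], [0.05, 0.10, 0.18]))
```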

  8. Association between Lithium Use and Melanoma Risk and Mortality: A Population-Based Study.

    PubMed

    Asgari, Maryam M; Chien, Andy J; Tsai, Ai Lin; Fireman, Bruce; Quesenberry, Charles P

    2017-10-01

    Laboratory studies show that lithium, an activator of the Wnt/β-catenin signaling pathway, slows melanoma progression, but to our knowledge no published epidemiologic studies have explored this association. We conducted a retrospective cohort study of adult white Kaiser Permanente Northern California members (n = 2,213,848) from 1997-2012 to examine the association between lithium use and melanoma risk. Lithium exposure (n = 11,317) was assessed from pharmacy databases, serum lithium levels were obtained from electronic laboratory databases, and incident cutaneous melanomas (n = 14,056) were identified from an established cancer registry. In addition to examining melanoma incidence, melanoma hazard ratios and 95% confidence intervals for lithium exposure were estimated using Cox proportional hazards models, adjusted for potential confounders. Melanoma incidence per 100,000 person-years among lithium-exposed individuals was 67.4, compared with 92.5 in unexposed individuals (P = 0.027). Lithium-exposed individuals had a 32% lower risk of melanoma (hazard ratio = 0.68, 95% confidence interval = 0.51-0.90) in unadjusted analysis, but the estimate was attenuated and nonsignificant in adjusted analysis (adjusted hazard ratio = 0.77, 95% confidence interval = 0.58-1.02). No lithium-exposed individuals presented with thick (>4 mm) or advanced-stage melanoma at diagnosis. Among melanoma patients, lithium-exposed individuals were less likely to suffer melanoma-associated mortality (rate = 4.68/1,000 person-years) compared with the unexposed (rate = 7.21/1,000 person-years). Our findings suggest that lithium may reduce melanoma risk and associated mortality. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  9. Bronchiolitis in US emergency departments 1992 to 2000: epidemiology and practice variation.

    PubMed

    Mansbach, Jonathan M; Emond, Jennifer A; Camargo, Carlos A

    2005-04-01

    To describe the epidemiology of US emergency department (ED) visits for bronchiolitis, including the characteristics of children presenting to the ED and the variability in bronchiolitis care in the ED. Data were obtained from the 1992 to 2000 National Hospital Ambulatory Medical Care Survey. Cases had International Classification of Diseases, Ninth Revision, Clinical Modification code 466 and were younger than 2 years. National estimates were obtained using assigned patient visit weights; 95% confidence intervals were calculated using the relative standard error of the estimate; analyses used chi-square tests and logistic regression. From 1992 to 2000, bronchiolitis accounted for approximately 1,868,000 ED visits for children younger than 2 years. Among this same age group, the overall rate was 26 (95% confidence interval 22-31) per 1000 US population and 31 (95% confidence interval 26-36) per 1000 ED visits. These rates were stable over the 9-year period. Comparing children with bronchiolitis to those presenting with other problems, children with bronchiolitis were more likely to be boys (61% vs. 53%; P = 0.01) and Hispanic (27% vs. 20%; P = 0.008). Therapeutic interventions varied and 19% were admitted to the hospital. The multivariate predictor for receiving systemic steroids was urgent/emergent status at triage (odds ratio 4.0, 1.9-8.4). Multivariate predictors for admission were Hispanic ethnicity (odds ratio 2.3, 1.1-5.0) and urgent/emergent status at triage (odds ratio 3.7, 2.0-6.9). ED visit rates for bronchiolitis among children younger than 2 years were stable between 1992 and 2000. The observed ED practice variation demonstrates that children are receiving medications for which there is little supporting evidence. Boys and Hispanics are at-risk groups for presentation to the ED, and Hispanics are more likely to be hospitalized.
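
    The odds ratios above come from logistic regression; the unadjusted version of such an estimate reduces to a 2x2 table with a Woolf (log-scale) confidence interval. A sketch with invented counts, not the survey data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    return (or_,
            math.exp(math.log(or_) - z * se),
            math.exp(math.log(or_) + z * se))

# Toy counts: admission by triage status.
or_, lo, hi = odds_ratio_ci(40, 60, 20, 80)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```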

  10. Placenta previa and risk of major congenital malformations among singleton births in Finland.

    PubMed

    Kancherla, Vijaya; Räisänen, Sari; Gissler, Mika; Kramer, Michael R; Heinonen, Seppo

    2015-06-01

    Placenta previa has been associated with adverse birth outcomes, but its association with congenital malformations is inconclusive. We examined the association between placenta previa and major congenital malformations among singleton births in Finland. We performed a retrospective population register-based study on all singletons born at or after 22+0 weeks of gestation in Finland during 2000 to 2010. We linked three national health registers: the Finnish Medical Birth Register, the Hospital Discharge Register, and the Register of Congenital Malformations, and examined several demographic and clinical characteristics among women with and without placenta previa, in association with major congenital malformations. We estimated adjusted odds ratios and 95% confidence intervals using multivariable logistic regression models. The prevalence of placenta previa was estimated as 2.65 per 1000 singleton births in Finland (95% confidence interval, 2.53-2.79). Overall, 6.2% of women with placenta previa delivered a singleton infant with a major congenital malformation, compared with 3.8% of unaffected women (p ≤ 0.001). Placenta previa was positively associated with almost 1.6-fold increased risk of major congenital malformations in the offspring, after controlling for maternal age, parity, fetal sex, smoking, socio-economic status, chorionic villus biopsy, in vitro fertilization, pre-existing diabetes, depression, preeclampsia, and prior caesarean section (adjusted odds ratio = 1.55; 95% confidence interval, 1.27-1.90). Using a large population-based study, we found that placenta previa was weakly, but significantly associated with an increased risk of major congenital malformations in singleton births. Future studies should examine the association between placenta previa and individual types of congenital malformations, specifically in high-risk pregnancies. © 2015 Wiley Periodicals, Inc.

  11. A Comparison of Composite Reliability Estimators: Coefficient Omega Confidence Intervals in the Current Literature

    ERIC Educational Resources Information Center

    Padilla, Miguel A.; Divers, Jasmin

    2016-01-01

    Coefficient omega and alpha are both measures of the composite reliability for a set of items. Unlike coefficient alpha, coefficient omega remains unbiased with congeneric items with uncorrelated errors. Despite this ability, coefficient omega is not as widely used and cited in the literature as coefficient alpha. Reasons for coefficient omega's…

  12. A Generally Robust Approach for Testing Hypotheses and Setting Confidence Intervals for Effect Sizes

    ERIC Educational Resources Information Center

    Keselman, H. J.; Algina, James; Lix, Lisa M.; Wilcox, Rand R.; Deering, Kathleen N.

    2008-01-01

    Standard least squares analysis of variance methods suffer from poor power under arbitrarily small departures from normality and fail to control the probability of a Type I error when standard assumptions are violated. This article describes a framework for robust estimation and testing that uses trimmed means with an approximate degrees of…

  13. Assessing Live Fuel Moisture For Fire Management Applications

    Treesearch

    David R. Weise; Roberta A. Hartford; Larry Mahaffey

    1998-01-01

    The variation associated with sampling live fuel moisture was examined for several shrub and canopy fuels in southern California, Arizona, and Colorado. Ninety-five % confidence intervals ranged from 5 to % . Estimated sample sizes varied greatly. The value of knowing the live fuel moisture content in fire decision making is unknown. If the fuel moisture is highly...

  14. ESTABLISHMENT OF A FIBRINOGEN REFERENCE INTERVAL IN ORNATE BOX TURTLES (TERRAPENE ORNATA ORNATA).

    PubMed

    Parkinson, Lily; Olea-Popelka, Francisco; Klaphake, Eric; Dadone, Liza; Johnston, Matthew

    2016-09-01

    This study sought to establish a reference interval for fibrinogen in healthy ornate box turtles (Terrapene ornata ornata). A total of 48 turtles were enrolled, with 42 turtles deemed to be noninflammatory and thus fitting the inclusion criteria and utilized to estimate a fibrinogen reference interval. Turtles were excluded based upon physical examination and blood work abnormalities. A Shapiro-Wilk normality test indicated that the noninflammatory turtle fibrinogen values were normally distributed (Gaussian distribution) with an average of 108 mg/dl and a 95% confidence interval of the mean of 97.9-117 mg/dl. Those turtles excluded from the reference interval because of abnormalities affecting their health did not have significantly different fibrinogen values (P = 0.313). A reference interval for healthy ornate box turtles was calculated. Further investigation into the utility of fibrinogen measurement for clinical usage in ornate box turtles is warranted.
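
    The 97.9-117 mg/dl figure above is a confidence interval of the mean; a sketch of the usual normal-approximation computation (a t-quantile would be slightly wider at n = 42; the values below are synthetic, not the study's measurements):

```python
import math
from statistics import mean, stdev

def mean_ci(values, z=1.96):
    """Normal-approximation 95% confidence interval for the mean."""
    m = mean(values)
    half = z * stdev(values) / math.sqrt(len(values))
    return m, m - half, m + half

# Synthetic fibrinogen-like values (mg/dl), for illustration only.
m, lo, hi = mean_ci([100, 104, 108, 112, 116])
print(m, round(lo, 1), round(hi, 1))
```

    Note that a reference interval for individuals (mean ± 1.96 SD) is much wider than this confidence interval for the mean; the two are often confused.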

  15. Using Screencast Videos to Enhance Undergraduate Students' Statistical Reasoning about Confidence Intervals

    ERIC Educational Resources Information Center

    Strazzeri, Kenneth Charles

    2013-01-01

    The purposes of this study were to investigate (a) undergraduate students' reasoning about the concepts of confidence intervals (b) undergraduate students' interactions with "well-designed" screencast videos on sampling distributions and confidence intervals, and (c) how screencast videos improve undergraduate students' reasoning ability…

  16. A post hoc evaluation of a sample size re-estimation in the Secondary Prevention of Small Subcortical Strokes study.

    PubMed

    McClure, Leslie A; Szychowski, Jeff M; Benavente, Oscar; Hart, Robert G; Coffey, Christopher S

    2016-10-01

The use of adaptive designs has been increasing in randomized clinical trials. Sample size re-estimation is a type of adaptation in which nuisance parameters are estimated at an interim point in the trial and the sample size re-computed based on these estimates. The Secondary Prevention of Small Subcortical Strokes study was a randomized clinical trial assessing the impact of single- versus dual-antiplatelet therapy and control of systolic blood pressure to a higher (130-149 mmHg) versus lower (<130 mmHg) target on recurrent stroke risk in a two-by-two factorial design. A sample size re-estimation was performed during the Secondary Prevention of Small Subcortical Strokes study, resulting in an increase from the planned sample size of 2500 to 3020, and we sought to determine the impact of the sample size re-estimation on the study results. We assessed the results of the primary efficacy and safety analyses with the full 3020 patients and compared them to the results that would have been observed had randomization ended with 2500 patients. The primary efficacy outcome considered was recurrent stroke, and the primary safety outcomes were major bleeds and death. We computed incidence rates for the efficacy and safety outcomes and used Cox proportional hazards models to examine the hazard ratios for each of the two treatment interventions (i.e. the antiplatelet and blood pressure interventions). In the antiplatelet intervention, the hazard ratio was not materially modified by increasing the sample size, nor did the conclusions regarding the efficacy of mono- versus dual-therapy change: there was no difference in the effect of dual- versus monotherapy on the risk of recurrent stroke (n = 3020 HR (95% confidence interval): 0.92 (0.72, 1.2), p = 0.48; n = 2500 HR (95% confidence interval): 1.0 (0.78, 1.3), p = 0.85). 
With respect to the blood pressure intervention, increasing the sample size resulted in less certainty in the results, as the hazard ratio for the higher versus lower systolic blood pressure target approached, but did not achieve, statistical significance with the larger sample (n = 3020 HR (95% confidence interval): 0.81 (0.63, 1.0), p = 0.089; n = 2500 HR (95% confidence interval): 0.89 (0.68, 1.17), p = 0.40). The results from the safety analyses were similar with 3020 and 2500 patients for both study interventions. Other trial-related factors, such as contracts, finances, and study management, were impacted as well. Adaptive designs can have benefits in randomized clinical trials, but do not always result in significant findings. The impact of adaptive designs should be measured in terms of both trial results and practical issues related to trial management. More post hoc analyses of study adaptations will lead to better understanding of the balance between the benefits and the costs. © The Author(s) 2016.

  17. Comparison of referral and non-referral hypertensive disorders during pregnancy: an analysis of 271 consecutive cases at a tertiary hospital.

    PubMed

    Liu, Ching-Ming; Chang, Shuenn-Dyh; Cheng, Po-Jen

    2005-05-01

This retrospective cohort study analyzed the clinical manifestations in patients with preeclampsia and eclampsia, assessed risk factors in relation to the severity of hypertensive disorders, and compared maternal and perinatal morbidity and mortality between referral and non-referral patients. 271 pregnant women with preeclampsia and eclampsia were assessed (1993 to 1997). Chi-square analysis was used to compare categorical variables, and confidence intervals and odds ratios were estimated to compare proportions between the referral and non-referral groups. Multivariate logistic regression was used to adjust for potential confounding risk factors. Of the 271 patients included in this study, 71 (26.2%) were referrals from other hospitals. Most referral patients (62, 87.3%) were transferred between 21 and 37 weeks of gestation. Univariate analysis revealed that referral patients with hypertensive disorder were significantly associated with SBP > or =180, DBP > or =105, severe preeclampsia, haemolysis, elevated liver enzymes, low platelets (HELLP), emergency C/S, maternal complications, and low birth weight babies, as well as poor Apgar score. Multivariate logistic regression analyses revealed that the risk factors significantly associated with increased risk in referral patients included: diastolic blood pressure above 105 mmHg (adjusted odds ratio, 2.09; 95 percent confidence interval, 1.06 to 4.13; P = 0.034), severe preeclampsia (adjusted odds ratio, 3.46; 95 percent confidence interval, 1.76 to 6.81; P < 0.001), eclampsia (adjusted odds ratio, 2.77; 95 percent confidence interval, 0.92 to 8.35; P = 0.071), and HELLP syndrome (adjusted odds ratio, 18.81; 95 percent confidence interval, 2.14 to 164.99; P = 0.008). The significant factors associated with referral patients with hypertensive disorders were severe preeclampsia, HELLP, and eclampsia. 
Lack of prenatal care was the major avoidable factor found in referral and high risk patients. Time constraints relating to referral patients and the appropriateness of patient-centered care for patient safety and better quality of health care need further investigation on national and multi-center clinical trials.

  18. Dietary Protein and Potassium, Diet-Dependent Net Acid Load, and Risk of Incident Kidney Stones.

    PubMed

    Ferraro, Pietro Manuel; Mandel, Ernest I; Curhan, Gary C; Gambaro, Giovanni; Taylor, Eric N

    2016-10-07

Protein and potassium intake and the resulting diet-dependent net acid load may affect kidney stone formation. It is not known whether protein type or net acid load is associated with risk of kidney stones. We prospectively examined intakes of protein (dairy, nondairy animal, and vegetable), potassium, and animal protein-to-potassium ratio (an estimate of net acid load) and risk of incident kidney stones in the Health Professionals Follow-Up Study (n=42,919), the Nurses' Health Study I (n=60,128), and the Nurses' Health Study II (n=90,629). Multivariable models were adjusted for age, body mass index, diet, and other factors. We also analyzed cross-sectional associations with 24-hour urine (n=6129). During 3,108,264 person-years of follow-up, there were 6308 incident kidney stones. Dairy protein was associated with lower risk in the Nurses' Health Study II (hazard ratio for highest versus lowest quintile, 0.84; 95% confidence interval, 0.73 to 0.96; P value for trend <0.01). The hazard ratios for nondairy animal protein were 1.15 (95% confidence interval, 0.97 to 1.36; P value for trend =0.04) in the Health Professionals Follow-Up Study and 1.20 (95% confidence interval, 0.99 to 1.46; P value for trend =0.06) in the Nurses' Health Study I. Potassium intake was associated with lower risk in all three cohorts (hazard ratios from 0.44 [95% confidence interval, 0.36 to 0.53] to 0.67 [95% confidence interval, 0.57 to 0.78]; P values for trend <0.001). Animal protein-to-potassium ratio was associated with higher risk (P value for trend =0.004), even after adjustment for animal protein and potassium. Higher dietary potassium was associated with higher urine citrate, pH, and volume (P values for trend <0.002). Kidney stone risk may vary by protein type. Diets high in potassium or with a relative abundance of potassium compared with animal protein could represent a means of stone prevention. Copyright © 2016 by the American Society of Nephrology.

  19. Decisions about Renal Replacement Therapy in Patients with Advanced Kidney Disease in the US Department of Veterans Affairs, 2000-2011.

    PubMed

    Wong, Susan P Y; Hebert, Paul L; Laundry, Ryan J; Hammond, Kenric W; Liu, Chuan-Fen; Burrows, Nilka R; O'Hare, Ann M

    2016-10-07

It is not known what proportion of United States patients with advanced CKD go on to receive RRT. In other developed countries, receipt of RRT is highly age dependent and the exception rather than the rule at older ages. We conducted a retrospective study of a national cohort of 28,568 adults who were receiving care within the US Department of Veterans Affairs and had a sustained eGFR <15 ml/min per 1.73 m2 between January 1, 2000, and December 31, 2009. We used linked administrative data from the US Renal Data System, US Department of Veterans Affairs, and Medicare to identify cohort members who received RRT during follow-up through October 1, 2011 (n=19,165). For a random 25% sample of the remaining 9403 patients, we performed an in-depth review of their VA-wide electronic medical records to determine the treatment status of their CKD. Two thirds (67.1%) of cohort members received RRT on the basis of administrative data. On the basis of the results of chart review, we estimate that an additional 7.5% (95% confidence interval, 7.2% to 7.8%) of cohort members had, in fact, received dialysis, that 10.9% (95% confidence interval, 10.6% to 11.3%) were preparing for and/or discussing dialysis but had not started dialysis at most recent follow-up, and that a decision had been made not to pursue dialysis in 14.5% (95% confidence interval, 14.1% to 14.9%). The percentage of cohort members who received or were preparing to receive RRT ranged from 96.2% (95% confidence interval, 94.4% to 97.4%) for those <45 years old to 53.3% (95% confidence interval, 50.7% to 55.9%) for those aged ≥85 years old. Results were similar after stratification by tertile of Gagne comorbidity score. In this large United States cohort of patients with advanced CKD, the majority received or were preparing to receive RRT. This was true even among the oldest patients with the highest burden of comorbidity. Copyright © 2016 by the American Society of Nephrology.

  20. Decisions about Renal Replacement Therapy in Patients with Advanced Kidney Disease in the US Department of Veterans Affairs, 2000–2011

    PubMed Central

    Hebert, Paul L.; Laundry, Ryan J.; Hammond, Kenric W.; Liu, Chuan-Fen; Burrows, Nilka R.; O’Hare, Ann M.

    2016-01-01

Background and objectives It is not known what proportion of United States patients with advanced CKD go on to receive RRT. In other developed countries, receipt of RRT is highly age dependent and the exception rather than the rule at older ages. Design, setting, participants, & measurements We conducted a retrospective study of a national cohort of 28,568 adults who were receiving care within the US Department of Veterans Affairs and had a sustained eGFR <15 ml/min per 1.73 m2 between January 1, 2000, and December 31, 2009. We used linked administrative data from the US Renal Data System, US Department of Veterans Affairs, and Medicare to identify cohort members who received RRT during follow-up through October 1, 2011 (n=19,165). For a random 25% sample of the remaining 9403 patients, we performed an in-depth review of their VA-wide electronic medical records to determine the treatment status of their CKD. Results Two thirds (67.1%) of cohort members received RRT on the basis of administrative data. On the basis of the results of chart review, we estimate that an additional 7.5% (95% confidence interval, 7.2% to 7.8%) of cohort members had, in fact, received dialysis, that 10.9% (95% confidence interval, 10.6% to 11.3%) were preparing for and/or discussing dialysis but had not started dialysis at most recent follow-up, and that a decision had been made not to pursue dialysis in 14.5% (95% confidence interval, 14.1% to 14.9%). The percentage of cohort members who received or were preparing to receive RRT ranged from 96.2% (95% confidence interval, 94.4% to 97.4%) for those <45 years old to 53.3% (95% confidence interval, 50.7% to 55.9%) for those aged ≥85 years old. Results were similar after stratification by tertile of Gagne comorbidity score. Conclusions In this large United States cohort of patients with advanced CKD, the majority received or were preparing to receive RRT. This was true even among the oldest patients with the highest burden of comorbidity. 
PMID:27660306

  1. Dietary Protein and Potassium, Diet–Dependent Net Acid Load, and Risk of Incident Kidney Stones

    PubMed Central

    Mandel, Ernest I.; Curhan, Gary C.; Gambaro, Giovanni; Taylor, Eric N.

    2016-01-01

    Background and objectives Protein and potassium intake and the resulting diet–dependent net acid load may affect kidney stone formation. It is not known whether protein type or net acid load is associated with risk of kidney stones. Design, setting, participants, & measurements We prospectively examined intakes of protein (dairy, nondairy animal, and vegetable), potassium, and animal protein-to-potassium ratio (an estimate of net acid load) and risk of incident kidney stones in the Health Professionals Follow-Up Study (n=42,919), the Nurses’ Health Study I (n=60,128), and the Nurses’ Health Study II (n=90,629). Multivariable models were adjusted for age, body mass index, diet, and other factors. We also analyzed cross-sectional associations with 24-hour urine (n=6129). Results During 3,108,264 person-years of follow-up, there were 6308 incident kidney stones. Dairy protein was associated with lower risk in the Nurses’ Health Study II (hazard ratio for highest versus lowest quintile, 0.84; 95% confidence interval, 0.73 to 0.96; P value for trend <0.01). The hazard ratios for nondairy animal protein were 1.15 (95% confidence interval, 0.97 to 1.36; P value for trend =0.04) in the Health Professionals Follow-Up Study and 1.20 (95% confidence interval, 0.99 to 1.46; P value for trend =0.06) in the Nurses’ Health Study I. Potassium intake was associated with lower risk in all three cohorts (hazard ratios from 0.44 [95% confidence interval, 0.36 to 0.53] to 0.67 [95% confidence interval, 0.57 to 0.78]; P values for trend <0.001). Animal protein-to-potassium ratio was associated with higher risk (P value for trend =0.004), even after adjustment for animal protein and potassium. Higher dietary potassium was associated with higher urine citrate, pH, and volume (P values for trend <0.002). Conclusions Kidney stone risk may vary by protein type. 
Diets high in potassium or with a relative abundance of potassium compared with animal protein could represent a means of stone prevention. PMID:27445166

  2. genepop'007: a complete re-implementation of the genepop software for Windows and Linux.

    PubMed

    Rousset, François

    2008-01-01

    This note summarizes developments of the genepop software since its first description in 1995, and in particular those new to version 4.0: an extended input format, several estimators of neighbourhood size under isolation by distance, new estimators and confidence intervals for null allele frequency, and less important extensions to previous options. genepop now runs under Linux as well as under Windows, and can be entirely controlled by batch calls. © 2007 The Author.

  3. Health significance and statistical uncertainty. The value of P-value.

    PubMed

    Consonni, Dario; Bertazzi, Pier Alberto

    2017-10-27

The P-value is widely used as a summary statistic of scientific results. Unfortunately, there is a widespread tendency to dichotomize its value into "P<0.05" (defined as "statistically significant") and "P>0.05" ("statistically not significant"), with the former implying a "positive" result and the latter a "negative" one. We aim to show the unsuitability of such an approach when evaluating the effects of environmental and occupational risk factors. We provide examples of distorted use of the P-value and of the negative consequences for science and public health of such a black-and-white vision. The rigid interpretation of the P-value as a dichotomy favors the confusion between health relevance and statistical significance, discourages thoughtful thinking, and diverts attention from what really matters, the health significance. A much better way to express and communicate scientific results involves reporting effect estimates (e.g., risks, risk ratios, or risk differences) and their confidence intervals (CI), which summarize and convey both health significance and statistical uncertainty. Unfortunately, many researchers do not usually consider the whole interval of the CI but only examine whether it includes the null value, thereby degrading this procedure to the same P-value dichotomy (statistical significance or not). In reporting statistical results of scientific research, present effect estimates with their confidence intervals, and do not qualify the P-value as "significant" or "not significant".
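
    The reporting style recommended here, an effect estimate with its confidence interval rather than a significance verdict, can be sketched as follows. The cohort counts are hypothetical, and the Wald interval on the log risk ratio is one standard construction; the point is that "RR 1.95, 95% CI 0.94 to 4.06" conveys both direction and uncertainty where "P > 0.05" conveys almost nothing.

```python
import math

# Hypothetical cohort: 18/120 cases among exposed, 10/130 among unexposed.
a, n1 = 18, 120
c, n2 = 10, 130

rr = (a / n1) / (c / n2)  # risk ratio
se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)  # Wald SE of log(RR)
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)
print(f"RR {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```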

  4. Improved central confidence intervals for the ratio of Poisson means

    NASA Astrophysics Data System (ADS)

    Cousins, R. D.

    The problem of confidence intervals for the ratio of two unknown Poisson means was "solved" decades ago, but a closer examination reveals that the standard solution is far from optimal from the frequentist point of view. We construct a more powerful set of central confidence intervals, each of which is a (typically proper) subinterval of the corresponding standard interval. They also provide upper and lower confidence limits which are more restrictive than the standard limits. The construction follows Neyman's original prescription, though discreteness of the Poisson distribution and the presence of a nuisance parameter (one of the unknown means) lead to slightly conservative intervals. Philosophically, the issue of the appropriateness of the construction method is similar to the issue of conditioning on the margins in 2×2 contingency tables. From a frequentist point of view, the new set maintains (over) coverage of the unknown true value of the ratio of means at each stated confidence level, even though the new intervals are shorter than the old intervals by any measure (except for two cases where they are identical). As an example, when the number 2 is drawn from each Poisson population, the 90% CL central confidence interval on the ratio of means is (0.169, 5.196), rather than (0.108, 9.245). In the cited literature, such confidence intervals have applications in numerous branches of pure and applied science, including agriculture, wildlife studies, manufacturing, medicine, reliability theory, and elementary particle physics.
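
    The "standard" central interval that serves as the baseline here is the classical conditional construction: given counts x and y, x is binomial given the total x+y with parameter p = r/(1+r), so a Clopper-Pearson interval for p transforms into one for the ratio r. A sketch of that construction (it reproduces the standard interval quoted above, not the paper's shorter intervals):

```python
from math import comb

def binom_tail_ge(n, x, p):
    """P(X >= x) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(x, n + 1))

def bisect_root(f, target, lo=0.0, hi=1.0, iters=200):
    """Solve f(p) = target for monotone-increasing f by bisection."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def poisson_ratio_ci(x, y, cl=0.90):
    """Standard central CI for r = lambda_x / lambda_y from counts x and y."""
    n = x + y
    alpha = (1 - cl) / 2
    # Clopper-Pearson bounds for p = r / (1 + r), then back-transform to r.
    p_lo = bisect_root(lambda p: binom_tail_ge(n, x, p), alpha)
    p_hi = bisect_root(lambda p: binom_tail_ge(n, x + 1, p), 1 - alpha)
    return p_lo / (1 - p_lo), p_hi / (1 - p_hi)

lo, hi = poisson_ratio_ci(2, 2, 0.90)
print(round(lo, 3), round(hi, 3))  # (0.108, 9.245), the standard interval above
```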

  5. CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.

    USGS Publications Warehouse

    Cooley, Richard L.; Vecchia, Aldo V.

    1987-01-01

    A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.

  6. Bootstrapping Confidence Intervals for Robust Measures of Association.

    ERIC Educational Resources Information Center

    King, Jason E.

    A Monte Carlo simulation study was conducted to determine the bootstrap correction formula yielding the most accurate confidence intervals for robust measures of association. Confidence intervals were generated via the percentile, adjusted, BC, and BC(a) bootstrap procedures and applied to the Winsorized, percentage bend, and Pearson correlation…

  7. Interpretation of Confidence Interval Facing the Conflict

    ERIC Educational Resources Information Center

    Andrade, Luisa; Fernández, Felipe

    2016-01-01

    As literature has reported, it is usual that university students in statistics courses, and even statistics teachers, interpret the confidence level associated with a confidence interval as the probability that the parameter value will be between the lower and upper interval limits. To confront this misconception, class activities have been…

  8. Evaluating Independent Proportions for Statistical Difference, Equivalence, Indeterminacy, and Trivial Difference Using Inferential Confidence Intervals

    ERIC Educational Resources Information Center

    Tryon, Warren W.; Lewis, Charles

    2009-01-01

    Tryon presented a graphic inferential confidence interval (ICI) approach to analyzing two independent and dependent means for statistical difference, equivalence, replication, indeterminacy, and trivial difference. Tryon and Lewis corrected the reduction factor used to adjust descriptive confidence intervals (DCIs) to create ICIs and introduced…

  9. Age-dependent biochemical quantities: an approach for calculating reference intervals.

    PubMed

    Bjerner, J

    2007-01-01

    A parametric method is often preferred when calculating reference intervals for biochemical quantities, as non-parametric methods are less efficient and require more observations/study subjects. Parametric methods are complicated, however, because of three commonly encountered features. First, biochemical quantities seldom display a Gaussian distribution, and there must either be a transformation procedure to obtain such a distribution or a more complex distribution has to be used. Second, biochemical quantities are often dependent on a continuous covariate, exemplified by rising serum concentrations of MUC1 (episialin, CA15.3) with increasing age. Third, outliers often exert substantial influence on parametric estimations and therefore need to be excluded before calculations are made. The International Federation of Clinical Chemistry (IFCC) currently recommends that confidence intervals be calculated for the reference centiles obtained. However, common statistical packages allowing for the adjustment of a continuous covariate do not make this calculation. In the method described in the current study, Tukey's fence is used to eliminate outliers and two-stage transformations (modulus-exponential-normal) in order to render Gaussian distributions. Fractional polynomials are employed to model functions for mean and standard deviations dependent on a covariate, and the model is selected by maximum likelihood. Confidence intervals are calculated for the fitted centiles by combining parameter estimation and sampling uncertainties. Finally, the elimination of outliers was made dependent on covariates by reiteration. Though a good knowledge of statistical theory is needed when performing the analysis, the current method is rewarding because the results are of practical use in patient care.
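
    The full procedure (two-stage transformations, fractional polynomials, reiterated elimination) is beyond a short example, but the Tukey's fence step can be sketched as below. Quartile conventions vary between implementations, so the exact fence values depend on the quantile method used; the data are hypothetical.

```python
import statistics

def tukey_fence(values, k=1.5):
    """Keep only points inside [Q1 - k*IQR, Q3 + k*IQR] (one common convention)."""
    q1, _, q3 = statistics.quantiles(values, n=4)  # 'exclusive' method by default
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if low <= v <= high]

vals = [102, 98, 110, 105, 99, 101, 104, 250]  # 250 is a gross outlier
print(tukey_fence(vals))  # the outlier is removed; the rest survive
```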

  10. Postoperative maintenance levonorgestrel-releasing intrauterine system and endometrioma recurrence: a randomized controlled study.

    PubMed

    Chen, Yi-Jen; Hsu, Teh-Fu; Huang, Ben-Shian; Tsai, Hsiao-Wen; Chang, Yen-Hou; Wang, Peng-Hui

    2017-06-01

    According to 3 randomized trials, the levonorgestrel-releasing intrauterine system significantly reduced recurrent endometriosis-related pelvic pain at postoperative year 1. Only a few studies have evaluated the long-term effectiveness of the device for preventing endometrioma recurrence, and the effects of a levonorgestrel-releasing intrauterine system as a maintenance therapy remain unclear. The objective of the study was to evaluate whether a maintenance levonorgestrel-releasing intrauterine system is effective for preventing postoperative endometrioma recurrence. From May 2011 through March 2012, a randomized controlled trial including 80 patients with endometriomas undergoing laparoscopic cystectomy followed by six cycles of gonadotropin-releasing hormone agonist treatment was conducted. After surgery, the patients were randomized to groups that did or did not receive a levonorgestrel-releasing intrauterine system (intervention group, n = 40, vs control group, n = 40). The primary outcome was endometrioma recurrence 30 months after surgery. The secondary outcomes included dysmenorrhea, CA125 levels, noncyclic pelvic pain, and side effects. Endometrioma recurrence at 30 months did not significantly differ between the 2 groups (the intervention group, 10 of 40, 25% vs the control group 15 of 40, 37.5%; hazard ratio, 0.60, 95% confidence interval, 0.27-1.33, P = .209). The intervention group exhibited a lower dysmenorrhea recurrence rate, with an estimated hazard ratio of 0.32 (95% confidence interval, 0.12-0.83, P = .019). 
Over a 30 month follow-up, the intervention group exhibited a greater reduction in dysmenorrhea as assessed with a visual analog scale score (mean ± SD, 60.8 ± 25.5 vs 38.7 ± 25.9, P < .001, 95% confidence interval, 10.7-33.5), noncyclic pelvic pain visual analog scale score (39.1 ± 10.9 vs 30.1 ± 14.7, P = .014, 95% confidence interval, 1.9-16.1), and CA125 (median [interquartile range], -32.1 [-59.1 to 14.9], vs -15.6 [-33.0 to 5.0], P = .001) compared with the control group. The number-needed-to-treat benefit for dysmenorrhea recurrence at 30 months was 5. The number of recurrent cases requiring further surgical or hormone treatment in the intervention group (1 of 40, 2.5%, 95% confidence interval, -2.3% to 7.3%) was significantly lower than that in the control group (8 of 40, 20%, 95% confidence interval, 7.6-32.4%; P = .031). Long-term maintenance therapy using a levonorgestrel-releasing intrauterine system is not effective for preventing endometrioma recurrence. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Methane Emissions from the Natural Gas Transmission and Storage System in the United States.

    PubMed

    Zimmerle, Daniel J; Williams, Laurie L; Vaughn, Timothy L; Quinn, Casey; Subramanian, R; Duggan, Gerald P; Willson, Bryan; Opsomer, Jean D; Marchese, Anthony J; Martinez, David M; Robinson, Allen L

    2015-08-04

    The recent growth in production and utilization of natural gas offers potential climate benefits, but those benefits depend on lifecycle emissions of methane, the primary component of natural gas and a potent greenhouse gas. This study estimates methane emissions from the transmission and storage (T&S) sector of the United States natural gas industry using new data collected during 2012, including 2,292 onsite measurements, additional emissions data from 677 facilities and activity data from 922 facilities. The largest emission sources were fugitive emissions from certain compressor-related equipment and "super-emitter" facilities. We estimate total methane emissions from the T&S sector at 1,503 [1,220 to 1,950] Gg/yr (95% confidence interval) compared to the 2012 Environmental Protection Agency's Greenhouse Gas Inventory (GHGI) estimate of 2,071 [1,680 to 2,690] Gg/yr. While the overlap in confidence intervals indicates that the difference is not statistically significant, this is the result of several significant, but offsetting, factors. Factors which reduce the study estimate include a lower estimated facility count, a shift away from engines toward lower-emitting turbine and electric compressor drivers, and reductions in the usage of gas-driven pneumatic devices. Factors that increase the study estimate relative to the GHGI include updated emission rates in certain emission categories and explicit treatment of skewed emissions at both component and facility levels. For T&S stations that are required to report to the EPA's Greenhouse Gas Reporting Program (GHGRP), this study estimates total emissions to be 260% [215% to 330%] of the reportable emissions for these stations, primarily due to the inclusion of emission sources that are not reported under the GHGRP rules, updated emission factors, and super-emitter emissions.

  12. A probabilistic method for testing and estimating selection differences between populations.

    PubMed

    He, Yungang; Wang, Minxian; Huang, Xin; Li, Ran; Xu, Hongyang; Xu, Shuhua; Jin, Li

    2015-12-01

    Human populations around the world encounter various environmental challenges and, consequently, develop genetic adaptations to different selection forces. Identifying the differences in natural selection between populations is critical for understanding the roles of specific genetic variants in evolutionary adaptation. Although numerous methods have been developed to detect genetic loci under recent directional selection, a probabilistic solution for testing and quantifying selection differences between populations is lacking. Here we report the development of a probabilistic method for testing and estimating selection differences between populations. By use of a probabilistic model of genetic drift and selection, we showed that logarithm odds ratios of allele frequencies provide estimates of the differences in selection coefficients between populations. The estimates approximate a normal distribution, and variance can be estimated using genome-wide variants. This allows us to quantify differences in selection coefficients and to determine the confidence intervals of the estimate. Our work also revealed the link between genetic association testing and hypothesis testing of selection differences. It therefore supplies a solution for hypothesis testing of selection differences. This method was applied to a genome-wide data analysis of Han and Tibetan populations. The results confirmed that both the EPAS1 and EGLN1 genes are under statistically different selection in Han and Tibetan populations. We further estimated differences in the selection coefficients for genetic variants involved in melanin formation and determined their confidence intervals between continental population groups. Application of the method to empirical data demonstrated the outstanding capability of this novel approach for testing and quantifying differences in natural selection. © 2015 He et al.; Published by Cold Spring Harbor Laboratory Press.
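
    The core quantity, the log odds ratio of an allele's frequency in two populations as an estimate of the difference in selection coefficients, can be sketched with a standard Wald variance. The paper itself calibrates the variance empirically from genome-wide variants, so the 1/count formula below is a simplified stand-in, and the allele counts are hypothetical.

```python
import math

def log_or_ci(x1, n1, x2, n2, z=1.96):
    """Log odds ratio of allele frequency x1/n1 vs x2/n2, with a Wald 95% CI."""
    d = math.log((x1 / (n1 - x1)) / (x2 / (n2 - x2)))
    se = math.sqrt(1 / x1 + 1 / (n1 - x1) + 1 / x2 + 1 / (n2 - x2))
    return d, d - z * se, d + z * se

# Hypothetical counts: allele seen on 60/200 chromosomes in pop 1, 25/200 in pop 2.
d, lo, hi = log_or_ci(60, 200, 25, 200)
print(f"log-OR {d:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```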

  13. Blood transfusion for preventing primary and secondary stroke in people with sickle cell disease.

    PubMed

    Estcourt, Lise J; Fortin, Patricia M; Hopewell, Sally; Trivella, Marialena; Wang, Winfred C

    2017-01-17

Sickle cell disease is one of the commonest severe monogenic disorders in the world, due to the inheritance of two abnormal haemoglobin (beta globin) genes. Sickle cell disease can cause severe pain, significant end-organ damage, pulmonary complications, and premature death. Stroke affects around 10% of children with sickle cell anaemia (HbSS). Chronic blood transfusions may reduce the risk of vaso-occlusion and stroke by diluting the proportion of sickled cells in the circulation. This is an update of a Cochrane Review first published in 2002, and last updated in 2013. To assess risks and benefits of chronic blood transfusion regimens in people with sickle cell disease for primary and secondary stroke prevention (excluding silent cerebral infarcts). We searched for relevant trials in the Cochrane Library, MEDLINE (from 1946), Embase (from 1974), the Transfusion Evidence Library (from 1980), and ongoing trial databases; all searches current to 04 April 2016. We searched the Cochrane Cystic Fibrosis and Genetic Disorders Group Haemoglobinopathies Trials Register: 25 April 2016. Randomised controlled trials comparing red blood cell transfusions as prophylaxis for stroke in people with sickle cell disease to alternative or standard treatment. There were no restrictions by outcomes examined, language or publication status. Two authors independently assessed trial eligibility and the risk of bias and extracted data. We included five trials (660 participants) published between 1998 and 2016. Four of these trials were terminated early. 
The vast majority of participants had the haemoglobin (Hb)SS form of sickle cell disease.Three trials compared regular red cell transfusions to standard care in primary prevention of stroke: two in children with no previous long-term transfusions; and one in children and adolescents on long-term transfusion.Two trials compared the drug hydroxyurea (hydroxycarbamide) and phlebotomy to long-term transfusions and iron chelation therapy: one in primary prevention (children); and one in secondary prevention (children and adolescents).The quality of the evidence was very low to moderate across different outcomes according to GRADE methodology. This was due to the trials being at a high risk of bias due to lack of blinding, indirectness and imprecise outcome estimates. Red cell transfusions versus standard care Children with no previous long-term transfusionsLong-term transfusions probably reduce the incidence of clinical stroke in children with a higher risk of stroke (abnormal transcranial doppler velocities or previous history of silent cerebral infarct), risk ratio 0.12 (95% confidence interval 0.03 to 0.49) (two trials, 326 participants), moderate quality evidence.Long-term transfusions may: reduce the incidence of other sickle cell disease-related complications (acute chest syndrome, risk ratio 0.24 (95% confidence interval 0.12 to 0.48)) (two trials, 326 participants); increase quality of life (difference estimate -0.54, 95% confidence interval -0.92 to -0.17) (one trial, 166 participants); but make little or no difference to IQ scores (least square mean: 1.7, standard error 95% confidence interval -1.1 to 4.4) (one trial, 166 participants), low quality evidence.We are very uncertain whether long-term transfusions: reduce the risk of transient ischaemic attacks, Peto odds ratio 0.13 (95% confidence interval 0.01 to 2.11) (two trials, 323 participants); have any effect on all-cause mortality, no deaths reported (two trials, 326 participants); or increase the risk of 
alloimmunisation, risk ratio 3.16 (95% confidence interval 0.18 to 57.17) (one trial, 121 participants), very low quality evidence. Children and adolescents with previous long-term transfusions (one trial, 79 participants)We are very uncertain whether continuing long-term transfusions reduces the incidence of: stroke, risk ratio 0.22 (95% confidence interval 0.01 to 4.35); or all-cause mortality, Peto odds ratio 8.00 (95% confidence interval 0.16 to 404.12), very low quality evidence.Several review outcomes were only reported in one trial arm (sickle cell disease-related complications, alloimmunisation, transient ischaemic attacks).The trial did not report neurological impairment, or quality of life. Hydroxyurea and phlebotomy versus red cell transfusions and chelationNeither trial reported on neurological impairment, alloimmunisation, or quality of life. Primary prevention, children (one trial, 121 participants)Switching to hydroxyurea and phlebotomy may have little or no effect on liver iron concentrations, mean difference -1.80 mg Fe/g dry-weight liver (95% confidence interval -5.16 to 1.56), low quality evidence.We are very uncertain whether switching to hydroxyurea and phlebotomy has any effect on: risk of stroke (no strokes); all-cause mortality (no deaths); transient ischaemic attacks, risk ratio 1.02 (95% confidence interval 0.21 to 4.84); or other sickle cell disease-related complications (acute chest syndrome, risk ratio 2.03 (95% confidence interval 0.39 to 10.69)), very low quality evidence. 
Secondary prevention, children and adolescents (one trial, 133 participants)Switching to hydroxyurea and phlebotomy may: increase the risk of sickle cell disease-related serious adverse events, risk ratio 3.10 (95% confidence interval 1.42 to 6.75); but have little or no effect on median liver iron concentrations (hydroxyurea, 17.3 mg Fe/g dry-weight liver (interquartile range 10.0 to 30.6)); transfusion 17.3 mg Fe/g dry-weight liver (interquartile range 8.8 to 30.7), low quality evidence.We are very uncertain whether switching to hydroxyurea and phlebotomy: increases the risk of stroke, risk ratio 14.78 (95% confidence interval 0.86 to 253.66); or has any effect on all-cause mortality, Peto odds ratio 0.98 (95% confidence interval 0.06 to 15.92); or transient ischaemic attacks, risk ratio 0.66 (95% confidence interval 0.25 to 1.74), very low quality evidence. There is no evidence for managing adults, or children who do not have HbSS sickle cell disease.In children who are at higher risk of stroke and have not had previous long-term transfusions, there is moderate quality evidence that long-term red cell transfusions reduce the risk of stroke, and low quality evidence they also reduce the risk of other sickle cell disease-related complications.In primary and secondary prevention of stroke there is low quality evidence that switching to hydroxyurea with phlebotomy has little or no effect on the liver iron concentration.In secondary prevention of stroke there is low-quality evidence that switching to hydroxyurea with phlebotomy increases the risk of sickle cell disease-related events.All other evidence in this review is of very low quality.

  14. Blood transfusion for preventing primary and secondary stroke in people with sickle cell disease

    PubMed Central

    Estcourt, Lise J; Fortin, Patricia M; Hopewell, Sally; Trivella, Marialena; Wang, Winfred C

    2017-01-01

    Background Sickle cell disease is one of the commonest severe monogenic disorders in the world, due to the inheritance of two abnormal haemoglobin (beta globin) genes. Sickle cell disease can cause severe pain, significant end-organ damage, pulmonary complications, and premature death. Stroke affects around 10% of children with sickle cell anaemia (HbSS). Chronic blood transfusions may reduce the risk of vaso-occlusion and stroke by diluting the proportion of sickled cells in the circulation. This is an update of a Cochrane Review first published in 2002, and last updated in 2013. Objectives To assess risks and benefits of chronic blood transfusion regimens in people with sickle cell disease for primary and secondary stroke prevention (excluding silent cerebral infarcts). Search methods We searched for relevant trials in the Cochrane Library, MEDLINE (from 1946), Embase (from 1974), the Transfusion Evidence Library (from 1980), and ongoing trial databases; all searches current to 04 April 2016. We searched the Cochrane Cystic Fibrosis and Genetic Disorders Group Haemoglobinopathies Trials Register: 25 April 2016. Selection criteria Randomised controlled trials comparing red blood cell transfusions as prophylaxis for stroke in people with sickle cell disease to alternative or standard treatment. There were no restrictions by outcomes examined, language or publication status. Data collection and analysis Two authors independently assessed trial eligibility and the risk of bias and extracted data. Main results We included five trials (660 participants) published between 1998 and 2016. Four of these trials were terminated early. The vast majority of participants had the haemoglobin (Hb)SS form of sickle cell disease. Three trials compared regular red cell transfusions to standard care in primary prevention of stroke: two in children with no previous long-term transfusions; and one in children and adolescents on long-term transfusion. 
Two trials compared the drug hydroxyurea (hydroxycarbamide) and phlebotomy to long-term transfusions and iron chelation therapy: one in primary prevention (children); and one in secondary prevention (children and adolescents). The quality of the evidence was very low to moderate across different outcomes according to GRADE methodology. This was due to the trials being at a high risk of bias due to lack of blinding, indirectness and imprecise outcome estimates. Red cell transfusions versus standard care Children with no previous long-term transfusions Long-term transfusions probably reduce the incidence of clinical stroke in children with a higher risk of stroke (abnormal transcranial doppler velocities or previous history of silent cerebral infarct), risk ratio 0.12 (95% confidence interval 0.03 to 0.49) (two trials, 326 participants), moderate quality evidence. Long-term transfusions may: reduce the incidence of other sickle cell disease-related complications (acute chest syndrome, risk ratio 0.24 (95% confidence interval 0.12 to 0.48)) (two trials, 326 participants); increase quality of life (difference estimate -0.54, 95% confidence interval -0.92 to -0.17) (one trial, 166 participants); but make little or no difference to IQ scores (least square mean: 1.7, standard error 95% confidence interval -1.1 to 4.4) (one trial, 166 participants), low quality evidence. We are very uncertain whether long-term transfusions: reduce the risk of transient ischaemic attacks, Peto odds ratio 0.13 (95% confidence interval 0.01 to 2.11) (two trials, 323 participants); have any effect on all-cause mortality, no deaths reported (two trials, 326 participants); or increase the risk of alloimmunisation, risk ratio 3.16 (95% confidence interval 0.18 to 57.17) (one trial, 121 participants), very low quality evidence. 
Children and adolescents with previous long-term transfusions (one trial, 79 participants) We are very uncertain whether continuing long-term transfusions reduces the incidence of: stroke, risk ratio 0.22 (95% confidence interval 0.01 to 4.35); or all-cause mortality, Peto odds ratio 8.00 (95% confidence interval 0.16 to 404.12), very low quality evidence. Several review outcomes were only reported in one trial arm (sickle cell disease-related complications, alloimmunisation, transient ischaemic attacks). The trial did not report neurological impairment, or quality of life. Hydroxyurea and phlebotomy versus red cell transfusions and chelation Neither trial reported on neurological impairment, alloimmunisation, or quality of life. Primary prevention, children (one trial, 121 participants) Switching to hydroxyurea and phlebotomy may have little or no effect on liver iron concentrations, mean difference -1.80 mg Fe/g dry-weight liver (95% confidence interval -5.16 to 1.56), low quality evidence. We are very uncertain whether switching to hydroxyurea and phlebotomy has any effect on: risk of stroke (no strokes); all-cause mortality (no deaths); transient ischaemic attacks, risk ratio 1.02 (95% confidence interval 0.21 to 4.84); or other sickle cell disease-related complications (acute chest syndrome, risk ratio 2.03 (95% confidence interval 0.39 to 10.69)), very low quality evidence. Secondary prevention, children and adolescents (one trial, 133 participants) Switching to hydroxyurea and phlebotomy may: increase the risk of sickle cell disease-related serious adverse events, risk ratio 3.10 (95% confidence interval 1.42 to 6.75); but have little or no effect on median liver iron concentrations (hydroxyurea, 17.3 mg Fe/g dry-weight liver (interquartile range 10.0 to 30.6); transfusion, 17.3 mg Fe/g dry-weight liver (interquartile range 8.8 to 30.7)), low quality evidence. 
We are very uncertain whether switching to hydroxyurea and phlebotomy: increases the risk of stroke, risk ratio 14.78 (95% confidence interval 0.86 to 253.66); or has any effect on all-cause mortality, Peto odds ratio 0.98 (95% confidence interval 0.06 to 15.92); or transient ischaemic attacks, risk ratio 0.66 (95% confidence interval 0.25 to 1.74), very low quality evidence. Authors’ conclusions There is no evidence for managing adults, or children who do not have HbSS sickle cell disease. In children who are at higher risk of stroke and have not had previous long-term transfusions, there is moderate quality evidence that long-term red cell transfusions reduce the risk of stroke, and low quality evidence they also reduce the risk of other sickle cell disease-related complications. In primary and secondary prevention of stroke there is low quality evidence that switching to hydroxyurea with phlebotomy has little or no effect on the liver iron concentration. In secondary prevention of stroke there is low-quality evidence that switching to hydroxyurea with phlebotomy increases the risk of sickle cell disease-related events. All other evidence in this review is of very low quality. PMID:24226646

  15. Implementation Of Fuzzy Approach To Improve Time Estimation [Case Study Of A Thermal Power Plant Is Considered]

    NASA Astrophysics Data System (ADS)

    Pradhan, Moumita; Pradhan, Dinesh; Bandyopadhyay, G.

    2010-10-01

    Fuzzy systems have demonstrated their ability to solve different kinds of problems in various application domains, and there is increasing interest in applying fuzzy concepts to improve the tasks of a system. Here a case study of a thermal power plant is considered. The existing time estimates represent the time to complete tasks. Applying a fuzzy linear approach, it becomes clear that at each confidence level the least time is taken to complete the tasks; as the time schedule shortens, less cost is needed. The objective of this paper is to show how a system becomes more efficient when the fuzzy linear approach is applied: we want to optimize the time estimates so that all tasks are performed on appropriate schedules. For the case study, the optimistic time (to), pessimistic time (tp), and most likely time (tm) are taken as data collected from the thermal power plant. These time estimates are used to calculate the expected time (te), which represents the time to complete a particular task allowing for all contingencies. Using the project evaluation and review technique (PERT) and the critical path method (CPM), the critical path duration (CPD) of the project is calculated; it indicates a fifty percent probability that the total tasks can be completed in fifty days. Using the critical path duration and the standard deviation of the critical path, the probability of completing the whole project by a given date can be obtained from the normal distribution. Using the trapezoidal rule on the four time estimates (to, tm, tp, te), we calculate a defuzzified value of the time estimates. For the fuzzy range, we consider four confidence levels, namely 0.4, 0.6, 0.8 and 1. From our study, it is seen that time estimates at confidence levels between 0.4 and 0.8 give better results than the other confidence levels.
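The expected-time and completion-probability calculations the abstract relies on are the standard PERT formulas. A minimal sketch, with hypothetical task durations rather than the plant's data:

```python
import math

# Standard PERT estimates from the three time values named in the abstract:
# to = optimistic, tm = most likely, tp = pessimistic. Durations are hypothetical.

def pert_expected_time(to, tm, tp):
    """Expected time te = (to + 4*tm + tp) / 6."""
    return (to + 4 * tm + tp) / 6

def pert_std_dev(to, tp):
    """Standard deviation of a task duration, (tp - to) / 6."""
    return (tp - to) / 6

def completion_probability(deadline, cpd, sigma):
    """P(project finishes by deadline) under the normal approximation,
    where cpd is the critical path duration and sigma its std deviation."""
    return 0.5 * (1 + math.erf((deadline - cpd) / (sigma * math.sqrt(2))))

te = pert_expected_time(2, 5, 8)              # -> 5.0
sd = pert_std_dev(2, 8)                       # -> 1.0
p50 = completion_probability(50, 50, 5)       # -> 0.5, the "fifty percent in fifty days" case
```

The last line mirrors the abstract's observation that a deadline equal to the critical path duration has a fifty percent completion probability.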

  16. Race, Ethnicity, Language, Social Class, and Health Communication Inequalities: A Nationally-Representative Cross-Sectional Study

    PubMed Central

    Viswanath, Kasisomayajula; Ackerson, Leland K.

    2011-01-01

    Background While mass media communications can be an important source of health information, there are substantial social disparities in health knowledge that may be related to media use. The purpose of this study is to investigate how the use of cancer-related health communications is patterned by race, ethnicity, language, and social class. Methodology/Principal Findings In a nationally-representative cross-sectional telephone survey, 5,187 U.S. adults provided information about demographic characteristics, cancer information seeking, and attention to and trust in health information from television, radio, newspaper, magazines, and the Internet. Cancer information seeking was lowest among Spanish-speaking Hispanics (odds ratio: 0.42; 95% confidence interval: 0.28–0.63) compared to non-Hispanic whites. Spanish-speaking Hispanics were more likely than non-Hispanic whites to pay attention to (odds ratio: 3.10; 95% confidence interval: 2.07–4.66) and trust (odds ratio: 2.61; 95% confidence interval: 1.53–4.47) health messages from the radio. Non-Hispanic blacks were more likely than non-Hispanic whites to pay attention to (odds ratio: 2.39; 95% confidence interval: 1.88–3.04) and trust (odds ratio: 2.16; 95% confidence interval: 1.61–2.90) health messages on television. Those who were college graduates tended to pay more attention to health information from newspapers (odds ratio: 1.98; 95% confidence interval: 1.42–2.75), magazines (odds ratio: 1.86; 95% confidence interval: 1.32–2.60), and the Internet (odds ratio: 4.74; 95% confidence interval: 2.70–8.31) and had less trust in cancer-related health information from television (odds ratio: 0.44; 95% confidence interval: 0.32–0.62) and radio (odds ratio: 0.54; 95% confidence interval: 0.34–0.86) compared to those who were not high school graduates. Conclusions/Significance Health media use is patterned by race, ethnicity, language and social class. 
Providing greater access to and enhancing the quality of health media by taking into account factors associated with social determinants may contribute to addressing social disparities in health. PMID:21267450

  17. GONe: Software for estimating effective population size in species with generational overlap

    USGS Publications Warehouse

    Coombs, J.A.; Letcher, B.H.; Nislow, K.H.

    2012-01-01

    GONe is a user-friendly, Windows-based program for estimating effective size (Ne) in populations with overlapping generations. It uses the Jorde-Ryman modification to the temporal method to account for age structure in populations. This method requires estimates of age-specific survival and birth rate and allele frequencies measured in two or more consecutive cohorts. Allele frequencies are acquired by reading in genotypic data from files formatted for either GENEPOP or TEMPOFS. For each interval between consecutive cohorts, Ne is estimated at each locus and over all loci. Furthermore, Ne estimates are output for three different genetic drift estimators (Fs, Fc and Fk). Confidence intervals are derived from a chi-square distribution with degrees of freedom equal to the number of independent alleles. GONe has been validated over a wide range of Ne values, and for scenarios where survival and birth rates differ between sexes, sex ratios are unequal and reproductive variances differ. GONe is freely available for download. © 2011 Blackwell Publishing Ltd.
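For orientation, the classical temporal method that GONe modifies can be sketched as follows. This is the plain Fc drift estimator with a sampling-error correction, not the Jorde-Ryman age-structured version the program implements, and all allele frequencies and sample sizes are hypothetical:

```python
# Classical temporal-method sketch: two samples of allele frequencies taken
# `generations` apart give a standardized drift variance Fc, from which
# Ne is estimated after correcting for sampling error. Data are hypothetical.

def fc_per_allele(x, y):
    """Standardized variance in allele frequency between two samples."""
    return (x - y) ** 2 / ((x + y) / 2 - x * y)

def temporal_ne(freqs_t0, freqs_t1, s0, s1, generations):
    """Point estimate of Ne from two temporally spaced samples.

    s0, s1: individuals sampled at each time point; the correction
    subtracts 1/(2*s0) + 1/(2*s1) sampling variance from Fc."""
    fc = sum(fc_per_allele(x, y)
             for x, y in zip(freqs_t0, freqs_t1)) / len(freqs_t0)
    fc_corrected = fc - 1 / (2 * s0) - 1 / (2 * s1)
    return generations / (2 * fc_corrected)

ne = temporal_ne([0.40, 0.60, 0.25], [0.45, 0.55, 0.30],
                 s0=200, s1=200, generations=2)
```

GONe's contribution, per the abstract, is to replace this generational model with one that accounts for overlapping generations via age-specific survival and birth rates.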

  18. Using Replicates in Information Retrieval Evaluation.

    PubMed

    Voorhees, Ellen M; Samarov, Daniel; Soboroff, Ian

    2017-09-01

    This article explores a method for more accurately estimating the main effect of the system in a typical test-collection-based evaluation of information retrieval systems, thus increasing the sensitivity of system comparisons. Randomly partitioning the test document collection allows for multiple tests of a given system and topic (replicates). Bootstrap ANOVA can use these replicates to extract system-topic interactions (something not possible without replicates), yielding a more precise value for the system effect and a narrower confidence interval around that value. Experiments using multiple TREC collections demonstrate that removing the topic-system interactions substantially reduces the confidence intervals around the system effect as well as increases the number of significant pairwise differences found. Further, the method is robust against small changes in the number of partitions used, against variability in the documents that constitute the partitions, and against the measure of effectiveness used to quantify system effectiveness.

  19. HIV antibody seroprevalence among prisoners entering the California correctional system.

    PubMed Central

    Singleton, J. A.; Perkins, C. I.; Trachtenberg, A. I.; Hughes, M. J.; Kizer, K. W.; Ascher, M.

    1990-01-01

    A cross-sectional blind study was conducted in the spring of 1988 to estimate the extent of human immunodeficiency virus (HIV) infection among inmates entering the California correctional system. Of the 6,834 inmates receiving entrance physical examinations during the study period, 6,179 (90.4%) had serum tested for the presence of HIV antibodies after routine blood work was completed and personal identifiers were removed. Seroprevalence was 2.5% (95% confidence interval, 2.1% to 3.0%) among the 5,372 men tested and 3.1% (95% confidence interval, 2.1% to 4.5%) among the 807 women tested. Seroprevalence was more than twice as high among men arrested in the San Francisco Bay Area as in those arrested elsewhere in the state. The regional differences in HIV seroprevalence observed among entering inmates mirror infection rates reported among intravenous drug users from the same regions. PMID:2244374
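The seroprevalence figures above are binomial proportion confidence intervals. A minimal sketch using the Wilson score interval; the study's exact interval method is not stated, and the count of 134 positives below is an assumption consistent with the reported 2.5% of 5,372 men:

```python
import math

# Wilson score interval for a binomial proportion. The positive count is an
# assumption back-calculated from the abstract's 2.5% of 5,372, not study data.

def wilson_ci(positives, n, z=1.96):
    """95% Wilson score confidence interval for positives/n."""
    p = positives / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return centre - half, centre + half

lo, hi = wilson_ci(134, 5372)   # ≈ (0.021, 0.029), close to the reported 2.1% to 3.0%
```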

  20. DNA Barcode Authentication of Saw Palmetto Herbal Dietary Supplements

    PubMed Central

    Little, Damon P.; Jeanson, Marc L.

    2013-01-01

    Herbal dietary supplements made from saw palmetto (Serenoa repens; Arecaceae) fruit are commonly consumed to ameliorate benign prostate hyperplasia. A novel DNA mini–barcode assay to accurately identify [specificity = 1.00 (95% confidence interval = 0.74–1.00); sensitivity = 1.00 (95% confidence interval = 0.66–1.00); n = 31] saw palmetto dietary supplements was designed from a DNA barcode reference library created for this purpose. The mini–barcodes were used to estimate the frequency of mislabeled saw palmetto herbal dietary supplements on the market in the United States of America. Of the 37 supplements examined, amplifiable DNA could be extracted from 34 (92%). Mini–barcode analysis of these supplements demonstrated that 29 (85%) contain saw palmetto and that 2 (6%) supplements contain related species that cannot be legally sold as herbal dietary supplements in the United States of America. The identity of 3 (9%) supplements could not be conclusively determined. PMID:24343362
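When every reference sample is classified correctly, as with the specificity and sensitivity of 1.00 reported above, the Clopper-Pearson exact confidence interval has a simple closed form for its lower bound. The sample sizes passed in below are hypothetical, chosen only to show how lower bounds near the reported 0.74 and 0.66 arise from small n:

```python
# Clopper-Pearson exact lower bound for a proportion when all n trials
# succeed: the lower limit reduces to (alpha/2)**(1/n), upper limit is 1.
# The n values used here are illustrative, not the paper's exact group sizes.

def exact_lower_bound_all_successes(n, alpha=0.05):
    """Lower limit of the exact two-sided (1 - alpha) CI when x = n."""
    return (alpha / 2) ** (1 / n)

lb12 = exact_lower_bound_all_successes(12)   # ≈ 0.74
lb9 = exact_lower_bound_all_successes(9)     # ≈ 0.66
```

This illustrates why a perfect point estimate of 1.00 still carries a wide interval at small sample sizes.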

  1. Effects of traffic-related outdoor air pollution on respiratory illness and mortality in children, taking into account indoor air pollution, in Indonesia.

    PubMed

    Kashima, Saori; Yorifuji, Takashi; Tsuda, Toshihide; Ibrahim, Juliani; Doi, Hiroyuki

    2010-03-01

    To evaluate the effects of outdoor air pollution, taking into account indoor air pollution, in Indonesia. The subjects were 15,242 children from the 2002-2003 Indonesia Demographic and Health Survey. The odds ratios and their confidence intervals for adverse health effects were estimated. Proximity increased the prevalence of acute respiratory infection both in urban and rural areas after adjusting for indoor air pollution. In urban areas, the prevalence of acute upper respiratory infection increased by 1.012 (95% confidence intervals: 1.005 to 1.019) per 2 km proximity to a major road. Adjusted odds ratios tended to be higher in the high indoor air pollution group. Exposure to traffic-related outdoor air pollution would increase adverse health effects after adjusting for indoor air pollution. Furthermore, indoor air pollution could exacerbate the effects of outdoor air pollution.

  2. Arsenic exposure and oral cavity lesions in Bangladesh.

    PubMed

    Syed, Emdadul H; Melkonian, Stephanie; Poudel, Krishna C; Yasuoka, Junko; Otsuka, Keiko; Ahmed, Alauddin; Islam, Tariqul; Parvez, Faruque; Slavkovich, Vesna; Graziano, Joseph H; Ahsan, Habibul; Jimba, Masamine

    2013-01-01

    To evaluate the relationship between arsenic exposure and oral cavity lesions among an arsenic-exposed population in Bangladesh. We carried out an analysis utilizing the baseline data of the Health Effects of Arsenic Exposure Longitudinal Study, which is an ongoing population-based cohort study to investigate health outcomes associated with arsenic exposure via drinking water in Araihazar, Bangladesh. We used multinomial regression models to estimate the risk of oral cavity lesions. Participants with high urinary arsenic levels (286.1 to 5000.0 μg/g) were more likely to develop arsenical lesions of the gums (multinomial odds ratio = 2.90; 95% confidence interval, 1.11 to 7.54), and tongue (multinomial odds ratio = 2.79; 95% confidence interval, 1.51 to 5.15), compared with those with urinary arsenic levels of 7.0 to 134.0 μg/g. Higher level of arsenic exposure was positively associated with increased arsenical lesions of the gums and tongue.

  3. Medium to long-term results of the UNIX uncemented unicompartmental knee replacement.

    PubMed

    Hall, Matthew J; Connell, David A; Morris, Hayden G

    2013-10-01

    We report the first non-designer study of the Unix uncemented unicompartmental knee prosthesis. Eighty-five consecutive UKRs were carried out, with sixty-five available for follow-up. Oxford Knee Scores, the WOMAC questionnaire, and radiological assessment were completed. The mean Oxford Knee Score was thirty-eight and the WOMAC Score was twenty. The overall Kaplan-Meier survival estimate is 76% (95% confidence interval 60%-97%) at 12 years and 88% (95% confidence interval 76%-100%) with aseptic loosening as the endpoint. Radiographic assessment showed lysis in the tibia in 6% of patients with no lysis evident around the central fin. Survivorship is comparable to other published series of UKRs. We suggest the central fin design is key to dissipating large forces throughout the proximal tibia, resulting in low levels of tibial loosening. Level of evidence: IV. Copyright © 2012 Elsevier B.V. All rights reserved.
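The survivorship figures in this record come from a Kaplan-Meier (product-limit) estimator. A minimal sketch of how that estimate is formed, using hypothetical follow-up times and revision events rather than the study's data:

```python
# Kaplan-Meier product-limit sketch: at each event time, survival is
# multiplied by (1 - events / number at risk). Censored observations
# (event flag 0) leave the curve unchanged but reduce the risk set.
# Follow-up times and event flags below are hypothetical.

def kaplan_meier(times, events):
    """Return [(time, survival)] at each time where an event occurs.
    times: follow-up in years; events: 1 = revision, 0 = censored."""
    pairs = sorted(zip(times, events))
    surv, curve, i = 1.0, [], 0
    while i < len(pairs):
        t = pairs[i][0]
        deaths = sum(1 for tt, e in pairs if tt == t and e == 1)
        at_risk = sum(1 for tt, _ in pairs if tt >= t)
        if deaths:
            surv *= 1 - deaths / at_risk
            curve.append((t, surv))
        i += sum(1 for tt, _ in pairs if tt == t)
    return curve

curve = kaplan_meier([2, 4, 4, 6, 8, 10], [1, 0, 1, 0, 0, 0])
# events at years 2 and 4 -> survival steps 5/6 and then 5/6 * 4/5 = 2/3
```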

  4. Reproductive Outcomes Among Women Exposed to a Brominated Flame Retardant In Utero

    PubMed Central

    Small, Chanley M.; Murray, Deanna; Terrell, Metrecia L.; Marcus, Michele

    2014-01-01

    The authors studied 194 women exposed to polybrominated biphenyls (PBB) in utero when their mothers consumed products accidentally contaminated in Michigan in 1973. Generalized estimating equations were used to examine the effect of in utero PBB exposure on adult pregnancy-related outcomes. Compared to those with the lowest exposure (≤1 ppb), those with mid-range (>1–3.16 ppb) and high (≥3.17 ppb) PBB exposure had increased odds of spontaneous abortion with wide confidence intervals (odds ratio [OR] = 2.75, 95% confidence interval [CI] = 0.64–11.79, OR = 4.08, 95% CI = 0.94–17.70; respectively; p for trend = .05). Exposure during infancy to PBB-contaminated breast milk further increased this risk. Time to pregnancy and infertility were not associated with in utero exposure to PBB. Future studies should examine the suggested relationship between spontaneous abortion and other brominated flame retardants. PMID:22014192

  5. SLDAssay: A software package and web tool for analyzing limiting dilution assays.

    PubMed

    Trumble, Ilana M; Allmon, Andrew G; Archin, Nancie M; Rigdon, Joseph; Francis, Owen; Baldoni, Pedro L; Hudgens, Michael G

    2017-11-01

    Serial limiting dilution (SLD) assays are used in many areas of infectious disease related research. This paper presents SLDAssay, a free and publicly available R software package and web tool for analyzing data from SLD assays. SLDAssay computes the maximum likelihood estimate (MLE) for the concentration of target cells, with corresponding exact and asymptotic confidence intervals. Exact and asymptotic goodness of fit p-values, and a bias-corrected (BC) MLE are also provided. No other publicly available software currently implements the BC MLE or the exact methods. For validation of SLDAssay, results from Myers et al. (1994) are replicated. Simulations demonstrate the BC MLE is less biased than the MLE. Additionally, simulations demonstrate that exact methods tend to give better confidence interval coverage and goodness-of-fit tests with lower type I error than the asymptotic methods. Additional advantages of using exact methods are also discussed. Copyright © 2017 Elsevier B.V. All rights reserved.
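The core computation behind the package's point estimate can be sketched under the standard single-hit Poisson model for limiting dilution assays; this omits SLDAssay's bias correction and exact intervals, and the dilutions and well counts below are hypothetical:

```python
import math

# Single-hit Poisson model for a serial limiting dilution assay: a well
# seeded with u cells is positive with probability 1 - exp(-c*u), where c
# is the target-cell concentration. The MLE minimizes the negative
# log-likelihood over c. All assay data here are hypothetical.

def neg_log_likelihood(c, dilutions, wells, positives):
    nll = 0.0
    for u, n, x in zip(dilutions, wells, positives):
        p = 1 - math.exp(-c * u)            # P(well positive) at u cells/well
        if p <= 0 or p >= 1:
            return float("inf")             # reject degenerate probabilities
        nll -= x * math.log(p) + (n - x) * math.log(1 - p)
    return nll

def mle_concentration(dilutions, wells, positives, lo=1e-8, hi=1.0, iters=200):
    """Golden-section search for the c minimizing the (convex) NLL."""
    phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c1, c2 = b - phi * (b - a), a + phi * (b - a)
        f1 = neg_log_likelihood(c1, dilutions, wells, positives)
        f2 = neg_log_likelihood(c2, dilutions, wells, positives)
        if f1 <= f2:
            b = c2
        else:
            a = c1
    return (a + b) / 2

# 12 wells each at 1e6, 2e5, and 4e4 cells per well
c_hat = mle_concentration([1e6, 2e5, 4e4], [12, 12, 12], [11, 6, 2])
```

The log-likelihood is concave in c for this model, so a one-dimensional search suffices; SLDAssay layers exact and asymptotic intervals and a bias-corrected estimate on top of this basic MLE.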

  6. Methods for the behavioral, educational, and social sciences: an R package.

    PubMed

    Kelley, Ken

    2007-11-01

    Methods for the Behavioral, Educational, and Social Sciences (MBESS; Kelley, 2007b) is an open source package for R (R Development Core Team, 2007b), an open source statistical programming language and environment. MBESS implements methods that are not widely available elsewhere, yet are especially helpful for the idiosyncratic techniques used within the behavioral, educational, and social sciences. The major categories of functions are those that relate to confidence interval formation for noncentral t, F, and chi-square parameters, confidence intervals for standardized effect sizes (which require noncentral distributions), and sample size planning issues from the power analytic and accuracy in parameter estimation perspectives. In addition, MBESS contains collections of other functions that should be helpful to substantive researchers and methodologists. MBESS is a long-term project that will continue to be updated and expanded so that important methods can continue to be made available to researchers in the behavioral, educational, and social sciences.

  7. Using Replicates in Information Retrieval Evaluation

    PubMed Central

    VOORHEES, ELLEN M.; SAMAROV, DANIEL; SOBOROFF, IAN

    2018-01-01

    This article explores a method for more accurately estimating the main effect of the system in a typical test-collection-based evaluation of information retrieval systems, thus increasing the sensitivity of system comparisons. Randomly partitioning the test document collection allows for multiple tests of a given system and topic (replicates). Bootstrap ANOVA can use these replicates to extract system-topic interactions—something not possible without replicates—yielding a more precise value for the system effect and a narrower confidence interval around that value. Experiments using multiple TREC collections demonstrate that removing the topic-system interactions substantially reduces the confidence intervals around the system effect as well as increases the number of significant pairwise differences found. Further, the method is robust against small changes in the number of partitions used, against variability in the documents that constitute the partitions, and the measure of effectiveness used to quantify system effectiveness. PMID:29905334

  8. DNA barcode authentication of saw palmetto herbal dietary supplements.

    PubMed

    Little, Damon P; Jeanson, Marc L

    2013-12-17

    Herbal dietary supplements made from saw palmetto (Serenoa repens; Arecaceae) fruit are commonly consumed to ameliorate benign prostate hyperplasia. A novel DNA mini-barcode assay to accurately identify [specificity = 1.00 (95% confidence interval = 0.74-1.00); sensitivity = 1.00 (95% confidence interval = 0.66-1.00); n = 31] saw palmetto dietary supplements was designed from a DNA barcode reference library created for this purpose. The mini-barcodes were used to estimate the frequency of mislabeled saw palmetto herbal dietary supplements on the market in the United States of America. Of the 37 supplements examined, amplifiable DNA could be extracted from 34 (92%). Mini-barcode analysis of these supplements demonstrated that 29 (85%) contain saw palmetto and that 2 (6%) supplements contain related species that cannot be legally sold as herbal dietary supplements in the United States of America. The identity of 3 (9%) supplements could not be conclusively determined.

  9. Prevalence of beryllium sensitization among Department of Defense conventional munitions workers at low risk for exposure.

    PubMed

    Mikulski, Marek A; Sanderson, Wayne T; Leonard, Stephanie A; Lourens, Spencer; Field, R William; Sprince, Nancy L; Fuortes, Laurence J

    2011-03-01

    To estimate the prevalence of beryllium sensitization among former and current Department of Defense workers from a conventional munitions facility. Participants were screened using the Beryllium Lymphocyte Proliferation Test. Those sensitized were offered clinical evaluation for chronic beryllium disease. Eight (1.5%) of 524 screened workers were found sensitized to beryllium. Although the confidence interval was wide, the results suggested a possibly higher risk of sensitization among workers exposed to beryllium by occasional resurfacing of copper-2% beryllium alloy tools compared with workers with the lowest potential exposure (odds ratio = 2.6; 95% confidence interval, 0.23-29.9). The findings from this study suggest that Department of Defense workers with low overall exposure to beryllium had a low prevalence of beryllium sensitization. Sensitization rates might be higher where higher beryllium exposures presumably occurred, although this study lacked sufficient power to confirm this.

  10. An alternative approach to confidence interval estimation for the win ratio statistic.

    PubMed

    Luo, Xiaodong; Tian, Hong; Mohanty, Surya; Tsai, Wei Yann

    2015-03-01

    Pocock et al. (2012, European Heart Journal 33, 176-182) proposed a win ratio approach to analyzing composite endpoints composed of outcomes with different clinical priorities. In this article, we establish a statistical framework for this approach. We derive the null hypothesis and propose a closed-form variance estimator for the win ratio statistic in the all-pairwise-matching situation. Our simulation study shows that the proposed variance estimator performs well regardless of the magnitude of the treatment effect and the type of joint distribution of the outcomes. © 2014, The International Biometric Society.
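The win-counting core of the all-pairwise approach can be sketched as follows. The data, the two-level outcome hierarchy, and the absence of censoring are all simplifying assumptions for illustration, and the paper's closed-form variance estimator is not implemented here:

```python
def pairwise_win_ratio(treat, control):
    """All-pairwise win ratio: compare every treatment patient with every
    control patient on a hierarchy of outcomes (first component first;
    ties fall through to the next component).  Larger values are better.
    Returns total wins divided by total losses."""
    wins = losses = 0
    for t in treat:
        for c in control:
            for t_val, c_val in zip(t, c):
                if t_val > c_val:
                    wins += 1
                    break
                if t_val < c_val:
                    losses += 1
                    break
    return wins / losses

# Hypothetical (survival_time, time_free_of_hospitalization) outcomes:
treat = [(5.0, 2.0), (3.0, 4.0), (6.0, 1.0)]
control = [(2.0, 2.0), (3.0, 1.0), (4.0, 4.0)]
print(pairwise_win_ratio(treat, control))  # 8 wins, 1 loss -> 8.0
```

In real composite endpoints the comparisons must also handle censoring, which is where the statistical framework in the article comes in.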

  11. Estimating the optimal dynamic antipsychotic treatment regime: Evidence from the sequential multiple assignment randomized CATIE Schizophrenia Study

    PubMed Central

    Shortreed, Susan M.; Moodie, Erica E. M.

    2012-01-01

    Treatment of schizophrenia is notoriously difficult and typically requires personalized adaptation of treatment due to lack of efficacy, poor adherence, or intolerable side effects. The Clinical Antipsychotic Trials in Intervention Effectiveness (CATIE) Schizophrenia Study is a sequential multiple assignment randomized trial comparing the typical antipsychotic medication, perphenazine, to several newer atypical antipsychotics. This paper describes the marginal structural modeling method for estimating optimal dynamic treatment regimes and applies the approach to the CATIE Schizophrenia Study. Missing data and valid estimation of confidence intervals are also addressed. PMID:23087488

  12. Bootstrap Confidence Intervals for Ordinary Least Squares Factor Loadings and Correlations in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.; Luo, Shanhong

    2010-01-01

    This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis. Coverage performances of "SE"-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile…

  13. Energy expenditure estimation during daily military routine with body-fixed sensors.

    PubMed

    Wyss, Thomas; Mäder, Urs

    2011-05-01

    The purpose of this study was to develop and validate an algorithm for estimating energy expenditure during the daily military routine on the basis of data collected using body-fixed sensors. First, 8 volunteers completed isolated physical activities according to an established protocol, and the resulting data were used to develop activity-class-specific multiple linear regressions for physical activity energy expenditure on the basis of hip acceleration, heart rate, and body mass as independent variables. Second, the validity of these linear regressions was tested during the daily military routine using indirect calorimetry (n = 12). Volunteers' mean estimated energy expenditure did not significantly differ from the energy expenditure measured with indirect calorimetry (p = 0.898, 95% confidence interval = -1.97 to 1.75 kJ/min). We conclude that the developed activity-class-specific multiple linear regressions applied to the acceleration and heart rate data allow estimation of energy expenditure in 1-minute intervals during daily military routine, with accuracy equal to indirect calorimetry.

  14. Evaluation of line transect sampling based on remotely sensed data from underwater video

    USGS Publications Warehouse

    Bergstedt, R.A.; Anderson, D.R.

    1990-01-01

    We used underwater video in conjunction with the line transect method and a Fourier series estimator to make 13 independent estimates of the density of known populations of bricks lying on the bottom in shallows of Lake Huron. The pooled estimate of density (95.5 bricks per hectare) was close to the true density (89.8 per hectare), and there was no evidence of bias. Confidence intervals for the individual estimates included the true density 85% of the time instead of the nominal 95%. Our results suggest that reliable estimates of the density of objects on a lake bed can be obtained by the use of remote sensing and line transect sampling theory.

  15. Regression Equations for Estimating Flood Flows at Selected Recurrence Intervals for Ungaged Streams in Pennsylvania

    USGS Publications Warehouse

    Roland, Mark A.; Stuckey, Marla H.

    2008-01-01

    Regression equations were developed for estimating flood flows at selected recurrence intervals for ungaged streams in Pennsylvania with drainage areas less than 2,000 square miles. These equations were developed utilizing peak-flow data from 322 streamflow-gaging stations within Pennsylvania and surrounding states. All stations used in the development of the equations had 10 or more years of record and included active and discontinued continuous-record as well as crest-stage partial-record stations. The state was divided into four regions, and regional regression equations were developed to estimate the 2-, 5-, 10-, 50-, 100-, and 500-year recurrence-interval flood flows. The equations were developed by means of a regression analysis that utilized basin characteristics and flow data associated with the stations. Significant explanatory variables at the 95-percent confidence level for one or more regression equations included the following basin characteristics: drainage area; mean basin elevation; and the percentages of carbonate bedrock, urban area, and storage within a basin. The regression equations can be used to predict the magnitude of flood flows for specified recurrence intervals for most streams in the state; however, they are not valid for streams with drainage areas generally greater than 2,000 square miles or with substantial regulation, diversion, or mining activity within the basin. Estimates of flood-flow magnitude and frequency for streamflow-gaging stations substantially affected by upstream regulation are also presented.

  16. Opioid analgesia in mechanically ventilated children: results from the multicenter Measuring Opioid Tolerance Induced by Fentanyl study.

    PubMed

    Anand, Kanwaljeet J S; Clark, Amy E; Willson, Douglas F; Berger, John; Meert, Kathleen L; Zimmerman, Jerry J; Harrison, Rick; Carcillo, Joseph A; Newth, Christopher J L; Bisping, Stephanie; Holubkov, Richard; Dean, J Michael; Nicholson, Carol E

    2013-01-01

    To examine the clinical factors associated with increased opioid dose among mechanically ventilated children in the pediatric intensive care unit. Prospective, observational study with 100% accrual of eligible patients. Seven pediatric intensive care units from tertiary-care children's hospitals in the Collaborative Pediatric Critical Care Research Network. Four hundred nineteen children treated with morphine or fentanyl infusions. None. Data on opioid use, concomitant therapy, demographic and explanatory variables were collected. Significant variability occurred in clinical practices, with up to 100-fold differences in baseline opioid doses, average daily or total doses, or peak infusion rates. Opioid exposure for 7 or 14 days required doubling of the daily opioid dose in 16% patients (95% confidence interval 12%-19%) and 20% patients (95% confidence interval 16%-24%), respectively. Among patients receiving opioids for longer than 3 days (n = 225), this occurred in 28% (95% confidence interval 22%-33%) and 35% (95% confidence interval 29%-41%) by 7 or 14 days, respectively. Doubling of the opioid dose was more likely to occur following opioid infusions for 7 days or longer (odds ratio 7.9, 95% confidence interval 4.3-14.3; p < 0.001) or co-therapy with midazolam (odds ratio 5.6, 95% confidence interval 2.4-12.9; p < 0.001), and it was less likely to occur if morphine was used as the primary opioid (vs. fentanyl) (odds ratio 0.48, 95% confidence interval 0.25-0.92; p = 0.03), for patients receiving higher initial doses (odds ratio 0.96, 95% confidence interval 0.95-0.98; p < 0.001), or if patients had prior pediatric intensive care unit admissions (odds ratio 0.37, 95% confidence interval 0.15-0.89; p = 0.03). Mechanically ventilated children require increasing opioid doses, often associated with prolonged opioid exposure or the need for additional sedation. 
Efforts to reduce prolonged opioid exposure and clinical practice variation may prevent the complications of opioid therapy.

  17. Funding policies and postabortion long-acting reversible contraception: results from a cluster randomized trial.

    PubMed

    Rocca, Corinne H; Thompson, Kirsten M J; Goodman, Suzan; Westhoff, Carolyn L; Harper, Cynthia C

    2016-06-01

    Almost one-half of women having an abortion in the United States have had a previous procedure, which highlights a failure to provide adequate preventive care. Provision of intrauterine devices and implants, which have high upfront costs, can be uniquely challenging in the abortion care setting. We conducted a study of a clinic-wide training intervention on long-acting reversible contraception and examined the effect of the intervention, insurance coverage, and funding policies on the use of long-acting contraceptives after an abortion. This subanalysis of a cluster, randomized trial examines data from the 648 patients who had undergone an abortion who were recruited from 17 reproductive health centers across the United States. The trial followed participants 18-25 years old who did not desire pregnancy for a year. We measured the effect of the intervention, health insurance, and funding policies on contraceptive outcomes, which included intrauterine device and implant counseling and selection at the abortion visit, with the use of logistic regression with generalized estimating equations for clustering. We used survival analysis to model the actual initiation of these methods over 1 year. Women who obtained abortion care at intervention sites were more likely to report intrauterine device and implant counseling (70% vs 41%; adjusted odds ratio, 3.83; 95% confidence interval, 2.37-6.19) and the selection of these methods (36% vs 21%; adjusted odds ratio, 2.11; 95% confidence interval, 1.39-3.21). However, the actual initiation of methods was similar between study arms (22/100 woman-years each; adjusted hazard ratio, 0.88; 95% confidence interval, 0.51-1.51). Health insurance and funding policies were important for the initiation of intrauterine devices and implants. Compared with uninsured women, those women with public health insurance had a far higher initiation rate (adjusted hazard ratio, 2.18; 95% confidence interval, 1.31-3.62). 
Women at sites that provide state Medicaid enrollees abortion coverage also had a higher initiation rate (adjusted hazard ratio, 1.73; 95% confidence interval, 1.04-2.88), as did those at sites with state mandates for private health insurance to cover contraception (adjusted hazard ratio, 1.80; 95% confidence interval, 1.06-3.07). Few of the women with private insurance used it to pay for the abortion (28%), but those who did initiated long-acting contraceptive methods at almost twice the rate as women who paid for it themselves or with donated funds (adjusted hazard ratio, 1.94; 95% confidence interval, 1.10-3.43). The clinic-wide training increased long-acting reversible contraceptive counseling and selection but did not change initiation for abortion patients. Long-acting method use after abortion was associated strongly with funding. Restrictions on the coverage of abortion and contraceptives in abortion settings prevent the initiation of desired long-acting methods. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Mutation and Evolutionary Rates in Adélie Penguins from the Antarctic

    PubMed Central

    Millar, Craig D.; Dodd, Andrew; Anderson, Jennifer; Gibb, Gillian C.; Ritchie, Peter A.; Baroni, Carlo; Woodhams, Michael D.; Hendy, Michael D.; Lambert, David M.

    2008-01-01

    Precise estimations of molecular rates are fundamental to our understanding of the processes of evolution. In principle, mutation and evolutionary rates for neutral regions of the same species are expected to be equal. However, a number of recent studies have shown that mutation rates estimated from pedigree material are much faster than evolutionary rates measured over longer time periods. To resolve this apparent contradiction, we have examined the hypervariable region (HVR I) of the mitochondrial genome using families of Adélie penguins (Pygoscelis adeliae) from the Antarctic. We sequenced 344 bps of the HVR I from penguins comprising 508 families with 915 chicks, together with both their parents. All of the 62 germline heteroplasmies that we detected in mothers were also detected in their offspring, consistent with maternal inheritance. These data give an estimated mutation rate (μ) of 0.55 mutations/site/Myrs (HPD 95% confidence interval of 0.29–0.88 mutations/site/Myrs) after accounting for the persistence of these heteroplasmies and the sensitivity of current detection methods. In comparison, the rate of evolution (k) of the same HVR I region, determined using DNA sequences from 162 known age sub-fossil bones spanning a 37,000-year period, was 0.86 substitutions/site/Myrs (HPD 95% confidence interval of 0.53 and 1.17). Importantly, the latter rate is not statistically different from our estimate of the mutation rate. These results are in contrast to the view that molecular rates are time dependent. PMID:18833304

  19. Pre-Bombing Population Density in Hiroshima and Nagasaki: Its Measurement and Impact on Radiation Risk Estimates in the Life Span Study of Atomic Bomb Survivors.

    PubMed

    French, Benjamin; Funamoto, Sachiyo; Sugiyama, Hiromi; Sakata, Ritsu; Cologne, John; Cullings, Harry M; Mabuchi, Kiyohiko; Preston, Dale L

    2018-03-29

    In the Life Span Study of atomic bomb survivors, differences in urbanicity between high-dose and low-dose survivors could confound the association between radiation dose and adverse outcomes. We obtained data on the pre-bombing population distribution in Hiroshima and Nagasaki, and quantified the impact of adjustment for population density on radiation risk estimates for mortality (1950-2003) and incident solid cancer (1958-2009). Population density ranged from 4,671-14,378 and 5,748-19,149 people/km2 in urban regions of Hiroshima and Nagasaki, respectively. Radiation risk estimates for solid cancer mortality were attenuated by 5.1%, but those for all-cause mortality and incident solid cancer were unchanged. There was no overall association between population density and adverse outcomes, but there was evidence that the association between density and mortality differed by age at exposure. Among survivors 10-14 years old in 1945, there was a positive association between population density and risk of all-cause mortality (relative risk, 1.053 per 5,000 people/km2 increase, 95% confidence interval: 1.027, 1.079) and solid cancer mortality (relative risk, 1.069 per 5,000 people/km2 increase, 95% confidence interval: 1.025, 1.115). Our results suggest that radiation risk estimates from the Life Span Study are not sensitive to unmeasured confounding by urban-rural differences.

  20. An evaluation of inferential procedures for adaptive clinical trial designs with pre-specified rules for modifying the sample size.

    PubMed

    Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S

    2014-09-01

    Many papers have introduced adaptive clinical trial methods that allow modifications to the sample size based on interim estimates of treatment effect. There has been extensive commentary on type I error control and efficiency considerations, but little research on estimation after an adaptive hypothesis test. We evaluate the reliability and precision of different inferential procedures in the presence of an adaptive design with pre-specified rules for modifying the sampling plan. We extend group sequential orderings of the outcome space based on the stage at stopping, likelihood ratio statistic, and sample mean to the adaptive setting in order to compute median-unbiased point estimates, exact confidence intervals, and P-values uniformly distributed under the null hypothesis. The likelihood ratio ordering is found to average shorter confidence intervals and produce higher probabilities of P-values below important thresholds than alternative approaches. The bias adjusted mean demonstrates the lowest mean squared error among candidate point estimates. A conditional error-based approach in the literature has the benefit of being the only method that accommodates unplanned adaptations. We compare the performance of this and other methods in order to quantify the cost of failing to plan ahead in settings where adaptations could realistically be pre-specified at the design stage. We find the cost to be meaningful for all designs and treatment effects considered, and to be substantial for designs frequently proposed in the literature. © 2014, The International Biometric Society.

  1. Consumers' estimation of calorie content at fast food restaurants: cross sectional observational study.

    PubMed

    Block, Jason P; Condon, Suzanne K; Kleinman, Ken; Mullen, Jewel; Linakis, Stephanie; Rifas-Shiman, Sheryl; Gillman, Matthew W

    2013-05-23

    To investigate estimation of calorie (energy) content of meals from fast food restaurants in adults, adolescents, and school age children. Cross sectional study of repeated visits to fast food restaurant chains. 89 fast food restaurants in four cities in New England, United States: McDonald's, Burger King, Subway, Wendy's, KFC, Dunkin' Donuts. 1877 adults and 330 school age children visiting restaurants at dinnertime (evening meal) in 2010 and 2011; 1178 adolescents visiting restaurants after school or at lunchtime in 2010 and 2011. Estimated calorie content of purchased meals. Among adults, adolescents, and school age children, the mean actual calorie content of meals was 836 calories (SD 465), 756 calories (SD 455), and 733 calories (SD 359), respectively. A calorie is equivalent to 4.18 kJ. Compared with the actual figures, participants underestimated calorie content by means of 175 calories (95% confidence interval 145 to 205), 259 calories (227 to 291), and 175 calories (108 to 242), respectively. In multivariable linear regression models, underestimation of calorie content increased substantially as the actual meal calorie content increased. Adults and adolescents eating at Subway estimated 20% and 25% lower calorie content than McDonald's diners (relative change 0.80, 95% confidence interval 0.66 to 0.96; 0.75, 0.57 to 0.99). People eating at fast food restaurants underestimate the calorie content of meals, especially large meals. Education of consumers through calorie menu labeling and other outreach efforts might reduce the large degree of underestimation.

  2. Site index curves for white fir in the southwestern United States developed using a guide curve method

    Treesearch

    Robert L. Mathiasen; William K. Olsen; Carleton B. Edminster

    2006-01-01

    Site index curves for white fir (Abies concolor) in Arizona, New Mexico, and southwestern Colorado were developed using height-age measurements and an estimated guide curve and 95% confidence intervals for individual predictions. The curves were developed using height-age data for 1,048 white firs from 263 study sites distributed across eight...

  3. A Comparison of Single Sample and Bootstrap Methods to Assess Mediation in Cluster Randomized Trials

    ERIC Educational Resources Information Center

    Pituch, Keenan A.; Stapleton, Laura M.; Kang, Joo Youn

    2006-01-01

    A Monte Carlo study examined the statistical performance of single sample and bootstrap methods that can be used to test and form confidence interval estimates of indirect effects in two cluster randomized experimental designs. The designs were similar in that they featured random assignment of clusters to one of two treatment conditions and…

  4. Standard Errors and Confidence Intervals from Bootstrapping for Ramsay-Curve Item Response Theory Model Item Parameters

    ERIC Educational Resources Information Center

    Gu, Fei; Skorupski, William P.; Hoyle, Larry; Kingston, Neal M.

    2011-01-01

    Ramsay-curve item response theory (RC-IRT) is a nonparametric procedure that estimates the latent trait using splines, and no distributional assumption about the latent trait is required. For item parameters of the two-parameter logistic (2-PL), three-parameter logistic (3-PL), and polytomous IRT models, RC-IRT can provide more accurate estimates…

  5. HDL-cholesterol and the incidence of lung cancer in the Atherosclerosis Risk in Communities (ARIC) study

    PubMed Central

    Kucharska-Newton, Anna M.; Rosamond, Wayne D.; Schroeder, Jane C.; McNeill, Ann Marie; Coresh, Josef; Folsom, Aaron R.

    2008-01-01

    This study prospectively examined the association of baseline plasma HDL-cholesterol levels with incidence of lung cancer in 14,547 members of the Atherosclerosis Risk in Communities (ARIC) cohort. There were 259 cases of incident lung cancer identified during follow-up from 1987 through 2000. Results of this study indicated a relatively weak inverse association of HDL-cholesterol with lung cancer that was dependent on smoking status. The hazard ratio of lung cancer incidence in relation to low HDL-cholesterol, adjusted for race, gender, exercise, alcohol consumption, body mass index, triglycerides, age, and cigarette pack-years of smoking, was 1.45 (95% confidence interval 1.10, 1.92). This association was observed among former smokers (hazard ratio: 1.77, 95% confidence interval 1.05, 2.97), but not current smokers. The number of cases among never smokers in this study was too small (n=13) for meaningful interpretation of effect estimates. Excluding cases occurring within five years of baseline did not appreciably change the point estimates, suggesting lack of reverse causality. The modest association of low plasma HDL-cholesterol with greater incident lung cancer observed in this study is in agreement with existing case-control studies. PMID:18342390

  6. An evaluation of the NASA/GSFC Barnes field spectral reflectometer model 14-758, using signal/noise as a measure of utility

    NASA Astrophysics Data System (ADS)

    Bell, R.; Labovitz, M. L.

    1982-07-01

    A Barnes field spectral reflectometer which collected information in 373 channels covering the region from 0.4 to 2.5 micrometers was assessed for signal utility. A band was judged unsatisfactory if the probability was 0.1 or greater that its signal-to-noise ratio was less than eight to one. For each of the bands, the probability of a noisy observation was estimated under a binomial assumption from a set of field crop spectra covering an entire growing season. A 95% confidence interval was calculated about each estimate, and bands whose lower confidence limits were greater than 0.1 were judged unacceptable. As a result, 283 channels were deemed statistically satisfactory. Excluded channels correspond to portions of the electromagnetic spectrum (EMS) where high atmospheric absorption and filter wheel overlap occur. In addition, the analyses uncovered intervals of unsatisfactory detection capability within the blue, red, and far infrared regions of vegetation spectra. From the results of the analysis, it was recommended that 90 channels monitored by the instrument under consideration be eliminated from future studies. These channels are tabulated and discussed.

  7. Efficacy of a Clinic-Based Safer Sex Program for Human Immunodeficiency Virus-Uninfected and Human Immunodeficiency Virus-Infected Young Black Men Who Have Sex With Men: A Randomized Controlled Trial.

    PubMed

    Crosby, Richard A; Mena, Leandro; Salazar, Laura F; Hardin, James W; Brown, Tim; Vickers Smith, Rachel

    2018-03-01

    To test the efficacy of a single-session, clinic-based intervention designed to promote condom use among young black men who have sex with men (YBMSM). Six hundred YBMSM were enrolled in a randomized controlled trial, using a 12-month observation period. An intent-to-treat analysis was performed, with multiple imputation for missing data. Compared with the reference group, human immunodeficiency virus (HIV)-infected men in the intervention group had 64% greater odds of reporting consistent condom use for anal receptive sex over 12 months (estimated odds ratio, 1.64; 95% confidence interval, 1.23-2.17, P = 0.001). Also, compared with the reference group, HIV-uninfected men in the intervention group had more than twice the odds of reporting consistent condom use for anal receptive sex over 12 months (estimated odds ratio, 2.14; 95% confidence interval, 1.74-2.63, P < 0.001). Significant intervention effects relative to incident sexually transmitted diseases were not observed. A single-session, clinic-based, intervention may help protect HIV-uninfected YBMSM against HIV acquisition and HIV-infected YBMSM from transmitting the virus to insertive partners.

  8. Between-Batch Pharmacokinetic Variability Inflates Type I Error Rate in Conventional Bioequivalence Trials: A Randomized Advair Diskus Clinical Trial.

    PubMed

    Burmeister Getz, E; Carroll, K J; Mielke, J; Benet, L Z; Jones, B

    2017-03-01

    We previously demonstrated pharmacokinetic differences among manufacturing batches of a US Food and Drug Administration (FDA)-approved dry powder inhalation product (Advair Diskus 100/50) large enough to establish between-batch bio-inequivalence. Here, we provide independent confirmation of pharmacokinetic bio-inequivalence among Advair Diskus 100/50 batches, and quantify residual and between-batch variance component magnitudes. These variance estimates are used to consider the type I error rate of the FDA's current two-way crossover design recommendation. When between-batch pharmacokinetic variability is substantial, the conventional two-way crossover design cannot accomplish the objectives of FDA's statistical bioequivalence test (i.e., cannot accurately estimate the test/reference ratio and associated confidence interval). The two-way crossover, which ignores between-batch pharmacokinetic variability, yields an artificially narrow confidence interval on the product comparison. The unavoidable consequence is type I error rate inflation, to ∼25%, when between-batch pharmacokinetic variability is nonzero. This risk of a false bioequivalence conclusion is substantially higher than asserted by regulators as acceptable consumer risk (5%). © 2016 The Authors Clinical Pharmacology & Therapeutics published by Wiley Periodicals, Inc. on behalf of The American Society for Clinical Pharmacology and Therapeutics.

  9. A Statistical Method for Synthesizing Mediation Analyses Using the Product of Coefficient Approach Across Multiple Trials

    PubMed Central

    Huang, Shi; MacKinnon, David P.; Perrino, Tatiana; Gallo, Carlos; Cruden, Gracelyn; Brown, C Hendricks

    2016-01-01

    Mediation analysis often requires larger sample sizes than main effect analysis to achieve the same statistical power. Combining results across similar trials may be the only practical option for increasing statistical power for mediation analysis in some situations. In this paper, we propose a method to estimate: 1) marginal means for mediation path a, the relation of the independent variable to the mediator; 2) marginal means for path b, the relation of the mediator to the outcome, across multiple trials; and 3) the between-trial level variance-covariance matrix based on a bivariate normal distribution. We present the statistical theory and an R computer program to combine regression coefficients from multiple trials to estimate a combined mediated effect and confidence interval under a random effects model. Values of coefficients a and b, along with their standard errors from each trial are the input for the method. This marginal likelihood based approach with Monte Carlo confidence intervals provides more accurate inference than the standard meta-analytic approach. We discuss computational issues, apply the method to two real-data examples and make recommendations for the use of the method in different settings. PMID:28239330
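The Monte Carlo confidence interval described here can be sketched in a few lines: simulate the two path coefficients from normal distributions centered at their estimates with the reported standard errors, and take empirical quantiles of the simulated products. The inputs below are hypothetical, and the between-trial covariance structure of the full method is omitted:

```python
import random

def monte_carlo_ci_product(a, se_a, b, se_b, n_sim=100_000, alpha=0.05, seed=1):
    """Monte Carlo CI for the mediated effect a*b: draw the two
    coefficients from independent normals at their estimated values and
    take empirical quantiles of the products."""
    rng = random.Random(seed)
    products = sorted(rng.gauss(a, se_a) * rng.gauss(b, se_b)
                      for _ in range(n_sim))
    lo = products[int(n_sim * alpha / 2)]
    hi = products[int(n_sim * (1 - alpha / 2)) - 1]
    return lo, hi

# Hypothetical path coefficients and standard errors from a single trial:
lo, hi = monte_carlo_ci_product(a=0.4, se_a=0.1, b=0.5, se_b=0.1)
print(f"mediated effect 0.20, 95% Monte Carlo CI ({lo:.3f}, {hi:.3f})")
```

Because the product of two normals is skewed, these simulated quantiles are generally asymmetric around a*b, which is the advantage over a symmetric normal-theory interval.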

  10. An evaluation of the NASA/GSFC Barnes field spectral reflectometer model 14-758, using signal/noise as a measure of utility

    NASA Technical Reports Server (NTRS)

    Bell, R.; Labovitz, M. L.

    1982-01-01

    A Barnes field spectral reflectometer which collected information in 373 channels covering the region from 0.4 to 2.5 micrometers was assessed for signal utility. A band was judged unsatisfactory if the probability was 0.1 or greater that its signal-to-noise ratio was less than eight to one. For each of the bands, the probability of a noisy observation was estimated under a binomial assumption from a set of field crop spectra covering an entire growing season. A 95% confidence interval was calculated about each estimate, and bands whose lower confidence limits were greater than 0.1 were judged unacceptable. As a result, 283 channels were deemed statistically satisfactory. Excluded channels correspond to portions of the electromagnetic spectrum (EMS) where high atmospheric absorption and filter wheel overlap occur. In addition, the analyses uncovered intervals of unsatisfactory detection capability within the blue, red, and far infrared regions of vegetation spectra. From the results of the analysis, it was recommended that 90 channels monitored by the instrument under consideration be eliminated from future studies. These channels are tabulated and discussed.

  11. Prediction of cardiovascular outcome by estimated glomerular filtration rate and estimated creatinine clearance in the high-risk hypertension population of the VALUE trial.

    PubMed

    Ruilope, Luis M; Zanchetti, Alberto; Julius, Stevo; McInnes, Gordon T; Segura, Julian; Stolt, Pelle; Hua, Tsushung A; Weber, Michael A; Jamerson, Ken

    2007-07-01

    Reduced renal function is predictive of poor cardiovascular outcomes but the predictive value of different measures of renal function is uncertain. We compared the value of estimated creatinine clearance, using the Cockcroft-Gault formula, with that of estimated glomerular filtration rate (GFR), using the Modification of Diet in Renal Disease (MDRD) formula, as predictors of cardiovascular outcome in 15,245 high-risk hypertensive participants in the Valsartan Antihypertensive Long-term Use Evaluation (VALUE) trial. For the primary end-point, the three secondary end-points and for all-cause death, outcomes were compared for individuals with baseline estimated creatinine clearance and estimated GFR < 60 ml/min and > or = 60 ml/min using hazard ratios and 95% confidence intervals. Coronary heart disease, left ventricular hypertrophy, age, sex and treatment effects were included as covariates in the model. For each end-point considered, the risk in individuals with poor renal function at baseline was greater than in those with better renal function. Estimated creatinine clearance (Cockcroft-Gault) was significantly predictive only of all-cause death [hazard ratio = 1.223, 95% confidence interval (CI) = 1.076-1.390; P = 0.0021] whereas estimated GFR was predictive of all outcomes except stroke. Hazard ratios (95% CIs) for estimated GFR were: primary cardiac end-point, 1.497 (1.332-1.682), P < 0.0001; myocardial infarction, 1.501 (1.254-1.796), P < 0.0001; congestive heart failure, 1.699 (1.435-2.013), P < 0.0001; stroke, 1.152 (0.952-1.394), P = 0.1452; and all-cause death, 1.231 (1.098-1.380), P = 0.0004. These results indicate that estimated glomerular filtration rate calculated with the MDRD formula is more informative than estimated creatinine clearance (Cockcroft-Gault) in the prediction of cardiovascular outcomes.
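The two renal-function estimators compared in this trial are standard published formulas. A sketch of both follows; the MDRD version shown uses the 186 coefficient (an IDMS-traceable variant uses 175), and the record does not state which variant the trial used, so treat the constants as assumptions:

```python
def cockcroft_gault(age, weight_kg, scr_mg_dl, female):
    """Estimated creatinine clearance (ml/min), Cockcroft-Gault formula."""
    crcl = (140 - age) * weight_kg / (72 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

def mdrd_egfr(age, scr_mg_dl, female, black=False):
    """Estimated GFR (ml/min/1.73 m^2), 4-variable MDRD formula
    (186-coefficient version)."""
    egfr = 186 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# A hypothetical 70-year-old, 80 kg man with serum creatinine 1.2 mg/dl:
print(round(cockcroft_gault(70, 80, 1.2, female=False), 1))
print(round(mdrd_egfr(70, 1.2, female=False), 1))
```

Note the structural difference the trial exploits: Cockcroft-Gault depends on body weight while MDRD does not, so the two measures can classify the same patient on different sides of the 60 ml/min threshold.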

  12. Risk analysis in cohort studies with heterogeneous strata. A global chi2-test for dose-response relationship, generalizing the Mantel-Haenszel procedure.

    PubMed

    Ahlborn, W; Tuz, H J; Uberla, K

    1990-03-01

    In cohort studies the Mantel-Haenszel estimator OR_MH is computed from sample data and is used as a point estimator of relative risk. Test-based confidence intervals are estimated with the help of the asymptotically chi-squared distributed MH statistic chi2_MHS. The Mantel-extension chi-squared is used as a test statistic for a dose-response relationship. Both test statistics (the Mantel-Haenszel chi as well as the Mantel-extension chi) assume homogeneity of risk across strata, which is rarely present. An extended nonparametric statistic proposed by Terpstra, based on the Mann-Whitney statistics, likewise assumes homogeneity of risk across strata. We have earlier defined four risk measures RR_kj (k = 1, 2, ..., 4) in the population and considered their estimates and the corresponding asymptotic distributions. In order to overcome the homogeneity assumption we use the delta method to obtain "test-based" confidence intervals. Because the four risk measures RR_kj are presented as functions of four weights g_ik, we consequently give the asymptotic variances of these risk estimators also as functions of the weights g_ik in closed form. Approximations to these variances are given. For testing a dose-response relationship we propose a new class of chi2(1)-distributed global measures G_k and the corresponding global chi2-test. In contrast to the Mantel-extension chi, homogeneity of risk across strata need not be assumed. These global test statistics are of the Wald type for composite hypotheses.(ABSTRACT TRUNCATED AT 250 WORDS)
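    The classical machinery this abstract builds on (the stratified OR_MH, the MH chi statistic, and Miettinen's test-based interval OR_MH^(1 ± z/chi)) can be sketched as follows; this is a minimal illustration of the standard procedure, not the paper's generalized method, and it is unstable when chi is near zero:

    ```python
    import math

    def mantel_haenszel(strata):
        """strata: list of 2x2 tables (a, b, c, d) = exposed cases, exposed
        non-cases, unexposed cases, unexposed non-cases.
        Returns (OR_MH, MH chi-squared, 95% test-based confidence interval)."""
        num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
        den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
        or_mh = num / den
        # Mantel-Haenszel chi statistic (no continuity correction)
        obs = sum(a for a, b, c, d in strata)
        exp = sum((a + b) * (a + c) / (a + b + c + d) for a, b, c, d in strata)
        var = sum((a + b) * (c + d) * (a + c) * (b + d)
                  / ((a + b + c + d) ** 2 * (a + b + c + d - 1))
                  for a, b, c, d in strata)
        chi = (obs - exp) / math.sqrt(var)
        # Miettinen's test-based limits: OR_MH ** (1 +/- 1.96 / chi)
        lo = or_mh ** (1.0 - 1.96 / chi)
        hi = or_mh ** (1.0 + 1.96 / chi)
        return or_mh, chi ** 2, (min(lo, hi), max(lo, hi))
    ```

    The paper's point is that this interval inherits the homogeneity-of-risk assumption; its delta-method intervals and global G_k tests drop that assumption.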

  13. The Andrews’ Principles of Risk, Need, and Responsivity as Applied in Drug Abuse Treatment Programs: Meta-Analysis of Crime and Drug Use Outcomes

    PubMed Central

    Prendergast, Michael L.; Pearson, Frank S.; Podus, Deborah; Hamilton, Zachary K.; Greenwell, Lisa

    2013-01-01

    Objectives: The purpose of the present meta-analysis was to answer the question: Can the Andrews principles of risk, needs, and responsivity, originally developed for programs that treat offenders, be extended to programs that treat drug abusers? Methods: Drawing from a dataset that included 243 independent comparisons, we conducted random-effects meta-regression and ANOVA-analog meta-analyses to test the Andrews principles by averaging crime and drug use outcomes over a diverse set of programs for drug abuse problems. Results: For crime outcomes, in the meta-regressions the point estimates for each of the principles were substantial, consistent with previous studies of the Andrews principles. There was also a substantial point estimate for programs exhibiting a greater number of the principles. However, almost all of the 95% confidence intervals included the zero point. For drug use outcomes, in the meta-regressions the point estimates for each of the principles were approximately zero; however, the point estimate for programs exhibiting a greater number of the principles was somewhat positive. All of the estimates for the drug use outcomes had confidence intervals that included the zero point. Conclusions: This study supports previous findings from primary research studies targeting the Andrews principles that those principles are effective in reducing crime outcomes, here in meta-analytic research focused on drug treatment programs. By contrast, programs that follow the principles appear to have very little effect on drug use outcomes. Primary research studies that experimentally test the Andrews principles in drug treatment programs are recommended. PMID:24058325
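    The "does the 95% confidence interval include the zero point?" check that drives these conclusions can be illustrated with a simpler relative of the models used here: a DerSimonian-Laird random-effects pooled estimate (the paper fit meta-regressions; this sketch and its data are illustrative only):

    ```python
    import math

    def dersimonian_laird(effects, variances):
        """Random-effects pooled effect with a 95% CI (DerSimonian-Laird).
        Returns (pooled estimate, (lower, upper), whether the CI includes 0)."""
        w = [1.0 / v for v in variances]
        fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
        # Cochran's Q and the method-of-moments between-study variance tau^2
        q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
        df = len(effects) - 1
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - df) / c)
        w_star = [1.0 / (v + tau2) for v in variances]
        pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
        se = math.sqrt(1.0 / sum(w_star))
        lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
        return pooled, (lo, hi), (lo <= 0.0 <= hi)
    ```

    An interval that straddles zero, as most of the intervals above did, means the data are compatible with no effect even when the point estimate is substantial.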

  14. Vascular Disease, ESRD, and Death: Interpreting Competing Risk Analyses

    PubMed Central

    Coresh, Josef; Segev, Dorry L.; Kucirka, Lauren M.; Tighiouart, Hocine; Sarnak, Mark J.

    2012-01-01

    Background and objectives: Vascular disease, a common condition in CKD, is a risk factor for mortality and ESRD. Optimal patient care requires accurate estimation and ordering of these competing risks. Design, setting, participants, & measurements: This is a prospective cohort study of screened (n=885) and randomized participants (n=837) in the Modification of Diet in Renal Disease study (original study enrollment, 1989–1992), evaluating the association of vascular disease with ESRD and pre-ESRD mortality using standard survival analysis and competing risk regression. Results: The method of analysis resulted in markedly different estimates. Cumulative incidence by standard analysis (censoring at the competing event) implied that, with vascular disease, the 15-year incidence was 66% and 51% for ESRD and pre-ESRD death, respectively. A more accurate representation of absolute risk was estimated with competing risk regression: 15-year incidence was 54% and 29% for ESRD and pre-ESRD death, respectively. For the association of vascular disease with pre-ESRD death, estimates of relative risk by the two methods were similar (standard survival analysis adjusted hazard ratio, 1.63; 95% confidence interval, 1.20–2.20; competing risk regression adjusted subhazard ratio, 1.57; 95% confidence interval, 1.15–2.14). In contrast, the hazard and subhazard ratios differed substantially for other associations, such as GFR and pre-ESRD mortality. Conclusions: When competing events exist, absolute risk is better estimated using competing risk regression, but etiologic associations by this method must be carefully interpreted. The presence of vascular disease in CKD decreases the likelihood of survival to ESRD, independent of age and other risk factors. PMID:22859747

  15. Burden of Dengue Infection and Disease in a Pediatric Cohort in Urban Sri Lanka

    PubMed Central

    Tissera, Hasitha; Amarasinghe, Ananda; De Silva, Aruna Dharshan; Kariyawasam, Pradeep; Corbett, Kizzmekia S.; Katzelnick, Leah; Tam, Clarence; Letson, G. William; Margolis, Harold S.; de Silva, Aravinda M.

    2014-01-01

    Dengue is the most significant arthropod-borne viral infection of humans. Persons infected with dengue viruses (DENV) have subclinical or clinically apparent infections ranging from undifferentiated fever to dengue hemorrhagic fever/shock syndrome. Although recent studies estimated that the Indian subcontinent has the greatest burden of DENV infection and disease worldwide, we do not have reliable, population-based estimates of the incidence of infection and disease in this region. The goal of this study was to follow up a cohort of 800 children living in a heavily urbanized area of Colombo, Sri Lanka to obtain accurate estimates of the incidence of DENV infection and disease. Annual blood samples were obtained from all children to estimate dengue seroprevalence at enrollment and to identify children exposed to new DENV infections during the study year. Blood was also obtained from any child in whom fever developed over the course of the study year to identify clinically apparent DENV infections. At enrollment, dengue seroprevalence was 53.07%, which indicated high transmission in this population. Over the study year, the incidences of DENV infection and disease were 8.39 (95% confidence interval = 6.56–10.53) and 3.38 (95% confidence interval = 2.24–4.88), respectively, per 100 children per year. The ratio of clinically inapparent to apparent infections was 1.48. These results will be useful for obtaining more accurate estimates of the burden of dengue in the region and for making decisions about testing and introduction of vaccines. PMID:24865684
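    Incidence-rate intervals of this per-100-child-year form are commonly computed from a Poisson event count. A sketch using Byar's approximation (the event count below is an illustrative reconstruction, not a figure taken from the paper):

    ```python
    import math

    def poisson_rate_ci(events, person_years, per=100.0, z=1.96):
        """Approximate 95% CI for an incidence rate via Byar's method,
        scaled 'per' person-years. Returns (rate, (lower, upper))."""
        x = events
        lower = x * (1.0 - 1.0 / (9.0 * x) - z / (3.0 * math.sqrt(x))) ** 3
        upper = (x + 1) * (1.0 - 1.0 / (9.0 * (x + 1))
                           + z / (3.0 * math.sqrt(x + 1))) ** 3
        rate = x / person_years * per
        return rate, (lower / person_years * per, upper / person_years * per)
    ```

    For example, a hypothetical 67 infections over 800 child-years gives a rate of about 8.4 per 100 child-years with an interval close to the one reported above.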

  16. Vascular disease, ESRD, and death: interpreting competing risk analyses.

    PubMed

    Grams, Morgan E; Coresh, Josef; Segev, Dorry L; Kucirka, Lauren M; Tighiouart, Hocine; Sarnak, Mark J

    2012-10-01

    Vascular disease, a common condition in CKD, is a risk factor for mortality and ESRD. Optimal patient care requires accurate estimation and ordering of these competing risks. This is a prospective cohort study of screened (n=885) and randomized participants (n=837) in the Modification of Diet in Renal Disease study (original study enrollment, 1989-1992), evaluating the association of vascular disease with ESRD and pre-ESRD mortality using standard survival analysis and competing risk regression. The method of analysis resulted in markedly different estimates. Cumulative incidence by standard analysis (censoring at the competing event) implied that, with vascular disease, the 15-year incidence was 66% and 51% for ESRD and pre-ESRD death, respectively. A more accurate representation of absolute risk was estimated with competing risk regression: 15-year incidence was 54% and 29% for ESRD and pre-ESRD death, respectively. For the association of vascular disease with pre-ESRD death, estimates of relative risk by the two methods were similar (standard survival analysis adjusted hazard ratio, 1.63; 95% confidence interval, 1.20-2.20; competing risk regression adjusted subhazard ratio, 1.57; 95% confidence interval, 1.15-2.14). In contrast, the hazard and subhazard ratios differed substantially for other associations, such as GFR and pre-ESRD mortality. When competing events exist, absolute risk is better estimated using competing risk regression, but etiologic associations by this method must be carefully interpreted. The presence of vascular disease in CKD decreases the likelihood of survival to ESRD, independent of age and other risk factors.
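    The gap between the two absolute-risk estimates above (66% versus 54%) comes from how competing events are handled. A simplified sketch contrasts the naive 1 − Kaplan-Meier estimate (censoring at the competing event) with the cumulative incidence function; it processes one subject per event time and handles ties naively, so it is illustrative only:

    ```python
    def risk_estimates(times, causes, horizon):
        """causes: 0 = censored, 1 = event of interest, 2 = competing event.
        Returns (naive 1 - KM estimate, cumulative incidence) for cause 1
        at the given horizon."""
        data = sorted(zip(times, causes))
        at_risk = len(data)
        km_overall = 1.0  # survival from *any* event, feeds the CIF
        km_naive = 1.0    # KM treating competing events as censoring
        cif = 0.0
        for t, c in data:
            if t > horizon:
                break
            if c == 1:
                cif += km_overall * (1.0 / at_risk)  # uses S(t-) overall
                km_naive *= 1.0 - 1.0 / at_risk
                km_overall *= 1.0 - 1.0 / at_risk
            elif c == 2:
                km_overall *= 1.0 - 1.0 / at_risk
            at_risk -= 1
        return 1.0 - km_naive, cif
    ```

    Because subjects removed by the competing event can never experience the event of interest, the naive estimate is always at least as large as the cumulative incidence, matching the pattern in the abstract.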

  17. Estimating the Effect of School Water, Sanitation, and Hygiene Improvements on Pupil Health Outcomes.

    PubMed

    Garn, Joshua V; Brumback, Babette A; Drews-Botsch, Carolyn D; Lash, Timothy L; Kramer, Michael R; Freeman, Matthew C

    2016-09-01

    We conducted a cluster-randomized water, sanitation, and hygiene trial in 185 schools in Nyanza province, Kenya. The trial, however, had imperfect school-level adherence at many schools. The primary goal of this study was to estimate the causal effects of school-level adherence to interventions on pupil diarrhea and soil-transmitted helminth infection. Schools were divided into water availability groups, which were then randomized separately into either water, sanitation, and hygiene intervention arms or a control arm. School-level adherence to the intervention was defined by the number of intervention components (water, latrines, soap) that had been adequately implemented. The outcomes of interest were pupil diarrhea and soil-transmitted helminth infection. We used a weighted generalized structural nested model to calculate prevalence ratios. In the water-scarce group, there was evidence of a reduced prevalence of diarrhea among pupils attending schools that adhered to two or three intervention components (prevalence ratio = 0.28, 95% confidence interval: 0.10, 0.75), compared with what the prevalence would have been had the same schools instead adhered to zero or one component. In the water-available group, there was no evidence of reduced diarrhea with better adherence. For the soil-transmitted helminth infection and intensity outcomes, we often observed point estimates in the preventive direction with increasing intervention adherence, but primarily among girls, and the confidence intervals were often very wide. Our instrumental variable point estimates sometimes suggested protective effects with increased water, sanitation, and hygiene intervention adherence, although many of the estimates were imprecise.

  18. Long-Term Geomagnetically Induced Current Observations From New Zealand: Peak Current Estimates for Extreme Geomagnetic Storms

    NASA Astrophysics Data System (ADS)

    Rodger, Craig J.; Mac Manus, Daniel H.; Dalzell, Michael; Thomson, Alan W. P.; Clarke, Ellen; Petersen, Tanja; Clilverd, Mark A.; Divett, Tim

    2017-11-01

    Geomagnetically induced current (GIC) observations made in New Zealand over 14 years show induction effects associated with a rapidly varying horizontal magnetic field (dBH/dt) during geomagnetic storms. This study analyzes the GIC observations in order to estimate the impact of extreme storms as a hazard to the power system in New Zealand. Analysis is undertaken of GIC in transformer number 6 in Islington, Christchurch (ISL M6), which had the highest observed currents during the 6 November 2001 storm. Using previously published values of 3,000 nT/min as a representation of an extreme storm with a 100 year return period, induced currents of 455 A were estimated for Islington (with the 95% confidence interval range being 155-605 A). For 200 year return periods using 5,000 nT/min, current estimates reach 755 A (confidence interval range 155-910 A). GIC measurements from the much shorter data set collected at transformer number 4 in Halfway Bush, Dunedin (HWB T4), found induced currents to be consistently a factor of 3 higher than at Islington, suggesting equivalent extreme storm effects of 460-1,815 A (100 year return) and 460-2,720 A (200 year return). An estimate was undertaken of likely failure levels for single-phase transformers, such as HWB T4 when it failed during the 6 November 2001 geomagnetic storm, identifying that induced currents of 100 A can put such transformer types at risk of damage. Detailed modeling of the New Zealand power system is therefore required to put this regional analysis into a global context.

  19. Evaluation of estimation methods for organic carbon normalized sorption coefficients

    USGS Publications Warehouse

    Baker, James R.; Mihelcic, James R.; Luehrs, Dean C.; Hickey, James P.

    1997-01-01

    A critically evaluated set of 94 soil water partition coefficients normalized to soil organic carbon content (Koc) is presented for 11 classes of organic chemicals. This data set is used to develop and evaluate Koc estimation methods using three different descriptors. The three types of descriptors used in predicting Koc were the octanol/water partition coefficient (Kow), molecular connectivity (mXt) and linear solvation energy relationships (LSERs). The best results were obtained estimating Koc from Kow, though a slight improvement in the correlation coefficient was obtained by using a two-parameter regression with Kow and the third order difference term from mXt. Molecular connectivity correlations seemed to be best suited for use with specific chemical classes. The LSER provided a better fit than mXt but not as good as the correlation with Kow. The correlation to predict Koc from Kow was developed for 72 chemicals: log Koc = 0.903 log Kow + 0.094. This correlation accounts for 91% of the variability in the data for chemicals with log Kow ranging from 1.7 to 7.0. The expression to determine the 95% confidence interval on the estimated Koc is provided, along with an example for two chemicals of different hydrophobicity showing the confidence interval of the retardation factor determined from the estimated Koc. The data showed that the correlation is not likely to be applicable for chemicals with log Kow < 1.7. Finally, the Koc correlation developed using Kow as a descriptor was compared with three nonclass-specific correlations and two 'commonly used' class-specific correlations to determine which method(s) are most suitable.
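    The reported regression and its stated applicability range translate directly into code. The retardation-factor step uses the standard linear-sorption relation R = 1 + (rho_b/n)·Koc·f_oc, which is an assumption here since the abstract does not state the paper's exact expression, and the soil parameters below are illustrative:

    ```python
    def estimate_log_koc(log_kow):
        """log Koc from log Kow via the reported correlation
        (developed for 72 chemicals, valid for log Kow 1.7-7.0)."""
        if not (1.7 <= log_kow <= 7.0):
            raise ValueError("correlation not applicable outside log Kow 1.7-7.0")
        return 0.903 * log_kow + 0.094

    def retardation_factor(log_kow, f_oc, bulk_density, porosity):
        """Retardation factor R = 1 + (rho_b / n) * Koc * f_oc, using the
        estimated Koc. f_oc = fraction organic carbon; units must be
        consistent (bulk density in kg/L, porosity dimensionless)."""
        koc = 10.0 ** estimate_log_koc(log_kow)
        return 1.0 + (bulk_density / porosity) * koc * f_oc
    ```

    Guarding the input range encodes the paper's finding that the correlation should not be used below log Kow of 1.7.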

  20. Parents' willingness to pay for biologic treatments in juvenile idiopathic arthritis.

    PubMed

    Burnett, Heather F; Ungar, Wendy J; Regier, Dean A; Feldman, Brian M; Miller, Fiona A

    2014-12-01

    Biologic therapies are considered the standard of care for children with the most severe forms of juvenile idiopathic arthritis (JIA). Inconsistent and inadequate drug coverage, however, prevents many children from receiving timely and equitable access to the best treatment. The objective of this study was to evaluate parents' willingness to pay (WTP) for biologic and nonbiologic disease-modifying antirheumatic drugs (DMARDs) used to treat JIA. Utility weights from a discrete choice experiment were used to estimate the WTP for treatment characteristics including child-reported pain, participation in daily activities, side effects, days missed from school, drug treatment, and cost. Conditional logit regression was used to estimate utilities for each attribute level, and expected compensating variation was used to estimate the WTP. Bootstrapping was used to generate 95% confidence intervals for all WTP estimates. Parents had the highest marginal WTP for improved participation in daily activities and pain relief followed by the elimination of side effects of treatment. Parents were willing to pay $2080 (95% confidence interval $698-$4065) more for biologic DMARDs than for nonbiologic DMARDs if the biologic DMARD was more effective. Parents' WTP indicates their preference for treatments that reduce pain and improve daily functioning without side effects by estimating the monetary equivalent of utility for drug treatments in JIA. In addition to evidence of safety and efficacy, assessments of parents' preferences provide a broader perspective to decision makers by helping them understand the aspects of drug treatments in JIA that are most valued by families. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
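    The bootstrapping step used to put 95% intervals around the WTP estimates can be sketched generically. This shows only the percentile-interval mechanics on illustrative data; the study resampled WTP values derived from a conditional logit model, which is not reproduced here:

    ```python
    import random
    import statistics

    def bootstrap_ci(data, stat=statistics.mean, n_boot=2000, alpha=0.05, seed=1):
        """Percentile bootstrap confidence interval for any statistic.
        Returns (point estimate on the full sample, (lower, upper))."""
        rng = random.Random(seed)
        reps = sorted(stat([rng.choice(data) for _ in data])
                      for _ in range(n_boot))
        lo = reps[int(alpha / 2 * n_boot)]
        hi = reps[int((1 - alpha / 2) * n_boot) - 1]
        return stat(data), (lo, hi)
    ```

    Resampling respondents with replacement and recomputing the WTP statistic each time yields an interval that reflects sampling variability without assuming a parametric distribution for WTP.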
