Sample records for values confidence intervals

  1. The P Value Problem in Otolaryngology: Shifting to Effect Sizes and Confidence Intervals.

    PubMed

    Vila, Peter M; Townsend, Melanie Elizabeth; Bhatt, Neel K; Kao, W Katherine; Sinha, Parul; Neely, J Gail

    2017-06-01

    There is a lack of reporting effect sizes and confidence intervals in the current biomedical literature. The objective of this article is to present a discussion of the recent paradigm shift encouraging the use of reporting effect sizes and confidence intervals. Although P values help to inform us about whether an effect exists due to chance, effect sizes inform us about the magnitude of the effect (clinical significance), and confidence intervals inform us about the range of plausible estimates for the general population mean (precision). Reporting effect sizes and confidence intervals is a necessary addition to the biomedical literature, and these concepts are reviewed in this article.
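As an editorial illustration of reporting magnitude (effect size) together with precision (a confidence interval), here is a minimal Python sketch, not taken from the cited article. It computes Cohen's d with a large-sample normal-approximation standard error (some authors prefer noncentral-t intervals); the two samples are made-up numbers.

```python
import math
import statistics

def cohens_d_ci(x, y, conf=0.95):
    """Cohen's d (magnitude) with a normal-approximation CI (precision),
    using the Hedges-Olkin large-sample standard error."""
    n1, n2 = len(x), len(y)
    # pooled standard deviation
    sp = math.sqrt(((n1 - 1) * statistics.variance(x)
                    + (n2 - 1) * statistics.variance(y)) / (n1 + n2 - 2))
    d = (statistics.mean(x) - statistics.mean(y)) / sp
    se = math.sqrt((n1 + n2) / (n1 * n2) + d * d / (2 * (n1 + n2)))
    z = statistics.NormalDist().inv_cdf(0.5 + conf / 2)
    return d, (d - z * se, d + z * se)

# hypothetical measurements for two groups
d, (lo, hi) = cohens_d_ci([5.1, 4.8, 5.5, 5.0, 4.9, 5.3],
                          [4.2, 4.6, 4.1, 4.5, 4.3, 4.4])
```

Reporting `d` with `(lo, hi)` conveys both clinical magnitude and estimation uncertainty, which a bare p-value does not.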

  2. Explorations in Statistics: Confidence Intervals

    ERIC Educational Resources Information Center

    Curran-Everett, Douglas

    2009-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This third installment of "Explorations in Statistics" investigates confidence intervals. A confidence interval is a range that we expect, with some level of confidence, to include the true value of a population parameter…
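The definition above — a range expected, with some level of confidence, to include the true population parameter — can be explored with a short Python sketch (an editorial illustration, not from the cited installment). It uses a z critical value, which is adequate for moderately large samples; the data are made up.

```python
import math
import statistics

def mean_ci(sample, confidence=0.95):
    """Normal-approximation confidence interval for a population mean:
    a range of plausible values for the true mean at the stated level."""
    n = len(sample)
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    z = statistics.NormalDist().inv_cdf(0.5 + confidence / 2)
    return m - z * se, m + z * se

low, high = mean_ci([4.9, 5.1, 5.0, 4.8, 5.2, 5.0, 4.95, 5.05])
```

For small samples a t critical value would be the usual refinement.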

  3. Interpretation of Confidence Interval Facing the Conflict

    ERIC Educational Resources Information Center

    Andrade, Luisa; Fernández, Felipe

    2016-01-01

    As literature has reported, it is usual that university students in statistics courses, and even statistics teachers, interpret the confidence level associated with a confidence interval as the probability that the parameter value will be between the lower and upper interval limits. To confront this misconception, class activities have been…

  4. Confidence interval or p-value?: part 4 of a series on evaluation of scientific publications.

    PubMed

    du Prel, Jean-Baptist; Hommel, Gerhard; Röhrig, Bernd; Blettner, Maria

    2009-05-01

    An understanding of p-values and confidence intervals is necessary for the evaluation of scientific articles. This article will inform the reader of the meaning and interpretation of these two statistical concepts. The uses of these two statistical concepts and the differences between them are discussed on the basis of a selective literature search concerning the methods employed in scientific articles. P-values in scientific studies are used to determine whether a null hypothesis formulated before the performance of the study is to be accepted or rejected. In exploratory studies, p-values enable the recognition of any statistically noteworthy findings. Confidence intervals provide information about a range in which the true value lies with a certain degree of probability, as well as about the direction and strength of the demonstrated effect. This enables conclusions to be drawn about the statistical plausibility and clinical relevance of the study findings. It is often useful for both statistical measures to be reported in scientific articles, because they provide complementary types of information.

  5. Sampling Theory and Confidence Intervals for Effect Sizes: Using ESCI To Illustrate "Bouncing" Confidence Intervals.

    ERIC Educational Resources Information Center

    Du, Yunfei

    This paper discusses the impact of sampling error on the construction of confidence intervals around effect sizes. Sampling error affects the location and precision of confidence intervals. Meta-analytic resampling demonstrates that confidence intervals can haphazardly bounce around the true population parameter. Special software with graphical…
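The "bouncing" behavior described above can be reproduced without special software; the Python simulation below (an editorial sketch, not ESCI) draws repeated samples from one population and records each sample's interval. Individual intervals jump around the true mean, while long-run coverage stays near the nominal level (slightly below it here, since a z rather than t critical value is used).

```python
import math
import random
import statistics

def simulate_bouncing_cis(true_mean=0.0, sigma=1.0, n=20, trials=1000,
                          conf=0.95, seed=1):
    """Draw many samples from one population and record each sample's
    normal-approximation CI for the mean: the intervals 'bounce' around
    the true value, and roughly a fraction conf of them cover it."""
    rng = random.Random(seed)
    z = statistics.NormalDist().inv_cdf(0.5 + conf / 2)
    intervals, covered = [], 0
    for _ in range(trials):
        sample = [rng.gauss(true_mean, sigma) for _ in range(n)]
        m = statistics.mean(sample)
        se = statistics.stdev(sample) / math.sqrt(n)
        lo, hi = m - z * se, m + z * se
        intervals.append((lo, hi))
        covered += lo <= true_mean <= hi
    return intervals, covered / trials

intervals, coverage = simulate_bouncing_cis()
```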

  6. Simulation data for an estimation of the maximum theoretical value and confidence interval for the correlation coefficient.

    PubMed

    Rocco, Paolo; Cilurzo, Francesco; Minghetti, Paola; Vistoli, Giulio; Pedretti, Alessandro

    2017-10-01

    The data presented in this article are related to the article titled "Molecular Dynamics as a tool for in silico screening of skin permeability" (Rocco et al., 2017) [1]. Knowledge of the confidence interval and maximum theoretical value of the correlation coefficient r can prove useful to estimate the reliability of developed predictive models, in particular when there is great variability in compiled experimental datasets. In this Data in Brief article, data from purposely designed numerical simulations are presented to show how much the maximum r value is worsened by increasing the data uncertainty. The corresponding confidence interval of r is determined by using the Fisher r → Z transform.
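The Fisher r → Z construction mentioned in the abstract is short enough to sketch in Python (an editorial illustration; the r and n values are made up, not the article's data): z = atanh(r) is approximately normal with standard error 1/sqrt(n − 3), and the limits are mapped back with tanh.

```python
import math
import statistics

def fisher_ci(r, n, confidence=0.95):
    """Confidence interval for a correlation coefficient via the
    Fisher r -> Z transform."""
    z = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    zcrit = statistics.NormalDist().inv_cdf(0.5 + confidence / 2)
    return math.tanh(z - zcrit * se), math.tanh(z + zcrit * se)

lo, hi = fisher_ci(0.80, 30)
```

Note the asymmetry of the resulting interval around r, a consequence of the nonlinear back-transform.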

  7. Alternative Confidence Interval Methods Used in the Diagnostic Accuracy Studies

    PubMed Central

    Gülhan, Orekıcı Temel

    2016-01-01

    Background/Aim. It is necessary to decide whether newly developed diagnostic methods are better than the standard or reference test. To decide whether a new diagnostic test is better than the gold standard test/imperfect standard test, the differences of estimated sensitivity/specificity are calculated from sample data. However, to generalize this value to the population, it should be reported with confidence intervals. The aim of this study is to evaluate the confidence interval methods developed for the differences between two dependent sensitivity/specificity values in a clinical application. Materials and Methods. In this study, confidence interval methods such as Asymptotic Intervals, Conditional Intervals, Unconditional Interval, Score Intervals, and Nonparametric Methods Based on Relative Effects Intervals are used. As a clinical application, data from the diagnostic study by Dickel et al. (2010) are used as a sample. Results. The results of the alternative confidence interval methods for Nickel Sulfate, Potassium Dichromate, and Lanolin Alcohol are tabulated. Conclusion. When choosing among confidence interval methods, researchers must consider whether the comparison involves a single ratio or a difference between dependent binary ratios, the correlation coefficient between the rates in the two dependent ratios, and the sample sizes. PMID:27478491

  8. Alternative Confidence Interval Methods Used in the Diagnostic Accuracy Studies.

    PubMed

    Erdoğan, Semra; Gülhan, Orekıcı Temel

    2016-01-01

    Background/Aim. It is necessary to decide whether newly developed diagnostic methods are better than the standard or reference test. To decide whether a new diagnostic test is better than the gold standard test/imperfect standard test, the differences of estimated sensitivity/specificity are calculated from sample data. However, to generalize this value to the population, it should be reported with confidence intervals. The aim of this study is to evaluate the confidence interval methods developed for the differences between two dependent sensitivity/specificity values in a clinical application. Materials and Methods. In this study, confidence interval methods such as Asymptotic Intervals, Conditional Intervals, Unconditional Interval, Score Intervals, and Nonparametric Methods Based on Relative Effects Intervals are used. As a clinical application, data from the diagnostic study by Dickel et al. (2010) are used as a sample. Results. The results of the alternative confidence interval methods for Nickel Sulfate, Potassium Dichromate, and Lanolin Alcohol are tabulated. Conclusion. When choosing among confidence interval methods, researchers must consider whether the comparison involves a single ratio or a difference between dependent binary ratios, the correlation coefficient between the rates in the two dependent ratios, and the sample sizes.
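The simplest member of the family compared above is the asymptotic (Wald) interval for the difference between two dependent proportions. A Python sketch follows; it is an editorial illustration with invented discordant counts, not the paper's preferred method or the Dickel et al. data.

```python
import math
import statistics

def paired_diff_ci(b, c, n, confidence=0.95):
    """Asymptotic (Wald) CI for the difference between two dependent
    proportions, e.g. sensitivities of two tests applied to the same
    subjects.  b and c are the discordant counts (test 1 positive only,
    test 2 positive only) among n diseased subjects."""
    diff = (b - c) / n
    se = math.sqrt(b + c - (b - c) ** 2 / n) / n
    z = statistics.NormalDist().inv_cdf(0.5 + confidence / 2)
    return diff - z * se, diff + z * se

# hypothetical discordant counts out of 100 diseased subjects
lo, hi = paired_diff_ci(b=12, c=4, n=100)
```

Score and exact methods, as the paper discusses, behave better than this Wald form when counts are small.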

  9. Evaluation of confidence intervals for a steady-state leaky aquifer model

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    1999-01-01

    The fact that dependent variables of groundwater models are generally nonlinear functions of model parameters is shown to be a potentially significant factor in calculating accurate confidence intervals for both model parameters and functions of the parameters, such as the values of dependent variables calculated by the model. The Lagrangian method of Vecchia and Cooley [Vecchia, A.V. and Cooley, R.L., Water Resources Research, 1987, 23(7), 1237-1250] was used to calculate nonlinear Scheffe-type confidence intervals for the parameters and the simulated heads of a steady-state groundwater flow model covering 450 km2 of a leaky aquifer. The nonlinear confidence intervals are compared to corresponding linear intervals. As suggested by the significant nonlinearity of the regression model, linear confidence intervals are often not accurate. The commonly made assumption that widths of linear confidence intervals always underestimate the actual (nonlinear) widths was not correct. Results show that nonlinear effects can cause the nonlinear intervals to be asymmetric and either larger or smaller than the linear approximations. Prior information on transmissivities helps reduce the size of the confidence intervals, with the most notable effects occurring for the parameters on which there is prior information and for head values in parameter zones for which there is prior information on the parameters.

  10. Reporting Confidence Intervals and Effect Sizes: Collecting the Evidence

    ERIC Educational Resources Information Center

    Zientek, Linda Reichwein; Ozel, Z. Ebrar Yetkiner; Ozel, Serkan; Allen, Jeff

    2012-01-01

    Confidence intervals (CIs) and effect sizes are essential to encourage meta-analytic thinking and to accumulate research findings. CIs provide a range of plausible values for population parameters with a degree of confidence that the parameter is in that particular interval. CIs also give information about how precise the estimates are. Comparison…

  11. Minimax confidence intervals in geomagnetism

    NASA Technical Reports Server (NTRS)

    Stark, Philip B.

    1992-01-01

    The present paper uses theory of Donoho (1989) to find lower bounds on the lengths of optimally short fixed-length confidence intervals (minimax confidence intervals) for Gauss coefficients of the field of degree 1-12 using the heat flow constraint. The bounds on optimal minimax intervals are about 40 percent shorter than Backus' intervals: no procedure for producing fixed-length confidence intervals, linear or nonlinear, can give intervals shorter than about 60 percent the length of Backus' in this problem. While both methods rigorously account for the fact that core field models are infinite-dimensional, the application of the techniques to the geomagnetic problem involves approximations and counterfactual assumptions about the data errors, and so these results are likely to be extremely optimistic estimates of the actual uncertainty in Gauss coefficients.

  12. Confidence intervals for distinguishing ordinal and disordinal interactions in multiple regression.

    PubMed

    Lee, Sunbok; Lei, Man-Kit; Brody, Gene H

    2015-06-01

    Distinguishing between ordinal and disordinal interaction in multiple regression is useful in testing many interesting theoretical hypotheses. Because the distinction is made based on the location of a crossover point of 2 simple regression lines, confidence intervals of the crossover point can be used to distinguish ordinal and disordinal interactions. This study examined 2 factors that need to be considered in constructing confidence intervals of the crossover point: (a) the assumption about the sampling distribution of the crossover point, and (b) the possibility of abnormally wide confidence intervals for the crossover point. A Monte Carlo simulation study was conducted to compare 6 different methods for constructing confidence intervals of the crossover point in terms of the coverage rate, the proportion of true values that fall to the left or right of the confidence intervals, and the average width of the confidence intervals. The methods include the reparameterization, delta, Fieller, basic bootstrap, percentile bootstrap, and bias-corrected accelerated bootstrap methods. The results of our Monte Carlo simulation study suggest that statistical inference using confidence intervals to distinguish ordinal and disordinal interaction requires sample sizes more than 500 to be able to provide sufficiently narrow confidence intervals to identify the location of the crossover point.
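One of the six methods above, the percentile bootstrap, is easy to sketch for the crossover of two simple regression lines. The Python illustration below is editorial (synthetic data with a known crossover at x = 2), not the paper's simulation code.

```python
import random

def fit_line(xs, ys):
    """Ordinary least-squares intercept and slope."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return my - slope * mx, slope

def crossover(g1, g2):
    """x-coordinate where the two fitted regression lines cross."""
    a1, b1 = fit_line(*g1)
    a2, b2 = fit_line(*g2)
    return (a2 - a1) / (b1 - b2)

def percentile_boot_ci(g1, g2, reps=2000, conf=0.95, seed=7):
    """Percentile-bootstrap CI for the crossover point: resample (x, y)
    pairs within each group and collect the crossover of each refit."""
    rng = random.Random(seed)
    pairs1, pairs2 = list(zip(*g1)), list(zip(*g2))
    stats = []
    for _ in range(reps):
        s1 = [rng.choice(pairs1) for _ in pairs1]
        s2 = [rng.choice(pairs2) for _ in pairs2]
        try:
            stats.append(crossover(([p[0] for p in s1], [p[1] for p in s1]),
                                   ([p[0] for p in s2], [p[1] for p in s2])))
        except ZeroDivisionError:  # parallel or degenerate resample
            continue
    stats.sort()
    k = len(stats)
    return stats[int((1 - conf) / 2 * k)], stats[int((1 + conf) / 2 * k) - 1]

# synthetic groups with true lines y = 1 + 0.5x and y = 3 - 0.5x (cross at x = 2)
rng = random.Random(3)
xs = [i / 4 for i in range(40)]
g1 = (xs, [1 + 0.5 * x + rng.gauss(0, 0.3) for x in xs])
g2 = (xs, [3 - 0.5 * x + rng.gauss(0, 0.3) for x in xs])
lo, hi = percentile_boot_ci(g1, g2)
```

With small samples or nearly parallel lines the bootstrap distribution becomes unstable, which is exactly the "abnormally wide intervals" problem the paper studies.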

  13. Improved central confidence intervals for the ratio of Poisson means

    NASA Astrophysics Data System (ADS)

    Cousins, R. D.

    The problem of confidence intervals for the ratio of two unknown Poisson means was "solved" decades ago, but a closer examination reveals that the standard solution is far from optimal from the frequentist point of view. We construct a more powerful set of central confidence intervals, each of which is a (typically proper) subinterval of the corresponding standard interval. They also provide upper and lower confidence limits which are more restrictive than the standard limits. The construction follows Neyman's original prescription, though discreteness of the Poisson distribution and the presence of a nuisance parameter (one of the unknown means) lead to slightly conservative intervals. Philosophically, the issue of the appropriateness of the construction method is similar to the issue of conditioning on the margins in 2×2 contingency tables. From a frequentist point of view, the new set maintains (over) coverage of the unknown true value of the ratio of means at each stated confidence level, even though the new intervals are shorter than the old intervals by any measure (except for two cases where they are identical). As an example, when the number 2 is drawn from each Poisson population, the 90% CL central confidence interval on the ratio of means is (0.169, 5.196), rather than (0.108, 9.245). In the cited literature, such confidence intervals have applications in numerous branches of pure and applied science, including agriculture, wildlife studies, manufacturing, medicine, reliability theory, and elementary particle physics.
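The "standard" central interval that the paper improves upon follows from conditioning on the total count: given n1 + n2, the count n1 is binomial with parameter p = ρ/(1 + ρ), so exact (Clopper-Pearson) binomial limits for p map back to limits for the ratio ρ. A Python sketch of that standard construction (an editorial illustration, not the paper's improved intervals) reproduces the abstract's example of counts 2 and 2 at 90% CL:

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(k + 1))

def bisect(f, lo, hi, iters=80):
    """Root of a monotone f with a sign change on [lo, hi]."""
    flo = f(lo)
    for _ in range(iters):
        mid = (lo + hi) / 2
        fm = f(mid)
        if flo * fm <= 0:
            hi = mid
        else:
            lo, flo = mid, fm
    return (lo + hi) / 2

def poisson_ratio_ci(n1, n2, conf=0.90):
    """Standard central interval for rho = lambda1/lambda2 from Poisson
    counts n1, n2: invert the exact binomial tails for p = rho/(1+rho),
    then transform back to the ratio scale."""
    n = n1 + n2
    alpha = 1 - conf
    p_lo = 0.0 if n1 == 0 else bisect(
        lambda p: (1 - binom_cdf(n1 - 1, n, p)) - alpha / 2, 0.0, 1.0)
    p_hi = 1.0 if n1 == n else bisect(
        lambda p: alpha / 2 - binom_cdf(n1, n, p), 0.0, 1.0)
    ratio = lambda p: math.inf if p == 1.0 else p / (1 - p)
    return ratio(p_lo), ratio(p_hi)

# counts of 2 drawn from each population, 90% CL: approx (0.108, 9.245)
lo, hi = poisson_ratio_ci(2, 2, conf=0.90)
```

The paper's contribution is a shorter set of intervals that still maintains coverage; the code above gives only the conventional baseline it is compared against.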

  14. Confidence intervals in Flow Forecasting by using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Panagoulia, Dionysia; Tsekouras, George

    2014-05-01

    One of the major inadequacies in the implementation of Artificial Neural Networks (ANNs) for flow forecasting is the development of confidence intervals, because the relevant estimation cannot be implemented directly, in contrast to the classical forecasting methods. The variation in the ANN output is a measure of uncertainty in the model predictions based on the training data set. Different methods for uncertainty analysis, such as bootstrap, Bayesian, and Monte Carlo, have already been proposed for hydrologic and geophysical models, while methods for confidence intervals, such as error output, re-sampling, and multi-linear regression adapted to ANN, have been used for power load forecasting [1-2]. The aim of this paper is to present the re-sampling method for ANN prediction models and to develop it for next-day flow forecasting. The re-sampling method is based on the ascending sorting of the errors between real and predicted values for all input vectors. The cumulative sample distribution function of the prediction errors is calculated and the confidence intervals are estimated by keeping the intermediate values, rejecting the extreme values according to the desired confidence levels, and holding the intervals symmetrical in probability. As an application of the confidence intervals, input vectors are used from the Mesochora catchment in western-central Greece. The ANN's training algorithm is the stochastic back-propagation process with decreasing functions of learning rate and momentum term, for which an optimization process is conducted over the crucial parameter values, such as the number of neurons, the kind of activation functions, and the initial values and time parameters of the learning rate and momentum term. Input variables are historical data of previous days, such as flows, nonlinearly weather-related temperatures and nonlinearly weather-related rainfalls based on correlation analysis between the flow to be predicted and each implicit input
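The re-sampling step described above (sort the errors, drop equal probability mass from each tail, keep the symmetric-in-probability central range) is model-agnostic and fits in a few lines of Python. This is an editorial sketch with synthetic errors standing in for a trained ANN's validation-set errors.

```python
import random

def resampling_interval(errors, conf=0.95):
    """Sort the forecast errors (real - predicted) and keep the central,
    symmetric-in-probability quantile range at the desired level."""
    s = sorted(errors)
    n = len(s)
    tail = (1 - conf) / 2
    return s[int(tail * n)], s[min(n - 1, int((1 - tail) * n))]

def predict_with_interval(point_forecast, errors, conf=0.95):
    """Attach the empirical error quantiles to a new point forecast."""
    lo, hi = resampling_interval(errors, conf)
    return point_forecast + lo, point_forecast + hi

# synthetic past errors; in the paper these come from the trained ANN
rng = random.Random(0)
errors = [rng.gauss(0, 2.0) for _ in range(500)]
band = predict_with_interval(50.0, errors, conf=0.90)
```

Because the band is built from empirical quantiles, it needs no distributional assumption about the model's errors, only that future errors resemble past ones.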

  15. Applying Bootstrap Resampling to Compute Confidence Intervals for Various Statistics with R

    ERIC Educational Resources Information Center

    Dogan, C. Deha

    2017-01-01

    Background: Most of the studies in academic journals use p values to represent statistical significance. However, this is not a good indicator of practical significance. Although confidence intervals provide information about the precision of point estimation, they are, unfortunately, rarely used. The infrequent use of confidence intervals might…

  16. The Distribution of the Product Explains Normal Theory Mediation Confidence Interval Estimation.

    PubMed

    Kisbu-Sakarya, Yasemin; MacKinnon, David P; Miočević, Milica

    2014-05-01

    The distribution of the product has several useful applications. One of these applications is its use to form confidence intervals for the indirect effect as the product of 2 regression coefficients. The purpose of this article is to investigate how the moments of the distribution of the product explain normal theory mediation confidence interval coverage and imbalance. Values of the critical ratio for each random variable are used to demonstrate how the moments of the distribution of the product change across values of the critical ratio observed in research studies. Results of the simulation study showed that as skewness in absolute value increases, coverage decreases. And as skewness in absolute value and kurtosis increases, imbalance increases. The difference between testing the significance of the indirect effect using the normal theory versus the asymmetric distribution of the product is further illustrated with a real data example. This article is the first study to show the direct link between the distribution of the product and indirect effect confidence intervals and clarifies the results of previous simulation studies by showing why normal theory confidence intervals for indirect effects are often less accurate than those obtained from the asymmetric distribution of the product or from resampling methods.
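The asymmetric distribution of the product a·b is easy to approximate by Monte Carlo, which makes the contrast with symmetric normal-theory intervals concrete. The Python sketch below is an editorial illustration with invented coefficient estimates and standard errors, not the article's simulation design.

```python
import random

def product_mc_ci(a, se_a, b, se_b, conf=0.95, reps=20000, seed=11):
    """Monte Carlo approximation to the (asymmetric) sampling distribution
    of the indirect effect a*b: draw each coefficient from its normal
    sampling distribution, multiply, and take percentile limits."""
    rng = random.Random(seed)
    prods = sorted(rng.gauss(a, se_a) * rng.gauss(b, se_b)
                   for _ in range(reps))
    tail = (1 - conf) / 2
    return prods[int(tail * reps)], prods[int((1 - tail) * reps) - 1]

# hypothetical path estimates a and b with their standard errors
lo, hi = product_mc_ci(a=0.4, se_a=0.1, b=0.3, se_b=0.1)
```

The resulting interval is not centered on a·b, reflecting the skewness and kurtosis of the product distribution that the article links to coverage and imbalance.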

  17. Improved confidence intervals when the sample is counted an integer times longer than the blank.

    PubMed

    Potter, William Edward; Strzelczyk, Jadwiga Jodi

    2011-05-01

    Past computer solutions for confidence intervals in paired counting are extended to the case where the ratio of the sample count time to the blank count time is taken to be an integer, IRR. Previously, confidence intervals have been named Neyman-Pearson confidence intervals; more correctly they should have been named Neyman confidence intervals or simply confidence intervals. The technique utilized mimics a technique used by Pearson and Hartley to tabulate confidence intervals for the expected value of the discrete Poisson and Binomial distributions. The blank count and the contribution of the sample to the gross count are assumed to be Poisson distributed. The expected value of the blank count, in the sample count time, is assumed known. The net count, OC, is taken to be the gross count minus the product of IRR with the blank count. The probability density function (PDF) for the net count can be determined in a straightforward manner.

  18. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

    PubMed

    Terry, Leann; Kelley, Ken

    2012-11-01

    Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software.
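The paper's accuracy-in-parameter-estimation logic targets composite reliability coefficients, but the core idea — choose n so that the expected interval is narrower than a target width — is easiest to see for a mean, where the width formula inverts in closed form. An editorial Python sketch under that simplification:

```python
import math
import statistics

def n_for_ci_width(sd, target_width, confidence=0.95):
    """Smallest n whose expected normal-theory CI for a mean is narrower
    than target_width, from width = 2 * z * sd / sqrt(n)."""
    z = statistics.NormalDist().inv_cdf(0.5 + confidence / 2)
    return math.ceil((2 * z * sd / target_width) ** 2)

# e.g. sd = 1.0, desired 95% CI no wider than 0.2 units
n = n_for_ci_width(sd=1.0, target_width=0.2, confidence=0.95)
```

For reliability coefficients the width depends on the parameter itself, so the paper's methods iterate rather than invert a single formula, and the "assurance" variant adds a margin so the width target is met with high probability.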

  19. Confidence Intervals from Realizations of Simulated Nuclear Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younes, W.; Ratkiewicz, A.; Ressler, J. J.

    2017-09-28

    Various statistical techniques are discussed that can be used to assign a level of confidence in the prediction of models that depend on input data with known uncertainties and correlations. The particular techniques reviewed in this paper are: 1) random realizations of the input data using Monte-Carlo methods, 2) the construction of confidence intervals to assess the reliability of model predictions, and 3) resampling techniques to impose statistical constraints on the input data based on additional information. These techniques are illustrated with a calculation of the keff value, based on the 235U(n, f) and 239Pu (n, f) cross sections.

  20. Confidence Intervals for Proportion Estimates in Complex Samples. Research Report. ETS RR-06-21

    ERIC Educational Resources Information Center

    Oranje, Andreas

    2006-01-01

    Confidence intervals are an important tool to indicate uncertainty of estimates and to give an idea of probable values of an estimate if a different sample from the population was drawn or a different sample of measures was used. Standard symmetric confidence intervals for proportion estimates based on a normal approximation can yield bounds…
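The abstract's point that symmetric normal-approximation bounds for proportions can misbehave is easy to demonstrate: the Wald interval can escape [0, 1] for proportions near the boundary, while the Wilson score interval cannot. A Python sketch (editorial illustration; the counts are made up, and complex-sample designs as in the report need further design-effect adjustments):

```python
import math
import statistics

def wald_ci(x, n, conf=0.95):
    """Symmetric normal-approximation (Wald) interval; can leave [0, 1]."""
    p = x / n
    z = statistics.NormalDist().inv_cdf(0.5 + conf / 2)
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

def wilson_ci(x, n, conf=0.95):
    """Wilson score interval; always stays inside [0, 1]."""
    p = x / n
    z = statistics.NormalDist().inv_cdf(0.5 + conf / 2)
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# 1 success in 20 trials: Wald dips below zero, Wilson does not
wald = wald_ci(1, 20)
wilson = wilson_ci(1, 20)
```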

  1. Confidence Intervals Make a Difference: Effects of Showing Confidence Intervals on Inferential Reasoning

    ERIC Educational Resources Information Center

    Hoekstra, Rink; Johnson, Addie; Kiers, Henk A. L.

    2012-01-01

    The use of confidence intervals (CIs) as an addition or as an alternative to null hypothesis significance testing (NHST) has been promoted as a means to make researchers more aware of the uncertainty that is inherent in statistical inference. Little is known, however, about whether presenting results via CIs affects how readers judge the…

  2. Bootstrapping Confidence Intervals for Robust Measures of Association.

    ERIC Educational Resources Information Center

    King, Jason E.

    A Monte Carlo simulation study was conducted to determine the bootstrap correction formula yielding the most accurate confidence intervals for robust measures of association. Confidence intervals were generated via the percentile, adjusted, BC, and BC(a) bootstrap procedures and applied to the Winsorized, percentage bend, and Pearson correlation…

  3. Confidence Intervals for Effect Sizes: Applying Bootstrap Resampling

    ERIC Educational Resources Information Center

    Banjanovic, Erin S.; Osborne, Jason W.

    2016-01-01

    Confidence intervals for effect sizes (CIES) provide readers with an estimate of the strength of a reported statistic as well as the relative precision of the point estimate. These statistics offer more information and context than null hypothesis statistic testing. Although confidence intervals have been recommended by scholars for many years,…

  4. [Confidence interval or p-value--similarities and differences between two important methods of statistical inference of quantitative studies].

    PubMed

    Harari, Gil

    2014-01-01

    Statistic significance, also known as p-value, and CI (Confidence Interval) are common statistics measures and are essential for the statistical analysis of studies in medicine and life sciences. These measures provide complementary information about the statistical probability and conclusions regarding the clinical significance of study findings. This article is intended to describe the methodologies, compare between the methods, assert their suitability for the different needs of study results analysis and to explain situations in which each method should be used.

  5. Reducing the width of confidence intervals for the difference between two population means by inverting adaptive tests.

    PubMed

    O'Gorman, Thomas W

    2018-05-01

    In the last decade, it has been shown that an adaptive testing method could be used, along with the Robbins-Monro search procedure, to obtain confidence intervals that are often narrower than traditional confidence intervals. However, these confidence interval limits require a great deal of computation and some familiarity with stochastic search methods. We propose a method for estimating the limits of confidence intervals that uses only a few tests of significance. We compare these limits to those obtained by a lengthy Robbins-Monro stochastic search and find that the proposed method is nearly as accurate as the Robbins-Monro search. Adaptive confidence intervals that are produced by the proposed method are often narrower than traditional confidence intervals when the distributions are long-tailed, skewed, or bimodal. Moreover, the proposed method of estimating confidence interval limits is easy to understand, because it is based solely on the p-values from a few tests of significance.
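The core device above — confidence limits found by inverting tests of significance — can be sketched independently of the adaptive machinery. The Python illustration below inverts an ordinary two-sided z-test by bisection on the p-value (an editorial simplification: the paper inverts adaptive tests, not a z-test), recovering the usual normal-theory interval.

```python
import math
import statistics

def z_test_pvalue(sample, mu0):
    """Two-sided z-test p-value for H0: mean = mu0 (sd estimated)."""
    n = len(sample)
    z = (statistics.mean(sample) - mu0) / (statistics.stdev(sample) / math.sqrt(n))
    return 2 * (1 - statistics.NormalDist().cdf(abs(z)))

def _bisect_boundary(is_inside, outside, inside, iters=80):
    """Bisection for the boundary between a point the test rejects
    (outside) and one it does not (inside)."""
    for _ in range(iters):
        mid = (outside + inside) / 2
        if is_inside(mid):
            inside = mid
        else:
            outside = mid
    return (outside + inside) / 2

def invert_test_ci(sample, alpha=0.05):
    """Confidence limits as the mu0 values where the two-sided test's
    p-value crosses alpha (test inversion)."""
    m = statistics.mean(sample)
    spread = 10 * statistics.stdev(sample) + 1.0
    inside = lambda mu0: z_test_pvalue(sample, mu0) > alpha
    lower = _bisect_boundary(inside, m - spread, m)
    upper = _bisect_boundary(inside, m + spread, m)
    return lower, upper

lo, hi = invert_test_ci([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
```

Replacing the z-test with an adaptive test in `inside` is, in spirit, how the narrower adaptive intervals arise; only a few test evaluations per endpoint are needed.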

  6. Confidence intervals from single observations in forest research

    Treesearch

    Harry T. Valentine; George M. Furnival; Timothy G. Gregoire

    1991-01-01

    A procedure for constructing confidence intervals and testing hypotheses from a single trial or observation is reviewed. The procedure requires a prior, fixed estimate or guess of the outcome of an experiment or sampling. Two examples of applications are described: a confidence interval is constructed for the expected outcome of a systematic sampling of a forested tract...

  7. Quantitative imaging biomarkers: Effect of sample size and bias on confidence interval coverage.

    PubMed

    Obuchowski, Nancy A; Bullen, Jennifer

    2017-01-01

    Introduction. Quantitative imaging biomarkers (QIBs) are being increasingly used in medical practice and clinical trials. An essential first step in the adoption of a quantitative imaging biomarker is the characterization of its technical performance, i.e. precision and bias, through one or more performance studies. Then, given the technical performance, a confidence interval for a new patient's true biomarker value can be constructed. Estimating bias and precision can be problematic because rarely are both estimated in the same study, precision studies are usually quite small, and bias cannot be measured when there is no reference standard. Methods. A Monte Carlo simulation study was conducted to assess factors affecting nominal coverage of confidence intervals for a new patient's quantitative imaging biomarker measurement and for change in the quantitative imaging biomarker over time. Factors considered include sample size for estimating bias and precision, effect of fixed and non-proportional bias, clustered data, and absence of a reference standard. Results. Technical performance studies of a quantitative imaging biomarker should include at least 35 test-retest subjects to estimate precision and 65 cases to estimate bias. Confidence intervals for a new patient's quantitative imaging biomarker measurement constructed under the no-bias assumption provide nominal coverage as long as the fixed bias is <12%. For confidence intervals of the true change over time, linearity must hold and the slope of the regression of the measurements vs. true values should be between 0.95 and 1.05. The regression slope can be assessed adequately as long as fixed multiples of the measurand can be generated. Even small non-proportional bias greatly reduces confidence interval coverage. Multiple lesions in the same subject can be treated as independent when estimating precision. Conclusion. Technical performance studies of quantitative imaging biomarkers require moderate sample sizes in

  8. Graphing within-subjects confidence intervals using SPSS and S-Plus.

    PubMed

    Wright, Daniel B

    2007-02-01

    Within-subjects confidence intervals are often appropriate to report and to display. Loftus and Masson (1994) have reported methods to calculate these, and their use is becoming common. In the present article, procedures for calculating within-subjects confidence intervals in SPSS and S-Plus are presented (an R version is on the accompanying Web site). The procedure in S-Plus allows the user to report the bias corrected and adjusted bootstrap confidence intervals as well as the standard confidence intervals based on traditional methods. The presented code can be easily altered to fit the individual user's needs.
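The normalization behind within-subjects error bars can be sketched directly in Python (an editorial illustration with made-up data, not the article's SPSS/S-Plus code): subtract each subject's mean, add back the grand mean, and compute per-condition intervals on the normalized scores. Note this is the normalization idea; the exact Loftus and Masson (1994) intervals instead use the ANOVA interaction mean square error.

```python
import math
import statistics

def within_subject_cis(data, conf=0.95):
    """Per-condition CIs after removing between-subject variation.
    `data` is a list of per-subject rows, one score per condition."""
    n_subj = len(data)
    n_cond = len(data[0])
    grand = statistics.mean(v for row in data for v in row)
    normalized = [[v - statistics.mean(row) + grand for v in row]
                  for row in data]
    z = statistics.NormalDist().inv_cdf(0.5 + conf / 2)
    cis = []
    for j in range(n_cond):
        col = [row[j] for row in normalized]
        m = statistics.mean(col)
        se = statistics.stdev(col) / math.sqrt(n_subj)
        cis.append((m - z * se, m + z * se))
    return cis

# 5 hypothetical subjects, 2 repeated-measures conditions
data = [[10, 12], [13, 16], [9, 11], [14, 17], [11, 14]]
cis = within_subject_cis(data)
```

Because subjects differ far more from one another than conditions differ within a subject, these intervals are much narrower than between-subjects intervals on the raw scores.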

  9. Confidence Intervals for Error Rates Observed in Coded Communications Systems

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2015-05-01

    We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful, if any, codeword errors are able to be simulated. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the lesser studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system.
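The key point above — bit errors cluster within codewords, so per-bit binomial intervals are too narrow — can be sketched by treating the per-codeword error count as the independent unit and using its first and second sample moments. This Python illustration is editorial, with invented counts; it follows the spirit, not the exact derivation, of the cited method.

```python
import math
import statistics

def ber_ci(errors_per_codeword, bits_per_codeword, conf=0.95):
    """CI for the bit error rate when bit errors cluster within codewords:
    use the mean and variance of the per-codeword error counts rather
    than assuming each bit is an independent Bernoulli trial."""
    m = len(errors_per_codeword)
    mean_e = statistics.mean(errors_per_codeword)
    var_e = statistics.variance(errors_per_codeword)
    z = statistics.NormalDist().inv_cdf(0.5 + conf / 2)
    ber = mean_e / bits_per_codeword
    se = math.sqrt(var_e / m) / bits_per_codeword
    return max(0.0, ber - z * se), ber + z * se

# hypothetical simulation: 100 codewords of 1000 bits, errors in bursts
counts = [0] * 90 + [0, 3, 0, 5, 0, 4, 0, 2, 0, 6]
lo, hi = ber_ci(counts, bits_per_codeword=1000)
```

For these bursty counts the interval is noticeably wider than a naive binomial interval on the pooled bits, which is the correction the paper argues for.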

  10. Robust misinterpretation of confidence intervals.

    PubMed

    Hoekstra, Rink; Morey, Richard D; Rouder, Jeffrey N; Wagenmakers, Eric-Jan

    2014-10-01

    Null hypothesis significance testing (NHST) is undoubtedly the most common inferential technique used to justify claims in the social sciences. However, even staunch defenders of NHST agree that its outcomes are often misinterpreted. Confidence intervals (CIs) have frequently been proposed as a more useful alternative to NHST, and their use is strongly encouraged in the APA Manual. Nevertheless, little is known about how researchers interpret CIs. In this study, 120 researchers and 442 students, all in the field of psychology, were asked to assess the truth value of six particular statements involving different interpretations of a CI. Although all six statements were false, both researchers and students endorsed, on average, more than three statements, indicating a gross misunderstanding of CIs. Self-declared experience with statistics was not related to researchers' performance, and, even more surprisingly, researchers hardly outperformed the students, even though the students had not received any education on statistical inference whatsoever. Our findings suggest that many researchers do not know the correct interpretation of a CI. The misunderstandings surrounding p-values and CIs are particularly unfortunate because they constitute the main tools by which psychologists draw conclusions from data.

  11. Confidence intervals for correlations when data are not normal.

    PubMed

    Bishara, Anthony J; Hittner, James B

    2017-02-01

    With nonnormal data, the typical confidence interval of the correlation (Fisher z') may be inaccurate. The literature has been unclear as to which of several alternative methods should be used instead, and how extreme a violation of normality is needed to justify an alternative. Through Monte Carlo simulation, 11 confidence interval methods were compared, including Fisher z', two Spearman rank-order methods, the Box-Cox transformation, rank-based inverse normal (RIN) transformation, and various bootstrap methods. Nonnormality often distorted the Fisher z' confidence interval; for example, a nominal 95 % confidence interval could have actual coverage as low as 68 %. Increasing the sample size sometimes worsened this problem. Inaccurate Fisher z' intervals could be predicted by a sample kurtosis of at least 2, an absolute sample skewness of at least 1, or significant violations of normality hypothesis tests. Only the Spearman rank-order and RIN transformation methods were universally robust to nonnormality. Among the bootstrap methods, an observed imposed bootstrap came closest to accurate coverage, though it often resulted in an overly long interval. The results suggest that sample nonnormality can justify avoiding the Fisher z' interval in favor of a more robust alternative. R code for the relevant methods is provided in supplementary materials.
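    For reference, the Fisher z' interval the authors evaluate has a simple closed form (it assumes bivariate normality; the default 1.96 corresponds to 95 % coverage):

```python
import math

def fisher_z_ci(r, n, z_crit=1.96):
    """Classical Fisher z' confidence interval for a correlation r
    computed from n paired observations."""
    z = math.atanh(r)                    # Fisher transform of r
    se = 1.0 / math.sqrt(n - 3)          # approximate standard error of z
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)  # back-transform to the r scale
```

    The back-transformed interval is asymmetric around r, reflecting the bounded (-1, 1) scale of the correlation.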

  12. On Some Confidence Intervals for Estimating the Mean of a Skewed Population

    ERIC Educational Resources Information Center

    Shi, W.; Kibria, B. M. Golam

    2007-01-01

    A number of methods are available in the literature to measure confidence intervals. Here, confidence intervals for estimating the population mean of a skewed distribution are considered. This note proposes two alternative confidence intervals, namely, Median t and Mad t, which are simple adjustments to the Student's t confidence interval. In…

  13. Constructing Confidence Intervals for Reliability Coefficients Using Central and Noncentral Distributions.

    ERIC Educational Resources Information Center

    Weber, Deborah A.

    Greater understanding and use of confidence intervals is central to changes in statistical practice (G. Cumming and S. Finch, 2001). Reliability coefficients and confidence intervals for reliability coefficients can be computed using a variety of methods. Estimating confidence intervals includes both central and noncentral distribution approaches.…

  14. Modified Confidence Intervals for the Mean of an Autoregressive Process.

    DTIC Science & Technology

    1985-08-01

    ...of standard confidence intervals. There are several standard methods of setting confidence intervals in simulations, including the regenerative method, batch means, and time series methods. We will focus on improved confidence intervals for the mean of an autoregressive process, and as such our

  15. Exact Scheffé-type confidence intervals for output from groundwater flow models: 1. Use of hydrogeologic information

    USGS Publications Warehouse

    Cooley, Richard L.

    1993-01-01

    A new method is developed to efficiently compute exact Scheffé-type confidence intervals for output (or other function of parameters) g(β) derived from a groundwater flow model. The method is general in that parameter uncertainty can be specified by any statistical distribution having a log probability density function (log pdf) that can be expanded in a Taylor series. However, for this study parameter uncertainty is specified by a statistical multivariate beta distribution that incorporates hydrogeologic information in the form of the investigator's best estimates of parameters and a grouping of random variables representing possible parameter values so that each group is defined by maximum and minimum bounds and an ordering according to increasing value. The new method forms the confidence intervals from maximum and minimum limits of g(β) on a contour of a linear combination of (1) the quadratic form for the parameters used by Cooley and Vecchia (1987) and (2) the log pdf for the multivariate beta distribution. Three example problems are used to compare characteristics of the confidence intervals for hydraulic head obtained using different weights for the linear combination. Different weights generally produced similar confidence intervals, whereas the method of Cooley and Vecchia (1987) often produced much larger confidence intervals.

  16. Empirical likelihood-based confidence intervals for mean medical cost with censored data.

    PubMed

    Jeyarajah, Jenny; Qin, Gengsheng

    2017-11-10

    In this paper, we propose empirical likelihood methods based on influence function and jackknife techniques for constructing confidence intervals for mean medical cost with censored data. We conduct a simulation study to compare the coverage probabilities and interval lengths of our proposed confidence intervals with that of the existing normal approximation-based confidence intervals and bootstrap confidence intervals. The proposed methods have better finite-sample performances than existing methods. Finally, we illustrate our proposed methods with a relevant example. Copyright © 2017 John Wiley & Sons, Ltd.

  17. Common pitfalls in statistical analysis: “P” values, statistical significance and confidence intervals

    PubMed Central

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2015-01-01

    In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the ‘P’ value, explain the importance of ‘confidence intervals’ and clarify the importance of including both values in a paper. PMID:25878958

  18. Exact Scheffé-type confidence intervals for output from groundwater flow models: 2. Combined use of hydrogeologic information and calibration data

    USGS Publications Warehouse

    Cooley, Richard L.

    1993-01-01

    Calibration data (observed values corresponding to model-computed values of dependent variables) are incorporated into a general method of computing exact Scheffé-type confidence intervals analogous to the confidence intervals developed in part 1 (Cooley, this issue) for a function of parameters derived from a groundwater flow model. Parameter uncertainty is specified by a distribution of parameters conditioned on the calibration data. This distribution was obtained as a posterior distribution by applying Bayes' theorem to the hydrogeologically derived prior distribution of parameters from part 1 and a distribution of differences between the calibration data and corresponding model-computed dependent variables. Tests show that the new confidence intervals can be much smaller than the intervals of part 1 because the prior parameter variance-covariance structure is altered so that combinations of parameters that give poor model fit to the data are unlikely. The confidence intervals of part 1 and the new confidence intervals can be effectively employed in a sequential method of model construction whereby new information is used to reduce confidence interval widths at each stage.

  19. Using Confidence Intervals and Recurrence Intervals to Determine Precipitation Delivery Mechanisms Responsible for Mass Wasting Events.

    NASA Astrophysics Data System (ADS)

    Ulizio, T. P.; Bilbrey, C.; Stoyanoff, N.; Dixon, J. L.

    2015-12-01

    Mass wasting events are geologic hazards that impact human life and property across a variety of landscapes. These movements can be triggered by tectonic activity, anomalous precipitation events, or both, acting to decrease the factor-of-safety ratio on a hillslope to the point of failure. There exists an active hazard landscape in the West Boulder River drainage of Park Co., MT in which the mechanisms of slope failure are unknown. The region is known not to have seen significant tectonic activity within the last decade, leaving anomalous precipitation events as the likely trigger for slope failures in the landscape. Precipitation can be delivered to a landscape as rainfall or snow; the aim of this study was to determine the precipitation delivery mechanism most likely responsible for movements in the West Boulder drainage following the Jungle Wildfire of 2006. Data were compiled from four SNOTEL sites in the surrounding area, spanning 33 years, focusing on, but not limited to: maximum snow water equivalent (SWE) values in a water year, median SWE values on the date on which the maximum SWE was recorded in a water year, the total precipitation accumulated in a water year, etc. Means were computed and 99% confidence intervals were constructed around these means. Recurrence intervals and exceedance probabilities were computed for maximum SWE values and total precipitation accumulated in a water year to identify water years with anomalous precipitation. The water year 2010-2011 received an anomalously high amount of SWE, and snow melt in the spring of that water year likely triggered recent mass wasting movements. This interpretation is further supported by Google Earth imagery showing movements between 2009 and 2011. The return interval for the 2010-11 maximum SWE value was 34 years at the Placer Basin SNOTEL site, and 17.5 and 17 years at the Box Canyon and Monument Peak SNOTEL sites, respectively. Max SWE values lie outside the
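    Recurrence intervals and exceedance probabilities of the kind described above are commonly computed with the Weibull plotting position; a minimal sketch (the exact convention used in the study is an assumption on our part):

```python
def recurrence_table(annual_values):
    """Weibull plotting position: rank the n annual maxima from largest
    (rank 1) to smallest; the recurrence interval is (n + 1) / rank and
    the exceedance probability is rank / (n + 1)."""
    n = len(annual_values)
    ranked = sorted(annual_values, reverse=True)
    return [(v, (n + 1) / rank, rank / (n + 1))
            for rank, v in enumerate(ranked, start=1)]
```

    Under this convention the largest value in a 33-year record has a recurrence interval of (33 + 1)/1 = 34 years, consistent with the Placer Basin figure quoted above.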

  20. Introduction to Sample Size Choice for Confidence Intervals Based on "t" Statistics

    ERIC Educational Resources Information Center

    Liu, Xiaofeng Steven; Loudermilk, Brandon; Simpson, Thomas

    2014-01-01

    Sample size can be chosen to achieve a specified width in a confidence interval. The probability of obtaining a narrow width given that the confidence interval includes the population parameter is defined as the power of the confidence interval, a concept unfamiliar to many practitioners. This article shows how to utilize the Statistical Analysis…

  1. An Introduction to Confidence Intervals for Both Statistical Estimates and Effect Sizes.

    ERIC Educational Resources Information Center

    Capraro, Mary Margaret

    This paper summarizes methods of estimating confidence intervals, including classical intervals and intervals for effect sizes. The recent American Psychological Association (APA) Task Force on Statistical Inference report suggested that confidence intervals should always be reported, and the fifth edition of the APA "Publication Manual"…

  2. Methods for the accurate estimation of confidence intervals on protein folding ϕ-values

    PubMed Central

    Ruczinski, Ingo; Sosnick, Tobin R.; Plaxco, Kevin W.

    2006-01-01

    ϕ-Values provide an important benchmark for the comparison of experimental protein folding studies to computer simulations and theories of the folding process. Despite the growing importance of ϕ measurements, however, formulas to quantify the precision with which ϕ is measured have seen little significant discussion. Moreover, a commonly employed method for the determination of standard errors on ϕ estimates assumes that estimates of the changes in free energy of the transition and folded states are independent. Here we demonstrate that this assumption is usually incorrect and that this typically leads to the underestimation of ϕ precision. We derive an analytical expression for the precision of ϕ estimates (assuming linear chevron behavior) that explicitly takes this dependence into account. We also describe an alternative method that implicitly corrects for the effect. By simulating experimental chevron data, we show that both methods accurately estimate ϕ confidence intervals. We also explore the effects of the commonly employed techniques of calculating ϕ from kinetics estimated at non-zero denaturant concentrations and via the assumption of parallel chevron arms. We find that these approaches can produce significantly different estimates for ϕ (again, even for truly linear chevron behavior), indicating that they are not equivalent, interchangeable measures of transition state structure. Lastly, we describe a Web-based implementation of the above algorithms for general use by the protein folding community. PMID:17008714

  3. Standardized likelihood ratio test for comparing several log-normal means and confidence interval for the common mean.

    PubMed

    Krishnamoorthy, K; Oral, Evrim

    2017-12-01

    A standardized likelihood ratio test (SLRT) for testing the equality of means of several log-normal distributions is proposed. The properties of the SLRT and an available modified likelihood ratio test (MLRT) and a generalized variable (GV) test are evaluated by Monte Carlo simulation and compared. Evaluation studies indicate that the SLRT is accurate even for small samples, whereas the MLRT could be quite liberal for some parameter values, and the GV test is in general conservative and less powerful than the SLRT. Furthermore, a closed-form approximate confidence interval for the common mean of several log-normal distributions is developed using the method of variance estimate recovery, and compared with the generalized confidence interval with respect to coverage probabilities and precision. Simulation studies indicate that the proposed confidence interval is accurate and better than the generalized confidence interval in terms of coverage probabilities. The methods are illustrated using two examples.

  4. Publication Bias in Meta-Analysis: Confidence Intervals for Rosenthal's Fail-Safe Number.

    PubMed

    Fragkos, Konstantinos C; Tsagris, Michail; Frangos, Christos C

    2014-01-01

    The purpose of the present paper is to assess the efficacy of confidence intervals for Rosenthal's fail-safe number. Although Rosenthal's estimator is highly used by researchers, its statistical properties are largely unexplored. First of all, we developed statistical theory which allowed us to produce confidence intervals for Rosenthal's fail-safe number. This was produced by discerning whether the number of studies analysed in a meta-analysis is fixed or random. Each case produces different variance estimators. For a given number of studies and a given distribution, we provided five variance estimators. Confidence intervals are examined with a normal approximation and a nonparametric bootstrap. The accuracy of the different confidence interval estimates was then tested by methods of simulation under different distributional assumptions. The half normal distribution variance estimator has the best probability coverage. Finally, we provide a table of lower confidence intervals for Rosenthal's estimator.
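    Rosenthal's point estimator itself, around which the paper constructs confidence intervals, is simple to state. A sketch using the Stouffer combination, with z_alpha = 1.645 for one-tailed alpha = .05 (the paper's variance estimators and interval machinery are not reproduced here):

```python
def fail_safe_n(z_values, z_alpha=1.645):
    """Rosenthal's fail-safe number: the number of unpublished
    zero-effect studies needed to pull the combined Stouffer z of the
    k observed studies below the significance threshold z_alpha."""
    k = len(z_values)
    return (sum(z_values) ** 2) / (z_alpha ** 2) - k
```

    A fail-safe number much larger than k is conventionally read as evidence that publication bias alone is unlikely to explain the meta-analytic result.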

  6. Spacecraft utility and the development of confidence intervals for criticality of anomalies

    NASA Technical Reports Server (NTRS)

    Williams, R. E.

    1980-01-01

    The concept of spacecraft utility, a measure of its performance in orbit, is discussed and its formulation is described. Performance is defined in terms of the malfunctions that occur and the criticality to the mission of these malfunctions. Different approaches to establishing average or expected values of criticality are discussed and confidence intervals are developed for parameters used in the computation of utility.

  7. CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.

    USGS Publications Warehouse

    Cooley, Richard L.; Vecchia, Aldo V.

    1987-01-01

    A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.

  8. Using an R Shiny to Enhance the Learning Experience of Confidence Intervals

    ERIC Educational Resources Information Center

    Williams, Immanuel James; Williams, Kelley Kim

    2018-01-01

    Many students find understanding confidence intervals difficult, especially because of the amalgamation of concepts such as confidence levels, standard error, point estimates and sample sizes. An R Shiny application was created to assist the learning process of confidence intervals using graphics and data from the US National Basketball…

  9. Coefficient Alpha Bootstrap Confidence Interval under Nonnormality

    ERIC Educational Resources Information Center

    Padilla, Miguel A.; Divers, Jasmin; Newton, Matthew

    2012-01-01

    Three different bootstrap methods for estimating confidence intervals (CIs) for coefficient alpha were investigated. In addition, the bootstrap methods were compared with the most promising coefficient alpha CI estimation methods reported in the literature. The CI methods were assessed through a Monte Carlo simulation utilizing conditions…

  10. Coefficient Omega Bootstrap Confidence Intervals: Nonnormal Distributions

    ERIC Educational Resources Information Center

    Padilla, Miguel A.; Divers, Jasmin

    2013-01-01

    The performance of the normal theory bootstrap (NTB), the percentile bootstrap (PB), and the bias-corrected and accelerated (BCa) bootstrap confidence intervals (CIs) for coefficient omega was assessed through a Monte Carlo simulation under conditions not previously investigated. Of particular interests were nonnormal Likert-type and binary items.…

  11. More accurate, calibrated bootstrap confidence intervals for correlating two autocorrelated climate time series

    NASA Astrophysics Data System (ADS)

    Olafsdottir, Kristin B.; Mudelsee, Manfred

    2013-04-01

    Estimation of the Pearson correlation coefficient between two time series, to evaluate the influence of one time-dependent variable on another, is one of the most frequently used statistical methods in climate sciences. Various methods are used to estimate a confidence interval to support the correlation point estimate. Many of them make strong mathematical assumptions regarding distributional shape and serial correlation, which are rarely met. More robust statistical methods are needed to increase the accuracy of the confidence intervals. Bootstrap confidence intervals are estimated in the Fortran 90 program PearsonT (Mudelsee, 2003), whose main intention was to obtain an accurate confidence interval for the correlation coefficient between two time series by taking into account the serial dependence of the process that generated the data. However, Monte Carlo experiments show that the coverage accuracy for smaller data sizes can be improved. Here we adapt the PearsonT program into a new version, called PearsonT3, by calibrating the confidence interval to increase the coverage accuracy. Calibration is a bootstrap resampling technique that essentially performs a second bootstrap loop, resampling from the bootstrap resamples. Like the non-calibrated bootstrap confidence intervals, it offers robustness against the data distribution. A pairwise moving block bootstrap is used to preserve the serial correlation of both time series. The calibration is applied to standard-error-based bootstrap Student's t confidence intervals. The performance of the calibrated confidence intervals is examined with Monte Carlo simulations and compared with that of confidence intervals without calibration, that is, PearsonT. The coverage accuracy is evidently better for the calibrated confidence intervals, where the coverage error is acceptably small (i.e., within a few percentage points) already for data sizes as small as 20. One form of climate time series is output from numerical models
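    The inner, non-calibrated loop of such a procedure (a pairwise moving-block percentile bootstrap for the correlation) can be sketched as follows; PearsonT3's calibration layer and its Student's t construction are omitted, and all names are ours:

```python
import math
import random
import statistics

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x) *
                           sum((b - my) ** 2 for b in y))

def block_bootstrap_ci(x, y, block_len, n_boot=1000, alpha=0.05, seed=1):
    """Pairwise moving-block bootstrap percentile interval for the Pearson
    correlation of two serially dependent series: blocks are drawn at the
    same positions in both series, preserving serial and cross correlation."""
    rng = random.Random(seed)
    n = len(x)
    starts = range(n - block_len + 1)
    boots = []
    for _ in range(n_boot):
        xs, ys = [], []
        while len(xs) < n:
            s = rng.choice(starts)         # same block start for both series
            xs.extend(x[s:s + block_len])
            ys.extend(y[s:s + block_len])
        boots.append(pearson(xs[:n], ys[:n]))
    boots.sort()
    return (boots[int(alpha / 2 * n_boot)],
            boots[int((1 - alpha / 2) * n_boot) - 1])
```

    The block length trades off dependence preservation against resampling variety; choosing it adaptively is part of what dedicated tools like PearsonT handle.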

  12. Confidence Intervals for the Mean: To Bootstrap or Not to Bootstrap

    ERIC Educational Resources Information Center

    Calzada, Maria E.; Gardner, Holly

    2011-01-01

    The results of a simulation conducted by a research team involving undergraduate and high school students indicate that when data is symmetric, the Student's "t" confidence interval for a mean is superior to the studied non-parametric bootstrap confidence intervals. When data is skewed and for sample sizes n greater than or equal to 10,…
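    The two intervals being compared can be sketched in a few lines (names are illustrative; the t critical value for n - 1 degrees of freedom must be supplied):

```python
import math
import random
import statistics

def t_interval(sample, t_crit):
    """Student's t confidence interval for the mean; t_crit is the
    two-sided critical value for n - 1 degrees of freedom."""
    n = len(sample)
    m = statistics.fmean(sample)
    half = t_crit * statistics.stdev(sample) / math.sqrt(n)
    return m - half, m + half

def percentile_bootstrap_interval(sample, n_boot=2000, alpha=0.05, seed=7):
    """Non-parametric percentile bootstrap interval for the mean:
    resample with replacement, take the empirical alpha/2 quantiles."""
    rng = random.Random(seed)
    means = sorted(statistics.fmean(rng.choices(sample, k=len(sample)))
                   for _ in range(n_boot))
    return (means[int(alpha / 2 * n_boot)],
            means[int((1 - alpha / 2) * n_boot) - 1])
```

    With symmetric data the two intervals are usually similar; the comparison in the study concerns which attains nominal coverage more reliably.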

  13. Toward Using Confidence Intervals to Compare Correlations

    ERIC Educational Resources Information Center

    Zou, Guang Yong

    2007-01-01

    Confidence intervals are widely accepted as a preferred way to present study results. They encompass significance tests and provide an estimate of the magnitude of the effect. However, comparisons of correlations still rely heavily on significance testing. The persistence of this practice is caused primarily by the lack of simple yet accurate…

  14. Using Screencast Videos to Enhance Undergraduate Students' Statistical Reasoning about Confidence Intervals

    ERIC Educational Resources Information Center

    Strazzeri, Kenneth Charles

    2013-01-01

    The purposes of this study were to investigate (a) undergraduate students' reasoning about the concepts of confidence intervals (b) undergraduate students' interactions with "well-designed" screencast videos on sampling distributions and confidence intervals, and (c) how screencast videos improve undergraduate students' reasoning ability…

  15. The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution

    NASA Astrophysics Data System (ADS)

    Shin, H.; Heo, J.; Kim, T.; Jung, Y.

    2007-12-01

    The generalized logistic (GL) distribution has been widely used for frequency analysis. However, there has been little study of the confidence intervals that indicate the prediction accuracy of quantile estimates for the GL distribution. In this paper, the estimation of confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM), and the asymptotic variances of each quantile estimator are derived as functions of the sample size, return period, and parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived confidence intervals of quantiles. The relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase as the return period increases and decrease as the sample size increases. PWM performs better than the other methods in terms of RRMSE when the data are nearly symmetric, while ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was applied to fit the distribution of annual maximum rainfall data. The results show little difference between the quantiles estimated by ML and PWM, but distinct differences for MOM.

  16. Quantifying uncertainty on sediment loads using bootstrap confidence intervals

    NASA Astrophysics Data System (ADS)

    Slaets, Johanna I. F.; Piepho, Hans-Peter; Schmitter, Petra; Hilger, Thomas; Cadisch, Georg

    2017-01-01

    Load estimates are more informative than constituent concentrations alone, as they allow quantification of on- and off-site impacts of environmental processes concerning pollutants, nutrients and sediment, such as soil fertility loss, reservoir sedimentation and irrigation channel siltation. While statistical models used to predict constituent concentrations have developed considerably over the last few years, measures of uncertainty on constituent loads are rarely reported. Loads are the product of two predictions, constituent concentration and discharge, integrated over a time period, which makes it far from straightforward to produce a standard error or a confidence interval. In this paper, a linear mixed model is used to estimate sediment concentrations. A bootstrap method is then developed that accounts for the uncertainty in the concentration and discharge predictions, allows for temporal correlation in the constituent data, and can be used when data transformations are required. The method was tested for a small watershed in Northwest Vietnam for the period 2010-2011. The resulting confidence intervals were asymmetric, with the greatest uncertainty in the upper limit: a load of 6262 Mg year-1 had a 95 % confidence interval of (4331, 12 267) in 2010, and a load of 5543 Mg year-1 had an interval of (3593, 8975) in 2011. Additionally, the approach demonstrated that direct estimates from the data were biased downwards compared to bootstrap median estimates. These results imply that constituent loads predicted from regression-type water quality models could frequently underestimate sediment yields and their environmental impact.

  17. Confidence intervals for expected moments algorithm flood quantile estimates

    USGS Publications Warehouse

    Cohn, Timothy A.; Lane, William L.; Stedinger, Jery R.

    2001-01-01

    Historical and paleoflood information can substantially improve flood frequency estimates if appropriate statistical procedures are properly applied. However, the Federal guidelines for flood frequency analysis, set forth in Bulletin 17B, rely on an inefficient “weighting” procedure that fails to take advantage of historical and paleoflood information. This has led researchers to propose several more efficient alternatives including the Expected Moments Algorithm (EMA), which is attractive because it retains Bulletin 17B's statistical structure (method of moments with the Log Pearson Type 3 distribution) and thus can be easily integrated into flood analyses employing the rest of the Bulletin 17B approach. The practical utility of EMA, however, has been limited because no closed‐form method has been available for quantifying the uncertainty of EMA‐based flood quantile estimates. This paper addresses that concern by providing analytical expressions for the asymptotic variance of EMA flood‐quantile estimators and confidence intervals for flood quantile estimates. Monte Carlo simulations demonstrate the properties of such confidence intervals for sites where a 25‐ to 100‐year streamgage record is augmented by 50 to 150 years of historical information. The experiments show that the confidence intervals, though not exact, should be acceptable for most purposes.

  18. Confidence Interval Coverage for Cohen's Effect Size Statistic

    ERIC Educational Resources Information Center

    Algina, James; Keselman, H. J.; Penfield, Randall D.

    2006-01-01

    Kelley compared three methods for setting a confidence interval (CI) around Cohen's standardized mean difference statistic: the noncentral-"t"-based, percentile (PERC) bootstrap, and bias-corrected and accelerated (BCA) bootstrap methods under three conditions of nonnormality, eight cases of sample size, and six cases of population…

  19. Using Asymptotic Results to Obtain a Confidence Interval for the Population Median

    ERIC Educational Resources Information Center

    Jamshidian, M.; Khatoonabadi, M.

    2007-01-01

    Almost all introductory and intermediate level statistics textbooks include the topic of confidence interval for the population mean. Almost all these texts introduce the median as a robust measure of central tendency. Only a few of these books, however, cover inference on the population median and in particular confidence interval for the median.…

  20. Another look at confidence intervals: Proposal for a more relevant and transparent approach

    NASA Astrophysics Data System (ADS)

    Biller, Steven D.; Oser, Scott M.

    2015-02-01

    The behaviors of various confidence/credible interval constructions are explored, particularly in the region of low event numbers where methods diverge most. We highlight a number of challenges, such as the treatment of nuisance parameters, and common misconceptions associated with such constructions. An informal survey of the literature suggests that confidence intervals are not always defined in relevant ways and are too often misinterpreted and/or misapplied. This can lead to seemingly paradoxical behaviors and flawed comparisons regarding the relevance of experimental results. We therefore conclude that there is a need for a more pragmatic strategy which recognizes that, while it is critical to objectively convey the information content of the data, there is also a strong desire to derive bounds on model parameter values and a natural instinct to interpret things this way. Accordingly, we attempt to put aside philosophical biases in favor of a practical view to propose a more transparent and self-consistent approach that better addresses these issues.

  1. Confidence intervals and sample size calculations for the standardized mean difference effect size between two normal populations under heteroscedasticity.

    PubMed

    Shieh, G

    2013-12-01

    The use of effect sizes and associated confidence intervals in all empirical research has been strongly emphasized by journal publication guidelines. To help advance theory and practice in the social sciences, this article describes an improved procedure for constructing confidence intervals of the standardized mean difference effect size between two independent normal populations with unknown and possibly unequal variances. The presented approach has advantages over the existing formula in both theoretical justification and computational simplicity. In addition, simulation results show that the suggested one- and two-sided confidence intervals are more accurate in achieving the nominal coverage probability. The proposed estimation method provides a feasible alternative to the most commonly used measure of Cohen's d and the corresponding interval procedure when the assumption of homogeneous variances is not tenable. To further improve the potential applicability of the suggested methodology, the sample size procedures for precise interval estimation of the standardized mean difference are also delineated. The desired precision of a confidence interval is assessed with respect to the control of expected width and to the assurance probability of interval width within a designated value. Supplementary computer programs are developed to aid in the usefulness and implementation of the introduced techniques.
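
    Shieh's procedure is not reproduced here; as an illustrative stand-in, the following sketch computes a standardized mean difference with the unweighted-variance standardizer sometimes used under heteroscedasticity, and attaches a percentile-bootstrap interval. All data and settings are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

def smd(x, y):
    """Standardized mean difference with the unweighted-variance
    standardizer sqrt((s1^2 + s2^2)/2), one common choice when the
    two group variances may differ (not Shieh's exact method)."""
    s2 = 0.5 * (x.var(ddof=1) + y.var(ddof=1))
    return (x.mean() - y.mean()) / np.sqrt(s2)

x = rng.normal(1.0, 1.0, 40)   # group 1: smaller variance
y = rng.normal(0.0, 2.0, 60)   # group 2: larger variance
d_hat = smd(x, y)

# Percentile bootstrap: resample within each group independently.
boot = np.array([smd(rng.choice(x, x.size, replace=True),
                     rng.choice(y, y.size, replace=True))
                 for _ in range(2000)])
lo_d, hi_d = np.percentile(boot, [2.5, 97.5])
```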

  2. Confidence Intervals for True Scores Using the Skew-Normal Distribution

    ERIC Educational Resources Information Center

    Garcia-Perez, Miguel A.

    2010-01-01

    A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…

  3. Likelihood-based confidence intervals for estimating floods with given return periods

    NASA Astrophysics Data System (ADS)

    Martins, Eduardo Sávio P. R.; Clarke, Robin T.

    1993-06-01

    This paper discusses aspects of the calculation of likelihood-based confidence intervals for T-year floods, with particular reference to (1) the two-parameter gamma distribution; (2) the Gumbel distribution; (3) the two-parameter log-normal distribution, and other distributions related to the normal by Box-Cox transformations. Calculation of the confidence limits is straightforward using the Nelder-Mead algorithm with a constraint incorporated, although care is necessary to ensure convergence either of the Nelder-Mead algorithm, or of the Newton-Raphson calculation of maximum-likelihood estimates. Methods are illustrated using records from 18 gauging stations in the basin of the River Itajai-Acu, State of Santa Catarina, southern Brazil. A small and restricted simulation compared likelihood-based confidence limits with those given by use of the central limit theorem; for the same confidence probability, the confidence limits of the simulation were wider than those of the central limit theorem, which failed more frequently to contain the true quantile being estimated. The paper discusses possible applications of likelihood-based confidence intervals in other areas of hydrological analysis.
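
    The paper derives likelihood-based limits; a simpler parametric-bootstrap alternative for a Gumbel T-year quantile can be sketched as follows (synthetic annual maxima, not the Itajai-Acu records):

```python
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(1)
annual_max = gumbel_r.rvs(loc=100.0, scale=20.0, size=50, random_state=rng)

def t_year_quantile(sample, T):
    """Fit a Gumbel distribution by maximum likelihood and return the
    T-year flood, i.e. the (1 - 1/T) quantile of the fitted distribution."""
    loc, scale = gumbel_r.fit(sample)
    return gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)

q100 = t_year_quantile(annual_max, 100)

# Parametric bootstrap: simulate records from the fitted model, refit,
# and take percentile limits of the resulting T-year quantiles.
loc_hat, scale_hat = gumbel_r.fit(annual_max)
boot_q = np.array([
    t_year_quantile(gumbel_r.rvs(loc=loc_hat, scale=scale_hat,
                                 size=annual_max.size, random_state=rng), 100)
    for _ in range(300)])
lo_q, hi_q = np.percentile(boot_q, [2.5, 97.5])
```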

  4. Confidence intervals for the population mean tailored to small sample sizes, with applications to survey sampling.

    PubMed

    Rosenblum, Michael A; Laan, Mark J van der

    2009-01-07

    The validity of standard confidence intervals constructed in survey sampling is based on the central limit theorem. For small sample sizes, the central limit theorem may give a poor approximation, resulting in confidence intervals that are misleading. We discuss this issue and propose methods for constructing confidence intervals for the population mean tailored to small sample sizes. We present a simple approach for constructing confidence intervals for the population mean based on tail bounds for the sample mean that are correct for all sample sizes. Bernstein's inequality provides one such tail bound. The resulting confidence intervals have guaranteed coverage probability under much weaker assumptions than are required for standard methods. A drawback of this approach, as we show, is that these confidence intervals are often quite wide. In response to this, we present a method for constructing much narrower confidence intervals, which are better suited for practical applications, and that are still more robust than confidence intervals based on standard methods, when dealing with small sample sizes. We show how to extend our approaches to much more general estimation problems than estimating the sample mean. We describe how these methods can be used to obtain more reliable confidence intervals in survey sampling. As a concrete example, we construct confidence intervals using our methods for the number of violent deaths between March 2003 and July 2006 in Iraq, based on data from the study "Mortality after the 2003 invasion of Iraq: A cross sectional cluster sample survey," by Burnham et al. (2006).
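
    The Bernstein-inequality idea can be sketched for observations known to lie in a bounded range. Note that the sample variance is plugged in for the true variance below, so this is a heuristic illustration of the tail-bound approach rather than the authors' finite-sample construction:

```python
import numpy as np

def bernstein_ci(x, B, conf=0.95):
    """CI for a bounded mean from Bernstein's tail inequality, assuming
    |x_i - mu| <= B. Inverts
        P(|mean - mu| >= t) <= 2 exp(-n t^2 / (2 v + (2/3) B t))
    for t by solving the resulting quadratic (plug-in sample variance v)."""
    n = x.size
    L = np.log(2.0 / (1.0 - conf))
    v = x.var(ddof=1)
    a = (2.0 / 3.0) * B * L
    t = (a + np.sqrt(a * a + 8.0 * n * v * L)) / (2.0 * n)
    m = x.mean()
    return m - t, m + t

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 1.0, 30)        # values in [0, 1], so |x_i - mean| <= 1
lo_b, hi_b = bernstein_ci(x, B=1.0)
```

    As the abstract notes, such intervals are valid for all sample sizes but noticeably wider than the usual normal-theory interval.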

  5. Likelihood-Based Confidence Intervals in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Oort, Frans J.

    2011-01-01

    In exploratory or unrestricted factor analysis, all factor loadings are free to be estimated. In oblique solutions, the correlations between common factors are free to be estimated as well. The purpose of this article is to show how likelihood-based confidence intervals can be obtained for rotated factor loadings and factor correlations, by…

  6. Four Bootstrap Confidence Intervals for the Binomial-Error Model.

    ERIC Educational Resources Information Center

    Lin, Miao-Hsiang; Hsiung, Chao A.

    1992-01-01

    Four bootstrap methods are identified for constructing confidence intervals for the binomial-error model. The extent to which similar results are obtained and the theoretical foundation of each method and its relevance and ranges of modeling the true score uncertainty are discussed. (SLD)

  7. Profile-likelihood Confidence Intervals in Item Response Theory Models.

    PubMed

    Chalmers, R Philip; Pek, Jolynn; Liu, Yang

    2017-01-01

    Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
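
    For a one-parameter toy model (a binomial proportion rather than an IRT model), the profile/likelihood-ratio interval is the set of parameter values whose log-likelihood stays within half the chi-square cutoff of the maximum:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def binom_loglik(p, k, n):
    return k * np.log(p) + (n - k) * np.log(1.0 - p)

def likelihood_ratio_ci(k, n, conf=0.95):
    """Likelihood-based CI for a binomial proportion: solve for the p
    at which the log-likelihood drops chi2.ppf(conf, 1)/2 below its
    maximum, on each side of the MLE."""
    p_hat = k / n
    cut = 0.5 * chi2.ppf(conf, df=1)
    def g(p):
        return binom_loglik(p_hat, k, n) - binom_loglik(p, k, n) - cut
    lower = brentq(g, 1e-9, p_hat)
    upper = brentq(g, p_hat, 1.0 - 1e-9)
    return lower, upper

lo_p, hi_p = likelihood_ratio_ci(30, 100)
```

    Unlike Wald intervals, these limits need not be symmetric about the estimate, which is one of the distinctions the article demonstrates.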

  8. A Comparison of Methods for Estimating Confidence Intervals for Omega-Squared Effect Size

    ERIC Educational Resources Information Center

    Finch, W. Holmes; French, Brian F.

    2012-01-01

    Effect size use has been increasing in the past decade in many research areas. Confidence intervals associated with effect sizes are encouraged to be reported. Prior work has investigated the performance of confidence interval estimation with Cohen's d. This study extends this line of work to the analysis of variance case with more than two…

  9. Confidence intervals for a difference between lognormal means in cluster randomization trials.

    PubMed

    Poirier, Julia; Zou, G Y; Koval, John

    2017-04-01

    Cluster randomization trials, in which intact social units are randomized to different interventions, have become popular in the last 25 years. Outcomes from these trials in many cases are positively skewed, following approximately lognormal distributions. When inference is focused on the difference between treatment arm arithmetic means, existing confidence interval procedures either make restrictive assumptions or are complex to implement. We approach this problem by assuming log-transformed outcomes from each treatment arm follow a one-way random effects model. The treatment arm means are functions of multiple parameters for which separate confidence intervals are readily available, suggesting that the method of variance estimates recovery may be applied to obtain closed-form confidence intervals. A simulation study showed that this simple approach performs well in small sample sizes in terms of empirical coverage, relatively balanced tail errors, and interval widths as compared to existing methods. The methods are illustrated using data arising from a cluster randomization trial investigating a critical pathway for the treatment of community acquired pneumonia.

  10. Trends in P Value, Confidence Interval, and Power Analysis Reporting in Health Professions Education Research Reports: A Systematic Appraisal.

    PubMed

    Abbott, Eduardo F; Serrano, Valentina P; Rethlefsen, Melissa L; Pandian, T K; Naik, Nimesh D; West, Colin P; Pankratz, V Shane; Cook, David A

    2018-02-01

    To characterize reporting of P values, confidence intervals (CIs), and statistical power in health professions education research (HPER) through manual and computerized analysis of published research reports. The authors searched PubMed, Embase, and CINAHL in May 2016, for comparative research studies. For manual analysis of abstracts and main texts, they randomly sampled 250 HPER reports published in 1985, 1995, 2005, and 2015, and 100 biomedical research reports published in 1985 and 2015. Automated computerized analysis of abstracts included all HPER reports published 1970-2015. In the 2015 HPER sample, P values were reported in 69/100 abstracts and 94 main texts. CIs were reported in 6 abstracts and 22 main texts. Most P values (≥77%) were ≤.05. Across all years, 60/164 two-group HPER studies had ≥80% power to detect a between-group difference of 0.5 standard deviations. From 1985 to 2015, the proportion of HPER abstracts reporting a CI did not change significantly (odds ratio [OR] 2.87; 95% CI 1.04, 7.88) whereas that of main texts reporting a CI increased (OR 1.96; 95% CI 1.39, 2.78). Comparison with biomedical studies revealed similar reporting of P values, but more frequent use of CIs in biomedicine. Automated analysis of 56,440 HPER abstracts found 14,867 (26.3%) reporting a P value, 3,024 (5.4%) reporting a CI, and increased reporting of P values and CIs from 1970 to 2015. P values are ubiquitous in HPER, CIs are rarely reported, and most studies are underpowered. Most reported P values would be considered statistically significant.

  11. Robust Confidence Interval for a Ratio of Standard Deviations

    ERIC Educational Resources Information Center

    Bonett, Douglas G.

    2006-01-01

    Comparing variability of test scores across alternate forms, test conditions, or subpopulations is a fundamental problem in psychometrics. A confidence interval for a ratio of standard deviations is proposed that performs as well as the classic method with normal distributions and performs dramatically better with nonnormal distributions. A simple…

  12. Calculation of the confidence intervals for transformation parameters in the registration of medical images

    PubMed Central

    Bansal, Ravi; Staib, Lawrence H.; Laine, Andrew F.; Xu, Dongrong; Liu, Jun; Posecion, Lainie F.; Peterson, Bradley S.

    2010-01-01

    Images from different individuals typically cannot be registered precisely because anatomical features within the images differ across the people imaged and because the current methods for image registration have inherent technological limitations that interfere with perfect registration. Quantifying the inevitable error in image registration is therefore of crucial importance in assessing the effects that image misregistration may have on subsequent analyses in an imaging study. We have developed a mathematical framework for quantifying errors in registration by computing the confidence intervals of the estimated parameters (3 translations, 3 rotations, and 1 global scale) for the similarity transformation. The presence of noise in images and the variability in anatomy across individuals ensures that estimated registration parameters are always random variables. We assume a functional relation among intensities across voxels in the images, and we use the theory of nonlinear, least-squares estimation to show that the parameters are multivariate Gaussian distributed. We then use the covariance matrix of this distribution to compute the confidence intervals of the transformation parameters. These confidence intervals provide a quantitative assessment of the registration error across the images. Because transformation parameters are nonlinearly related to the coordinates of landmark points in the brain, we subsequently show that the coordinates of those landmark points are also multivariate Gaussian distributed. Using these distributions, we then compute the confidence intervals of the coordinates for landmark points in the image. Each of these confidence intervals in turn provides a quantitative assessment of the registration error at a particular landmark point. Because our method is computationally intensive, however, its current implementation is limited to assessing the error of the parameters in the similarity transformation across images. We assessed the…
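
    The covariance-based route described above can be sketched in one dimension, with a hypothetical "shift and scale" model standing in for the similarity transformation; Wald-type intervals come from the covariance matrix of the nonlinear least-squares fit:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, shift, scale):
    """Toy 1-D stand-in for a registration transform: a signal whose
    shift and scale parameters are estimated by least squares."""
    return np.sin(scale * (x - shift))

rng = np.random.default_rng(8)
x = np.linspace(0.0, 6.0, 80)
y = model(x, 0.5, 1.1) + rng.normal(0.0, 0.05, x.size)   # noisy "profile"

# Nonlinear least squares; pcov approximates the parameter covariance,
# from which Wald confidence intervals follow directly.
p_est, p_cov = curve_fit(model, x, y, p0=(0.4, 1.0))
se = np.sqrt(np.diag(p_cov))
wald_lo = p_est - 1.96 * se
wald_hi = p_est + 1.96 * se
```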

  13. Closed-form confidence intervals for functions of the normal mean and standard deviation.

    PubMed

    Donner, Allan; Zou, G Y

    2012-08-01

    Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.
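
    For example, for theta = mu + 1.96*sigma (an upper Bland-Altman limit of agreement), the closed-form interval recovers variance estimates from the separate t-based and chi-square-based limits for the mean and standard deviation. A sketch of this MOVER-style construction on synthetic data:

```python
import numpy as np
from scipy.stats import t, chi2

def upper_loa_ci(x, c=1.96, conf=0.95):
    """Closed-form CI for mu + c*sigma by the method of variance
    estimates recovery: combine the t-based CI for the mean with the
    chi-square-based CI for the standard deviation."""
    n, m, s = x.size, x.mean(), x.std(ddof=1)
    alpha = 1.0 - conf
    tq = t.ppf(1 - alpha / 2, n - 1)
    l1, u1 = m - tq * s / np.sqrt(n), m + tq * s / np.sqrt(n)   # CI for mu
    l2 = s * np.sqrt((n - 1) / chi2.ppf(1 - alpha / 2, n - 1))  # CI for sigma
    u2 = s * np.sqrt((n - 1) / chi2.ppf(alpha / 2, n - 1))
    theta = m + c * s
    L = theta - np.sqrt((m - l1) ** 2 + (c * s - c * l2) ** 2)
    U = theta + np.sqrt((u1 - m) ** 2 + (c * u2 - c * s) ** 2)
    return L, U

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, 50)
L_loa, U_loa = upper_loa_ci(x)
```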

  14. Monte Carlo simulation of parameter confidence intervals for non-linear regression analysis of biological data using Microsoft Excel.

    PubMed

    Lambert, Ronald J W; Mytilinaios, Ioannis; Maitland, Luke; Brown, Angus M

    2012-08-01

    This study describes a method to obtain parameter confidence intervals from the fitting of non-linear functions to experimental data, using the SOLVER and Analysis ToolPaK Add-In of the Microsoft Excel spreadsheet. Previously we have shown that Excel can fit complex multiple functions to biological data, obtaining values equivalent to those returned by more specialized statistical or mathematical software. However, a disadvantage of using the Excel method was the inability to return confidence intervals for the computed parameters or the correlations between them. Using a simple Monte-Carlo procedure within the Excel spreadsheet (without recourse to programming), SOLVER can provide parameter estimates (up to 200 at a time) for multiple 'virtual' data sets, from which the required confidence intervals and correlation coefficients can be obtained. The general utility of the method is exemplified by applying it to the analysis of the growth of Listeria monocytogenes, the growth inhibition of Pseudomonas aeruginosa by chlorhexidine and the further analysis of the electrophysiological data from the compound action potential of the rodent optic nerve. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
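
    The same Monte Carlo idea carries over directly from Excel/SOLVER to any least-squares fitter. A sketch with a logistic curve standing in for the paper's growth models (synthetic data, scipy in place of SOLVER):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Simple sigmoidal growth curve (illustrative stand-in)."""
    return K / (1.0 + np.exp(-r * (t - t0)))

rng = np.random.default_rng(4)
t_obs = np.linspace(0.0, 10.0, 25)
y_obs = logistic(t_obs, 8.0, 1.2, 5.0) + rng.normal(0.0, 0.2, t_obs.size)

p_hat, _ = curve_fit(logistic, t_obs, y_obs, p0=(7.0, 1.0, 4.0))
resid_sd = np.std(y_obs - logistic(t_obs, *p_hat), ddof=3)

# Monte Carlo: refit to 'virtual' datasets built from the fitted curve
# plus fresh noise at the residual scale, then read off percentiles.
sims = []
for _ in range(300):
    y_sim = logistic(t_obs, *p_hat) + rng.normal(0.0, resid_sd, t_obs.size)
    p_sim, _ = curve_fit(logistic, t_obs, y_sim, p0=p_hat)
    sims.append(p_sim)
sims = np.array(sims)
ci_lo, ci_hi = np.percentile(sims, [2.5, 97.5], axis=0)
```

    The spread of the simulated estimates also yields the correlations between parameters mentioned in the abstract, via np.corrcoef(sims.T).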

  15. Robust Coefficients Alpha and Omega and Confidence Intervals With Outlying Observations and Missing Data

    PubMed Central

    Zhang, Zhiyong; Yuan, Ke-Hai

    2015-01-01

    Cronbach’s coefficient alpha is a widely used reliability measure in social, behavioral, and education sciences. It is reported in nearly every study that involves measuring a construct through multiple items. With non-tau-equivalent items, McDonald’s omega has been used as a popular alternative to alpha in the literature. Traditional estimation methods for alpha and omega often implicitly assume that data are complete and normally distributed. This study proposes robust procedures to estimate both alpha and omega as well as corresponding standard errors and confidence intervals from samples that may contain potential outlying observations and missing values. The influence of outlying observations and missing data on the estimates of alpha and omega is investigated through two simulation studies. Results show that the newly developed robust method yields substantially improved alpha and omega estimates as well as better coverage rates of confidence intervals than the conventional nonrobust method. An R package coefficientalpha is developed and demonstrated to obtain robust estimates of alpha and omega. PMID:29795870

  16. Evaluating Independent Proportions for Statistical Difference, Equivalence, Indeterminacy, and Trivial Difference Using Inferential Confidence Intervals

    ERIC Educational Resources Information Center

    Tryon, Warren W.; Lewis, Charles

    2009-01-01

    Tryon presented a graphic inferential confidence interval (ICI) approach to analyzing two independent and dependent means for statistical difference, equivalence, replication, indeterminacy, and trivial difference. Tryon and Lewis corrected the reduction factor used to adjust descriptive confidence intervals (DCIs) to create ICIs and introduced…

  17. The prognostic value of the QT interval and QT interval dispersion in all-cause and cardiac mortality and morbidity in a population of Danish citizens.

    PubMed

    Elming, H; Holm, E; Jun, L; Torp-Pedersen, C; Køber, L; Kircshoff, M; Malik, M; Camm, J

    1998-09-01

    To evaluate the prognostic value of the QT interval and QT interval dispersion in total and in cardiovascular mortality, as well as in cardiac morbidity, in a general population. The QT interval was measured in all leads from a standard 12-lead ECG in a random sample of 1658 women and 1797 men aged 30-60 years. QT interval dispersion was calculated from the maximal difference between QT intervals in any two leads. All cause mortality over 13 years, and cardiovascular mortality as well as cardiac morbidity over 11 years, were the main outcome parameters. Subjects with a prolonged QT interval (430 ms or more) or prolonged QT interval dispersion (80 ms or more) were at higher risk of cardiovascular death and cardiac morbidity than subjects whose QT interval was less than 360 ms, or whose QT interval dispersion was less than 30 ms. Cardiovascular death relative risk ratios, adjusted for age, gender, myocardial infarct, angina pectoris, diabetes mellitus, arterial hypertension, smoking habits, serum cholesterol level, and heart rate were 2.9 for the QT interval (95% confidence interval 1.1-7.8) and 4.4 for QT interval dispersion (95% confidence interval 1.0-19.1). Fatal and non-fatal cardiac morbidity relative risk ratios were similar, at 2.7 (95% confidence interval 1.4-5.5) for the QT interval and 2.2 (95% confidence interval 1.1-4.0) for QT interval dispersion. Prolongation of the QT interval and QT interval dispersion independently affected the prognosis of cardiovascular mortality and cardiac fatal and non-fatal morbidity in a general population over 11 years.

  18. Confidence Intervals for Weighted Composite Scores under the Compound Binomial Error Model

    ERIC Educational Resources Information Center

    Kim, Kyung Yong; Lee, Won-Chan

    2018-01-01

    Reporting confidence intervals with test scores helps test users make important decisions about examinees by providing information about the precision of test scores. Although a variety of estimation procedures based on the binomial error model are available for computing intervals for test scores, these procedures assume that items are randomly…

  19. Calculation of Confidence Intervals for the Maximum Magnitude of Earthquakes in Different Seismotectonic Zones of Iran

    NASA Astrophysics Data System (ADS)

    Salamat, Mona; Zare, Mehdi; Holschneider, Matthias; Zöller, Gert

    2017-03-01

    The problem of estimating the maximum possible earthquake magnitude m_max has attracted growing attention in recent years. Due to sparse data, the role of uncertainties becomes crucial. In this work, we determine the uncertainties related to the maximum magnitude in terms of confidence intervals. Using an earthquake catalog of Iran, m_max is estimated for different predefined levels of confidence in six seismotectonic zones. Assuming the doubly truncated Gutenberg-Richter distribution as a statistical model for earthquake magnitudes, confidence intervals for the maximum possible magnitude of earthquakes are calculated in each zone. While the lower limit of the confidence interval is the magnitude of the maximum observed event, the upper limit is calculated from the catalog and the statistical model. For this aim, we use the original catalog, to which no declustering method has been applied, as well as a declustered version of the catalog. Based on the study by Holschneider et al. (Bull Seismol Soc Am 101(4):1649-1659, 2011), the confidence interval for m_max is frequently unbounded, especially if high levels of confidence are required. In this case, no information is gained from the data. Therefore, we elaborate on the settings for which finite confidence intervals are obtained. In this work, Iran is divided into six seismotectonic zones, namely Alborz, Azerbaijan, Zagros, Makran, Kopet Dagh, and Central Iran. Although the confidence intervals calculated for the Central Iran and Zagros seismotectonic zones are acceptable for meaningful levels of confidence, the results for Kopet Dagh, Alborz, Azerbaijan, and Makran are less promising. The results indicate that estimating m_max from an earthquake catalog alone, at reasonable levels of confidence, is almost impossible.

  20. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Wagler, Amy E.

    2014-01-01

    Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

  1. Bootstrap Confidence Intervals for Ordinary Least Squares Factor Loadings and Correlations in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.; Luo, Shanhong

    2010-01-01

    This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis. Coverage performances of "SE"-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile…

  2. A framework for interval-valued information system

    NASA Astrophysics Data System (ADS)

    Yin, Yunfei; Gong, Guanghong; Han, Liang

    2012-09-01

    Interval-valued information systems transform a conventional dataset into interval-valued form. To conduct interval-valued data mining, we carry out two investigations: (1) constructing the interval-valued information system, and (2) conducting the interval-valued knowledge discovery. In constructing the interval-valued information system, we first discover the paired attributes in the database, then store them in neighbouring locations in a common database and regard them as a single new field. In conducting the interval-valued knowledge discovery, we utilise related prior knowledge as the control objectives and design an approximate closed-loop control mining system. On the implemented experimental platform (prototype), we conduct the corresponding experiments and compare the proposed algorithms with several typical algorithms, such as the Apriori algorithm, the FP-growth algorithm and the CLOSE+ algorithm. The experimental results show that the interval-valued information system method is more effective than the conventional algorithms in discovering interval-valued patterns.

  3. ScoreRel CI: An Excel Program for Computing Confidence Intervals for Commonly Used Score Reliability Coefficients

    ERIC Educational Resources Information Center

    Barnette, J. Jackson

    2005-01-01

    An Excel program developed to assist researchers in the determination and presentation of confidence intervals around commonly used score reliability coefficients is described. The software includes programs to determine confidence intervals for Cronbach's alpha, Pearson r-based coefficients such as those used in test-retest and alternate forms…

  4. Robust Coefficients Alpha and Omega and Confidence Intervals With Outlying Observations and Missing Data: Methods and Software.

    PubMed

    Zhang, Zhiyong; Yuan, Ke-Hai

    2016-06-01

    Cronbach's coefficient alpha is a widely used reliability measure in social, behavioral, and education sciences. It is reported in nearly every study that involves measuring a construct through multiple items. With non-tau-equivalent items, McDonald's omega has been used as a popular alternative to alpha in the literature. Traditional estimation methods for alpha and omega often implicitly assume that data are complete and normally distributed. This study proposes robust procedures to estimate both alpha and omega as well as corresponding standard errors and confidence intervals from samples that may contain potential outlying observations and missing values. The influence of outlying observations and missing data on the estimates of alpha and omega is investigated through two simulation studies. Results show that the newly developed robust method yields substantially improved alpha and omega estimates as well as better coverage rates of confidence intervals than the conventional nonrobust method. An R package coefficientalpha is developed and demonstrated to obtain robust estimates of alpha and omega.
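
    A plain, nonrobust version of this computation can be sketched as Cronbach's alpha with a percentile-bootstrap interval on synthetic scores; the paper's robust, outlier- and missingness-aware estimators live in the R package coefficientalpha:

```python
import numpy as np

def cronbach_alpha(items):
    """Plain (nonrobust) Cronbach's alpha for an n_persons x n_items
    score matrix: k/(k-1) * (1 - sum of item variances / total variance)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

rng = np.random.default_rng(7)
n, k = 200, 6
ability = rng.normal(0.0, 1.0, (n, 1))
scores = ability + rng.normal(0.0, 1.0, (n, k))   # one common factor plus noise

alpha_hat = cronbach_alpha(scores)
idx = np.arange(n)
boot_a = np.array([cronbach_alpha(scores[rng.choice(idx, n, replace=True)])
                   for _ in range(1000)])
lo_a, hi_a = np.percentile(boot_a, [2.5, 97.5])
```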

  5. Calculating Confidence Intervals for Regional Economic Impacts of Recreation by Bootstrapping Visitor Expenditures

    Treesearch

    Donald B.K. English

    2000-01-01

    In this paper I use bootstrap procedures to develop confidence intervals for estimates of total industrial output generated per thousand tourist visits. Mean expenditures from replicated visitor expenditure data included weights to correct for response bias. Impacts were estimated with IMPLAN. Ninety percent interval endpoints were 6 to 16 percent above or below the...
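
    The percentile-bootstrap logic can be sketched with hypothetical expenditure data and a fixed output multiplier standing in for the IMPLAN modeling step:

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical per-visit expenditures (dollars); skewed, as spending
# data typically are. The multiplier is an assumed placeholder.
spend = rng.lognormal(mean=4.0, sigma=0.8, size=200)
multiplier = 1.8

def impact_per_thousand(sample):
    """Total output generated per thousand visits: mean expenditure
    times the regional output multiplier times 1000 visits."""
    return sample.mean() * multiplier * 1000.0

point = impact_per_thousand(spend)
boot_imp = np.array([impact_per_thousand(rng.choice(spend, spend.size, replace=True))
                     for _ in range(2000)])
lo_i, hi_i = np.percentile(boot_imp, [5, 95])   # 90% interval, as in the paper
```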

  6. Asymptotic confidence intervals for the Pearson correlation via skewness and kurtosis.

    PubMed

    Bishara, Anthony J; Li, Jiexiang; Nash, Thomas

    2018-02-01

    When bivariate normality is violated, the default confidence interval of the Pearson correlation can be inaccurate. Two new methods were developed based on the asymptotic sampling distribution of Fisher's z' under the general case where bivariate normality need not be assumed. In Monte Carlo simulations, the most successful of these methods relied on the (Vale & Maurelli, 1983, Psychometrika, 48, 465) family to approximate a distribution via the marginal skewness and kurtosis of the sample data. In Simulation 1, this method provided more accurate confidence intervals of the correlation in non-normal data, at least as compared to no adjustment of the Fisher z' interval, or to adjustment via the sample joint moments. In Simulation 2, this approximate distribution method performed favourably relative to common non-parametric bootstrap methods, but its performance was mixed relative to an observed imposed bootstrap and two other robust methods (PM1 and HC4). No method was completely satisfactory. An advantage of the approximate distribution method, though, is that it can be implemented even without access to raw data if sample skewness and kurtosis are reported, making the method particularly useful for meta-analysis. Supporting information includes R code. © 2017 The British Psychological Society.
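
    For reference, the default Fisher z' interval that the adjusted methods above aim to improve on under non-normality:

```python
import numpy as np
from scipy.stats import norm

def fisher_z_ci(r, n, conf=0.95):
    """Default Fisher z' CI for a Pearson correlation: transform to
    z' = atanh(r), build a normal interval with SE 1/sqrt(n-3), and
    back-transform with tanh."""
    z = np.arctanh(r)
    se = 1.0 / np.sqrt(n - 3)
    q = norm.ppf(0.5 + conf / 2)
    return np.tanh(z - q * se), np.tanh(z + q * se)

lo_r, hi_r = fisher_z_ci(0.5, 100)   # roughly (0.34, 0.63)
```

    Note that only r and n are needed, which is why the adjusted version discussed in the abstract remains usable for meta-analysis when raw data are unavailable.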

  7. Prediction of the distillation temperatures of crude oils using ¹H NMR and support vector regression with estimated confidence intervals.

    PubMed

    Filgueiras, Paulo R; Terra, Luciana A; Castro, Eustáquio V R; Oliveira, Lize M S L; Dias, Júlio C M; Poppi, Ronei J

    2015-09-01

    This paper aims to estimate the temperature equivalent to 10% (T10%), 50% (T50%) and 90% (T90%) of distilled volume in crude oils using (1)H NMR and support vector regression (SVR). Confidence intervals for the predicted values were calculated using a boosting-type ensemble method in a procedure called ensemble support vector regression (eSVR). The estimated confidence intervals obtained by eSVR were compared with previously accepted calculations from partial least squares (PLS) models and a boosting-type ensemble applied in the PLS method (ePLS). By using the proposed boosting strategy, it was possible to identify outliers in the T10% property dataset. The eSVR procedure improved the accuracy of the distillation temperature predictions in relation to standard PLS, ePLS and SVR. For T10%, a root mean square error of prediction (RMSEP) of 11.6°C was obtained in comparison with 15.6°C for PLS, 15.1°C for ePLS and 28.4°C for SVR. The RMSEPs for T50% were 24.2°C, 23.4°C, 22.8°C and 14.4°C for PLS, ePLS, SVR and eSVR, respectively. For T90%, the values of RMSEP were 39.0°C, 39.9°C and 39.9°C for PLS, ePLS, SVR and eSVR, respectively. The confidence intervals calculated by the proposed boosting methodology presented acceptable values for the three properties analyzed; however, they were lower than those calculated by the standard methodology for PLS. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Accuracy in parameter estimation for targeted effects in structural equation modeling: sample size planning for narrow confidence intervals.

    PubMed

    Lai, Keke; Kelley, Ken

    2011-06-01

    In addition to evaluating a structural equation model (SEM) as a whole, often the model parameters are of interest and confidence intervals for those parameters are formed. Given a model with a good overall fit, it is entirely possible for the targeted effects of interest to have very wide confidence intervals, thus giving little information about the magnitude of the population targeted effects. With the goal of obtaining sufficiently narrow confidence intervals for the model parameters of interest, sample size planning methods for SEM are developed from the accuracy in parameter estimation approach. One method plans for the sample size so that the expected confidence interval width is sufficiently narrow. An extended procedure ensures that the obtained confidence interval will be no wider than desired, with some specified degree of assurance. A Monte Carlo simulation study was conducted that verified the effectiveness of the procedures in realistic situations. The methods developed have been implemented in the MBESS package in R so that they can be easily applied by researchers. © 2011 American Psychological Association

  9. Teach a Confidence Interval for the Median in the First Statistics Course

    ERIC Educational Resources Information Center

    Howington, Eric B.

    2017-01-01

    Few introductory statistics courses consider statistical inference for the median. This article argues in favour of adding a confidence interval for the median to the first statistics course. Several methods suitable for introductory statistics students are identified and briefly reviewed.
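    One of the methods suitable for a first course rests on order statistics and binomial tail sums. As a hedged illustration (this is not code from the article, and `median_ci` is an invented name), a distribution-free interval for the median can be computed as:

```python
import math

def median_ci(data, conf=0.95):
    """Distribution-free CI for the median via order statistics.

    If B ~ Binomial(n, 1/2) counts observations below the median, the
    interval [x_(r), x_(n+1-r)] (1-based indices) has exact coverage
    1 - 2*P(B <= r-1); we pick the largest r keeping each tail within
    (1 - conf)/2.
    """
    x = sorted(data)
    n = len(x)
    alpha = 1 - conf

    def cdf(i):  # P(B <= i) for B ~ Binomial(n, 1/2)
        return sum(math.comb(n, k) for k in range(i + 1)) / 2 ** n

    r = 0
    while cdf(r) <= alpha / 2:
        r += 1
    if r == 0:
        raise ValueError("sample too small for the requested confidence")
    coverage = 1 - 2 * cdf(r - 1)
    return x[r - 1], x[n - r], coverage  # 0-based slots for x_(r), x_(n+1-r)
```

For the sample 1..11, this yields the interval [2, 10] with exact coverage about 98.8%; the achieved level exceeds the nominal 95% because the order-statistic indices are discrete.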

  10. Tablet potency of Tianeptine in coated tablets by near infrared spectroscopy: model optimisation, calibration transfer and confidence intervals.

    PubMed

    Boiret, Mathieu; Meunier, Loïc; Ginot, Yves-Michel

    2011-02-20

    A near infrared (NIR) method was developed for determination of tablet potency of active pharmaceutical ingredient (API) in a complex coated tablet matrix. The calibration set contained samples from laboratory and production scale batches. The reference values were obtained by high performance liquid chromatography (HPLC) and partial least squares (PLS) regression was used to establish a model. The model was challenged by calculating tablet potency of two external test sets. Root mean square errors of prediction were respectively equal to 2.0% and 2.7%. To use this model with a second spectrometer from the production field, a calibration transfer method called piecewise direct standardisation (PDS) was used. After the transfer, the root mean square error of prediction of the first test set was 2.4% compared to 4.0% without transferring the spectra. A statistical technique using bootstrap of PLS residuals was used to estimate confidence intervals of tablet potency calculations. This method requires an optimised PLS model, selection of the bootstrap number and determination of the risk. In the case of a chemical analysis, the tablet potency value will be included within the confidence interval calculated by the bootstrap method. An easy to use graphical interface was developed to easily determine if the predictions, surrounded by minimum and maximum values, are within the specifications defined by the regulatory organisation. Copyright © 2010 Elsevier B.V. All rights reserved.

  11. Reliability of confidence intervals calculated by bootstrap and classical methods using the FIA 1-ha plot design

    Treesearch

    H. T. Schreuder; M. S. Williams

    2000-01-01

    In simulation sampling from forest populations using sample sizes of 20, 40, and 60 plots respectively, confidence intervals based on the bootstrap (accelerated, percentile, and t-distribution based) were calculated and compared with those based on the classical t confidence intervals for mapped populations and subdomains within those populations. A 68.1 ha mapped...
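    For readers unfamiliar with the percentile interval compared in this study, a minimal sketch follows (illustrative only; the FIA plot design and the accelerated variant are not reproduced, and the function name is invented):

```python
import random
from statistics import mean

def percentile_bootstrap_ci(data, stat=mean, n_boot=2000, conf=0.95, seed=42):
    """Percentile bootstrap CI: resample with replacement, recompute the
    statistic on each resample, and take the empirical alpha/2 and
    1 - alpha/2 quantiles of the replicates as the interval endpoints."""
    rng = random.Random(seed)
    reps = sorted(stat(rng.choices(data, k=len(data))) for _ in range(n_boot))
    alpha = 1 - conf
    return reps[int(alpha / 2 * n_boot)], reps[int((1 - alpha / 2) * n_boot) - 1]
```

Unlike the classical t interval, this makes no normality assumption, which is why such comparisons on skewed forest populations are informative.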

  12. Confidence intervals for the between-study variance in random-effects meta-analysis using generalised heterogeneity statistics: should we use unequal tails?

    PubMed

    Jackson, Dan; Bowden, Jack

    2016-09-07

    Confidence intervals for the between-study variance are useful in random-effects meta-analyses because they quantify the uncertainty in the corresponding point estimates. Methods for calculating these confidence intervals have been developed that are based on inverting hypothesis tests using generalised heterogeneity statistics. Whilst, under the random effects model, these new methods furnish confidence intervals with the correct coverage, the resulting intervals are usually very wide, making them uninformative. We discuss a simple strategy for obtaining 95 % confidence intervals for the between-study variance with a markedly reduced width, whilst retaining the nominal coverage probability. Specifically, we consider the possibility of using methods based on generalised heterogeneity statistics with unequal tail probabilities, where the tail probability used to compute the upper bound is greater than 2.5 %. This idea is assessed using four real examples and a variety of simulation studies. Supporting analytical results are also obtained. Our results provide evidence that using unequal tail probabilities can result in shorter 95 % confidence intervals for the between-study variance. We also show some further results for a real example that illustrates how shorter confidence intervals for the between-study variance can be useful when performing sensitivity analyses for the average effect, which is usually the parameter of primary interest. We conclude that using unequal tail probabilities when computing 95 % confidence intervals for the between-study variance, when using methods based on generalised heterogeneity statistics, can result in shorter confidence intervals. We suggest that those who find the case for using unequal tail probabilities convincing should use the '1-4 % split', where greater tail probability is allocated to the upper confidence bound. The 'width-optimal' interval that we present deserves further investigation.
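    The effect of splitting the tails unequally can be seen already in the textbook chi-square interval for a single normal variance. The sketch below is an illustration of the idea only: the paper's generalised heterogeneity statistics are not reproduced, the Wilson-Hilferty approximation stands in for exact chi-square quantiles, and all names are invented.

```python
from statistics import NormalDist

def chi2_quantile(p, df):
    """Wilson-Hilferty approximation to the chi-square p-quantile."""
    z = NormalDist().inv_cdf(p)
    return df * (1 - 2 / (9 * df) + z * (2 / (9 * df)) ** 0.5) ** 3

def variance_ci(s2, df, tail_low, tail_up):
    """Two-sided CI for sigma^2 based on (df * s2) / sigma^2 ~ chi2(df).

    tail_low and tail_up are the error probabilities assigned to the
    lower and upper confidence bounds; they need only sum to alpha,
    not be equal.
    """
    lower = df * s2 / chi2_quantile(1 - tail_low, df)
    upper = df * s2 / chi2_quantile(tail_up, df)
    return lower, upper

equal = variance_ci(1.0, 10, 0.025, 0.025)  # conventional 2.5 % / 2.5 % split
split = variance_ci(1.0, 10, 0.01, 0.04)    # '1-4 % split': more mass on the upper bound
```

Because the sampling distribution is right-skewed, moving tail probability to the upper bound shortens the interval while the total error rate stays at 5 %, which is the same mechanism the paper exploits.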

  13. On the appropriateness of applying chi-square distribution based confidence intervals to spectral estimates of helicopter flyover data

    NASA Technical Reports Server (NTRS)

    Rutledge, Charles K.

    1988-01-01

    The validity of applying chi-square based confidence intervals to far-field acoustic flyover spectral estimates was investigated. Simulated data, using a Kendall series and experimental acoustic data from the NASA/McDonnell Douglas 500E acoustics test, were analyzed. Statistical significance tests to determine the equality of distributions of the simulated and experimental data relative to theoretical chi-square distributions were performed. Bias and uncertainty errors associated with the spectral estimates were easily identified from the data sets. A model relating the uncertainty and bias errors to the estimates resulted, which aided in determining the appropriateness of the chi-square distribution based confidence intervals. Such confidence intervals were appropriate for nontonally associated frequencies of the experimental data but were inappropriate for tonally associated estimate distributions. The inappropriateness at the tonally associated frequencies was indicated by the presence of bias error and nonconformity of the distributions to the theoretical chi-square distribution. A technique for determining appropriate confidence intervals at the tonally associated frequencies was suggested.

  14. CI2 for creating and comparing confidence-intervals for time-series bivariate plots.

    PubMed

    Mullineaux, David R

    2017-02-01

    Currently no method exists for calculating and comparing the confidence-intervals (CI) for the time-series of a bivariate plot. The study's aim was to develop 'CI2' as a method to calculate the CI on time-series bivariate plots, and to identify if the CI between two bivariate time-series overlap. The test data were the knee and ankle angles from 10 healthy participants running on a motorised standard-treadmill and non-motorised curved-treadmill. For a recommended 10+ trials, CI2 involved calculating 95% confidence-ellipses at each time-point, then taking as the CI the points on the ellipses that were perpendicular to the direction vector between the means of two adjacent time-points. Consecutive pairs of CI created convex quadrilaterals, and any overlap of these quadrilaterals at the same time or ±1 frame as a time-lag calculated using cross-correlations, indicated where the two time-series differed. CI2 showed no group differences between left and right legs on both treadmills, but, comparing the same leg across treadmills, all participants showed less knee extension on the curved-treadmill before heel-strike. To improve and standardise the use of CI2 it is recommended to remove outlier time-series, use 95% confidence-ellipses, and scale the ellipse by the fixed Chi-square value as opposed to the sample-size dependent F-value. For practical use, and to aid in standardisation or future development of CI2, Matlab code is provided. CI2 provides an effective method to quantify the CI of bivariate plots, and to explore the differences in CI between two bivariate time-series. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Procedures for estimating confidence intervals for selected method performance parameters.

    PubMed

    McClure, F D; Lee, J K

    2001-01-01

    Procedures for estimating confidence intervals (CIs) for the repeatability variance (σr²), reproducibility variance (σR² = σL² + σr²), laboratory component (σL²), and their corresponding standard deviations σr, σR, and σL, respectively, are presented. In addition, CIs for the ratio of the repeatability component to the reproducibility variance (σr²/σR²) and the ratio of the laboratory component to the reproducibility variance (σL²/σR²) are also presented.

  16. Accurate estimation of normal incidence absorption coefficients with confidence intervals using a scanning laser Doppler vibrometer

    NASA Astrophysics Data System (ADS)

    Vuye, Cedric; Vanlanduit, Steve; Guillaume, Patrick

    2009-06-01

    When using optical measurements of the sound fields inside a glass tube, near the material under test, to estimate the reflection and absorption coefficients, not only these acoustical parameters but also confidence intervals can be determined. The sound fields are visualized using a scanning laser Doppler vibrometer (SLDV). In this paper the influence of different test signals on the quality of the results, obtained with this technique, is examined. The amount of data gathered during one measurement scan makes a thorough statistical analysis possible, leading to knowledge of confidence intervals. The use of a multi-sine, constructed on the resonance frequencies of the test tube, proves to be a very good alternative to the traditional periodic chirp. This signal offers the ability to obtain data for multiple frequencies in one measurement, without the danger of a low signal-to-noise ratio. The variability analysis in this paper clearly shows the advantages of the proposed multi-sine compared to the periodic chirp. The measurement procedure and the statistical analysis are validated by measuring the reflection ratio at a closed end and comparing the results with the theoretical value. Results of the testing of two building materials (an acoustic ceiling tile and linoleum) are presented and compared to supplier data.

  17. Practical Advice on Calculating Confidence Intervals for Radioprotection Effects and Reducing Animal Numbers in Radiation Countermeasure Experiments

    PubMed Central

    Landes, Reid D.; Lensing, Shelly Y.; Kodell, Ralph L.; Hauer-Jensen, Martin

    2014-01-01

    The dose of a substance that causes death in P% of a population is called an LDP, where LD stands for lethal dose. In radiation research, a common LDP of interest is the radiation dose that kills 50% of the population by a specified time, i.e., lethal dose 50 or LD50. When comparing LD50 between two populations, relative potency is the parameter of interest. In radiation research, this is commonly known as the dose reduction factor (DRF). Unfortunately, statistical inference on dose reduction factor is seldom reported. We illustrate how to calculate confidence intervals for dose reduction factor, which may then be used for statistical inference. Further, most dose reduction factor experiments use hundreds, rather than tens of animals. Through better dosing strategies and the use of a recently available sample size formula, we also show how animal numbers may be reduced while maintaining high statistical power. The illustrations center on realistic examples comparing LD50 values between a radiation countermeasure group and a radiation-only control. We also provide easy-to-use spreadsheets for sample size calculations and confidence interval calculations, as well as SAS® and R code for the latter. PMID:24164553

  18. WASP (Write a Scientific Paper) using Excel - 6: Standard error and confidence interval.

    PubMed

    Grech, Victor

    2018-03-01

    The calculation of descriptive statistics includes the calculation of standard error and confidence interval, an inevitable component of data analysis in inferential statistics. This paper provides pointers as to how to do this in Microsoft Excel™. Copyright © 2018 Elsevier B.V. All rights reserved.
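    The same quantities from the paper's Excel walk-through can be sketched in a few lines of Python. This is a hedged illustration, not the paper's worksheet: `mean_se_ci` is an invented name, and the normal critical value mirrors what Excel's CONFIDENCE.NORM function uses (a t quantile would be more appropriate for small samples).

```python
from statistics import NormalDist, mean, stdev

def mean_se_ci(data, conf=0.95):
    """Mean, standard error of the mean, and a normal-approximation CI,
    i.e. roughly AVERAGE, STDEV.S/SQRT(n), and mean +/- CONFIDENCE.NORM."""
    n = len(data)
    m = mean(data)
    se = stdev(data) / n ** 0.5
    z = NormalDist().inv_cdf(0.5 + conf / 2)  # about 1.96 for 95 %
    return m, se, (m - z * se, m + z * se)
```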

  19. Characterizing the Mathematics Anxiety Literature Using Confidence Intervals as a Literature Review Mechanism

    ERIC Educational Resources Information Center

    Zientek, Linda Reichwein; Yetkiner, Z. Ebrar; Thompson, Bruce

    2010-01-01

    The authors report the contextualization of effect sizes within mathematics anxiety research, and more specifically within research using the Mathematics Anxiety Rating Scale (MARS) and the MARS for Adolescents (MARS-A). The effect sizes from 45 studies were characterized by graphing confidence intervals (CIs) across studies involving (a) adults…

  20. Performing Contrast Analysis in Factorial Designs: From NHST to Confidence Intervals and Beyond

    ERIC Educational Resources Information Center

    Wiens, Stefan; Nilsson, Mats E.

    2017-01-01

    Because of the continuing debates about statistics, many researchers may feel confused about how to analyze and interpret data. Current guidelines in psychology advocate the use of effect sizes and confidence intervals (CIs). However, researchers may be unsure about how to extract effect sizes from factorial designs. Contrast analysis is helpful…

  1. SIMREL: Software for Coefficient Alpha and Its Confidence Intervals with Monte Carlo Studies

    ERIC Educational Resources Information Center

    Yurdugul, Halil

    2009-01-01

    This article describes SIMREL, a software program designed for the simulation of alpha coefficients and the estimation of its confidence intervals. SIMREL runs on two alternatives. In the first one, if SIMREL is run for a single data file, it performs descriptive statistics, principal components analysis, and variance analysis of the item scores…

  2. A comparison of confidence interval methods for the intraclass correlation coefficient in community-based cluster randomization trials with a binary outcome.

    PubMed

    Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan

    2016-04-01

    Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. 
However, confidence intervals constructed using the new approach combined with Smith

  3. Confidence intervals for the first crossing point of two hazard functions.

    PubMed

    Cheng, Ming-Yen; Qiu, Peihua; Tan, Xianming; Tu, Dongsheng

    2009-12-01

    The phenomenon of crossing hazard rates is common in clinical trials with time to event endpoints. Many methods have been proposed for testing equality of hazard functions against a crossing hazards alternative. However, there has been relatively few approaches available in the literature for point or interval estimation of the crossing time point. The problem of constructing confidence intervals for the first crossing time point of two hazard functions is considered in this paper. After reviewing a recent procedure based on Cox proportional hazard modeling with Box-Cox transformation of the time to event, a nonparametric procedure using the kernel smoothing estimate of the hazard ratio is proposed. The proposed procedure and the one based on Cox proportional hazard modeling with Box-Cox transformation of the time to event are both evaluated by Monte-Carlo simulations and applied to two clinical trial datasets.

  4. How to Avoid Errors in Error Propagation: Prediction Intervals and Confidence Intervals in Forest Biomass

    NASA Astrophysics Data System (ADS)

    Lilly, P.; Yanai, R. D.; Buckley, H. L.; Case, B. S.; Woollons, R. C.; Holdaway, R. J.; Johnson, J.

    2016-12-01

    Calculations of forest biomass and elemental content require many measurements and models, each contributing uncertainty to the final estimates. While sampling error is commonly reported, based on replicate plots, error due to uncertainty in the regression used to estimate biomass from tree diameter is usually not quantified. Some published estimates of uncertainty due to the regression models have used the uncertainty in the prediction of individuals, ignoring uncertainty in the mean, while others have propagated uncertainty in the mean while ignoring individual variation. Using the simple case of the calcium concentration of sugar maple leaves, we compare the variation among individuals (the standard deviation) to the uncertainty in the mean (the standard error) and illustrate the declining importance in the prediction of individual concentrations as the number of individuals increases. For allometric models, the analogous statistics are the prediction interval (or the residual variation in the model fit) and the confidence interval (describing the uncertainty in the best fit model). The effect of propagating these two sources of error is illustrated using the mass of sugar maple foliage. The uncertainty in individual tree predictions was large for plots with few trees; for plots with 30 trees or more, the uncertainty in individuals was less important than the uncertainty in the mean. Authors of previously published analyses have reanalyzed their data to show the magnitude of these two sources of uncertainty at scales ranging from experimental plots to entire countries. The most correct analysis will take both sources of uncertainty into account, but for practical purposes, country-level reports of uncertainty in carbon stocks, as required by the IPCC, can ignore the uncertainty in individuals. Ignoring the uncertainty in the mean will lead to exaggerated estimates of confidence in estimates of forest biomass and carbon and nutrient contents.

  5. Interval-Valued Rank in Finite Ordered Sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joslyn, Cliff; Pogel, Alex; Purvine, Emilie

    We consider the concept of rank as a measure of the vertical levels and positions of elements of partially ordered sets (posets). We are motivated by the need for algorithmic measures on large, real-world hierarchically-structured data objects like the semantic hierarchies of ontological databases. These rarely satisfy the strong property of gradedness, which is required for traditional rank functions to exist. Representing such semantic hierarchies as finite, bounded posets, we recognize the duality of ordered structures to motivate rank functions which respect verticality both from the bottom and from the top. Our rank functions are thus interval-valued, and always exist, even for non-graded posets, providing order homomorphisms to an interval order on the interval-valued ranks. The concept of rank width arises naturally, allowing us to identify the poset region with point-valued width as its longest graded portion (which we call the "spindle"). A standard interval rank function is naturally motivated both in terms of its extremality and on pragmatic grounds. Its properties are examined, including the relationship to traditional grading and rank functions, and methods to assess comparisons of standard interval-valued ranks.

  6. Using Stochastic Approximation Techniques to Efficiently Construct Confidence Intervals for Heritability.

    PubMed

    Schweiger, Regev; Fisher, Eyal; Rahmani, Elior; Shenhav, Liat; Rosset, Saharon; Halperin, Eran

    2018-06-22

    Estimation of heritability is an important task in genetics. The use of linear mixed models (LMMs) to determine narrow-sense single-nucleotide polymorphism (SNP)-heritability and related quantities has received much recent attention, due to its ability to account for variants with small effect sizes. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. The common way to report the uncertainty in REML estimation uses standard errors (SEs), which rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals (CIs). In addition, for larger data sets (e.g., tens of thousands of individuals), the construction of SEs itself may require considerable time, as it requires expensive matrix inversions and multiplications. Here, we present FIESTA (Fast confidence IntErvals using STochastic Approximation), a method for constructing accurate CIs. FIESTA is based on parametric bootstrap sampling, and, therefore, avoids unjustified assumptions on the distribution of the heritability estimator. FIESTA uses stochastic approximation techniques, which accelerate the construction of CIs by several orders of magnitude, compared with previous approaches and with the analytical approximation used by SEs. FIESTA builds accurate CIs rapidly, for example, requiring only several seconds for data sets of tens of thousands of individuals, making FIESTA a very fast solution to the problem of building accurate CIs for heritability for all data set sizes.
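    FIESTA itself is not reproduced here, but the parametric bootstrap idea it builds on can be sketched for a much simpler model: fit the model to the data, simulate new datasets from the fit, re-estimate the parameter on each, and take empirical quantiles. All names below are illustrative, and a normal variance stands in for heritability.

```python
import random
from statistics import variance

def parametric_bootstrap_ci(data, n_boot=2000, conf=0.95, seed=7):
    """Parametric bootstrap CI for a normal variance: simulate from the
    fitted N(mu, s) model, re-estimate the variance, and take empirical
    quantiles of the replicates. No asymptotic-normality assumption on
    the estimator's SE is needed."""
    rng = random.Random(seed)
    n = len(data)
    mu = sum(data) / n
    s = variance(data) ** 0.5
    reps = sorted(variance([rng.gauss(mu, s) for _ in range(n)])
                  for _ in range(n_boot))
    alpha = 1 - conf
    return reps[int(alpha / 2 * n_boot)], reps[int((1 - alpha / 2) * n_boot) - 1]
```

FIESTA's contribution is making this kind of resampling affordable for genome-scale LMMs via stochastic approximation; the brute-force loop above is only the conceptual starting point.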

  7. Statistical variability and confidence intervals for planar dose QA pass rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, Daniel W.; Nelms, Benjamin E.; Attwood, Kristopher

    Purpose: The most common metric for comparing measured to calculated dose, such as for pretreatment quality assurance of intensity-modulated photon fields, is a pass rate (%) generated using percent difference (%Diff), distance-to-agreement (DTA), or some combination of the two (e.g., gamma evaluation). For many dosimeters, the grid of analyzed points corresponds to an array with a low areal density of point detectors. In these cases, the pass rates for any given comparison criteria are not absolute but exhibit statistical variability that is a function, in part, on the detector sampling geometry. In this work, the authors analyze the statistics of various methods commonly used to calculate pass rates and propose methods for establishing confidence intervals for pass rates obtained with low-density arrays. Methods: Dose planes were acquired for 25 prostate and 79 head and neck intensity-modulated fields via diode array and electronic portal imaging device (EPID), and matching calculated dose planes were created via a commercial treatment planning system. Pass rates for each dose plane pair (both centered to the beam central axis) were calculated with several common comparison methods: %Diff/DTA composite analysis and gamma evaluation, using absolute dose comparison with both local and global normalization. Specialized software was designed to selectively sample the measured EPID response (very high data density) down to discrete points to simulate low-density measurements. The software was used to realign the simulated detector grid at many simulated positions with respect to the beam central axis, thereby altering the low-density sampled grid. Simulations were repeated with 100 positional iterations using a 1 detector/cm{sup 2} uniform grid, a 2 detector/cm{sup 2} uniform grid, and similar random detector grids. For each simulation, %/DTA composite pass rates were calculated with various %Diff/DTA criteria and for both local and global %Diff normalization

  8. Optimal and Most Exact Confidence Intervals for Person Parameters in Item Response Theory Models

    ERIC Educational Resources Information Center

    Doebler, Anna; Doebler, Philipp; Holling, Heinz

    2013-01-01

    The common way to calculate confidence intervals for item response theory models is to assume that the standardized maximum likelihood estimator for the person parameter [theta] is normally distributed. However, this approximation is often inadequate for short and medium test lengths. As a result, the coverage probabilities fall below the given…

  9. On Some Nonclassical Algebraic Properties of Interval-Valued Fuzzy Soft Sets

    PubMed Central

    2014-01-01

    Interval-valued fuzzy soft sets realize a hybrid soft computing model in a general framework. Both Molodtsov's soft sets and interval-valued fuzzy sets can be seen as special cases of interval-valued fuzzy soft sets. In this study, we first compare four different types of interval-valued fuzzy soft subsets and reveal the relations among them. Then we concentrate on investigating some nonclassical algebraic properties of interval-valued fuzzy soft sets under the soft product operations. We show that some fundamental algebraic properties including the commutative and associative laws do not hold in the conventional sense, but hold in weaker forms characterized in terms of the relation =L. We obtain a number of algebraic inequalities of interval-valued fuzzy soft sets characterized by interval-valued fuzzy soft inclusions. We also establish the weak idempotent law and the weak absorptive law of interval-valued fuzzy soft sets using interval-valued fuzzy soft J-equal relations. It is revealed that the soft product operations ∧ and ∨ of interval-valued fuzzy soft sets do not always have similar algebraic properties. Moreover, we find that only distributive inequalities described by the interval-valued fuzzy soft L-inclusions hold for interval-valued fuzzy soft sets. PMID:25143964

  10. On some nonclassical algebraic properties of interval-valued fuzzy soft sets.

    PubMed

    Liu, Xiaoyan; Feng, Feng; Zhang, Hui

    2014-01-01

    Interval-valued fuzzy soft sets realize a hybrid soft computing model in a general framework. Both Molodtsov's soft sets and interval-valued fuzzy sets can be seen as special cases of interval-valued fuzzy soft sets. In this study, we first compare four different types of interval-valued fuzzy soft subsets and reveal the relations among them. Then we concentrate on investigating some nonclassical algebraic properties of interval-valued fuzzy soft sets under the soft product operations. We show that some fundamental algebraic properties including the commutative and associative laws do not hold in the conventional sense, but hold in weaker forms characterized in terms of the relation = L . We obtain a number of algebraic inequalities of interval-valued fuzzy soft sets characterized by interval-valued fuzzy soft inclusions. We also establish the weak idempotent law and the weak absorptive law of interval-valued fuzzy soft sets using interval-valued fuzzy soft J-equal relations. It is revealed that the soft product operations ∧ and ∨ of interval-valued fuzzy soft sets do not always have similar algebraic properties. Moreover, we find that only distributive inequalities described by the interval-valued fuzzy soft L-inclusions hold for interval-valued fuzzy soft sets.

  11. Neural network based load and price forecasting and confidence interval estimation in deregulated power markets

    NASA Astrophysics Data System (ADS)

    Zhang, Li

    With the deregulation of the electric power market in New England, an independent system operator (ISO) has been separated from the New England Power Pool (NEPOOL). The ISO provides a regional spot market, with bids on various electricity-related products and services submitted by utilities and independent power producers. A utility can bid on the spot market and buy or sell electricity via bilateral transactions. Good estimation of market clearing prices (MCP) will help utilities and independent power producers determine bidding and transaction strategies with low risks, and this is crucial for utilities to compete in the deregulated environment. MCP prediction, however, is difficult since bidding strategies used by participants are complicated and MCP is a non-stationary process. The main objective of this research is to provide efficient short-term load and MCP forecasting and corresponding confidence interval estimation methodologies. In this research, the complexity of load and MCP with other factors is investigated, and neural networks are used to model the complex relationship between input and output. With improved learning algorithm and on-line update features for load forecasting, a neural network based load forecaster was developed, and has been in daily industry use since summer 1998 with good performance. MCP is volatile because of the complexity of market behaviors. In practice, neural network based MCP predictors usually have a cascaded structure, as several key input factors need to be estimated first. In this research, the uncertainties involved in a cascaded neural network structure for MCP prediction are analyzed, and prediction distribution under the Bayesian framework is developed. A fast algorithm to evaluate the confidence intervals by using the memoryless Quasi-Newton method is also developed. The traditional back-propagation algorithm for neural network learning needs to be improved since MCP is a non-stationary process. 
The extended Kalman

  12. Confidence intervals for population allele frequencies: the general case of sampling from a finite diploid population of any size.

    PubMed

    Fung, Tak; Keenan, Kevin

    2014-01-01

    The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥ 95%), a sample size of > 30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥ 98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥ 95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥ 95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.
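    The paper's finite-population diploid construction is not reproduced here. As a baseline for the same task, the Wilson score interval under the usual infinite-population binomial assumption can be sketched as follows (treating the 2N gene copies in a diploid sample of N individuals as binomial draws; `wilson_ci` is an invented name and no finite-population correction is applied):

```python
from statistics import NormalDist

def wilson_ci(count, n_copies, conf=0.95):
    """Wilson score interval for an allele frequency estimated from
    `count` occurrences among `n_copies` sampled gene copies (2N copies
    for N diploid individuals), assuming simple binomial sampling."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    p = count / n_copies
    denom = 1 + z ** 2 / n_copies
    centre = (p + z ** 2 / (2 * n_copies)) / denom
    half = (z / denom) * (p * (1 - p) / n_copies
                          + z ** 2 / (4 * n_copies ** 2)) ** 0.5
    return centre - half, centre + half

# e.g. 18 copies of an allele among 30 diploid individuals (60 gene copies)
lo, hi = wilson_ci(18, 60)
```

The width of this baseline interval at n = 60 copies already illustrates the paper's point that samples of more than 30 individuals are often needed to pin the frequency down to within 0.05.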

  13. Comprehension of confidence intervals - development and piloting of patient information materials for people with multiple sclerosis: qualitative study and pilot randomised controlled trial.

    PubMed

    Rahn, Anne C; Backhus, Imke; Fuest, Franz; Riemann-Lorenz, Karin; Köpke, Sascha; van de Roemer, Adrianus; Mühlhauser, Ingrid; Heesen, Christoph

    2016-09-20

    Presentation of confidence intervals alongside information about treatment effects can support informed treatment choices in people with multiple sclerosis. We aimed to develop and pilot-test different written patient information materials explaining confidence intervals in people with relapsing-remitting multiple sclerosis. Further, a questionnaire on comprehension of confidence intervals was developed and piloted. We developed different patient information versions aiming to explain confidence intervals. We used an illustrative example to test three different approaches: (1) short version, (2) "average weight" version and (3) "worm prophylaxis" version. Interviews were conducted using think-aloud and teach-back approaches to test feasibility and analysed using qualitative content analysis. To assess comprehension of confidence intervals, a six-item multiple choice questionnaire was developed and tested in a pilot randomised controlled trial using the online survey software UNIPARK. Here, the average weight version (intervention group) was tested against a standard patient information version on confidence intervals (control group). People with multiple sclerosis were invited to take part using existing mailing-lists of people with multiple sclerosis in Germany and were randomised using the UNIPARK algorithm. Participants were blinded towards group allocation. Primary endpoint was comprehension of confidence intervals, assessed with the six-item multiple choice questionnaire with six points representing perfect knowledge. Feasibility of the patient information versions was tested with 16 people with multiple sclerosis. For the pilot randomised controlled trial, 64 people with multiple sclerosis were randomised (intervention group: n = 36; control group: n = 28). More questions were answered correctly in the intervention group compared to the control group (mean 4.8 vs 3.8, mean difference 1.1 (95 % CI 0.42-1.69), p = 0.002). The questionnaire

  14. A Comparison of Various Stress Rupture Life Models for Orbiter Composite Pressure Vessels and Confidence Intervals

    NASA Technical Reports Server (NTRS)

    Grimes-Ledesma, Lorie; Murthy, Pappu L. N.; Phoenix, S. Leigh; Glaser, Ronald

    2007-01-01

    In conjunction with a recent NASA Engineering and Safety Center (NESC) investigation of flight worthiness of Kevlar Overwrapped Composite Pressure Vessels (COPVs) on board the Orbiter, two stress rupture life prediction models were proposed independently by Phoenix and by Glaser. In this paper, the use of these models to determine the system reliability of 24 COPVs currently in service on board the Orbiter is discussed. The models are briefly described, compared to each other, and model parameters and parameter uncertainties are also reviewed to understand confidence in reliability estimation as well as the sensitivities of these parameters in influencing overall predicted reliability levels. Differences and similarities in the various models will be compared via stress rupture reliability curves (stress ratio vs. lifetime plots). Also outlined will be the differences in the underlying model premises, and predictive outcomes. Sources of error and sensitivities in the models will be examined and discussed based on sensitivity analysis and confidence interval determination. Confidence interval results and their implications will be discussed for the models by Phoenix and Glaser.

  15. Weighted regression analysis and interval estimators

    Treesearch

    Donald W. Seegrist

    1974-01-01

A method is presented for deriving the weighted least squares estimators for the parameters of a multiple regression model. Confidence intervals for expected values, and prediction intervals for the means of future samples, are given.

  16. Methods for calculating confidence and credible intervals for the residual between-study variance in random effects meta-regression models

    PubMed Central

    2014-01-01

    Background Meta-regression is becoming increasingly used to model study level covariate effects. However this type of statistical analysis presents many difficulties and challenges. Here two methods for calculating confidence intervals for the magnitude of the residual between-study variance in random effects meta-regression models are developed. A further suggestion for calculating credible intervals using informative prior distributions for the residual between-study variance is presented. Methods Two recently proposed and, under the assumptions of the random effects model, exact methods for constructing confidence intervals for the between-study variance in random effects meta-analyses are extended to the meta-regression setting. The use of Generalised Cochran heterogeneity statistics is extended to the meta-regression setting and a Newton-Raphson procedure is developed to implement the Q profile method for meta-analysis and meta-regression. WinBUGS is used to implement informative priors for the residual between-study variance in the context of Bayesian meta-regressions. Results Results are obtained for two contrasting examples, where the first example involves a binary covariate and the second involves a continuous covariate. Intervals for the residual between-study variance are wide for both examples. Conclusions Statistical methods, and R computer software, are available to compute exact confidence intervals for the residual between-study variance under the random effects model for meta-regression. These frequentist methods are almost as easily implemented as their established counterparts for meta-analysis. Bayesian meta-regressions are also easily performed by analysts who are comfortable using WinBUGS. Estimates of the residual between-study variance in random effects meta-regressions should be routinely reported and accompanied by some measure of their uncertainty. Confidence and/or credible intervals are well-suited to this purpose. PMID:25196829
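The Q-profile construction mentioned above can be sketched for a plain random-effects meta-analysis (the meta-regression extension and the Newton-Raphson refinement are not shown). The effect estimates, within-study variances, and hardcoded chi-square quantiles for df = 4 below are illustrative only:

```python
def q_stat(y, v, tau2):
    """Generalised Q statistic at a candidate between-study variance."""
    w = [1.0 / (vi + tau2) for vi in v]
    mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    return sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, y))

def q_profile_ci(y, v, chi2_lo, chi2_hi, upper=100.0, tol=1e-8):
    """Q-profile CI for tau^2: Q is decreasing in tau2, so the lower bound
    solves Q = chi2_hi and the upper bound solves Q = chi2_lo (bisection)."""
    def solve(target):
        if q_stat(y, v, 0.0) <= target:   # Q never reaches target: bound is 0
            return 0.0
        lo, hi = 0.0, upper
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if q_stat(y, v, mid) > target:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    return solve(chi2_hi), solve(chi2_lo)

# five hypothetical studies: effect estimates y, within-study variances v
y = [0.10, 0.30, 0.35, 0.65, 0.45]
v = [0.030, 0.020, 0.015, 0.050, 0.010]
# chi-square 0.025 and 0.975 quantiles for df = k - 1 = 4, from tables
tau2_lo, tau2_hi = q_profile_ci(y, v, chi2_lo=0.484, chi2_hi=11.143)
```

As the record notes for its examples, such intervals are often wide; here the lower bound is truncated at zero while the upper bound is well away from it.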

  17. Min and Max Exponential Extreme Interval Values and Statistics

    ERIC Educational Resources Information Center

    Jance, Marsha; Thomopoulos, Nick

    2009-01-01

The extreme interval values and statistics (expected value, median, mode, standard deviation, and coefficient of variation) for the smallest (min) and largest (max) values of exponentially distributed variables with parameter λ = 1 are examined for different observation (sample) sizes. An extreme interval value g_a is defined as a…
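Two of the statistics this record examines follow from standard order-statistic results for Exponential(λ = 1) samples: the minimum of n draws is itself Exponential with rate n, so E[min] = 1/n, and E[max] equals the nth harmonic number. A quick check:

```python
def exp_min_max_means(n):
    """E[min] and E[max] of n iid Exponential(rate 1) variables.

    The min of n such variables is Exponential(rate n), so E[min] = 1/n;
    the max has E[max] = H_n = 1 + 1/2 + ... + 1/n (harmonic number).
    """
    e_min = 1.0 / n
    e_max = sum(1.0 / k for k in range(1, n + 1))
    return e_min, e_max

e_min, e_max = exp_min_max_means(10)
```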

  18. A confidence interval analysis of sampling effort, sequencing depth, and taxonomic resolution of fungal community ecology in the era of high-throughput sequencing.

    PubMed

    Oono, Ryoko

    2017-01-01

    High-throughput sequencing technology has helped microbial community ecologists explore ecological and evolutionary patterns at unprecedented scales. The benefits of a large sample size still typically outweigh that of greater sequencing depths per sample for accurate estimations of ecological inferences. However, excluding or not sequencing rare taxa may mislead the answers to the questions 'how and why are communities different?' This study evaluates the confidence intervals of ecological inferences from high-throughput sequencing data of foliar fungal endophytes as case studies through a range of sampling efforts, sequencing depths, and taxonomic resolutions to understand how technical and analytical practices may affect our interpretations. Increasing sampling size reliably decreased confidence intervals across multiple community comparisons. However, the effects of sequencing depths on confidence intervals depended on how rare taxa influenced the dissimilarity estimates among communities and did not significantly decrease confidence intervals for all community comparisons. A comparison of simulated communities under random drift suggests that sequencing depths are important in estimating dissimilarities between microbial communities under neutral selective processes. Confidence interval analyses reveal important biases as well as biological trends in microbial community studies that otherwise may be ignored when communities are only compared for statistically significant differences.
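One way to see how sequencing depth feeds into interval width is to rarefy two samples repeatedly to a common depth and take a percentile interval of the resulting dissimilarities. The sketch below uses Bray-Curtis dissimilarity and hypothetical taxon counts; it illustrates the general idea only, not the study's actual analysis:

```python
import random
from collections import Counter

def rarefy(counts, depth, rng):
    """Draw `depth` reads without replacement from a vector of taxon counts."""
    pool = [taxon for taxon, c in enumerate(counts) for _ in range(c)]
    tally = Counter(rng.sample(pool, depth))
    return [tally[t] for t in range(len(counts))]

def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two abundance vectors."""
    return sum(abs(a - b) for a, b in zip(x, y)) / (sum(x) + sum(y))

def rarefaction_interval(c1, c2, depth, reps=500, level=0.95, seed=11):
    """Percentile interval for the dissimilarity over repeated rarefactions."""
    rng = random.Random(seed)
    vals = sorted(bray_curtis(rarefy(c1, depth, rng), rarefy(c2, depth, rng))
                  for _ in range(reps))
    a = (1 - level) / 2
    return vals[int(a * reps)], vals[int((1 - a) * reps) - 1]

# hypothetical read counts per fungal taxon in two samples
s1 = [500, 300, 150, 40, 10]
s2 = [450, 320, 100, 80, 50]
lo, hi = rarefaction_interval(s1, s2, depth=400)
```

Shallower depths widen the interval, mostly through noisier counts for the rare taxa, which mirrors the record's point that depth matters when rare taxa drive the dissimilarity estimates.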

  19. Performing Contrast Analysis in Factorial Designs: From NHST to Confidence Intervals and Beyond

    PubMed Central

    Wiens, Stefan; Nilsson, Mats E.

    2016-01-01

    Because of the continuing debates about statistics, many researchers may feel confused about how to analyze and interpret data. Current guidelines in psychology advocate the use of effect sizes and confidence intervals (CIs). However, researchers may be unsure about how to extract effect sizes from factorial designs. Contrast analysis is helpful because it can be used to test specific questions of central interest in studies with factorial designs. It weighs several means and combines them into one or two sets that can be tested with t tests. The effect size produced by a contrast analysis is simply the difference between means. The CI of the effect size informs directly about direction, hypothesis exclusion, and the relevance of the effects of interest. However, any interpretation in terms of precision or likelihood requires the use of likelihood intervals or credible intervals (Bayesian). These various intervals and even a Bayesian t test can be obtained easily with free software. This tutorial reviews these methods to guide researchers in answering the following questions: When I analyze mean differences in factorial designs, where can I find the effects of central interest, and what can I learn about their effect sizes? PMID:29805179

  20. Evaluating the utility of hexapod species for calculating a confidence interval about a succession based postmortem interval estimate.

    PubMed

    Perez, Anne E; Haskell, Neal H; Wells, Jeffrey D

    2014-08-01

    Carrion insect succession patterns have long been used to estimate the postmortem interval (PMI) during a death investigation. However, no published carrion succession study included sufficient replication to calculate a confidence interval about a PMI estimate based on occurrence data. We exposed 53 pig carcasses (16±2.5 kg), near the likely minimum needed for such statistical analysis, at a site in north-central Indiana, USA, over three consecutive summer seasons. Insects and Collembola were sampled daily from each carcass for a total of 14 days, by this time each was skeletonized. The criteria for judging a life stage of a given species to be potentially useful for succession-based PMI estimation were (1) nonreoccurrence (observed during a single period of presence on a corpse), and (2) found in a sufficiently large proportion of carcasses to support a PMI confidence interval. For this data set that proportion threshold is 45/53. Of the 266 species collected and identified, none was nonreoccuring in that each showed at least a gap of one day on a single carcass. If the definition of nonreoccurrence is relaxed to include such a single one-day gap the larval forms of Necrophilaamericana, Fanniascalaris, Cochliomyia macellaria, Phormiaregina, and Luciliaillustris satisfied these two criteria. Adults of Creophilus maxillosus, Necrobiaruficollis, and Necrodessurinamensis were common and showed only a few, single-day gaps in occurrence. C.maxillosus, P.regina, and L.illustris displayed exceptional forensic utility in that they were observed on every carcass. Although these observations were made at a single site during one season of the year, the species we found to be useful have large geographic ranges. We suggest that future carrion insect succession research focus only on a limited set of species with high potential forensic utility so as to reduce sample effort per carcass and thereby enable increased experimental replication. Copyright © 2014 Elsevier Ireland

  1. The confidence-accuracy relationship for eyewitness identification decisions: Effects of exposure duration, retention interval, and divided attention.

    PubMed

    Palmer, Matthew A; Brewer, Neil; Weber, Nathan; Nagesh, Ambika

    2013-03-01

Prior research points to a meaningful confidence-accuracy (CA) relationship for positive identification decisions. However, there are theoretical grounds for expecting that different aspects of the CA relationship (calibration, resolution, and over/underconfidence) might be undermined in some circumstances. This research investigated whether the CA relationship for eyewitness identification decisions is affected by three forensically relevant variables: exposure duration, retention interval, and divided attention at encoding. In Study 1 (N = 986), a field experiment, we examined the effects of exposure duration (5 s vs. 90 s) and retention interval (immediate testing vs. a 1-week delay) on the CA relationship. In Study 2 (N = 502), we examined the effects of attention during encoding on the CA relationship by reanalyzing data from a laboratory experiment in which participants viewed a stimulus video under full or divided attention conditions and then attempted to identify two targets from separate lineups. Across both studies, all three manipulations affected identification accuracy. The central analyses concerned the CA relationship for positive identification decisions. For the manipulations of exposure duration and retention interval, overconfidence was greater in the more difficult conditions (shorter exposure; delayed testing) than the easier conditions. Only the exposure duration manipulation influenced resolution (which was better for 5 s than 90 s), and only the retention interval manipulation affected calibration (which was better for immediate testing than delayed testing). In all experimental conditions, accuracy and diagnosticity increased with confidence, particularly at the upper end of the confidence scale. Implications for theory and forensic settings are discussed.
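Calibration and over/underconfidence as used above can be computed from per-decision confidence ratings and correctness indicators. A minimal sketch with hypothetical data (the bin edges are arbitrary):

```python
from statistics import mean

def calibration_stats(confidence, correct, bins=(0, 20, 40, 60, 80, 101)):
    """Per-bin (mean confidence, accuracy) pairs, plus over/underconfidence
    (mean confidence/100 minus overall accuracy; positive = overconfident)."""
    curve = []
    for lo, hi in zip(bins, bins[1:]):
        sel = [(c, a) for c, a in zip(confidence, correct) if lo <= c < hi]
        if sel:
            curve.append((mean(c for c, _ in sel), mean(a for _, a in sel)))
    over_under = mean(confidence) / 100 - mean(correct)
    return curve, over_under

# hypothetical identification decisions: confidence (0-100) and correctness
conf = [90, 80, 95, 60, 40, 70, 85, 55, 30, 65]
acc  = [1,  1,  1,  0,  0,  1,  1,  1,  0,  0]
curve, oc = calibration_stats(conf, acc)
```

Perfect calibration would put every point of the curve on the diagonal; resolution concerns how strongly accuracy varies across the bins.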

  2. A confidence interval analysis of sampling effort, sequencing depth, and taxonomic resolution of fungal community ecology in the era of high-throughput sequencing

    PubMed Central

    2017-01-01

    High-throughput sequencing technology has helped microbial community ecologists explore ecological and evolutionary patterns at unprecedented scales. The benefits of a large sample size still typically outweigh that of greater sequencing depths per sample for accurate estimations of ecological inferences. However, excluding or not sequencing rare taxa may mislead the answers to the questions ‘how and why are communities different?’ This study evaluates the confidence intervals of ecological inferences from high-throughput sequencing data of foliar fungal endophytes as case studies through a range of sampling efforts, sequencing depths, and taxonomic resolutions to understand how technical and analytical practices may affect our interpretations. Increasing sampling size reliably decreased confidence intervals across multiple community comparisons. However, the effects of sequencing depths on confidence intervals depended on how rare taxa influenced the dissimilarity estimates among communities and did not significantly decrease confidence intervals for all community comparisons. A comparison of simulated communities under random drift suggests that sequencing depths are important in estimating dissimilarities between microbial communities under neutral selective processes. Confidence interval analyses reveal important biases as well as biological trends in microbial community studies that otherwise may be ignored when communities are only compared for statistically significant differences. PMID:29253889

  3. Intervals for posttest probabilities: a comparison of 5 methods.

    PubMed

    Mossman, D; Berger, J O

    2001-01-01

    Several medical articles discuss methods of constructing confidence intervals for single proportions and the likelihood ratio, but scant attention has been given to the systematic study of intervals for the posterior odds, or the positive predictive value, of a test. The authors describe 5 methods of constructing confidence intervals for posttest probabilities when estimates of sensitivity, specificity, and the pretest probability of a disorder are derived from empirical data. They then evaluate each method to determine how well the intervals' coverage properties correspond to their nominal value. When the estimates of pretest probabilities, sensitivity, and specificity are derived from more than 80 subjects and are not close to 0 or 1, all methods generate intervals with appropriate coverage properties. When these conditions are not met, however, the best-performing method is an objective Bayesian approach implemented by a simple simulation using a spreadsheet. Physicians and investigators can generate accurate confidence intervals for posttest probabilities in small-sample situations using the objective Bayesian approach.
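The spreadsheet-style simulation described above can be sketched as follows, assuming independent Jeffreys Beta(0.5, 0.5) posteriors for sensitivity, specificity, and pretest probability (the paper's exact prior may differ, and the 2x2 counts below are hypothetical):

```python
import random

def ppv_interval(tp, fn, tn, fp, pre_pos, pre_n,
                 draws=20000, level=0.95, seed=1):
    """Monte Carlo interval for the positive predictive value.

    Sensitivity, specificity and pretest probability get independent Beta
    posteriors under Jeffreys Beta(0.5, 0.5) priors; each draw is pushed
    through Bayes' theorem and a percentile interval is returned.
    """
    rng = random.Random(seed)
    sims = []
    for _ in range(draws):
        se = rng.betavariate(tp + 0.5, fn + 0.5)
        sp = rng.betavariate(tn + 0.5, fp + 0.5)
        pre = rng.betavariate(pre_pos + 0.5, pre_n - pre_pos + 0.5)
        sims.append(se * pre / (se * pre + (1 - sp) * (1 - pre)))
    sims.sort()
    alpha = (1 - level) / 2
    return sims[int(alpha * draws)], sims[int((1 - alpha) * draws) - 1]

# hypothetical study: 40 diseased (36 test positive), 60 healthy (6 positive)
lo, hi = ppv_interval(tp=36, fn=4, tn=54, fp=6, pre_pos=40, pre_n=100)
```

With fewer than roughly 80 subjects, as the record notes, interval methods based on normal approximations degrade while this simulation approach keeps its nominal coverage.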

  4. Knowledge level of effect size statistics, confidence intervals and meta-analysis in Spanish academic psychologists.

    PubMed

    Badenes-Ribera, Laura; Frias-Navarro, Dolores; Pascual-Soler, Marcos; Monterde-I-Bort, Héctor

    2016-11-01

The statistical reform movement and the American Psychological Association (APA) defend the use of estimators of the effect size and its confidence intervals, as well as the interpretation of the clinical significance of the findings. A survey was conducted in which academic psychologists were asked about their behavior in designing and carrying out their studies. The sample was composed of 472 participants (45.8% men). The mean number of years as a university professor was 13.56 (SD = 9.27). The use of effect-size estimators is becoming generalized, as well as the consideration of meta-analytic studies. However, several inadequate practices still persist. A traditional model of methodological behavior based on statistical significance tests is maintained, based on the predominance of Cohen's d and the unadjusted R²/η², which are not robust to outliers, departures from normality, or violations of statistical assumptions, and on the under-reporting of confidence intervals of effect-size statistics. The paper concludes with recommendations for improving statistical practice.

  5. A numerical approach to 14C wiggle-match dating of organic deposits: best fits and confidence intervals

    NASA Astrophysics Data System (ADS)

    Blaauw, Maarten; Heuvelink, Gerard B. M.; Mauquoy, Dmitri; van der Plicht, Johannes; van Geel, Bas

    2003-06-01

    14C wiggle-match dating (WMD) of peat deposits uses the non-linear relationship between 14C age and calendar age to match the shape of a sequence of closely spaced peat 14C dates with the 14C calibration curve. A numerical approach to WMD enables the quantitative assessment of various possible wiggle-match solutions and of calendar year confidence intervals for sequences of 14C dates. We assess the assumptions, advantages, and limitations of the method. Several case-studies show that WMD results in more precise chronologies than when individual 14C dates are calibrated. WMD is most successful during periods with major excursions in the 14C calibration curve (e.g., in one case WMD could narrow down confidence intervals from 230 to 36 yr).
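The numerical matching itself amounts to sliding the dated sequence along the calibration curve and minimising a misfit score. The sketch below is a toy version: the calibration curve is invented (a linear trend with square-wave wiggles), the accumulation rate is assumed constant, and the misfit is plain least squares, whereas real WMD also propagates the 14C measurement errors and yields the calendar-year confidence intervals discussed above:

```python
def wiggle_match(c14_ages, spacing, cal_curve, start_range):
    """Grid-search the calendar age of the first sample that minimises the
    sum of squared differences between measured 14C ages and the curve."""
    best_sse, best_t0 = float("inf"), None
    for t0 in start_range:
        sse = sum((age - cal_curve[t0 + i * spacing]) ** 2
                  for i, age in enumerate(c14_ages))
        if sse < best_sse:
            best_sse, best_t0 = sse, t0
    return best_t0, best_sse

# invented calibration curve: linear trend plus 30-yr square-wave wiggles
cal = {t: 1000 + 0.8 * (t - 900) + 30 * ((t // 40) % 2)
       for t in range(800, 1101)}
# three 14C dates, 20 calendar years apart, generated near year 940 with noise
dates = [cal[940] + 3, cal[960] - 2, cal[980] + 1]
t0, sse = wiggle_match(dates, 20, cal, range(800, 1061))
```

With these invented inputs the search lands within a year of the generating age; the wiggles are what make the fit sharp, which is why the record finds WMD most successful during periods with major excursions in the calibration curve.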

  6. ’Exact’ Two-Sided Confidence Intervals on Nonnegative Linear Combinations of Variances.

    DTIC Science & Technology

    1980-07-01

Report by Franklin A. Graybill (Colorado State University) and Chih-Ming Wang (SPSS Inc.) for the Office of Naval Research. The report constructs 'exact' two-sided confidence intervals on nonnegative linear combinations of variances, including a Modified Large Sample (MLS) confidence interval; the remainder of the scanned report form is not recoverable.

  7. Confidence intervals for single-case effect size measures based on randomization test inversion.

    PubMed

    Michiels, Bart; Heyvaert, Mieke; Meulders, Ann; Onghena, Patrick

    2017-02-01

In the current paper, we present a method to construct nonparametric confidence intervals (CIs) for single-case effect size measures in the context of various single-case designs. We use the relationship between a two-sided statistical hypothesis test at significance level α and a 100(1 − α)% two-sided CI to construct CIs for any effect size measure θ that contain all point null hypothesis θ values that cannot be rejected by the hypothesis test at significance level α. This method of hypothesis test inversion (HTI) can be employed using a randomization test as the statistical hypothesis test in order to construct a nonparametric CI for θ. We will refer to this procedure as randomization test inversion (RTI). We illustrate RTI in a situation in which θ is the unstandardized and the standardized difference in means between two treatments in a completely randomized single-case design. Additionally, we demonstrate how RTI can be extended to other types of single-case designs. Finally, we discuss a few challenges for RTI as well as possibilities when using the method with other effect size measures, such as rank-based nonoverlap indices. Supplementary to this paper, we provide easy-to-use R code, which allows the user to construct nonparametric CIs according to the proposed method.
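The HTI/RTI idea can be sketched for the unstandardized mean difference in a completely randomized design: shift one treatment's scores by a candidate effect δ, run the randomization test on the shifted data, and keep every δ that is not rejected. The grid search and the data below are hypothetical simplifications of the procedure described above:

```python
from itertools import combinations
from statistics import mean

def randomization_pvalue(a, b):
    """Two-sided randomization p-value for the difference in group means."""
    pooled = a + b
    n, k = len(pooled), len(a)
    obs = abs(mean(a) - mean(b))
    hits = total = 0
    for idx in combinations(range(n), k):
        chosen = set(idx)
        ga = [pooled[i] for i in idx]
        gb = [pooled[i] for i in range(n) if i not in chosen]
        total += 1
        if abs(mean(ga) - mean(gb)) >= obs - 1e-12:
            hits += 1
    return hits / total

def rti_interval(a, b, alpha=0.05, points=101):
    """Randomization test inversion: keep every shift delta whose
    shifted-data randomization test is not rejected at level alpha."""
    d0 = mean(a) - mean(b)
    span = max(a + b) - min(a + b)
    grid = [d0 + (i - points // 2) * 4 * span / (points - 1)
            for i in range(points)]
    kept = [d for d in grid
            if randomization_pvalue([x - d for x in a], b) > alpha]
    return min(kept), max(kept)

# hypothetical scores under treatments A and B (completely randomized design)
a = [10, 12, 13, 11, 14]
b = [6, 7, 8, 5, 9]
lo, hi = rti_interval(a, b)
```

The resulting interval brackets the observed mean difference of 5; with only C(10, 5) = 252 possible assignments, the smallest attainable two-sided p-value (2/252) limits how tight the interval can be.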

  8. Interval MULTIMOORA method with target values of attributes based on interval distance and preference degree: biomaterials selection

    NASA Astrophysics Data System (ADS)

    Hafezalkotob, Arian; Hafezalkotob, Ashkan

    2017-06-01

A target-based MADM method covers beneficial and non-beneficial attributes as well as target values for some attributes. Such techniques are considered the comprehensive forms of MADM approaches. Target-based MADM methods can also be used in traditional decision-making problems in which only beneficial and non-beneficial attributes exist. In many practical selection problems, some attributes have given target values. In some of these problems, the values of the decision matrix and the target-based attributes can be provided as intervals. Some target-based decision-making methods have recently been developed; however, a research gap exists in the area of MADM techniques with target-based attributes under uncertainty of information. We extend the MULTIMOORA method for solving practical material selection problems in which material properties and their target values are given as interval numbers. We employ various concepts of interval computations to reduce the degeneration of uncertain data. In this regard, we use interval arithmetic and introduce an innovative formula for the interval distance of interval numbers to create an interval target-based normalization technique. Furthermore, we use a pairwise preference matrix based on the concept of degree of preference of interval numbers to calculate the maximum, minimum, and ranking of these numbers. Two decision-making problems regarding biomaterials selection of hip and knee prostheses are discussed. Preference degree-based ranking lists for subordinate parts of the extended MULTIMOORA method are generated by calculating the relative degrees of preference for the arranged assessment values of the biomaterials. The resultant rankings for the problem are compared with the outcomes of other target-based models in the literature.
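The degree-of-preference comparison of interval numbers can be sketched with one common possibility-degree formula from the interval-comparison literature; the paper's exact definitions may differ, and the interval data below are invented:

```python
def preference_degree(a, b):
    """Possibility degree that interval a = (a_lo, a_hi) is >= b = (b_lo, b_hi).

    A common formula (one of several in the literature):
        P(a >= b) = max(0, 1 - max(0, (b_hi - a_lo) / (len_a + len_b)))
    Degenerate (zero-width) intervals fall back to a point comparison.
    Note P(a >= b) + P(b >= a) = 1 for overlapping intervals.
    """
    la, lb = a[1] - a[0], b[1] - b[0]
    if la + lb == 0:  # two point values
        return 1.0 if a[0] > b[0] else (0.5 if a[0] == b[0] else 0.0)
    return max(0.0, 1.0 - max(0.0, (b[1] - a[0]) / (la + lb)))

def rank_intervals(intervals):
    """Rank interval numbers by their total preference over the others."""
    scores = [sum(preference_degree(a, b) for b in intervals if b is not a)
              for a in intervals]
    return sorted(range(len(intervals)), key=lambda i: -scores[i])

# hypothetical candidate material property values given as intervals
vals = [(2.0, 3.0), (2.5, 4.0), (1.0, 2.2)]
order = rank_intervals(vals)
```

Such pairwise preference degrees are what allow a maximum, minimum, and full ranking to be defined for interval numbers without first collapsing them to midpoints.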

  9. Applications of asymptotic confidence intervals with continuity corrections for asymmetric comparisons in noninferiority trials.

    PubMed

    Soulakova, Julia N; Bright, Brianna C

    2013-01-01

    A large-sample problem of illustrating noninferiority of an experimental treatment over a referent treatment for binary outcomes is considered. The methods of illustrating noninferiority involve constructing the lower two-sided confidence bound for the difference between binomial proportions corresponding to the experimental and referent treatments and comparing it with the negative value of the noninferiority margin. The three considered methods, Anbar, Falk-Koch, and Reduced Falk-Koch, handle the comparison in an asymmetric way, that is, only the referent proportion out of the two, experimental and referent, is directly involved in the expression for the variance of the difference between two sample proportions. Five continuity corrections (including zero) are considered with respect to each approach. The key properties of the corresponding methods are evaluated via simulations. First, the uncorrected two-sided confidence intervals can, potentially, have smaller coverage probability than the nominal level even for moderately large sample sizes, for example, 150 per group. Next, the 15 testing methods are discussed in terms of their Type I error rate and power. In the settings with a relatively small referent proportion (about 0.4 or smaller), the Anbar approach with Yates' continuity correction is recommended for balanced designs and the Falk-Koch method with Yates' correction is recommended for unbalanced designs. For relatively moderate (about 0.6) and large (about 0.8 or greater) referent proportion, the uncorrected Reduced Falk-Koch method is recommended, although in this case, all methods tend to be over-conservative. These results are expected to be used in the design stage of a noninferiority study when asymmetric comparisons are envisioned. Copyright © 2013 John Wiley & Sons, Ltd.

  10. Stochastic satisficing account of confidence in uncertain value-based decisions

    PubMed Central

    Bahrami, Bahador; Keramati, Mehdi

    2018-01-01

    Every day we make choices under uncertainty; choosing what route to work or which queue in a supermarket to take, for example. It is unclear how outcome variance, e.g. uncertainty about waiting time in a queue, affects decisions and confidence when outcome is stochastic and continuous. How does one evaluate and choose between an option with unreliable but high expected reward, and an option with more certain but lower expected reward? Here we used an experimental design where two choices’ payoffs took continuous values, to examine the effect of outcome variance on decision and confidence. We found that our participants’ probability of choosing the good (high expected reward) option decreased when the good or the bad options’ payoffs were more variable. Their confidence ratings were affected by outcome variability, but only when choosing the good option. Unlike perceptual detection tasks, confidence ratings correlated only weakly with decisions’ time, but correlated with the consistency of trial-by-trial choices. Inspired by the satisficing heuristic, we propose a “stochastic satisficing” (SSAT) model for evaluating options with continuous uncertain outcomes. In this model, options are evaluated by their probability of exceeding an acceptability threshold, and confidence reports scale with the chosen option’s thus-defined satisficing probability. Participants’ decisions were best explained by an expected reward model, while the SSAT model provided the best prediction of decision confidence. We further tested and verified the predictions of this model in a second experiment. Our model and experimental results generalize the models of metacognition from perceptual detection tasks to continuous-value based decisions. Finally, we discuss how the stochastic satisficing account of decision confidence serves psychological and social purposes associated with the evaluation, communication and justification of decision-making. PMID:29621325

  11. Confidence Intervals for Effect Sizes: Compliance and Clinical Significance in the "Journal of Consulting and Clinical Psychology"

    ERIC Educational Resources Information Center

    Odgaard, Eric C.; Fowler, Robert L.

    2010-01-01

    Objective: In 2005, the "Journal of Consulting and Clinical Psychology" ("JCCP") became the first American Psychological Association (APA) journal to require statistical measures of clinical significance, plus effect sizes (ESs) and associated confidence intervals (CIs), for primary outcomes (La Greca, 2005). As this represents the single largest…

  12. Multicriteria Decision-Making Approach with Hesitant Interval-Valued Intuitionistic Fuzzy Sets

    PubMed Central

    Peng, Juan-juan; Wang, Jian-qiang; Wang, Jing; Chen, Xiao-hong

    2014-01-01

    The definition of hesitant interval-valued intuitionistic fuzzy sets (HIVIFSs) is developed based on interval-valued intuitionistic fuzzy sets (IVIFSs) and hesitant fuzzy sets (HFSs). Then, some operations on HIVIFSs are introduced in detail, and their properties are further discussed. In addition, some hesitant interval-valued intuitionistic fuzzy number aggregation operators based on t-conorms and t-norms are proposed, which can be used to aggregate decision-makers' information in multicriteria decision-making (MCDM) problems. Some valuable proposals of these operators are studied. In particular, based on algebraic and Einstein t-conorms and t-norms, some hesitant interval-valued intuitionistic fuzzy algebraic aggregation operators and Einstein aggregation operators can be obtained, respectively. Furthermore, an approach of MCDM problems based on the proposed aggregation operators is given using hesitant interval-valued intuitionistic fuzzy information. Finally, an illustrative example is provided to demonstrate the applicability and effectiveness of the developed approach, and the study is supported by a sensitivity analysis and a comparison analysis. PMID:24983009

  13. Bootstrap confidence intervals and bias correction in the estimation of HIV incidence from surveillance data with testing for recent infection.

    PubMed

    Carnegie, Nicole Bohme

    2011-04-15

    The incidence of new infections is a key measure of the status of the HIV epidemic, but accurate measurement of incidence is often constrained by limited data. Karon et al. (Statist. Med. 2008; 27:4617–4633) developed a model to estimate the incidence of HIV infection from surveillance data with biologic testing for recent infection for newly diagnosed cases. This method has been implemented by public health departments across the United States and is behind the new national incidence estimates, which are about 40 per cent higher than previous estimates. We show that the delta method approximation given for the variance of the estimator is incomplete, leading to an inflated variance estimate. This contributes to the generation of overly conservative confidence intervals, potentially obscuring important differences between populations. We demonstrate via simulation that an innovative model-based bootstrap method using the specified model for the infection and surveillance process improves confidence interval coverage and adjusts for the bias in the point estimate. Confidence interval coverage is about 94–97 per cent after correction, compared with 96–99 per cent before. The simulated bias in the estimate of incidence ranges from −6.3 to +14.6 per cent under the original model but is consistently under 1 per cent after correction by the model-based bootstrap. In an application to data from King County, Washington in 2007 we observe correction of 7.2 per cent relative bias in the incidence estimate and a 66 per cent reduction in the width of the 95 per cent confidence interval using this method. We provide open-source software to implement the method that can also be extended for alternate models.
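A generic percentile bootstrap with a simple bias correction illustrates the mechanics. Note the hedge: the paper's method is model-based, simulating from the fitted infection/surveillance model rather than resampling the data, and the data and estimator below are toy choices:

```python
import random
from statistics import mean

def bootstrap_ci(data, estimator, reps=2000, level=0.95, seed=7):
    """Percentile bootstrap CI plus a bias-corrected point estimate.

    Generic nonparametric resampling with replacement; a model-based
    bootstrap would instead simulate new data sets from the fitted model.
    """
    rng = random.Random(seed)
    n = len(data)
    theta = estimator(data)
    boots = sorted(estimator([rng.choice(data) for _ in range(n)])
                   for _ in range(reps))
    alpha = (1 - level) / 2
    lo = boots[int(alpha * reps)]
    hi = boots[int((1 - alpha) * reps) - 1]
    corrected = 2 * theta - mean(boots)  # subtract the bootstrap bias estimate
    return corrected, (lo, hi)

# toy data: indicator of a "recent infection" lab result per new diagnosis
data = [1] * 12 + [0] * 188
est, (lo, hi) = bootstrap_ci(data, mean)
```

The bias correction uses the bootstrap's own estimate of bias, mean(boots) − θ̂, and subtracts it from the point estimate, the same principle behind the under-1-per-cent residual bias reported above.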

  14. Self-confidence and affect responses to short-term sprint interval training.

    PubMed

    Selmi, Walid; Rebai, Haithem; Chtara, Mokhtar; Naceur, Abdelmajid; Sahli, Sonia

    2018-05-01

The study aimed to investigate the effects of repeated sprint (RS) training on somatic anxiety (SA), cognitive anxiety (CA), self-confidence (SC), rating of perceived exertion (RPE) and repeated sprint ability (RSA) indicators in elite young soccer players. Thirty elite soccer players from the first football league (age: 17.8 ± 0.9 years) volunteered to participate in this study. They were randomly assigned to one of two groups: a repeated sprint training group (RST-G; n=15) and a control group (CON-G; n=15). RST-G participated in 6 weeks of intensive training based on RS (6 × (20 + 20 m) runs, with a 20-s passive recovery interval between sprints, 3 times/week). Before and after the 6-week intervention, all participants performed a RSA test and completed the Competitive State Anxiety Inventory (CSAI-2) and the RPE. After training, RST-G showed a highly significant (p<0.001) improvement in RSA total time performance relative to controls. Despite the faster sprint pace, the RPE also decreased significantly (p<0.005) in RST-G, and their self-confidence was significantly greater (p<0.01), while the cognitive (p<0.01) and somatic (p<0.001) components of their anxiety state decreased. When practiced regularly, short bouts of sprint exercise improve anaerobic performance, together with a reduction in anxiety state and an increase in SC, which may boost competitive performance. Copyright © 2018 Elsevier Inc. All rights reserved.

  15. Experimental optimization of the number of blocks by means of algorithms parameterized by confidence interval in popcorn breeding.

    PubMed

    Paula, T O M; Marinho, C D; Amaral Júnior, A T; Peternelli, L A; Gonçalves, L S A

    2013-06-27

The objective of this study was to determine the optimal number of repetitions to be used in competition trials of popcorn traits related to production and quality, including grain yield and expansion capacity. The experiments were conducted in 3 environments representative of the north and northwest regions of the State of Rio de Janeiro with 10 Brazilian genotypes of popcorn, consisting of 4 commercial hybrids (IAC 112, IAC 125, Zélia, and Jade), 4 improved varieties (BRS Ângela, UFVM-2 Barão de Viçosa, Beija-flor, and Viçosa), and 2 experimental populations (UNB2U-C3 and UNB2U-C4). The experimental design utilized was a randomized complete block design with 7 repetitions. The bootstrap method was employed to obtain samples of all of the possible combinations within the 7 blocks. Subsequently, the confidence intervals of the parameters of interest were calculated for all simulated data sets. The optimal number of repetitions for each trait was taken to be the smallest number for which all of the estimates of the parameters in question fell within the confidence interval. The estimates of the number of repetitions varied according to the parameter estimated, the variable evaluated, and the environment, ranging from 2 to 7. Only the expansion capacity trait in the Colégio Agrícola environment (for residual variance and coefficient of variation) and the number of ears per plot in the Itaocara environment (for coefficient of variation) required 7 repetitions to fall within the confidence interval. Thus, for the 3 trials conducted, we conclude that 6 repetitions are optimal for obtaining high experimental precision.

  16. UNDERSTANDING SYSTEMATIC MEASUREMENT ERROR IN THERMAL-OPTICAL ANALYSIS FOR PM BLACK CARBON USING RESPONSE SURFACES AND SURFACE CONFIDENCE INTERVALS

    EPA Science Inventory

    Results from a NIST-EPA Interagency Agreement on Understanding Systematic Measurement Error in Thermal-Optical Analysis for PM Black Carbon Using Response Surfaces and Surface Confidence Intervals will be presented at the American Association for Aerosol Research (AAAR) 24th Annu...

  17. New Approaches to Robust Confidence Intervals for Location: A Simulation Study.

    DTIC Science & Technology

    1984-06-01

    obtain a denominator for the test statistic. Those statistics based on location estimates derived from Hampel’s redescending influence function or v...defined an influence function for a test in terms of the behavior of its P-values when the data are sampled from a model distribution modified by point...proposal could be used for interval estimation as well as hypothesis testing, the extension is immediate. Once an influence function has been defined

  18. Social Information Is Integrated into Value and Confidence Judgments According to Its Reliability.

    PubMed

    De Martino, Benedetto; Bobadilla-Suarez, Sebastian; Nouguchi, Takao; Sharot, Tali; Love, Bradley C

    2017-06-21

    How much we like something, whether it be a bottle of wine or a new film, is affected by the opinions of others. However, the social information that we receive can be contradictory and vary in its reliability. Here, we tested whether the brain incorporates these statistics when judging value and confidence. Participants provided value judgments about consumer goods in the presence of online reviews. We found that participants updated their initial value and confidence judgments in a Bayesian fashion, taking into account both the uncertainty of their initial beliefs and the reliability of the social information. Activity in dorsomedial prefrontal cortex tracked the degree of belief update. Analogous to how lower-level perceptual information is integrated, we found that the human brain integrates social information according to its reliability when judging value and confidence. SIGNIFICANCE STATEMENT The field of perceptual decision making has shown that the sensory system integrates different sources of information according to their respective reliability, as predicted by a Bayesian inference scheme. In this work, we hypothesized that a similar coding scheme is implemented by the human brain to process social signals and guide complex, value-based decisions. We provide experimental evidence that the human prefrontal cortex's activity is consistent with a Bayesian computation that integrates social information that differs in reliability and that this integration affects the neural representation of value and confidence. Copyright © 2017 De Martino et al.

  19. Reference Value Advisor: a new freeware set of macroinstructions to calculate reference intervals with Microsoft Excel.

    PubMed

    Geffré, Anne; Concordet, Didier; Braun, Jean-Pierre; Trumel, Catherine

    2011-03-01

    International recommendations for determination of reference intervals have been recently updated, especially for small reference sample groups, and use of the robust method and Box-Cox transformation is now recommended. Unfortunately, these methods are not included in most software programs used for data analysis by clinical laboratories. We have created a set of macroinstructions, named Reference Value Advisor, for use in Microsoft Excel to calculate reference limits applying different methods. For any series of data, Reference Value Advisor calculates reference limits (with 90% confidence intervals [CI]) using a nonparametric method when n≥40 and by parametric and robust methods from native and Box-Cox transformed values; tests normality of distributions using the Anderson-Darling test and outliers using Tukey and Dixon-Reed tests; displays the distribution of values in dot plots and histograms and constructs Q-Q plots for visual inspection of normality; and provides minimal guidelines in the form of comments based on international recommendations. The critical steps in determination of reference intervals are correct selection of as many reference individuals as possible and analysis of specimens in controlled preanalytical and analytical conditions. Computing tools cannot compensate for flaws in selection and size of the reference sample group and handling and analysis of samples. However, if those steps are performed properly, Reference Value Advisor, available as freeware at http://www.biostat.envt.fr/spip/spip.php?article63, permits rapid assessment and comparison of results calculated using different methods, including currently unavailable methods. This allows for selection of the most appropriate method, especially as the program provides the CI of limits. It should be useful in veterinary clinical pathology when only small reference sample groups are available. ©2011 American Society for Veterinary Clinical Pathology.
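
    A minimal sketch of the nonparametric part of such a calculation (central 95% reference interval with bootstrap 90% confidence intervals around each limit; illustrative only, not the macroinstructions' actual algorithm):

```python
import random

def percentile(sorted_vals, q):
    # Linear interpolation between order statistics (0 <= q <= 1).
    idx = q * (len(sorted_vals) - 1)
    lo = int(idx)
    frac = idx - lo
    if lo + 1 < len(sorted_vals):
        return sorted_vals[lo] * (1 - frac) + sorted_vals[lo + 1] * frac
    return sorted_vals[lo]

def reference_interval(values, coverage=0.95, conf=0.90, n_boot=2000, seed=7):
    """Central `coverage` nonparametric reference interval, with a
    bootstrap `conf` confidence interval around each reference limit."""
    q_lo, q_hi = (1 - coverage) / 2, 1 - (1 - coverage) / 2
    s = sorted(values)
    limits = (percentile(s, q_lo), percentile(s, q_hi))
    rng = random.Random(seed)
    boot_lo, boot_hi = [], []
    for _ in range(n_boot):
        b = sorted(rng.choice(values) for _ in values)
        boot_lo.append(percentile(b, q_lo))
        boot_hi.append(percentile(b, q_hi))
    a = (1 - conf) / 2
    band = lambda reps: (percentile(sorted(reps), a),
                         percentile(sorted(reps), 1 - a))
    return limits, band(boot_lo), band(boot_hi)

# 120 simulated healthy-subject measurements, roughly N(100, 10).
rng = random.Random(42)
sample = [rng.gauss(100, 10) for _ in range(120)]
(lower, upper), lower_ci, upper_ci = reference_interval(sample)
```

    The width of the two confidence intervals makes visible how uncertain reference limits are with small reference sample groups, which is the article's central caution.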

  20. Overconfidence in Interval Estimates: What Does Expertise Buy You?

    ERIC Educational Resources Information Center

    McKenzie, Craig R. M.; Liersch, Michael J.; Yaniv, Ilan

    2008-01-01

    People's 90% subjective confidence intervals typically contain the true value about 50% of the time, indicating extreme overconfidence. Previous results have been mixed regarding whether experts are as overconfident as novices. Experiment 1 examined interval estimates from information technology (IT) professionals and UC San Diego (UCSD) students…

  1. Using confidence intervals to evaluate the focus alignment of spectrograph detector arrays.

    PubMed

    Sawyer, Travis W; Hawkins, Kyle S; Damento, Michael

    2017-06-20

    High-resolution spectrographs extract detailed spectral information of a sample and are frequently used in astronomy, laser-induced breakdown spectroscopy, and Raman spectroscopy. These instruments employ dispersive elements such as prisms and diffraction gratings to spatially separate different wavelengths of light, which are then detected by a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) detector array. Precise alignment along the optical axis (focus position) of the detector array is critical to maximize the instrumental resolution; however, traditional approaches of scanning the detector through focus lack a quantitative measure of precision, limiting the repeatability and relying on one's experience. Here we propose a method to evaluate the focus alignment of spectrograph detector arrays by establishing confidence intervals to measure the alignment precision. We show that propagation of uncertainty can be used to estimate the variance in an alignment, thus providing a quantitative and repeatable means to evaluate the precision and confidence of an alignment. We test the approach by aligning the detector array of a prototype miniature echelle spectrograph. The results indicate that the procedure effectively quantifies alignment precision, enabling one to objectively determine when an alignment has reached an acceptable level. This quantitative approach also provides a foundation for further optimization, including automated alignment. Furthermore, the procedure introduced here can be extended to other alignment techniques that rely on numerically fitting data to a model, providing a general framework for evaluating the precision of alignment methods.
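
    The idea of propagating fit uncertainty into an alignment estimate can be illustrated with a toy through-focus curve: fit a quadratic to sharpness measurements, take the vertex as the best focus, and push the parameter covariance through the vertex formula (delta method). This is a hedged sketch, not the authors' procedure:

```python
import math
import random

def invert3(m):
    # Gauss-Jordan inversion of a 3x3 matrix with partial pivoting.
    a = [row[:] + [1.0 if i == j else 0.0 for j in range(3)]
         for i, row in enumerate(m)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        d = a[col][col]
        a[col] = [v / d for v in a[col]]
        for r in range(3):
            if r != col:
                f = a[r][col]
                a[r] = [v - f * w for v, w in zip(a[r], a[col])]
    return [row[3:] for row in a]

def best_focus(z, sharpness):
    """Least-squares fit sharpness ~ a*z^2 + b*z + c; return the vertex
    -b/(2a) (best focus) and its standard error obtained by propagating
    the parameter covariance through the vertex formula."""
    rows = [[zi * zi, zi, 1.0] for zi in z]
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)]
           for i in range(3)]
    xty = [sum(r[i] * y for r, y in zip(rows, sharpness)) for i in range(3)]
    inv = invert3(xtx)
    a, b, c = [sum(inv[i][j] * xty[j] for j in range(3)) for i in range(3)]
    resid = [y - (a * zi * zi + b * zi + c) for zi, y in zip(z, sharpness)]
    s2 = sum(e * e for e in resid) / (len(z) - 3)   # residual variance
    cov = [[inv[i][j] * s2 for j in range(3)] for i in range(3)]
    vertex = -b / (2 * a)
    g = [b / (2 * a * a), -1 / (2 * a), 0.0]        # gradient of -b/(2a)
    var = sum(g[i] * cov[i][j] * g[j] for i in range(3) for j in range(3))
    return vertex, math.sqrt(var)

rng = random.Random(0)
zs = [i * 0.2 for i in range(-10, 11)]              # detector positions (mm)
ys = [10 - 2 * (zi - 0.3) ** 2 + rng.gauss(0, 0.1) for zi in zs]
focus, se = best_focus(zs, ys)
```

    A 95% alignment confidence interval is then focus ± 1.96 × se, giving the kind of quantitative stopping criterion the article argues for.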

  2. An Algorithm for Efficient Maximum Likelihood Estimation and Confidence Interval Determination in Nonlinear Estimation Problems

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick Charles

    1985-01-01

    An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The algorithm was developed for airplane parameter estimation problems but is well suited for most nonlinear, multivariable, dynamic systems. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort. MNRES determines the sensitivities with less computational effort than using either a finite-difference method or integrating the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, thus eliminating algorithm reformulation with each new model and providing flexibility to use model equations in any format that is convenient. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. It is observed that the degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. The CR bounds were found to be close to the bounds determined by the search when the degree of nonlinearity was small. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels for the parameter confidence limits. The primary utility of the measure, however, was found to be in predicting the degree of agreement between Cramer-Rao bounds and search estimates.

  3. Exact intervals and tests for median when one sample value possibly an outlier

    NASA Technical Reports Server (NTRS)

    Keller, G. J.; Walsh, J. E.

    1973-01-01

    Available are independent observations (continuous data) that are believed to be a random sample. Desired are distribution-free confidence intervals and significance tests for the population median. However, there is the possibility that either the smallest or the largest observation is an outlier. Then, use of a procedure for rejection of an outlying observation might seem appropriate. Such a procedure would consider that two alternative situations are possible and would select one of them. Either (1) the n observations are truly a random sample, or (2) an outlier exists and its removal leaves a random sample of size n-1. For either situation, confidence intervals and tests are desired for the median of the population yielding the random sample. Unfortunately, satisfactory rejection procedures of a distribution-free nature do not seem to be available. Moreover, all rejection procedures impose undesirable conditional effects on the observations and can also select the wrong one of the two above situations. It is found that two-sided intervals and tests based on two symmetrically located order statistics (not the largest and smallest) of the n observations remain valid for either situation, avoiding these difficulties.

  4. Proportion of general factor variance in a hierarchical multiple-component measuring instrument: a note on a confidence interval estimation procedure.

    PubMed

    Raykov, Tenko; Zinbarg, Richard E

    2011-05-01

    A confidence interval construction procedure for the proportion of explained variance by a hierarchical, general factor in a multi-component measuring instrument is outlined. The method provides point and interval estimates for the proportion of total scale score variance that is accounted for by the general factor, which could be viewed as common to all components. The approach may also be used for testing composite (one-tailed) or simple hypotheses about this proportion, and is illustrated with a pair of examples. ©2010 The British Psychological Society.

  5. Confidence Intervals for the Probability of Superiority Effect Size Measure and the Area under a Receiver Operating Characteristic Curve

    ERIC Educational Resources Information Center

    Ruscio, John; Mullen, Tara

    2012-01-01

    It is good scientific practice to report an appropriate estimate of effect size and a confidence interval (CI) to indicate the precision with which a population effect was estimated. For comparisons of 2 independent groups, a probability-based effect size estimator (A) that is equal to the area under a receiver operating characteristic curve…

  6. The Precision of Effect Size Estimation From Published Psychological Research: Surveying Confidence Intervals.

    PubMed

    Brand, Andrew; Bradley, Michael T

    2016-02-01

    Confidence interval (CI) widths were calculated for reported Cohen's d standardized effect sizes and examined in two automated surveys of the published psychological literature. The first survey reviewed 1,902 articles from Psychological Science. The second survey reviewed a total of 5,169 articles from across the following four APA journals: Journal of Abnormal Psychology, Journal of Applied Psychology, Journal of Experimental Psychology: Human Perception and Performance, and Developmental Psychology. The median CI width for d was greater than 1 in both surveys. Hence, CI widths were, as Cohen (1994) speculated, embarrassingly large. Additional exploratory analyses revealed that CI widths varied across psychological research areas and that CI widths were not discernibly decreasing over time. The theoretical implications of these findings are discussed along with ways of reducing the CI widths and thus improving precision of effect size estimation.
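
    The scale of the problem is easy to reproduce with the common large-sample standard error for Cohen's d (a normal approximation; exact CIs use the noncentral t distribution). With 30 participants per group and an observed d of 0.5, the 95% CI is wider than 1:

```python
import math

def cohens_d_ci(d, n1, n2, z=1.959964):
    """Approximate 95% CI for Cohen's d using the standard large-sample
    standard-error formula (normal approximation)."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d * d / (2 * (n1 + n2)))
    return d - z * se, d + z * se

# A typical psychology-sized study: 30 per group, observed d = 0.5.
lo, hi = cohens_d_ci(0.5, 30, 30)
width = hi - lo
```

    The interval spans from slightly below zero to above 1, so the data are compatible with anything from no effect to a large one, which is exactly the imprecision the survey documents.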

  7. Generalized additive models and Lucilia sericata growth: assessing confidence intervals and error rates in forensic entomology.

    PubMed

    Tarone, Aaron M; Foran, David R

    2008-07-01

    Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.

  8. Nonparametric spirometry reference values for Hispanic Americans.

    PubMed

    Glenn, Nancy L; Brown, Vanessa M

    2011-02-01

    Recent literature cites ethnic origin as a major factor in developing pulmonary function reference values. Extensive studies have established reference values for European and African Americans, but not for Hispanic Americans. The Third National Health and Nutrition Examination Survey defines Hispanic as individuals of Spanish-speaking cultures. While no group was excluded from the target population, sample size requirements only allowed inclusion of individuals who identified themselves as Mexican Americans. This research constructs nonparametric reference value confidence intervals for Hispanic American pulmonary function. The method is applicable to all ethnicities. We use empirical likelihood confidence intervals to establish normal ranges for reference values. Their major advantage: they are model-free, but share the asymptotic properties of model-based methods. Statistical comparisons indicate that empirical likelihood interval lengths are comparable to normal-theory intervals. Power and efficiency studies agree with previously published theoretical results.

  9. Confidence limit calculation for antidotal potency ratio derived from lethal dose 50

    PubMed Central

    Manage, Ananda; Petrikovics, Ilona

    2013-01-01

    AIM: To describe confidence interval calculation for antidotal potency ratios using the bootstrap method. METHODS: The nonparametric bootstrap method, invented by Efron, can easily be adapted to construct confidence intervals in situations like this. The bootstrap method is a resampling method in which the bootstrap samples are obtained by resampling from the original sample. RESULTS: The described confidence interval calculation using the bootstrap method does not require the sampling distribution of the antidotal potency ratio. This can serve as substantial help for toxicologists, who are directed to employ the Dixon up-and-down method with a lower number of animals to determine lethal dose 50 values for characterizing the investigated toxic molecules and, eventually, the antidotal protection provided by the test antidotal systems. CONCLUSION: The described method can serve as a useful tool in various other applications. The simplicity of the method makes it easy to perform the calculation using most programming software packages. PMID:25237618
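
    A percentile-bootstrap interval for a ratio of this kind takes only a few lines; the data below are hypothetical LD50-style values, for illustration only:

```python
import random
import statistics

def bootstrap_ratio_ci(x, y, n_boot=5000, alpha=0.05, seed=3):
    """Percentile-bootstrap CI for mean(x)/mean(y); no sampling
    distribution for the ratio is required."""
    rng = random.Random(seed)
    reps = []
    for _ in range(n_boot):
        bx = [rng.choice(x) for _ in x]        # resample each group
        by = [rng.choice(y) for _ in y]
        reps.append(statistics.fmean(bx) / statistics.fmean(by))
    reps.sort()
    return (reps[int(alpha / 2 * n_boot)],
            reps[int((1 - alpha / 2) * n_boot) - 1])

# Hypothetical LD50 estimates (mg/kg) with and without a test antidote.
with_antidote = [12.1, 13.4, 11.8, 14.0, 12.7, 13.1]
control = [5.2, 4.8, 5.5, 5.0, 4.6, 5.3]
lo, hi = bootstrap_ratio_ci(with_antidote, control)
potency_ratio = statistics.fmean(with_antidote) / statistics.fmean(control)
```

    The interval is read directly from the sorted replicate ratios, which is why no analytic sampling distribution for the potency ratio is needed.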

  10. Multi-Interval Discretization of Continuous-Valued Attributes for Classification Learning

    NASA Technical Reports Server (NTRS)

    Fayyad, U.; Irani, K.

    1993-01-01

    Since most real-world applications of classification learning involve continuous-valued attributes, properly addressing the discretization process is an important problem. This paper addresses the use of the entropy minimization heuristic for discretizing the range of a continuous-valued attribute into multiple intervals.
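
    One step of the entropy-minimization heuristic (choosing the boundary that minimizes the weighted class entropy of the two induced intervals; the full Fayyad–Irani method recurses on each interval and stops with an MDL criterion) can be sketched as:

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def best_cut(values, labels):
    """Choose the boundary minimizing the weighted class entropy of the
    two intervals it induces."""
    pairs = sorted(zip(values, labels))
    best = (float("inf"), None)
    for i in range(1, len(pairs)):
        if pairs[i - 1][0] == pairs[i][0]:
            continue                  # boundaries lie between distinct values
        left = [lab for _, lab in pairs[:i]]
        right = [lab for _, lab in pairs[i:]]
        w = (len(left) * entropy(left)
             + len(right) * entropy(right)) / len(pairs)
        best = min(best, (w, (pairs[i - 1][0] + pairs[i][0]) / 2))
    return best[1]

values = [1.0, 1.2, 1.4, 1.6, 3.1, 3.3, 3.5, 3.8]
labels = ["a", "a", "a", "a", "b", "b", "b", "b"]
cut = best_cut(values, labels)        # perfect split between the classes
```

    On this toy attribute the chosen cut falls at 2.35, the midpoint between the two pure class regions; multi-interval discretization applies the same step recursively within each resulting interval.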

  11. Quantum interval-valued probability: Contextuality and the Born rule

    NASA Astrophysics Data System (ADS)

    Tai, Yu-Tsung; Hanson, Andrew J.; Ortiz, Gerardo; Sabry, Amr

    2018-05-01

    We present a mathematical framework based on quantum interval-valued probability measures to study the effect of experimental imperfections and finite precision measurements on defining aspects of quantum mechanics such as contextuality and the Born rule. While foundational results such as the Kochen-Specker and Gleason theorems are valid in the context of infinite precision, they fail to hold in general in a world with limited resources. Here we employ an interval-valued framework to establish bounds on the validity of those theorems in realistic experimental environments. In this way, not only can we quantify the idea of finite-precision measurement within our theory, but we can also suggest a possible resolution of the Meyer-Mermin debate on the impact of finite-precision measurement on the Kochen-Specker theorem.

  12. Test Statistics and Confidence Intervals to Establish Noninferiority between Treatments with Ordinal Categorical Data.

    PubMed

    Zhang, Fanghong; Miyaoka, Etsuo; Huang, Fuping; Tanaka, Yutaka

    2015-01-01

    The problem of establishing noninferiority between a new treatment and a standard (control) treatment with ordinal categorical data is discussed. A measure of treatment effect is used and a method of specifying the noninferiority margin for the measure is provided. Two Z-type test statistics are proposed in which the estimate of variance is constructed under the shifted null hypothesis using U-statistics. Furthermore, the confidence interval and the sample size formula are given based on the proposed test statistics. The proposed procedure is applied to a dataset from a clinical trial. A simulation study is conducted to compare the performance of the proposed test statistics with that of the existing ones, and the results show that the proposed test statistics are better in terms of the deviation from the nominal level and the power.

  13. Evaluation about the Performance of E-Government Based on Interval-Valued Intuitionistic Fuzzy Set

    PubMed Central

    Zhang, Shuai; Wang, Yan

    2014-01-01

    Evaluation is an important approach to promoting the development of E-Government. With the rapid development of E-Government worldwide, E-Government performance evaluation has become a hot issue in academia. In this paper, we develop a new evaluation method for the development of E-Government based on the interval-valued intuitionistic fuzzy set, a powerful technique for expressing the uncertainty of real situations. First, we extend the geometric Heronian mean (GHM) operator to the interval-valued intuitionistic fuzzy environment and propose the interval-valued intuitionistic fuzzy GHM (IIFGHM) operator. Then, we investigate the relationships between the IIFGHM operator and some existing ones, such as the generalized interval-valued intuitionistic fuzzy HM (GIIFHM) and the interval-valued intuitionistic fuzzy weighted Bonferroni mean operator. Furthermore, we validate the effectiveness of the proposed method using a real case about E-Government evaluation in Hangzhou City, China. PMID:24707196

  14. Evaluation about the performance of E-government based on interval-valued intuitionistic fuzzy set.

    PubMed

    Zhang, Shuai; Yu, Dejian; Wang, Yan; Zhang, Wenyu

    2014-01-01

    Evaluation is an important approach to promoting the development of E-Government. With the rapid development of E-Government worldwide, E-Government performance evaluation has become a hot issue in academia. In this paper, we develop a new evaluation method for the development of E-Government based on the interval-valued intuitionistic fuzzy set, a powerful technique for expressing the uncertainty of real situations. First, we extend the geometric Heronian mean (GHM) operator to the interval-valued intuitionistic fuzzy environment and propose the interval-valued intuitionistic fuzzy GHM (IIFGHM) operator. Then, we investigate the relationships between the IIFGHM operator and some existing ones, such as the generalized interval-valued intuitionistic fuzzy HM (GIIFHM) and the interval-valued intuitionistic fuzzy weighted Bonferroni mean operator. Furthermore, we validate the effectiveness of the proposed method using a real case about E-Government evaluation in Hangzhou City, China.

  15. Standard Errors and Confidence Intervals of Norm Statistics for Educational and Psychological Tests.

    PubMed

    Oosterhuis, Hannah E M; van der Ark, L Andries; Sijtsma, Klaas

    2016-11-14

    Norm statistics allow for the interpretation of scores on psychological and educational tests, by relating the test score of an individual test taker to the test scores of individuals belonging to the same gender, age, or education groups, et cetera. Given the uncertainty due to sampling error, one would expect researchers to report standard errors for norm statistics. In practice, standard errors are seldom reported; they are either unavailable or derived under strong distributional assumptions that may not be realistic for test scores. We derived standard errors for four norm statistics (standard deviation, percentile ranks, stanine boundaries and Z-scores) under the mild assumption that the test scores are multinomially distributed. A simulation study showed that the standard errors were unbiased and that corresponding Wald-based confidence intervals had good coverage. Finally, we discuss the possibilities for applying the standard errors in practical test use in education and psychology. The procedure is provided via the R function check.norms, which is available in the mokken package.
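
    For a norm statistic that is linear in the multinomial proportions, such as a percentile rank, the delta-method variance has a closed form. The sketch below follows that multinomial reasoning but is not the authors' derivation or their check.norms implementation:

```python
import math

def percentile_rank_se(freqs, score):
    """Percentile rank of `score` and its standard error, treating the
    observed score frequencies as one multinomial sample."""
    n = sum(freqs.values())
    p = {s: f / n for s, f in freqs.items()}
    # PR = 100 * (P(X < score) + 0.5 * P(X = score)) is linear in p, so
    # its variance follows directly from the multinomial covariance.
    c = {s: 100.0 if s < score else 50.0 if s == score else 0.0 for s in p}
    pr = sum(c[s] * p[s] for s in p)
    var = (sum(c[s] ** 2 * p[s] for s in p) - pr ** 2) / n
    return pr, math.sqrt(var)

# Hypothetical norm table: test score -> frequency among 100 norm subjects.
pr, se = percentile_rank_se({0: 10, 1: 20, 2: 40, 3: 20, 4: 10}, score=2)
```

    A Wald-type confidence interval for the percentile rank is then pr ± 1.96 × se, mirroring the intervals whose coverage the simulation study checks.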

  16. Reliability and Confidence Interval Analysis of a CMC Turbine Stator Vane

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.; Gyekenyesi, John P.; Mital, Subodh K.

    2008-01-01

    Methods to accurately determine the service life of an engine component, together with its associated variability, have become increasingly difficult to apply. This results, in part, from the complex missions which are now routinely considered during the design process. These missions include large variations of multi-axial stresses and temperatures experienced by critical engine parts. There is a need for a convenient design tool that can accommodate various loading conditions induced by engine operating environments, and material data with their associated uncertainties, to estimate the minimum predicted life of a structural component. A probabilistic composite micromechanics technique, in combination with woven composite micromechanics, structural analysis, and Fast Probability Integration (FPI) techniques, has been used to evaluate the maximum stress and its probabilistic distribution in a CMC turbine stator vane. Furthermore, input variables causing scatter are identified and ranked by their sensitivity magnitude. Since the measured data for the ceramic matrix composite properties are very limited, obtaining a probabilistic distribution with the corresponding parameters is difficult. In the case of limited data, confidence bounds are essential to quantify the uncertainty associated with the distribution. Usually, 90 and 95% confidence intervals are computed for material properties. Failure properties are then computed with the confidence bounds. Best estimates and the confidence bounds on the best estimate of the cumulative probability function for R-S (strength minus stress) are plotted. The methodologies and the results from these analyses will be discussed in the presentation.

  17. Empirical likelihood-based confidence intervals for the sensitivity of a continuous-scale diagnostic test at a fixed level of specificity.

    PubMed

    Qin, Gengsheng; Davis, Angela E; Jing, Bing-Yi

    2011-06-01

    For a continuous-scale diagnostic test, it is often of interest to find the range of the sensitivity of the test at the cut-off that yields a desired specificity. In this article, we first define a profile empirical likelihood ratio for the sensitivity of a continuous-scale diagnostic test and show that its limiting distribution is a scaled chi-square distribution. We then propose two new empirical likelihood-based confidence intervals for the sensitivity of the test at a fixed level of specificity by using the scaled chi-square distribution. Simulation studies are conducted to compare the finite sample performance of the newly proposed intervals with the existing intervals for the sensitivity in terms of coverage probability. A real example is used to illustrate the application of the recommended methods.

  18. Health significance and statistical uncertainty. The value of P-value.

    PubMed

    Consonni, Dario; Bertazzi, Pier Alberto

    2017-10-27

    The P-value is widely used as a summary statistic of scientific results. Unfortunately, there is a widespread tendency to dichotomize its value into "P<0.05" (defined as "statistically significant") and "P>0.05" ("statistically not significant"), with the former implying a "positive" result and the latter a "negative" one. Our objective is to show the unsuitability of such an approach when evaluating the effects of environmental and occupational risk factors. We provide examples of distorted use of the P-value and of the negative consequences of such a black-and-white vision for science and public health. The rigid interpretation of the P-value as a dichotomy favors confusion between health relevance and statistical significance, discourages thoughtful thinking, and diverts attention from what really matters: health significance. A much better way to express and communicate scientific results involves reporting effect estimates (e.g., risks, risk ratios or risk differences) and their confidence intervals (CI), which summarize and convey both health significance and statistical uncertainty. Unfortunately, many researchers do not usually consider the whole interval of the CI but only examine whether it includes the null value, thereby degrading this procedure to the same P-value dichotomy (statistically significant or not). In reporting the statistical results of scientific research, present effect estimates with their confidence intervals, and do not qualify the P-value as "significant" or "not significant".
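
    As a concrete example of the recommended reporting style, a risk ratio with its 95% CI (log-normal approximation) conveys both magnitude and precision, where a bare P-value conveys neither; the cohort counts below are hypothetical:

```python
import math

def risk_ratio_ci(a, n1, b, n2, z=1.959964):
    """Risk ratio for exposed (a cases / n1) vs unexposed (b cases / n2)
    with a 95% CI from the standard log-normal approximation."""
    rr = (a / n1) / (b / n2)
    se_log = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    return rr, rr * math.exp(-z * se_log), rr * math.exp(z * se_log)

# Hypothetical cohort: 30/200 cases among exposed, 15/200 among unexposed.
rr, lo, hi = risk_ratio_ci(30, 200, 15, 200)
```

    Reporting "RR 2.0, 95% CI 1.11 to 3.60" tells the reader the estimated effect is a doubling of risk but that values anywhere from a modest 11% increase to more than a tripling remain plausible.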

  19. A spreadsheet template compatible with Microsoft Excel and iWork Numbers that returns the simultaneous confidence intervals for all pairwise differences between multiple sample means.

    PubMed

    Brown, Angus M

    2010-04-01

    The objective of the method described in this paper is to develop a spreadsheet template for the purpose of comparing multiple sample means. An initial analysis of variance (ANOVA) test on the data returns F, the test statistic. If F is larger than the critical F value drawn from the F distribution at the appropriate degrees of freedom, convention dictates rejection of the null hypothesis and allows subsequent multiple comparison testing to determine where the inequalities between the sample means lie. A variety of multiple comparison methods are described that return the 95% confidence intervals for differences between means using an inclusive pairwise comparison of the sample means. Copyright © 2009 Elsevier Ireland Ltd. All rights reserved.
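    The two-step procedure described, ANOVA followed by simultaneous pairwise 95% CIs, can be sketched in Python rather than a spreadsheet; the Tukey-style intervals below are built from the studentized-range distribution, on simulated data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Three samples with a real difference in means.
groups = [rng.normal(mu, 1.0, 20) for mu in (0.0, 0.5, 2.0)]

# Step 1: one-way ANOVA returns F, the test statistic.
F, p = stats.f_oneway(*groups)

# Step 2: simultaneous 95% CIs for all pairwise mean differences
# (Tukey-Kramer form, using the studentized-range quantile q).
k = len(groups)
n = sum(g.size for g in groups)
df_err = n - k
mse = sum(((g - g.mean()) ** 2).sum() for g in groups) / df_err
q = stats.studentized_range.ppf(0.95, k, df_err)
for i in range(k):
    for j in range(i + 1, k):
        diff = groups[i].mean() - groups[j].mean()
        half = (q / np.sqrt(2)) * np.sqrt(mse * (1 / groups[i].size + 1 / groups[j].size))
        print(f"mean {i} - mean {j}: {diff:+.2f}, CI ({diff - half:.2f}, {diff + half:.2f})")
```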

  20. A mathematical framework to quantify the masking effect associated with the confidence intervals of measures of disproportionality

    PubMed Central

    Maignen, François; Hauben, Manfred; Dogné, Jean-Michel

    2017-01-01

    Background: The lower bound of the 95% confidence interval of measures of disproportionality (Lower95CI) is widely used in signal detection. Masking is a statistical issue by which true signals of disproportionate reporting are hidden by the presence of other medicines. The primary objective of our study is to develop and validate a mathematical framework for assessing the masking effect of Lower95CI. Methods: We developed our new algorithm based on the masking ratio (MR) developed for the measures of disproportionality, and propose an MR for the Lower95CI (MRCI). A simulation study to validate this algorithm was also conducted. Results: We established the existence of a very close mathematical relation between MR and MRCI. For a given drug–event pair, the same product is responsible for the highest masking effect with both the measure of disproportionality and its Lower95CI, and the extent of masking is likely to be very similar across the two methods. The unmasking exercise reveals a large proportion of identical drug–event associations affected by a substantial masking effect, whether the proportional reporting ratio (PRR) or its confidence interval is used. Conclusion: The detection of the masking effect of Lower95CI can be automated. The real benefits of this unmasking in terms of new true-positive signals (ratio of true positives to false positives) or time gained in revealing signals have not been fully assessed; these benefits should be demonstrated in prospective studies. PMID:28845231
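    As a minimal sketch of the disproportionality statistic underlying the record (the standard PRR and its Lower95CI from a 2×2 table, not the authors' masking algorithm; the counts are invented):

```python
import math

def prr_lower95(a, b, c, d):
    """Proportional reporting ratio and the lower bound of its 95% CI.

    2x2 disproportionality table:
        a = reports of the event with the drug of interest
        b = reports of other events with the drug
        c = reports of the event with all other drugs
        d = reports of other events with all other drugs
    """
    prr = (a / (a + b)) / (c / (c + d))
    # Standard error of ln(PRR).
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    return prr, prr * math.exp(-1.96 * se)

prr, lower = prr_lower95(a=30, b=970, c=200, d=98800)
print(f"PRR = {prr:.2f}, Lower95CI = {lower:.2f}")
```

    A Lower95CI above 1 is the usual signal criterion; masking occurs when reports from other strongly associated drugs inflate the denominator and drag this bound below the threshold.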

  1. Dominance-based ranking functions for interval-valued intuitionistic fuzzy sets.

    PubMed

    Chen, Liang-Hsuan; Tu, Chien-Cheng

    2014-08-01

    The ranking of interval-valued intuitionistic fuzzy sets (IvIFSs) is difficult since they include the interval values of membership and nonmembership. This paper proposes ranking functions for IvIFSs based on the dominance concept. The proposed ranking functions consider the degree to which an IvIFS dominates and is not dominated by other IvIFSs. Based on the bivariate framework and the dominance concept, the functions incorporate not only the boundary values of membership and nonmembership, but also the relative relations among IvIFSs in comparisons. The dominance-based ranking functions include bipolar evaluations with a parameter that allows the decision-maker to reflect his actual attitude in allocating the various kinds of dominance. The relationship for two IvIFSs that satisfy the dual couple is defined based on four proposed ranking functions. Importantly, the proposed ranking functions can achieve a full ranking for all IvIFSs. Two examples are used to demonstrate the applicability and distinctiveness of the proposed ranking functions.

  2. Methods for estimating confidence intervals in interrupted time series analyses of health interventions.

    PubMed

    Zhang, Fang; Wagner, Anita K; Soumerai, Stephen B; Ross-Degnan, Dennis

    2009-02-01

    Interrupted time series (ITS) is a strong quasi-experimental research design, which is increasingly applied to estimate the effects of health services and policy interventions. We describe and illustrate two methods for estimating confidence intervals (CIs) around absolute and relative changes in outcomes calculated from segmented regression parameter estimates. We used multivariate delta and bootstrapping methods (BMs) to construct CIs around relative changes in level and trend, and around absolute changes in outcome based on segmented linear regression analyses of time series data corrected for autocorrelated errors. Using previously published time series data, we estimated CIs around the effect of prescription alerts for interacting medications with warfarin on the rate of prescriptions per 10,000 warfarin users per month. Both the multivariate delta method (MDM) and the BM produced similar results. BM is preferred for calculating CIs of relative changes in outcomes of time series studies, because it does not require large sample sizes when parameter estimates are obtained correctly from the model. Caution is needed when sample size is small.
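    A hedged sketch of the general idea, segmented regression with a bootstrap CI for the relative level change, on simulated data; unlike the paper's method, it omits the correction for autocorrelated errors:

```python
import numpy as np

rng = np.random.default_rng(2)

# Monthly rate with an intervention at month 24: a level drop of 8 units.
t = np.arange(48)
post = (t >= 24).astype(float)
y = 50 + 0.2 * t - 8 * post - 0.1 * post * (t - 24) + rng.normal(0, 1.5, t.size)

def rel_level_change(y):
    """Segmented OLS: intercept, baseline trend, level change, trend change.
    Returns the level change relative to the predicted counterfactual level."""
    X = np.column_stack([np.ones_like(t, dtype=float), t, post, post * (t - 24)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    counterfactual = beta[0] + beta[1] * 24  # predicted level absent intervention
    return beta[2] / counterfactual

est = rel_level_change(y)

# Residual bootstrap around the fitted series (a simplification: it ignores
# any remaining serial correlation in the residuals).
X = np.column_stack([np.ones_like(t, dtype=float), t, post, post * (t - 24)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
reps = [rel_level_change(X @ beta + rng.choice(resid, resid.size, replace=True))
        for _ in range(2000)]
lo, hi = np.quantile(reps, [0.025, 0.975])
print(f"relative level change {est:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")
```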

  3. Uncertainty in Population Growth Rates: Determining Confidence Intervals from Point Estimates of Parameters

    PubMed Central

    Devenish Nelson, Eleanor S.; Harris, Stephen; Soulsbury, Carl D.; Richards, Shane A.; Stephens, Philip A.

    2010-01-01

    Background: Demographic models are widely used in conservation and management, and their parameterisation often relies on data collected for other purposes. When underlying data lack clear indications of associated uncertainty, modellers often fail to account for that uncertainty in model outputs, such as estimates of population growth. Methodology/Principal Findings: We applied a likelihood approach to infer uncertainty retrospectively from point estimates of vital rates. Combining this with resampling techniques and projection modelling, we show that confidence intervals for population growth estimates are easy to derive. We used similar techniques to examine the effects of sample size on uncertainty. Our approach is illustrated using data on the red fox, Vulpes vulpes, a predator of ecological and cultural importance, and the most widespread extant terrestrial mammal. We show that uncertainty surrounding estimated population growth rates can be high, even for relatively well-studied populations. Halving that uncertainty typically requires a quadrupling of sampling effort. Conclusions/Significance: Our results compel caution when comparing demographic trends between populations without accounting for uncertainty. Our methods will be widely applicable to demographic studies of many species. PMID:21049049
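    The resampling-plus-projection approach can be illustrated with a toy Leslie matrix; the vital rates and their assumed uncertainties below are invented for illustration, not the red fox data:

```python
import numpy as np

rng = np.random.default_rng(3)

def growth_rate(f, s):
    """Dominant eigenvalue (lambda) of a 3-class Leslie matrix with
    fecundities f and survival probabilities s."""
    L = np.array([[f[0], f[1], f[2]],
                  [s[0], 0.0,  0.0 ],
                  [0.0,  s[1], 0.0 ]])
    return max(abs(np.linalg.eigvals(L)))

# Hypothetical point estimates of vital rates.
f_hat = np.array([0.4, 1.2, 1.5])
s_hat = np.array([0.55, 0.7])
lam_hat = growth_rate(f_hat, s_hat)

# Propagate parameter uncertainty: resample each vital rate from an
# assumed sampling distribution, recompute lambda each time.
lams = []
for _ in range(2000):
    f = rng.normal(f_hat, 0.1 * f_hat)           # assumed 10% CV on fecundity
    s = np.clip(rng.normal(s_hat, 0.05), 0, 1)   # assumed SE 0.05 on survival
    lams.append(growth_rate(np.clip(f, 0, None), s))
lo, hi = np.quantile(lams, [0.025, 0.975])
print(f"lambda = {lam_hat:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```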

  4. Professional values, self-esteem, and ethical confidence of baccalaureate nursing students.

    PubMed

    Iacobucci, Trisha A; Daly, Barbara J; Lindell, Debbie; Griffin, Mary Quinn

    2013-06-01

    Professional identity and competent ethical behaviors of nursing students are commonly developed through curricular inclusion of professional nursing values education. Despite the enactment of this approach, nursing students continue to express difficulty in managing ethical conflicts encountered in their practice. This descriptive correlational study explores the relationships between professional nursing values, self-esteem, and ethical decision making among senior baccalaureate nursing students. A convenience sample of 47 senior nursing students from the United States was surveyed for their level of internalized professional nursing values (Revised Professional Nursing Values Scale), level of self-esteem (Rosenberg's Self-Esteem Scale), and perceived level of confidence in ethical decision making. A significant positive relationship (p < 0.05) was found between nursing students' professional nursing values and levels of self-esteem. The results of this study can be useful to nursing educators whose efforts are focused on promoting professional identity development and competent ethical behaviors of future nurses.

  5. Sensitivity Analysis of Multicriteria Choice to Changes in Intervals of Value Tradeoffs

    NASA Astrophysics Data System (ADS)

    Podinovski, V. V.

    2018-03-01

    An approach is proposed for the sensitivity (stability) analysis of nondominated alternatives to changes in the bounds of the intervals of value tradeoffs, where the alternatives are selected on the basis of interval data on criteria tradeoffs. Computational methods are developed for analyzing the sensitivity of individual nondominated alternatives and of the set of such alternatives as a whole.

  6. Evaluating the Impact of Guessing and Its Interactions With Other Test Characteristics on Confidence Interval Procedures for Coefficient Alpha

    PubMed Central

    Paek, Insu

    2015-01-01

    The effect of guessing on the point estimate of coefficient alpha has been studied in the literature, but the impact of guessing and its interactions with other test characteristics on the interval estimators for coefficient alpha has not been fully investigated. This study examined the impact of guessing and its interactions with other test characteristics on four confidence interval (CI) procedures for coefficient alpha in terms of coverage rate (CR), length, and the degree of asymmetry of CI estimates. In addition, interval estimates of coefficient alpha when data follow the essentially tau-equivalent condition were investigated as a supplement to the case of dichotomous data with examinee guessing. For dichotomous data with guessing, the results did not reveal salient negative effects of guessing and its interactions with other test characteristics (sample size, test length, coefficient alpha levels) on CR and the degree of asymmetry, but the effect of guessing was salient as a main effect and an interaction effect with sample size on the length of the CI estimates, making longer CI estimates as guessing increases, especially when combined with a small sample size. Other important effects (e.g., CI procedures on CR) are also discussed. PMID:29795863
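    As a small illustration of interval estimation for coefficient alpha (a simple percentile bootstrap, not necessarily one of the four CI procedures compared in the study; the dichotomous item data are simulated):

```python
import numpy as np

rng = np.random.default_rng(4)

def cronbach_alpha(X):
    """Coefficient alpha for an (examinees x items) score matrix."""
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Simulate dichotomous items driven by a common ability (so alpha is high).
n, k = 300, 20
ability = rng.normal(0, 1, n)
difficulty = rng.normal(0, 0.5, k)
p = 1 / (1 + np.exp(-(ability[:, None] - difficulty)))
X = (rng.uniform(size=(n, k)) < p).astype(float)

alpha_hat = cronbach_alpha(X)
# Percentile-bootstrap CI: resample examinees (rows) with replacement.
reps = [cronbach_alpha(X[rng.integers(0, n, n)]) for _ in range(1000)]
lo, hi = np.quantile(reps, [0.025, 0.975])
print(f"alpha = {alpha_hat:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```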

  7. Confidence interval estimation of the difference between two sensitivities to the early disease stage.

    PubMed

    Dong, Tuochuan; Kang, Le; Hutson, Alan; Xiong, Chengjie; Tian, Lili

    2014-03-01

    Although most of the statistical methods for diagnostic studies focus on disease processes with binary disease status, many diseases can be naturally classified into three ordinal diagnostic categories, that is, normal, early stage, and fully diseased. For such diseases, the volume under the ROC surface (VUS) is the most commonly used index of diagnostic accuracy. Because the early disease stage is most likely the optimal time window for therapeutic intervention, the sensitivity to the early diseased stage has been suggested as another diagnostic measure. For the purpose of comparing the diagnostic abilities on early disease detection between two markers, it is of interest to estimate the confidence interval of the difference between sensitivities to the early diseased stage. In this paper, we present both parametric and non-parametric methods for this purpose. An extensive simulation study is carried out for a variety of settings for the purpose of evaluating and comparing the performance of the proposed methods. A real example of Alzheimer's disease (AD) is analyzed using the proposed approaches. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Confidence intervals for differences between volumes under receiver operating characteristic surfaces (VUS) and generalized Youden indices (GYIs).

    PubMed

    Yin, Jingjing; Nakas, Christos T; Tian, Lili; Reiser, Benjamin

    2018-03-01

    This article explores both existing and new methods for the construction of confidence intervals for differences of indices of diagnostic accuracy of competing pairs of biomarkers in three-class classification problems and fills the methodological gaps for both parametric and non-parametric approaches in the receiver operating characteristic surface framework. The most widely used such indices are the volume under the receiver operating characteristic surface and the generalized Youden index. We describe implementation of all methods and offer insight regarding the appropriateness of their use through a large simulation study with different distributional and sample size scenarios. Methods are illustrated using data from the Alzheimer's Disease Neuroimaging Initiative study, where assessment of cognitive function naturally results in a three-class classification setting.
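    The empirical VUS for three ordered diagnostic classes is simply the proportion of correctly ordered triples, one marker value per class; a minimal sketch on simulated marker values (a useless marker has VUS = 1/6):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(5)

def empirical_vus(x, y, z):
    """Empirical volume under the ROC surface: the proportion of triples
    (one value per class) that are correctly ordered x < y < z."""
    x, y, z = map(np.asarray, (x, y, z))
    correct = sum(1 for a, b, c in product(x, y, z) if a < b < c)
    return correct / (x.size * y.size * z.size)

# Three ordinal diagnostic classes: normal, early stage, fully diseased.
normal = rng.normal(0.0, 1.0, 30)
early = rng.normal(1.5, 1.0, 30)
full = rng.normal(3.0, 1.0, 30)
vus = empirical_vus(normal, early, full)
print(f"VUS = {vus:.3f}")
```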

  9. Study of FibroTest and hyaluronic acid biological variation in healthy volunteers and comparison of serum hyaluronic acid biological variation between chronic liver diseases of different etiology and fibrotic stage using confidence intervals.

    PubMed

    Istaces, Nicolas; Gulbis, Béatrice

    2015-07-01

    Personalized ranges of liver fibrosis serum biomarkers such as FibroTest or hyaluronic acid could be used for early detection of fibrotic changes in patients with progressive chronic liver disease. Our aim was to generate reliable biological variation estimates for these two biomarkers with confidence intervals for within-subject biological variation and reference change value. Nine fasting healthy volunteers and 66 chronic liver disease patients were included. Biological variation estimates were calculated for FibroTest in healthy volunteers, and for hyaluronic acid in healthy volunteers and chronic liver disease patients stratified by etiology and liver fibrosis stage. In healthy volunteers, within-subject biological coefficient of variation (with 95% confidence intervals) and index of individuality were 20% (16%-28%) and 0.6 for FibroTest and 34% (27%-47%) and 0.79 for hyaluronic acid, respectively. Overall hyaluronic acid within-subject biological coefficient of variation was similar among non-alcoholic fatty liver disease and chronic hepatitis C with 41% (34%-52%) and 45% (39%-55%), respectively, in contrast to chronic hepatitis B with 170% (140%-215%). Hyaluronic acid within-subject biological coefficients of variation were similar between F0-F1, F2 and F3 liver fibrosis stages in non-alcoholic fatty liver disease with 34% (25%-49%), 41% (31%-59%) and 34% (23%-62%), respectively, and in chronic hepatitis C with 34% (27%-47%), 33% (26%-45%) and 38% (27%-65%), respectively. However, corresponding hyaluronic acid indexes of individuality were lower in the higher fibrosis stages. Non-overlapping confidence intervals of biological variation estimates allowed us to detect significant differences regarding hyaluronic acid biological variation between chronic liver disease subgroups. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
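    The reference change value used in studies like this has a standard closed form combining analytical and within-subject biological variation; a minimal sketch, where the 5% analytical CV is an assumption for illustration:

```python
import math

def reference_change_value(cv_analytical, cv_within, z=1.96):
    """Reference change value (%): the smallest difference between two
    serial results in one patient that is unlikely to be explained by
    analytical plus within-subject biological variation alone."""
    return math.sqrt(2) * z * math.sqrt(cv_analytical**2 + cv_within**2)

# Example with the healthy-volunteer estimate from the record:
# hyaluronic acid CVi = 34%, with an assumed analytical CV of 5%.
rcv = reference_change_value(cv_analytical=5.0, cv_within=34.0)
print(f"RCV = {rcv:.0f}%")
```

    A serial change larger than the RCV suggests a real change in the patient rather than noise, which is the rationale for "personalized ranges" of fibrosis biomarkers.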

  10. The Use of One-Sample Prediction Intervals for Estimating CO2 Scrubber Canister Durations

    DTIC Science & Technology

    2012-10-01

    Grade and 812 D-Grade Sofnolime. Definitions: According to Devore, a CI (confidence interval) refers to a parameter, or population ... characteristic, whose value is fixed but unknown to us. In contrast, a future value of Y is not a parameter but instead a random variable; for this
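    The CI-versus-prediction-interval distinction the snippet quotes can be shown with the usual t-based formulas; the canister-duration data below are invented for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical CO2 scrubber canister durations (hours); illustrative only.
x = np.array([4.9, 5.3, 5.1, 4.7, 5.0, 5.4, 4.8, 5.2])
n, mean, s = x.size, x.mean(), x.std(ddof=1)
t = stats.t.ppf(0.975, n - 1)

# CI: range for the fixed-but-unknown population mean duration.
ci = (mean - t * s / np.sqrt(n), mean + t * s / np.sqrt(n))
# PI: range for a single future observation, a random variable, hence
# the extra "+1" under the square root and the wider interval.
pi = (mean - t * s * np.sqrt(1 + 1 / n), mean + t * s * np.sqrt(1 + 1 / n))
print(f"95% CI for the mean: ({ci[0]:.2f}, {ci[1]:.2f})")
print(f"95% PI for one future duration: ({pi[0]:.2f}, {pi[1]:.2f})")
```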

  11. Neutron multiplicity counting: Confidence intervals for reconstruction parameters

    DOE PAGES

    Verbeke, Jerome M.

    2016-03-09

    From nuclear materials accountability to homeland security, the need for improved nuclear material detection, assay, and authentication has grown over the past decades. Starting in the 1940s, neutron multiplicity counting techniques have enabled quantitative evaluation of masses and multiplications of fissile materials. In this paper, we propose a new method to compute uncertainties on these parameters using a model-based sequential Bayesian processor, resulting in credible regions in the fissile material mass and multiplication space. These uncertainties will enable us to evaluate quantitatively proposed improvements to the theoretical fission chain model. Additionally, because the processor can calculate uncertainties in real time, it is a useful tool in applications such as portal monitoring: monitoring can stop as soon as a preset confidence of non-threat is reached.

  12. A methodology for airplane parameter estimation and confidence interval determination in nonlinear estimation problems. Ph.D. Thesis - George Washington Univ., Apr. 1985

    NASA Technical Reports Server (NTRS)

    Murphy, P. C.

    1986-01-01

    An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. With the fitted surface, sensitivity information can be updated at each iteration with less computational effort than that required by either a finite-difference method or integration of the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, and thus provides flexibility to use model equations in any convenient format. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. The degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels and to predict the degree of agreement between CR bounds and search estimates.

  13. ESTABLISHMENT OF A FIBRINOGEN REFERENCE INTERVAL IN ORNATE BOX TURTLES (TERRAPENE ORNATA ORNATA).

    PubMed

    Parkinson, Lily; Olea-Popelka, Francisco; Klaphake, Eric; Dadone, Liza; Johnston, Matthew

    2016-09-01

    This study sought to establish a reference interval for fibrinogen in healthy ornate box turtles (Terrapene ornata ornata). A total of 48 turtles were enrolled, with 42 turtles deemed to be noninflammatory and thus fitting the inclusion criteria and utilized to estimate a fibrinogen reference interval. Turtles were excluded based upon physical examination and blood work abnormalities. A Shapiro-Wilk normality test indicated that the noninflammatory turtle fibrinogen values were normally distributed (Gaussian distribution), with an average of 108 mg/dl and a 95% confidence interval of the mean of 97.9-117 mg/dl. Those turtles excluded from the reference interval because of abnormalities affecting their health did not have significantly different fibrinogen values (P = 0.313). A reference interval for healthy ornate box turtles was calculated. Further investigation into the utility of fibrinogen measurement for clinical usage in ornate box turtles is warranted.
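    Note that a 95% confidence interval of the mean (as reported above) is much narrower than a 95% reference interval, the central range expected to cover most healthy individuals; a sketch with simulated values loosely matching the reported mean (the 30 mg/dl SD is an assumption):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
# Simulated fibrinogen values (mg/dl) for healthy animals; illustrative only.
x = rng.normal(108, 30, 42)
n, mean, s = x.size, x.mean(), x.std(ddof=1)

# 95% reference interval: the central range expected to cover 95% of
# healthy individuals (mean +/- 1.96 SD under normality).
ref = (mean - 1.96 * s, mean + 1.96 * s)
# 95% CI of the mean: the precision of the mean itself; much narrower.
t = stats.t.ppf(0.975, n - 1)
ci = (mean - t * s / np.sqrt(n), mean + t * s / np.sqrt(n))
print(f"reference interval ({ref[0]:.0f}, {ref[1]:.0f}) mg/dl")
print(f"CI of the mean    ({ci[0]:.0f}, {ci[1]:.0f}) mg/dl")
```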

  14. The prognostic value of standardized reference values for speckle-tracking global longitudinal strain in hypertrophic cardiomyopathy.

    PubMed

    Hartlage, Gregory R; Kim, Jonathan H; Strickland, Patrick T; Cheng, Alan C; Ghasemzadeh, Nima; Pernetz, Maria A; Clements, Stephen D; Williams, B Robinson

    2015-03-01

    Speckle-tracking left ventricular global longitudinal strain (GLS) assessment may provide substantial prognostic information for hypertrophic cardiomyopathy (HCM) patients. Reference values for GLS have been recently published. We aimed to evaluate the prognostic value of standardized reference values for GLS in HCM patients. An analysis of HCM clinic patients who underwent GLS was performed. GLS was defined as normal (more negative or equal to -16%) and abnormal (less negative than -16%) based on recently published reference values. Patients were followed for a composite of events including heart failure hospitalization, sustained ventricular arrhythmia, and all-cause death. The power of GLS to predict outcomes was assessed relative to traditional clinical and echocardiographic variables present in HCM. 79 HCM patients were followed for a median of 22 months (interquartile range 9-30 months) after imaging. During follow-up, 15 patients (19%) met the primary outcome. Abnormal GLS was the only echocardiographic variable independently predictive of the primary outcome [multivariate hazard ratio 5.05 (95% confidence interval 1.09-23.4, p = 0.038)]. When combined with traditional clinical variables, abnormal GLS remained independently predictive of the primary outcome [multivariate hazard ratio 5.31 (95% confidence interval 1.18-24, p = 0.030)]. In a model including the strongest clinical and echocardiographic predictors of the primary outcome, abnormal GLS demonstrated significant incremental benefit for risk stratification [net reclassification improvement 0.75 (95% confidence interval 0.21-1.23, p < 0.0001)]. Abnormal GLS is an independent predictor of adverse outcomes in HCM patients. Standardized use of GLS may provide significant incremental value over traditional variables for risk stratification.

  15. Overuse of short-interval bone densitometry: assessing rates of low-value care.

    PubMed

    Morden, N E; Schpero, W L; Zaha, R; Sequist, T D; Colla, C H

    2014-09-01

    We evaluated the prevalence and geographic variation of short-interval (repeated in under 2 years) dual-energy X-ray absorptiometry tests (DXAs) among Medicare beneficiaries. Short-interval DXA use varied across regions (coefficient of variation = 0.64), and unlike other DXAs, rates decreased with payment cuts. The American College of Rheumatology, through the Choosing Wisely initiative, identified measuring bone density more often than every 2 years as care "physicians and patients should question." We measured the prevalence and described the geographic variation of short-interval (repeated in under 2 years) DXAs among Medicare beneficiaries and estimated the cost of this testing and its responsiveness to payment change. Using 100% Medicare claims data, 2006-2011, we identified DXAs and short-interval DXAs for female Medicare beneficiaries over age 66. We determined the population rate of DXAs and short-interval DXAs, as well as Medicare spending on short-interval DXAs, nationally and by hospital referral region (HRR). DXA use was stable 2008-2011 (12.4 to 11.5 DXAs per 100 women). DXA use varied across HRRs: in 2011, overall DXA use ranged from 6.3 to 23.0 per 100 women (coefficient of variation = 0.18), and short-interval DXAs ranged from 0.3 to 8.0 per 100 women (coefficient of variation = 0.64). Short-interval DXA use fluctuated substantially with payment changes; other DXAs did not. Short-interval DXAs, which represented 10.1% of all DXAs, cost Medicare approximately US$16 million in 2011. One out of ten DXAs was administered in a time frame shorter than recommended and at a substantial cost to Medicare. DXA use varied across regions. Short-interval DXA use was responsive to reimbursement changes, suggesting carefully designed policy and payment reform may reduce this care identified by rheumatologists as low value.

  16. Estimation and confidence intervals for empirical mixing distributions

    USGS Publications Warehouse

    Link, W.A.; Sauer, J.R.

    1995-01-01

    Questions regarding collections of parameter estimates can frequently be expressed in terms of an empirical mixing distribution (EMD). This report discusses empirical Bayes estimation of an EMD, with emphasis on the construction of interval estimates. Estimation of the EMD is accomplished by substitution of estimates of prior parameters in the posterior mean of the EMD. This procedure is examined in a parametric model (the normal-normal mixture) and in a semi-parametric model. In both cases, the empirical Bayes bootstrap of Laird and Louis (1987, Journal of the American Statistical Association 82, 739-757) is used to assess the variability of the estimated EMD arising from the estimation of prior parameters. The proposed methods are applied to a meta-analysis of population trend estimates for groups of birds.

  17. Performance analysis of complex repairable industrial systems using PSO and fuzzy confidence interval based methodology.

    PubMed

    Garg, Harish

    2013-03-01

    The main objective of the present paper is to propose a methodology for analyzing the behavior of the complex repairable industrial systems. In real-life situations, it is difficult to find the most optimal design policies for MTBF (mean time between failures), MTTR (mean time to repair) and related costs by utilizing available resources and uncertain data. For this, the availability-cost optimization model has been constructed for determining the optimal design parameters for improving the system design efficiency. The uncertainties in the data related to each component of the system are estimated with the help of fuzzy and statistical methodology in the form of the triangular fuzzy numbers. Using these data, the various reliability parameters, which affects the system performance, are obtained in the form of the fuzzy membership function by the proposed confidence interval based fuzzy Lambda-Tau (CIBFLT) methodology. The computed results by CIBFLT are compared with the existing fuzzy Lambda-Tau methodology. Sensitivity analysis on the system MTBF has also been addressed. The methodology has been illustrated through a case study of washing unit, the main part of the paper industry. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.

  18. Standard errors and confidence intervals for variable importance in random forest regression, classification, and survival.

    PubMed

    Ishwaran, Hemant; Lu, Min

    2018-06-04

    Random forests are a popular nonparametric tree ensemble procedure with broad applications to data analysis. While its widespread popularity stems from its prediction performance, an equally important feature is that it provides a fully nonparametric measure of variable importance (VIMP). A current limitation of VIMP, however, is that no systematic method exists for estimating its variance. As a solution, we propose a subsampling approach that can be used to estimate the variance of VIMP and for constructing confidence intervals. The method is general enough that it can be applied to many useful settings, including regression, classification, and survival problems. Using extensive simulations, we demonstrate the effectiveness of the subsampling estimator and in particular find that the delete-d jackknife variance estimator, a close cousin, is especially effective under low subsampling rates due to its bias correction properties. These 2 estimators are highly competitive when compared with the .164 bootstrap estimator, a modified bootstrap procedure designed to deal with ties in out-of-sample data. Most importantly, subsampling is computationally fast, thus making it especially attractive for big data settings. Copyright © 2018 John Wiley & Sons, Ltd.
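    The subsampling variance idea can be sketched generically: compute the statistic on many size-b subsamples drawn without replacement and rescale by b/n. Shown here for the sample mean, where the exact answer is known (illustrative only, not the VIMP implementation):

```python
import numpy as np

rng = np.random.default_rng(7)

def subsample_var(x, statistic, b, K=500):
    """Subsampling variance estimate for statistic(x).

    Compute the statistic on K random subsamples of size b (without
    replacement) and rescale: var(theta_n) ~ (b/n) * var(theta_b)."""
    n = x.shape[0]
    reps = [statistic(x[rng.choice(n, size=b, replace=False)]) for _ in range(K)]
    return (b / n) * np.var(reps, ddof=1)

x = rng.normal(0, 2, 1000)
v_sub = subsample_var(x, np.mean, b=100)
v_exact = x.var(ddof=1) / x.size  # known formula for the sample mean
print(f"subsampling: {v_sub:.5f}, exact: {v_exact:.5f}")
```

    The appeal for big data is that each replicate is computed on only b of the n observations, so the cost per replicate is a fraction of a full refit.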

  19. Confidence Intervals for Laboratory Sonic Boom Annoyance Tests

    NASA Technical Reports Server (NTRS)

    Rathsam, Jonathan; Christian, Andrew

    2016-01-01

    Commercial supersonic flight is currently forbidden over land because sonic booms have historically caused unacceptable annoyance levels in overflown communities. NASA is providing data and expertise to noise regulators as they consider relaxing the ban for future quiet supersonic aircraft. One deliverable NASA will provide is a predictive model for indoor annoyance to aid in setting an acceptable quiet sonic boom threshold. A laboratory study was conducted to determine how indoor vibrations caused by sonic booms affect annoyance judgments. The test method required finding the point of subjective equality (PSE) between sonic boom signals that cause vibrations and signals not causing vibrations played at various amplitudes. This presentation focuses on a few statistical techniques for estimating the interval around the PSE. The techniques examined are the Delta Method, Parametric and Nonparametric Bootstrapping, and Bayesian Posterior Estimation.

  20. The Logic of Summative Confidence

    ERIC Educational Resources Information Center

    Gugiu, P. Cristian

    2007-01-01

    The constraints of conducting evaluations in real-world settings often necessitate the implementation of less than ideal designs. Unfortunately, the standard method for estimating the precision of a result (i.e., confidence intervals [CI]) cannot be used for evaluative conclusions that are derived from multiple indicators, measures, and data…

  1. Confidence in Altman-Bland plots: a critical review of the method of differences.

    PubMed

    Ludbrook, John

    2010-02-01

    1. Altman and Bland argue that the virtue of plotting differences against averages in method-comparison studies is that 95% confidence limits for the differences can be constructed. These allow authors and readers to judge whether one method of measurement could be substituted for another. 2. The technique is often misused. So I have set out, by statistical argument and worked examples, to advise pharmacologists and physiologists how best to construct these limits. 3. First, construct a scattergram of differences on averages, then calculate the line of best fit for the linear regression of differences on averages. If the slope of the regression is shown to differ from zero, there is proportional bias. 4. If there is no proportional bias and if the scatter of differences is uniform (homoscedasticity), construct 'classical' 95% confidence limits. 5. If there is proportional bias yet homoscedasticity, construct hyperbolic 95% confidence limits (prediction interval) around the line of best fit. 6. If there is proportional bias and the scatter of values for differences increases progressively as the average values increase (heteroscedasticity), log-transform the raw values from the two methods and replot differences against averages. If this eliminates proportional bias and heteroscedasticity, construct 'classical' 95% confidence limits. Otherwise, construct horizontal V-shaped 95% confidence limits around the line of best fit of differences on averages or around the weighted least products line of best fit to the original data. 7. In designing a method-comparison study, consult a qualified biostatistician, obey the rules of randomization and make replicate observations.
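    Steps 3-4 of the recipe above (regress differences on averages to test for proportional bias, then construct 'classical' limits when the scatter is uniform) can be sketched as follows, on simulated method-comparison data with a fixed bias only:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
# Two hypothetical methods measuring the same quantity.
truth = rng.uniform(50, 150, 40)
method_a = truth + rng.normal(0, 4, 40)
method_b = truth + 2 + rng.normal(0, 4, 40)   # fixed bias of +2 units

diff = method_b - method_a
avg = (method_a + method_b) / 2

# Step 3: line of best fit of differences on averages; a slope that
# differs from zero indicates proportional bias.
slope, intercept, r, p_slope, se = stats.linregress(avg, diff)

# Step 4: no proportional bias and homoscedasticity -> 'classical'
# 95% limits of agreement.
bias, sd = diff.mean(), diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)
print(f"slope p-value: {p_slope:.2f} (small values suggest proportional bias)")
print(f"bias {bias:.2f}, 95% limits of agreement ({loa[0]:.2f}, {loa[1]:.2f})")
```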

  2. Razonamiento de Estudiantes Universitarios sobre Variabilidad e Intervalos de Confianza en un Contexto Inferencial Informal = University Students' Reasoning on Variability and Confidence Intervals in Inferential Informal Context

    ERIC Educational Resources Information Center

    Inzunsa Cazares, Santiago

    2016-01-01

    This article presents the results of a qualitative research with a group of 15 university students of social sciences on informal inferential reasoning developed in a computer environment on concepts involved in the confidence intervals. The results indicate that students developed a correct reasoning about sampling variability and visualized…

  3. Adjusted Wald Confidence Interval for a Difference of Binomial Proportions Based on Paired Data

    ERIC Educational Resources Information Center

    Bonett, Douglas G.; Price, Robert M.

    2012-01-01

    Adjusted Wald intervals for binomial proportions in one-sample and two-sample designs have been shown to perform about as well as the best available methods. The adjusted Wald intervals are easy to compute and have been incorporated into introductory statistics courses. An adjusted Wald interval for paired binomial proportions is proposed here and…
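    For the one-sample case the record mentions, the adjusted Wald interval adds roughly z²/2 successes and z²/2 failures before applying the ordinary Wald formula; a minimal sketch (the 9-of-10 example is invented):

```python
import math

def adjusted_wald(x, n, z=1.96):
    """Adjusted Wald (Agresti-Coull style) 95% CI for one proportion:
    add z^2/2 successes and z^2/2 failures, then use the Wald formula."""
    n_adj = n + z**2
    p_adj = (x + z**2 / 2) / n_adj
    half = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - half), min(1.0, p_adj + half)

lo, hi = adjusted_wald(x=9, n=10)
print(f"9/10 successes: 95% CI ({lo:.3f}, {hi:.3f})")
```

    The adjustment pulls the interval away from the boundary, which is why it behaves far better than the raw Wald interval for small n or extreme proportions.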

  4. Hypothesis Testing, "p" Values, Confidence Intervals, Measures of Effect Size, and Bayesian Methods in Light of Modern Robust Techniques

    ERIC Educational Resources Information Center

    Wilcox, Rand R.; Serang, Sarfaraz

    2017-01-01

    The article provides perspectives on p values, null hypothesis testing, and alternative techniques in light of modern robust statistical methods. Null hypothesis testing and "p" values can provide useful information provided they are interpreted in a sound manner, which includes taking into account insights and advances that have…

  5. Estimation of parameters of dose volume models and their confidence limits

    NASA Astrophysics Data System (ADS)

    van Luijk, P.; Delvigne, T. C.; Schilstra, C.; Schippers, J. M.

    2003-07-01

    Predictions of the normal-tissue complication probability (NTCP) for the ranking of treatment plans are based on fits of dose-volume models to clinical and/or experimental data. In the literature several different fit methods are used. In this work frequently used methods and techniques to fit NTCP models to dose response data for establishing dose-volume effects, are discussed. The techniques are tested for their usability with dose-volume data and NTCP models. Different methods to estimate the confidence intervals of the model parameters are part of this study. From a critical-volume (CV) model with biologically realistic parameters a primary dataset was generated, serving as the reference for this study and describable by the NTCP model. The CV model was fitted to this dataset. From the resulting parameters and the CV model, 1000 secondary datasets were generated by Monte Carlo simulation. All secondary datasets were fitted to obtain 1000 parameter sets of the CV model. Thus the 'real' spread in fit results due to statistical spreading in the data is obtained and has been compared with estimates of the confidence intervals obtained by different methods applied to the primary dataset. The confidence limits of the parameters of one dataset were estimated using the methods, employing the covariance matrix, the jackknife method and directly from the likelihood landscape. These results were compared with the spread of the parameters, obtained from the secondary parameter sets. For the estimation of confidence intervals on NTCP predictions, three methods were tested. Firstly, propagation of errors using the covariance matrix was used. Secondly, the meaning of the width of a bundle of curves that resulted from parameters that were within the one standard deviation region in the likelihood space was investigated. Thirdly, many parameter sets and their likelihood were used to create a likelihood-weighted probability distribution of the NTCP. 
It is concluded that for the…
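The secondary-dataset procedure described in this record (fit the model, simulate 1000 datasets from the fitted parameters, refit each, and read the parameter spread off the refits) is a parametric bootstrap. A minimal sketch with a one-parameter stand-in "model" (a normal mean); the CV/NTCP model itself is not reproduced here:

```python
# Parametric bootstrap: simulate secondary datasets from the fitted model,
# refit each, and take the central 95% spread of the refitted parameter.
import random
import statistics

random.seed(0)
primary = [random.gauss(50.0, 5.0) for _ in range(30)]   # stand-in dataset
mu_hat = statistics.mean(primary)
sigma_hat = statistics.stdev(primary)

refits = []
for _ in range(1000):                                    # secondary datasets
    sample = [random.gauss(mu_hat, sigma_hat) for _ in range(len(primary))]
    refits.append(statistics.mean(sample))

refits.sort()
lo, hi = refits[24], refits[974]                         # ~95% spread
print(round(lo, 2), round(hi, 2))
```

The 'real' spread obtained this way is what the covariance-matrix, jackknife, and likelihood-landscape estimates are compared against in the study.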

  6. Continuous hesitant fuzzy aggregation operators and their application to decision making under interval-valued hesitant fuzzy setting.

    PubMed

    Peng, Ding-Hong; Wang, Tie-Dan; Gao, Chang-Yuan; Wang, Hua

    2014-01-01

The interval-valued hesitant fuzzy set (IVHFS), a further generalization of the hesitant fuzzy set, overcomes the difficulty that precise membership degrees are sometimes hard to specify by permitting the membership degrees of an element in a set to take several different interval values. To efficiently and effectively aggregate interval-valued hesitant fuzzy information, in this paper we investigate continuous hesitant fuzzy aggregation operators with the aid of the continuous OWA operator; the C-HFOWA and C-HFOWG operators are presented and their essential properties are studied in detail. Then, we extend the C-HFOW operators to aggregate multiple interval-valued hesitant fuzzy elements and develop the weighted C-HFOW (WC-HFOWA and WC-HFOWG) operators, the ordered weighted C-HFOW (OWC-HFOWA and OWC-HFOWG) operators, and the synergetic weighted C-HFOW (SWC-HFOWA and SWC-HFOWG) operators; some properties are also discussed to support them. Furthermore, an SWC-HFOW operators-based approach for multicriteria decision-making problems is developed. Finally, a practical example involving the evaluation of service quality of high-tech enterprises is carried out and some comparative analyses are performed to demonstrate the applicability and effectiveness of the developed approaches.
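The building block behind all the C-HFOW variants is Yager's OWA operator: sort the arguments, then take a weighted sum of the sorted values. A minimal sketch with crisp numbers; the paper's contribution is extending this to interval-valued hesitant fuzzy elements, which is not reproduced here:

```python
# Ordered weighted averaging (OWA): weights attach to rank positions, not to
# particular arguments, so the largest value always receives the first weight.
def owa(values, weights):
    assert abs(sum(weights) - 1.0) < 1e-9   # weights must sum to one
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

print(owa([0.3, 0.9, 0.6], [0.5, 0.3, 0.2]))   # 0.5*0.9 + 0.3*0.6 + 0.2*0.3
```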

  7. Continuous Hesitant Fuzzy Aggregation Operators and Their Application to Decision Making under Interval-Valued Hesitant Fuzzy Setting

    PubMed Central

    Wang, Tie-Dan; Gao, Chang-Yuan; Wang, Hua

    2014-01-01

The interval-valued hesitant fuzzy set (IVHFS), a further generalization of the hesitant fuzzy set, overcomes the difficulty that precise membership degrees are sometimes hard to specify by permitting the membership degrees of an element in a set to take several different interval values. To efficiently and effectively aggregate interval-valued hesitant fuzzy information, in this paper we investigate continuous hesitant fuzzy aggregation operators with the aid of the continuous OWA operator; the C-HFOWA and C-HFOWG operators are presented and their essential properties are studied in detail. Then, we extend the C-HFOW operators to aggregate multiple interval-valued hesitant fuzzy elements and develop the weighted C-HFOW (WC-HFOWA and WC-HFOWG) operators, the ordered weighted C-HFOW (OWC-HFOWA and OWC-HFOWG) operators, and the synergetic weighted C-HFOW (SWC-HFOWA and SWC-HFOWG) operators; some properties are also discussed to support them. Furthermore, an SWC-HFOW operators-based approach for multicriteria decision-making problems is developed. Finally, a practical example involving the evaluation of service quality of high-tech enterprises is carried out and some comparative analyses are performed to demonstrate the applicability and effectiveness of the developed approaches. PMID:24987747

  8. Comparing interval estimates for small sample ordinal CFA models

    PubMed Central

    Natesan, Prathiba

    2015-01-01

Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased; this can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small-sample data. Six sample sizes, three factor correlations, and two factor score distributions (multivariate normal and multivariate mildly skewed) were studied, along with two Bayesian prior specifications, informative and relatively less informative. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the statistical uncertainty that comes with the data (e.g., a small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing the coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.

  9. Comparing interval estimates for small sample ordinal CFA models.

    PubMed

    Natesan, Prathiba

    2015-01-01

Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased; this can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small-sample data. Six sample sizes, three factor correlations, and two factor score distributions (multivariate normal and multivariate mildly skewed) were studied, along with two Bayesian prior specifications, informative and relatively less informative. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the statistical uncertainty that comes with the data (e.g., a small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing the coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.

  10. R package to estimate intracluster correlation coefficient with confidence interval for binary data.

    PubMed

    Chakraborty, Hrishikesh; Hossain, Akhtar

    2018-03-01

The Intracluster Correlation Coefficient (ICC) is a major parameter of interest in cluster randomized trials that measures the degree to which responses within the same cluster are correlated. Several types of ICC estimators and confidence intervals (CIs) have been suggested in the literature for binary data. Studies have compared the relative weaknesses and advantages of ICC estimators and their CIs for binary data and identified situations where one is advantageous in practical research. Commonly used statistical computing systems currently facilitate estimation of only a few variants of the ICC and its CI. To address the limitations of current statistical packages, we developed an R package, ICCbin, to facilitate estimating the ICC and its CI for binary responses using different methods. The ICCbin package is designed to provide estimates of the ICC in 16 different ways, including analysis of variance methods, moments-based estimation, direct probabilistic methods, correlation-based estimation, and a resampling method. The CI of the ICC is estimated using 5 different methods. The package also generates clustered binary data with an exchangeable correlation structure. ICCbin provides two functions for users: rcbin() generates clustered binary data, and iccbin() estimates the ICC and its CI. Users can choose the appropriate ICC and CI estimates from the wide selection in the outputs. The R package ICCbin presents very flexible and easy-to-use ways to generate clustered binary data and to estimate the ICC and its CI for binary responses using different methods. The package is freely available for use with R from the CRAN repository (https://cran.r-project.org/package=ICCbin). We believe that this package can be a very useful tool for researchers designing cluster randomized trials with binary outcomes. Copyright © 2017 Elsevier B.V. All rights reserved.
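Among the analysis-of-variance methods iccbin() offers, the classic one-way ANOVA estimator is easy to sketch in Python for equal cluster sizes. The data below are made up; for real work the ICCbin package itself should be used:

```python
# One-way ANOVA ICC: (MSB - MSW) / (MSB + (m - 1) * MSW) for k clusters of
# common size m, applied directly to 0/1 responses.
def anova_icc(clusters):
    k = len(clusters)                       # number of clusters
    m = len(clusters[0])                    # common cluster size
    n = k * m
    grand = sum(sum(c) for c in clusters) / n
    means = [sum(c) / m for c in clusters]
    msb = m * sum((mu - grand) ** 2 for mu in means) / (k - 1)
    msw = sum((y - mu) ** 2
              for c, mu in zip(clusters, means) for y in c) / (n - k)
    return (msb - msw) / (msb + (m - 1) * msw)

data = [[1, 1, 1, 0], [0, 0, 0, 1], [1, 1, 0, 1], [0, 0, 1, 0]]
print(round(anova_icc(data), 3))
```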

  11. Discussion on calculation of disease severity index values from scales with unequal intervals

    USDA-ARS?s Scientific Manuscript database

    When estimating severity of disease, a disease interval (or category) scale comprises a number of categories of known numeric values – with plant disease this is generally the percent area with symptoms (e.g., the Horsfall-Barratt (H-B) scale). Studies in plant pathology and plant breeding often use...

  12. PGA/MOEAD: a preference-guided evolutionary algorithm for multi-objective decision-making problems with interval-valued fuzzy preferences

    NASA Astrophysics Data System (ADS)

    Luo, Bin; Lin, Lin; Zhong, ShiSheng

    2018-02-01

In this research, we propose a preference-guided optimisation algorithm for multi-criteria decision-making (MCDM) problems with interval-valued fuzzy preferences. First, the interval-valued fuzzy preferences are decomposed into a series of precise and evenly distributed preference-vectors (reference directions) for the objectives to be optimised, on the basis of a uniform design strategy. Then the preference information is further incorporated into the preference-vectors based on the boundary intersection approach; meanwhile, the MCDM problem with interval-valued fuzzy preferences is reformulated into a series of single-objective optimisation sub-problems (each sub-problem corresponds to a decomposed preference-vector). Finally, a preference-guided optimisation algorithm based on MOEA/D (multi-objective evolutionary algorithm based on decomposition) is proposed to solve the sub-problems in a single run. The proposed algorithm incorporates the preference-vectors within the optimisation process, guiding the search towards a more promising subset of the efficient solutions that match the interval-valued fuzzy preferences. Numerous test instances and an engineering application are employed to validate the performance of the proposed algorithm, and the results demonstrate its effectiveness and feasibility.

  13. Confidence Intervals for Squared Semipartial Correlation Coefficients: The Effect of Nonnormality

    ERIC Educational Resources Information Center

    Algina, James; Keselman, H. J.; Penfield, Randall D.

    2010-01-01

    The increase in the squared multiple correlation coefficient ([delta]R[superscript 2]) associated with a variable in a regression equation is a commonly used measure of importance in regression analysis. Algina, Keselman, and Penfield found that intervals based on asymptotic principles were typically very inaccurate, even though the sample size…

  14. Identifying the bad guy in a lineup using confidence judgments under deadline pressure.

    PubMed

    Brewer, Neil; Weber, Nathan; Wootton, David; Lindsay, D Stephen

    2012-10-01

    Eyewitness-identification tests often culminate in witnesses not picking the culprit or identifying innocent suspects. We tested a radical alternative to the traditional lineup procedure used in such tests. Rather than making a positive identification, witnesses made confidence judgments under a short deadline about whether each lineup member was the culprit. We compared this deadline procedure with the traditional sequential-lineup procedure in three experiments with retention intervals ranging from 5 min to 1 week. A classification algorithm that identified confidence criteria that optimally discriminated accurate from inaccurate decisions revealed that decision accuracy was 24% to 66% higher under the deadline procedure than under the traditional procedure. Confidence profiles across lineup stimuli were more informative than were identification decisions about the likelihood that an individual witness recognized the culprit or correctly recognized that the culprit was not present. Large differences between the maximum and the next-highest confidence value signaled very high accuracy. Future support for this procedure across varied conditions would highlight a viable alternative to the problematic lineup procedures that have traditionally been used by law enforcement.

  15. The effect of covariate mean differences on the standard error and confidence interval for the comparison of treatment means.

    PubMed

    Liu, Xiaofeng Steven

    2011-05-01

The use of covariates is commonly believed to reduce the unexplained error variance and the standard error for the comparison of treatment means, but the reduction in the standard error is neither guaranteed nor uniform over different sample sizes. The covariate mean differences between the treatment conditions can inflate the standard error of the covariate-adjusted mean difference and can actually produce a larger standard error for the adjusted mean difference than that for the unadjusted mean difference. When the covariate observations are conceived of as randomly varying from one study to another, the covariate mean differences can be related to a Hotelling's T(2) statistic. Using this statistic, one can always find a minimum sample size to achieve a high probability of reducing the standard error and confidence interval width for the adjusted mean difference. ©2010 The British Psychological Society.
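The inflation the abstract describes is visible in the standard two-group ANCOVA formula for the adjusted mean difference, whose variance picks up a term proportional to the squared covariate mean difference. A sketch with made-up numbers (this is the textbook single-covariate formula, not the paper's Hotelling's T(2) development):

```python
# SE of the covariate-adjusted mean difference in two-group ANCOVA:
# sqrt(MSE * (1/n1 + 1/n2 + (xbar1 - xbar2)^2 / SSx_within)).
# The last term is zero when the covariate means are balanced and grows with
# the covariate mean difference -- the inflation discussed in the abstract.
import math

def se_adjusted_diff(mse, n1, n2, xbar1, xbar2, ssx_within):
    return math.sqrt(mse * (1 / n1 + 1 / n2
                            + (xbar1 - xbar2) ** 2 / ssx_within))

balanced = se_adjusted_diff(4.0, 20, 20, 50.0, 50.0, 300.0)
shifted = se_adjusted_diff(4.0, 20, 20, 50.0, 47.0, 300.0)
print(round(balanced, 3), round(shifted, 3))
```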

  16. Annoyance from transportation noise: relationships with exposure metrics DNL and DENL and their confidence intervals.

    PubMed Central

    Miedema, H M; Oudshoorn, C G

    2001-01-01

    We present a model of the distribution of noise annoyance with the mean varying as a function of the noise exposure. Day-night level (DNL) and day-evening-night level (DENL) were used as noise descriptors. Because the entire annoyance distribution has been modeled, any annoyance measure that summarizes this distribution can be calculated from the model. We fitted the model to data from noise annoyance studies for aircraft, road traffic, and railways separately. Polynomial approximations of relationships implied by the model for the combinations of the following exposure and annoyance measures are presented: DNL or DENL, and percentage "highly annoyed" (cutoff at 72 on a scale of 0-100), percentage "annoyed" (cutoff at 50 on a scale of 0-100), or percentage (at least) "a little annoyed" (cutoff at 28 on a scale of 0-100). These approximations are very good, and they are easier to use for practical calculations than the model itself, because the model involves a normal distribution. Our results are based on the same data set that was used earlier to establish relationships between DNL and percentage highly annoyed. In this paper we provide better estimates of the confidence intervals due to the improved model of the relationship between annoyance and noise exposure. Moreover, relationships using descriptors other than DNL and percentage highly annoyed, which are presented here, have not been established earlier on the basis of a large dataset. PMID:11335190

  17. Factorial-based response-surface modeling with confidence intervals for optimizing thermal-optical transmission analysis of atmospheric black carbon.

    PubMed

    Conny, J M; Norris, G A; Gould, T R

    2009-03-09

    Thermal-optical transmission (TOT) analysis measures black carbon (BC) in atmospheric aerosol on a fibrous filter. The method pyrolyzes organic carbon (OC) and employs laser light absorption to distinguish BC from the pyrolyzed OC; however, the instrument does not necessarily separate the two physically. In addition, a comprehensive temperature protocol for the analysis based on the Beer-Lambert Law remains elusive. Here, empirical response-surface modeling was used to show how the temperature protocol in TOT analysis can be modified to distinguish pyrolyzed OC from BC based on the Beer-Lambert Law. We determined the apparent specific absorption cross sections for pyrolyzed OC (sigma(Char)) and BC (sigma(BC)), which accounted for individual absorption enhancement effects within the filter. Response-surface models of these cross sections were derived from a three-factor central-composite factorial experimental design: temperature and duration of the high-temperature step in the helium phase, and the heating increase in the helium-oxygen phase. The response surface for sigma(BC), which varied with instrument conditions, revealed a ridge indicating the correct conditions for OC pyrolysis in helium. The intersection of the sigma(BC) and sigma(Char) surfaces indicated the conditions where the cross sections were equivalent, satisfying an important assumption upon which the method relies. 95% confidence interval surfaces defined a confidence region for a range of pyrolysis conditions. Analyses of wintertime samples from Seattle, WA revealed a temperature between 830 degrees C and 850 degrees C as most suitable for the helium high-temperature step lasting 150s. However, a temperature as low as 750 degrees C could not be rejected statistically.

  18. Confidence bounds for normal and lognormal distribution coefficients of variation

    Treesearch

    Steve Verrill

    2003-01-01

    This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...
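The exact approach compared in this record is based on the noncentral t distribution; as a simpler self-contained illustration of interval estimation for a coefficient of variation, here is a percentile bootstrap (an assumption of this sketch, not one of the paper's methods):

```python
# Percentile-bootstrap 95% interval for the coefficient of variation
# CV = s / xbar, on simulated normal data.
import random
import statistics

random.seed(1)
data = [random.gauss(100.0, 15.0) for _ in range(40)]   # true CV = 0.15

def cv(xs):
    return statistics.stdev(xs) / statistics.mean(xs)

boots = sorted(cv([random.choice(data) for _ in data]) for _ in range(2000))
lo, hi = boots[49], boots[1949]                         # 2.5% and 97.5% points
print(round(cv(data), 3), round(lo, 3), round(hi, 3))
```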

  19. A modified Wald interval for the area under the ROC curve (AUC) in diagnostic case-control studies

    PubMed Central

    2014-01-01

Background The area under the receiver operating characteristic (ROC) curve, referred to as the AUC, is an appropriate measure for describing the overall accuracy of a diagnostic test or a biomarker in early phase trials without having to choose a threshold. There are many approaches for estimating the confidence interval for the AUC. However, all are relatively complicated to implement. Furthermore, many approaches perform poorly for large AUC values or small sample sizes. Methods The AUC is actually a probability. So we propose a modified Wald interval for a single proportion, which can be calculated on a pocket calculator. We performed a simulation study to compare this modified Wald interval (without and with continuity correction) with other intervals regarding coverage probability and statistical power. Results The main result is that the proposed modified Wald intervals maintain and exploit the type I error much better than the intervals of Agresti-Coull, Wilson, and Clopper-Pearson. The interval suggested by Bamber, the Mann-Whitney interval without transformation, and the interval of the binormal AUC are very liberal. For small sample sizes the Wald interval with continuity correction has coverage probability comparable to that of the LT interval and higher power. For large sample sizes the results of the LT interval and of the Wald interval without continuity correction are comparable. Conclusions If individual patient data is not available, but only the estimated AUC and the total sample size, the modified Wald intervals can be recommended as confidence intervals for the AUC. For small sample sizes the continuity correction should be used. PMID:24552686

  20. A modified Wald interval for the area under the ROC curve (AUC) in diagnostic case-control studies.

    PubMed

    Kottas, Martina; Kuss, Oliver; Zapf, Antonia

    2014-02-19

The area under the receiver operating characteristic (ROC) curve, referred to as the AUC, is an appropriate measure for describing the overall accuracy of a diagnostic test or a biomarker in early phase trials without having to choose a threshold. There are many approaches for estimating the confidence interval for the AUC. However, all are relatively complicated to implement. Furthermore, many approaches perform poorly for large AUC values or small sample sizes. The AUC is actually a probability. So we propose a modified Wald interval for a single proportion, which can be calculated on a pocket calculator. We performed a simulation study to compare this modified Wald interval (without and with continuity correction) with other intervals regarding coverage probability and statistical power. The main result is that the proposed modified Wald intervals maintain and exploit the type I error much better than the intervals of Agresti-Coull, Wilson, and Clopper-Pearson. The interval suggested by Bamber, the Mann-Whitney interval without transformation, and the interval of the binormal AUC are very liberal. For small sample sizes the Wald interval with continuity correction has coverage probability comparable to that of the LT interval and higher power. For large sample sizes the results of the LT interval and of the Wald interval without continuity correction are comparable. If individual patient data is not available, but only the estimated AUC and the total sample size, the modified Wald intervals can be recommended as confidence intervals for the AUC. For small sample sizes the continuity correction should be used.
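The pocket-calculator appeal comes from treating the estimated AUC as a single proportion. The sketch below is a plain Wald interval with an optional continuity correction; the specific modification and effective sample size used by the authors are not reproduced here, and the total sample size is used as an illustrative stand-in:

```python
# Wald-type interval for an AUC treated as a proportion, clipped to [0, 1].
# The modification proposed in the paper is NOT reproduced; this is the
# generic calculation the record alludes to.
import math

def wald_auc_interval(auc, n, z=1.96, continuity=False):
    half = z * math.sqrt(auc * (1 - auc) / n)
    if continuity:
        half += 1 / (2 * n)
    return max(0.0, auc - half), min(1.0, auc + half)

lo, hi = wald_auc_interval(0.85, 60)
print(round(lo, 3), round(hi, 3))
print(wald_auc_interval(0.85, 60, continuity=True))
```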

  1. Assessing Mediational Models: Testing and Interval Estimation for Indirect Effects.

    PubMed

    Biesanz, Jeremy C; Falk, Carl F; Savalei, Victoria

    2010-08-06

    Theoretical models specifying indirect or mediated effects are common in the social sciences. An indirect effect exists when an independent variable's influence on the dependent variable is mediated through an intervening variable. Classic approaches to assessing such mediational hypotheses ( Baron & Kenny, 1986 ; Sobel, 1982 ) have in recent years been supplemented by computationally intensive methods such as bootstrapping, the distribution of the product methods, and hierarchical Bayesian Markov chain Monte Carlo (MCMC) methods. These different approaches for assessing mediation are illustrated using data from Dunn, Biesanz, Human, and Finn (2007). However, little is known about how these methods perform relative to each other, particularly in more challenging situations, such as with data that are incomplete and/or nonnormal. This article presents an extensive Monte Carlo simulation evaluating a host of approaches for assessing mediation. We examine Type I error rates, power, and coverage. We study normal and nonnormal data as well as complete and incomplete data. In addition, we adapt a method, recently proposed in statistical literature, that does not rely on confidence intervals (CIs) to test the null hypothesis of no indirect effect. The results suggest that the new inferential method-the partial posterior p value-slightly outperforms existing ones in terms of maintaining Type I error rates while maximizing power, especially with incomplete data. Among confidence interval approaches, the bias-corrected accelerated (BC a ) bootstrapping approach often has inflated Type I error rates and inconsistent coverage and is not recommended; In contrast, the bootstrapped percentile confidence interval and the hierarchical Bayesian MCMC method perform best overall, maintaining Type I error rates, exhibiting reasonable power, and producing stable and accurate coverage rates.

  2. Using spatially explicit surveillance models to provide confidence in the eradication of an invasive ant

    PubMed Central

    Ward, Darren F.; Anderson, Dean P.; Barron, Mandy C.

    2016-01-01

    Effective detection plays an important role in the surveillance and management of invasive species. Invasive ants are very difficult to eradicate and are prone to imperfect detection because of their small size and cryptic nature. Here we demonstrate the use of spatially explicit surveillance models to estimate the probability that Argentine ants (Linepithema humile) have been eradicated from an offshore island site, given their absence across four surveys and three surveillance methods, conducted since ant control was applied. The probability of eradication increased sharply as each survey was conducted. Using all surveys and surveillance methods combined, the overall median probability of eradication of Argentine ants was 0.96. There was a high level of confidence in this result, with a high Credible Interval Value of 0.87. Our results demonstrate the value of spatially explicit surveillance models for the likelihood of eradication of Argentine ants. We argue that such models are vital to give confidence in eradication programs, especially from highly valued conservation areas such as offshore islands. PMID:27721491
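The rising probability of eradication across surveys is, at its core, a Bayesian update: each survey that finds no ants shifts belief toward eradication in proportion to how likely the survey was to detect survivors. A minimal non-spatial sketch; the prior and per-survey sensitivities below are assumed numbers, not values from the study:

```python
# P(eradicated | no detection) by Bayes' rule, applied once per negative
# survey. 'sensitivity' is the probability the survey would have detected
# the ants if they were still present.
def update_p_eradicated(prior, sensitivity):
    return prior / (prior + (1 - prior) * (1 - sensitivity))

p = 0.5                              # assumed prior after control was applied
for se in (0.6, 0.6, 0.7, 0.8):      # four surveys, assumed sensitivities
    p = update_p_eradicated(p, se)
    print(round(p, 3))
```

The spatially explicit models in the paper refine this by letting detection probability vary over the site and across the three surveillance methods.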

  3. Concept analysis: confidence/self-confidence.

    PubMed

    Perry, Patricia

    2011-01-01

Confidence and self-confidence are crucial practice elements in nursing education and practice. Nurse educators should have an understanding of the concept of confidence in order to assist in the accomplishment of nursing students and their learning of technical and nontechnical skills. With the aim of facilitating trusted care of patients in the healthcare setting, nursing professionals must exhibit confidence, and, as such, clarification and analysis of its meaning are necessary. The purpose of this analysis is to provide clarity to the meaning of the concept confidence/self-confidence, while gaining a more comprehensive understanding of its attributes, antecedents, and consequences. Walker and Avant's eight-step method of concept analysis was utilized as the framework of the analysis process, with model, contrary, borderline, and related cases presented along with attributes, antecedents, consequences, and empirical referents identified. By understanding both the individualized development of confidence among prelicensure nursing students and the role of the nurse educator in the development of confident nursing practice, nurse educators can assist students in developing confidence and competency. Future research surrounding the nature and development of confidence/self-confidence in prelicensure nursing students experiencing human patient simulation sessions would help educators further promote the development of confidence. © 2011 Wiley Periodicals, Inc.

  4. Out-of-range INR values and outcomes among new warfarin patients with non-valvular atrial fibrillation.

    PubMed

    Nelson, Winnie W; Wang, Li; Baser, Onur; Damaraju, Chandrasekharrao V; Schein, Jeffrey R

    2015-02-01

Although efficacious in stroke prevention in non-valvular atrial fibrillation, many warfarin patients are sub-optimally managed. To evaluate the association of international normalized ratio control and clinical outcomes among new warfarin patients with non-valvular atrial fibrillation. Adult non-valvular atrial fibrillation patients (≥18 years) initiating warfarin treatment were selected from the US Veterans Health Administration dataset between 10/2007 and 9/2012. Valid international normalized ratio values were examined from the warfarin initiation date through the earlier of the first clinical outcome, end of warfarin exposure or death. Each patient contributed multiple in-range and out-of-range time periods. The relative risk ratios of clinical outcomes associated with international normalized ratio control were estimated. 34,346 patients were included for analysis. During the warfarin exposure period, the incidence of events per 100 person-years was highest when patients had international normalized ratio <2: 13.66 for acute coronary syndrome; 10.30 for ischemic stroke; 2.93 for transient ischemic attack; 1.81 for systemic embolism; and 4.55 for major bleeding. Poisson regression confirmed that during periods with international normalized ratio <2, patients were at increased risk of developing acute coronary syndrome (relative risk ratio: 7.9; 95 % confidence interval 6.9-9.1), ischemic stroke (relative risk ratio: 7.6; 95 % confidence interval 6.5-8.9), transient ischemic attack (relative risk ratio: 8.2; 95 % confidence interval 6.1-11.2), systemic embolism (relative risk ratio: 6.3; 95 % confidence interval 4.4-8.9) and major bleeding (relative risk ratio: 2.6; 95 % confidence interval 2.2-3.0). During time periods with international normalized ratio >3, patients had significantly increased risk of major bleeding (relative risk ratio: 1.5; 95 % confidence interval 1.2-2.0). In a Veterans Health Administration non-valvular atrial fibrillation population…
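The abstract's relative risk ratios with 95% confidence intervals come from Poisson regression. For a single two-group comparison of event rates, the classic large-sample interval is log(RR) ± z·sqrt(1/a + 1/b), where a and b are the event counts; a sketch with illustrative stand-in numbers, not the paper's data:

```python
# Large-sample CI for a rate ratio: exponentiate the Wald interval on the
# log scale. a, b are event counts; t1, t2 are person-time denominators.
import math

def rate_ratio_ci(a, t1, b, t2, z=1.96):
    rr = (a / t1) / (b / t2)
    half = z * math.sqrt(1 / a + 1 / b)
    return rr, (rr * math.exp(-half), rr * math.exp(half))

rr, (lo, hi) = rate_ratio_ci(137, 1000.0, 46, 2650.0)   # made-up counts
print(round(rr, 2), round(lo, 2), round(hi, 2))
```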

  5. SPSS macros to compare any two fitted values from a regression model.

    PubMed

    Weaver, Bruce; Dubois, Sacha

    2012-12-01

    In regression models with first-order terms only, the coefficient for a given variable is typically interpreted as the change in the fitted value of Y for a one-unit increase in that variable, with all other variables held constant. Therefore, each regression coefficient represents the difference between two fitted values of Y. But the coefficients represent only a fraction of the possible fitted value comparisons that might be of interest to researchers. For many fitted value comparisons that are not captured by any of the regression coefficients, common statistical software packages do not provide the standard errors needed to compute confidence intervals or carry out statistical tests-particularly in more complex models that include interactions, polynomial terms, or regression splines. We describe two SPSS macros that implement a matrix algebra method for comparing any two fitted values from a regression model. The !OLScomp and !MLEcomp macros are for use with models fitted via ordinary least squares and maximum likelihood estimation, respectively. The output from the macros includes the standard error of the difference between the two fitted values, a 95% confidence interval for the difference, and a corresponding statistical test with its p-value.
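
A minimal sketch of the matrix-algebra idea behind the macros, shown here in plain Python for the simplest one-predictor OLS case rather than SPSS: the difference between two fitted values is a contrast c'b, and its standard error is sqrt(c'Vc), where V is the coefficient covariance matrix. The data below are invented for illustration.

```python
import math

# Hypothetical data: y is roughly 1 + x plus noise.
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [2.1, 2.9, 4.2, 4.8, 6.1, 7.0, 7.9, 9.2, 9.8, 11.1]

# Ordinary least squares for y = b0 + b1*x via the normal equations.
n, sx, sy = len(x), sum(x), sum(y)
sxx = sum(v * v for v in x)
sxy = sum(a * b for a, b in zip(x, y))
det = n * sxx - sx * sx
b1 = (n * sxy - sx * sy) / det
b0 = (sy - b1 * sx) / n

# Coefficient covariance matrix V = s2 * (X'X)^{-1} for the [1, x] design.
s2 = sum((yi - b0 - b1 * xi) ** 2 for xi, yi in zip(x, y)) / (n - 2)
V = [[s2 * sxx / det, -s2 * sx / det],
     [-s2 * sx / det, s2 * n / det]]

# Difference between fitted values at x = 8 and x = 2: a contrast c'b
# with variance c'Vc, the same computation the macros generalize.
c = [1.0 - 1.0, 8.0 - 2.0]
diff = c[0] * b0 + c[1] * b1
se = math.sqrt(sum(c[i] * V[i][j] * c[j] for i in range(2) for j in range(2)))
ci = (diff - 1.96 * se, diff + 1.96 * se)
```

The same contrast machinery covers comparisons that no single coefficient captures, e.g. fitted values in models with interactions or polynomial terms, where c contains the differences of all design-matrix columns.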

  6. Orders on Intervals Over Partially Ordered Sets: Extending Allen's Algebra and Interval Graph Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zapata, Francisco; Kreinovich, Vladik; Joslyn, Cliff A.

    2013-08-01

    To make a decision, we need to compare the values of quantities. In many practical situations, we know the values with interval uncertainty. In such situations, we need to compare intervals. Allen’s algebra describes all possible relations between intervals on the real line, and ordering relations between such intervals are well studied. In this paper, we extend this description to intervals in an arbitrary partially ordered set (poset). In particular, we explicitly describe ordering relations between intervals that generalize relations between points. As auxiliary results, we provide a logical interpretation of the relations between intervals, and extend the results about interval graphs to intervals over posets.

  7. Assessing Interval Estimation Methods for Hill Model ...

    EPA Pesticide Factsheets

    The Hill model of concentration-response is ubiquitous in toxicology, perhaps because its parameters directly relate to biologically significant metrics of toxicity such as efficacy and potency. Point estimates of these parameters obtained through least squares regression or maximum likelihood are commonly used in high-throughput risk assessment, but such estimates typically fail to include reliable information concerning confidence in (or precision of) the estimates. To address this issue, we examined methods for assessing uncertainty in Hill model parameter estimates derived from concentration-response data. In particular, using a sample of ToxCast concentration-response data sets, we applied four methods for obtaining interval estimates that are based on asymptotic theory, bootstrapping (two varieties), and Bayesian parameter estimation, and then compared the results. These interval estimation methods generally did not agree, so we devised a simulation study to assess their relative performance. We generated simulated data by constructing four statistical error models capable of producing concentration-response data sets comparable to those observed in ToxCast. We then applied the four interval estimation methods to the simulated data and compared the actual coverage of the interval estimates to the nominal coverage (e.g., 95%) in order to quantify performance of each of the methods in a variety of cases (i.e., different values of the true Hill model parameters).
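
The simulation logic described above, comparing actual to nominal coverage, can be sketched generically. Here a normal-theory interval for a mean on simulated Gaussian data stands in for the Hill-model interval estimators; all parameter values below are hypothetical, not ToxCast settings.

```python
import math, random, statistics

def normal_ci(xs, z=1.96):
    """Nominal 95% normal-theory interval for a mean."""
    m = statistics.mean(xs)
    se = statistics.stdev(xs) / math.sqrt(len(xs))
    return m - z * se, m + z * se

def actual_coverage(true_mean=1.0, sd=0.5, n=20, reps=2000, seed=5):
    """Simulate many datasets from a known truth and report the fraction of
    intervals that actually cover it, for comparison with the nominal 95%."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        xs = [rng.gauss(true_mean, sd) for _ in range(n)]
        lo, hi = normal_ci(xs)
        hits += lo <= true_mean <= hi
    return hits / reps
```

With n = 20 the z-based interval under-covers slightly (a t critical value would be needed), which is exactly the kind of nominal-versus-actual discrepancy a coverage study is designed to expose.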

  8. Hematology and biochemistry reference intervals for Ontario commercial nursing pigs close to the time of weaning

    PubMed Central

    Perri, Amanda M.; O’Sullivan, Terri L.; Harding, John C.S.; Wood, R. Darren; Friendship, Robert M.

    2017-01-01

    The evaluation of pig hematology and biochemistry parameters is rarely done, largely due to the costs associated with laboratory testing and labor and the limited availability of the reference intervals needed for interpretation. Within-herd and between-herd biological variation of these values also makes it difficult to establish reference intervals. Regardless, baseline reference intervals are important to aid veterinarians in the interpretation of blood parameters for the diagnosis and treatment of diseased swine. The objective of this research was to provide reference intervals for hematology and biochemistry parameters of 3-week-old commercial nursing piglets in Ontario. A total of 1032 pigs lacking clinical signs of disease from 20 swine farms were sampled for hematology and iron panel evaluation, with biochemistry analysis performed on a subset of 189 randomly selected pigs. The 95% reference interval, mean, median, range, and 90% confidence intervals were calculated for each parameter. PMID:28373729
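
A nonparametric 95% reference interval, plus a bootstrap 90% confidence interval for one of its limits, as computed in studies like this one, might be sketched as follows. The values are simulated stand-ins, not the piglet data.

```python
import random, statistics

rng = random.Random(3)
hb = [rng.gauss(110, 10) for _ in range(1000)]   # hypothetical hemoglobin values, g/L

def reference_interval(values):
    """Nonparametric 95% reference interval: 2.5th and 97.5th percentiles."""
    cuts = statistics.quantiles(values, n=40, method="inclusive")
    return cuts[0], cuts[-1]

def lower_limit_ci(values, n_boot=500, conf=0.90, seed=4):
    """Bootstrap 90% confidence interval for the lower reference limit."""
    b = random.Random(seed)
    lims = sorted(reference_interval(b.choices(values, k=len(values)))[0]
                  for _ in range(n_boot))
    a = (1 - conf) / 2
    return lims[int(a * n_boot)], lims[int((1 - a) * n_boot) - 1]

lo, hi = reference_interval(hb)
cl, cu = lower_limit_ci(hb)
```

The confidence interval around each reference limit conveys how precisely that limit is estimated, which is why guidelines ask for it alongside the interval itself.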

  9. Anchoring effects in the judgment of confidence: semantic or numeric priming?

    PubMed

    Carroll, Steven R; Petrusic, William M; Leth-Steensen, Craig

    2009-02-01

    Over the last decade, researchers have debated whether anchoring effects are the result of semantic or numeric priming. The present study tested both hypotheses. In four experiments involving a sensory detection task, participants first made a relative confidence judgment by deciding whether they were more or less confident than an anchor value in the correctness of their decision. Subsequently, they expressed an absolute level of confidence. In two of these experiments, the relative confidence anchor values represented the midpoints between the absolute confidence scale values, which were either explicitly numeric or semantic, nonnumeric representations of magnitude. In two other experiments, the anchor values were drawn from a scale modally different from that used to express the absolute confidence (i.e., nonnumeric and numeric, respectively, or vice versa). Regardless of the nature of the anchors, the mean confidence ratings revealed anchoring effects only when the relative and absolute confidence values were drawn from identical scales. Together, the results of these four experiments limit the conditions under which both numeric and semantic priming would be expected to lead to anchoring effects.

  10. EFFECT ON PERFUSION VALUES OF SAMPLING INTERVAL OF CT PERFUSION ACQUISITIONS IN NEUROENDOCRINE LIVER METASTASES AND NORMAL LIVER

    PubMed Central

    Ng, Chaan S.; Hobbs, Brian P.; Wei, Wei; Anderson, Ella F.; Herron, Delise H.; Yao, James C.; Chandler, Adam G.

    2014-01-01

    Objective To assess the effects of sampling interval (SI) of CT perfusion acquisitions on CT perfusion values in normal liver and liver metastases from neuroendocrine tumors. Methods CT perfusion studies in 16 patients with neuroendocrine liver metastases were analyzed by distributed parameter modeling to yield tissue blood flow, blood volume, mean transit time, permeability, and hepatic arterial fraction, for tumor and normal liver. CT perfusion values for the reference sampling interval of 0.5s (SI0.5) were compared with those of SI datasets of 1s, 2s, 3s and 4s, using mixed-effects model analyses. Results Increases in SI beyond 1s were associated with significant and increasing departures of CT perfusion parameters from reference values at SI0.5 (p≤0.0009). CT perfusion values deviated from reference with increasing uncertainty as SI increased. Findings for normal liver were concordant. Conclusion Increasing SI beyond 1s yields significantly different CT perfusion parameter values compared to reference values at SI0.5. PMID:25626401

  11. Reference intervals for selected serum biochemistry analytes in cheetahs Acinonyx jubatus.

    PubMed

    Hudson-Lamb, Gavin C; Schoeman, Johan P; Hooijberg, Emma H; Heinrich, Sonja K; Tordiffe, Adrian S W

    2016-02-26

    Published haematologic and serum biochemistry reference intervals are very scarce for captive cheetahs and even more so for free-ranging cheetahs. The current study was performed to establish reference intervals for selected serum biochemistry analytes in cheetahs. Baseline serum biochemistry analytes were analysed from 66 healthy Namibian cheetahs. Samples were collected from 30 captive cheetahs at the AfriCat Foundation and 36 free-ranging cheetahs from central Namibia. The effects of captivity status, age, sex and haemolysis score on the tested serum analytes were investigated. The biochemistry analytes that were measured were sodium, potassium, magnesium, chloride, urea and creatinine. The 90% confidence interval of the reference limits was obtained using the non-parametric bootstrap method. Reference intervals were preferentially determined by the non-parametric method and were as follows: sodium (128 mmol/L - 166 mmol/L), potassium (3.9 mmol/L - 5.2 mmol/L), magnesium (0.8 mmol/L - 1.2 mmol/L), chloride (97 mmol/L - 130 mmol/L), urea (8.2 mmol/L - 25.1 mmol/L) and creatinine (88 µmol/L - 288 µmol/L). Reference intervals from the current study were compared with International Species Information System values for cheetahs and found to be narrower. Moreover, age, sex and haemolysis score had no significant effect on the serum analytes in this study. Separate reference intervals for captive and free-ranging cheetahs were also determined. Captive cheetahs had higher urea values, most likely due to dietary factors. This study is the first to establish reference intervals for serum biochemistry analytes in cheetahs according to international guidelines. These results can be used for future health and disease assessments in both captive and free-ranging cheetahs.

  12. An interval-valued 2-tuple linguistic group decision-making model based on the Choquet integral operator

    NASA Astrophysics Data System (ADS)

    Liu, Bingsheng; Fu, Meiqing; Zhang, Shuibo; Xue, Bin; Zhou, Qi; Zhang, Shiruo

    2018-01-01

    The Choquet integral operator is an effective approach for handling interdependence among decision attributes in complex decision-making problems. However, the fuzzy measures of attributes and attribute sets required by the Choquet integral are difficult to obtain directly, which limits its application. This paper proposes a new method for determining fuzzy measures of attributes by extending Marichal's concept of entropy for a fuzzy measure. Assessment information is represented in an interval-valued 2-tuple linguistic context. We then propose a Choquet integral operator in an interval-valued 2-tuple linguistic environment, which can effectively handle correlation between attributes. In addition, we apply these methods to solve multi-attribute group decision-making problems. The feasibility and validity of the proposed operator are demonstrated by comparison with other models in an illustrative example.
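
For concreteness, the discrete Choquet integral with respect to a fuzzy measure, the aggregation step this kind of model builds on (shown here for crisp scores, not the paper's interval-valued 2-tuple setting), can be sketched as follows. The criteria, scores, and measure values are hypothetical.

```python
def choquet_integral(scores, mu):
    """Discrete Choquet integral of scores {criterion: value in [0, 1]} with
    respect to a fuzzy measure mu mapping frozensets of criteria to [0, 1]
    (mu of the full criterion set should be 1)."""
    ordered = sorted(scores.items(), key=lambda kv: kv[1])
    remaining = set(scores)
    total = prev = 0.0
    for criterion, value in ordered:
        # Each increment is weighted by the measure of the criteria still "active".
        total += (value - prev) * mu[frozenset(remaining)]
        prev = value
        remaining.discard(criterion)
    return total

# Hypothetical two-criterion example; for an additive measure the Choquet
# integral reduces to a weighted mean (0.4*0.2 + 0.6*0.8 = 0.56).
mu = {frozenset({"a", "b"}): 1.0, frozenset({"a"}): 0.4, frozenset({"b"}): 0.6}
agg = choquet_integral({"a": 0.2, "b": 0.8}, mu)
```

Choosing a non-additive mu (e.g. mu({a, b}) < mu({a}) + mu({b})) is what lets the operator model redundancy or synergy between attributes.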

  13. A GA based penalty function technique for solving constrained redundancy allocation problem of series system with interval valued reliability of components

    NASA Astrophysics Data System (ADS)

    Gupta, R. K.; Bhunia, A. K.; Roy, D.

    2009-10-01

    In this paper, we have considered the problem of constrained redundancy allocation for a series system with interval-valued reliability of components. For maximizing the overall system reliability under limited resource constraints, the problem is formulated as an unconstrained integer programming problem with interval coefficients by a penalty function technique and solved by an advanced GA for integer variables with an interval fitness function, tournament selection, uniform crossover, uniform mutation and elitism. As a special case, considering the lower and upper bounds of the interval-valued reliabilities of the components to be the same, the corresponding problem has been solved. The model has been illustrated with some numerical examples, and the results of the series redundancy allocation problem with fixed values of component reliability have been compared with the existing results available in the literature. Finally, sensitivity analyses have been shown graphically to study the stability of the developed GA with respect to the different GA parameters.

  14. Exact nonparametric confidence bands for the survivor function.

    PubMed

    Matthews, David

    2013-10-12

    A method to produce exact simultaneous confidence bands for the empirical cumulative distribution function that was first described by Owen, and subsequently corrected by Jager and Wellner, is the starting point for deriving exact nonparametric confidence bands for the survivor function of any positive random variable. We invert a nonparametric likelihood test of uniformity, constructed from the Kaplan-Meier estimator of the survivor function, to obtain simultaneous lower and upper bands for the function of interest with specified global confidence level. The method involves calculating a null distribution and associated critical value for each observed sample configuration. However, Noe recursions and the Van Wijngaarden-Dekker-Brent root-finding algorithm provide the necessary tools for efficient computation of these exact bounds. Various aspects of the effect of right censoring on these exact bands are investigated, using as illustrations two observational studies of survival experience among non-Hodgkin's lymphoma patients and a much larger group of subjects with advanced lung cancer enrolled in trials within the North Central Cancer Treatment Group. Monte Carlo simulations confirm the merits of the proposed method of deriving simultaneous interval estimates of the survivor function across the entire range of the observed sample. This research was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada. It was begun while the author was visiting the Department of Statistics, University of Auckland, and completed during a subsequent sojourn at the Medical Research Council Biostatistics Unit in Cambridge. The support of both institutions, in addition to that of NSERC and the University of Waterloo, is greatly appreciated.
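
The Kaplan-Meier estimator that these bands are built around can be sketched as follows; the exact band construction itself (Noe recursions plus root finding) is beyond a short example. The input data here are hypothetical.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate of the survivor function.
    times: observed times; events: 1 = event observed, 0 = right-censored.
    Returns a list of (t, S(t)) pairs at each distinct event time."""
    pairs = sorted(zip(times, events))
    at_risk, s, curve = len(pairs), 1.0, []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        tied = [e for tt, e in pairs if tt == t]
        d = sum(tied)                      # events at time t
        if d:
            s *= 1 - d / at_risk           # product-limit update
            curve.append((t, s))
        at_risk -= len(tied)               # events and censorings leave the risk set
        i += len(tied)
    return curve
```

For example, kaplan_meier([1, 2, 2, 3, 4], [1, 1, 0, 1, 0]) steps down at t = 1, 2 and 3, with the censored observations reducing only the risk set.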

  15. Retrieval monitoring is influenced by information value: the interplay between importance and confidence on false memory.

    PubMed

    McDonough, Ian M; Bui, Dung C; Friedman, Michael C; Castel, Alan D

    2015-10-01

    The perceived value of information can influence one's motivation to successfully remember that information. This study investigated how information value can affect memory search and evaluation processes (i.e., retrieval monitoring). In Experiment 1, participants studied unrelated words associated with low, medium, or high values. Subsequent memory tests required participants to selectively monitor retrieval for different values. False memory effects were smaller when searching memory for high-value than low-value words, suggesting that people more effectively monitored more important information. In Experiment 2, participants studied semantically-related words, and the need for retrieval monitoring was reduced at test by using inclusion instructions (i.e., endorsement of any word related to the studied words) compared with standard instructions. Inclusion instructions led to increases in false recognition for low-value, but not for high-value words, suggesting that under standard-instruction conditions retrieval monitoring was less likely to occur for important information. Experiment 3 showed that words retrieved with lower confidence were associated with more effective retrieval monitoring, suggesting that the quality of the retrieved memory influenced the degree and effectiveness of monitoring processes. Ironically, unless encouraged to do so, people were less likely to carefully monitor important information, even though people want to remember important memories most accurately. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Detecting Disease in Radiographs with Intuitive Confidence

    PubMed Central

    2015-01-01

    This paper argues in favor of a specific type of confidence for use in computer-aided diagnosis and disease classification, namely, sine/cosine values of angles represented by points on the unit circle. The paper shows how this confidence is motivated by Chinese medicine and how sine/cosine values are directly related with the two forces Yin and Yang. The angle for which sine and cosine are equal (45°) represents the state of equilibrium between Yin and Yang, which is a state of nonduality that indicates neither normality nor abnormality in terms of disease classification. The paper claims that the proposed confidence is intuitive and can be readily understood by physicians. The paper underpins this thesis with theoretical results in neural signal processing, stating that a sine/cosine relationship between the actual input signal and the perceived (learned) input is key to neural learning processes. As a practical example, the paper shows how to use the proposed confidence values to highlight manifestations of tuberculosis in frontal chest X-rays. PMID:26495433

  17. Using Confidence Interval-Based Estimation of Relevance to Select Social-Cognitive Determinants for Behavior Change Interventions.

    PubMed

    Crutzen, Rik; Peters, Gjalt-Jorn Ygram; Noijen, Judith

    2017-01-01

    When developing an intervention aimed at behavior change, one of the crucial steps in the development process is to select the most relevant social-cognitive determinants. These determinants can be seen as the buttons one needs to push to establish behavior change. Insight into these determinants is needed to select behavior change methods (i.e., general behavior change techniques that are applied in an intervention) in the development process. Therefore, a study on determinants is often conducted as formative research in the intervention development process. Ideally, all relevant determinants identified in such a study are addressed by an intervention. However, when developing a behavior change intervention, there are limits in terms of, for example, resources available for intervention development and the amount of content that participants of an intervention can be exposed to. Hence, it is important to select those determinants that are most relevant to the target behavior as these determinants should be addressed in an intervention. The aim of the current paper is to introduce a novel approach to select the most relevant social-cognitive determinants and use them in intervention development. This approach is based on visualization of confidence intervals for the means and correlation coefficients for all determinants simultaneously. This visualization facilitates comparison, which is necessary when making selections. By means of a case study on the determinants of using a high dose of 3,4-methylenedioxymethamphetamine (commonly known as ecstasy), we illustrate this approach. We provide a freely available tool to facilitate the analyses needed in this approach.
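
The quantities this approach visualizes, confidence intervals for determinant means and for their correlations with behavior, can be sketched as follows. The Fisher z transformation used for the correlation interval is a standard construction, not necessarily the authors' exact implementation.

```python
import math, statistics

def mean_ci(xs, z=1.96):
    """Approximate 95% CI for a mean (normal-theory)."""
    m = statistics.mean(xs)
    se = statistics.stdev(xs) / math.sqrt(len(xs))
    return m - z * se, m + z * se

def correlation_ci(r, n, z=1.96):
    """Approximate 95% CI for a Pearson correlation via Fisher's z transform."""
    fz = math.atanh(r)
    se = 1 / math.sqrt(n - 3)
    return math.tanh(fz - z * se), math.tanh(fz + z * se)
```

Plotting both intervals for every determinant side by side is what makes the comparison (and hence the selection of determinants) feasible.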

  18. Hematologic and serum biochemical reference intervals for free-ranging common bottlenose dolphins (Tursiops truncatus) and variation in the distributions of clinicopathologic values related to geographic sampling site.

    PubMed

    Schwacke, Lori H; Hall, Ailsa J; Townsend, Forrest I; Wells, Randall S; Hansen, Larry J; Hohn, Aleta A; Bossart, Gregory D; Fair, Patricia A; Rowles, Teresa K

    2009-08-01

    To develop robust reference intervals for hematologic and serum biochemical variables by use of data derived from free-ranging bottlenose dolphins (Tursiops truncatus) and examine potential variation in distributions of clinicopathologic values related to sampling sites' geographic locations. 255 free-ranging bottlenose dolphins. Data from samples collected during multiple bottlenose dolphin capture-release projects conducted at 4 southeastern US coastal locations in 2000 through 2006 were combined to determine reference intervals for 52 clinicopathologic variables. A nonparametric bootstrap approach was applied to estimate 95th percentiles and associated 90% confidence intervals; the need for partitioning by length and sex classes was determined by testing for differences in estimated thresholds with a bootstrap method. When appropriate, quantile regression was used to determine continuous functions for 95th percentiles dependent on length. The proportion of out-of-range samples for all clinicopathologic measurements was examined for each geographic site, and multivariate ANOVA was applied to further explore variation in leukocyte subgroups. A need for partitioning by length and sex classes was indicated for many clinicopathologic variables. For each geographic site, few significant deviations from expected number of out-of-range samples were detected. Although mean leukocyte counts did not vary among sites, differences in the mean counts for leukocyte subgroups were identified. Although differences in the centrality of distributions for some variables were detected, the 95th percentiles estimated from the pooled data were robust and applicable across geographic sites. The derived reference intervals provide critical information for conducting bottlenose dolphin population health studies.

  19. Reinforcement interval type-2 fuzzy controller design by online rule generation and q-value-aided ant colony optimization.

    PubMed

    Juang, Chia-Feng; Hsu, Chia-Hung

    2009-12-01

    This paper proposes a new reinforcement-learning method using online rule generation and Q-value-aided ant colony optimization (ORGQACO) for fuzzy controller design. The fuzzy controller is based on an interval type-2 fuzzy system (IT2FS). The antecedent part in the designed IT2FS uses interval type-2 fuzzy sets to improve controller robustness to noise. There are initially no fuzzy rules in the IT2FS. The ORGQACO concurrently designs both the structure and parameters of an IT2FS. We propose an online interval type-2 rule generation method for the evolution of system structure and flexible partitioning of the input space. Consequent part parameters in an IT2FS are designed using Q -values and the reinforcement local-global ant colony optimization algorithm. This algorithm selects the consequent part from a set of candidate actions according to ant pheromone trails and Q-values, both of which are updated using reinforcement signals. The ORGQACO design method is applied to the following three control problems: 1) truck-backing control; 2) magnetic-levitation control; and 3) chaotic-system control. The ORGQACO is compared with other reinforcement-learning methods to verify its efficiency and effectiveness. Comparisons with type-1 fuzzy systems verify the noise robustness property of using an IT2FS.

  20. Confidence intervals for effect sizes: compliance and clinical significance in the Journal of Consulting and Clinical Psychology.

    PubMed

    Odgaard, Eric C; Fowler, Robert L

    2010-06-01

    In 2005, the Journal of Consulting and Clinical Psychology (JCCP) became the first American Psychological Association (APA) journal to require statistical measures of clinical significance, plus effect sizes (ESs) and associated confidence intervals (CIs), for primary outcomes (La Greca, 2005). As this represents the single largest editorial effort to improve statistical reporting practices in any APA journal in at least a decade, in this article we investigate the efficacy of that change. All intervention studies published in JCCP in 2003, 2004, 2007, and 2008 were reviewed. Each article was coded for method of clinical significance, type of ES, and type of associated CI, broken down by statistical test (F, t, chi-square, r/R(2), and multivariate modeling). By 2008, clinical significance compliance was 75% (up from 31%), with 94% of studies reporting some measure of ES (reporting improved for individual statistical tests ranging from eta(2) = .05 to .17, with reasonable CIs). Reporting of CIs for ESs also improved, although only to 40%. Also, the vast majority of reported CIs used approximations, which become progressively less accurate for smaller sample sizes and larger ESs (cf. Algina & Keselman, 2003). Changes are near asymptote for ESs and clinical significance, but CIs lag behind. As CIs for ESs are required for primary outcomes, we show how to compute CIs for the vast majority of ESs reported in JCCP, with an example of how to use CIs for ESs as a method to assess clinical significance.
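
As one example of the kind of approximate CI for an effect size discussed above, a large-sample interval for Cohen's d between two independent groups can be sketched as follows, using a commonly cited large-sample standard error. As the abstract notes, such approximations degrade for small samples and large effects; exact intervals require noncentral distributions.

```python
import math

def cohens_d_ci(d, n1, n2, z=1.96):
    """Approximate 95% CI for Cohen's d between two independent groups,
    using a common large-sample standard error. This is an approximation;
    it loses accuracy for small n and large d."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d - z * se, d + z * se
```

For d = 0.5 with 50 participants per group, the interval is roughly 0.1 to 0.9, a reminder that a "medium" effect from a modest sample is estimated quite imprecisely.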

  1. The Interpretation of Scholars' Interpretations of Confidence Intervals: Criticism, Replication, and Extension of Hoekstra et al. (2014)

    PubMed Central

    García-Pérez, Miguel A.; Alcalá-Quintana, Rocío

    2016-01-01

    Hoekstra et al. (Psychonomic Bulletin & Review, 2014, 21:1157–1164) surveyed the interpretation of confidence intervals (CIs) by first-year students, master students, and researchers with six items expressing misinterpretations of CIs. They asked respondents to answer all items, computed the number of items endorsed, and concluded that misinterpretation of CIs is robust across groups. Their design may have produced this outcome artifactually for reasons that we describe. This paper discusses first the two interpretations of CIs and, hence, why misinterpretation cannot be inferred from endorsement of some of the items. Next, a re-analysis of Hoekstra et al.'s data reveals some puzzling differences between first-year and master students that demand further investigation. For that purpose, we designed a replication study with an extended questionnaire including two additional items that express correct interpretations of CIs (to compare endorsement of correct vs. nominally incorrect interpretations) and we asked master students to indicate which items they would have omitted had they had the option (to distinguish deliberate from uninformed endorsement caused by the forced-response format). Results showed that incognizant first-year students endorsed correct and nominally incorrect items identically, revealing that the two item types are not differentially attractive superficially; in contrast, master students were distinctively more prone to endorsing correct items when their uninformed responses were removed, although they admitted to nescience more often than might have been expected. Implications for teaching practices are discussed. PMID:27458424

  2. Doubly Bayesian Analysis of Confidence in Perceptual Decision-Making.

    PubMed

    Aitchison, Laurence; Bang, Dan; Bahrami, Bahador; Latham, Peter E

    2015-10-01

    Humans stand out from other animals in that they are able to explicitly report on the reliability of their internal operations. This ability, which is known as metacognition, is typically studied by asking people to report their confidence in the correctness of some decision. However, the computations underlying confidence reports remain unclear. In this paper, we present a fully Bayesian method for directly comparing models of confidence. Using a visual two-interval forced-choice task, we tested whether confidence reports reflect heuristic computations (e.g. the magnitude of sensory data) or Bayes optimal ones (i.e. how likely a decision is to be correct given the sensory data). In a standard design in which subjects were first asked to make a decision, and only then gave their confidence, subjects were mostly Bayes optimal. In contrast, in a less-commonly used design in which subjects indicated their confidence and decision simultaneously, they were roughly equally likely to use the Bayes optimal strategy or to use a heuristic but suboptimal strategy. Our results suggest that, while people's confidence reports can reflect Bayes optimal computations, even a small unusual twist or additional element of complexity can prevent optimality.

  3. Forecasting overhaul or replacement intervals based on estimated system failure intensity

    NASA Astrophysics Data System (ADS)

    Gannon, James M.

    1994-12-01

    System reliability can be expressed in terms of the pattern of failure events over time. Assuming a nonhomogeneous Poisson process and a Weibull intensity function for complex repairable system failures, the degree of system deterioration can be approximated. Maximum likelihood estimators (MLEs) for the system Rate of Occurrence of Failure (ROCOF) function are presented. Evaluating the integral of the ROCOF over annual usage intervals yields the expected number of annual system failures. By associating a cost of failure with the expected number of failures, budget and program policy decisions can be made based on expected future maintenance costs. Monte Carlo simulation is used to estimate the range and the distribution of the net present value and internal rate of return of alternative cash flows, based on the distributions of the cost inputs and the confidence intervals of the MLEs.
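
The MLEs for the power-law (Weibull-intensity) NHPP and the ROCOF integral described above can be sketched as follows, in the time-truncated (Crow-AMSAA) form; the failure times are hypothetical.

```python
import math

def power_law_mle(failure_times, T):
    """MLEs for an NHPP with power-law (Weibull) intensity
    ROCOF(t) = lam * beta * t**(beta - 1), observed over [0, T]
    (time-truncated, Crow-AMSAA form)."""
    n = len(failure_times)
    beta = n / sum(math.log(T / t) for t in failure_times)
    lam = n / T ** beta
    return lam, beta

def expected_failures(lam, beta, a, b):
    """Expected number of failures in [a, b]: the integral of the ROCOF,
    which is lam * (b**beta - a**beta)."""
    return lam * (b ** beta - a ** beta)
```

A fitted beta above 1 indicates deterioration (failures arriving faster over time); multiplying expected_failures over each annual interval by a cost of failure gives the expected annual maintenance cost used in the budget analysis.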

  4. The predictive value of C-reactive protein (CRP) in acute pancreatitis - is interval change in CRP an additional indicator of severity?

    PubMed

    Stirling, Aaron D; Moran, Neil R; Kelly, Michael E; Ridgway, Paul F; Conlon, Kevin C

    2017-10-01

    Using outcomes defined by the revised Atlanta classification, we compared absolute values of C-reactive protein (CRP) with interval changes in CRP for severity stratification in acute pancreatitis (AP). A retrospective study of all first incidence AP was conducted over a 5-year period. Interval change in CRP values from admission to day 1, 2 and 3 was compared against the absolute values. Receiver-operator characteristic (ROC) curve and likelihood ratios (LRs) were used to compare ability to predict severe and mild disease. 337 cases of first incidence AP were included in our analysis. ROC curve analysis demonstrated the second day as the most useful time for repeat CRP measurement. A CRP interval change >90 mg/dL at 48 h (+LR 2.15, -LR 0.26) was equivalent to an absolute value of >150 mg/dL within 48 h (+LR 2.32, -LR 0.25). The optimal cut-off for absolute CRP based on the new, more stringent definition of severity was >190 mg/dL (+LR 2.72, -LR 0.24). Interval change in CRP is a comparable measure to absolute CRP in the prognostication of AP severity. This study suggests a rise of >90 mg/dL from admission or an absolute value of >190 mg/dL at 48 h predicts severe disease with the greatest accuracy. Copyright © 2017 International Hepato-Pancreato-Biliary Association Inc. Published by Elsevier Ltd. All rights reserved.
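
The positive and negative likelihood ratios quoted above follow directly from sensitivity and specificity; a sketch with hypothetical 2x2 counts (not the study's data):

```python
def likelihood_ratios(tp, fn, fp, tn):
    """Positive and negative likelihood ratios from 2x2 test-vs-outcome counts:
    +LR = sensitivity / (1 - specificity), -LR = (1 - sensitivity) / specificity."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity / (1 - specificity), (1 - sensitivity) / specificity
```

A +LR above 1 raises the post-test odds of severe disease; a -LR well below 1 (like the 0.24-0.26 values reported) makes severe disease less likely after a negative result.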

  5. Analyses of laboratory data and establishment of reference values and intervals for healthy elderly people.

    PubMed

    Kubota, K; Kadomura, T; Ohta, K; Koyama, K; Okuda, H; Kobayashi, M; Ishii, C; Fujiwara, Y; Nishiora, T; Ohmae, Y; Ohmae, T; Kitajima, M

    2012-04-01

    Protein-energy malnutrition is a common disorder in the elderly. Although serum albumin is commonly used as a nutritional marker, data are lacking on serum albumin levels in the elderly. The purpose of this study was to determine whether serum albumin levels decrease with advancing age and to establish reference values and intervals of laboratory data for elderly people (75 years and over). Blood samples from 13821 healthy people, 42064 outpatients, and 15959 inpatients were collected during 2008. Blood from 127 of our nutrition support team (NST) patients was also collected between August 2006 and May 2009, and analyzed. Serum albumin, hemoglobin, total cholesterol levels and lymphocyte count were determined. We analyzed the change in each parameter with age, compared the data for elderly people with those for younger people, and established new reference values. Clinical outcomes were examined in relation to the revised reference values. Albumin was lower in older persons than in younger persons. The estimated reference value and interval were 42 (48-36) g/l in older persons, and albumin was much lower in NST patients. Hemoglobin was decreased, while cholesterol and lymphocyte count were unchanged, in older persons; all were markedly decreased in NST patients. Hospital stays were significantly longer and mortality rates significantly higher for patients below, compared with above, the new reference value of albumin (36 g/l). The serum albumin level decreases with advancing age, but it was maintained to some extent in healthy older people. Serum albumin levels were related to clinical outcome. Hemoglobin and cholesterol levels and lymphocyte count were all lower in NST patients. These measurements may be valuable markers of nutritional status and can help in guiding the need for nutritional support.

  6. Confidence assignment for mass spectrometry based peptide identifications via the extreme value distribution.

    PubMed

    Alves, Gelio; Yu, Yi-Kuo

    2016-09-01

    There is a growing trend for biomedical researchers to extract evidence and draw conclusions from mass spectrometry based proteomics experiments, the cornerstone of which is peptide identification. Inaccurate assignments of peptide identification confidence thus may have far-reaching and adverse consequences. Although some peptide identification methods report accurate statistics, they have been limited to certain types of scoring functions. The extreme value statistics based method, while more general in the scoring functions it allows, demands accurate parameter estimates and requires, at least in its original design, excessive computational resources. Improving the parameter estimate accuracy and reducing the computational cost for this method has two advantages: it provides another feasible route to accurate significance assessment, and it could provide reliable statistics for scoring functions yet to be developed. We have formulated and implemented an efficient algorithm for calculating the extreme value statistics for peptide identification applicable to various scoring functions, bypassing the need for searching large random databases. The source code, implemented in C++ on a Linux system, is available for download at ftp://ftp.ncbi.nlm.nih.gov/pub/qmbp/qmbp_ms/RAId/RAId_Linux_64Bit. Contact: yyu@ncbi.nlm.nih.gov. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the US.
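
The abstract's core idea, assigning a significance value to a best-match score from an extreme value (Gumbel) distribution, can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the null-score sample and all parameter values are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical null peptide-spectrum match scores; in practice these would
# come from scoring a spectrum against decoy/random peptide sequences.
null_scores = rng.gumbel(loc=20.0, scale=3.0, size=5000)

# Fit a Gumbel (extreme value) distribution to the null score sample.
loc, scale = stats.gumbel_r.fit(null_scores)

# P-value of an observed best-match score = probability that a null maximum
# score is at least this large under the fitted EVD (survival function).
observed = 35.0
p_value = stats.gumbel_r.sf(observed, loc=loc, scale=scale)
print(f"fitted loc={loc:.2f}, scale={scale:.2f}, p={p_value:.3g}")
```

The same tail calculation applies whatever the underlying scoring function, which is what makes the EVD route attractive for scoring functions yet to be developed.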

  7. The health impacts and economic value of wildland fire ...

    EPA Pesticide Factsheets

    Introduction: Wildland fires degrade regional air quality and adversely affect human health. A growing body of epidemiology literature reports increased rates of emergency department visits, hospital admissions and premature deaths from wildfire smoke exposure. Objective: Our research aimed to characterize excess mortality and morbidity events, and the economic value of these impacts, from wildland fire smoke exposure in the U.S. over a multi-year period; to date no other burden assessment has done this. Methods: We first completed a systematic review of the epidemiologic literature and then performed photochemical air quality modeling for the years 2008 to 2012 in the Continental U.S. Finally, we estimated the morbidity, mortality, and economic burden of wildland fires. Results: Our models suggest that areas including northern California, Oregon and Idaho in the West, and Florida, Louisiana and Georgia in the East were most affected by wildland fire events in the form of additional premature deaths and respiratory hospital admissions. We estimated the economic value of these cases due to short term exposures as being between $11 and $20B (2010$) per year, with a net present value of $63B (95% confidence intervals $6-$170); we estimate the value of long-term exposures as being between $76 and $130B (2010$) per year, with a net present value of $450B (95% confidence intervals $42-$1,200). Conclusion: The public health burden of wildland fires-in terms of the number and

  8. Substitution of (R,S)-methadone by (R)-methadone: Impact on QTc interval.

    PubMed

    Ansermot, Nicolas; Albayrak, Ozgür; Schläpfer, Jürg; Crettol, Séverine; Croquette-Krokar, Marina; Bourquin, Michel; Déglon, Jean-Jacques; Faouzi, Mohamed; Scherbaum, Norbert; Eap, Chin B

    2010-03-22

    Methadone is administered as a chiral mixture of (R,S)-methadone. The opioid effect is mainly mediated by (R)-methadone, whereas (S)-methadone blocks the human ether-à-go-go-related gene (hERG) voltage-gated potassium channel more potently, which can cause drug-induced long QT syndrome, leading to potentially lethal ventricular tachyarrhythmias. To investigate whether substitution of (R,S)-methadone by (R)-methadone could reduce the corrected QT (QTc) interval, (R,S)-methadone was replaced by (R)-methadone (half-dose) in 39 opioid-dependent patients receiving maintenance treatment for 14 days. (R)-methadone was then replaced by the initial dose of (R,S)-methadone for 14 days (n = 29). Trough (R)-methadone and (S)-methadone plasma levels and electrocardiogram measurements were taken. The Fridericia-corrected QT (QTcF) interval decreased when (R,S)-methadone was replaced by a half-dose of (R)-methadone; the median (interquartile range [IQR]) values were 423 (398-440) milliseconds (ms) and 412 (395-431) ms (P = .06) at days 0 and 14, respectively. Using a univariate mixed-effect linear model, the QTcF value decreased by a mean of -3.9 ms (95% confidence interval [CI], -7.7 to -0.2) per week (P = .04). The QTcF value increased when (R)-methadone was replaced by the initial dose of (R,S)-methadone for 14 days; median (IQR) values were 424 (398-436) ms and 424 (412-443) ms (P = .01) at days 14 and 28, respectively. The univariate model showed that the QTcF value increased by a mean of 4.7 ms (95% CI, 1.3-8.1) per week (P = .006). Substitution of (R,S)-methadone by (R)-methadone reduces the QTc interval value. A safer cardiac profile of (R)-methadone is in agreement with previous in vitro and pharmacogenetic studies. If the present results are confirmed by larger studies, (R)-methadone should be prescribed instead of (R,S)-methadone to reduce the risk of cardiac toxic effects and sudden death.

  9. A comparison of confidence interval methods for the concordance correlation coefficient and intraclass correlation coefficient with small number of raters.

    PubMed

    Feng, Dai; Svetnik, Vladimir; Coimbra, Alexandre; Baumgartner, Richard

    2014-01-01

    The intraclass correlation coefficient (ICC) with fixed raters or, equivalently, the concordance correlation coefficient (CCC) for continuous outcomes is a widely accepted aggregate index of agreement in settings with a small number of raters. Quantifying the precision of the CCC by constructing its confidence interval (CI) is important in early drug development applications, in particular in qualification of biomarker platforms. In recent years, there have been several new methods proposed for construction of CIs for the CCC, but their comprehensive comparison has not been attempted. The methods consisted of the delta method and jackknifing with and without Fisher's Z-transformation, respectively, and Bayesian methods with vague priors. In this study, we carried out a simulation study, with data simulated from a multivariate normal as well as a heavier-tailed distribution (t-distribution with 5 degrees of freedom), to compare the state-of-the-art methods for assigning a CI to the CCC. When the data are normally distributed, the jackknifing with Fisher's Z-transformation (JZ) tended to provide superior coverage, and the difference between it and the closest competitor, the Bayesian method with the Jeffreys prior, was in general minimal. For the nonnormal data, the jackknife methods, especially the JZ method, provided the coverage probabilities closest to the nominal in contrast to the others, which yielded overly liberal coverage. Approaches based upon the delta method and Bayesian method with conjugate prior generally provided slightly narrower intervals and larger lower bounds than others, though this was offset by their poor coverage. Finally, we illustrated the utility of the CIs for the CCC in an example of a wake after sleep onset (WASO) biomarker, which is frequently used in clinical sleep studies of drugs for treatment of insomnia.
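
As a rough illustration of the best-performing approach named above (jackknife with Fisher's Z-transformation), here is a minimal two-rater sketch. The CCC formula is Lin's standard estimator; the simulated rater data and sample size are hypothetical and do not reproduce the paper's simulation design.

```python
import numpy as np
from scipy import stats

def ccc(x, y):
    """Lin's concordance correlation coefficient for two raters."""
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

def ccc_ci_jackknife_z(x, y, alpha=0.05):
    """Jackknife CI for the CCC, computed on the Fisher Z scale."""
    n = len(x)
    z_full = np.arctanh(ccc(x, y))
    z_loo = np.array([np.arctanh(ccc(np.delete(x, i), np.delete(y, i)))
                      for i in range(n)])
    pseudo = n * z_full - (n - 1) * z_loo          # jackknife pseudo-values
    z_mean = pseudo.mean()
    z_se = pseudo.std(ddof=1) / np.sqrt(n)
    t = stats.t.ppf(1 - alpha / 2, df=n - 1)
    return np.tanh((z_mean - t * z_se, z_mean + t * z_se))  # back-transform

# Hypothetical agreement data: two raters measuring the same 60 subjects.
rng = np.random.default_rng(1)
truth = rng.normal(size=60)
rater1 = truth + rng.normal(scale=0.3, size=60)
rater2 = truth + rng.normal(scale=0.3, size=60)
lo, hi = ccc_ci_jackknife_z(rater1, rater2)
print(f"CCC = {ccc(rater1, rater2):.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

The Z-transformation keeps the interval inside (-1, 1) and tends to improve coverage, which is consistent with the paper's finding that JZ performed best.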

  10. A comparison of confidence/credible interval methods for the area under the ROC curve for continuous diagnostic tests with small sample size.

    PubMed

    Feng, Dai; Cortese, Giuliana; Baumgartner, Richard

    2017-12-01

    The receiver operating characteristic (ROC) curve is frequently used as a measure of accuracy of continuous markers in diagnostic tests. The area under the ROC curve (AUC) is arguably the most widely used summary index for the ROC curve. Although the small sample size scenario is common in medical tests, a comprehensive study of small sample size properties of various methods for the construction of the confidence/credible interval (CI) for the AUC has been by and large missing in the literature. In this paper, we describe and compare 29 non-parametric and parametric methods for the construction of the CI for the AUC when the number of available observations is small. The methods considered include not only those that have been widely adopted, but also those that have been less frequently mentioned or, to our knowledge, never applied to the AUC context. To compare different methods, we carried out a simulation study with data generated from binormal models with equal and unequal variances and from exponential models with various parameters and with equal and unequal small sample sizes. We found that the larger the true AUC value and the smaller the sample size, the larger the discrepancy among the results of different approaches. When the model is correctly specified, the parametric approaches tend to outperform the non-parametric ones. Moreover, in the non-parametric domain, we found that a method based on the Mann-Whitney statistic is in general superior to the others. We further elucidate potential issues and provide possible solutions, along with general guidance on the CI construction for the AUC when the sample size is small. Finally, we illustrate the utility of different methods through real life examples.
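
To give the flavor of the nonparametric family discussed above, here is one classic recipe: the AUC computed from the Mann-Whitney statistic with a Hanley-McNeil Wald interval. This is a sketch of one of many possible CI constructions, not a reproduction of the paper's 29-method comparison; the simulated marker data are hypothetical.

```python
import numpy as np
from scipy import stats

def auc_mann_whitney_ci(pos, neg, alpha=0.05):
    """Nonparametric AUC via the Mann-Whitney statistic, with a
    Hanley-McNeil Wald confidence interval."""
    n1, n0 = len(pos), len(neg)
    u = stats.mannwhitneyu(pos, neg, alternative="two-sided").statistic
    auc = u / (n1 * n0)
    # Hanley-McNeil variance approximation.
    q1 = auc / (2 - auc)
    q2 = 2 * auc**2 / (1 + auc)
    var = (auc * (1 - auc) + (n1 - 1) * (q1 - auc**2)
           + (n0 - 1) * (q2 - auc**2)) / (n1 * n0)
    half = stats.norm.ppf(1 - alpha / 2) * np.sqrt(var)
    return auc, max(0.0, auc - half), min(1.0, auc + half)

# Hypothetical small-sample diagnostic marker (binormal, equal variances).
rng = np.random.default_rng(2)
diseased = rng.normal(1.0, 1.0, size=15)
healthy = rng.normal(0.0, 1.0, size=15)
auc, lo, hi = auc_mann_whitney_ci(diseased, healthy)
print(f"AUC = {auc:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

With n = 15 per group the Wald interval is wide and can behave poorly near AUC = 1, which is exactly the regime where the paper reports the largest discrepancies between methods.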

  11. Reliable prediction intervals with regression neural networks.

    PubMed

    Papadopoulos, Harris; Haralambous, Haris

    2011-10-01

    This paper proposes an extension to conventional regression neural networks (NNs) for replacing the point predictions they produce with prediction intervals that satisfy a required level of confidence. Our approach follows a novel machine learning framework, called Conformal Prediction (CP), for assigning reliable confidence measures to predictions without assuming anything more than that the data are independent and identically distributed (i.i.d.). We evaluate the proposed method on four benchmark datasets and on the problem of predicting Total Electron Content (TEC), which is an important parameter in trans-ionospheric links; for the latter we use a dataset of more than 60000 TEC measurements collected over a period of 11 years. Our experimental results show that the prediction intervals produced by our method are both well calibrated and tight enough to be useful in practice. Copyright © 2011 Elsevier Ltd. All rights reserved.
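
The Conformal Prediction idea described above can be sketched with its simple split (inductive) variant: calibrate the interval width on held-out residuals so that the intervals are valid under the i.i.d. assumption alone. The polynomial point predictor, synthetic data, and 90% level here are illustrative stand-ins for the paper's neural networks and TEC dataset.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 1-D regression data; any point predictor can be plugged in.
x = rng.uniform(-3, 3, size=400)
y = np.sin(x) + rng.normal(scale=0.2, size=400)

# Split into a proper training set and a calibration set.
x_tr, y_tr = x[:300], y[:300]
x_cal, y_cal = x[300:], y[300:]

# Point predictor: a cubic polynomial fit standing in for the NN.
coef = np.polyfit(x_tr, y_tr, deg=3)

def predict(t):
    return np.polyval(coef, t)

# Nonconformity scores on the calibration set = absolute residuals.
scores = np.abs(y_cal - predict(x_cal))

# For 90% confidence, take the conformal empirical quantile of the scores.
n = len(scores)
q = np.sort(scores)[int(np.ceil(0.9 * (n + 1))) - 1]

# Prediction interval at a new point: point prediction +/- q.
x_new = 1.5
lo, hi = predict(x_new) - q, predict(x_new) + q
print(f"90% prediction interval at x=1.5: ({lo:.2f}, {hi:.2f})")
```

The calibration step is what makes the intervals "well calibrated" in the paper's sense: over i.i.d. data, roughly 90% of new targets fall inside, regardless of how good or bad the underlying regressor is (a poor regressor simply yields wider intervals).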

  12. A two-stage mixed-integer fuzzy programming with interval-valued membership functions approach for flood-diversion planning.

    PubMed

    Wang, S; Huang, G H

    2013-03-15

    Flood disasters have been extremely severe in recent decades, and they account for about one third of all natural catastrophes throughout the world. In this study, a two-stage mixed-integer fuzzy programming with interval-valued membership functions (TMFP-IMF) approach is developed for flood-diversion planning under uncertainty. TMFP-IMF integrates the fuzzy flexible programming, two-stage stochastic programming, and integer programming within a general framework. A concept of interval-valued fuzzy membership function is introduced to address complexities of system uncertainties. TMFP-IMF can not only deal with uncertainties expressed as fuzzy sets and probability distributions, but also incorporate pre-regulated water-diversion policies directly into its optimization process. TMFP-IMF is applied to a hypothetical case study of flood-diversion planning for demonstrating its applicability. Results indicate that reasonable solutions can be generated for binary and continuous variables. A variety of flood-diversion and capacity-expansion schemes can be obtained under four scenarios, which enable decision makers (DMs) to identify the most desired one based on their perceptions and attitudes towards the objective-function value and constraints. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Multiple confidence estimates as indices of eyewitness memory.

    PubMed

    Sauer, James D; Brewer, Neil; Weber, Nathan

    2008-08-01

    Eyewitness identification decisions are vulnerable to various influences on witnesses' decision criteria that contribute to false identifications of innocent suspects and failures to choose perpetrators. An alternative procedure using confidence estimates to assess the degree of match between novel and previously viewed faces was investigated. Classification algorithms were applied to participants' confidence data to determine when a confidence value or pattern of confidence values indicated a positive response. Experiment 1 compared confidence group classification accuracy with a binary decision control group's accuracy on a standard old-new face recognition task and found superior accuracy for the confidence group for target-absent trials but not for target-present trials. Experiment 2 used a face mini-lineup task and found reduced target-present accuracy offset by large gains in target-absent accuracy. Using a standard lineup paradigm, Experiments 3 and 4 also found improved classification accuracy for target-absent lineups and, with a more sophisticated algorithm, for target-present lineups. This demonstrates the accessibility of evidence for recognition memory decisions and points to a more sensitive index of memory quality than is afforded by binary decisions.

  14. The integrated model of sport confidence: a canonical correlation and mediational analysis.

    PubMed

    Koehn, Stefan; Pearce, Alan J; Morris, Tony

    2013-12-01

    The main purpose of the study was to examine crucial parts of Vealey's (2001) integrated framework hypothesizing that sport confidence is a mediating variable between sources of sport confidence (including achievement, self-regulation, and social climate) and athletes' affect in competition. The sample consisted of 386 athletes, who completed the Sources of Sport Confidence Questionnaire, Trait Sport Confidence Inventory, and Dispositional Flow Scale-2. Canonical correlation analysis revealed a confidence-achievement dimension underlying flow. Bias-corrected bootstrap confidence intervals in AMOS 20.0 were used in examining mediation effects between source domains and dispositional flow. Results showed that sport confidence partially mediated the relationship between achievement and self-regulation domains and flow, whereas no significant mediation was found for social climate. On a subscale level, full mediation models emerged for achievement and flow dimensions of challenge-skills balance, clear goals, and concentration on the task at hand.

  15. Normative values for the unipedal stance test with eyes open and closed.

    PubMed

    Springer, Barbara A; Marin, Raul; Cyhan, Tamara; Roberts, Holly; Gill, Norman W

    2007-01-01

    Limited normative data are available for the unipedal stance test (UPST), making it difficult for clinicians to use it confidently to detect subtle balance impairments. The purpose of this study was to generate normative values for repeated trials of the UPST with eyes open and eyes closed across age groups and gender. This prospective, mixed-model design was set in a tertiary care medical center. Healthy subjects (n = 549), 18 years or older, performed the UPST with eyes open and closed. Mean and best of 3 UPST times for males and females of 6 age groups (18-39, 40-49, 50-59, 60-69, 70-79, and 80+) were documented, and inter-rater reliability was tested. There was a significant age-dependent decrease in UPST time during both conditions. Inter-rater reliability for the best of 3 trials was determined to be excellent, with an intraclass correlation coefficient of 0.994 (95% confidence interval 0.989-0.996) for eyes open and 0.998 (95% confidence interval 0.996-0.999) for eyes closed. This study adds to the understanding of typical performance on the UPST. Performance is age-specific and not related to gender. Clinicians now have more extensive normative values to which individuals can be compared.

  16. Interval-based reconstruction for uncertainty quantification in PET

    NASA Astrophysics Data System (ADS)

    Kucharczak, Florentin; Loquin, Kevin; Buvat, Irène; Strauss, Olivier; Mariano-Goulart, Denis

    2018-02-01

    A new directed interval-based tomographic reconstruction algorithm, called non-additive interval based expectation maximization (NIBEM), is presented. It uses non-additive modeling of the forward operator that provides intervals instead of single-valued projections. The detailed approach is an extension of the maximum-likelihood expectation-maximization algorithm based on intervals. The main motivation for this extension is that the resulting intervals have appealing properties for estimating the statistical uncertainty associated with the reconstructed activity values. After reviewing previously published theoretical concepts related to interval-based projectors, this paper describes the NIBEM algorithm and gives examples that highlight the properties and advantages of this interval-valued reconstruction.

  17. BRIDGING GAPS BETWEEN ZOO AND WILDLIFE MEDICINE: ESTABLISHING REFERENCE INTERVALS FOR FREE-RANGING AFRICAN LIONS (PANTHERA LEO).

    PubMed

    Broughton, Heather M; Govender, Danny; Shikwambana, Purvance; Chappell, Patrick; Jolles, Anna

    2017-06-01

    The International Species Information System has set forth an extensive database of reference intervals for zoologic species, allowing veterinarians and game park officials to distinguish normal health parameters from underlying disease processes in captive wildlife. However, several recent studies comparing reference values from captive and free-ranging animals have found significant variation between populations, necessitating the development of separate reference intervals in free-ranging wildlife to aid in the interpretation of health data. Thus, this study characterizes reference intervals for six biochemical analytes, eleven hematologic or immune parameters, and three hormones using samples from 219 free-ranging African lions (Panthera leo) captured in Kruger National Park, South Africa. Using the original sample population, exclusion criteria based on physical examination were applied to yield a final reference population of 52 clinically normal lions. Reference intervals were then generated via 90% confidence intervals on log-transformed data using parametric bootstrapping techniques. In addition to the generation of reference intervals, linear mixed-effect models and generalized linear mixed-effect models were used to model associations of each focal parameter with the following independent variables: age, sex, and body condition score. Age and sex were statistically significant drivers for changes in hepatic enzymes, renal values, hematologic parameters, and leptin, a hormone related to body fat stores. Body condition was positively correlated with changes in monocyte counts. Given the large variation in reference values taken from captive versus free-ranging lions, it is our hope that this study will serve as a baseline for future clinical evaluations and biomedical research targeting free-ranging African lions.

  18. Contraceptive confidence and timing of first birth in Moldova: an event history analysis of retrospective data.

    PubMed

    Lyons-Amos, Mark; Padmadas, Sabu S; Durrant, Gabriele B

    2014-08-11

    To test the contraceptive confidence hypothesis in a modern context. The hypothesis is that women using effective or modern contraceptive methods have increased contraceptive confidence and hence a shorter interval between marriage and first birth than users of ineffective or traditional methods. We extend the hypothesis to incorporate the role of abortion, arguing that it acts as a substitute for contraception in the study context. Moldova, a country in South-East Europe. Moldova exhibits high use of traditional contraceptive methods and abortion compared with other European countries. Data are from a secondary analysis of the 2005 Moldovan Demographic and Health Survey, a nationally representative sample survey. 5377 unmarried women were selected. The outcome measure was the interval between marriage and first birth. This was modelled using a piecewise-constant hazard regression, with abortion and contraceptive method types as primary variables along with relevant sociodemographic controls. Women with high contraceptive confidence (modern method users) have a higher cumulative hazard of first birth 36 months following marriage (0.88 (0.87 to 0.89)) compared with women with low contraceptive confidence (traditional method users, cumulative hazard: 0.85 (0.84 to 0.85)). This is consistent with the contraceptive confidence hypothesis. There is a higher cumulative hazard of first birth among women with low (0.80 (0.79 to 0.80)) and moderate abortion propensities (0.76 (0.75 to 0.77)) than women with no abortion propensity (0.73 (0.72 to 0.74)) 24 months after marriage. Effective contraceptive use tends to increase contraceptive confidence and is associated with a shorter interval between marriage and first birth. Increased use of abortion also tends to increase contraceptive confidence and shorten the interval to first birth, although this effect is non-linear: women with a very high use of abortion tend to have lengthy intervals between marriage and first birth. Published by

  19. Testing 40 Predictions from the Transtheoretical Model Again, with Confidence

    ERIC Educational Resources Information Center

    Velicer, Wayne F.; Brick, Leslie Ann D.; Fava, Joseph L.; Prochaska, James O.

    2013-01-01

    Testing Theory-based Quantitative Predictions (TTQP) represents an alternative to traditional Null Hypothesis Significance Testing (NHST) procedures and is more appropriate for theory testing. The theory generates explicit effect size predictions and these effect size estimates, with related confidence intervals, are used to test the predictions.…

  20. The range of confidence scales does not affect the relationship between confidence and accuracy in recognition memory.

    PubMed

    Tekin, Eylul; Roediger, Henry L

    2017-01-01

    Researchers use a wide range of confidence scales when measuring the relationship between confidence and accuracy in reports from memory, with the highest number usually representing the greatest confidence (e.g., 4-point, 20-point, and 100-point scales). The assumption seems to be that the range of the scale has little bearing on the confidence-accuracy relationship. In two old/new recognition experiments, we directly investigated this assumption using word lists (Experiment 1) and faces (Experiment 2) by employing 4-, 5-, 20-, and 100-point scales. Using confidence-accuracy characteristic (CAC) plots, we asked whether confidence ratings would yield similar CAC plots, indicating comparability in use of the scales. For the comparisons, we divided 100-point and 20-point scales into bins of either four or five and asked, for example, whether confidence ratings of 4, 16-20, and 76-100 would yield similar values. The results show that, for both types of material, the different scales yield similar CAC plots. Notably, when subjects express high confidence, regardless of which scale they use, they are likely to be very accurate (even though they studied 100 words and 50 faces in each list in 2 experiments). The scales seem convertible from one to the other, and choice of scale range probably does not affect research into the relationship between confidence and accuracy. High confidence indicates high accuracy in recognition in the present experiments.
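
The binning procedure described above (collapsing a 100-point scale into bins and plotting accuracy per bin, i.e. the points of a CAC plot) can be sketched on simulated data. The response-generation model below is purely illustrative and is not the authors' dataset.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated recognition data: 100-point confidence ratings where higher
# confidence tends to accompany correct responses (illustrative only).
n = 2000
correct = rng.random(n) < 0.75
confidence = np.clip(rng.normal(np.where(correct, 80, 50), 15), 1, 100)

# Collapse the 100-point scale into four bins (1-25, 26-50, 51-75, 76-100),
# mirroring the binning used to compare scales of different ranges.
edges = [1, 26, 51, 76, 101]
bins = np.digitize(confidence, edges) - 1  # 0..3

# Accuracy per confidence bin = the points of a CAC plot.
for b in range(4):
    mask = bins == b
    if mask.any():
        print(f"bin {edges[b]}-{edges[b + 1] - 1}: "
              f"accuracy = {correct[mask].mean():.2f} (n={mask.sum()})")
```

If a 4-point scale and a binned 100-point scale yield similar accuracy in corresponding bins, the scales are effectively interchangeable, which is the comparison the paper makes.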

  1. The 2012 Retirement Confidence Survey: job insecurity, debt weigh on retirement confidence, savings.

    PubMed

    Helman, Ruth; Copeland, Craig; VanDerhei, Jack

    2012-03-01

    Americans' confidence in their ability to retire comfortably is stagnant at historically low levels. Just 14 percent are very confident they will have enough money to live comfortably in retirement (statistically equivalent to the low of 13 percent measured in 2011 and 2009). Employment insecurity looms large: Forty-two percent identify job uncertainty as the most pressing financial issue facing most Americans today. Worker confidence about having enough money to pay for medical expenses and long-term care expenses in retirement remains well below their confidence levels for paying basic expenses. Many workers report they have virtually no savings and investments. In total, 60 percent of workers report that the total value of their household's savings and investments, excluding the value of their primary home and any defined benefit plans, is less than $25,000. Twenty-five percent of workers in the 2012 Retirement Confidence Survey say the age at which they expect to retire has changed in the past year. In 1991, 11 percent of workers said they expected to retire after age 65, and by 2012 that has grown to 37 percent. Regardless of those retirement age expectations, and consistent with prior RCS findings, half of current retirees surveyed say they left the work force unexpectedly due to health problems, disability, or changes at their employer, such as downsizing or closure. Those already in retirement tend to express higher levels of confidence than current workers about several key financial aspects of retirement. Retirees report they are significantly more reliant on Social Security as a major source of their retirement income than current workers expect to be. Although 56 percent of workers expect to receive benefits from a defined benefit plan in retirement, only 33 percent report that they and/or their spouse currently have such a benefit with a current or previous employer. More than half of workers (56 percent) report they and/or their spouse have not tried

  2. Expressing Intervals in Automated Service Negotiation

    NASA Astrophysics Data System (ADS)

    Clark, Kassidy P.; Warnier, Martijn; van Splunter, Sander; Brazier, Frances M. T.

    During automated negotiation of services between autonomous agents, utility functions are used to evaluate the terms of negotiation. These terms often include intervals of values which are prone to misinterpretation. It is often unclear if an interval embodies a continuum of real numbers or a subset of natural numbers. Furthermore, it is often unclear if an agent is expected to choose only one value, multiple values, a sub-interval or even multiple sub-intervals. Additional semantics are needed to clarify these issues. Normally, these semantics are stored in a domain ontology. However, ontologies are typically domain specific and static in nature. For dynamic environments, in which autonomous agents negotiate resources whose attributes and relationships change rapidly, semantics should be made explicit in the service negotiation. This paper identifies issues that are prone to misinterpretation and proposes a notation for expressing intervals. This notation is illustrated using an example in WS-Agreement.

  3. A multicenter study on PIVKA reference interval of healthy population and establishment of PIVKA cutoff value for hepatocellular carcinoma diagnosis in China.

    PubMed

    Qin, X; Tang, G; Gao, R; Guo, Z; Liu, Z; Yu, S; Chen, M; Tao, Z; Li, S; Liu, M; Wang, L; Hou, L; Xia, L; Cheng, X; Han, J; Qiu, L

    2017-08-01

    The aim of this study was to investigate the reference interval of protein-induced vitamin K absence or antagonist-II (PIVKA-II) in the Chinese population and to evaluate its medical decision level for hepatocellular carcinoma (HCC) diagnosis. To determine the reference range for Chinese individuals, a total of 855 healthy subjects in five typical regions of China were enrolled in this study to obtain a 95% reference interval. In a case-control study which recruited subjects diagnosed with HCC, metastatic liver cancer, bile duct cancer, hepatitis, cirrhosis, or other benign liver diseases, as well as subjects administered anticoagulants, receiver operating characteristic analysis was used to determine the PIVKA-II cutoff value for a medical decision. The concentration of PIVKA-II had no relationship with age or gender, whereas region was a significant factor associated with the level of PIVKA-II. The 95% reference interval determined in this study for PIVKA-II in Chinese healthy individuals was 28 mAU/mL, and the cutoff value to distinguish patients with HCC from disease control groups was 36.5 mAU/mL. In clinical applications, it is recommended that each laboratory chooses its own reference interval based on a regional population study, or its own cutoff value for disease diagnosis. © 2017 John Wiley & Sons Ltd.

  4. Is the P-Value Really Dead? Assessing Inference Learning Outcomes for Social Science Students in an Introductory Statistics Course

    ERIC Educational Resources Information Center

    Lane-Getaz, Sharon

    2017-01-01

    In reaction to misuses and misinterpretations of p-values and confidence intervals, a social science journal editor banned p-values from its pages. This study aimed to show that education could address misuse and abuse. This study examines inference-related learning outcomes for social science students in an introductory course supplemented with…

  5. Bootstrap Signal-to-Noise Confidence Intervals: An Objective Method for Subject Exclusion and Quality Control in ERP Studies

    PubMed Central

    Parks, Nathan A.; Gannon, Matthew A.; Long, Stephanie M.; Young, Madeleine E.

    2016-01-01

    Analysis of event-related potential (ERP) data includes several steps to ensure that ERPs meet an appropriate level of signal quality. One such step, subject exclusion, rejects subject data if ERP waveforms fail to meet an appropriate level of signal quality. Subject exclusion is an important quality control step in the ERP analysis pipeline as it ensures that statistical inference is based only upon those subjects exhibiting clear evoked brain responses. This critical quality control step is most often performed simply through visual inspection of subject-level ERPs by investigators. Such an approach is qualitative, subjective, and susceptible to investigator bias, as there are no standards as to what constitutes an ERP of sufficient signal quality. Here, we describe a standardized and objective method for quantifying waveform quality in individual subjects and establishing criteria for subject exclusion. The approach uses bootstrap resampling of ERP waveforms (from a pool of all available trials) to compute a signal-to-noise ratio confidence interval (SNR-CI) for individual subject waveforms. The lower bound of this SNR-CI (SNRLB) yields an effective and objective measure of signal quality as it ensures that ERP waveforms statistically exceed a desired signal-to-noise criterion. SNRLB provides a quantifiable metric of individual subject ERP quality and eliminates the need for subjective evaluation of waveform quality by the investigator. We detail the SNR-CI methodology, establish the efficacy of employing this approach with Monte Carlo simulations, and demonstrate its utility in practice when applied to ERP datasets. PMID:26903849
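
A minimal sketch of the bootstrap SNR-CI idea follows: resample trials with replacement, average, compute an SNR per resample, and take the lower percentile bound as the exclusion criterion. The RMS-based SNR definition, window choices, and simulated ERP data are assumptions for illustration and need not match the paper's exact metric.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated single-subject ERP data: 80 trials x 300 time points,
# a fixed evoked response buried in trial-to-trial noise.
n_trials, n_time = 80, 300
t = np.linspace(0, 0.6, n_time)
evoked = 2.0 * np.exp(-((t - 0.3) ** 2) / 0.002)   # "P3-like" bump at 300 ms
trials = evoked + rng.normal(scale=5.0, size=(n_trials, n_time))

def snr(waveform, signal_win, noise_win):
    """RMS amplitude in a signal window over RMS in a baseline window."""
    return (np.sqrt(np.mean(waveform[signal_win] ** 2))
            / np.sqrt(np.mean(waveform[noise_win] ** 2)))

signal_win = (t > 0.25) & (t < 0.35)
noise_win = t < 0.1

# Bootstrap: resample trials with replacement, average, compute SNR.
boot = np.array([
    snr(trials[rng.integers(0, n_trials, n_trials)].mean(axis=0),
        signal_win, noise_win)
    for _ in range(2000)
])

# Lower bound of the 95% CI (SNR_LB): exclude the subject if it falls
# below a chosen criterion, e.g. SNR_LB < 2.
snr_lb, snr_ub = np.percentile(boot, [2.5, 97.5])
print(f"SNR 95% CI: ({snr_lb:.2f}, {snr_ub:.2f})")
```

Because the decision uses the CI's lower bound rather than the point estimate, a subject is retained only when the waveform statistically exceeds the signal-to-noise criterion, which is the objectivity the method buys over visual inspection.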

  6. Boundedness and global robust stability analysis of delayed complex-valued neural networks with interval parameter uncertainties.

    PubMed

    Song, Qiankun; Yu, Qinqin; Zhao, Zhenjiang; Liu, Yurong; Alsaadi, Fuad E

    2018-07-01

    In this paper, the boundedness and robust stability for a class of delayed complex-valued neural networks with interval parameter uncertainties are investigated. By using the homomorphic mapping theorem, the Lyapunov method and inequality techniques, a sufficient condition is derived that guarantees the boundedness of the networks and the existence, uniqueness and global robust stability of the equilibrium point for the considered uncertain neural networks. The obtained robust stability criterion is expressed as a complex-valued LMI, which can be calculated numerically using YALMIP with the SDPT3 solver in MATLAB. An example with simulations is supplied to show the applicability and advantages of the acquired result. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. Microvascular anastomosis simulation using a chicken thigh model: Interval versus massed training.

    PubMed

    Schoeff, Stephen; Hernandez, Brian; Robinson, Derek J; Jameson, Mark J; Shonka, David C

    2017-11-01

    To compare the effectiveness of massed versus interval training when teaching otolaryngology residents microvascular suturing on a validated microsurgical model. At an academic medical center, 14 otolaryngology residents were assigned using stratified randomization to interval (n = 7) or massed (n = 7) training groups. The interval group performed three separate 30-minute practice sessions separated by at least 1 week; the massed group performed a single 90-minute practice session. Both groups viewed a video demonstration and recorded a pretest prior to the first training session, and a post-test was administered following the last practice session. Blinded evaluators graded performance using a validated microvascular Objective Structured Assessment of Technical Skill tool, which comprises two major components: a task-specific score (TSS) and a global rating scale (GRS). Participants also received pre- and poststudy surveys to compare subjective confidence in multiple aspects of microvascular skill acquisition. Overall, all residents showed increased TSS and GRS on the post-test versus the pretest. After completion of training, the interval group had a statistically significant increase in both TSS and GRS, whereas the massed group's increase was not significant. Residents in both groups reported significantly increased levels of confidence after completion of the study. Self-directed learning using a chicken thigh artery model may benefit microsurgical skills, competence, and confidence for resident surgeons. Interval training results in significant improvement in early development of microvascular anastomosis skills, whereas massed training does not. Laryngoscope, 127:2490-2494, 2017. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.

  8. Two sides of the same coin: Monetary incentives concurrently improve and bias confidence judgments.

    PubMed

    Lebreton, Maël; Langdon, Shari; Slieker, Matthijs J; Nooitgedacht, Jip S; Goudriaan, Anna E; Denys, Damiaan; van Holst, Ruth J; Luigjes, Judy

    2018-05-01

    Decisions are accompanied by a feeling of confidence, that is, a belief about the decision being correct. Confidence accuracy is critical, notably in high-stakes situations such as medical or financial decision-making. We investigated how incentive motivation influences confidence accuracy by combining a perceptual task with a confidence incentivization mechanism. By varying the magnitude and valence (gains or losses) of monetary incentives, we orthogonalized their motivational and affective components. Corroborating theories of rational decision-making and motivation, our results first reveal that the motivational value of incentives improves aspects of confidence accuracy. However, in line with a value-confidence interaction hypothesis, we further show that the affective value of incentives concurrently biases confidence reports, thus degrading confidence accuracy. Finally, we demonstrate that the motivational and affective effects of incentives differentially affect how confidence builds on perceptual evidence. Together, these findings may provide new hints about confidence miscalibration in healthy or pathological contexts.

  9. Two sides of the same coin: Monetary incentives concurrently improve and bias confidence judgments

    PubMed Central

    Lebreton, Maël; Slieker, Matthijs J.; Nooitgedacht, Jip S.; van Holst, Ruth J.; Luigjes, Judy

    2018-01-01

    Decisions are accompanied by a feeling of confidence, that is, a belief about the decision being correct. Confidence accuracy is critical, notably in high-stakes situations such as medical or financial decision-making. We investigated how incentive motivation influences confidence accuracy by combining a perceptual task with a confidence incentivization mechanism. By varying the magnitude and valence (gains or losses) of monetary incentives, we orthogonalized their motivational and affective components. Corroborating theories of rational decision-making and motivation, our results first reveal that the motivational value of incentives improves aspects of confidence accuracy. However, in line with a value-confidence interaction hypothesis, we further show that the affective value of incentives concurrently biases confidence reports, thus degrading confidence accuracy. Finally, we demonstrate that the motivational and affective effects of incentives differentially affect how confidence builds on perceptual evidence. Together, these findings may provide new hints about confidence miscalibration in healthy or pathological contexts. PMID:29854944

  10. Confidence-Based Feature Acquisition

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri L.; desJardins, Marie; MacGlashan, James

    2010-01-01

    Confidence-based Feature Acquisition (CFA) is a novel, supervised learning method for acquiring missing feature values when there is missing data at both training (learning) and test (deployment) time. To train a machine learning classifier, data is encoded with a series of input features describing each item. In some applications, the training data may have missing values for some of the features, which can be acquired at a given cost. A relevant JPL example is that of the Mars rover exploration in which the features are obtained from a variety of different instruments, with different power consumption and integration time costs. The challenge is to decide which features will lead to increased classification performance and are therefore worth acquiring (paying the cost). To solve this problem, CFA, which is made up of two algorithms (CFA-train and CFA-predict), has been designed to greedily minimize total acquisition cost (during training and testing) while aiming for a specific accuracy level (specified as a confidence threshold). With this method, it is assumed that there is a nonempty subset of features that are free; that is, every instance in the data set includes these features initially for zero cost. It is also assumed that the feature acquisition (FA) cost associated with each feature is known in advance, and that the FA cost for a given feature is the same for all instances. Finally, CFA requires that the base-level classifiers produce not only a classification, but also a confidence (or posterior probability).
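The acquisition loop this record describes can be illustrated with a toy greedy sketch. This is not the published CFA-train/CFA-predict pair; `confidence_fn` is a hypothetical stand-in for the base classifier's posterior confidence, and the cheapest-first policy is an illustrative simplification:

```python
def greedy_acquire(known, purchasable, costs, confidence_fn, threshold):
    """Toy greedy acquisition: buy the cheapest missing feature until
    the classifier's confidence reaches the threshold.

    known: dict of features already in hand (the zero-cost subset).
    purchasable: dict of feature values that can be bought.
    costs: acquisition cost per feature (same for all instances).
    confidence_fn: maps a feature dict to a confidence in [0, 1].
    Returns (features used, total acquisition cost spent).
    """
    known = dict(known)
    purchasable = dict(purchasable)
    spent = 0.0
    while confidence_fn(known) < threshold and purchasable:
        f = min(purchasable, key=costs.get)   # cheapest feature first
        known[f] = purchasable.pop(f)
        spent += costs[f]
    return known, spent
```

The sketch captures the record's cost/confidence trade-off: acquisition stops as soon as the confidence threshold is met, so expensive features are bought only when cheaper ones do not suffice.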

  11. Hematology and serum clinical chemistry reference intervals for free-ranging Scandinavian gray wolves (Canis lupus).

    PubMed

    Thoresen, Stein I; Arnemo, Jon M; Liberg, Olof

    2009-06-01

    Scandinavian free-ranging wolves (Canis lupus) are endangered, such that laboratory data to assess their health status is increasingly important. Although wolves have been studied for decades, most biological information comes from captive animals. The objective of the present study was to establish reference intervals for 30 clinical chemical and 8 hematologic analytes in Scandinavian free-ranging wolves. All wolves were tracked and chemically immobilized from a helicopter before examination and blood sampling in the winter of 7 consecutive years (1998-2004). Seventy-nine blood samples were collected from 57 gray wolves, including 24 juveniles (24 samples), 17 adult females (25 samples), and 16 adult males (30 samples). Whole blood and serum samples were stored at refrigeration temperature for 1-3 days before hematologic analyses and for 1-5 days before serum biochemical analyses. Reference intervals were calculated as 95% confidence intervals except for juveniles where the minimum and maximum values were used. Significant differences were observed between adult and juvenile wolves for RBC parameters, alkaline phosphatase and amylase activities, and total protein, albumin, gamma-globulins, cholesterol, creatinine, calcium, chloride, magnesium, phosphate, and sodium concentrations. Compared with published reference values for captive wolves, reference intervals for free-ranging wolves reflected exercise activity associated with capture (higher creatine kinase activity, higher glucose concentration), and differences in nutritional status (higher urea concentration).
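As a nonparametric analogue of the reference intervals in this record, the central 95% of observed values can be read directly from sample quantiles. This is a generic sketch; the paper's parametric calculation and its min-max rule for juveniles are not reproduced here:

```python
import numpy as np

def reference_interval(values, coverage=0.95):
    """Nonparametric reference interval: the central `coverage`
    fraction of the observed values (2.5th-97.5th percentiles
    for 95% coverage)."""
    tail = (1.0 - coverage) / 2.0
    return np.quantile(values, [tail, 1.0 - tail])
```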

  12. Symbol lock detection implemented with nonoverlapping integration intervals

    NASA Technical Reports Server (NTRS)

    Shihabi, Mazen M. (Inventor); Hinedi, Sami M. (Inventor); Shah, Biren N. (Inventor)

    1995-01-01

    A symbol lock detector is introduced for an incoming coherent digital communication signal which utilizes a subcarrier modulated with binary symbol data, d(sub k), and known symbol interval T by integrating binary values of the signal over nonoverlapping first and second intervals selected to be T/2, delaying the first integral an interval T/2, and either summing or multiplying the second integral with the first one that preceded it to form a value X(sub k). That value is then averaged over a number M of symbol intervals to produce a static value Y. A symbol lock decision can then be made when the static value Y exceeds a threshold level delta.
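A discrete-time sketch of the multiplier variant described in this record: integrate each half-symbol interval of length T/2, multiply the second half-integral by the first one that preceded it, and average the products over M symbols. The NRZ test signal, sample counts, and noise level are illustrative assumptions:

```python
import numpy as np

def lock_statistic(samples, sps, M):
    """Average product of half-symbol integrals over M symbols.

    samples: real baseband samples; sps: samples per symbol (even).
    For each symbol, integrate the first and second half-intervals
    (each T/2 long) and multiply them; average the products.
    """
    half = sps // 2
    products = []
    for k in range(M):
        s = samples[k * sps:(k + 1) * sps]
        i1 = s[:half].sum()    # integral over first T/2
        i2 = s[half:].sum()    # integral over second T/2
        products.append(i1 * i2)
    return float(np.mean(products))

# When symbol timing is locked, both half-integrals share the symbol's
# sign, so Y is large; a half-symbol offset makes each product's sign
# random, and Y averages toward zero.
rng = np.random.default_rng(0)
bits = rng.choice([-1.0, 1.0], size=64)
locked = np.repeat(bits, 8) + 0.1 * rng.standard_normal(64 * 8)
offset = np.roll(locked, 4)            # half-symbol timing error
Y_locked = lock_statistic(locked, 8, 64)
Y_offset = lock_statistic(offset, 8, 64)
```

Comparing Y against a threshold between these two regimes yields the lock decision described in the record.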

  13. Self-reported confidence in recall as a predictor of validity and repeatability of physical activity questionnaire data.

    PubMed

    Cust, Anne E; Armstrong, Bruce K; Smith, Ben J; Chau, Josephine; van der Ploeg, Hidde P; Bauman, Adrian

    2009-05-01

    Self-reported confidence ratings have been used in other research disciplines as a tool to assess data quality, and may be useful in epidemiologic studies. We examined whether self-reported confidence in recall of physical activity was a predictor of the validity and retest reliability of physical activity measures from the European Prospective Investigation into Cancer and Nutrition (EPIC) past-year questionnaire and the International Physical Activity Questionnaire (IPAQ) last-7-day questionnaire. During 2005-2006 in Sydney, Australia, 97 men and 80 women completed both questionnaires at baseline and at 10 months and wore an accelerometer as an objective comparison measure for three 7-day periods during the same timeframe. Participants rated their confidence in recalling physical activity for each question using a 5-point scale and were dichotomized at the median confidence value. Participants in the high-confidence group had higher validity and repeatability coefficients than those in the low-confidence group for most comparisons. The differences were most apparent for validity of IPAQ moderate activity: Spearman correlation rho = 0.34 (95% confidence interval [CI] = 0.08 to 0.55) and 0.01 (-0.17 to 0.20) for high- and low-confidence groups, respectively; and repeatability of EPIC household activity: rho = 0.81 (0.72 to 0.87) and 0.63 (0.48 to 0.74), respectively, and IPAQ vigorous activity: rho = 0.58 (0.43 to 0.70) and 0.29 (0.07 to 0.49), respectively. Women were less likely than men to report high recall confidence of past-year activity (adjusted odds ratio = 0.38; 0.18 to 0.80). Confidence ratings could be useful as indicators of recall accuracy (ie, validity and repeatability) of physical activity measures, and possibly for detecting differential measurement error and identifying questionnaire items that require improvement.

  14. Obtaining appropriate interval estimates for age when multiple indicators are used: evaluation of an ad-hoc procedure.

    PubMed

    Fieuws, Steffen; Willems, Guy; Larsen-Tangmose, Sara; Lynnerup, Niels; Boldsen, Jesper; Thevissen, Patrick

    2016-03-01

    When an estimate of age is needed, typically multiple indicators are present, as found in skeletal or dental information. There exists a vast literature on approaches to estimate age from such multivariate data. Application of Bayes' rule has been proposed to overcome drawbacks of classical regression models but becomes less trivial as soon as the number of indicators increases. Each of the age indicators can lead to a different point estimate ("the most plausible value for age") and a prediction interval ("the range of possible values"). The major challenge in the combination of multiple indicators is not the calculation of a combined point estimate for age but the construction of an appropriate prediction interval. Ignoring the correlation between the age indicators results in intervals being too small. Boldsen et al. (2002) presented an ad-hoc procedure to construct an approximate confidence interval without the need to model the multivariate correlation structure between the indicators. The aim of the present paper is to draw attention to this pragmatic approach and to evaluate its performance in a practical setting. This is all the more needed since recent publications ignore the need for interval estimation. To illustrate and evaluate the method, the third molar scores of Köhler et al. (1995) are used to estimate the age in a dataset of 3200 male subjects in the juvenile age range.
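The record's quantitative point, that ignoring correlation between indicators yields intervals that are too narrow, can be seen from the variance of an average of k equally precise, equicorrelated estimates. The numbers below are illustrative, not taken from the paper:

```python
import math

def combined_halfwidth(sigma, k, rho, z=1.96):
    """Half-width of a z-interval for the mean of k equally precise
    age estimates (standard deviation sigma) with pairwise
    correlation rho: Var(mean) = (sigma^2 / k) * (1 + (k - 1) * rho).
    """
    var = (sigma ** 2 / k) * (1.0 + (k - 1) * rho)
    return z * math.sqrt(var)

naive = combined_halfwidth(3.0, 4, 0.0)   # pretends indicators are independent
honest = combined_halfwidth(3.0, 4, 0.6)  # accounts for their correlation
```

With rho = 0, the half-width shrinks by the familiar factor sqrt(k); with positive rho, the shrinkage is much weaker, so the independence assumption overstates precision.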

  15. Decision time and confidence predict choosers' identification performance in photographic showups

    PubMed Central

    Sagana, Anna; Sporer, Siegfried L.; Wixted, John T.

    2018-01-01

    In vast contrast to the multitude of lineup studies that report on the link between decision time, confidence, and identification accuracy, only a few studies looked at these associations for showups, with results varying widely across studies. We therefore set out to test the individual and combined value of decision time and post-decision confidence for diagnosing the accuracy of positive showup decisions using confidence-accuracy characteristic curves and Bayesian analyses. Three-hundred-eighty-four participants viewed a stimulus event and were subsequently presented with two showups which could be target-present or target-absent. As expected, we found a negative decision time-accuracy and a positive post-decision confidence-accuracy correlation for showup selections. Confidence-accuracy characteristic curves demonstrated the expected additive effect of combining both postdictors. Likewise, Bayesian analyses, taking into account all possible target-presence base rate values showed that fast and confident identification decisions were more diagnostic than slow or less confident decisions, with the combination of both being most diagnostic for postdicting accurate and inaccurate decisions. The postdictive value of decision time and post-decision confidence was higher when the prior probability that the suspect is the perpetrator was high compared to when the prior probability that the suspect is the perpetrator was low. The frequent use of showups in practice emphasizes the importance of these findings for court proceedings. Overall, these findings support the idea that courts should have most trust in showup identifications that were made fast and confidently, and least in showup identifications that were made slowly and with low confidence. PMID:29346394

  16. Decision time and confidence predict choosers' identification performance in photographic showups.

    PubMed

    Sauerland, Melanie; Sagana, Anna; Sporer, Siegfried L; Wixted, John T

    2018-01-01

    In vast contrast to the multitude of lineup studies that report on the link between decision time, confidence, and identification accuracy, only a few studies looked at these associations for showups, with results varying widely across studies. We therefore set out to test the individual and combined value of decision time and post-decision confidence for diagnosing the accuracy of positive showup decisions using confidence-accuracy characteristic curves and Bayesian analyses. Three-hundred-eighty-four participants viewed a stimulus event and were subsequently presented with two showups which could be target-present or target-absent. As expected, we found a negative decision time-accuracy and a positive post-decision confidence-accuracy correlation for showup selections. Confidence-accuracy characteristic curves demonstrated the expected additive effect of combining both postdictors. Likewise, Bayesian analyses, taking into account all possible target-presence base rate values showed that fast and confident identification decisions were more diagnostic than slow or less confident decisions, with the combination of both being most diagnostic for postdicting accurate and inaccurate decisions. The postdictive value of decision time and post-decision confidence was higher when the prior probability that the suspect is the perpetrator was high compared to when the prior probability that the suspect is the perpetrator was low. The frequent use of showups in practice emphasizes the importance of these findings for court proceedings. Overall, these findings support the idea that courts should have most trust in showup identifications that were made fast and confidently, and least in showup identifications that were made slowly and with low confidence.

  17. Prolonged corrected QT interval is predictive of future stroke events even in subjects without ECG-diagnosed left ventricular hypertrophy.

    PubMed

    Ishikawa, Joji; Ishikawa, Shizukiyo; Kario, Kazuomi

    2015-03-01

    We attempted to evaluate whether subjects who exhibit prolonged corrected QT (QTc) interval (≥440 ms in men and ≥460 ms in women) on ECG, with and without ECG-diagnosed left ventricular hypertrophy (ECG-LVH; Cornell product, ≥244 mV×ms), are at increased risk of stroke. Among the 10 643 subjects, there were a total of 375 stroke events during the follow-up period (128.7±28.1 months; 114 142 person-years). The subjects with prolonged QTc interval (hazard ratio, 2.13; 95% confidence interval, 1.22-3.73) had an increased risk of stroke even after adjustment for ECG-LVH (hazard ratio, 1.71; 95% confidence interval, 1.22-2.40). When we stratified the subjects into those with neither a prolonged QTc interval nor ECG-LVH, those with a prolonged QTc interval but without ECG-LVH, and those with ECG-LVH, multivariate-adjusted Cox proportional hazards analysis demonstrated that the subjects with prolonged QTc intervals but not ECG-LVH (1.2% of all subjects; incidence, 10.7%; hazard ratio, 2.70, 95% confidence interval, 1.48-4.94) and those with ECG-LVH (incidence, 7.9%; hazard ratio, 1.83; 95% confidence interval, 1.31-2.57) had an increased risk of stroke events, compared with those with neither a prolonged QTc interval nor ECG-LVH. In conclusion, prolonged QTc interval was associated with stroke risk even among patients without ECG-LVH in the general population. © 2014 American Heart Association, Inc.

  18. Hypercorrection of high confidence errors in lexical representations.

    PubMed

    Iwaki, Nobuyoshi; Matsushima, Hiroko; Kodaira, Kazumasa

    2013-08-01

    Memory errors associated with higher confidence are more likely to be corrected than errors made with lower confidence, a phenomenon called the hypercorrection effect. This study investigated whether the hypercorrection effect occurs with phonological information of lexical representations. In Experiment 1, 15 participants performed a Japanese Kanji word-reading task, in which the words had several possible pronunciations. In the initial task, participants were required to read aloud each word and indicate their confidence in their response; this was followed by receipt of visual feedback of the correct response. A hypercorrection effect was observed, indicating generality of this effect beyond previous observations in memories based upon semantic or episodic representations. This effect was replicated in Experiment 2, in which 40 participants performed the same task as in Experiment 1. When the participant's ratings of the practical value of the words were controlled, a partial correlation between confidence and likelihood of later correcting the initial mistaken response was reduced. This suggests that the hypercorrection effect may be partially caused by an individual's recognition of the practical value of reading the words correctly.

  19. A Poisson process approximation for generalized K-5 confidence regions

    NASA Technical Reports Server (NTRS)

    Arsham, H.; Miller, D. R.

    1982-01-01

    One-sided confidence regions for continuous cumulative distribution functions are constructed using empirical cumulative distribution functions and the generalized Kolmogorov-Smirnov distance. The band width of such regions becomes narrower in the right or left tail of the distribution. To avoid tedious computation of confidence levels and critical values, an approximation based on the Poisson process is introduced. This approximation provides a conservative confidence region; moreover, the approximation error decreases monotonically to 0 as sample size increases. Critical values necessary for implementation are given. Applications are made to the areas of risk analysis, investment modeling, reliability assessment, and analysis of fault tolerant systems.

  20. Predictor sort sampling and one-sided confidence bounds on quantiles

    Treesearch

    Steve Verrill; Victoria L. Herian; David W. Green

    2002-01-01

    Predictor sort experiments attempt to make use of the correlation between a predictor that can be measured prior to the start of an experiment and the response variable that we are investigating. Properly designed and analyzed, they can reduce necessary sample sizes, increase statistical power, and reduce the lengths of confidence intervals. However, if the non-random...

  1. Confidence estimation for quantitative photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Gröhl, Janek; Kirchner, Thomas; Maier-Hein, Lena

    2018-02-01

    Quantification of photoacoustic (PA) images is one of the major challenges currently being addressed in PA research. Tissue properties can be quantified by correcting the recorded PA signal with an estimation of the corresponding fluence. Fluence estimation itself, however, is an ill-posed inverse problem which usually needs simplifying assumptions to be solved with state-of-the-art methods. These simplifications, as well as noise and artifacts in PA images reduce the accuracy of quantitative PA imaging (PAI). This reduction in accuracy is often localized to image regions where the assumptions do not hold true. This impedes the reconstruction of functional parameters when averaging over entire regions of interest (ROI). Averaging over a subset of voxels with a high accuracy would lead to an improved estimation of such parameters. To achieve this, we propose a novel approach to the local estimation of confidence in quantitative reconstructions of PA images. It makes use of conditional probability densities to estimate confidence intervals alongside the actual quantification. It encapsulates an estimation of the errors introduced by fluence estimation as well as signal noise. We validate the approach using Monte Carlo generated data in combination with a recently introduced machine learning-based approach to quantitative PAI. Our experiments show at least a two-fold improvement in quantification accuracy when evaluating on voxels with high confidence instead of thresholding signal intensity.

  2. Five-band microwave radiometer system for noninvasive brain temperature measurement in newborn babies: Phantom experiment and confidence interval

    NASA Astrophysics Data System (ADS)

    Sugiura, T.; Hirata, H.; Hand, J. W.; van Leeuwen, J. M. J.; Mizushina, S.

    2011-10-01

    Clinical trials of hypothermic brain treatment for newborn babies are currently hindered by the difficulty in measuring deep brain temperatures. One possible method for noninvasive and continuous temperature monitoring that is completely passive and inherently safe is passive microwave radiometry (MWR). We have developed a five-band microwave radiometer system with a single dual-polarized, rectangular waveguide antenna operating within the 1-4 GHz range and a method for retrieving the temperature profile from five radiometric brightness temperatures. This paper addresses (1) the temperature calibration for five microwave receivers, (2) the measurement experiment using a phantom model that mimics the temperature profile in a newborn baby, and (3) the feasibility of noninvasive monitoring of deep brain temperatures. Temperature resolutions were 0.103, 0.129, 0.138, 0.105 and 0.111 K for the 1.2, 1.65, 2.3, 3.0 and 3.6 GHz receivers, respectively. The precision of temperature estimation (2σ confidence interval) was about 0.7°C at a 5-cm depth from the phantom surface. Accuracy, the difference between the temperature estimated by this system and that measured by a thermocouple at a depth of 5 cm, was about 2°C. The current result is not yet satisfactory for clinical application, which requires both precision and accuracy better than 1°C at a depth of 5 cm. Since a couple of possible causes of this inaccuracy have been identified, we believe the system can take a step closer to the clinical application of MWR for hypothermic rescue treatment.

  3. Activities-specific balance confidence scale for predicting future falls in Indian older adults.

    PubMed

    Moiz, Jamal Ali; Bansal, Vishal; Noohu, Majumi M; Gaur, Shailendra Nath; Hussain, Mohammad Ejaz; Anwer, Shahnawaz; Alghadir, Ahmad

    2017-01-01

    Activities-specific balance confidence (ABC) scale is a subjective measure of confidence in performing various ambulatory activities without falling or experiencing a sense of unsteadiness. This study aimed to examine the ability of the Hindi version of the ABC scale (ABC-H scale) to discriminate between fallers and non-fallers and to examine its predictive validity for prospective falls. This was a prospective cohort study. A total of 125 community-dwelling older adults (88 were men) completed the ABC-H scale. The occurrence of falls over the follow-up period of 12 months was recorded. Discriminative validity was analyzed by comparing the total ABC-H scale scores between the faller and non-faller groups. A receiver operating characteristic curve analysis and a logistic regression analysis were used to examine the predictive accuracy of the ABC-H scale. The mean ABC-H scale score of the faller group was significantly lower than that of the non-faller group (52.6±8.1 vs 73.1±12.2; P <0.001). The optimal cutoff value for distinguishing fallers from non-fallers was ≤58.13. The sensitivity, specificity, area under the curve, and positive and negative likelihood ratios of the cutoff score were 86.3%, 87.3%, 0.91 (P <0.001), 6.84, and 0.16, respectively. The percentage test accuracy and false-positive and false-negative rates were 86.87%, 12.2%, and 13.6%, respectively. A dichotomized total ABC-H scale score of ≤58.13% (adjusted odds ratio = 0.032, 95% confidence interval = 0.004-0.25, P = 0.001) was significantly associated with future falls. The ABC-H scores were significantly and independently associated with future falls in community-dwelling Indian older adults. The ability of the ABC-H scale to predict future falls was adequate, with high sensitivity and specificity values.

  4. Fuzzy time series forecasting model with natural partitioning length approach for predicting the unemployment rate under different degree of confidence

    NASA Astrophysics Data System (ADS)

    Ramli, Nazirah; Mutalib, Siti Musleha Ab; Mohamad, Daud

    2017-08-01

    Fuzzy time series forecasting models have been proposed since 1993 to cater for data in linguistic values. Many improvements and modifications have been made to the model, such as enhancements to the length of intervals and the types of fuzzy logical relations. However, most of the improved models represent the linguistic terms in the form of discrete fuzzy sets. In this paper, a fuzzy time series model with data in the form of trapezoidal fuzzy numbers and a natural partitioning length approach is introduced for predicting the unemployment rate. Two types of fuzzy relations are used in this study: first-order and second-order fuzzy relations. The proposed model can produce forecasted values under different degrees of confidence.

  5. After p Values: The New Statistics for Undergraduate Neuroscience Education.

    PubMed

    Calin-Jageman, Robert J

    2017-01-01

    Statistical inference is a methodological cornerstone for neuroscience education. For many years this has meant inculcating neuroscience majors into null hypothesis significance testing with p values. There is increasing concern, however, about the pervasive misuse of p values. It is time to start planning statistics curricula for neuroscience majors that replaces or de-emphasizes p values. One promising alternative approach is what Cumming has dubbed the "New Statistics", an approach that emphasizes effect sizes, confidence intervals, meta-analysis, and open science. I give an example of the New Statistics in action and describe some of the key benefits of adopting this approach in neuroscience education.
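A minimal illustration of the "New Statistics" emphasis, reporting an effect size with a confidence interval rather than only a p value: Cohen's d for two independent samples with the standard large-sample approximation to its standard error. The 1.96 multiplier assumes normality, and the formula is the common textbook approximation, not a method from this record:

```python
import math

def cohens_d_with_ci(x, y, z=1.96):
    """Cohen's d for two independent samples with an approximate
    95% confidence interval (large-sample normal approximation to
    the sampling variance of d)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    sp = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))  # pooled SD
    d = (mx - my) / sp
    se = math.sqrt((nx + ny) / (nx * ny) + d * d / (2 * (nx + ny)))
    return d, (d - z * se, d + z * se)
```

Reporting `d` with its interval conveys both the magnitude of the effect and the precision of the estimate, which a bare p value does not.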

  6. Modified Dempster-Shafer approach using an expected utility interval decision rule

    NASA Astrophysics Data System (ADS)

    Cheaito, Ali; Lecours, Michael; Bosse, Eloi

    1999-03-01

    The combination operation of the conventional Dempster-Shafer algorithm has a tendency to increase exponentially the number of propositions involved in bodies of evidence by creating new ones. The aim of this paper is to explore a 'modified Dempster-Shafer' approach to fusing identity declarations emanating from different sources, which include a number of radars, IFF and ESM systems, in order to limit the explosion of the number of propositions. We use a non-ad hoc decision rule based on the expected utility interval to select the most probable object in a comprehensive Platform Data Base containing all the possible identity values that a potential target may take. We study the effect of the redistribution of the confidence levels of the eliminated propositions, which otherwise overload the real-time data fusion system; these eliminated confidence levels can in particular be assigned to ignorance, or uniformly added to the remaining propositions and to ignorance. A scenario has been selected to demonstrate the performance of our modified Dempster-Shafer method of evidential reasoning.

  7. Variation in polyp size estimation among endoscopists and impact on surveillance intervals.

    PubMed

    Chaptini, Louis; Chaaya, Adib; Depalma, Fedele; Hunter, Krystal; Peikin, Steven; Laine, Loren

    2014-10-01

    Accurate estimation of polyp size is important because it is used to determine the surveillance interval after polypectomy. To evaluate the variation and accuracy in polyp size estimation among endoscopists and the impact on surveillance intervals after polypectomy. Web-based survey. A total of 873 members of the American Society for Gastrointestinal Endoscopy. Participants watched video recordings of 4 polypectomies and were asked to estimate the polyp sizes. Proportion of participants with polyp size estimates within 20% of the correct measurement and the frequency of incorrect surveillance intervals based on inaccurate size estimates. Polyp size estimates were within 20% of the correct value for 1362 (48%) of 2812 estimates (range 39%-59% for the 4 polyps). Polyp size was overestimated by >20% in 889 estimates (32%, range 15%-49%) and underestimated by >20% in 561 (20%, range 4%-46%) estimates. Incorrect surveillance intervals because of overestimation or underestimation occurred in 272 (10%) of the 2812 estimates (range 5%-14%). Participants in a private practice setting overestimated the size of 3 or of all 4 polyps by >20% more often than participants in an academic setting (difference = 7%; 95% confidence interval, 1%-11%). Survey design with the use of video clips. Substantial overestimation and underestimation of polyp size occurs with visual estimation leading to incorrect surveillance intervals in 10% of cases. Our findings support routine use of measurement tools to improve polyp size estimates. Copyright © 2014 American Society for Gastrointestinal Endoscopy. Published by Elsevier Inc. All rights reserved.

  8. The influence of interpregnancy interval on infant mortality.

    PubMed

    McKinney, David; House, Melissa; Chen, Aimin; Muglia, Louis; DeFranco, Emily

    2017-03-01

-<6 months (adjusted relative risk, 1.32; 95% confidence interval, 1.17-1.49) followed by interpregnancy intervals of 6-<12 months (adjusted relative risk, 1.16; 95% confidence interval, 1.04-1.30). Analysis stratified by maternal race revealed similar findings. Attributable risk calculation showed that 24.2% of infant mortalities following intervals of 0-<6 months and 14.1% with intervals of 6-<12 months are attributable to the short interpregnancy interval. By avoiding short interpregnancy intervals of ≤12 months we estimate that in the state of Ohio 31 infant mortalities (20 white and 8 black) per year could have been prevented and the infant mortality rate could have been reduced from 7.2 to 7.0 during this time frame. An interpregnancy interval of 12-60 months (1-5 years) between birth and conception of the next pregnancy is associated with the lowest risk of infant mortality. Public health initiatives and provider counseling to optimize birth spacing have the potential to significantly reduce infant mortality for both white and black mothers. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Vaccination Confidence and Parental Refusal/Delay of Early Childhood Vaccines.

    PubMed

    Gilkey, Melissa B; McRee, Annie-Laurie; Magnus, Brooke E; Reiter, Paul L; Dempsey, Amanda F; Brewer, Noel T

    2016-01-01

    To support efforts to address parental hesitancy towards early childhood vaccination, we sought to validate the Vaccination Confidence Scale using data from a large, population-based sample of U.S. parents. We used weighted data from 9,354 parents who completed the 2011 National Immunization Survey. Parents reported on the immunization history of a 19- to 35-month-old child in their households. Healthcare providers then verified children's vaccination status for vaccines including measles, mumps, and rubella (MMR), varicella, and seasonal flu. We used separate multivariable logistic regression models to assess associations between parents' mean scores on the 8-item Vaccination Confidence Scale and vaccine refusal, vaccine delay, and vaccination status. A substantial minority of parents reported a history of vaccine refusal (15%) or delay (27%). Vaccination confidence was negatively associated with refusal of any vaccine (odds ratio [OR] = 0.58, 95% confidence interval [CI], 0.54-0.63) as well as refusal of MMR, varicella, and flu vaccines specifically. Negative associations between vaccination confidence and measures of vaccine delay were more moderate, including delay of any vaccine (OR = 0.81, 95% CI, 0.76-0.86). Vaccination confidence was positively associated with having received vaccines, including MMR (OR = 1.53, 95% CI, 1.40-1.68), varicella (OR = 1.54, 95% CI, 1.42-1.66), and flu vaccines (OR = 1.32, 95% CI, 1.23-1.42). Vaccination confidence was consistently associated with early childhood vaccination behavior across multiple vaccine types. Our findings support expanding the application of the Vaccination Confidence Scale to measure vaccination beliefs among parents of young children.

  10. Description and Evaluation of an Educational Intervention on Health Care Costs and Value.

    PubMed

    Jonas, Jennifer A; Ronan, Jeanine C; Petrie, Ian; Fieldston, Evan S

    2016-02-01

    There is growing consensus that to ensure that health care dollars are spent efficiently, physicians need more training in how to provide high-value, cost-conscious care. Thus, in fiscal year 2014, The Children's Hospital of Philadelphia piloted a 9-part curriculum on health care costs and value for faculty in the Division of General Pediatrics. This study uses baseline and postintervention surveys to gauge knowledge, perceptions, and views on these issues and to assess the efficacy of the pilot curriculum. Faculty completed surveys about their knowledge and perceptions about health care costs and value and their views on the role physicians should play in containing costs and promoting value. Baseline and postintervention responses were compared and analyzed on the basis of how many of the sessions respondents attended. Sixty-two faculty members completed the baseline survey (71% response rate), and 45 faculty members completed the postintervention survey (63% response rate). Reported knowledge of health care costs and value increased significantly in the postintervention survey (P=.04 and P<.001). Odds of being knowledgeable about costs and value were 2.42 (confidence interval: 1.05-5.58) and 6.22 times greater (confidence interval: 2.29-16.90), respectively, postintervention. Reported knowledge of health care costs and value increased with number of sessions attended (P=.01 and P<.001). The pilot curriculum appeared to successfully introduce physicians to concepts around health care costs and value and initiated important discussions about the role physicians can play in containing costs and promoting value. Additional education, increased cost transparency, and more decision support tools are needed to help physicians translate knowledge into practice. Copyright © 2016 by the American Academy of Pediatrics.

  11. Accuracy of Cameriere's cut-off value for third molar in assessing 18 years of age.

    PubMed

    De Luca, S; Biagi, R; Begnoni, G; Farronato, G; Cingolani, M; Merelli, V; Ferrante, L; Cameriere, R

    2014-02-01

Due to increasingly numerous international migrations, estimating the age of unaccompanied minors is becoming of enormous significance for forensic professionals who are required to deliver expert opinions. The third molar tooth is one of the few anatomical sites available for estimating the age of individuals in late adolescence. This study verifies the accuracy of Cameriere's cut-off value of the third molar index (I3M) in assessing 18 years of age. For this purpose, a sample of orthopantomographs (OPTs) of 397 living subjects aged between 13 and 22 years (192 female and 205 male) was analyzed. Age distribution gradually decreases as I3M increases in both males and females. The results show that the sensitivity of the test was 86.6%, with a 95% confidence interval of (80.8%, 91.1%), and its specificity was 95.7%, with a 95% confidence interval of (92.1%, 98%). The proportion of correctly classified individuals was 91.4%. The estimated post-test probability, p, was 95.6%, with a 95% confidence interval of (92%, 98%). Hence, the probability that a subject positive on the test (i.e., I3M<0.08) was 18 years of age or older was 95.6%. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
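The reported post-test probability follows from Bayes' theorem applied to the test's sensitivity and specificity. A sketch using the reported operating characteristics, where the 0.5 pre-test probability is an illustrative assumption rather than the study's actual adult prevalence:

```python
def post_test_probability(sensitivity, specificity, pre_test):
    """P(subject is >= 18 years | positive test), where a positive
    test here means third molar index I3M < 0.08."""
    true_pos = sensitivity * pre_test
    false_pos = (1.0 - specificity) * (1.0 - pre_test)
    return true_pos / (true_pos + false_pos)

# Sensitivity and specificity as reported; pre-test probability assumed.
p = post_test_probability(0.866, 0.957, 0.5)  # ~0.95
```

Because the post-test probability depends on the pre-test prevalence, the 95.6% figure in the abstract reflects the age composition of the study sample, not a universal property of the cut-off.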

  12. Confidence bands for measured economically optimal nitrogen rates

    USDA-ARS?s Scientific Manuscript database

    While numerous researchers have computed economically optimal N rate (EONR) values from measured yield – N rate data, nearly all have neglected to compute or estimate the statistical reliability of these EONR values. In this study, a simple method for computing EONR and its confidence bands is descr...

  13. Nutritive Value Response of Native Warm-Season Forage Grasses to Harvest Intervals and Durations in Mixed Stands

    PubMed Central

    Temu, Vitalis W.; Rude, Brian J.; Baldwin, Brian S.

    2014-01-01

Interest in management of native warm-season grasses for multiple uses is growing in southeastern USA. Forage quality response of early-succession mixed stands of big bluestem (BB, Andropogon gerardii), indiangrass (IG, Sorghastrum nutans), and little bluestem (LB, Schizachyrium scoparium) to harvest intervals (30-, 40-, 60-, 90-, or 120-d) and durations (one or two years) was assessed in crop-field buffers. Over three years, phased harvestings were initiated in May, on sets of randomized plots, ≥90 cm apart, in five replications (blocks) to produce one-, two-, and three-year-old stands, by the third year. Whole-plot regrowths were machine-harvested after collecting species (IG and LB) sample tillers for leafiness estimates. Species-specific leaf area (SLA) and leaf-to-stem ratio (LSR) were greater for early-season harvests and shorter intervals. In a similar pattern, whole-plot crude protein concentrations were greatest for the 30-d (74 g·kg−1 DM) and the least (40 g·kg−1 DM) for the 120-d interval. Corresponding neutral detergent fiber (NDF) values were the lowest (620 g·kg−1 DM) and highest (710 g·kg−1 DM), respectively. In vitro dry matter and NDF digestibility were greater for early-season harvests at shorter intervals (63 and 720 g·kg−1 DM). With strategic harvesting, similar stands may produce quality hay for beef cattle weight gain. PMID:27135504

  14. Prognostic value of fasting versus nonfasting low-density lipoprotein cholesterol levels on long-term mortality: insight from the National Health and Nutrition Examination Survey III (NHANES-III).

    PubMed

    Doran, Bethany; Guo, Yu; Xu, Jinfeng; Weintraub, Howard; Mora, Samia; Maron, David J; Bangalore, Sripal

    2014-08-12

    National and international guidelines recommend fasting lipid panel measurement for risk stratification of patients for prevention of cardiovascular events. However, the prognostic value of fasting versus nonfasting low-density lipoprotein cholesterol (LDL-C) is uncertain. Patients enrolled in the National Health and Nutrition Examination Survey III (NHANES-III), a nationally representative cross-sectional survey performed from 1988 to 1994, were stratified on the basis of fasting status (≥8 or <8 hours) and followed for a mean of 14.0 (±0.22) years. Propensity score matching was used to assemble fasting and nonfasting cohorts with similar baseline characteristics. The risk of outcomes as a function of LDL-C and fasting status was assessed with the use of receiver operating characteristic curves and bootstrapping methods. The interaction between fasting status and LDL-C was assessed with Cox proportional hazards modeling. Primary outcome was all-cause mortality. Secondary outcome was cardiovascular mortality. One-to-one matching based on propensity score yielded 4299 pairs of fasting and nonfasting individuals. For the primary outcome, fasting LDL-C yielded prognostic value similar to that for nonfasting LDL-C (C statistic=0.59 [95% confidence interval, 0.57-0.61] versus 0.58 [95% confidence interval, 0.56-0.60]; P=0.73), and LDL-C by fasting status interaction term in the Cox proportional hazards model was not significant (Pinteraction=0.11). Similar results were seen for the secondary outcome (fasting versus nonfasting C statistic=0.62 [95% confidence interval, 0.60-0.66] versus 0.62 [95% confidence interval, 0.60-0.66]; P=0.96; Pinteraction=0.34). Nonfasting LDL-C has prognostic value similar to that of fasting LDL-C. National and international agencies should consider reevaluating the recommendation that patients fast before obtaining a lipid panel. © 2014 American Heart Association, Inc.

  15. Vaccination Confidence and Parental Refusal/Delay of Early Childhood Vaccines

    PubMed Central

    Gilkey, Melissa B.; McRee, Annie-Laurie; Magnus, Brooke E.; Reiter, Paul L.; Dempsey, Amanda F.; Brewer, Noel T.

    2016-01-01

    Objective To support efforts to address parental hesitancy towards early childhood vaccination, we sought to validate the Vaccination Confidence Scale using data from a large, population-based sample of U.S. parents. Methods We used weighted data from 9,354 parents who completed the 2011 National Immunization Survey. Parents reported on the immunization history of a 19- to 35-month-old child in their households. Healthcare providers then verified children’s vaccination status for vaccines including measles, mumps, and rubella (MMR), varicella, and seasonal flu. We used separate multivariable logistic regression models to assess associations between parents’ mean scores on the 8-item Vaccination Confidence Scale and vaccine refusal, vaccine delay, and vaccination status. Results A substantial minority of parents reported a history of vaccine refusal (15%) or delay (27%). Vaccination confidence was negatively associated with refusal of any vaccine (odds ratio [OR] = 0.58, 95% confidence interval [CI], 0.54–0.63) as well as refusal of MMR, varicella, and flu vaccines specifically. Negative associations between vaccination confidence and measures of vaccine delay were more moderate, including delay of any vaccine (OR = 0.81, 95% CI, 0.76–0.86). Vaccination confidence was positively associated with having received vaccines, including MMR (OR = 1.53, 95% CI, 1.40–1.68), varicella (OR = 1.54, 95% CI, 1.42–1.66), and flu vaccines (OR = 1.32, 95% CI, 1.23–1.42). Conclusions Vaccination confidence was consistently associated with early childhood vaccination behavior across multiple vaccine types. Our findings support expanding the application of the Vaccination Confidence Scale to measure vaccination beliefs among parents of young children. PMID:27391098

  16. The Confidence-Accuracy Relationship for Eyewitness Identification Decisions: Effects of Exposure Duration, Retention Interval, and Divided Attention

    ERIC Educational Resources Information Center

    Palmer, Matthew A.; Brewer, Neil; Weber, Nathan; Nagesh, Ambika

    2013-01-01

    Prior research points to a meaningful confidence-accuracy (CA) relationship for positive identification decisions. However, there are theoretical grounds for expecting that different aspects of the CA relationship (calibration, resolution, and over/underconfidence) might be undermined in some circumstances. This research investigated whether the…

  17. Effects of Short-Interval and Long-Interval Swimming Protocols on Performance, Aerobic Adaptations, and Technical Parameters: A Training Study.

    PubMed

    Dalamitros, Athanasios A; Zafeiridis, Andreas S; Toubekis, Argyris G; Tsalis, George A; Pelarigo, Jailton G; Manou, Vasiliki; Kellis, Spiridon

    2016-10-01

Dalamitros, AA, Zafeiridis, AS, Toubekis, AG, Tsalis, GA, Pelarigo, JG, Manou, V, and Kellis, S. Effects of short-interval and long-interval swimming protocols on performance, aerobic adaptations, and technical parameters: A training study. J Strength Cond Res 30(10): 2871-2879, 2016-This study compared two interval swimming training programs of different work interval durations, matched for total distance and exercise intensity, on swimming performance, aerobic adaptations, and technical parameters. Twenty-four former swimmers were equally divided into a short-interval training group (INT50, 12-16 × 50 m with 15 seconds rest), a long-interval training group (INT100, 6-8 × 100 m with 30 seconds rest), and a control group (CON). The 2 experimental groups followed the specified swimming training program for 8 weeks. Before and after training, swimming performance, technical parameters, and indices of aerobic adaptations were assessed. INT50 and INT100 improved swimming performance in the 100- and 400-m tests and the maximal aerobic speed (p ≤ 0.05); performance in the 50-m swim did not change. Posttraining VO2max values were higher compared with pretraining values in both training groups (p ≤ 0.05), whereas peak aerobic power output increased only in INT100 (p ≤ 0.05). The 1-minute heart rate and blood lactate recovery values decreased after training in both groups (p < 0.01). Stroke length increased in the 100- and 400-m swimming tests after training in both groups (p ≤ 0.05); no changes were observed in stroke rate after training. Comparisons between groups on posttraining mean values, after adjusting for pretraining values, revealed no significant differences between INT50 and INT100 for all variables; however, all measures were improved vs. the respective values in the CON (p < 0.001-0.05). In conclusion, when matched for distance and exercise intensity, the short-interval (50 m) and long-interval (100 m) protocols confer analogous

  18. Small sample mediation testing: misplaced confidence in bootstrapped confidence intervals.

    PubMed

    Koopman, Joel; Howe, Michael; Hollenbeck, John R; Sin, Hock-Peng

    2015-01-01

    Bootstrapping is an analytical tool commonly used in psychology to test the statistical significance of the indirect effect in mediation models. Bootstrapping proponents have particularly advocated for its use for samples of 20-80 cases. This advocacy has been heeded, especially in the Journal of Applied Psychology, as researchers are increasingly utilizing bootstrapping to test mediation with samples in this range. We discuss reasons to be concerned with this escalation, and in a simulation study focused specifically on this range of sample sizes, we demonstrate not only that bootstrapping has insufficient statistical power to provide a rigorous hypothesis test in most conditions but also that bootstrapping has a tendency to exhibit an inflated Type I error rate. We then extend our simulations to investigate an alternative empirical resampling method as well as a Bayesian approach and demonstrate that they exhibit comparable statistical power to bootstrapping in small samples without the associated inflated Type I error. Implications for researchers testing mediation hypotheses in small samples are presented. For researchers wishing to use these methods in their own research, we have provided R syntax in the online supplemental materials. (c) 2015 APA, all rights reserved.
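The bootstrapping procedure under discussion resamples cases with replacement, recomputes the indirect effect a*b on each resample, and takes percentiles of the resulting distribution as the confidence interval. A self-contained percentile-bootstrap sketch; the data-generating model and n = 40 are illustrative choices in the small-sample range the paper examines, and real analyses would fit the mediation regressions with a statistics package:

```python
import random

def ols_slope(x, y):
    """Slope of y ~ x (simple regression)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
        sum((a - mx) ** 2 for a in x)

def partial_slope(x, m, y):
    """Coefficient of m in y ~ m + x, via the normal equations."""
    n = len(x)
    cx = [v - sum(x) / n for v in x]
    cm = [v - sum(m) / n for v in m]
    cy = [v - sum(y) / n for v in y]
    smm = sum(v * v for v in cm)
    sxx = sum(v * v for v in cx)
    smx = sum(a * b for a, b in zip(cm, cx))
    smy = sum(a * b for a, b in zip(cm, cy))
    sxy = sum(a * b for a, b in zip(cx, cy))
    return (smy * sxx - sxy * smx) / (smm * sxx - smx ** 2)

def bootstrap_indirect_ci(x, m, y, reps=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for the indirect effect a * b."""
    rng = random.Random(seed)
    n = len(x)
    effects = []
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]
        xs = [x[i] for i in idx]
        ms = [m[i] for i in idx]
        ys = [y[i] for i in idx]
        effects.append(ols_slope(xs, ms) * partial_slope(xs, ms, ys))
    effects.sort()
    return effects[int(alpha / 2 * reps)], effects[int((1 - alpha / 2) * reps) - 1]

# Simulated mediation data at a sample size the paper flags as risky (n = 40)
rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(40)]
m = [0.5 * xi + rng.gauss(0, 1) for xi in x]   # true a = 0.5
y = [0.5 * mi + rng.gauss(0, 1) for mi in m]   # true b = 0.5
lo, hi = bootstrap_indirect_ci(x, m, y)
```

The paper's simulations concern exactly this estimator: at these sample sizes the percentile interval can be both underpowered and prone to inflated Type I error, which motivates the alternatives the authors evaluate.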

  19. The health impacts and economic value of wildland fire episodes in the U.S.: 2008-2012.

    PubMed

    Fann, Neal; Alman, Breanna; Broome, Richard A; Morgan, Geoffrey G; Johnston, Fay H; Pouliot, George; Rappold, Ana G

    2018-01-01

Wildland fires degrade air quality and adversely affect human health. A growing body of epidemiology literature reports increased rates of emergency department visits, hospital admissions, and premature deaths from wildfire smoke exposure. Our research aimed to characterize excess mortality and morbidity events, and the economic value of these impacts, from wildland fire smoke exposure in the U.S. over a multi-year period; to date no other burden assessment has done this. We first completed a systematic review of the epidemiologic literature and then performed photochemical air quality modeling for the years 2008 to 2012 in the continental U.S. Finally, we estimated the morbidity, mortality, and economic burden of wildland fires. Our models suggest that areas including northern California, Oregon and Idaho in the West, and Florida, Louisiana and Georgia in the East were most affected by wildland fire events in the form of additional premature deaths and respiratory hospital admissions. We estimated the economic value of these cases due to short-term exposures as being between $11B and $20B (2010$) per year, with a net present value of $63B (95% confidence interval, $6B-$170B); we estimate the value of long-term exposures as being between $76B and $130B (2010$) per year, with a net present value of $450B (95% confidence interval, $42B-$1,200B). The public health burden of wildland fires, in terms of the number and economic value of deaths and illnesses, is considerable. Published by Elsevier B.V.

  20. Confidence level estimation in multi-target classification problems

    NASA Astrophysics Data System (ADS)

    Chang, Shi; Isaacs, Jason; Fu, Bo; Shin, Jaejeong; Zhu, Pingping; Ferrari, Silvia

    2018-04-01

    This paper presents an approach for estimating the confidence level in automatic multi-target classification performed by an imaging sensor on an unmanned vehicle. An automatic target recognition algorithm comprised of a deep convolutional neural network in series with a support vector machine classifier detects and classifies targets based on the image matrix. The joint posterior probability mass function of target class, features, and classification estimates is learned from labeled data, and recursively updated as additional images become available. Based on the learned joint probability mass function, the approach presented in this paper predicts the expected confidence level of future target classifications, prior to obtaining new images. The proposed approach is tested with a set of simulated sonar image data. The numerical results show that the estimated confidence level provides a close approximation to the actual confidence level value determined a posteriori, i.e. after the new image is obtained by the on-board sensor. Therefore, the expected confidence level function presented in this paper can be used to adaptively plan the path of the unmanned vehicle so as to optimize the expected confidence levels and ensure that all targets are classified with satisfactory confidence after the path is executed.

  1. Magnetic Resonance Fingerprinting with short relaxation intervals.

    PubMed

    Amthor, Thomas; Doneva, Mariya; Koken, Peter; Sommer, Karsten; Meineke, Jakob; Börnert, Peter

    2017-09-01

The aim of this study was to investigate a technique for improving the performance of Magnetic Resonance Fingerprinting (MRF) in repetitive sampling schemes, in particular for 3D MRF acquisition, by shortening relaxation intervals between MRF pulse train repetitions. A calculation method for MRF dictionaries adapted to short relaxation intervals and non-relaxed initial spin states is presented, based on the concept of stationary fingerprints. The method is applicable to many different k-space sampling schemes in 2D and 3D. For accuracy analysis, T1 and T2 values of a phantom are determined by single-slice Cartesian MRF for different relaxation intervals and are compared with quantitative reference measurements. The relevance of slice profile effects is also investigated in this case. To further illustrate the capabilities of the method, an application to in-vivo spiral 3D MRF measurements is demonstrated. The proposed computation method enables accurate parameter estimation even for the shortest relaxation intervals, as investigated for different sampling patterns in 2D and 3D. In 2D Cartesian measurements, we achieved a scan acceleration of more than a factor of two, while maintaining acceptable accuracy: The largest T1 values of a sample set deviated from their reference values by 0.3% (longest relaxation interval) and 2.4% (shortest relaxation interval). The largest T2 values showed systematic deviations of up to 10% for all relaxation intervals, which is discussed. The influence of slice profile effects for multislice acquisition is shown to become increasingly relevant for short relaxation intervals. In 3D spiral measurements, a scan time reduction of 36% was achieved, maintaining the quality of in-vivo T1 and T2 maps. Reducing the relaxation interval between MRF sequence repetitions using stationary fingerprint dictionaries is a feasible method to improve the scan efficiency of MRF sequences. The method enables fast implementations of 3D spatially

  2. Reference intervals and discrimination values of the Lanthony desaturated D-15 panel test in young to middle-aged Japanese army officials: the Okubo Color Study Report 1.

    PubMed

    Shoji, T; Sakurai, Y; Chihara, E; Nishikawa, S; Omae, K

    2009-06-01

To establish reference values and adequate discrimination values of colour vision function, using quantitative scoring systems, for the Lanthony desaturated D-15 panel (D-15DS). A total of 1042 Japanese male officials were interviewed and underwent testing using Ishihara pseudoisochromatic plates, standard pseudoisochromatic plates part 2, and the D-15DS. The Farnsworth-Munsell (F-M) 100-hue test and the criteria of Verriest et al. were used as definitive tests. Outcomes of the D-15DS were calculated using Bowman's Colour Confusion Index (CCI). The study design included two criteria. In criterion A, subjects with current or past ocular disease and a best-corrected visual acuity less than 0.7 on a decimal visual acuity chart were excluded. In criterion B, among subjects who satisfied criterion A, those who had a congenital colour sense anomaly were excluded. Overall, the 90th percentile (95th percentile) CCI values for criteria A and B in the worse eye were 1.70 (1.95) and 1.59 (1.73), respectively. In subjects satisfying criterion B, the area under the receiver operating characteristic curve was 0.951 (95% confidence interval, 0.931-0.971). The CCI discrimination values of 1.52 or 1.63 showed 90.3% sensitivity and 90% specificity, or 71.5% sensitivity and 95% specificity, respectively, for discriminating acquired colour vision impairment (ACVI). We provided the 90th and 95th percentiles in a young to middle-aged healthy population. The CCI is in good agreement with the diagnosis of ACVI. Our results could be helpful for using D-15DS for screening purposes.

  3. On computations of variance, covariance and correlation for interval data

    NASA Astrophysics Data System (ADS)

    Kishida, Masako

    2017-02-01

    In many practical situations, the data on which statistical analysis is to be performed is only known with interval uncertainty. Different combinations of values from the interval data usually lead to different values of variance, covariance, and correlation. Hence, it is desirable to compute the endpoints of possible values of these statistics. This problem is, however, NP-hard in general. This paper shows that the problem of computing the endpoints of possible values of these statistics can be rewritten as the problem of computing skewed structured singular values ν, for which there exist feasible (polynomial-time) algorithms that compute reasonably tight bounds in most practical cases. This allows one to find tight intervals of the aforementioned statistics for interval data.
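For small n the structure of the problem is easy to see directly. Sample variance is convex in each coordinate, so its maximum over the box of intervals is attained at interval endpoints; at the minimum, each data point clamps a common value to its interval. A brute-force sketch with hypothetical interval data (feasible only for tiny n, which is exactly why the paper's polynomial-time ν-based bounds matter):

```python
from itertools import product

def sample_variance(xs):
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - 1)

def variance_upper(intervals):
    """Exact maximum: variance is convex in each coordinate, so the
    maximum over a box is attained at a vertex. Enumerates all 2^n
    endpoint combinations -- exponential cost, small n only."""
    return max(sample_variance(c) for c in product(*intervals))

def variance_lower(intervals, steps=1000):
    """Approximate minimum: at the optimum each x_i clamps a common
    value t to [l_i, u_i], so a 1-D scan over t suffices."""
    lo = min(l for l, _ in intervals)
    hi = max(u for _, u in intervals)
    best = float("inf")
    for k in range(steps + 1):
        t = lo + (hi - lo) * k / steps
        xs = [min(max(t, l), u) for l, u in intervals]
        best = min(best, sample_variance(xs))
    return best

data = [(0.0, 2.0), (1.0, 3.0), (0.5, 1.5)]
vmax = variance_upper(data)   # exact upper endpoint of possible variances
vmin = variance_lower(data)   # 0 here: all three intervals overlap
```

Note that the minimum is not attained at endpoints (here it is 0, achieved in the interiors), which is one reason naive endpoint enumeration alone does not solve the problem.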

  4. Methodology for building confidence measures

    NASA Astrophysics Data System (ADS)

    Bramson, Aaron L.

    2004-04-01

This paper presents a generalized methodology for propagating known or estimated levels of individual source document truth reliability to determine the confidence level of a combined output. Initial document certainty levels are augmented by (i) combining the reliability measures of multiple sources, (ii) incorporating the truth reinforcement of related elements, and (iii) incorporating the importance of the individual elements for determining the probability of truth for the whole. The result is a measure of confidence in system output based on the establishment of links among the truth values of inputs. This methodology was developed for application to a multi-component situation awareness tool under development at the Air Force Research Laboratory in Rome, New York. Determining how improvements in data quality and the variety of documents collected affect the probability of a correct situational detection helps optimize the performance of the tool overall.

  5. An approach for sample size determination of average bioequivalence based on interval estimation.

    PubMed

    Chiang, Chieh; Hsiao, Chin-Fu

    2017-03-30

In 1992, the US Food and Drug Administration declared that two drugs demonstrate average bioequivalence (ABE) if the log-transformed mean difference of pharmacokinetic responses lies in (-0.223, 0.223). The most widely used approach for assessing ABE is the two one-sided tests procedure. More specifically, ABE is concluded when a 100(1 - 2α)% confidence interval for the mean difference falls within (-0.223, 0.223). As is well known, bioequivalence studies are usually conducted with a crossover design. However, when the half-life of a drug is long, a parallel design for the bioequivalence study may be preferred. In this study, a two-sided interval estimation - such as Satterthwaite's, Cochran-Cox's, or Howe's approximation - is used for assessing parallel ABE. We show that the asymptotic joint distribution of the lower and upper confidence limits is bivariate normal, and thus the sample size can be calculated based on the asymptotic power so that the confidence interval falls within (-0.223, 0.223). Simulation studies also show that the proposed method achieves sufficient empirical power. A real example is provided to illustrate the proposed method. Copyright © 2017 John Wiley & Sons, Ltd.
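The CI-inclusion rule itself is simple to state in code. A sketch using a normal approximation in place of the t critical value; the point estimates and standard errors below are illustrative inputs, not data from the paper:

```python
from statistics import NormalDist

def abe_by_ci(diff, se, alpha=0.05, limit=0.223):
    """Conclude average bioequivalence if the 100(1 - 2*alpha)% CI for
    the log-scale mean difference lies within (-limit, limit), where
    0.223 = ln(1.25). Uses a normal approximation to the t quantile."""
    z = NormalDist().inv_cdf(1 - alpha)
    lo, hi = diff - z * se, diff + z * se
    return (-limit < lo and hi < limit), (lo, hi)

ok_pass, ci_pass = abe_by_ci(diff=0.05, se=0.06)  # CI inside the limits: ABE
ok_fail, ci_fail = abe_by_ci(diff=0.15, se=0.06)  # upper limit exceeds 0.223
```

The sample-size question the paper addresses is the converse: choose n large enough (which shrinks se) that this inclusion event occurs with the desired power.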

  6. A global multicenter study on reference values: 1. Assessment of methods for derivation and comparison of reference intervals.

    PubMed

    Ichihara, Kiyoshi; Ozarda, Yesim; Barth, Julian H; Klee, George; Qiu, Ling; Erasmus, Rajiv; Borai, Anwar; Evgina, Svetlana; Ashavaid, Tester; Khan, Dilshad; Schreier, Laura; Rolle, Reynan; Shimizu, Yoshihisa; Kimura, Shogo; Kawano, Reo; Armbruster, David; Mori, Kazuo; Yadav, Binod K

    2017-04-01

    The IFCC Committee on Reference Intervals and Decision Limits coordinated a global multicenter study on reference values (RVs) to explore rational and harmonizable procedures for derivation of reference intervals (RIs) and investigate the feasibility of sharing RIs through evaluation of sources of variation of RVs on a global scale. For the common protocol, rather lenient criteria for reference individuals were adopted to facilitate harmonized recruitment with planned use of the latent abnormal values exclusion (LAVE) method. As of July 2015, 12 countries had completed their study with total recruitment of 13,386 healthy adults. 25 analytes were measured chemically and 25 immunologically. A serum panel with assigned values was measured by all laboratories. RIs were derived by parametric and nonparametric methods. The effect of LAVE methods is prominent in analytes which reflect nutritional status, inflammation and muscular exertion, indicating that inappropriate results are frequent in any country. The validity of the parametric method was confirmed by the presence of analyte-specific distribution patterns and successful Gaussian transformation using the modified Box-Cox formula in all countries. After successful alignment of RVs based on the panel test results, nearly half the analytes showed variable degrees of between-country differences. This finding, however, requires confirmation after adjusting for BMI and other sources of variation. The results are reported in the second part of this paper. The collaborative study enabled us to evaluate rational methods for deriving RIs and comparing the RVs based on real-world datasets obtained in a harmonized manner. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  7. The effect of terrorism on public confidence : an exploratory study.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, M. S.; Baldwin, T. E.; Samsa, M. E.

A primary goal of terrorism is to instill a sense of fear and vulnerability in a population and to erode confidence in government and law enforcement agencies to protect citizens against future attacks. In recognition of its importance, the Department of Homeland Security includes public confidence as one of the metrics it uses to assess the consequences of terrorist attacks. Hence, several factors--including a detailed understanding of the variations in public confidence among individuals, by type of terrorist event, and as a function of time--are critical to developing this metric. In this exploratory study, a questionnaire was designed, tested, and administered to small groups of individuals to measure public confidence in the ability of federal, state, and local governments and their public safety agencies to prevent acts of terrorism. Data were collected from the groups before and after they watched mock television news broadcasts portraying a smallpox attack, a series of suicide bomber attacks, a refinery bombing, and cyber intrusions on financial institutions that resulted in identity theft and financial losses. Our findings include the following: (a) the subjects can be classified into at least three distinct groups on the basis of their baseline outlook--optimistic, pessimistic, and unaffected; (b) the subjects make discriminations in their interpretations of an event on the basis of the nature of a terrorist attack, the time horizon, and its impact; (c) the recovery of confidence after a terrorist event has an incubation period and typically does not return to its initial level in the long-term; (d) the patterns of recovery of confidence differ between the optimists and the pessimists; and (e) individuals are able to associate a monetary value with a loss or gain in confidence, and the value associated with a loss is greater than the value associated with a gain. These findings illustrate the importance the public places in their confidence in

  8. Power, effects, confidence, and significance: an investigation of statistical practices in nursing research.

    PubMed

    Gaskin, Cadeyrn J; Happell, Brenda

    2014-05-01

    To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Statistical review. Papers published in the 2011 volumes of the 10 highest ranked nursing journals, based on their 5-year impact factors. Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR]=.24-.71), .98 (IQR=.85-1.00), and 1.00 (IQR=1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR=.26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), Chi-squared tests (8%), and Fisher's exact tests (7%), and not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers. The use, reporting, and interpretation of inferential statistics in nursing research need substantial
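The power figures in this record can be approximated with the standard normal-approximation formula for a two-sided two-sample t-test. A minimal sketch using only the Python standard library; the function name and the n = 64 per group example are illustrative, not taken from the reviewed papers:

```python
from math import sqrt
from statistics import NormalDist

def two_sample_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample t-test for
    standardized effect size d (normal approximation to the t test)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    ncp = d * sqrt(n_per_group / 2)  # noncentrality under the alternative
    return z.cdf(ncp - z_alpha) + z.cdf(-ncp - z_alpha)

# Cohen's conventional small/medium/large effect sizes, n = 64 per group
for d in (0.2, 0.5, 0.8):
    print(f"d = {d}: power ≈ {two_sample_power(d, 64):.2f}")
```

With 64 participants per group, a medium effect (d = 0.5) is detected with roughly 80% power, while a small effect (d = 0.2) has only about 20%, which mirrors the pattern of low power for small effects reported above.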

  9. Prognostic value of long noncoding RNA MALAT1 in digestive system malignancies.

    PubMed

    Zhai, Hui; Li, Xiao-Mei; Maimaiti, Ailifeire; Chen, Qing-Jie; Liao, Wu; Lai, Hong-Mei; Liu, Fen; Yang, Yi-Ning

    2015-01-01

    MALAT1, a newly discovered long noncoding RNA (lncRNA), has been reported to be highly expressed in many types of cancers. This meta-analysis summarizes its potential prognostic value in digestive system malignancies. A quantitative meta-analysis was performed through a systematic search in PubMed, Cochrane Library, Web of Science and Chinese National Knowledge Infrastructure (CNKI) for eligible papers on the prognostic impact of MALAT1 in digestive system malignancies from inception to Apr. 25, 2015. Pooled hazard ratios (HRs) with 95% confidence interval (95% CI) were calculated to summarize the effect. Five studies were included in the study, with a total of 527 patients. A significant association was observed between MALAT1 abundance and poor overall survival (OS) of patients with digestive system malignancies, with pooled hazard ratio (HR) of 7.68 (95% confidence interval [CI]: 4.32-13.66, P<0.001). Meta sensitivity analysis suggested the reliability of our findings. No publication bias was observed. MALAT1 abundance may serve as a novel predictive factor for poor prognosis in patients with digestive system malignancies.
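Pooled hazard ratios of this kind are usually computed by inverse-variance weighting on the log-HR scale. A minimal fixed-effect sketch with hypothetical study inputs (the abstract does not state which pooling model the authors used, and these numbers are not from the five included studies):

```python
from math import exp, log, sqrt

def pool_log_hazard_ratios(hrs_with_ci):
    """Fixed-effect inverse-variance pooling of hazard ratios.
    Each entry: (HR, lower 95% CI bound, upper 95% CI bound)."""
    weights, weighted_sum = 0.0, 0.0
    for hr, lo, hi in hrs_with_ci:
        se = (log(hi) - log(lo)) / (2 * 1.96)  # back out the SE from the CI
        w = 1 / se ** 2
        weights += w
        weighted_sum += w * log(hr)
    pooled = weighted_sum / weights
    se_pooled = sqrt(1 / weights)
    return (exp(pooled),
            exp(pooled - 1.96 * se_pooled),
            exp(pooled + 1.96 * se_pooled))

# hypothetical per-study HRs, for illustration only
hr, lo, hi = pool_log_hazard_ratios([(6.0, 2.0, 18.0), (9.0, 3.0, 27.0)])
print(f"pooled HR {hr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```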

  10. [Reference values for lead in blood in urban population in southern Brazil].

    PubMed

    Paoliello, M M; Gutierrez, P R; Turini, C A; Matsuo, T; Mezzaroba, L; Barbosa, D S; Carvalho, S R; Alvarenga, A L; Rezende, M I; Figueiroa, G A; Leite, V G; Gutierrez, A C; Lobo, B C; Cascales, R A

    2001-05-01

    To describe the reference values for lead in blood in an urban population in the city of Londrina, in the state of Paraná, Brazil. The reference population was composed of 520 adult volunteers who were assessed from November 1994 to December 1996. Exclusion criteria were: occupational exposure to lead, exposure through personal habits or practices, smoking more than 10 cigarettes per day, and living near industrial plants or other places that use lead in their production processes. Also excluded were individuals with abnormal clinical or laboratory results or with chronic diseases or cardiovascular disorders. Lead blood levels were determined using air-acetylene flame atomic absorption spectrophotometry. The detectable limit was 1.23 micrograms/dL. After the analyses of lead in blood, the following values were determined: minimum value, first quartile, median, third quartile, and maximum value; geometric mean; 95% confidence interval; experimental interval; and reference value. The reference values for lead in blood ranged from 1.20 micrograms/dL to 13.72 micrograms/dL. The geometric mean was 5.5 micrograms/dL. In general, the values found in this study are lower than those that have been reported for other countries. Additional data should be gathered from Brazilian populations living in more-industrialized areas.
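The summary statistics listed here (geometric mean, quartiles, reference limits) can be reproduced with standard-library tools. A sketch on a handful of hypothetical blood-lead values; note that nonparametric 2.5th/97.5th percentile reference limits are normally estimated from far larger samples (this study used 520 volunteers):

```python
from statistics import geometric_mean, quantiles

# hypothetical blood-lead values in micrograms/dL, not the Londrina data
lead = [1.8, 2.9, 3.4, 4.1, 4.7, 5.2, 5.8, 6.3, 7.0, 7.9,
        8.6, 9.4, 10.2, 11.5, 12.8]

gm = geometric_mean(lead)
# nonparametric reference limits: the 2.5th and 97.5th percentiles
pcts = quantiles(lead, n=40, method="inclusive")  # cut points every 2.5%
lower, upper = pcts[0], pcts[-1]
print(f"geometric mean {gm:.2f}, reference interval {lower:.2f}-{upper:.2f}")
```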

  11. Can 3-dimensional power Doppler indices improve the prenatal diagnosis of a potentially morbidly adherent placenta in patients with placenta previa?

    PubMed

    Haidar, Ziad A; Papanna, Ramesha; Sibai, Baha M; Tatevian, Nina; Viteri, Oscar A; Vowels, Patricia C; Blackwell, Sean C; Moise, Kenneth J

    2017-08-01

    Traditionally, 2-dimensional ultrasound parameters have been used for the diagnosis of a suspected morbidly adherent placenta previa. More objective techniques have not been well studied yet. The objective of the study was to determine the ability of prenatal 3-dimensional power Doppler analysis of flow and vascular indices to predict the morbidly adherent placenta objectively. A prospective cohort study was performed in women between 28 and 32 gestational weeks with known placenta previa. Patients underwent a two-dimensional gray-scale ultrasound that determined management decisions. 3-Dimensional power Doppler volumes were obtained during the same examination and vascular, flow, and vascular flow indices were calculated after manual tracing of the viewed placenta in the sweep; data were blinded to obstetricians. Morbidly adherent placenta was confirmed by histology. Severe morbidly adherent placenta was defined as increta/percreta on histology, blood loss >2000 mL, and >2 units of PRBC transfused. Sensitivities, specificities, predictive values, and likelihood ratios were calculated. Student t and χ² tests, logistic regression, receiver-operating characteristic curves, and intra- and interrater agreements using Kappa statistics were performed. The following results were found: (1) 50 women were studied: 23 had morbidly adherent placenta, of which 12 (52.2%) were severe morbidly adherent placenta; (2) 2-dimensional parameters diagnosed morbidly adherent placenta with a sensitivity of 82.6% (95% confidence interval, 60.4-94.2), a specificity of 88.9% (95% confidence interval, 69.7-97.1), a positive predictive value of 86.3% (95% confidence interval, 64.0-96.4), a negative predictive value of 85.7% (95% confidence interval, 66.4-95.3), a positive likelihood ratio of 7.4 (95% confidence interval, 2.5-21.9), and a negative likelihood ratio of 0.2 (95% confidence interval, 0.08-0.48); (3) mean values of the vascular index (32.8 ± 7.4) and the vascular flow index

  12. Face distinctiveness and delayed testing: differential effects on performance and confidence.

    PubMed

    Metzger, Mitchell M

    2006-04-01

    The author investigated the effect of delayed testing on participants' memory for distinctive and typical faces. Participants viewed distinctive and typical faces and were then tested for recognition immediately or after a delay of 3, 6, or 12 weeks. Consistent with prior research, analysis of measure of sensitivity (d') showed that participants performed better on distinctive rather than typical faces, and memory performance declined with longer retention intervals between study and testing. Furthermore, the superior performance on distinctive faces had vanished by the 12-week test. Contrary to d' data, however, an analysis of confidence scores indicated that participants were still significantly more confident on trials depicting distinctive faces, even with a 12-week delay between study and recognition testing.

  13. Prediction Interval Development for Wind-Tunnel Balance Check-Loading

    NASA Technical Reports Server (NTRS)

    Landman, Drew; Toro, Kenneth G.; Commo, Sean A.; Lynn, Keith C.

    2014-01-01

    Results from the Facility Analysis Verification and Operational Reliability project revealed a critical gap in capability in ground-based aeronautics research applications. Without a standardized process for check-loading the wind-tunnel balance or the model system, the quality of the aerodynamic force data collected varied significantly between facilities. A prediction interval is required in order to confirm a check-loading. The prediction interval provides an expected upper and lower bound on balance load prediction at a given confidence level. A method has been developed which accounts for sources of variability due to calibration and check-load application. The prediction interval method of calculation and a case study demonstrating its use is provided. Validation of the methods is demonstrated for the case study based on the probability of capture of confirmation points.
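A prediction interval of the kind described bounds a single new observation rather than the mean response. The sketch below shows only the textbook prediction interval for a simple linear calibration fit, with a normal approximation to the t quantile and hypothetical load data; the method developed in the paper additionally propagates calibration-stage variance, which this omits:

```python
from math import sqrt
from statistics import NormalDist, mean

def prediction_interval(x, y, x_new, conf=0.95):
    """Prediction interval for a new response at x_new from a simple
    linear fit (normal approximation to the t quantile)."""
    n = len(x)
    xbar = mean(x)
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sum((xi - xbar) * yi for xi, yi in zip(x, y)) / sxx
    a = mean(y) - b * xbar
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    s = sqrt(sum(r * r for r in resid) / (n - 2))  # residual std. error
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    half = z * s * sqrt(1 + 1 / n + (x_new - xbar) ** 2 / sxx)
    yhat = a + b * x_new
    return yhat - half, yhat + half

# hypothetical check-loads: applied vs. balance-predicted force, in N
applied  = [0, 100, 200, 300, 400, 500]
measured = [1.2, 101.0, 199.4, 301.1, 399.2, 500.8]
lo, hi = prediction_interval(applied, measured, 250)
print(f"a confirmation point at 250 N is expected in [{lo:.1f}, {hi:.1f}]")
```

A check-load falling outside this band at the stated confidence level would flag the balance for further investigation.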

  14. Interpregnancy interval and risk of autistic disorder.

    PubMed

    Gunnes, Nina; Surén, Pål; Bresnahan, Michaeline; Hornig, Mady; Lie, Kari Kveim; Lipkin, W Ian; Magnus, Per; Nilsen, Roy Miodini; Reichborn-Kjennerud, Ted; Schjølberg, Synnve; Susser, Ezra Saul; Øyen, Anne-Siri; Stoltenberg, Camilla

    2013-11-01

    A recent California study reported increased risk of autistic disorder in children conceived within a year after the birth of a sibling. We assessed the association between interpregnancy interval and risk of autistic disorder using nationwide registry data on pairs of singleton full siblings born in Norway. We defined interpregnancy interval as the time from birth of the first-born child to conception of the second-born child in a sibship. The outcome of interest was autistic disorder in the second-born child. Analyses were restricted to sibships in which the second-born child was born in 1990-2004. Odds ratios (ORs) were estimated by fitting ordinary logistic models and logistic generalized additive models. The study sample included 223,476 singleton full-sibling pairs. In sibships with interpregnancy intervals <9 months, 0.25% of the second-born children had autistic disorder, compared with 0.13% in the reference category (≥ 36 months). For interpregnancy intervals shorter than 9 months, the adjusted OR of autistic disorder in the second-born child was 2.18 (95% confidence interval 1.42-3.26). The risk of autistic disorder in the second-born child was also increased for interpregnancy intervals of 9-11 months in the adjusted analysis (OR = 1.71 [95% CI = 1.07-2.64]). Consistent with a previous report from California, interpregnancy intervals shorter than 1 year were associated with increased risk of autistic disorder in the second-born child. A possible explanation is depletion of micronutrients in mothers with closely spaced pregnancies.
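The unadjusted version of such an odds ratio, with its Wald 95% confidence interval, comes directly from a 2x2 table. A sketch with hypothetical counts (the paper's ORs were adjusted via logistic models, which this does not reproduce):

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# hypothetical counts, not the Norwegian registry data:
# short interval: 25 cases / 9,975 unaffected; long: 13 / 9,987
or_, lo, hi = odds_ratio_ci(25, 9975, 13, 9987)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Note that with these made-up counts the lower bound sits just below 1, illustrating why a CI, not just the point estimate, is needed to judge the association.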

  15. High-Intensity Interval Training, Appetite, and Reward Value of Food in the Obese.

    PubMed

    Martins, Catia; Aschehoug, Irina; Ludviksen, Marit; Holst, Jens; Finlayson, Graham; Wisloff, Ulrik; Morgan, Linda; King, Neil; Kulseng, Bård

    2017-09-01

    Studies on the effect of chronic interval training on appetite in the obese population are scarce. The aim of this study was to determine the effect of 12 wk of isocaloric programs of moderate-intensity continuous training (MICT), high-intensity interval training (HIIT), or short-duration HIIT on subjective feelings of appetite, appetite-related hormones, and reward value of food in sedentary obese individuals. Forty-six sedentary obese individuals (30 women and 16 men), with a body mass index of 33.3 ± 2.9 kg·m⁻² and age of 34.4 ± 8.8 yr, were randomly assigned to one of the three training groups: MICT (n = 14), HIIT (n = 16), or short-duration HIIT (n = 16). Exercise was performed three times per week for 12 wk. Subjective feelings of appetite and plasma levels of acylated ghrelin, polypeptide YY3-36, and glucagon-like peptide 1 were measured before and after a standard breakfast (every 30 min up to 3 h), before and after the exercise intervention. Fat and sweet taste preferences and food reward were measured using the Leeds Food Preference Questionnaire. A significant increase in fasting and postprandial feelings of hunger was observed with the exercise intervention (P = 0.01 and P = 0.048, respectively), but no effect of group and no interaction. No significant effect of exercise intervention, group, or interaction was found on fasting or postprandial subjective feelings of fullness, desire to eat, and prospective food consumption or plasma concentration of acylated ghrelin, polypeptide YY3-36, and glucagon-like peptide 1. No changes in food preference or reward over time, differences between groups, or interactions were found. This study suggests that chronic HIIT has no independent effect on appetite or food reward when compared with an isocaloric program of MICT in obese individuals.

  16. Determination of confidence limits for experiments with low numbers of counts. [Poisson-distributed photon counts from astrophysical sources

    NASA Technical Reports Server (NTRS)

    Kraft, Ralph P.; Burrows, David N.; Nousek, John A.

    1991-01-01

    Two different methods, classical and Bayesian, for determining confidence intervals involving Poisson-distributed data are compared. Particular consideration is given to cases where the number of counts observed is small and is comparable to the mean number of background counts. Reasons for preferring the Bayesian over the classical method are given. Tables of confidence limits calculated by the Bayesian method are provided for quick reference.
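For the zero-background case, the Bayesian upper limit described here (with a flat prior) reduces to finding the source strength S at which the Poisson CDF at the observed count equals 1 - CL. A sketch by bisection, using only the standard library; the function name is illustrative, and the paper's treatment of nonzero background counts is omitted:

```python
from math import exp, factorial

def poisson_upper_limit(n_obs, cl=0.95, tol=1e-8):
    """Bayesian upper limit on a Poisson mean for n_obs observed counts
    and zero background (flat prior): solve
    sum_{n<=n_obs} exp(-s) s^n / n! = 1 - cl for s by bisection."""
    def tail(s):
        return sum(exp(-s) * s**n / factorial(n) for n in range(n_obs + 1))
    lo, hi = 0.0, n_obs + 10 * (n_obs ** 0.5 + 1)  # bracket the root
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if tail(mid) > 1 - cl:  # tail(s) decreases monotonically in s
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(f"{poisson_upper_limit(0):.3f}")  # ≈ 2.996 for zero observed counts
```

For zero observed counts the 95% upper limit is ln 20 ≈ 3, the familiar "rule of three" for low-count experiments.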

  17. Parents' obesity-related behavior and confidence to support behavioral change in their obese child: data from the STAR study.

    PubMed

    Arsenault, Lisa N; Xu, Kathleen; Taveras, Elsie M; Hacker, Karen A

    2014-01-01

    Successful childhood obesity interventions frequently focus on behavioral modification and involve parents or family members. Parental confidence in supporting behavior change may be an element of successful family-based prevention efforts. We aimed to determine whether parents' own obesity-related behaviors were related to their confidence in supporting their child's achievement of obesity-related behavioral goals. Cross-sectional analyses of data collected at baseline of a randomized control trial testing a treatment intervention for obese children (n = 787) in primary care settings (n = 14). Five obesity-related behaviors (physical activity, screen time, sugar-sweetened beverage, sleep duration, fast food) were self-reported by parents for themselves and their child. Behaviors were dichotomized on the basis of achievement of behavioral goals. Five confidence questions asked how confident the parent was in helping their child achieve each goal. Logistic regression modeling high confidence was conducted with goal achievement and demographics as independent variables. Parents achieving physical activity or sleep duration goals were significantly more likely to be highly confident in supporting their child's achievement of those goals (physical activity, odds ratio 1.76; 95% confidence interval 1.19-2.60; sleep, odds ratio 1.74; 95% confidence interval 1.09-2.79) independent of sociodemographic variables and child's current behavior. Parental achievements of TV watching and fast food goals were also associated with confidence, but significance was attenuated after child's behavior was included in models. Parents' own obesity-related behaviors are factors that may affect their confidence to support their child's behavior change. Providers seeking to prevent childhood obesity should address parent/family behaviors as part of their obesity prevention strategies. Copyright © 2014 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.

  18. One-way ANOVA based on interval information

    NASA Astrophysics Data System (ADS)

    Hesamian, Gholamreza

    2016-08-01

    This paper deals with extending the one-way analysis of variance (ANOVA) to the case where the observed data are represented by closed intervals rather than real numbers. In this approach, first a notion of interval random variable is introduced. Especially, a normal distribution with interval parameters is introduced to investigate hypotheses about the equality of interval means or test the homogeneity of interval variances assumption. Moreover, the least significant difference (LSD method) for investigating multiple comparison of interval means is developed when the null hypothesis about the equality of means is rejected. Then, at a given interval significance level, an index is applied to compare the interval test statistic and the related interval critical value as a criterion to accept or reject the null interval hypothesis of interest. Finally, the method of decision-making leads to some degrees to accept or reject the interval hypotheses. An applied example will be used to show the performance of this method.

  19. Event- and interval-based measurement of stuttering: a review.

    PubMed

    Valente, Ana Rita S; Jesus, Luis M T; Hall, Andreia; Leahy, Margaret

    2015-01-01

    Event- and interval-based measurements are two different ways of computing frequency of stuttering. Interval-based methodology emerged as an alternative measure to overcome problems associated with reproducibility in the event-based methodology. No review has been made to study the effect of methodological factors in interval-based absolute reliability data or to compute the agreement between the two methodologies in terms of inter-judge, intra-judge and accuracy (i.e., correspondence between raters' scores and an established criterion). To provide a review related to reproducibility of event-based and time-interval measurement, and to verify the effect of methodological factors (training, experience, interval duration, sample presentation order and judgment conditions) on agreement of time-interval measurement; in addition, to determine if it is possible to quantify the agreement between the two methodologies The first two authors searched for articles on ERIC, MEDLINE, PubMed, B-on, CENTRAL and Dissertation Abstracts during January-February 2013 and retrieved 495 articles. Forty-eight articles were selected for review. Content tables were constructed with the main findings. Articles related to event-based measurements revealed values of inter- and intra-judge greater than 0.70 and agreement percentages beyond 80%. The articles related to time-interval measures revealed that, in general, judges with more experience with stuttering presented significantly higher levels of intra- and inter-judge agreement. Inter- and intra-judge values were beyond the references for high reproducibility values for both methodologies. Accuracy (regarding the closeness of raters' judgements with an established criterion), intra- and inter-judge agreement were higher for trained groups when compared with non-trained groups. Sample presentation order and audio/video conditions did not result in differences in inter- or intra-judge results. 
A duration of 5 s for an interval appears to be

  20. Reference values for 34 frequently used laboratory tests in 80-year-old men and women.

    PubMed

    Helmersson-Karlqvist, Johanna; Ridefelt, Peter; Lind, Lars; Larsson, Anders

    2016-10-01

    Reference values are usually based on blood samples from healthy individuals in the age range 20-50 years. Most patients seeking health care are older than this reference population. Many reference intervals are age dependent and there is thus a need to have appropriate reference intervals also for elderly individuals. We analyzed a group of frequently used laboratory tests in an 80-year-old population (n=531, 266 females and 265 males). The 2.5th and 97.5th percentiles for these markers were calculated according to the International Federation of Clinical Chemistry guidelines on the statistical treatment of reference values. Reference values are reported for serum alanine transaminase (ALT), albumin, alkaline phosphatase, pancreatic amylase, apolipoprotein A1, apolipoprotein B, apolipoprotein B/apolipoprotein A1 ratio, aspartate aminotransferase (AST), AST/ALT ratio, bilirubin, calcium, calprotectin, cholesterol, HDL-cholesterol, creatinine kinase (CK), creatinine, creatinine estimated GFR, C-reactive protein, cystatin C, cystatin C estimated GFR, gamma-glutamyltransferase (GGT), iron, iron saturation, lactate dehydrogenase (LDH), magnesium, phosphate, transferrin, triglycerides, urate, urea, zinc, hemoglobin, platelet count and white blood cell count. The upper reference limit for creatinine and urea was significantly increased while the lower limit for iron and albumin was decreased in this elderly population in comparison with the population in the Nordic Reference Interval Project (NORIP). Reference values calculated from the whole population and a subpopulation without cardiovascular disease showed strong concordance. Several of the reference interval limits were outside the 90% confidence interval of NORIP. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  1. Optimal Measurement Interval for Emergency Department Crowding Estimation Tools.

    PubMed

    Wang, Hao; Ojha, Rohit P; Robinson, Richard D; Jackson, Bradford E; Shaikh, Sajid A; Cowden, Chad D; Shyamanand, Rath; Leuck, JoAnna; Schrader, Chet D; Zenarosa, Nestor R

    2017-11-01

    Emergency department (ED) crowding is a barrier to timely care. Several crowding estimation tools have been developed to facilitate early identification of and intervention for crowding. Nevertheless, the ideal frequency is unclear for measuring ED crowding by using these tools. Short intervals may be resource intensive, whereas long ones may not be suitable for early identification. Therefore, we aim to assess whether outcomes vary by measurement interval for 4 crowding estimation tools. Our eligible population included all patients between July 1, 2015, and June 30, 2016, who were admitted to the JPS Health Network ED, which serves an urban population. We generated 1-, 2-, 3-, and 4-hour ED crowding scores for each patient, using 4 crowding estimation tools (National Emergency Department Overcrowding Scale [NEDOCS], Severely Overcrowded, Overcrowded, and Not Overcrowded Estimation Tool [SONET], Emergency Department Work Index [EDWIN], and ED Occupancy Rate). Our outcomes of interest included ED length of stay (minutes) and left without being seen or eloped within 4 hours. We used accelerated failure time models to estimate interval-specific time ratios and corresponding 95% confidence limits for length of stay, in which the 1-hour interval was the reference. In addition, we used binomial regression with a log link to estimate risk ratios (RRs) and corresponding confidence limit for left without being seen. Our study population comprised 117,442 patients. The time ratios for length of stay were similar across intervals for each crowding estimation tool (time ratio=1.37 to 1.30 for NEDOCS, 1.44 to 1.37 for SONET, 1.32 to 1.27 for EDWIN, and 1.28 to 1.23 for ED Occupancy Rate). The RRs of left without being seen differences were also similar across intervals for each tool (RR=2.92 to 2.56 for NEDOCS, 3.61 to 3.36 for SONET, 2.65 to 2.40 for EDWIN, and 2.44 to 2.14 for ED Occupancy Rate). 
Our findings suggest limited variation in length of stay or left without being

  2. Interval-valued distributed preference relation and its application to group decision making

    PubMed Central

    Liu, Yin; Xue, Min; Chang, Wenjun; Yang, Shanlin

    2018-01-01

    As an important way to help express the preference relation between alternatives, distributed preference relation (DPR) can represent the preferred, non-preferred, indifferent, and uncertain degrees of one alternative over another simultaneously. DPR, however, is unavailable in some situations where a decision maker cannot provide the precise degrees of one alternative over another due to lack of knowledge, experience, and data. In this paper, to address this issue, we propose interval-valued DPR (IDPR) and present its properties of validity and normalization. Through constructing two optimization models, an IDPR matrix is transformed into a score matrix to facilitate the comparison between any two alternatives. The properties of the score matrix are analyzed. To guarantee the rationality of the comparisons between alternatives derived from the score matrix, the additive consistency of the score matrix is developed. In terms of these, IDPR is applied to model and solve multiple criteria group decision making (MCGDM) problem. Particularly, the relationship between the parameters for the consistency of the score matrix associated with each decision maker and those for the consistency of the score matrix associated with the group of decision makers is analyzed. A manager selection problem is investigated to demonstrate the application of IDPRs to MCGDM problems. PMID:29889871

  3. Establishment of reference intervals for plasma protein electrophoresis in Indo-Pacific green sea turtles, Chelonia mydas.

    PubMed

    Flint, Mark; Matthews, Beren J; Limpus, Colin J; Mills, Paul C

    2015-01-01

    Biochemical and haematological parameters are increasingly used to diagnose disease in green sea turtles. Specific clinical pathology tools, such as plasma protein electrophoresis analysis, are now being used more frequently to improve our ability to diagnose disease in the live animal. Plasma protein reference intervals were calculated from 55 clinically healthy green sea turtles using pulsed field electrophoresis to determine pre-albumin, albumin, α-, β- and γ-globulin concentrations. The estimated reference intervals were then compared with data profiles from clinically unhealthy turtles admitted to a local wildlife hospital to assess the validity of the derived intervals and identify the clinically useful plasma protein fractions. Eighty-six per cent {19 of 22 [95% confidence interval (CI) 65-97]} of clinically unhealthy turtles had values outside the derived reference intervals, including the following: total protein [six of 22 turtles or 27% (95% CI 11-50%)], pre-albumin [two of five, 40% (95% CI 5-85%)], albumin [13 of 22, 59% (95% CI 36-79%)], total albumin [13 of 22, 59% (95% CI 36-79%)], α- [10 of 22, 45% (95% CI 24-68%)], β- [two of 10, 20% (95% CI 3-56%)], γ- [one of 10, 10% (95% CI 0.3-45%)] and β-γ-globulin [one of 12, 8% (95% CI 0.2-38%)] and total globulin [five of 22, 23% (8-45%)]. Plasma protein electrophoresis shows promise as an accurate adjunct tool to identify a disease state in marine turtles. This study presents the first reference interval for plasma protein electrophoresis in the Indo-Pacific green sea turtle.

  4. Zero- vs. one-dimensional, parametric vs. non-parametric, and confidence interval vs. hypothesis testing procedures in one-dimensional biomechanical trajectory analysis.

    PubMed

    Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A

    2015-05-01

    Biomechanical processes are often manifested as one-dimensional (1D) trajectories. It has been shown that 1D confidence intervals (CIs) are biased when based on 0D statistical procedures, and the non-parametric 1D bootstrap CI has emerged in the Biomechanics literature as a viable solution. The primary purpose of this paper was to clarify that, for 1D biomechanics datasets, the distinction between 0D and 1D methods is much more important than the distinction between parametric and non-parametric procedures. A secondary purpose was to demonstrate that a parametric equivalent to the 1D bootstrap exists in the form of a random field theory (RFT) correction for multiple comparisons. To emphasize these points we analyzed six datasets consisting of force and kinematic trajectories in one-sample, paired, two-sample and regression designs. Results showed, first, that the 1D bootstrap and other 1D non-parametric CIs were qualitatively identical to RFT CIs, and all were very different from 0D CIs. Second, 1D parametric and 1D non-parametric hypothesis testing results were qualitatively identical for all six datasets. Last, we highlight the limitations of 1D CIs by demonstrating that they are complex, design-dependent, and thus non-generalizable. These results suggest that (i) analyses of 1D data based on 0D models of randomness are generally biased unless one explicitly identifies 0D variables before the experiment, and (ii) parametric and non-parametric 1D hypothesis testing provide an unambiguous framework for analysis when one's hypothesis explicitly or implicitly pertains to whole 1D trajectories. Copyright © 2015 Elsevier Ltd. All rights reserved.
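The 0D percentile bootstrap CI referenced here, applied to a scalar summary of a trajectory, can be sketched in a few lines (the data are hypothetical; the paper's 1D methods instead construct a band over the whole trajectory):

```python
import random
from statistics import mean

def percentile_bootstrap_ci(data, stat=mean, n_boot=5000, conf=0.95, seed=1):
    """Non-parametric percentile bootstrap CI for a 0D statistic:
    resample with replacement, recompute the statistic, take percentiles."""
    rng = random.Random(seed)
    reps = sorted(stat(rng.choices(data, k=len(data)))
                  for _ in range(n_boot))
    lo_i = int((1 - conf) / 2 * n_boot)
    hi_i = int((1 + conf) / 2 * n_boot) - 1
    return reps[lo_i], reps[hi_i]

# hypothetical scalar peak-force summaries, one per trial
forces = [10.2, 11.1, 9.8, 10.7, 12.0, 10.9, 11.4, 9.5, 10.1, 11.8]
lo, hi = percentile_bootstrap_ci(forces)
print(f"95% percentile bootstrap CI for the mean: [{lo:.2f}, {hi:.2f}]")
```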

  5. Abstract: Inference and Interval Estimation for Indirect Effects With Latent Variable Models.

    PubMed

    Falk, Carl F; Biesanz, Jeremy C

    2011-11-30

    Models specifying indirect effects (or mediation) and structural equation modeling are both popular in the social sciences. Yet relatively little research has compared methods that test for indirect effects among latent variables and provided precise estimates of the effectiveness of different methods. This simulation study provides an extensive comparison of methods for constructing confidence intervals and for making inferences about indirect effects with latent variables. We compared the percentile (PC) bootstrap, bias-corrected (BC) bootstrap, bias-corrected accelerated (BCa) bootstrap, likelihood-based confidence intervals (Neale & Miller, 1997), partial posterior predictive (Biesanz, Falk, and Savalei, 2010), and joint significance tests based on Wald tests or likelihood ratio tests. All models included three reflective latent variables representing the independent, dependent, and mediating variables. The design included the following fully crossed conditions: (a) sample size: 100, 200, and 500; (b) number of indicators per latent variable: 3 versus 5; (c) reliability per set of indicators: .7 versus .9; (d) and 16 different path combinations for the indirect effect (α = 0, .14, .39, or .59; and β = 0, .14, .39, or .59). Simulations were performed using a WestGrid cluster of 1680 3.06 GHz Intel Xeon processors running R and OpenMx. Results based on 1,000 replications per cell and 2,000 resamples per bootstrap method indicated that the BC and BCa bootstrap methods have inflated Type I error rates. Likelihood-based confidence intervals and the PC bootstrap emerged as methods that adequately control Type I error and have good coverage rates.
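For context on what the resampling methods above improve on, the normal-theory baseline for an indirect-effect CI is the first-order delta-method (Sobel) interval. A sketch with hypothetical path estimates and standard errors:

```python
from math import sqrt
from statistics import NormalDist

def sobel_ci(a, se_a, b, se_b, conf=0.95):
    """Normal-theory (Sobel, first-order delta method) CI for the
    indirect effect a*b of a simple mediation model."""
    ab = a * b
    se_ab = sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    return ab - z * se_ab, ab + z * se_ab

# hypothetical path estimates (a: X->M, b: M->Y) with standard errors
lo, hi = sobel_ci(0.39, 0.10, 0.59, 0.12)
print(f"indirect effect 0.23, 95% CI [{lo:.3f}, {hi:.3f}]")
```

Because the sampling distribution of a*b is skewed, this symmetric interval can perform poorly in small samples, which is precisely why the study compares bootstrap and likelihood-based alternatives.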

  6. Prognostic value of long noncoding RNA MALAT1 in digestive system malignancies

    PubMed Central

    Zhai, Hui; Li, Xiao-Mei; Maimaiti, Ailifeire; Chen, Qing-Jie; Liao, Wu; Lai, Hong-Mei; Liu, Fen; Yang, Yi-Ning

    2015-01-01

    Background: MALAT1, a newly discovered long noncoding RNA (lncRNA), has been reported to be highly expressed in many types of cancers. This meta-analysis summarizes its potential prognostic value in digestive system malignancies. Methods: A quantitative meta-analysis was performed through a systematic search in PubMed, Cochrane Library, Web of Science and Chinese National Knowledge Infrastructure (CNKI) for eligible papers on the prognostic impact of MALAT1 in digestive system malignancies from inception to Apr. 25, 2015. Pooled hazard ratios (HRs) with 95% confidence interval (95% CI) were calculated to summarize the effect. Results: Five studies were included in the study, with a total of 527 patients. A significant association was observed between MALAT1 abundance and poor overall survival (OS) of patients with digestive system malignancies, with pooled hazard ratio (HR) of 7.68 (95% confidence interval [CI]: 4.32-13.66, P<0.001). Meta sensitivity analysis suggested the reliability of our findings. No publication bias was observed. Conclusions: MALAT1 abundance may serve as a novel predictive factor for poor prognosis in patients with digestive system malignancies. PMID:26770406

  7. Pediatrics Residents' Confidence and Performance Following a Longitudinal Quality Improvement Curriculum.

    PubMed

    Courtlandt, Cheryl; Noonan, Laura; Koricke, Maureen Walsh; Zeskind, Philip Sanford; Mabus, Sarah; Feld, Leonard

    2016-02-01

    Quality improvement (QI) training is an integral part of residents' education. Understanding the educational value of a QI curriculum facilitates understanding of its impact. The purpose of this study was to evaluate the effects of a longitudinal QI curriculum on pediatrics residents' confidence and competence in the acquisition and application of QI knowledge and skills. Three successive cohorts of pediatrics residents (N = 36) participated in a longitudinal curriculum designed to increase resident confidence in QI knowledge and skills. Key components were a succession of progressive experiential projects, QI coaching, and resident team membership culminating in leadership of the project. Residents completed precurricular and postcurricular surveys and demonstrated QI competence by performance on the pediatric QI assessment scenario. Residents participating in the Center for Advancing Pediatric Excellence QI curriculum showed significant increases in pre-post measures of confidence in QI knowledge and skills. Coaching and team leadership were ranked by resident participants as having the most educational value among curriculum components. A pediatric QI assessment scenario, which correlated with resident-perceived confidence in acquisition of QI skills but not QI knowledge, is a tool available to test pediatrics residents' QI knowledge. A 3-year longitudinal, multimodal, experiential QI curriculum increased pediatrics residents' confidence in QI knowledge and skills, was feasible with faculty support, and was well-accepted by residents.

  8. Responsibility and confidence

    PubMed Central

    Austin, Zubin

    2013-01-01

    Background: Despite the changing role of the pharmacist in patient-centred practice, pharmacists anecdotally report little confidence in their clinical decision-making skills and little responsibility for their patients. Observational findings have suggested these trends within the profession, but there is a paucity of evidence to explain why. We conducted an exploratory study to identify reasons for the lack of responsibility and/or confidence in various pharmacy practice settings. Methods: Pharmacist interviews were conducted via written response, face-to-face or telephone. Seven questions were asked on the topic of responsibility and confidence as they apply to pharmacy practice and how pharmacists think these themes differ in medicine. Interview transcripts were analyzed and grouped by common theme. Quotations to support these themes are presented. Results: Twenty-nine pharmacists were asked to participate, and 18 responded (62% response rate). From these interviews, 6 themes were identified as barriers to confidence and responsibility: hierarchy of the medical system, role definitions, evolution of responsibility, ownership of decisions for confidence building, quality and consequences of mentorship and personality traits upon admission. Discussion: We identified 6 potential barriers to the development of pharmacists’ self-confidence and responsibility. These findings have practical applicability for educational research, future curriculum changes, experiential learning structure and pharmacy practice. Due to bias, the limitations of this form of exploratory research and the small sample size, the evidence should be interpreted cautiously. Conclusion: Pharmacists feel neither responsible for nor confident in their clinical decisions, for social, educational, experiential and personal reasons. Can Pharm J 2013;146:155-161. PMID:23795200

  9. Confidence as a Common Currency between Vision and Audition

    PubMed Central

    de Gardelle, Vincent; Le Corre, François; Mamassian, Pascal

    2016-01-01

    The idea of a common currency underlying our choice behaviour has played an important role in sciences of behaviour, from neurobiology to psychology and economics. However, while it has been mainly investigated in terms of values, with a common scale on which goods would be evaluated and compared, the question of a common scale for subjective probabilities and confidence in particular has received only little empirical investigation so far. The present study extends previous work addressing this question, by showing that confidence can be compared across visual and auditory decisions, with the same precision as for the comparison of two trials within the same task. We discuss the possibility that confidence could serve as a common currency when describing our choices to ourselves and to others. PMID:26808061

  10. Disconnections Between Teacher Expectations and Student Confidence in Bioethics

    NASA Astrophysics Data System (ADS)

    Hanegan, Nikki L.; Price, Laura; Peterson, Jeremy

    2008-09-01

    This study examines how student practice of scientific argumentation using socioscientific bioethics issues affects both teacher expectations of students’ general performance and student confidence in their own work. When teachers use bioethical issues in the classroom, students can gain not only biology content knowledge but also important decision-making skills. Learning bioethics through scientific argumentation gives students opportunities to express their ideas, formulate educated opinions and value others’ viewpoints. Research has shown that science teachers’ expectations of student success and knowledge directly influence student achievement and confidence levels. Our study analyzes pre-course and post-course surveys completed by students enrolled in a university-level bioethics course (n = 111) and by College of Biology and Agriculture faculty (n = 34) based on their perceptions of student confidence. Additionally, student data were collected from classroom observations and interviews. Data analysis showed a disconnect between faculty and student perceptions of confidence for both knowledge and the use of science argumentation. Student reports of their confidence levels regarding various bioethical issues were higher than faculty reports. A further disconnect appeared between students’ preferred learning styles and the faculty’s common teaching methods; students learned more by practicing scientific argumentation than by listening to traditional lectures. Students who completed a bioethics course that included practice in scientific argumentation significantly increased their confidence levels. This study suggests that professors’ expectations and teaching styles influence student confidence levels in both knowledge and scientific argumentation.

  11. Re-evaluation of link between interpregnancy interval and adverse birth outcomes: retrospective cohort study matching two intervals per mother

    PubMed Central

    Pereira, Gavin; Jacoby, Peter; de Klerk, Nicholas; Stanley, Fiona J

    2014-01-01

    Objective To re-evaluate the causal effect of interpregnancy interval on adverse birth outcomes, on the basis that previous studies relying on between mother comparisons may have inadequately adjusted for confounding by maternal risk factors. Design Retrospective cohort study using conditional logistic regression (matching two intervals per mother so each mother acts as her own control) to model the incidence of adverse birth outcomes as a function of interpregnancy interval; additional unconditional logistic regression with adjustment for confounders enabled comparison with the unmatched design of previous studies. Setting Perth, Western Australia, 1980-2010. Participants 40 441 mothers who each delivered three liveborn singleton neonates. Main outcome measures Preterm birth (<37 weeks), small for gestational age birth (<10th centile of birth weight by sex and gestational age), and low birth weight (<2500 g). Results Within mother analysis of interpregnancy intervals indicated a much weaker effect of short intervals on the odds of preterm birth and low birth weight compared with estimates generated using a traditional between mother analysis. The traditional unmatched design estimated an adjusted odds ratio for an interpregnancy interval of 0-5 months (relative to the reference category of 18-23 months) of 1.41 (95% confidence interval 1.31 to 1.51) for preterm birth, 1.26 (1.15 to 1.37) for low birth weight, and 0.98 (0.92 to 1.06) for small for gestational age birth. In comparison, the matched design showed a much weaker effect of short interpregnancy interval on preterm birth (odds ratio 1.07, 0.86 to 1.34) and low birth weight (1.03, 0.79 to 1.34), and the effect for small for gestational age birth remained small (1.08, 0.87 to 1.34). Both the unmatched and matched models estimated a high odds of small for gestational age birth and low birth weight for long interpregnancy intervals (longer than 59 months), but the estimated effect of long interpregnancy

  12. Palbociclib has no clinically relevant effect on the QTc interval in patients with advanced breast cancer.

    PubMed

    Durairaj, Chandrasekar; Ruiz-Garcia, Ana; Gauthier, Eric R; Huang, Xin; Lu, Dongrui R; Hoffman, Justin T; Finn, Richard S; Joy, Anil A; Ettl, Johannes; Rugo, Hope S; Zheng, Jenny; Wilner, Keith D; Wang, Diane D

    2018-03-01

    The aim of this study was to assess the potential effects of palbociclib in combination with letrozole on QTc. PALOMA-2, a phase 3, randomized, double-blind, placebo-controlled trial, compared palbociclib plus letrozole with placebo plus letrozole in postmenopausal women with estrogen receptor-positive, human epidermal growth factor receptor 2-negative advanced breast cancer. The study included a QTc evaluation substudy carried out as a definitive QT interval prolongation assessment for palbociclib. Time-matched triplicate ECGs were performed at 0, 2, 4, 6, and 8 h at baseline (Day 0) and on Cycle 1 Day 14. Additional ECGs were collected from all patients for safety monitoring. The QT interval was corrected for heart rate using Fridericia's correction (QTcF), Bazett's correction (QTcB), and a study-specific correction factor (QTcS). In total, 666 patients were randomized 2 : 1 to palbociclib plus letrozole or placebo plus letrozole. Of these, 125 patients were enrolled in the QTc evaluation substudy. No patients in the palbociclib plus letrozole arm of the substudy (N=77) had a maximum postbaseline QTcS or QTcF value of ≥ 480 ms, or a maximum increase from clock time-matched baseline for QTcS or QTcF values of ≥ 60 ms. The upper bounds of the one-sided 95% confidence interval for the mean change from time-matched baseline for QTcS, QTcF, and QTcB at all time points and at steady-state Cmax following repeated administration of 125 mg palbociclib were less than 10 ms. Palbociclib, when administered with letrozole at the recommended therapeutic dosing regimen, did not prolong the QT interval to a clinically relevant extent.

  13. Effects of High-Intensity Interval Training on Aerobic Capacity in Cardiac Patients: A Systematic Review with Meta-Analysis

    PubMed Central

    Xie, Bin; Yan, Xianfeng

    2017-01-01

    Purpose. The aim of this study was to compare the effects of high-intensity interval training (INTERVAL) and moderate-intensity continuous training (CONTINUOUS) on aerobic capacity in cardiac patients. Methods. A meta-analysis identified by searching the PubMed, Cochrane Library, EMBASE, and Web of Science databases from inception through December 2016 compared the effects of INTERVAL and CONTINUOUS among cardiac patients. Results. Twenty-one studies involving 736 participants with cardiac diseases were included. Compared with CONTINUOUS, INTERVAL was associated with greater improvement in peak VO2 (mean difference 1.76 mL/kg/min, 95% confidence interval 1.06 to 2.46 mL/kg/min, p < 0.001) and VO2 at the anaerobic threshold (mean difference 0.90 mL/kg/min, 95% confidence interval 0.0 to 1.72 mL/kg/min, p = 0.03). No significant difference between the INTERVAL and CONTINUOUS groups was observed in terms of peak heart rate, peak minute ventilation, VE/VCO2 slope and respiratory exchange ratio, body mass, systolic or diastolic blood pressure, triglyceride or low- or high-density lipoprotein cholesterol level, flow-mediated dilation, or left ventricular ejection fraction. Conclusions. This study showed that INTERVAL improves aerobic capacity more effectively than does CONTINUOUS in cardiac patients. Further studies with larger samples are needed to confirm our observations. PMID:28386556

  14. Effect Sizes and their Intervals: The Two-Level Repeated Measures Case

    ERIC Educational Resources Information Center

    Algina, James; Keselman, H. J.; Penfield, Randall D.

    2005-01-01

    Probability coverage for eight different confidence intervals (CIs) of measures of effect size (ES) in a two-level repeated measures design was investigated. The CIs and measures of ES differed with regard to whether they used least squares or robust estimates of central tendency and variability, whether the end critical points of the interval…

  15. We will be champions: Leaders' confidence in 'us' inspires team members' team confidence and performance.

    PubMed

    Fransen, K; Steffens, N K; Haslam, S A; Vanbeselaere, N; Vande Broek, G; Boen, F

    2016-12-01

    The present research examines the impact of leaders' confidence in their team on the team confidence and performance of their teammates. In an experiment involving newly assembled soccer teams, we manipulated the team confidence expressed by the team leader (high vs neutral vs low) and assessed team members' responses and performance as they unfolded during a competition (i.e., in a first baseline session and a second test session). Our findings pointed to team confidence contagion such that when the leader had expressed high (rather than neutral or low) team confidence, team members perceived their team to be more efficacious and were more confident in the team's ability to win. Moreover, leaders' team confidence affected individual and team performance such that teams led by a highly confident leader performed better than those led by a less confident leader. Finally, the results supported a hypothesized mediational model in showing that the effect of leaders' confidence on team members' team confidence and performance was mediated by the leader's perceived identity leadership and members' team identification. In conclusion, the findings of this experiment suggest that leaders' team confidence can enhance members' team confidence and performance by fostering members' identification with the team. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  16. Establishment of reference intervals for plasma protein electrophoresis in Indo-Pacific green sea turtles, Chelonia mydas

    PubMed Central

    Flint, Mark; Matthews, Beren J.; Limpus, Colin J.; Mills, Paul C.

    2015-01-01

    Biochemical and haematological parameters are increasingly used to diagnose disease in green sea turtles. Specific clinical pathology tools, such as plasma protein electrophoresis analysis, are now being used more frequently to improve our ability to diagnose disease in the live animal. Plasma protein reference intervals were calculated from 55 clinically healthy green sea turtles using pulsed field electrophoresis to determine pre-albumin, albumin, α-, β- and γ-globulin concentrations. The estimated reference intervals were then compared with data profiles from clinically unhealthy turtles admitted to a local wildlife hospital to assess the validity of the derived intervals and identify the clinically useful plasma protein fractions. Eighty-six per cent {19 of 22 [95% confidence interval (CI) 65–97]} of clinically unhealthy turtles had values outside the derived reference intervals, including the following: total protein [six of 22 turtles or 27% (95% CI 11–50%)], pre-albumin [two of five, 40% (95% CI 5–85%)], albumin [13 of 22, 59% (95% CI 36–79%)], total albumin [13 of 22, 59% (95% CI 36–79%)], α- [10 of 22, 45% (95% CI 24–68%)], β- [two of 10, 20% (95% CI 3–56%)], γ- [one of 10, 10% (95% CI 0.3–45%)] and β–γ-globulin [one of 12, 8% (95% CI 0.2–38%)] and total globulin [five of 22, 23% (8–45%)]. Plasma protein electrophoresis shows promise as an accurate adjunct tool to identify a disease state in marine turtles. This study presents the first reference interval for plasma protein electrophoresis in the Indo-Pacific green sea turtle. PMID:27293722

  17. An appraisal of statistical procedures used in derivation of reference intervals.

    PubMed

    Ichihara, Kiyoshi; Boyd, James C

    2010-11-01

    When conducting studies to derive reference intervals (RIs), various statistical procedures are commonly applied at each step, from the planning stages to final computation of RIs. Determination of the necessary sample size is an important consideration, and evaluation of at least 400 individuals in each subgroup has been recommended to establish reliable common RIs in multicenter studies. Multiple regression analysis allows identification of the most important factors contributing to variation in test results, while accounting for possible confounding relationships among these factors. Of the various approaches proposed for judging the necessity of partitioning reference values, nested analysis of variance (ANOVA) is the likely method of choice owing to its ability to handle multiple groups and adjust for multiple factors. Box-Cox power transformation often has been used to transform data to a Gaussian distribution for parametric computation of RIs. However, this transformation occasionally fails. Therefore, the non-parametric method, based on determination of the 2.5th and 97.5th percentiles after sorting the data, has been recommended for general use. The performance of the Box-Cox transformation can be improved by introducing an additional parameter representing the origin of transformation. In simulations, the confidence intervals (CIs) of reference limits (RLs) calculated by the parametric method were narrower than those calculated by the non-parametric approach. However, the margin of difference was rather small owing to additional variability in parametrically determined RLs introduced by estimation of parameters for the Box-Cox transformation. The parametric calculation method may have an advantage over the non-parametric method in allowing identification and exclusion of extreme values during RI computation.
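
    A minimal sketch of the recommended non-parametric approach, with bootstrap CIs attached to the reference limits (the lognormal data below are synthetic, not the simulations described in the abstract):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic analyte results from 400 reference individuals (the minimum
# recommended above for reliable common RIs in multicenter studies).
values = rng.lognormal(mean=4.0, sigma=0.25, size=400)

# Non-parametric reference interval: the 2.5th and 97.5th percentiles
# of the sorted data serve as the lower/upper reference limits (RLs).
lower_rl, upper_rl = np.percentile(values, [2.5, 97.5])

# Bootstrap the data to attach CIs to each RL, one simple way to express
# the uncertainty of non-parametrically determined limits.
boot = np.array([np.percentile(rng.choice(values, size=values.size),
                               [2.5, 97.5]) for _ in range(1000)])
lower_rl_ci = np.percentile(boot[:, 0], [2.5, 97.5])
upper_rl_ci = np.percentile(boot[:, 1], [2.5, 97.5])
print(f"RI: {lower_rl:.1f}-{upper_rl:.1f}")
```

    The parametric alternative would instead Box-Cox-transform the data, compute mean ± 1.96 SD on the transformed scale, and back-transform the limits.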

  18. Raising Confident Kids

    MedlinePlus

    KidsHealth / For Parents / Raising Confident Kids

  19. Method and system for assigning a confidence metric for automated determination of optic disc location

    DOEpatents

    Karnowski, Thomas P [Knoxville, TN]; Tobin, Kenneth W., Jr.; Muthusamy Govindasamy, Vijaya Priya [Knoxville, TN]; Chaum, Edward [Memphis, TN]

    2012-07-10

    A method for assigning a confidence metric for automated determination of optic disc location that includes analyzing a retinal image and determining at least two sets of coordinates locating an optic disc in the retinal image. The sets of coordinates can be determined using first and second image analysis techniques that are different from one another. An accuracy parameter can be calculated and compared to a primary risk cut-off value. A high confidence level can be assigned to the retinal image if the accuracy parameter is less than the primary risk cut-off value, and a low confidence level can be assigned to the retinal image if the accuracy parameter is greater than the primary risk cut-off value. The primary risk cut-off value is selected to represent an acceptable risk that the automated technique will misdiagnose a disease having retinal manifestations.

  20. Flood control project selection using an interval type-2 entropy weight with interval type-2 fuzzy TOPSIS

    NASA Astrophysics Data System (ADS)

    Zamri, Nurnadiah; Abdullah, Lazim

    2014-06-01

    Flood control project selection is a complex issue which takes economic, social, environmental and technical attributes into account. Selection of the best flood control project requires the consideration of conflicting quantitative and qualitative evaluation criteria. When decision-makers' judgments are under uncertainty, it is relatively difficult for them to provide exact numerical values. The interval type-2 fuzzy set (IT2FS) is a strong tool which can deal with the uncertainty of subjective, incomplete, and vague information. Besides, it helps to solve situations where the information about criteria weights for alternatives is completely unknown. Therefore, this paper incorporates the interval type-2 entropy concept into the weighting process of interval type-2 fuzzy TOPSIS. This entropy weight can effectively balance the influence of uncertainty factors when evaluating attributes. Then, a modified ranking value is proposed in line with the interval type-2 entropy weight. Quantitative and qualitative factors normally linked with flood control projects are considered for ranking. Data in the form of interval type-2 linguistic variables were collected from three authorised personnel of three Malaysian Government agencies. The study covers the whole of Malaysia. The analysis shows that the diversion scheme yielded the highest closeness coefficient, 0.4807. A ranking can be drawn from the magnitude of the closeness coefficients; the diversion scheme ranked first among the five alternatives.
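
    For orientation, the closeness coefficient at the core of TOPSIS can be sketched in the ordinary crisp case (the paper's method uses interval type-2 fuzzy numbers and entropy-derived weights; the decision matrix and weights below are hypothetical):

```python
import numpy as np

# Crisp TOPSIS sketch: three alternatives scored on three benefit criteria.
X = np.array([[7.0, 5.0, 8.0],    # alternative 1 (e.g. a diversion scheme)
              [6.0, 6.0, 5.0],    # alternative 2
              [4.0, 8.0, 6.0]])   # alternative 3
w = np.array([0.5, 0.2, 0.3])     # criteria weights (hypothetical)

V = w * X / np.linalg.norm(X, axis=0)     # weighted normalized matrix
pos, neg = V.max(axis=0), V.min(axis=0)   # ideal and anti-ideal solutions
d_pos = np.linalg.norm(V - pos, axis=1)   # distance to the ideal
d_neg = np.linalg.norm(V - neg, axis=1)   # distance to the anti-ideal
cc = d_neg / (d_pos + d_neg)              # closeness coefficient in [0, 1]
print(cc)  # alternatives are ranked by descending closeness coefficient
```

    In the interval type-2 fuzzy version, each matrix entry is a fuzzy number, the weights come from the entropy measure, and the distances are computed over the fuzzy memberships, but the ranking step is the same.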

  1. Prognostic value of changes in galectin-3 levels over time in patients with heart failure: data from CORONA and COACH.

    PubMed

    van der Velde, A Rogier; Gullestad, Lars; Ueland, Thor; Aukrust, Pål; Guo, Yu; Adourian, Aram; Muntendam, Pieter; van Veldhuisen, Dirk J; de Boer, Rudolf A

    2013-03-01

    In several cross-sectional analyses, circulating baseline levels of galectin-3, a protein involved in myocardial fibrosis and remodeling, have been associated with increased risk for morbidity and mortality in patients with heart failure (HF). The importance and clinical use of repeated measurements of galectin-3 have not yet been reported. Plasma galectin-3 was measured at baseline and at 3 months in patients enrolled in the Controlled Rosuvastatin Multinational Trial in Heart Failure (CORONA) trial (n=1329), and at baseline and at 6 months in patients enrolled in the Coordinating Study Evaluating Outcomes of Advising and Counseling in Heart Failure (COACH) trial (n=324). Patient results were analyzed by categorical and percentage changes in galectin-3 level. A threshold value of 17.8 ng/mL or 15% change from baseline was used to categorize patients. Increasing galectin-3 levels over time, from a low to high galectin-3 category, were associated with significantly more HF hospitalization and mortality compared with stable or decreasing galectin-3 levels (hazard ratio in CORONA, 1.60; 95% confidence interval, 1.13-2.25; P=0.007; hazard ratio in COACH, 2.38; 95% confidence interval, 1.02-5.55; P=0.046). In addition, patients whose galectin-3 increased by >15% between measurements had a 50% higher relative hazard of adverse event than those whose galectin-3 stayed within ±15% of the baseline value, independent of age, sex, diabetes mellitus, left ventricular ejection fraction, renal function, medication (β-blocker, angiotensin converting enzyme inhibitor, and angiotensin receptor blocker), and N-terminal probrain natriuretic peptide (hazard ratio in CORONA, 1.50; 95% confidence interval, 1.17-1.92; P=0.001). The impact of changing galectin-3 levels on other secondary end points was comparable. In 2 large cohorts of patients with chronic and acute decompensated HF, repeated measurements of galectin-3 level provided important and significant prognostic value in identifying

  2. Intraclass Correlation Coefficients in Hierarchical Design Studies with Discrete Response Variables: A Note on a Direct Interval Estimation Procedure

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2015-01-01

    A latent variable modeling procedure that can be used to evaluate intraclass correlation coefficients in two-level settings with discrete response variables is discussed. The approach is readily applied when the purpose is to furnish confidence intervals at prespecified confidence levels for these coefficients in setups with binary or ordinal…

  3. A Test by Any Other Name: P Values, Bayes Factors, and Statistical Inference.

    PubMed

    Stern, Hal S

    2016-01-01

    Procedures used for statistical inference are receiving increased scrutiny as the scientific community studies the factors associated with ensuring reproducible research. This note addresses recent negative attention directed at p values, the relationship of confidence intervals and tests, and the role of Bayesian inference and Bayes factors, with an eye toward better understanding these different strategies for statistical inference. We argue that researchers and data analysts too often resort to binary decisions (e.g., whether to reject or accept the null hypothesis) in settings where this may not be required.

  4. Myocardial perfusion magnetic resonance imaging using sliding-window conjugate-gradient highly constrained back-projection reconstruction for detection of coronary artery disease.

    PubMed

    Ma, Heng; Yang, Jun; Liu, Jing; Ge, Lan; An, Jing; Tang, Qing; Li, Han; Zhang, Yu; Chen, David; Wang, Yong; Liu, Jiabin; Liang, Zhigang; Lin, Kai; Jin, Lixin; Bi, Xiaoming; Li, Kuncheng; Li, Debiao

    2012-04-15

    Myocardial perfusion magnetic resonance imaging (MRI) with sliding-window conjugate-gradient highly constrained back-projection reconstruction (SW-CG-HYPR) allows whole left ventricular coverage, improved temporal and spatial resolution and signal/noise ratio, and reduced cardiac motion-related image artifacts. The accuracy of this technique for detecting coronary artery disease (CAD) has not been determined in a large number of patients. We prospectively evaluated the diagnostic performance of myocardial perfusion MRI with SW-CG-HYPR in patients with suspected CAD. A total of 50 consecutive patients who were scheduled for coronary angiography with suspected CAD underwent myocardial perfusion MRI with SW-CG-HYPR at 3.0 T. The perfusion defects were interpreted qualitatively by 2 blinded observers and were correlated with x-ray angiographic stenoses ≥50%. The prevalence of CAD was 56%. In the per-patient analysis, the sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of SW-CG-HYPR was 96% (95% confidence interval 82% to 100%), 82% (95% confidence interval 60% to 95%), 87% (95% confidence interval 70% to 96%), 95% (95% confidence interval 74% to 100%), and 90% (95% confidence interval 82% to 98%), respectively. In the per-vessel analysis, the corresponding values were 98% (95% confidence interval 91% to 100%), 89% (95% confidence interval 80% to 94%), 86% (95% confidence interval 76% to 93%), 99% (95% confidence interval 93% to 100%), and 93% (95% confidence interval 89% to 97%), respectively. In conclusion, myocardial perfusion MRI using SW-CG-HYPR allows whole left ventricular coverage and high resolution and has high diagnostic accuracy in patients with suspected CAD. Copyright © 2012 Elsevier Inc. All rights reserved.

  5. Dynamic visual noise reduces confidence in short-term memory for visual information.

    PubMed

    Kemps, Eva; Andrade, Jackie

    2012-05-01

    Previous research has shown effects of the visual interference technique, dynamic visual noise (DVN), on visual imagery, but not on visual short-term memory, unless retention of precise visual detail is required. This study tested the prediction that DVN does also affect retention of gross visual information, specifically by reducing confidence. Participants performed a matrix pattern memory task with three retention interval interference conditions (DVN, static visual noise and no interference control) that varied from trial to trial. At recall, participants indicated whether or not they were sure of their responses. As in previous research, DVN did not impair recall accuracy or latency on the task, but it did reduce recall confidence relative to static visual noise and no interference. We conclude that DVN does distort visual representations in short-term memory, but standard coarse-grained recall measures are insensitive to these distortions.

  6. The diagnostic value of narrow-band imaging for early and invasive lung cancer: a meta-analysis.

    PubMed

    Zhu, Juanjuan; Li, Wei; Zhou, Jihong; Chen, Yuqing; Zhao, Chenling; Zhang, Ting; Peng, Wenjia; Wang, Xiaojing

    2017-07-01

    This study aimed to compare the ability of narrow-band imaging to detect early and invasive lung cancer with that of conventional pathological analysis and white-light bronchoscopy. We searched the PubMed, EMBASE, Sinomed, and China National Knowledge Infrastructure databases for relevant studies. Meta-DiSc software was used to perform data analysis, meta-regression analysis, sensitivity analysis, and heterogeneity testing, and STATA software was used to determine if publication bias was present, as well as to calculate the relative risks for the sensitivity and specificity of narrow-band imaging vs those of white-light bronchoscopy for the detection of early and invasive lung cancer. A random-effects model was used to assess the diagnostic efficacy of the above modalities in cases in which a high degree of between-study heterogeneity was noted with respect to their diagnostic efficacies. The database search identified six studies including 578 patients. The pooled sensitivity and specificity of narrow-band imaging were 86% (95% confidence interval: 83-88%) and 81% (95% confidence interval: 77-84%), respectively, and the pooled sensitivity and specificity of white-light bronchoscopy were 70% (95% confidence interval: 66-74%) and 66% (95% confidence interval: 62-70%), respectively. The pooled relative risks for the sensitivity and specificity of narrow-band imaging vs the sensitivity and specificity of white-light bronchoscopy for the detection of early and invasive lung cancer were 1.33 (95% confidence interval: 1.07-1.67) and 1.09 (95% confidence interval: 0.84-1.42), respectively, and sensitivity analysis showed that narrow-band imaging exhibited good diagnostic efficacy with respect to detecting early and invasive lung cancer and that the results of the study were stable. Narrow-band imaging was superior to white light bronchoscopy with respect to detecting early and invasive lung cancer; however, the specificities of the two modalities did not differ

  7. Hematologic and serum chemistry reference intervals for free-ranging lions (Panthera leo).

    PubMed

    Maas, Miriam; Keet, Dewald F; Nielen, Mirjam

    2013-08-01

    Hematologic and serum chemistry values are used by veterinarians and wildlife researchers to assess health status and to identify abnormally high or low levels of a particular blood parameter in a target species. For free-ranging lions (Panthera leo), information about these values is scarce. In this study, 7 hematologic and 11 serum biochemistry values were evaluated from 485 lions from the Kruger National Park, South Africa. Significant differences between sexes and between sub-adult (≤36 months) and adult (>36 months) lions were found for most of the blood parameters, and separate reference intervals were made for those values. The obtained reference intervals include the means of the various blood parameter values measured in captive lions, except for alkaline phosphatase in the subadult group. These reference intervals can be utilized for free-ranging lions, and may also serve as reference intervals for captive lions. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Is Teacher Value Added a Matter of Scale? The Practical Consequences of Treating an Ordinal Scale as Interval for Estimation of Teacher Effects

    ERIC Educational Resources Information Center

    Soland, James

    2017-01-01

    Research shows that assuming a test scale is equal-interval can be problematic, especially when the assessment is being used to achieve a policy aim like evaluating growth over time. However, little research considers whether teacher value added is sensitive to the underlying test scale, and in particular whether treating an ordinal scale as…

  9. A method for classification of multisource data using interval-valued probabilities and its application to HIRIS data

    NASA Technical Reports Server (NTRS)

    Kim, H.; Swain, P. H.

    1991-01-01

    A method of classifying multisource data in remote sensing is presented. The proposed method considers each data source as an information source providing a body of evidence, represents statistical evidence by interval-valued probabilities, and uses Dempster's rule to integrate information based on multiple data sources. The method is applied to the problems of ground-cover classification of multispectral data combined with digital terrain data such as elevation, slope, and aspect. The method is then applied to simulated 201-band High Resolution Imaging Spectrometer (HIRIS) data by dividing the dimensionally huge data source into smaller, more manageable pieces based on global statistical correlation information. It produces higher classification accuracy than the Maximum Likelihood (ML) classification method when the Hughes phenomenon is apparent.
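
    The evidence-combination step rests on Dempster's rule. A minimal sketch of the rule for ordinary (point-valued) mass functions over a small frame of discernment is given below, with a hypothetical two-class ground-cover example; the paper's interval-valued extension is not reproduced:

```python
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two basic probability assignments with Dempster's rule.

    Masses on intersecting focal elements are multiplied and summed;
    mass falling on the empty set (conflict) is renormalised away.
    """
    combined: dict = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical two-class ground-cover frame: {forest, water}.
F, W = frozenset({"forest"}), frozenset({"water"})
FW = F | W                       # the whole frame (ignorance)
m1 = {F: 0.6, FW: 0.4}           # source 1: leans forest
m2 = {F: 0.3, W: 0.5, FW: 0.2}   # source 2: leans water
print(dempster_combine(m1, m2))
```

    Note how the combined assignment concentrates mass on "forest" while retaining some committed ignorance on the whole frame.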

  10. Five-Year Risk of Interval-Invasive Second Breast Cancer

    PubMed Central

    Buist, Diana S. M.; Houssami, Nehmat; Dowling, Emily C.; Halpern, Elkan F.; Gazelle, G. Scott; Lehman, Constance D.; Henderson, Louise M.; Hubbard, Rebecca A.

    2015-01-01

    Background: Earlier detection of second breast cancers after primary breast cancer (PBC) treatment improves survival, yet mammography is less accurate in women with prior breast cancer. The purpose of this study was to examine women presenting clinically with second breast cancers after negative surveillance mammography (interval cancers), and to estimate the five-year risk of interval-invasive second cancers for women with varying risk profiles. Methods: We evaluated a prospective cohort of 15 114 women with 47 717 surveillance mammograms diagnosed with stage 0-II unilateral PBC from 1996 through 2008 at facilities in the Breast Cancer Surveillance Consortium. We used discrete time survival models to estimate the association between odds of an interval-invasive second breast cancer and candidate predictors, including demographic, PBC, and imaging characteristics. All statistical tests were two-sided. Results: The cumulative incidence of second breast cancers after five years was 54.4 per 1000 women, with 325 surveillance-detected and 138 interval-invasive second breast cancers. The five-year risk of interval-invasive second cancer for women with referent category characteristics was 0.60%. For women with the most and least favorable profiles, the five-year risk ranged from 0.07% to 6.11%. Multivariable modeling identified grade II PBC (odds ratio [OR] = 1.95, 95% confidence interval [CI] = 1.15 to 3.31), treatment with lumpectomy without radiation (OR = 3.27, 95% CI = 1.91 to 5.62), interval PBC presentation (OR = 2.01, 95% CI 1.28 to 3.16), and heterogeneously dense breasts on mammography (OR = 1.54, 95% CI = 1.01 to 2.36) as independent predictors of interval-invasive second breast cancers. Conclusions: PBC diagnosis and treatment characteristics contribute to variation in subsequent-interval second breast cancer risk. Consideration of these factors may be useful in developing tailored post-treatment imaging surveillance plans. PMID:25904721

  11. Confidence to cook vegetables and the buying habits of Australian households.

    PubMed

    Winkler, Elisabeth; Turrell, Gavin

    2009-10-01

    Cooking skills are emphasized in nutrition promotion but their distribution among population subgroups and relationship to dietary behavior is researched by few population-based studies. This study examined the relationships between confidence to cook, sociodemographic characteristics, and household vegetable purchasing. This cross-sectional study of 426 randomly selected households in Brisbane, Australia, used a validated questionnaire to assess household vegetable purchasing habits and the confidence to cook of the person who most often prepares food for these households. The mutually adjusted odds ratios (ORs) of lacking confidence to cook were assessed across a range of demographic subgroups using multiple logistic regression models. Similarly, mutually adjusted mean vegetable purchasing scores were calculated using multiple linear regression for different population groups and for respondents with varying confidence levels. Lacking confidence to cook using a variety of techniques was more common among respondents with less education (OR 3.30; 95% confidence interval [CI] 1.01 to 10.75) and was less common among respondents who lived with minors (OR 0.22; 95% CI 0.09 to 0.53) and other adults (OR 0.43; 95% CI 0.24 to 0.78). Lack of confidence to prepare vegetables was associated with being male (OR 2.25; 95% CI 1.24 to 4.08), low education (OR 6.60; 95% CI 2.08 to 20.91), lower household income (OR 2.98; 95% CI 1.02 to 8.72) and living with other adults (OR 0.53; 95% CI 0.29 to 0.98). Households bought a greater variety of vegetables on a regular basis when the main chef was confident to prepare them (difference: 18.60; 95% CI 14.66 to 22.54), older (difference: 8.69; 95% CI 4.92 to 12.47), lived with at least one other adult (difference: 5.47; 95% CI 2.82 to 8.12) or at least one minor (difference: 2.86; 95% CI 0.17 to 5.55). Cooking skills may contribute to socioeconomic dietary differences, and may be a useful strategy for promoting fruit and vegetable

  12. IBM system/360 assembly language interval arithmetic software

    NASA Technical Reports Server (NTRS)

    Phillips, E. J.

    1972-01-01

    Computer software designed to perform interval arithmetic is described. An interval is defined as the set of all real numbers between two given numbers, including or excluding one or both endpoints. Interval arithmetic consists of the various elementary arithmetic operations defined on the set of all intervals, such as interval addition, subtraction, union, etc. One of the main applications of interval arithmetic is in the area of error analysis of computer calculations. For example, it has been used successfully to compute bounds on rounding errors in the solution of linear algebraic systems, error bounds in numerical solutions of ordinary differential equations, as well as integral equations and boundary value problems. The described software enables users to implement algorithms of the type described in the references efficiently on the IBM System/360.
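
    The elementary operations the abstract lists can be sketched in a few lines. This is a modern illustration in Python, not the System/360 assembly implementation described, and it ignores the outward rounding of endpoints that a production interval library must perform:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other: "Interval") -> "Interval":
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other: "Interval") -> "Interval":
        # Subtract the *upper* end of other from our lower end, and vice versa.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other: "Interval") -> "Interval":
        # The product range is spanned by the four endpoint products.
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

    def union(self, other: "Interval") -> "Interval":
        # Interval hull: smallest interval containing both operands.
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

x = Interval(1.0, 2.0)
y = Interval(-1.0, 3.0)
print(x + y, x - y, x * y, x.union(y))
```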

  13. Age-dependent biochemical quantities: an approach for calculating reference intervals.

    PubMed

    Bjerner, J

    2007-01-01

    A parametric method is often preferred when calculating reference intervals for biochemical quantities, as non-parametric methods are less efficient and require more observations/study subjects. Parametric methods are complicated, however, because of three commonly encountered features. First, biochemical quantities seldom display a Gaussian distribution, so either a transformation procedure must be applied to obtain such a distribution or a more complex distribution has to be used. Second, biochemical quantities are often dependent on a continuous covariate, exemplified by rising serum concentrations of MUC1 (episialin, CA15.3) with increasing age. Third, outliers often exert substantial influence on parametric estimations and therefore need to be excluded before calculations are made. The International Federation of Clinical Chemistry (IFCC) currently recommends that confidence intervals be calculated for the reference centiles obtained. However, common statistical packages allowing for the adjustment of a continuous covariate do not make this calculation. In the method described in the current study, Tukey's fence is used to eliminate outliers, and a two-stage transformation (modulus-exponential-normal) is applied to render the distributions Gaussian. Fractional polynomials are employed to model functions for the mean and standard deviation dependent on a covariate, and the model is selected by maximum likelihood. Confidence intervals are calculated for the fitted centiles by combining parameter estimation and sampling uncertainties. Finally, the elimination of outliers is made dependent on covariates by reiteration. Though a good knowledge of statistical theory is needed when performing the analysis, the current method is rewarding because the results are of practical use in patient care.
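
    Most of the pipeline above (the modulus-exponential-normal transformation, fractional polynomials, and covariate-dependent reiteration) is too involved for a short example, but two of its simplest ingredients, Tukey's fence for outlier removal and a parametric central-95% reference interval, can be sketched as follows on synthetic data:

```python
import statistics

def tukey_fence(values: list, k: float = 1.5) -> list:
    """Drop points outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's fence)."""
    q = statistics.quantiles(values, n=4)
    q1, q3 = q[0], q[2]
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if lo <= v <= hi]

def reference_interval(values: list) -> tuple:
    """Central 95% parametric reference interval, mean +/- 1.96 SD,
    computed after Tukey-fence outlier removal."""
    kept = tukey_fence(values)
    m = statistics.mean(kept)
    s = statistics.stdev(kept)
    return m - 1.96 * s, m + 1.96 * s

# Synthetic analyte values 90..110 plus one gross outlier.
values = [float(v) for v in range(90, 111)] + [500.0]
print(reference_interval(values))
```

    Without the fence, the single outlier at 500 would inflate the standard deviation and widen the interval far beyond the bulk of the data.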

  14. Effects of Training and Feedback on Accuracy of Predicting Rectosigmoid Neoplastic Lesions and Selection of Surveillance Intervals by Endoscopists Performing Optical Diagnosis of Diminutive Polyps.

    PubMed

    Vleugels, Jasper L A; Dijkgraaf, Marcel G W; Hazewinkel, Yark; Wanders, Linda K; Fockens, Paul; Dekker, Evelien

    2018-05-01

    Real-time differentiation of diminutive polyps (1-5 mm) during endoscopy could replace histopathology analysis. According to guidelines, implementation of optical diagnosis into routine practice would require it to identify rectosigmoid neoplastic lesions with a negative predictive value (NPV) of more than 90%, using histologic findings as a reference, and agreement with histology-based surveillance intervals for more than 90% of cases. We performed a prospective study with 39 endoscopists accredited to perform colonoscopies on participants with positive results from fecal immunochemical tests in the Bowel Cancer Screening Program at 13 centers in the Netherlands. Endoscopists were trained in optical diagnosis using a validated module (Workgroup serrAted polypS and Polyposis). After meeting predefined performance thresholds in the training program, the endoscopists started a 1-year program (continuation phase) in which they performed narrow band imaging analyses during colonoscopies of participants in the screening program and predicted histological findings with confidence levels. The endoscopists were randomly assigned to groups that received feedback or no feedback on the accuracy of their predictions. Primary outcome measures were endoscopists' abilities to identify rectosigmoid neoplastic lesions (using histology as a reference) with NPVs of 90% or more, and selecting surveillance intervals that agreed with those determined by histology for at least 90% of cases. Of 39 endoscopists initially trained, 27 (69%) completed the training program. During the continuation phase, these 27 endoscopists performed 3144 colonoscopies in which 4504 diminutive polyps were removed. The endoscopists identified neoplastic lesions with a pooled NPV of 90.8% (95% confidence interval 88.6-92.6); their proposed surveillance intervals agreed with those determined by histologic analysis for 95.4% of cases (95% confidence interval 94.0-96.6). 
Findings did not differ between the groups that did and did not receive feedback.

  15. Change in Breast Cancer Screening Intervals Since the 2009 USPSTF Guideline.

    PubMed

    Wernli, Karen J; Arao, Robert F; Hubbard, Rebecca A; Sprague, Brian L; Alford-Teaster, Jennifer; Haas, Jennifer S; Henderson, Louise; Hill, Deidre; Lee, Christoph I; Tosteson, Anna N A; Onega, Tracy

    2017-08-01

    In 2009, the U.S. Preventive Services Task Force (USPSTF) recommended biennial mammography for women aged 50-74 years and shared decision-making for women aged 40-49 years for breast cancer screening. We evaluated changes in mammography screening interval after the 2009 recommendations. We conducted a prospective cohort study of women aged 40-74 years who received 821,052 screening mammograms between 2006 and 2012 using data from the Breast Cancer Surveillance Consortium. We compared changes in screening intervals and stratified intervals based on whether the mammogram at the end of the interval occurred before or after the 2009 recommendation. Differences in mean interval length by woman-level characteristics were compared using linear regression. The mean interval (in months) minimally decreased after the 2009 USPSTF recommendations. Among women aged 40-49 years, the mean interval decreased from 17.2 months to 17.1 months (difference -0.16%, 95% confidence interval [CI] -0.30 to -0.01). Similar small reductions were seen for most age groups. The largest change in interval length in the post-USPSTF period was declines among women with a first-degree family history of breast cancer (difference -0.68%, 95% CI -0.82 to -0.54) or a 5-year breast cancer risk ≥2.5% (difference -0.58%, 95% CI -0.73 to -0.44). The 2009 USPSTF recommendation did not lengthen the average mammography interval among women routinely participating in mammography screening. Future studies should evaluate whether breast cancer screening intervals lengthen toward biennial intervals following new national 2016 breast cancer screening recommendations, particularly among women less than 50 years of age.

  16. Interval-valued intuitionistic fuzzy matrix games based on Archimedean t-conorm and t-norm

    NASA Astrophysics Data System (ADS)

    Xia, Meimei

    2018-04-01

    Fuzzy game theory has been applied in many decision-making problems. The matrix game with interval-valued intuitionistic fuzzy numbers (IVIFNs) is investigated based on Archimedean t-conorm and t-norm. The existing matrix games with IVIFNs are all based on algebraic t-conorm and t-norm, which are special cases of Archimedean t-conorm and t-norm. In this paper, the intuitionistic fuzzy aggregation operators based on Archimedean t-conorm and t-norm are employed to aggregate the payoffs of players. To derive the solution of the matrix game with IVIFNs, several mathematical programming models are developed based on Archimedean t-conorm and t-norm. The proposed models can be transformed into a pair of primal-dual linear programming models, based on which the solution of the matrix game with IVIFNs is obtained. It is proved that the theorems valid in the existing matrix game with IVIFNs remain true when the general aggregation operator is used in the proposed matrix game with IVIFNs. The proposed method is an extension of the existing ones and can provide more choices for players. An example is given to illustrate the validity and applicability of the proposed method.
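
    The abstract reduces the game solution to a pair of primal-dual linear programs. For a crisp (non-fuzzy) zero-sum matrix game the same LP reduction looks as follows; this is a textbook sketch assuming SciPy is available, and it does not implement the paper's interval-valued intuitionistic fuzzy extension:

```python
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(A):
    """Value and optimal mixed strategy for the row player of a
    zero-sum matrix game, via linear programming."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    # Variables: x (row strategy, length m) and v (game value).
    c = np.zeros(m + 1)
    c[-1] = -1.0                                  # maximise v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])     # v - x @ A[:, j] <= 0
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), [[0.0]]])  # probabilities sum to 1
    b_eq = [1.0]
    bounds = [(0, None)] * m + [(None, None)]     # x >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1], res.x[:m]

# Matching pennies: the value is 0 and the optimal strategy is (0.5, 0.5).
v, x = matrix_game_value([[1.0, -1.0], [-1.0, 1.0]])
```

    The dual of this program yields the column player's optimal strategy, which is the primal-dual pairing the abstract refers to.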

  17. Mass media and heterogeneous bounds of confidence in continuous opinion dynamics

    NASA Astrophysics Data System (ADS)

    Pineda, M.; Buendía, G. M.

    2015-02-01

    This work focuses on the effects of an external mass media on continuous opinion dynamics with heterogeneous bounds of confidence. We modified the original Deffuant et al. and Hegselmann-Krause models to incorporate both an external mass media and a heterogeneous distribution of confidence levels. We analysed two cases: one where only two bounds of confidence are taken into account, and another where each individual of the system has her/his own characteristic level of confidence. We found that, in the absence of mass media, diversity of bounds of confidence can improve the capacity of the systems to reach consensus. We show that the persuasion capacity of the external message is optimal for intermediate levels of heterogeneity. Our simulations also show the existence, for certain parameter values, of a counter-intuitive effect in which the persuasion capacity of the mass media decreases if the mass media intensity is too large. We discuss similarities and differences between the two heterogeneous versions of these continuous opinion dynamics models under the influence of mass media.

  18. Prognostic Value of Residual Urine Volume, GFR by 24-hour Urine Collection, and eGFR in Patients Receiving Dialysis.

    PubMed

    Lee, Mi Jung; Park, Jung Tak; Park, Kyoung Sook; Kwon, Young Eun; Oh, Hyung Jung; Yoo, Tae-Hyun; Kim, Yong-Lim; Kim, Yon Su; Yang, Chul Woo; Kim, Nam-Ho; Kang, Shin-Wook; Han, Seung Hyeok

    2017-03-07

    Residual kidney function can be assessed by simply measuring urine volume, calculating GFR using 24-hour urine collection, or estimating GFR using the proposed equation (eGFR). We aimed to investigate the relative prognostic value of these residual kidney function parameters in patients on dialysis. Using the database from a nationwide prospective cohort study, we compared differential implications of the residual kidney function indices in 1946 patients on dialysis at 36 dialysis centers in Korea between August 1, 2008 and December 31, 2014. Residual GFR calculated using 24-hour urine collection was determined by an average of renal urea and creatinine clearance on the basis of 24-hour urine collection. eGFR-urea-creatinine and eGFR-β2-microglobulin were calculated from the equations using serum urea and creatinine, and serum β2-microglobulin, respectively. The primary outcome was all-cause death. During a mean follow-up of 42 months, 385 (19.8%) patients died. In multivariable Cox analyses, residual urine volume (hazard ratio, 0.96 per 0.1-L/d higher volume; 95% confidence interval, 0.94 to 0.98) and GFR calculated using 24-hour urine collection (hazard ratio, 0.98; 95% confidence interval, 0.95 to 0.99) were independently associated with all-cause mortality. In 1640 patients who had eGFR-β2-microglobulin data, eGFR-β2-microglobulin (hazard ratio, 0.98; 95% confidence interval, 0.96 to 0.99) was also significantly associated with all-cause mortality as well as residual urine volume (hazard ratio, 0.96 per 0.1-L/d higher volume; 95% confidence interval, 0.94 to 0.98) and GFR calculated using 24-hour urine collection (hazard ratio, 0.97; 95% confidence interval, 0.95 to 0.99). When each residual kidney function index was added to the base model, only urine volume improved the predictability for all-cause mortality (net reclassification index = 0.11, P = 0.01; integrated discrimination improvement = 0.01, P = 0.01). Higher residual urine volume was significantly

  19. Predictive value of stroke discharge diagnoses in the Danish National Patient Register.

    PubMed

    Lühdorf, Pernille; Overvad, Kim; Schmidt, Erik B; Johnsen, Søren P; Bach, Flemming W

    2017-08-01

    To determine the positive predictive values for stroke discharge diagnoses, including subarachnoid haemorrhage, intracerebral haemorrhage and cerebral infarction, in the Danish National Patient Register. Participants in the Danish cohort study Diet, Cancer and Health with a stroke discharge diagnosis in the National Patient Register between 1993 and 2009 were identified, and their medical records were retrieved for validation of the diagnoses. A total of 3326 records of possible cases of stroke were reviewed. The overall positive predictive value for stroke was 69.3% (95% confidence interval (CI) 67.8-70.9%). The predictive values differed according to hospital characteristics, with the highest predictive value of 87.8% (95% CI 85.5-90.1%) found in departments of neurology and the lowest predictive value of 43.0% (95% CI 37.6-48.5%) found in outpatient clinics. The overall stroke diagnosis in the Danish National Patient Register had a limited predictive value. We therefore recommend caution in the use of non-validated register data for research on stroke. The possibility of optimising the predictive values with more advanced algorithms should be considered.

  20. The effects of apremilast on the QTc interval in healthy male volunteers: a formal, thorough QT study

    PubMed Central

    Palmisano, Maria; Wu, Anfan; Assaf, Mahmoud; Liu, Liangang; Park, C. Hyung; Savant, Ishani; Liu, Yong; Zhou, Simon

    2016-01-01

    Objective: This study was conducted to evaluate the effect of apremilast and its major metabolites on the placebo-corrected change-from-baseline QTc interval of an electrocardiogram (ECG). Materials and methods: Healthy male subjects received each of 4 treatments in a randomized, crossover manner. In the 2 active treatment periods, apremilast 30 mg (therapeutic exposure) or 50 mg (supratherapeutic exposure) was administered twice daily for 9 doses. A placebo control was used to ensure double-blind treatment of apremilast, and an open-label, single dose of moxifloxacin 400 mg was administered as a positive control. ECGs were measured using 24-hour digital Holter monitoring. Results: The two-sided 98% confidence intervals (CIs) for the ΔΔQTcI of moxifloxacin were entirely above 5 ms at 2-4 hours postdose. For both apremilast doses, the least-squares mean ΔΔQTcI was < 1 ms at all time points, and the upper limit of the two-sided 90% CIs was < 10 ms. There were no QT/QTc values > 480 ms or changes from baseline > 60 ms. Exploratory evaluation of pharmacokinetic/pharmacodynamic data showed no trend between the changes in QT/QTc interval and the concentration of apremilast or its major metabolites M12 and M14. Conclusions: Apremilast did not prolong the QT interval and appears to be safe and well tolerated up to doses of 50 mg twice daily. PMID:27285466

  1. STORC safety initiative: a multicentre survey on preparedness & confidence in obstetric emergencies.

    PubMed

    Guise, Jeanne-Marie; Segel, Sally Y; Larison, Kristine; Jump, Sarah M; Constable, Marion; Li, Hong; Osterweil, Patricia; Zimmer, Dieter

    2010-12-01

    Patient safety is a national and international priority. The purpose of this study was to understand clinicians' perceptions of teamwork during obstetric emergencies in clinical practice, to examine factors associated with confidence in responding to obstetric emergencies and to evaluate perceptions about the value of team training to improve preparedness. An anonymous survey was administered to all clinical staff members who respond to obstetric emergencies in seven Oregon hospitals from June 2006 to August 2006. 614 clinical staff (74.5%) responded. While over 90% felt confident that the appropriate clinical staff would respond to emergencies, more than half reported that other clinical staff members were confused about their role during emergencies. Over 84% were confident that emergency drills or simulation-based team training would improve performance. Clinical staff who respond to obstetric emergencies in their practice reported feeling confident that the qualified personnel would respond to an emergency; however, they were less confident that the responders would perform well as a team. They reported that simulation and team training may improve their preparedness and confidence in responding to emergencies.

  2. Increasing accuracy in the interval analysis by the improved format of interval extension based on the first order Taylor series

    NASA Astrophysics Data System (ADS)

    Li, Yi; Xu, Yan Long

    2018-05-01

    When the dependence of the function on the uncertain variables is non-monotonic over an interval, the interval of the function obtained by the classic interval extension based on the first-order Taylor series will exhibit significant errors. In order to reduce these errors, an improved format of the interval extension with the first-order Taylor series is developed here, considering the monotonicity of the function. Two typical mathematical examples are given to illustrate this methodology. The vibration of a beam with lumped masses is studied to demonstrate the usefulness of this method in practical applications; the only input data required are the function value at the central point of the interval, the sensitivity, and the deviation of the function. The results of the above examples show that the interval of the function given by the method developed in this paper is more accurate than the one obtained by the classic method.
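
    The distinction the abstract draws can be made concrete with a toy function. The sketch below contrasts the classic first-order Taylor (midpoint) extension with an endpoint-based bound used when the function is monotonic over the interval; the monotonicity check here is a simplification (derivative signs at the endpoints only), not the paper's method:

```python
def taylor_extension(f, df, lo, hi):
    """Classic first-order Taylor interval extension around the midpoint.
    An approximation, not a rigorous enclosure: it uses the derivative
    at the midpoint only."""
    c = 0.5 * (lo + hi)
    r = 0.5 * (hi - lo)
    half = abs(df(c)) * r
    return f(c) - half, f(c) + half

def monotonic_extension(f, df, lo, hi):
    """When f is monotonic on [lo, hi], the exact range is spanned by
    the endpoint values; otherwise fall back to the Taylor form."""
    if df(lo) * df(hi) > 0:  # same sign: assume monotone on the interval
        a, b = f(lo), f(hi)
        return (a, b) if a <= b else (b, a)
    return taylor_extension(f, df, lo, hi)

# f(x) = x**2 on [1, 2] is monotone increasing, with exact range [1, 4].
f = lambda x: x * x
df = lambda x: 2 * x
print(taylor_extension(f, df, 1.0, 2.0))    # midpoint estimate, off at both ends
print(monotonic_extension(f, df, 1.0, 2.0)) # exact for this monotone case
```

    For this example the midpoint form returns (0.75, 3.75), missing the true range at both ends, while the monotonic form recovers (1.0, 4.0) exactly, which is the kind of error reduction the abstract describes.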

  3. Acculturation and Linguistic Factors on International Students' Self-Esteem and Language Confidence

    ERIC Educational Resources Information Center

    Lopez, Iris Y.; Bui, Ngoc H.

    2014-01-01

    Acculturation and linguistic factors were examined as predictors of self-esteem and language confidence among 91 international college students. The majority of participants were Asian (64.8%), female (59.3%), and graduate students (76.9%). Assimilative (adopting host cultural values) and integrative (blending both host and home cultural values)…

  4. The Relationship Between Eyewitness Confidence and Identification Accuracy: A New Synthesis.

    PubMed

    Wixted, John T; Wells, Gary L

    2017-05-01

    The U.S. legal system increasingly accepts the idea that the confidence expressed by an eyewitness who identified a suspect from a lineup provides little information as to the accuracy of that identification. There was a time when this pessimistic assessment was entirely reasonable because of the questionable eyewitness-identification procedures that police commonly employed. However, after more than 30 years of eyewitness-identification research, our understanding of how to properly conduct a lineup has evolved considerably, and the time seems ripe to ask how eyewitness confidence informs accuracy under more pristine testing conditions (e.g., initial, uncontaminated memory tests using fair lineups, with no lineup administrator influence, and with an immediate confidence statement). Under those conditions, mock-crime studies and police department field studies have consistently shown that, for adults, (a) confidence and accuracy are strongly related and (b) high-confidence suspect identifications are remarkably accurate. However, when certain non-pristine testing conditions prevail (e.g., when unfair lineups are used), the accuracy of even a high-confidence suspect ID is seriously compromised. Unfortunately, some jurisdictions have not yet made reforms that would create pristine testing conditions and, hence, our conclusions about the reliability of high-confidence identifications cannot yet be applied to those jurisdictions. However, understanding the information value of eyewitness confidence under pristine testing conditions can help the criminal justice system to simultaneously achieve both of its main objectives: to exonerate the innocent (by better appreciating that initial, low-confidence suspect identifications are error prone) and to convict the guilty (by better appreciating that initial, high-confidence suspect identifications are surprisingly accurate under proper testing conditions).

  5. The idiosyncratic nature of confidence

    PubMed Central

    Navajas, Joaquin; Hindocha, Chandni; Foda, Hebah; Keramati, Mehdi; Latham, Peter E; Bahrami, Bahador

    2017-01-01

    Confidence is the ‘feeling of knowing’ that accompanies decision making. Bayesian theory proposes that confidence is a function solely of the perceived probability of being correct. Empirical research has suggested, however, that different individuals may perform different computations to estimate confidence from uncertain evidence. To test this hypothesis, we collected confidence reports in a task where subjects made categorical decisions about the mean of a sequence. We found that for most individuals, confidence did indeed reflect the perceived probability of being correct. However, in approximately half of them, confidence also reflected a different probabilistic quantity: the perceived uncertainty in the estimated variable. We found that the contribution of both quantities was stable over weeks. We also observed that the influence of the perceived probability of being correct was stable across two tasks, one perceptual and one cognitive. Overall, our findings provide a computational interpretation of individual differences in human confidence. PMID:29152591

  6. [Effects of group psychological counseling on self-confidence and social adaptation of burn patients].

    PubMed

    Dang, Rui; Wang, Yishen; Li, Na; He, Ting; Shi, Mengna; Liang, Yanyan; Zhu, Chan; Zhou, Yongbo; Qi, Zongshi; Hu, Dahai

    2014-12-01

    relationship, health condition, and general condition were (87 ± 3), (47.8 ± 3.6), (49 ± 3), and (239 ± 10) points in the trial group, which were significantly higher than those in the control group [(79 ± 4), (38.3 ± 5.6), (46 ± 4), and (231 ± 9) points, with t values of -8.635, -8.125, -3.352, and -3.609, respectively, P values below 0.01]. After treatment, the scores of physical function, psychological function, social relationship, health condition, and general condition in the trial group were significantly higher than those before treatment (with t values from -33.282 to -19.515, P values below 0.05). The scores of physical function, psychological function, health condition, and general condition in the control group after treatment were significantly higher than those before treatment (with t values from -27.137 to -17.790, P values below 0.05). Group psychological counseling combined with ordinary rehabilitation training gives rise to significant effects on self-confidence level and social adaptation for burn patients.

  7. Development of the probability of return of spontaneous circulation in intervals without chest compressions during out-of-hospital cardiac arrest: an observational study.

    PubMed

    Gundersen, Kenneth; Kvaløy, Jan Terje; Kramer-Johansen, Jo; Steen, Petter Andreas; Eftestøl, Trygve

    2009-02-06

    One of the factors that limits survival from out-of-hospital cardiac arrest is the interruption of chest compressions. During ventricular fibrillation and tachycardia the electrocardiogram reflects the probability of return of spontaneous circulation associated with defibrillation. We have used this in the current study to quantify in detail the effects of interrupting chest compressions. From an electrocardiogram database we identified all intervals without chest compressions that followed an interval with compressions, and where the patients had ventricular fibrillation or tachycardia. By calculating the mean-slope (a predictor of the return of spontaneous circulation) of the electrocardiogram for each 2-second window, and using a linear mixed-effects statistical model, we quantified the decline of mean-slope with time. Further, a mapping from mean-slope to probability of return of spontaneous circulation was obtained from a second dataset, and using this we were able to estimate the expected development of the probability of return of spontaneous circulation for cases at different levels. From 911 intervals without chest compressions, 5138 analysis windows were identified. The results show that cases with probability of return of spontaneous circulation values of 0.35, 0.1 and 0.05, 3 seconds into an interval, will on average have values of 0.26 (0.24-0.29), 0.077 (0.070-0.085) and 0.040 (0.036-0.045), respectively, 27 seconds into the interval (95% confidence intervals in parentheses). During pre-shock pauses in chest compressions the mean probability of return of spontaneous circulation decreases in a steady manner for cases at all initial levels. Regardless of initial level, there is a relative decrease in the probability of return of spontaneous circulation of about 23% from 3 to 27 seconds into such a pause.

  8. Calibration with confidence: a principled method for panel assessment.

    PubMed

    MacKay, R S; Kenna, R; Low, R J; Parker, S

    2017-02-01

    Frequently, a set of objects has to be evaluated by a panel of assessors, but not every object is assessed by every assessor. A problem facing such panels is how to take into account different standards among panel members and varying levels of confidence in their scores. Here, a mathematically based algorithm is developed to calibrate the scores of such assessors, addressing both of these issues. The algorithm is based on the connectivity of the graph of assessors and objects evaluated, incorporating declared confidences as weights on its edges. If the graph is sufficiently well connected, relative standards can be inferred by comparing how assessors rate objects they assess in common, weighted by the levels of confidence of each assessment. By removing these biases, 'true' values are inferred for all the objects. Reliability estimates for the resulting values are obtained. The algorithm is tested in two case studies: one by computer simulation and another based on realistic evaluation data. The process is compared to the simple averaging procedure in widespread use, and to Fisher's additive incomplete block analysis. It is anticipated that the algorithm will prove useful in a wide variety of situations such as evaluation of the quality of research submitted to national assessment exercises; appraisal of grant proposals submitted to funding panels; ranking of job applicants; and judgement of performances on degree courses wherein candidates can choose from lists of options.
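
    The additive-bias idea behind this kind of calibration (each score decomposes into an object value plus an assessor standard, weighted by declared confidence) can be sketched as a weighted least-squares fit. This is an illustration of the model class, assuming NumPy, not the authors' algorithm or its reliability estimates:

```python
import numpy as np

def calibrate(scores):
    """Infer object values and assessor biases from sparsely scored pairs.

    `scores` maps (assessor, object) -> (score, confidence weight).
    Fits score ~ value[object] + bias[assessor] by weighted least squares;
    an extra row pins the assessor biases to sum to zero, removing the
    shift ambiguity between values and biases.
    """
    assessors = sorted({a for a, _ in scores})
    objects = sorted({o for _, o in scores})
    n_o, n_a = len(objects), len(assessors)
    o_idx = {o: i for i, o in enumerate(objects)}
    a_idx = {a: i for i, a in enumerate(assessors)}

    rows, rhs, w = [], [], []
    for (a, o), (score, conf) in scores.items():
        r = np.zeros(n_o + n_a)
        r[o_idx[o]] = 1.0
        r[n_o + a_idx[a]] = 1.0
        rows.append(r)
        rhs.append(score)
        w.append(conf)
    gauge = np.zeros(n_o + n_a)   # constraint row: biases sum to zero
    gauge[n_o:] = 1.0
    rows.append(gauge)
    rhs.append(0.0)
    w.append(1.0)

    A = np.array(rows)
    sw = np.sqrt(np.array(w))
    sol, *_ = np.linalg.lstsq(A * sw[:, None], np.array(rhs) * sw, rcond=None)
    return (dict(zip(objects, sol[:n_o])),
            dict(zip(assessors, sol[n_o:])))

# Hypothetical panel: assessor "B" scores about one point higher than "A",
# and object "z" is seen by "B" only.
scores = {("A", "x"): (5.0, 1.0), ("A", "y"): (7.0, 1.0),
          ("B", "x"): (6.0, 1.0), ("B", "y"): (8.0, 1.0),
          ("B", "z"): (4.0, 1.0)}
values, biases = calibrate(scores)
```

    On this toy data the fit infers that B sits 0.5 above the panel standard and A 0.5 below, and debiases the value of "z" accordingly, which simple per-object averaging cannot do. This recovery depends on the assessor-object graph being connected, the same connectivity condition the abstract emphasises.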

  9. Calibration with confidence: a principled method for panel assessment

    PubMed Central

    MacKay, R. S.; Low, R. J.; Parker, S.

    2017-01-01

    Frequently, a set of objects has to be evaluated by a panel of assessors, but not every object is assessed by every assessor. A problem facing such panels is how to take into account different standards among panel members and varying levels of confidence in their scores. Here, a mathematically based algorithm is developed to calibrate the scores of such assessors, addressing both of these issues. The algorithm is based on the connectivity of the graph of assessors and objects evaluated, incorporating declared confidences as weights on its edges. If the graph is sufficiently well connected, relative standards can be inferred by comparing how assessors rate objects they assess in common, weighted by the levels of confidence of each assessment. By removing these biases, ‘true’ values are inferred for all the objects. Reliability estimates for the resulting values are obtained. The algorithm is tested in two case studies: one by computer simulation and another based on realistic evaluation data. The process is compared to the simple averaging procedure in widespread use, and to Fisher's additive incomplete block analysis. It is anticipated that the algorithm will prove useful in a wide variety of situations such as evaluation of the quality of research submitted to national assessment exercises; appraisal of grant proposals submitted to funding panels; ranking of job applicants; and judgement of performances on degree courses wherein candidates can choose from lists of options. PMID:28386432

  10. The value of FATS expression in predicting sensitivity to radiotherapy in breast cancer.

    PubMed

    Zhang, Jun; Wu, Nan; Zhang, Tiemei; Sun, Tao; Su, Yi; Zhao, Jing; Mu, Kun; Jin, Zhao; Gao, Ming; Liu, Juntian; Gu, Lin

    2017-06-13

    The fragile-site associated tumor suppressor (FATS) is a newly identified tumor suppressor involved in radiation-induced tumorigenesis. The purpose of this study was to characterize FATS expression in breast cancers in relation to radiotherapy benefit, patient characteristics, and prognosis. The expression of FATS mRNA was silent or downregulated in 95.2% of breast cancer samples compared with paired normal controls (P < .0001). Negative status of FATS was correlated with higher nuclear grade (P = .01) and shorter disease-free survival (DFS) of breast cancer (P = .036). In a multivariate analysis, FATS expression showed favorable prognostic value for DFS (odds ratio, 0.532; 95% confidence interval, 0.299 to 0.947; P = .032). Furthermore, improved survival time was seen in FATS-positive patients receiving radiotherapy (P = .006). The results of multivariate analysis revealed independent prognostic value of FATS expression in predicting longer DFS (odds ratio, 0.377; 95% confidence interval, 0.176 to 0.809; P = .012) for patients receiving adjuvant radiotherapy. In support of this, knocking down FATS expression in breast cancer cell lines reduced radiosensitivity: FATS-positive cells were significantly more sensitized to irradiation than FATS knock-down cells. Tissue samples from 156 breast cancer patients and 42 controls in a tumor bank were studied. FATS gene expression was evaluated using quantitative reverse transcription polymerase chain reaction (qRT-PCR). FATS function was examined in breast cancer cell lines using siRNA knock-downs and colony-forming assays after irradiation. FATS status is a biomarker in breast cancer to identify individuals likely to benefit from radiotherapy.

  11. The value of FATS expression in predicting sensitivity to radiotherapy in breast cancer

    PubMed Central

    Zhang, Tiemei; Sun, Tao; Su, Yi; Zhao, Jing; Mu, Kun; Jin, Zhao; Gao, Ming; Liu, Juntian; Gu, Lin

    2017-01-01

    Purpose The fragile-site associated tumor suppressor (FATS) is a newly identified tumor suppressor involved in radiation-induced tumorigenesis. The purpose of this study was to characterize FATS expression in breast cancers in relation to radiotherapy benefit, patient characteristics, and prognosis. Results The expression of FATS mRNA was silent or downregulated in 95.2% of breast cancer samples compared with paired normal controls (P < .0001). Negative status of FATS was correlated with higher nuclear grade (P = .01) and shorter disease-free survival (DFS) of breast cancer (P = .036). In a multivariate analysis, FATS expression showed favorable prognostic value for DFS (odds ratio, 0.532; 95% confidence interval, 0.299 to 0.947; P = .032). Furthermore, improved survival time was seen in FATS-positive patients receiving radiotherapy (P = .006). The results of multivariate analysis revealed independent prognostic value of FATS expression in predicting longer DFS (odds ratio, 0.377; 95% confidence interval, 0.176 to 0.809; P = .012) for patients receiving adjuvant radiotherapy. In support of this, knocking down FATS expression in breast cancer cell lines reduced radiosensitivity: FATS-positive cells were significantly more sensitized to irradiation than FATS knock-down cells. Materials and Methods Tissue samples from 156 breast cancer patients and 42 controls in a tumor bank were studied. FATS gene expression was evaluated using quantitative reverse transcription polymerase chain reaction (qRT-PCR). FATS function was examined in breast cancer cell lines using siRNA knock-downs and colony-forming assays after irradiation. Conclusions FATS status is a biomarker in breast cancer to identify individuals likely to benefit from radiotherapy. PMID:28402275

  12. Indoor radon regulation using tabulated values of temporal radon variation.

    PubMed

    Tsapalov, Andrey; Kovler, Konstantin

    2018-03-01

    Mass measurements of indoor radon concentrations have been conducted for about 30 years. In most countries, a national reference/action/limit level is adopted, limiting the annual average indoor radon (AAIR) concentration. However, until now there has been no single, generally accepted international protocol for determining the AAIR with a known confidence interval based on measurements of different durations. Obviously, as the duration of measurements increases, the uncertainty of the AAIR estimate decreases. The lack of information about the confidence interval of the determined AAIR level does not allow correct comparison with the radon reference level, which greatly complicates the development of an effective indoor radon measurement protocol and strategy. The paper proposes a general principle of indoor radon regulation based on simple criteria widely used in metrology, and introduces a new parameter, the coefficient of temporal radon variation K_V(t), which depends on the measurement duration and determines the uncertainty of the AAIR. An algorithm for determining K_V(t) based on the results of annual continuous radon monitoring in experimental rooms is proposed. The monitoring covered indoor radon activity concentrations and the equilibrium equivalent concentration (EEC) of radon progeny, and was conducted in 10 selected experimental rooms located in 7 buildings, mainly in the Moscow region (Russia), from 2006 to 2013. The experimental and tabulated values of K_V(t), as well as the values of the coefficient of temporal EEC variation depending on the mode and duration of the measurements, were obtained. Recommendations to improve the efficiency and reliability of indoor radon regulation are given. The importance of taking geological factors into account is discussed. The representativeness of the results of the study is estimated and an approach for their verification is proposed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Predictive value of N-terminal pro-brain natriuretic peptide in severe sepsis and septic shock.

    PubMed

    Varpula, Marjut; Pulkki, Kari; Karlsson, Sari; Ruokonen, Esko; Pettilä, Ville

    2007-05-01

    The aim of this study was to evaluate the predictive value of N-terminal pro-brain natriuretic peptide (NT-proBNP) for mortality in a large, unselected patient population with severe sepsis and septic shock. This was a prospective observational cohort study of the incidence and prognosis of sepsis in 24 intensive care units in Finland (the FINNSEPSIS study), comprising 254 patients with severe sepsis or septic shock. After informed consent, blood samples for NT-proBNP analysis were drawn on the day of admission and 72 hrs thereafter. Patients' demographic data were collected, and intensive care unit and hospital mortality and basic hemodynamic and laboratory data were recorded daily. NT-proBNP levels at admission were significantly higher in hospital nonsurvivors (median, 7908 pg/mL) compared with survivors (median, 3479 pg/mL; p = .002), and the difference remained after 72 hrs (p = .002). The receiver operating characteristic curves of admission and 72-hr NT-proBNP levels for hospital mortality resulted in area under the curve values of 0.631 (95% confidence interval, 0.549-0.712; p = .002) and 0.648 (95% confidence interval, 0.554-0.741; p = .002), respectively. In logistic regression analyses, NT-proBNP values at 72 hrs after inclusion and the Simplified Acute Physiology Score for the first 24 hrs were independent predictors of hospital mortality. Pulmonary artery occlusion pressure (p < .001), plasma creatinine clearance (p = .001), platelet count (p = .03), and positive blood culture (p = .04) had an independent effect on first-day NT-proBNP values, whereas after 72 hrs only plasma creatinine clearance (p < .001) was significant in linear regression analysis. NT-proBNP values are frequently increased in severe sepsis and septic shock, and are significantly higher in nonsurvivors than in survivors. NT-proBNP on day 3 in the intensive care unit is an independent prognostic marker of mortality in severe sepsis.
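The area under the ROC curve reported for NT-proBNP has a direct probabilistic reading: it equals the probability that a randomly chosen nonsurvivor has a higher marker value than a randomly chosen survivor (ties counted as one half). A minimal sketch of that computation; the marker values below are invented for illustration, not the study's data:

```python
# AUC via pairwise comparison (the Mann-Whitney interpretation of the
# ROC area): fraction of case/control pairs where the case is higher.
def auc(cases, controls):
    """P(case value > control value), ties counted as 1/2."""
    wins = sum((c > k) + 0.5 * (c == k) for c in cases for k in controls)
    return wins / (len(cases) * len(controls))

nonsurvivors = [7908, 6500, 9100, 4100]        # hypothetical NT-proBNP, pg/mL
survivors = [3479, 5200, 2800, 4100, 1900]     # hypothetical NT-proBNP, pg/mL

print(round(auc(nonsurvivors, survivors), 3))
```

An AUC of about 0.65, as reported in the abstract, means this probability only modestly exceeds the 0.5 of an uninformative marker.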

  14. Bone turnover marker reference intervals in young females.

    PubMed

    Callegari, Emma T; Gorelik, Alexandra; Garland, Suzanne M; Chiang, Cherie Y; Wark, John D

    2017-07-01

    Background The use of bone turnover markers in clinical practice and research in younger people is limited by the lack of normative data and understanding of common causes of variation in bone turnover marker values in this demographic. To appropriately interpret bone turnover markers, robust reference intervals specific to age, development and sex are necessary. This study aimed to determine reference intervals of bone turnover markers in females aged 16-25 years participating in the Safe-D study. Methods Participants were recruited through the social networking site Facebook and were asked to complete an extensive online questionnaire and attend a site visit. Participants were tested for serum carboxy-terminal cross-linking telopeptide of type 1 collagen and total procollagen type 1 N-propeptide using the Roche Elecsys automated analyser. Reference intervals were determined using the 2.5th to 97.5th percentiles of normalized bone turnover marker values. Results Of 406 participants, 149 were excluded due to medical conditions or medication use (except hormonal contraception) which may affect bone metabolism. In the remaining 257 participants, the reference interval was 230-1000 ng/L for serum carboxy-terminal cross-linking telopeptide of type 1 collagen and 27-131 µg/L for procollagen type 1 N-propeptide. Both marker concentrations were inversely correlated with age and oral contraceptive pill use. Therefore, intervals specific to these variables were calculated. Conclusions We defined robust reference intervals for cross-linking telopeptide of type 1 collagen and procollagen type 1 N-propeptide in young females, grouped by age and contraceptive pill use. We examined the bone turnover markers' relationships with several lifestyle, clinical and demographic factors. Our normative intervals should aid interpretation of bone turnover markers in young females, particularly those aged 16 to 19 years, for whom reference intervals are currently provisional.
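The percentile method named in the Methods (2.5th to 97.5th percentiles of the marker values) can be sketched in a few lines. The simulated CTX-like values and their lognormal shape are assumptions for illustration only, not the study's measurements:

```python
import numpy as np

# Nonparametric reference interval: the central 95% of marker values,
# i.e. the 2.5th to 97.5th percentiles. Data are simulated.
rng = np.random.default_rng(0)
ctx = rng.lognormal(mean=6.2, sigma=0.35, size=257)  # hypothetical CTX, ng/L

lo, hi = np.percentile(ctx, [2.5, 97.5])
print(f"reference interval: {lo:.0f}-{hi:.0f} ng/L")
```

With 257 observations, as in the study's final sample, the two percentile bounds rest on only a handful of extreme values each, which is why such intervals are usually reported with caution for small subgroups.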

  15. Nationwide Multicenter Reference Interval Study for 28 Common Biochemical Analytes in China.

    PubMed

    Xia, Liangyu; Chen, Ming; Liu, Min; Tao, Zhihua; Li, Shijun; Wang, Liang; Cheng, Xinqi; Qin, Xuzhen; Han, Jianhua; Li, Pengchang; Hou, Li'an; Yu, Songlin; Ichihara, Kiyoshi; Qiu, Ling

    2016-03-01

    A nationwide multicenter study was conducted in China to explore sources of variation of reference values and establish reference intervals for 28 common biochemical analytes, as a part of the International Federation of Clinical Chemistry and Laboratory Medicine, Committee on Reference Intervals and Decision Limits (IFCC/C-RIDL) global study on reference values. A total of 3148 apparently healthy volunteers were recruited in 6 cities covering a wide area in China. Blood samples were tested in 2 central laboratories using Beckman Coulter AU5800 chemistry analyzers. Certified reference materials and a value-assigned serum panel were used for standardization of test results. Multiple regression analysis was performed to explore sources of variation. The need for partition of reference intervals was evaluated based on 3-level nested ANOVA. After secondary exclusion using the latent abnormal values exclusion method, reference intervals were derived by a parametric method using the modified Box-Cox formula. Test results of 20 analytes were made traceable to reference measurement procedures. By the ANOVA, significant sex-related and age-related differences were each observed in 12 analytes. A small regional difference was observed in the results for albumin, glucose, and sodium. Multiple regression analysis revealed BMI-related changes in the results of 9 analytes for men and 6 for women. Reference intervals of 28 analytes were computed, with 17 analytes partitioned by sex and/or age. In conclusion, reference intervals of 28 common chemistry analytes applicable to the Chinese Han population were established using the latest methodology. Reference intervals of 20 analytes traceable to reference measurement procedures can be used as common reference intervals, whereas the others can be used as assay system-specific reference intervals in China.
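The parametric route used here, unlike the percentile method, fits a distribution: transform the data toward normality, take the central 95% of the fitted normal, and back-transform. As a rough sketch, a plain log transform stands in for the study's modified Box-Cox formula, and the data are simulated:

```python
import numpy as np

# Parametric reference interval sketch: log-transform (a special case
# of Box-Cox), take mean +/- 1.96 SD on the transformed scale, then
# back-transform. Simulated ALT-like values; not the study's data.
rng = np.random.default_rng(1)
alt = rng.lognormal(mean=3.0, sigma=0.4, size=3148)  # hypothetical ALT, U/L

z = np.log(alt)
lo = np.exp(z.mean() - 1.96 * z.std())
hi = np.exp(z.mean() + 1.96 * z.std())
print(f"parametric reference interval: {lo:.1f}-{hi:.1f} U/L")
```

The appeal of the parametric method with large multicenter samples is that the bounds borrow strength from the whole distribution rather than resting on the few most extreme observations.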

  16. Nationwide Multicenter Reference Interval Study for 28 Common Biochemical Analytes in China

    PubMed Central

    Xia, Liangyu; Chen, Ming; Liu, Min; Tao, Zhihua; Li, Shijun; Wang, Liang; Cheng, Xinqi; Qin, Xuzhen; Han, Jianhua; Li, Pengchang; Hou, Li’an; Yu, Songlin; Ichihara, Kiyoshi; Qiu, Ling

    2016-01-01

    A nationwide multicenter study was conducted in China to explore sources of variation of reference values and establish reference intervals for 28 common biochemical analytes, as a part of the International Federation of Clinical Chemistry and Laboratory Medicine, Committee on Reference Intervals and Decision Limits (IFCC/C-RIDL) global study on reference values. A total of 3148 apparently healthy volunteers were recruited in 6 cities covering a wide area in China. Blood samples were tested in 2 central laboratories using Beckman Coulter AU5800 chemistry analyzers. Certified reference materials and a value-assigned serum panel were used for standardization of test results. Multiple regression analysis was performed to explore sources of variation. The need for partition of reference intervals was evaluated based on 3-level nested ANOVA. After secondary exclusion using the latent abnormal values exclusion method, reference intervals were derived by a parametric method using the modified Box–Cox formula. Test results of 20 analytes were made traceable to reference measurement procedures. By the ANOVA, significant sex-related and age-related differences were each observed in 12 analytes. A small regional difference was observed in the results for albumin, glucose, and sodium. Multiple regression analysis revealed BMI-related changes in the results of 9 analytes for men and 6 for women. Reference intervals of 28 analytes were computed, with 17 analytes partitioned by sex and/or age. In conclusion, reference intervals of 28 common chemistry analytes applicable to the Chinese Han population were established using the latest methodology. Reference intervals of 20 analytes traceable to reference measurement procedures can be used as common reference intervals, whereas the others can be used as assay system-specific reference intervals in China. PMID:26945390

  17. Interval forecasting of cyber-attacks on industrial control systems

    NASA Astrophysics Data System (ADS)

    Ivanyo, Y. M.; Krakovsky, Y. M.; Luzgin, A. N.

    2018-03-01

    At present, cyber-security issues of industrial control systems occupy one of the key niches in a state system of planning and management. Functional disruption of these systems via cyber-attacks may lead to emergencies related to loss of life, environmental disasters, major financial and economic damage, or disrupted activities of cities and settlements. There is therefore an urgent need to develop protection methods against cyber-attacks. This paper studied the results of cyber-attack interval forecasting with a pre-set intensity level of cyber-attacks. Interval forecasting predicts which of two predetermined intervals will contain a future value of the indicator; probability estimates of these two events are used. For interval forecasting, a probabilistic neural network with a dynamically updated smoothing parameter was used. The dividing bound between the intervals was determined by a calculation method based on statistical characteristics of the indicator. The number of cyber-attacks per hour received through a honeypot from March to September 2013 for the group ‘zeppo-norcal’ was selected as the indicator.

  18. Probability and Confidence Trade-space (PACT) Evaluation: Accounting for Uncertainty in Sparing Assessments

    NASA Technical Reports Server (NTRS)

    Anderson, Leif; Box, Neil; Carter, Katrina; DiFilippo, Denise; Harrington, Sean; Jackson, David; Lutomski, Michael

    2012-01-01

    There are two general shortcomings to the current annual sparing assessment: (1) the vehicle functions are currently assessed according to confidence targets, which can be misleading, being overly conservative or optimistic; and (2) the current confidence levels are arbitrarily determined and do not account for epistemic uncertainty (lack of knowledge) in the ORU failure rate. Two major categories of uncertainty impact sparing assessment: (a) aleatory uncertainty, the natural variability in the distribution of actual failures around a Mean Time Between Failures (MTBF); and (b) epistemic uncertainty, the lack of knowledge about the true value of an Orbital Replacement Unit's (ORU) MTBF. We propose an approach to revise confidence targets and account for both categories of uncertainty, an approach we call Probability and Confidence Trade-space (PACT) evaluation.
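The two uncertainty layers can be illustrated with a toy sparing calculation: aleatory variability enters through a Poisson failure count given a known MTBF, and epistemic uncertainty through averaging over a belief distribution for the MTBF itself. All numbers and the uniform MTBF belief below are invented, not NASA's model:

```python
import math
import random

# Aleatory layer: given an MTBF, failures over a mission are modeled
# as Poisson; the chance that k spares suffice is a Poisson CDF.
def p_enough_spares(mtbf, hours, spares):
    lam = hours / mtbf  # expected number of failures
    return sum(math.exp(-lam) * lam**k / math.factorial(k)
               for k in range(spares + 1))

random.seed(0)
hours, spares = 8760.0, 2  # one year of operation, two spares on hand

# Epistemic layer: the true MTBF is unknown, so average the sufficiency
# probability over draws from an assumed belief distribution.
draws = [random.uniform(5000.0, 20000.0) for _ in range(10_000)]
p = sum(p_enough_spares(m, hours, spares) for m in draws) / len(draws)
print(f"probability spares suffice, averaged over MTBF belief: {p:.3f}")
```

Reporting only the Poisson result at a single point-estimate MTBF hides the epistemic layer, which is precisely the shortcoming the abstract attributes to fixed confidence targets.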

  19. Symbol interval optimization for molecular communication with drift.

    PubMed

    Kim, Na-Rae; Eckford, Andrew W; Chae, Chan-Byoung

    2014-09-01

    In this paper, we propose a symbol interval optimization algorithm for molecular communication with drift. Proper symbol intervals are important in practical communication systems since information needs to be sent as fast as possible with low error rates. There is a trade-off, however, between symbol intervals and inter-symbol interference (ISI) from Brownian motion. Thus, we find proper symbol interval values considering the ISI inside two kinds of blood vessels, and also suggest an ISI-free system for strong drift models. Finally, an isomer-based molecule shift keying (IMoSK) is applied to calculate achievable data transmission rates (achievable rates, hereafter). Normalized achievable rates are also obtained and compared in one-symbol ISI and ISI-free systems.

  20. High-precision optical measurement of the 2S hyperfine interval in atomic hydrogen.

    PubMed

    Kolachevsky, N; Fischer, M; Karshenboim, S G; Hänsch, T W

    2004-01-23

    We have applied an optical method to the measurement of the 2S hyperfine interval in atomic hydrogen. The interval has been measured by means of two-photon spectroscopy of the 1S-2S transition on a hydrogen atomic beam shielded from external magnetic fields. The measured value of the 2S hyperfine interval is equal to 177 556 860(16) Hz and represents the most precise measurement of this interval to date. The theoretical evaluation of the specific combination of 1S and 2S hyperfine intervals D21 is in fair agreement (within 1.4 sigma) with the value for D21 deduced from our measurement.
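The "specific combination" D21 referred to above is, in the hydrogen hyperfine-structure literature, conventionally the following combination of the two intervals (stated here as background, not taken from the abstract itself):

```latex
% D_{21} largely cancels the nuclear-structure contribution that is
% common to the 1S and 2S hyperfine splittings, leaving a quantity
% that QED can predict precisely:
D_{21} = 8\, f_{\mathrm{HFS}}(2S) - f_{\mathrm{HFS}}(1S)
```

Because the poorly known proton-structure effects scale out of D21, comparing the measured 2S interval with the very precisely known 1S interval through this combination is what enables the 1.4 sigma theory comparison quoted above.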

  1. Fast and confident: postdicting eyewitness identification accuracy in a field study.

    PubMed

    Sauerland, Melanie; Sporer, Siegfried L

    2009-03-01

    The combined postdictive value of postdecision confidence, decision time, and Remember-Know-Familiar (RKF) judgments as markers of identification accuracy was evaluated with 10 targets and 720 participants. In a pedestrian area, passers-by were asked for directions. Identifications were made from target-absent or target-present lineups. Fast (optimum time boundary at 6 seconds) and confident (optimum confidence boundary at 90%) witnesses were highly accurate, whereas slow and nonconfident witnesses were highly inaccurate. Although this combination of postdictors was clearly superior to using either postdictor by itself, these combinations refer only to a subsample of choosers. Know answers were associated with higher identification performance than Familiar answers, with no difference between Remember and Know answers. The results of participants' post hoc decision time estimates paralleled those with measured decision times. To explore decision strategies of nonchoosers, three subgroups were formed according to the reasons given for rejecting the lineup. Nonchoosers indicating that the target had simply been absent made faster and more confident decisions than nonchoosers stating lack of confidence or lack of memory. There were no significant differences in identification performance across nonchooser groups. (PsycINFO Database Record (c) 2009 APA, all rights reserved).

  2. Sources of sport confidence, imagery type and performance among competitive athletes: the mediating role of sports confidence.

    PubMed

    Levy, A R; Perry, J; Nicholls, A R; Larkin, D; Davies, J

    2015-01-01

    This study explored the mediating role of sport confidence upon (1) the relationship between sources of sport confidence and performance and (2) the relationship between imagery type and performance. Participants were 157 competitive athletes who completed state measures of confidence level/sources, imagery type and performance within one hour after competition. Among the current sample, confirmatory factor analysis revealed appropriate support for the nine-factor SSCQ and the five-factor SIQ. Mediational analysis revealed that sport confidence mediated the relationship between the achievement source of confidence and performance. In addition, both cognitive and motivational imagery types were found to be important sources of confidence, as sport confidence mediated the imagery type-performance relationship. Findings indicated that athletes who derive confidence from their own achievements and report multiple images more frequently are likely to benefit from enhanced levels of state sport confidence and subsequent performance.

  3. Five-year risk of interval-invasive second breast cancer.

    PubMed

    Lee, Janie M; Buist, Diana S M; Houssami, Nehmat; Dowling, Emily C; Halpern, Elkan F; Gazelle, G Scott; Lehman, Constance D; Henderson, Louise M; Hubbard, Rebecca A

    2015-07-01

    Earlier detection of second breast cancers after primary breast cancer (PBC) treatment improves survival, yet mammography is less accurate in women with prior breast cancer. The purpose of this study was to examine women presenting clinically with second breast cancers after negative surveillance mammography (interval cancers), and to estimate the five-year risk of interval-invasive second cancers for women with varying risk profiles. We evaluated a prospective cohort of 15 114 women with 47 717 surveillance mammograms diagnosed with stage 0-II unilateral PBC from 1996 through 2008 at facilities in the Breast Cancer Surveillance Consortium. We used discrete time survival models to estimate the association between the odds of an interval-invasive second breast cancer and candidate predictors, including demographic, PBC, and imaging characteristics. All statistical tests were two-sided. The cumulative incidence of second breast cancers after five years was 54.4 per 1000 women, with 325 surveillance-detected and 138 interval-invasive second breast cancers. The five-year risk of interval-invasive second cancer for women with referent category characteristics was 0.60%. For women with the most and least favorable profiles, the five-year risk ranged from 0.07% to 6.11%. Multivariable modeling identified grade II PBC (odds ratio [OR] = 1.95, 95% confidence interval [CI] = 1.15 to 3.31), treatment with lumpectomy without radiation (OR = 3.27, 95% CI = 1.91 to 5.62), interval PBC presentation (OR = 2.01, 95% CI = 1.28 to 3.16), and heterogeneously dense breasts on mammography (OR = 1.54, 95% CI = 1.01 to 2.36) as independent predictors of interval-invasive second breast cancers. PBC diagnosis and treatment characteristics contribute to variation in subsequent-interval second breast cancer risk. Consideration of these factors may be useful in developing tailored post-treatment imaging surveillance plans. © The Author 2015. Published by Oxford University Press. All rights reserved.

  4. Using a Nonparametric Bootstrap to Obtain a Confidence Interval for Pearson's "r" with Cluster Randomized Data: A Case Study

    ERIC Educational Resources Information Center

    Wagstaff, David A.; Elek, Elvira; Kulis, Stephen; Marsiglia, Flavio

    2009-01-01

    A nonparametric bootstrap was used to obtain an interval estimate of Pearson's "r," and test the null hypothesis that there was no association between 5th grade students' positive substance use expectancies and their intentions to not use substances. The students were participating in a substance use prevention program in which the unit of…

  5. Evaluating the Impact of Guessing and Its Interactions with Other Test Characteristics on Confidence Interval Procedures for Coefficient Alpha

    ERIC Educational Resources Information Center

    Paek, Insu

    2016-01-01

    The effect of guessing on the point estimate of coefficient alpha has been studied in the literature, but the impact of guessing and its interactions with other test characteristics on the interval estimators for coefficient alpha has not been fully investigated. This study examined the impact of guessing and its interactions with other test…

  6. Neurophysiology of perceived confidence.

    PubMed

    Graziano, Martin; Parra, Lucas C; Sigman, Mariano

    2010-01-01

    In a partial report paradigm, subjects briefly observe a cluttered field and, after some delay - typically ranging from 100 ms to a second - are asked to report a subset of the presented elements. A vast buffer of information is transiently available to be broadcast which, if not retrieved in time, fades rapidly without reaching consciousness. An interesting feature of this experiment is that objective performance and subjective confidence are decoupled, which makes the paradigm an ideal vehicle for understanding the brain dynamics of the construction of confidence. Here we report a high-density EEG experiment in which we infer elements of the EEG response that are indicative of subjective confidence. We find that an early response during encoding partially correlates with perceived confidence. However, the bulk of the weight of subjective confidence is determined during a late, N400-like waveform in the retrieval stage. This shows that we can find markers of access to internal, subjective states that are uncoupled from the objective response and stimulus properties of the task, and we propose that these can be used with EEG decoding methods to infer subjective mental states.

  7. Extraction and LOD control of colored interval volumes

    NASA Astrophysics Data System (ADS)

    Miyamura, Hiroko N.; Takeshima, Yuriko; Fujishiro, Issei; Saito, Takafumi

    2005-03-01

    Interval volume serves as a generalized isosurface and represents a three-dimensional subvolume for which the associated scalar field values lie within a user-specified closed interval. In general, it is not an easy task for novices to specify the scalar field interval corresponding to their regions of interest (ROIs). In order to extract interval volumes from which desirable geometric features can be mined effectively, we propose a suggestive technique which extracts interval volumes automatically based on a global examination of the field contrast structure. Also proposed here is a simplification scheme for decimating the resulting triangle patches to realize efficient transmission and rendering of large-scale interval volumes. Color distributions as well as geometric features are taken into account to select the best edges to collapse. In addition, when a user wants to selectively display and analyze the original dataset, the simplified dataset is restored to the original quality. Several simulated and acquired datasets are used to demonstrate the effectiveness of the present methods.

  8. Exploring separable components of institutional confidence.

    PubMed

    Hamm, Joseph A; PytlikZillig, Lisa M; Tomkins, Alan J; Herian, Mitchel N; Bornstein, Brian H; Neeley, Elizabeth M

    2011-01-01

    Despite its contemporary and theoretical importance in numerous social scientific disciplines, institutional confidence research is limited by a lack of consensus regarding the distinctions and relationships among related constructs (e.g., trust, confidence, legitimacy, distrust, etc.). This study examined four confidence-related constructs that have been used in studies of trust/confidence in the courts: dispositional trust, trust in institutions, obligation to obey the law, and cynicism. First, the separability of the four constructs was examined by exploratory factor analyses. Relationships among the constructs were also assessed. Next, multiple regression analyses were used to explore each construct's independent contribution to confidence in the courts. Finally, a second study replicated the first study and also examined the stability of the institutional confidence constructs over time. Results supported the hypothesized separability of, and correlations among, the four confidence-related constructs. The extent to which the constructs independently explained the observed variance in confidence in the courts differed as a function of the specific operationalization of confidence in the courts and the individual predictor measures. Implications for measuring institutional confidence and future research directions are discussed. Copyright © 2011 John Wiley & Sons, Ltd.

  9. Venetoclax does not prolong the QT interval in patients with hematological malignancies: an exposure-response analysis.

    PubMed

    Freise, Kevin J; Dunbar, Martin; Jones, Aksana K; Hoffman, David; Enschede, Sari L Heitner; Wong, Shekman; Salem, Ahmed Hamed

    2016-10-01

    Venetoclax (ABT-199/GDC-0199) is a selective first-in-class B cell lymphoma-2 inhibitor being developed for the treatment of hematological malignancies. The aim of this study was to determine the potential of venetoclax to prolong the corrected QT (QTc) interval and to evaluate the relationship between systemic venetoclax concentration and QTc interval. The study population included 176 male and female patients with relapsed or refractory chronic lymphocytic leukemia/small lymphocytic lymphoma (n = 105) or non-Hodgkin's lymphoma (n = 71) enrolled in a phase 1 safety, pharmacokinetic, and efficacy study. Electrocardiograms were collected in triplicate at time-matched points (2, 4, 6, and 8 h) prior to the first venetoclax administration and after repeated venetoclax administration to achieve steady state conditions. Venetoclax doses ranged from 100 to 1200 mg daily. Plasma venetoclax samples were collected after steady state electrocardiogram measurements. The mean and upper bound of the 2-sided 90% confidence interval (CI) of the QTc change from baseline were <5 and <10 ms, respectively, at all time points and doses (<400, 400, and >400 mg). Three subjects had single QTc values >500 ms and/or ΔQTc > 60 ms. The effect of venetoclax concentration on both ΔQTc and QTc was not statistically significant (P > 0.05). At the mean maximum concentrations achieved with therapeutic (400 mg) and supra-therapeutic (1200 mg) venetoclax doses, the estimated drug effects on QTc were 0.137 (90% CI [-1.01 to 1.28]) and 0.263 (90% CI [-1.92 to 2.45]) ms, respectively. Venetoclax does not prolong the QTc interval even at supra-therapeutic doses, and there is no relationship between venetoclax concentrations and the QTc interval.

  10. Distinguishing highly confident accurate and inaccurate memory: insights about relevant and irrelevant influences on memory confidence.

    PubMed

    Chua, Elizabeth F; Hannula, Deborah E; Ranganath, Charan

    2012-01-01

    It is generally believed that accuracy and confidence in one's memory are related, but there are many instances when they diverge. Accordingly, it is important to disentangle the factors that contribute to memory accuracy and confidence, especially those factors that contribute to confidence, but not accuracy. We used eye movements to separately measure fluent cue processing, the target recognition experience, and relative evidence assessment on recognition confidence and accuracy. Eye movements were monitored during a face-scene associative recognition task, in which participants first saw a scene cue, followed by a forced-choice recognition test for the associated face, with confidence ratings. Eye movement indices of the target recognition experience were largely indicative of accuracy, and showed a relationship to confidence for accurate decisions. In contrast, eye movements during the scene cue raised the possibility that more fluent cue processing was related to higher confidence for both accurate and inaccurate recognition decisions. In a second experiment, we manipulated cue familiarity, and therefore cue fluency. Participants showed higher confidence for cue-target associations when the cue was more familiar, especially for incorrect responses. These results suggest that over-reliance on cue familiarity and under-reliance on the target recognition experience may lead to erroneous confidence.

  11. Distinguishing highly confident accurate and inaccurate memory: insights about relevant and irrelevant influences on memory confidence

    PubMed Central

    Chua, Elizabeth F.; Hannula, Deborah E.; Ranganath, Charan

    2012-01-01

    It is generally believed that accuracy and confidence in one’s memory are related, but there are many instances when they diverge. Accordingly, it is important to disentangle the factors which contribute to memory accuracy and confidence, especially those factors that contribute to confidence, but not accuracy. We used eye movements to separately measure fluent cue processing, the target recognition experience, and relative evidence assessment on recognition confidence and accuracy. Eye movements were monitored during a face-scene associative recognition task, in which participants first saw a scene cue, followed by a forced-choice recognition test for the associated face, with confidence ratings. Eye movement indices of the target recognition experience were largely indicative of accuracy, and showed a relationship to confidence for accurate decisions. In contrast, eye movements during the scene cue raised the possibility that more fluent cue processing was related to higher confidence for both accurate and inaccurate recognition decisions. In a second experiment, we manipulated cue familiarity, and therefore cue fluency. Participants showed higher confidence for cue-target associations when the cue was more familiar, especially for incorrect responses. These results suggest that over-reliance on cue familiarity and under-reliance on the target recognition experience may lead to erroneous confidence. PMID:22171810

  12. Technical Report: Algorithm and Implementation for Quasispecies Abundance Inference with Confidence Intervals from Metagenomic Sequence Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McLoughlin, Kevin

    2016-01-11

    This report describes the design and implementation of an algorithm for estimating relative microbial abundances, together with confidence limits, using data from metagenomic DNA sequencing. For the background behind this project and a detailed discussion of our modeling approach for metagenomic data, we refer the reader to our earlier technical report, dated March 4, 2014. Briefly, we described a fully Bayesian generative model for paired-end sequence read data, incorporating the effects of the relative abundances, the distribution of sequence fragment lengths, fragment position bias, sequencing errors and variations between the sampled genomes and the nearest reference genomes. A distinctive feature of our modeling approach is the use of a Chinese restaurant process (CRP) to describe the selection of genomes to be sampled, and thus the relative abundances. The CRP component is desirable for fitting abundances to reads that may map ambiguously to multiple targets, because it naturally leads to sparse solutions that select the best representative from each set of nearly equivalent genomes.

  13. Exponential operations and aggregation operators of interval neutrosophic sets and their decision making methods.

    PubMed

    Ye, Jun

    2016-01-01

    An interval neutrosophic set (INS) is a subclass of a neutrosophic set and a generalization of an interval-valued intuitionistic fuzzy set; the characteristics of an INS are independently described by the interval numbers of its truth-membership, indeterminacy-membership, and falsity-membership degrees. However, the exponential parameters (weights) of all the existing exponential operational laws of INSs and the corresponding exponential aggregation operators are crisp values in interval neutrosophic decision-making problems. As a supplement, this paper first introduces new exponential operational laws of INSs, where the bases are crisp values or interval numbers and the exponents are interval neutrosophic numbers (INNs), which are basic elements in INSs. Then, we propose an interval neutrosophic weighted exponential aggregation (INWEA) operator and a dual interval neutrosophic weighted exponential aggregation (DINWEA) operator based on these exponential operational laws and introduce comparative methods based on cosine measure functions for INNs and dual INNs. Further, we develop decision-making methods based on the INWEA and DINWEA operators. Finally, a practical example on the selection of global suppliers is provided to illustrate the applicability and rationality of the proposed methods.

  14. Parent's confidence as a caregiver.

    PubMed

    Raines, Deborah A; Brustad, Judith

    2012-06-01

    The purpose of this study was to describe the parent's self-reported confidence as a caregiver. The specific research questions were as follows: • What is the parent's perceived level of confidence when performing infant caregiving activities in the neonatal intensive care unit (NICU)? • What is the parent's projected level of confidence about performing infant caregiving activities on the first day at home? Participants were parents of infants with an anticipated discharge date within 5 days. Inclusion criteria were as follows: parent at least 18 years of age, infant's discharge destination is home with the parent, parent will have primary responsibility for the infant after discharge, and the infant's length of stay in the NICU was a minimum of 10 days. Descriptive, survey research. Participants perceived themselves to be confident in all but 2 caregiving activities when caring for their infants in the NICU, but parents projected a change in their level of confidence in their ability to independently complete infant care activities at home. When comparing the self-reported level of confidence in the NICU and the projected level of confidence at home, the levels of confidence decreased for 5 items, increased for 8 items, and remained unchanged for 2 items. All of the items with a decrease in score were the items with the lowest score when performed in the NICU. All of these low-scoring items are caregiving activities that are unique to the post-NICU status of the infant. Interestingly, the parent's projected level of confidence increased for the 8 items focused on handling and interacting with the infant. The findings of this research provide evidence that nurses may need to rethink when parents become active participants in their infant's medical-based caregiving activities.

  15. Increasing Product Confidence-Shifting Paradigms.

    PubMed

    Phillips, Marla; Kashyap, Vishal; Cheung, Mee-Shew

    2015-01-01

    Leaders in the pharmaceutical, medical device, and food industries expressed a unilateral concern over product confidence throughout the total product lifecycle, an unsettling fact for these leaders to manage given that their products affect the lives of millions of people each year. Fueled by the heparin incident of intentional adulteration in 2008, initial efforts for increasing product confidence were focused on improving the confidence of incoming materials, with a belief that supplier performance must be the root cause. As in the heparin case, concern over supplier performance extended deep into the supply chain to include suppliers of the suppliers-which is often a blind spot for pharmaceutical, device, and food manufacturers. Resolved to address the perceived lack of supplier performance, these U.S. Food and Drug Administration (FDA)-regulated industries began to adopt the supplier relationship management strategy, developed by the automotive industry, that emphasizes "management" of suppliers for the betterment of the manufacturers. Current product and supplier management strategies, however, have not led to a significant improvement in product confidence. As a result of the enduring concern by industry leaders over the lack of product confidence, Xavier University launched the Integrity of Supply Initiative in 2012 with a team of industry leaders and FDA officials. Through a methodical research approach, data generated by the pharmaceutical, medical device, and food manufacturers surprisingly pointed to themselves as a source of the lack of product confidence, and revealed that manufacturers either unknowingly increase the potential for error or can control/prevent many aspects of product confidence failure. It is only through this paradigm shift that manufacturers can work collaboratively with their suppliers as equal partners, instead of viewing their suppliers as "lesser" entities needing to be controlled. 
The basis of this shift provides manufacturers

  16. Interval estimation and optimal design for the within-subject coefficient of variation for continuous and binary variables

    PubMed Central

    Shoukri, Mohamed M; Elkum, Nasser; Walter, Stephen D

    2006-01-01

    Background In this paper we propose the use of the within-subject coefficient of variation as an index of a measurement's reliability. For continuous variables and based on its maximum likelihood estimation we derive a variance-stabilizing transformation and discuss confidence interval construction within the framework of a one-way random effects model. We investigate sample size requirements for the within-subject coefficient of variation for continuous and binary variables. Methods We investigate the validity of the approximate normal confidence interval by Monte Carlo simulations. In designing a reliability study, a crucial issue is the balance between the number of subjects to be recruited and the number of repeated measurements per subject. We discuss efficiency of estimation and cost considerations for the optimal allocation of the sample resources. The approach is illustrated by an example on Magnetic Resonance Imaging (MRI). We also discuss the issue of sample size estimation for dichotomous responses with two examples. Results For the continuous variable, we found that the variance-stabilizing transformation improves the asymptotic coverage probabilities of the confidence interval for the within-subject coefficient of variation. The maximum likelihood estimation and the sample size estimation based on a pre-specified width of the confidence interval are novel contributions to the literature for the binary variable. Conclusion Using the sample size formulas, we hope to help clinical epidemiologists and practicing statisticians to efficiently design reliability studies using the within-subject coefficient of variation, whether the variable of interest is continuous or binary. PMID:16686943
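The within-subject coefficient of variation studied here can be illustrated with a minimal one-way random-effects sketch. This is a simplified point estimate only, not the authors' exact variance-stabilized estimator; the function name and toy data are invented for illustration:

```python
import math

def within_subject_cv(data):
    """Within-subject coefficient of variation (WSCV) from a list of
    per-subject replicate lists, using one-way random-effects ANOVA
    components: sqrt(within-subject mean square) / grand mean.
    A sketch, not the paper's variance-stabilized estimator."""
    k = len(data)            # number of subjects
    n = len(data[0])         # repeated measurements per subject
    grand_mean = sum(sum(row) for row in data) / (k * n)
    # pooled within-subject mean square (error variance)
    msw = sum(
        (x - sum(row) / n) ** 2
        for row in data for x in row
    ) / (k * (n - 1))
    return math.sqrt(msw) / grand_mean

# toy reliability study: 3 subjects, 3 repeated measurements each
measurements = [[10.0, 10.5, 9.5],
                [12.0, 11.5, 12.5],
                [8.0, 8.5, 7.5]]
print(round(within_subject_cv(measurements), 4))  # -> 0.05
```

A WSCV of 0.05 would mean the replicate-to-replicate noise is about 5% of the typical measured value, which is the sense in which the abstract treats it as a reliability index.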

  17. Decoded fMRI neurofeedback can induce bidirectional confidence changes within single participants

    PubMed Central

    Cortese, Aurelio; Amano, Kaoru; Koizumi, Ai; Lau, Hakwan; Kawato, Mitsuo

    2017-01-01

    Neurofeedback studies using real-time functional magnetic resonance imaging (rt-fMRI) have recently incorporated the multi-voxel pattern decoding approach, allowing for fMRI to serve as a tool to manipulate fine-grained neural activity embedded in voxel patterns. Because of its tremendous potential for clinical applications, certain questions regarding decoded neurofeedback (DecNef) must be addressed. Specifically, can the same participants learn to induce neural patterns in opposite directions in different sessions? If so, how does previous learning affect subsequent induction effectiveness? These questions are critical because neurofeedback effects can last for months, but the short- to mid-term dynamics of such effects are unknown. Here we employed a within-subjects design, where participants underwent two DecNef training sessions to induce behavioural changes of opposing directionality (up or down regulation of perceptual confidence in a visual discrimination task), with the order of training counterbalanced across participants. Behavioral results indicated that the manipulation was strongly influenced by the order and the directionality of neurofeedback training. We applied nonlinear mathematical modeling to parametrize four main consequences of DecNef: main effect of change in confidence, strength of down-regulation of confidence relative to up-regulation, maintenance of learning effects, and anterograde learning interference. Modeling results revealed that DecNef successfully induced bidirectional confidence changes in different sessions within single participants. Furthermore, the effect of up- compared to down-regulation was more prominent, and confidence changes (regardless of the direction) were largely preserved even after a week-long interval. Lastly, the effect of the second session was markedly diminished as compared to the effect of the first session, indicating strong anterograde learning interference. 
These results are interpreted in the framework

  18. Precision Interval Estimation of the Response Surface by Means of an Integrated Algorithm of Neural Network and Linear Regression

    NASA Technical Reports Server (NTRS)

    Lo, Ching F.

    1999-01-01

    The integration of Radial Basis Function Networks and Back Propagation Neural Networks with Multiple Linear Regression has been accomplished to map nonlinear response surfaces over a wide range of independent variables in the process of the Modern Design of Experiments. The integrated method is capable of estimating precision intervals, including confidence and prediction intervals. The power of the innovative method has been demonstrated by applying it to a set of wind tunnel test data to construct a response surface and estimate its precision intervals.

  19. Reference values for pulse oximetry at high altitude

    PubMed Central

    Gamponia, M; Babaali, H; Yugar, F; Gilman, R

    1998-01-01

    OBJECTIVE—To determine reference values for oxygen saturation (SaO2) in healthy children younger than 5 years living at high altitude.
DESIGN—One hundred and sixty eight children were examined for SaO2 at 4018 m during well child visits. Physiological state was also noted during the examination.
RESULTS—The mean SaO2 was 87.3% (95% confidence intervals (CI) 86.7%, 87.9%) with a median value of 87.7%. A significant difference was observed in SaO2 between children younger than 1 year compared with older children, although the difference was no longer demonstrable when sleeping children were excluded.
CONCLUSIONS—This study has provided a reference range of SaO2 values for healthy children under 5 years old so that pulse oximetry may be used as an adjunct in diagnosing acute respiratory infections. Younger children were also shown to have a lower mean SaO2 than older children living at high altitude, which suggests physiological adaptation to high altitude over time. In addition, sleep had a lowering effect on SaO2, although the clinical importance of this remains undetermined.

 PMID:9659095
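The 95% interval quoted for the mean SaO2 above is a standard normal-approximation confidence interval for a mean. A sketch is below; the SD of 3.97% is an assumed value chosen to be consistent with the reported interval, not a figure taken from the paper:

```python
import math

def ci_mean(mean, sd, n, z=1.96):
    """Normal-approximation confidence interval for a mean:
    mean +/- z * sd / sqrt(n); z = 1.96 gives the 95% level."""
    half_width = z * sd / math.sqrt(n)
    return (mean - half_width, mean + half_width)

# n = 168 children, mean SaO2 87.3%; the SD of 3.97% is assumed here
low, high = ci_mean(87.3, 3.97, 168)
print(round(low, 1), round(high, 1))  # -> 86.7 87.9
```

Note the interval describes the precision of the estimated population mean, not the range of individual children's saturations; the 2.5th-97.5th percentile spread of individuals is much wider.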

  20. [Estimation of the atrioventricular time interval by pulse Doppler in the normal fetal heart].

    PubMed

    Hamela-Olkowska, Anita; Dangel, Joanna

    2009-08-01

    To assess normative values of the fetal atrioventricular (AV) time interval by pulsed-wave Doppler on the 5-chamber view. Fetal echocardiography exams were performed using an Acuson Sequoia 512 in 140 singleton fetuses at 18 to 40 weeks of gestation with sinus rhythm and normal cardiac and extracardiac anatomy. Pulsed-Doppler-derived AV intervals were measured from the left ventricular inflow/outflow view using a transabdominal convex 3.5-6 MHz probe. The values of the AV time interval ranged from 100 to 150 ms (mean 123 +/- 11.2). The AV interval was negatively correlated with the heart rate (p<0.001). Fetal heart rate decreased as gestation progressed (p<0.001); thus, the AV intervals increased with gestational age (p=0.007). However, within the same fetal heart rate subgroup there was no relation between AV intervals and gestational age; therefore, the AV intervals depended only on heart rate. The 95th percentiles of AV intervals according to FHR ranged from 135 to 148 ms. 1. The AV interval duration was negatively correlated with the heart rate. 2. Measurement of the AV time interval is easy to perform and has good reproducibility. It may be used for fetal heart block screening in anti-Ro and anti-La positive pregnancies. 3. Normative values established in the study may help obstetricians in assessing fetal abnormalities of AV conduction.

  1. The QT Interval and Risk of Incident Atrial Fibrillation

    PubMed Central

    Mandyam, Mala C.; Soliman, Elsayed Z.; Alonso, Alvaro; Dewland, Thomas A.; Heckbert, Susan R.; Vittinghoff, Eric; Cummings, Steven R.; Ellinor, Patrick T.; Chaitman, Bernard R.; Stocke, Karen; Applegate, William B.; Arking, Dan E.; Butler, Javed; Loehr, Laura R.; Magnani, Jared W.; Murphy, Rachel A.; Satterfield, Suzanne; Newman, Anne B.; Marcus, Gregory M.

    2013-01-01

    BACKGROUND Abnormal atrial repolarization is important in the development of atrial fibrillation (AF), but no direct measurement is available in clinical medicine. OBJECTIVE To determine whether the QT interval, a marker of ventricular repolarization, could be used to predict incident AF. METHODS We examined a prolonged QT corrected by the Framingham formula (QTFram) as a predictor of incident AF in the Atherosclerosis Risk in Communities (ARIC) study. The Cardiovascular Health Study (CHS) and Health, Aging, and Body Composition (Health ABC) study were used for validation. Secondary predictors included QT duration as a continuous variable, a short QT interval, and QT intervals corrected by other formulae. RESULTS Among 14,538 ARIC participants, a prolonged QTFram predicted a roughly two-fold increased risk of AF (hazard ratio [HR] 2.05, 95% confidence interval [CI] 1.42–2.96, p<0.001). No substantive attenuation was observed after adjustment for age, race, sex, study center, body mass index, hypertension, diabetes, coronary disease, and heart failure. The findings were validated in CHS and Health ABC and were similar across various QT correction methods. Also in ARIC, each 10-ms increase in QTFram was associated with an increased unadjusted (HR 1.14, 95%CI 1.10–1.17, p<0.001) and adjusted (HR 1.11, 95%CI 1.07–1.14, p<0.001) risk of AF. Findings regarding a short QT were inconsistent across cohorts. CONCLUSIONS A prolonged QT interval is associated with an increased risk of incident AF. PMID:23872693
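The QTFram predictor above uses the Framingham linear heart-rate correction, QTc = QT + 0.154(1 - RR) with QT and RR in seconds. A small sketch, with a function name of our own choosing:

```python
def qtc_framingham(qt_ms, rr_ms):
    """Framingham linear QT correction: QTc = QT + 0.154 * (1 - RR),
    with QT and RR in seconds; inputs and output here in milliseconds."""
    qt_s, rr_s = qt_ms / 1000.0, rr_ms / 1000.0
    return (qt_s + 0.154 * (1.0 - rr_s)) * 1000.0

# at 60 bpm (RR = 1000 ms) the correction term vanishes
print(round(qtc_framingham(400, 1000), 1))  # -> 400.0
# at 75 bpm (RR = 800 ms) the corrected QT exceeds the measured QT
print(round(qtc_framingham(400, 800), 1))   # -> 430.8
```

The study's "per 10-ms increase in QTFram" hazard ratios then refer to increments of this corrected value, not of the raw measured QT.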

  2. Global longitudinal strain corrected by RR interval is a superior predictor of all-cause mortality in patients with systolic heart failure and atrial fibrillation.

    PubMed

    Modin, Daniel; Sengeløv, Morten; Jørgensen, Peter Godsk; Bruun, Niels Eske; Olsen, Flemming Javier; Dons, Maria; Fritz Hansen, Thomas; Jensen, Jan Skov; Biering-Sørensen, Tor

    2018-04-01

    Quantification of systolic function in patients with atrial fibrillation (AF) is challenging. A novel approach, based on RR interval correction, to counteract the varying heart cycle lengths in AF has recently been proposed. Whether this method is superior in patients with systolic heart failure (HFrEF) with AF remains unknown. This study investigates the prognostic value of RR interval-corrected peak global longitudinal strain {GLSc = GLS/[RR^(1/2)]} in relation to all-cause mortality in HFrEF patients displaying AF during echocardiographic examination. Echocardiograms from 151 patients with HFrEF and AF during examination were analysed offline. Peak global longitudinal strain (GLS) was averaged from 18 myocardial segments obtained from three apical views. GLS was indexed with the square root of the RR interval {GLSc = GLS/[RR^(1/2)]}. The endpoint was all-cause mortality. During a median follow-up of 2.7 years, 40 patients (26.5%) died. Neither uncorrected GLS (P = 0.056) nor left ventricular ejection fraction (P = 0.053) was significantly associated with all-cause mortality. After RR^(1/2) indexation, GLSc became a significant predictor of all-cause mortality (hazard ratio 1.16, 95% confidence interval 1.02-1.22, P = 0.014, per %/s^(1/2) decrease). GLSc remained an independent predictor of mortality after multivariable adjustment (age, sex, mean heart rate, mean arterial blood pressure, left atrial volume index, and E/e') (hazard ratio 1.17, 95% confidence interval 1.05-1.31, P = 0.005 per %/s^(1/2) decrease). Decreasing GLSc, but not uncorrected GLS or left ventricular ejection fraction, was significantly associated with increased risk of all-cause mortality in HFrEF patients with AF and remained an independent predictor after multivariable adjustment. © 2017 The Authors. ESC Heart Failure published by John Wiley & Sons Ltd on behalf of the European Society of Cardiology.
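The RR indexation in this abstract is a direct formula, GLSc = GLS/[RR^(1/2)] with RR in seconds, giving units of %/s^(1/2). A one-function sketch; the function name and the example numbers are illustrative, not from the paper:

```python
import math

def gls_rr_corrected(gls_percent, rr_s):
    """RR-corrected global longitudinal strain per the abstract:
    GLSc = GLS / RR^(1/2), with RR in seconds (units of %/s^(1/2))."""
    return gls_percent / math.sqrt(rr_s)

# illustrative values: GLS of -12% measured at an RR interval of 0.64 s
print(round(gls_rr_corrected(-12.0, 0.64), 2))  # -> -15.0
```

Dividing by sqrt(RR) penalizes strain measured during long cycles, which is the intended counterweight to the beat-to-beat cycle-length variation of AF.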

  3. Finding Intervals of Abrupt Change in Earth Science Data

    NASA Astrophysics Data System (ADS)

    Zhou, X.; Shekhar, S.; Liess, S.

    2011-12-01

    In earth science data (e.g., climate data), it is often observed that a persistently abrupt change in value occurs in a certain time-period or spatial interval. For example, abrupt climate change is defined as an unusually large shift of precipitation, temperature, etc., that occurs during a relatively short time period. A similar pattern can also be found in geographical space, representing a sharp transition of the environment (e.g., vegetation between different ecological zones). Identifying such intervals of change from earth science datasets is a crucial step for understanding and attributing the underlying phenomenon. However, inconsistencies in these noisy datasets can obstruct the major change trend, and more importantly can complicate the search for the beginning and end points of the interval of change. Also, the large volume of data makes it challenging to process the dataset reasonably fast. In this work, we analyze earth science data using a novel, automated data mining approach to identify spatial/temporal intervals of persistent, abrupt change. We first

  4. Confidence Limits for the Indirect Effect: Distribution of the Product and Resampling Methods

    ERIC Educational Resources Information Center

    MacKinnon, David P.; Lockwood, Chondra M.; Williams, Jason

    2004-01-01

    The most commonly used method to test an indirect effect is to divide the estimate of the indirect effect by its standard error and compare the resulting z statistic with a critical value from the standard normal distribution. Confidence limits for the indirect effect are also typically based on critical values from the standard normal…

  5. Fixed-interval performance and self-control in children.

    PubMed Central

    Darcheville, J C; Rivière, V; Wearden, J H

    1992-01-01

    Operant responses of 16 children (mean age 6 years and 1 month) were reinforced according to different fixed-interval schedules (with interreinforcer intervals of 20, 30, or 40 s) in which the reinforcers were either 20-s or 40-s presentations of a cartoon. In another procedure, they received training on a self-control paradigm in which both reinforcer delay (0.5 s or 40 s) and reinforcer duration (20 s or 40 s of cartoons) varied, and subjects were offered a choice between various combinations of delay and duration. Individual differences in behavior under the self-control procedure were precisely mirrored by individual differences under the fixed-interval schedule. Children who chose the smaller immediate reinforcer on the self-control procedure (impulsive) produced short postreinforcement pauses and high response rates in the fixed-interval conditions, and both measures changed little with changes in fixed-interval value. Conversely, children who chose the larger delayed reinforcer in the self-control condition (the self-controlled subjects) exhibited lower response rates and long postreinforcement pauses, which changed systematically with changes in the interval, in their fixed-interval performances. PMID:1573372

  6. The Relationship between Confidence and Self-Concept--Towards a Model of Response Confidence

    ERIC Educational Resources Information Center

    Kroner, Stephan; Biermann, Antje

    2007-01-01

    According to Stankov [Stankov, L. (2000). Complexity, metacognition and fluid intelligence. Intelligence, 28, 121-143.] response confidence in cognitive tests reflects a trait on the boundary of personality and abilities. However, several studies failed in relating confidence scores to other known traits, including self-concept. A model of…

  7. Investigating the Genetic Architecture of the PR Interval Using Clinical Phenotypes.

    PubMed

    Mosley, Jonathan D; Shoemaker, M Benjamin; Wells, Quinn S; Darbar, Dawood; Shaffer, Christian M; Edwards, Todd L; Bastarache, Lisa; McCarty, Catherine A; Thompson, Will; Chute, Christopher G; Jarvik, Gail P; Crosslin, David R; Larson, Eric B; Kullo, Iftikhar J; Pacheco, Jennifer A; Peissig, Peggy L; Brilliant, Murray H; Linneman, James G; Witte, John S; Denny, Josh C; Roden, Dan M

    2017-04-01

    One potential use for the PR interval is as a biomarker of disease risk. We hypothesized that quantifying the shared genetic architectures of the PR interval and a set of clinical phenotypes would identify genetic mechanisms contributing to PR variability and identify diseases associated with a genetic predictor of PR variability. We used ECG measurements from the ARIC study (Atherosclerosis Risk in Communities; n=6731 subjects) and 63 genetically modulated diseases from the eMERGE network (Electronic Medical Records and Genomics; n=12,978). We measured pairwise genetic correlations (rG) between PR phenotypes (PR interval, PR segment, P-wave duration) and each of the 63 phenotypes. The PR segment was genetically correlated with atrial fibrillation (rG=-0.88; P=0.0009). An analysis of metabolic phenotypes in ARIC also showed that the P wave was genetically correlated with waist circumference (rG=0.47; P=0.02). A genetically predicted PR interval phenotype based on 645,714 single-nucleotide polymorphisms was associated with atrial fibrillation (odds ratio=0.89 per SD change; 95% confidence interval, 0.83-0.95; P=0.0006). The differing pattern of associations among the PR phenotypes is consistent with analyses that show that the genetic correlation between the P wave and PR segment was not significantly different from 0 (rG=-0.03 [0.16]). The genetic architecture of the PR interval comprises modulators of atrial fibrillation risk and obesity. © 2017 American Heart Association, Inc.

  8. Serum prolactin revisited: parametric reference intervals and cross platform evaluation of polyethylene glycol precipitation-based methods for discrimination between hyperprolactinemia and macroprolactinemia.

    PubMed

    Overgaard, Martin; Pedersen, Susanne Møller

    2017-10-26

    Hyperprolactinemia diagnosis and treatment is often compromised by the presence of biologically inactive and clinically irrelevant higher-molecular-weight complexes of prolactin, macroprolactin. The objective of this study was to evaluate the performance of two macroprolactin screening regimes across commonly used automated immunoassay platforms. Parametric total and monomeric gender-specific reference intervals were determined for six immunoassay methods using female (n=96) and male sera (n=127) from healthy donors. The reference intervals were validated using 27 hyperprolactinemic and macroprolactinemic sera, whose presence of monomeric and macroforms of prolactin were determined using gel filtration chromatography (GFC). Normative data for six prolactin assays included the range of values (2.5th-97.5th percentiles). Validation sera (hyperprolactinemic and macroprolactinemic; n=27) showed higher discordant classification [mean=2.8; 95% confidence interval (CI) 1.2-4.4] for the monomer reference interval method compared to the post-polyethylene glycol (PEG) recovery cutoff method (mean=1.8; 95% CI 0.8-2.8). The two monomer/macroprolactin discrimination methods did not differ significantly (p=0.089). Among macroprolactinemic sera evaluated by both discrimination methods, the Cobas and Architect/Kryptor prolactin assays showed the lowest and the highest number of misclassifications, respectively. Current automated immunoassays for prolactin testing require macroprolactin screening methods based on PEG precipitation in order to discriminate truly from falsely elevated serum prolactin. While the recovery cutoff and monomeric reference interval macroprolactin screening methods demonstrate similar discriminative ability, the latter method also provides the clinician with an easy interpretable monomeric prolactin concentration along with a monomeric reference interval.
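The parametric 2.5th-97.5th percentile reference intervals described above reduce, under a normality assumption, to mean ± 1.96·SD. A sketch follows; the prolactin summary numbers are invented for illustration, and real assay data are often log-transformed before applying this:

```python
def parametric_reference_interval(mean, sd, z=1.96):
    """Parametric reference interval covering the central 95%
    (2.5th-97.5th percentiles) of a normal distribution:
    (mean - z*sd, mean + z*sd). Assumes approximate normality."""
    return (mean - z * sd, mean + z * sd)

# hypothetical monomeric prolactin summary: mean 250, SD 80 mIU/L
low, high = parametric_reference_interval(250.0, 80.0)
print(round(low, 1), round(high, 1))  # -> 93.2 406.8
```

A monomeric result above such an interval after PEG precipitation would point to true hyperprolactinemia rather than macroprolactin interference, which is the clinical distinction the study evaluates.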

  9. Confidence limit variation for a single IMRT system following the TG119 protocol.

    PubMed

    Gordon, J D; Krafft, S P; Jang, S; Smith-Raymond, L; Stevie, M Y; Hamilton, R J

    2011-03-01

    To evaluate the robustness of TG119-based quality assurance metrics for an IMRT system. Four planners constructed treatment plans for the five IMRT test cases described in TG119. All plans were delivered to a 30 cm x 30 cm x 15 cm solid water phantom in one treatment session in order to minimize session-dependent variation from phantom setup, film quality, machine performance, etc. Composite measurements utilized film and an ionization chamber. Per-field measurements were collected using a diode array device at an effective depth of 5 cm. All data collected were analyzed using the TG119 specifications to determine the confidence limit values for each planner separately and then compared. The mean variance of ion chamber measurements for each planner was within 1.7% of the planned dose. The resulting confidence limits were 3.13%, 1.98%, 3.65%, and 4.39%. Confidence limit values determined by composite film analysis were 8.06%, 13.4%, 9.30%, and 16.5%. Confidence limits from per-field measurements were 1.55%, 0.00%, 0.00%, and 2.89%. For a single IMRT system, the accuracy assessment provided by TG119-based quality assurance metrics showed significant variations in the confidence limits between planners across all composite and per-field evaluations. This observed variation is likely due to the different levels of modulation between each planner's set of plans. Performing the TG119 evaluation using plans produced by a single planner may not provide an adequate estimation of IMRT system accuracy.
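The TG119 confidence limit referenced here is, to our understanding, |mean| + 1.96·σ computed over a set of measured-versus-planned deviations. A sketch under that assumption, with invented sample deviations and a population (rather than sample) standard deviation:

```python
import math

def tg119_confidence_limit(deviations_percent):
    """TG119-style confidence limit: |mean| + 1.96 * sigma, where the
    inputs are per-plan dose deviations in percent and sigma is the
    population standard deviation. A sketch of the metric as we
    understand it, not a verbatim TG119 implementation."""
    n = len(deviations_percent)
    mean = sum(deviations_percent) / n
    sigma = math.sqrt(sum((d - mean) ** 2 for d in deviations_percent) / n)
    return abs(mean) + 1.96 * sigma

# hypothetical ion-chamber deviations (%) for one planner's five plans
print(round(tg119_confidence_limit([0.5, -1.0, 1.2, 0.3, -0.6]), 2))
```

Because the metric grows with both systematic offset and scatter, two planners with similar mean deviations can still produce the quite different confidence limits the study reports.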

  10. Happiness Scale Interval Study. Methodological Considerations

    ERIC Educational Resources Information Center

    Kalmijn, W. M.; Arends, L. R.; Veenhoven, R.

    2011-01-01

    The Happiness Scale Interval Study deals with survey questions on happiness, using verbal response options, such as "very happy" and "pretty happy". The aim is to estimate what degrees of happiness are denoted by such terms in different questions and languages. These degrees are expressed in numerical values on a continuous…

  11. MEETING DATA QUALITY OBJECTIVES WITH INTERVAL INFORMATION

    EPA Science Inventory

    Immunoassay test kits are promising technologies for measuring analytes under field conditions. Frequently, these field-test kits report the analyte concentrations as falling in an interval between minimum and maximum values. Many project managers use field-test kits only for scr...

  12. Fast and Confident: Postdicting Eyewitness Identification Accuracy in a Field Study

    ERIC Educational Resources Information Center

    Sauerland, Melanie; Sporer, Siegfried L.

    2009-01-01

    The combined postdictive value of postdecision confidence, decision time, and Remember-Know-Familiar (RKF) judgments as markers of identification accuracy was evaluated with 10 targets and 720 participants. In a pedestrian area, passers-by were asked for directions. Identifications were made from target-absent or target-present lineups. Fast…

  13. Iron Metabolism Genes, Low-Level Lead Exposure, and QT Interval

    PubMed Central

    Park, Sung Kyun; Hu, Howard; Wright, Robert O.; Schwartz, Joel; Cheng, Yawen; Sparrow, David; Vokonas, Pantel S.; Weisskopf, Marc G.

    2009-01-01

    Background Cumulative exposure to lead has been shown to be associated with depression of electrocardiographic conduction, such as QT interval (time from start of the Q wave to end of the T wave). Because iron can enhance the oxidative effects of lead, we examined whether polymorphisms in iron metabolism genes [hemochromatosis (HFE), transferrin (TF) C2, and heme oxygenase-1 (HMOX-1)] increase susceptibility to the effects of lead on QT interval in 613 community-dwelling older men. Methods We used standard 12-lead electrocardiograms, K-shell X-ray fluorescence, and graphite furnace atomic absorption spectrometry to measure QT interval, bone lead, and blood lead levels, respectively. Results A one-interquartile-range increase in tibia lead level (13 μg/g) was associated with an 11.35-msec [95% confidence interval (CI), 4.05–18.65 msec] and a 6.81-msec (95% CI, 1.67–11.95 msec) increase in the heart-rate–corrected QT interval among persons carrying long HMOX-1 alleles and at least one copy of an HFE variant, respectively, but had no effect in persons with short and middle HMOX-1 alleles and the wild-type HFE genotype. The lengthening of the heart-rate–corrected QT interval with higher tibia lead and blood lead became more pronounced as the total number (0 vs. 1 vs. ≥2) of gene variants increased (tibia, p-trend = 0.01; blood, p-trend = 0.04). This synergy seems to be driven by a joint effect between HFE variant and HMOX-1 L alleles. Conclusion We found evidence that gene variants related to iron metabolism increase the impact of low-level lead exposure on QT-interval prolongation. This is the first such report, so these results should be interpreted cautiously and need to be independently verified. PMID:19165391

  14. Risk of Interval Cancer in Fecal Immunochemical Test Screening Significantly Higher During the Summer Months: Results from the National Cancer Screening Program in Korea.

    PubMed

    Cha, Jae Myung; Suh, Mina; Kwak, Min Seob; Sung, Na Young; Choi, Kui Son; Park, Boyoung; Jun, Jae Kwan; Hwang, Sang-Hyun; Lee, Do-Hoon; Kim, Byung Chang; Lee, You Kyoung; Han, Dong Soo

    2018-04-01

    This study aimed to evaluate the impact of seasonal variations in climate on the performance of the fecal immunochemical test (FIT) in screening for colorectal cancer in the National Cancer Screening Program in Korea. Data were extracted from the National Cancer Screening Program databases for participants who underwent FIT between 2009 and 2010. We compared positivity rates, cancer detection rates, interval cancer rates, positive predictive value, sensitivity, and specificity for FIT during the spring, summer, fall, and winter seasons in Korea. In total, 4,788,104 FIT results were analyzed. FIT positivity rate was lowest during the summer months. In the summer, the positive predictive value of FIT was about 1.1 times (adjusted odds ratio (aOR) 1.08, 95% confidence interval (CI) 1.00-1.16) higher in the overall FIT group and about 1.3 times (aOR 1.29, 95% CI 1.10-1.50) higher in the quantitative FIT group, compared to those in the other seasons. Cancer detection rates, however, were similar regardless of season. Interval cancer risk was significantly higher in the summer for both the overall FIT group (aOR 1.16, 95% CI 1.07-1.27) and the quantitative FIT group (aOR 1.31, 95% CI 1.12-1.52). In addition, interval cancers in the rectum and distal colon were more frequently detected in the summer and autumn than in the winter. The positivity rate of FIT was lower in the summer, and the performance of the FIT screening program was influenced by seasonal variations in Korea. These results suggest that more efforts to reduce interval cancer during the summer are needed in population-based screening programs using FIT, particularly in countries with high ambient temperatures.

  15. Confidence in the safety of standard childhood vaccinations among New Zealand health professionals.

    PubMed

    Lee, Carol; Duck, Isabelle; Sibley, Chris G

    2018-05-04

    To investigate the level of confidence in the safety of standard childhood vaccinations among health professionals in New Zealand. Data from the 2013/14 New Zealand Attitudes and Values Study (NZAVS) was used to investigate the level of agreement that "it is safe to vaccinate children following the standard New Zealand immunisation schedule" among different classes of health professionals (N=1,032). Most health professionals showed high levels of vaccine confidence, with 96.7% of those describing their occupation as GP or simply 'doctor' (GPs/doctors) and 90.7% of pharmacists expressing strong vaccine confidence. However, there were important disparities between some other classes of health professionals, with only 65.1% of midwives and 13.6% of practitioners of alternative medicine expressing high vaccine confidence. As health professionals are a highly trusted source of vaccine information, communicating the consensus of belief among GPs/doctors that vaccines are safe may help provide reassurance for parents who ask about vaccine safety. However, the lower level of vaccine confidence among midwives is a matter of concern that may have a negative influence on parental perceptions of vaccinations.

  16. Collaborative derivation of reference intervals for major clinical laboratory tests in Japan.

    PubMed

    Ichihara, Kiyoshi; Yamamoto, Yoshikazu; Hotta, Taeko; Hosogaya, Shigemi; Miyachi, Hayato; Itoh, Yoshihisa; Ishibashi, Midori; Kang, Dongchon

    2016-05-01

    Three multicentre studies of reference intervals were conducted recently in Japan. The Committee on Common Reference Intervals of the Japan Society of Clinical Chemistry sought to establish common reference intervals for 40 laboratory tests which were measured in common in the three studies and regarded as well harmonized in Japan. The study protocols were comparable, with recruitment mostly from hospital workers with body mass index ≤28 and no medications. Age and sex distributions were made equal to obtain a final data size of 6345 individuals. Between-subgroup differences were expressed as the SD ratio (between-subgroup SD divided by SD representing the reference interval). Between-study differences were all within acceptable levels, and thus the three datasets were merged. By adopting SD ratio ≥0.50 as a guide, sex-specific reference intervals were necessary for 12 assays. Age-specific reference intervals for females partitioned at age 45 were required for five analytes. The reference intervals derived by the parametric method resulted in appreciable narrowing of the ranges by applying the latent abnormal values exclusion method in 10 items that were closely associated with prevalent disorders among healthy individuals. Sex- and age-related profiles of reference values, derived from individuals with no abnormal results in major tests, showed peculiar patterns specific to each analyte. Common reference intervals for nationwide use were developed for 40 major tests, based on three multicentre studies by advanced statistical methods. Sex- and age-related profiles of reference values are of great relevance not only for interpreting test results, but also for applying clinical decision limits specified in various clinical guidelines. © The Author(s) 2015.
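    The SD ratio criterion above compares between-subgroup variation to the SD representing the reference interval (for a central 95% Gaussian interval, the RI width divided by 3.92). A simplified two-subgroup sketch follows; the numbers are hypothetical, and the study itself used more elaborate nested-ANOVA estimates of the between-subgroup SD.

    ```python
    import statistics

    def sd_ratio(subgroup_means, reference_interval):
        """Between-subgroup SD divided by the SD implied by the
        reference interval (width / 3.92 for a 95% Gaussian RI).
        A ratio >= 0.50 is the partitioning guide cited above."""
        between_sd = statistics.pstdev(subgroup_means)
        low, high = reference_interval
        return between_sd / ((high - low) / 3.92)

    # Hypothetical male/female subgroup means and a shared reference interval
    ratio = sd_ratio([50.0, 42.0], (30.0, 62.0))
    ```

    A ratio just under 0.50, as here, would argue against sex-specific intervals for that analyte; a larger between-subgroup gap relative to the interval width would favour partitioning.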

  17. Complex reference values for endocrine and special chemistry biomarkers across pediatric, adult, and geriatric ages: establishment of robust pediatric and adult reference intervals on the basis of the Canadian Health Measures Survey.

    PubMed

    Adeli, Khosrow; Higgins, Victoria; Nieuwesteeg, Michelle; Raizman, Joshua E; Chen, Yunqi; Wong, Suzy L; Blais, David

    2015-08-01

    Defining laboratory biomarker reference values in a healthy population and understanding the fluctuations in biomarker concentrations throughout life and between sexes are critical to clinical interpretation of laboratory test results in different disease states. The Canadian Health Measures Survey (CHMS) has collected blood samples and health information from the Canadian household population. In collaboration with the Canadian Laboratory Initiative on Pediatric Reference Intervals (CALIPER), the data have been analyzed to determine reference value distributions and reference intervals for several endocrine and special chemistry biomarkers in pediatric, adult, and geriatric age groups. CHMS collected data and blood samples from thousands of community participants aged 3 to 79 years. We used serum samples to measure 13 immunoassay-based special chemistry and endocrine markers. We assessed reference value distributions and, after excluding outliers, calculated age- and sex-specific reference intervals, along with corresponding 90% CIs, according to CLSI C28-A3 guidelines. We observed fluctuations in biomarker reference values across the pediatric, adult, and geriatric age range, with stratification required on the basis of age for all analytes. Additional sex partitions were required for apolipoprotein AI, homocysteine, ferritin, and high sensitivity C-reactive protein. The unique collaboration between CALIPER and CHMS has enabled, for the first time, a detailed examination of the changes in various immunochemical markers that occur in healthy individuals of different ages. The robust age- and sex-specific reference intervals established in this study provide insight into the complex biological changes that take place throughout development and aging and will contribute to improved clinical test interpretation. © 2015 American Association for Clinical Chemistry.

  18. Nurse confidence in gynaecological oncology practice and the evaluation of a professional development module.

    PubMed

    Philp, Shannon; Barnett, Catherine; D'Abrew, Natalie; White, Kate

    2017-04-01

    A tertiary-based education program on gynaecological oncology was attended by 62 registered nurses (RNs). The program aimed to update nurses' knowledge, improve skills and ability to manage common situations and to assess program efficacy. Evaluation framework with specifically designed pre-post questionnaire about program content and nurse confidence. RNs interested in gynaecological oncology were invited to attend. Nurses rated their confidence about gynaecological oncology skills one week prior to the program, immediately post-course, 3 months post and 12 months post. Speaker presentations were evaluated immediately post-course. Participants indicated improved confidence immediately after participating in the course (z = -6.515, p < .001); whilst confidence subsequently declined and stabilised up to 12 months post-course, it still remained significantly higher than before the course: 3 months post- (z = -5.284, p < .001) and 12 months post- (z = -4.155, p < .001). Results support the value of continuing professional education for improving nurse confidence in the gynaecological oncology setting.

  19. The Impact of Age Stereotypes on Older Adults' Hazard Perception Performance and Driving Confidence.

    PubMed

    Chapman, Lyn; Sargent-Cox, Kerry; Horswill, Mark S; Anstey, Kaarin J

    2016-06-01

    This study examined the effect of age-stereotype threat on older adults' performance on a task measuring hazard perception performance in driving. The impact of age-stereotype threat in relation to the value participants placed on driving and pre- and post-task confidence in driving ability was also investigated. Eighty-six adults aged 65 years and over completed a questionnaire measuring demographic information, driving experience, self-rated health, driving importance, and driving confidence. Prior to undertaking a timed hazard perception task, participants were exposed to either negative or positive age stereotypes. Results showed that age-stereotype threats, while not influencing hazard perception performance, significantly reduced post-driving confidence compared with pre-driving confidence for those in the negative prime condition. This finding builds on the literature that has found that stereotype-based influences cannot simply be understood in terms of performance outcomes alone and may be relevant to factors affected by confidence such as driving cessation decisions. © The Author(s) 2014.

  20. Confidence in critical care nursing.

    PubMed

    Evans, Jeanne; Bell, Jennifer L; Sweeney, Annemarie E; Morgan, Jennifer I; Kelly, Helen M

    2010-10-01

    The purpose of the study was to gain an understanding of the nursing phenomenon, confidence, from the experience of nurses in the nursing subculture of critical care. Leininger's theory of cultural care diversity and universality guided this qualitative descriptive study. Questions derived from the sunrise model were used to elicit nurses' perspectives about cultural and social structures that exist within the critical care nursing subculture and the influence that these factors have on confidence. Twenty-eight critical care nurses from a large Canadian healthcare organization participated in semistructured interviews about confidence. Five themes arose from the descriptions provided by the participants. The three themes, tenuously navigating initiation rituals, deliberately developing holistic supportive relationships, and assimilating clinical decision-making rules were identified as social and cultural factors related to confidence. The remaining two themes, preserving a sense of security despite barriers and accommodating to diverse challenges, were identified as environmental factors related to confidence. Practice and research implications within the culture of critical care nursing are discussed in relation to each of the themes.

  1. Poor Positive Predictive Value of Lyme Disease Serologic Testing in an Area of Low Disease Incidence

    PubMed Central

    Lantos, Paul M.; Branda, John A.; Boggan, Joel C.; Chudgar, Saumil M.; Wilson, Elizabeth A.; Ruffin, Felicia; Fowler, Vance; Auwaerter, Paul G.; Nigrovic, Lise E.

    2015-01-01

    Background. Lyme disease is diagnosed by 2-tiered serologic testing in patients with a compatible clinical illness, but the significance of positive test results in low-prevalence regions has not been investigated. Methods. We reviewed the medical records of patients who tested positive for Lyme disease with standardized 2-tiered serologic testing between 2005 and 2010 at a single hospital system in a region with little endemic Lyme disease. Based on clinical findings, we calculated the positive predictive value of Lyme disease serology. Next, we reviewed the outcome of serologic testing in patients with select clinical syndromes compatible with disseminated Lyme disease (arthritis, cranial neuropathy, or meningitis). Results. During the 6-year study period 4723 patients were tested for Lyme disease, but only 76 (1.6%) had positive results by established laboratory criteria. Among 70 seropositive patients whose medical records were available for review, 12 (17%; 95% confidence interval, 9%–28%) were found to have Lyme disease (6 with documented travel to endemic regions). During the same time period, 297 patients with a clinical illness compatible with disseminated Lyme disease underwent 2-tiered serologic testing. Six of them (2%; 95% confidence interval, 0.7%–4.3%) were seropositive, 3 with documented travel and 1 who had an alternative diagnosis that explained the clinical findings. Conclusions. In this low-prevalence cohort, fewer than 20% of positive Lyme disease tests are obtained from patients with clinically likely Lyme disease. Positive Lyme disease test results may have little diagnostic value in this setting. PMID:26195017
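    The study's positive predictive value is a binomial proportion (12 of 70) with its 95% confidence interval. The paper most likely used an exact method for its 9%–28% figure; a Wilson score interval, sketched below, gives a very similar range from the same counts.

    ```python
    from math import sqrt

    def wilson_interval(successes, n, z=1.96):
        """Wilson score confidence interval for a binomial proportion."""
        p = successes / n
        denom = 1 + z * z / n
        center = p + z * z / (2 * n)
        half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
        return (center - half) / denom, (center + half) / denom

    # 12 of 70 seropositive patients had clinically likely Lyme disease
    low, high = wilson_interval(12, 70)
    ```

    Unlike the simpler Wald interval, the Wilson interval behaves well for small counts and proportions near 0 or 1, which is why it is often preferred for predictive-value estimates like this one.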

  2. Women's beliefs about the purpose and value of routine pelvic examinations.

    PubMed

    Norrell, Laura L; Kuppermann, Miriam; Moghadassi, Michelle N; Sawaya, George F

    2017-07-01

    The American Congress of Obstetricians and Gynecologists recommends that a pelvic examination be offered to asymptomatic women after an informed discussion with their provider. Although the adverse health outcomes that the examination averts were not delineated, the organization stated that it helps establish open communication between patients and physicians. Recent surveys have focused on obstetrician-gynecologists' attitudes and beliefs about the examination, but the perspectives of women have not been well-characterized. The purpose of this study was to better understand women's beliefs about the purpose and value of routine pelvic examinations. We completed structured interviews with 262 women who were 21-65 years old who agreed to participate in a 50-minute interview about cervical cancer screening. Recruitment took place in outpatient women's clinics at a public hospital and an academic medical center in San Francisco, CA. Women were shown an illustration of a bimanual pelvic examination and asked a series of closed-ended questions: if they knew why it was performed, if it reassured them of their health, and if they believed it helped establish open communication with their provider. Women were asked an open-ended question about their perception of the examination's purpose. Multivariable logistic regression analysis was used to identify demographic predictors of responses. Approximately one-half of the participants (56%) stated that they knew the examination's purpose. The most frequently cited reason was assurance of normalcy. Most of participants (82%) believed that the examination reassured them of their health. Approximately two-thirds of the participants (62%) believed that the examination helps establish open communication with their provider. In multivariate analyses, older age (≥45 years) independently predicted a higher likelihood of a belief that they knew the examination's purpose (odds ratio, 2.9; 95% confidence interval, 1.5-5.6) and a

  3. Biochemical marker reference values across pediatric, adult, and geriatric ages: establishment of robust pediatric and adult reference intervals on the basis of the Canadian Health Measures Survey.

    PubMed

    Adeli, Khosrow; Higgins, Victoria; Nieuwesteeg, Michelle; Raizman, Joshua E; Chen, Yunqi; Wong, Suzy L; Blais, David

    2015-08-01

    Biological covariates such as age and sex can markedly influence biochemical marker reference values, but no comprehensive study has examined such changes across pediatric, adult, and geriatric ages. The Canadian Health Measures Survey (CHMS) collected comprehensive nationwide health information and blood samples from children and adults in the household population and, in collaboration with the Canadian Laboratory Initiative on Pediatric Reference Intervals (CALIPER), examined biological changes in biochemical markers from pediatric to geriatric age, establishing a comprehensive reference interval database for routine disease biomarkers. The CHMS collected health information, physical measurements, and biosamples (blood and urine) from approximately 12 000 Canadians aged 3-79 years and measured 24 biochemical markers with the Ortho Vitros 5600 FS analyzer or a manual microplate. By use of CLSI C28-A3 guidelines, we determined age- and sex-specific reference intervals, including corresponding 90% CIs, on the basis of specific exclusion criteria. Biochemical marker reference values exhibited dynamic changes from pediatric to geriatric age. Most biochemical markers required some combination of age and/or sex partitioning. Two or more age partitions were required for all analytes except bicarbonate, which remained constant throughout life. Additional sex partitioning was required for most biomarkers, except bicarbonate, total cholesterol, total protein, urine iodine, and potassium. Understanding the fluctuations in biochemical markers over a wide age range provides important insight into biological processes and facilitates clinical application of biochemical markers to monitor manifestation of various disease states. The CHMS-CALIPER collaboration addresses this important evidence gap and allows the establishment of robust pediatric and adult reference intervals. © 2015 American Association for Clinical Chemistry.

  4. Interleukin-1β gene variants are associated with QTc interval prolongation following cardiac surgery: a prospective observational study.

    PubMed

    Kertai, Miklos D; Ji, Yunqi; Li, Yi-Ju; Mathew, Joseph P; Daubert, James P; Podgoreanu, Mihai V

    2016-04-01

    We characterized cardiac surgery-induced dynamic changes of the corrected QT (QTc) interval and tested the hypothesis that genetic factors are associated with perioperative QTc prolongation independent of clinical and procedural factors. All study subjects were ascertained from a prospective study of patients who underwent elective cardiac surgery during August 1999 to April 2002. We defined a prolonged QTc interval as > 440 msec, measured from 24-hr pre- and postoperative 12-lead electrocardiograms. The association of 37 single nucleotide polymorphisms (SNPs) in 21 candidate genes involved in modulating arrhythmia susceptibility pathways with postoperative QTc changes was investigated in a two-stage design with a stage I cohort (n = 497) nested within a stage II cohort (n = 957). Empirical P values (Pemp) were obtained by permutation tests with 10,000 repeats. After adjusting for clinical and procedural risk factors, we selected four SNPs (P value range, 0.03-0.1) in stage I, which we then tested in the stage II cohort. Two functional SNPs in the pro-inflammatory cytokine interleukin-1β (IL1β), rs1143633 (odds ratio [OR], 0.71; 95% confidence interval [CI], 0.53 to 0.95; Pemp = 0.02) and rs16944 (OR, 1.31; 95% CI, 1.01 to 1.70; Pemp = 0.04), remained independent predictors of postoperative QTc prolongation. The ability of a clinico-genetic model incorporating the two IL1B polymorphisms to classify patients at risk for developing prolonged postoperative QTc was superior to a clinical model alone, with a net reclassification improvement of 0.308 (P = 0.0003) and an integrated discrimination improvement of 0.02 (P = 0.000024). The results suggest a contribution of IL1β in modulating susceptibility to postoperative QTc prolongation after cardiac surgery.

  5. Interleukin-1β gene variants are associated with QTc interval prolongation following cardiac surgery: a prospective observational study

    PubMed Central

    Kertai, Miklos D.; Ji, Yunqi; Li, Yi-Ju; Mathew, Joseph P.; Daubert, James P.; Podgoreanu, Mihai V.

    2016-01-01

    Background We characterized cardiac surgery-induced dynamic changes of the corrected QT (QTc) interval and tested the hypothesis that genetic factors are associated with perioperative QTc prolongation independent of clinical and procedural factors. Methods All study subjects were ascertained from a prospective study of patients who underwent elective cardiac surgery during August 1999 to April 2002. We defined a prolonged QTc interval as >440 msec, measured from 24-hr pre- and postoperative 12-lead electrocardiograms. The association of 37 single nucleotide polymorphisms (SNPs) in 21 candidate genes involved in modulating arrhythmia susceptibility pathways with postoperative QTc changes was investigated in a two-stage design with a stage I cohort (n = 497) nested within a stage II cohort (n = 957). Empirical P values (Pemp) were obtained by permutation tests with 10,000 repeats. Results After adjusting for clinical and procedural risk factors, we selected four SNPs (P value range, 0.03-0.1) in stage I, which we then tested in the stage II cohort. Two functional SNPs in the pro-inflammatory cytokine interleukin-1β (IL1β), rs1143633 (odds ratio [OR], 0.71; 95% confidence interval [CI], 0.53 to 0.95; Pemp = 0.02) and rs16944 (OR, 1.31; 95% CI, 1.01 to 1.70; Pemp = 0.04), remained independent predictors of postoperative QTc prolongation. The ability of a clinico-genetic model incorporating the two IL1B polymorphisms to classify patients at risk for developing prolonged postoperative QTc was superior to a clinical model alone, with a net reclassification improvement of 0.308 (P = 0.0003) and an integrated discrimination improvement of 0.02 (P = 0.000024). Conclusion The results suggest a contribution of IL1β in modulating susceptibility to postoperative QTc prolongation after cardiac surgery. PMID:26858093

  6. Prognostic Value of RUNX1 Mutations in AML: A Meta-Analysis

    PubMed

    Jalili, Mahdi; Yaghmaie, Marjan; Ahmadvand, Mohammad; Alimoghaddam, Kamran; Mousavi, Seyed Asadollah; Vaezi, Mohammad; Ghavamzadeh, Ardeshir

    2018-02-26

    The RUNX1 (AML1) gene is a relatively infrequent mutational target in cases of acute myeloid leukemia (AML). Previous work indicated that RUNX1 mutations can have pathological and prognostic implications. To evaluate prognostic value, we conducted a meta-analysis of 4 previously published works with data for survival according to RUNX1 mutation status. Pooled hazard ratios for overall survival and disease-free survival were 1.55 (95% confidence interval (CI) = 1.11–2.15; p-value = 0.01) and 1.76 (95% CI = 1.24–2.52; p-value = 0.002), respectively, for cases positive for RUNX1 mutations. This evidence supports clinical implications of RUNX1 mutations in the development and progression of AML cases and points to the possibility of a distinct category within the newer WHO classification. Though it must be kept in mind that the present work was based on data extracted from observational studies, the findings suggest that the RUNX1 status can contribute to risk-stratification and decision-making in management of AML.
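    Pooled hazard ratios like those above are conventionally obtained by fixed-effect inverse-variance averaging of the log hazard ratios, with each study's standard error recovered from its reported 95% CI. A sketch with hypothetical study inputs (not the meta-analysis's actual data):

    ```python
    from math import exp, log, sqrt

    def pool_hazard_ratios(studies):
        """Fixed-effect inverse-variance pooling of log hazard ratios.
        Each study is (HR, ci_low, ci_high); the SE of log HR is
        recovered from the CI width as (log(hi) - log(lo)) / (2 * 1.96)."""
        num = den = 0.0
        for hr, lo, hi in studies:
            se = (log(hi) - log(lo)) / (2 * 1.96)
            w = 1.0 / (se * se)
            num += w * log(hr)
            den += w
        log_hr = num / den
        se_pooled = sqrt(1.0 / den)
        return (exp(log_hr),
                exp(log_hr - 1.96 * se_pooled),
                exp(log_hr + 1.96 * se_pooled))

    # Hypothetical two-study example
    hr, lo, hi = pool_hazard_ratios([(1.4, 1.0, 1.96), (1.7, 1.2, 2.41)])
    ```

    The pooled estimate falls between the individual study HRs and its interval is narrower than either input, reflecting the added precision of combining studies; random-effects variants widen the interval when between-study heterogeneity is present.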

  7. A model for developing disability confidence.

    PubMed

    Lindsay, Sally; Cancelliere, Sara

    2017-05-15

    Many clinicians, educators, and employers lack disability confidence, which can affect their interactions with, and inclusion of, people with disabilities. Our objective was to explore how disability confidence developed among youth who volunteered with children who have a disability. We conducted 30 in-depth interviews (16 without a disability, 14 with disabilities), with youth aged 15-25. We analyzed our data using an interpretive, qualitative, thematic approach. We identified four main themes that led to the progression of disability confidence including: (1) "disability discomfort," referring to lacking knowledge about disability and experiencing unease around people with disabilities; (2) "reaching beyond comfort zone" where participants increased their understanding of disability and became sensitized to difference; (3) "broadened perspectives" where youth gained exposure to people with disabilities and challenged common misperceptions and stereotypes; and (4) "disability confidence" which includes having knowledge of people with disabilities and holding inclusive, positive attitudes towards them. Volunteering is one way to help develop disability confidence. Youth with and without disabilities both reported a similar process of developing disability confidence; however, there were nuances between the two groups. Implications for Rehabilitation The development of disability confidence is important for enhancing the social inclusion of people with disabilities. Volunteering with people who have a disability, or a disability different from their own, can help to develop disability confidence which involves positive attitudes, empathy, and appropriate communication skills. Clinicians, educators, and employers should consider promoting working with disabled people through such avenues as volunteering or service learning to gain disability confidence.

  8. Pediatric reference value distributions and covariate-stratified reference intervals for 29 endocrine and special chemistry biomarkers on the Beckman Coulter Immunoassay Systems: a CALIPER study of healthy community children.

    PubMed

    Karbasy, Kimiya; Lin, Danny C C; Stoianov, Alexandra; Chan, Man Khun; Bevilacqua, Victoria; Chen, Yunqi; Adeli, Khosrow

    2016-04-01

    The CALIPER program is a national research initiative aimed at closing the gaps in pediatric reference intervals. CALIPER previously reported reference intervals for endocrine and special chemistry markers on Abbott immunoassays. We now report new pediatric reference intervals for immunoassays on the Beckman Coulter Immunoassay Systems and assess platform-specific differences in reference values. A total of 711 healthy children and adolescents from birth to <19 years of age were recruited from the community. Serum samples were collected for measurement of 29 biomarkers on the Beckman Coulter Immunoassay Systems. Statistically relevant age and/or gender-based partitions were determined, outliers removed, and reference intervals calculated in accordance with Clinical and Laboratory Standards Institute (CLSI) EP28-A3c guidelines. Complex profiles were observed for all 29 analytes, necessitating unique age and/or sex-specific partitions. Overall, changes in analyte concentrations observed over the course of development were similar to trends previously reported, and are consistent with biochemical and physiological changes that occur during childhood. Marked differences were observed for some assays including progesterone, luteinizing hormone and follicle-stimulating hormone where reference intervals were higher than those reported on Abbott immunoassays and parathyroid hormone where intervals were lower. This study highlights the importance of determining reference intervals specific for each analytical platform. The CALIPER Pediatric Reference Interval database will enable accurate diagnosis and laboratory assessment of children monitored by Beckman Coulter Immunoassay Systems in health care institutions worldwide. These reference intervals must however be validated by individual labs for the local pediatric population as recommended by CLSI.

  9. Diagnostic value of a pancreatic mass on computed tomography in patients undergoing pancreatoduodenectomy for presumed pancreatic cancer.

    PubMed

    Gerritsen, Arja; Bollen, Thomas L; Nio, C Yung; Molenaar, I Quintus; Dijkgraaf, Marcel G W; van Santvoort, Hjalmar C; Offerhaus, G Johan; Brosens, Lodewijk A; Biermann, Katharina; Sieders, Egbert; de Jong, Koert P; van Dam, Ronald M; van der Harst, Erwin; van Goor, Harry; van Ramshorst, Bert; Bonsing, Bert A; de Hingh, Ignace H; Gerhards, Michael F; van Eijck, Casper H; Gouma, Dirk J; Borel Rinkes, Inne H M; Busch, Olivier R C; Besselink, Marc G H

    2015-07-01

    Previous studies have shown that 5-14% of patients undergoing pancreatoduodenectomy for suspected malignancy ultimately are diagnosed with benign disease. A "pancreatic mass" on computed tomography (CT) is considered to be the strongest predictor of malignancy, but studies describing its diagnostic value are lacking. The aim of this study was to determine the diagnostic value of a pancreatic mass on CT in patients with presumed pancreatic cancer, as well as the interobserver agreement among radiologists and the additional value of reassessment by expert radiologists. Reassessment of preoperative CT scans was performed within a previously described multicenter retrospective cohort study in 344 patients undergoing pancreatoduodenectomy for suspected malignancy (2003-2010). Preoperative CT scans were reassessed by 2 experienced abdominal radiologists separately and subsequently in a consensus meeting, after defining a pancreatic mass as "a measurable space occupying soft tissue density, except for an enlarged papilla or focal steatosis". CT scans of 86 patients with benign and 258 patients with (pre)malignant disease were reassessed. In 66% of patients a pancreatic mass was reported in the original CT report, versus 48% and 50% on reassessment by the 2 expert radiologists separately and 44% in consensus (P < .001 vs original report). Interobserver agreement between the original CT report and expert consensus was fair (kappa = 0.32, 95% confidence interval 0.23-0.42). Between the two expert radiologists, agreement was moderate (kappa = 0.47, 95% confidence interval 0.38-0.56), with disagreement on the presence of a pancreatic mass in 29% of cases. The specificity for malignancy of pancreatic masses identified in expert consensus was twice as high as in the original CT report (87% vs 42%). Positive predictive value increased to 98% after expert consensus, but negative predictive value was low (12%). Clinicians need to be aware of potential considerable
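
    Kappa statistics with 95% confidence intervals, like those quoted above, are computed from a 2x2 agreement table. A minimal sketch using a common first-order large-sample approximation for the standard error (the cell counts in the example are hypothetical, not the study's):

```python
import math

def cohens_kappa_ci(a, b, c, d, z=1.96):
    """Cohen's kappa for a 2x2 agreement table
       [[a, b],    rows = rater 1 (yes/no)
        [c, d]]    cols = rater 2 (yes/no)
    with a simple large-sample 95% confidence interval."""
    n = a + b + c + d
    po = (a + d) / n                                       # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2  # chance agreement
    kappa = (po - pe) / (1 - pe)
    se = math.sqrt(po * (1 - po) / n) / (1 - pe)           # first-order SE approximation
    return kappa, (kappa - z * se, kappa + z * se)

# Hypothetical counts: raters agree "mass" 40x and "no mass" 40x, disagree 20x
kappa, (k_lo, k_hi) = cohens_kappa_ci(40, 10, 10, 40)
```

    More exact variance formulas exist (e.g. Fleiss's), but this approximation is adequate for samples of the size reported here.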

  10. Interval Training

    MedlinePlus

    Healthy Lifestyle Fitness Interval training can help you get the most out of your workout. By Mayo Clinic Staff Are you ready to shake ... spending more time at the gym? Consider aerobic interval training. Once the domain of elite athletes, interval training ...

  11. Epidemiology and the law: courts and confidence intervals.

    PubMed Central

    Christoffel, T; Teret, S P

    1991-01-01

    Beginning with the swine flu litigation of the early 1980s, epidemiological evidence has played an increasingly prominent role in helping the nation's courts deal with alleged causal connections between plaintiffs' diseases or other harm and exposure to specific noxious agents (such as asbestos, toxic waste, radiation, and pharmaceuticals). Judicial reliance on epidemiology has high-lighted the contrast between the nature of scientific proof and of legal proof. Epidemiologists need to recognize and understand the growing involvement of their profession in complex tort litigation. PMID:1746668

  12. Estimation of Confidence Intervals for Multiplication and Efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verbeke, J

    2009-07-17

    Helium-3 tubes are used to detect thermal neutrons by charge collection using the 3He(n,p) reaction. By analyzing the time sequence of neutrons detected by these tubes, one can determine important features about the constitution of a measured object: some materials such as Cf-252 emit several neutrons simultaneously, while others such as uranium and plutonium isotopes multiply the number of neutrons to form bursts. This translates into unmistakable signatures. To determine the type of materials measured, one compares the measured count distribution with the one generated by a theoretical fission chain model. When the neutron background is negligible, the theoretical count distributions can be completely characterized by a pair of parameters, the multiplication M and the detection efficiency ε. While the optimal pair of M and ε can be determined by existing codes such as BigFit, the uncertainty on these parameters has not yet been fully studied. The purpose of this work is to precisely compute the uncertainties on the parameters M and ε, given the uncertainties in the count distribution. By considering different lengths of time-tagged data, we will determine how the uncertainties on M and ε vary with the different count distributions.
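
    The task described, putting a confidence interval on a parameter fitted to a count distribution, can be illustrated on a one-parameter toy case: a likelihood-ratio (deviance) interval for a Poisson rate. This is a generic sketch under made-up counts, not the BigFit method, and a single rate stands in for the (M, ε) pair.

```python
import math

def poisson_rate_ci(counts, crit=3.84):
    """Likelihood-ratio CI for a Poisson rate: all lambda with
    2 * (max log-likelihood - log-likelihood(lambda)) <= crit (3.84 ~ 95%)."""
    n, total = len(counts), sum(counts)
    lam_hat = total / n
    ll = lambda lam: total * math.log(lam) - n * lam  # log-likelihood, constants dropped
    target = ll(lam_hat) - crit / 2.0

    def bisect(lo, hi, increasing):
        # solve ll(lam) == target on a monotone stretch of the likelihood
        for _ in range(100):
            mid = (lo + hi) / 2.0
            if (ll(mid) < target) == increasing:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2.0

    lower = bisect(1e-9, lam_hat, increasing=True)                # rising side
    upper = bisect(lam_hat, 10 * lam_hat + 10, increasing=False)  # falling side
    return lam_hat, (lower, upper)

# Hypothetical gate counts: 50 gates averaging 3 detected neutrons per gate
counts = [2, 4, 3, 3, 1, 5, 3, 2, 4, 3] * 5
lam_hat, (lam_lo, lam_hi) = poisson_rate_ci(counts)
```

    For two parameters the same deviance criterion applies with the chi-square critical value for 2 degrees of freedom (about 5.99), traced over the (M, ε) plane.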

  13. Confidence Intervals for Omega Coefficient: Proposal for Calculus.

    PubMed

    Ventura-León, José Luis

    2018-01-01

    Reliability is understood as a metric property of the scores of a measurement instrument. The omega coefficient (ω) has recently come into use for estimating reliability. However, measurement is never exact, owing to the influence of random error; for this reason it is necessary to compute and report the confidence interval (CI), which locates the true value within a range of measurement. In this context, the article proposes a way to estimate the CI by the bootstrap method and, to facilitate the procedure, provides code in R (free software) so that the calculations can be carried out in a user-friendly way. It is hoped that the article will be of help to researchers in the health sciences.
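
    The article supplies R code for a bootstrap CI on coefficient omega; the same bootstrap idea is sketched below in Python, using coefficient alpha as a simpler stand-in reliability estimate (omega proper requires fitting a one-factor model for the loadings). The item data are simulated.

```python
import random

def cronbach_alpha(items):
    """Coefficient alpha for a list of items, each a list of person scores.
    (A stand-in: omega would instead use factor loadings from a 1-factor fit.)"""
    k, n = len(items), len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

def bootstrap_ci(items, n_boot=500, level=0.95, seed=0):
    """Percentile bootstrap: resample persons, recompute the coefficient."""
    rng = random.Random(seed)
    n = len(items[0])
    reps = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        reps.append(cronbach_alpha([[it[i] for i in idx] for it in items]))
    reps.sort()
    tail = (1 - level) / 2
    return reps[int(n_boot * tail)], reps[int(n_boot * (1 - tail)) - 1]

# Simulated 4-item scale, 120 persons, strong common trait
rng = random.Random(7)
trait = [rng.gauss(0, 2) for _ in range(120)]
items = [[t + rng.gauss(0, 1) for t in trait] for _ in range(4)]
alpha_hat = cronbach_alpha(items)
a_lo, a_hi = bootstrap_ci(items)
```

    Swapping in an omega estimator for `cronbach_alpha` leaves the bootstrap machinery unchanged, which is exactly the appeal of the approach the article describes.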

  14. CIMP status of interval colon cancers: another piece to the puzzle.

    PubMed

    Arain, Mustafa A; Sawhney, Mandeep; Sheikh, Shehla; Anway, Ruth; Thyagarajan, Bharat; Bond, John H; Shaukat, Aasma

    2010-05-01

    Colon cancers diagnosed in the interval after a complete colonoscopy may occur due to limitations of colonoscopy or due to the development of new tumors, possibly reflecting molecular and environmental differences in tumorigenesis resulting in rapid tumor growth. In a previous study from our group, interval cancers (colon cancers diagnosed within 5 years of a complete colonoscopy) were almost four times more likely to demonstrate microsatellite instability (MSI) than non-interval cancers. In this study we extended our molecular analysis to compare the CpG island methylator phenotype (CIMP) status of interval and non-interval colorectal cancers and investigate the relationship between the CIMP and MSI pathways in the pathogenesis of interval cancers. We searched our institution's cancer registry for interval cancers, defined as colon cancers that developed within 5 years of a complete colonoscopy. These were frequency matched in a 1:2 ratio by age and sex to patients with non-interval cancers (defined as colon cancers diagnosed on a patient's first recorded colonoscopy). Archived cancer specimens for all subjects were retrieved and tested for CIMP gene markers. The MSI status of subjects identified between 1989 and 2004 was known from our previous study. Tissue specimens of newly identified cases and controls (between 2005 and 2006) were tested for MSI. There were 1,323 cases of colon cancer diagnosed over the 17-year study period, of which 63 were identified as having interval cancer and matched to 131 subjects with non-interval cancer. Study subjects were almost all Caucasian men. CIMP was present in 57% of interval cancers compared to 33% of non-interval cancers (P=0.004). As shown previously, interval cancers were more likely than non-interval cancers to occur in the proximal colon (63% vs. 39%; P=0.002) and to have MSI (29% vs. 11%; P=0.004). In a multivariable logistic regression model, proximal location (odds ratio (OR) 1.85; 95% confidence interval (CI) 1
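
    Odds ratios with 95% confidence intervals of the kind reported above come from a 2x2 table via the standard Woolf (log-normal) method. In the sketch below, the cell counts are approximate reconstructions from the reported CIMP percentages (57% of 63 interval vs. 33% of 131 non-interval cancers), so the output is illustrative rather than a reproduction of the published estimates.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for the 2x2 table [[a, b], [c, d]] with a Woolf
    (log-normal) 95% confidence interval."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    return or_, (math.exp(math.log(or_) - z * se_log),
                 math.exp(math.log(or_) + z * se_log))

# Approximate counts: interval CIMP+/CIMP- ~36/27, non-interval ~43/88
or_, (or_lo, or_hi) = odds_ratio_ci(36, 27, 43, 88)
```

    A lower confidence bound above 1.0 corresponds to the statistically significant association the abstract reports (P=0.004).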

  15. Emotional Confidants in Ethnic Communities: Social Network Analysis of Korean American Older Adults

    PubMed Central

    Jang, Yuri; Kim, Kyungmin; Park, Nan Sook; Chiriboga, David A.

    2017-01-01

    Objective: Ethnic communities often serve as the primary source of emotional support for older immigrants. This study aims to identify individuals who are more likely to be nominated as emotional confidants by age peers in the ethnic community and to examine factors contributing to the likelihood of being a more frequently endorsed confidant. Method: Data were drawn from a survey with 675 older Korean Americans. Using the name-generator approach in Social Network Analysis (SNA), participants were asked to list the names of three emotional confidants among age peers in the community. Results: A higher level of popularity (i.e., in-degree centrality) was predicted by male gender, advanced education, lower functional disability, fewer symptoms of depression, and higher levels of participation in social activities. Discussion: Our findings suggest the value of SNA as a means of identifying the key emotional confidants in the community and utilizing them in community-based interventions. PMID:26082133

  16. Decoded fMRI neurofeedback can induce bidirectional confidence changes within single participants.

    PubMed

    Cortese, Aurelio; Amano, Kaoru; Koizumi, Ai; Lau, Hakwan; Kawato, Mitsuo

    2017-04-01

    Neurofeedback studies using real-time functional magnetic resonance imaging (rt-fMRI) have recently incorporated the multi-voxel pattern decoding approach, allowing for fMRI to serve as a tool to manipulate fine-grained neural activity embedded in voxel patterns. Because of its tremendous potential for clinical applications, certain questions regarding decoded neurofeedback (DecNef) must be addressed. Specifically, can the same participants learn to induce neural patterns in opposite directions in different sessions? If so, how does previous learning affect subsequent induction effectiveness? These questions are critical because neurofeedback effects can last for months, but the short- to mid-term dynamics of such effects are unknown. Here we employed a within-subjects design, where participants underwent two DecNef training sessions to induce behavioural changes of opposing directionality (up or down regulation of perceptual confidence in a visual discrimination task), with the order of training counterbalanced across participants. Behavioral results indicated that the manipulation was strongly influenced by the order and the directionality of neurofeedback training. We applied nonlinear mathematical modeling to parametrize four main consequences of DecNef: main effect of change in confidence, strength of down-regulation of confidence relative to up-regulation, maintenance of learning effects, and anterograde learning interference. Modeling results revealed that DecNef successfully induced bidirectional confidence changes in different sessions within single participants. Furthermore, the effect of up- compared to down-regulation was more prominent, and confidence changes (regardless of the direction) were largely preserved even after a week-long interval. Lastly, the effect of the second session was markedly diminished as compared to the effect of the first session, indicating strong anterograde learning interference. These results are interpreted in the framework

  17. An interval model updating strategy using interval response surface models

    NASA Astrophysics Data System (ADS)

    Fang, Sheng-En; Zhang, Qiu-Hu; Ren, Wei-Xin

    2015-08-01

    Stochastic model updating provides an effective way of handling uncertainties existing in real-world structures. In general, probabilistic theories, fuzzy mathematics or interval analyses are involved in the solution of inverse problems. In practice, however, probability distributions or membership functions of structural parameters are often unavailable due to insufficient information about the structure. In such cases an interval model updating procedure offers the advantage of simplicity, since only the upper and lower bounds of parameters and responses are sought. To this end, this study develops a new concept of interval response surface models for the purpose of efficiently implementing the interval model updating procedure. The frequent interval overestimation due to the use of interval arithmetic can be maximally avoided, leading to accurate estimation of parameter intervals. Meanwhile, the establishment of an interval inverse problem is highly simplified, accompanied by a saving of computational costs. By this means a relatively simple and cost-efficient interval updating process can be achieved. Lastly, the feasibility and reliability of the developed method have been verified against a numerical mass-spring system and also against a set of experimentally tested steel plates.

  18. The Institution of Advertising: Predictors of Cross-National Differences in Consumer Confidence.

    ERIC Educational Resources Information Center

    Zinkhan, George M.; Balazs, Anne L.

    1998-01-01

    Contributes to scholarship on advertising and cross-cultural studies by exploring cultural factors affecting customer confidence in advertising. Uses a sample of 16 European nations to test G. Hofstede's theory of cross-national values. Finds that Hofstede's dimensions of uncertainty avoidance, masculinity, and individualism are important…

  19. The dose delivery effect of the different Beam ON interval in FFF SBRT: TrueBEAM

    NASA Astrophysics Data System (ADS)

    Tawonwong, T.; Suriyapee, S.; Oonsiri, S.; Sanghangthum, T.; Oonsiri, P.

    2016-03-01

    The purpose of this study is to determine the dose delivery effect of different Beam ON intervals in Flattening Filter Free Stereotactic Body Radiation Therapy (FFF-SBRT). Three 10 MV-FFF SBRT plans (2 half-rotating Rapid Arc arcs, 9 to 10 Gy/fraction) were selected and irradiated at three different intervals (100%, 50% and 25%) using the RPM gating system. Plan verification was performed with the ArcCHECK for gamma analysis and an ionization chamber for point dose measurement. The dose delivery time for each interval was recorded. For gamma analysis (2% and 2 mm criteria), the average percent pass of all plans for the 100%, 50% and 25% intervals was 86.1±3.3%, 86.0±3.0% and 86.1±3.3%, respectively. For point dose measurement, the average ratios of each interval to the treatment planning were 1.012±0.015, 1.011±0.014 and 1.011±0.013 for the 100%, 50% and 25% intervals, respectively. The average dose delivery time increased from 74.3±5.0 seconds for the 100% interval to 154.3±12.6 and 347.9±20.3 seconds for the 50% and 25% intervals, respectively. The same quality of dose delivery across the different Beam ON intervals in FFF-SBRT on TrueBEAM was demonstrated. While the 100% interval represents the breath-hold treatment technique, free-breathing treatments using the RPM gating system can be delivered with confidence.

  20. Intervality and coherence in complex networks

    NASA Astrophysics Data System (ADS)

    Domínguez-García, Virginia; Johnson, Samuel; Muñoz, Miguel A.

    2016-06-01

    Food webs—networks of predators and prey—have long been known to exhibit "intervality": species can generally be ordered along a single axis in such a way that the prey of any given predator tend to lie on unbroken compact intervals. Although the meaning of this axis—usually identified with a "niche" dimension—has remained a mystery, it is assumed to lie at the basis of the highly non-trivial structure of food webs. With this in mind, most trophic network modelling has for decades been based on assigning species a niche value by hand. However, we argue here that intervality should not be considered the cause but rather a consequence of food-web structure. First, analysing a set of 46 empirical food webs, we find that they also exhibit predator intervality: the predators of any given species are as likely to be contiguous as the prey are, but in a different ordering. Furthermore, this property is not exclusive of trophic networks: several networks of genes, neurons, metabolites, cellular machines, airports, and words are found to be approximately as interval as food webs. We go on to show that a simple model of food-web assembly which does not make use of a niche axis can nevertheless generate significant intervality. Therefore, the niche dimension (in the sense used for food-web modelling) could in fact be the consequence of other, more fundamental structural traits. We conclude that a new approach to food-web modelling is required for a deeper understanding of ecosystem assembly, structure, and function, and propose that certain topological features thought to be specific of food webs are in fact common to many complex networks.

  1. Lower Leg Injury Reference Values and Risk Curves from Survival Analysis for Male and Female Dummies: Meta-analysis of Postmortem Human Subject Tests.

    PubMed

    Yoganandan, Narayan; Arun, Mike W J; Pintar, Frank A; Banerjee, Anjishnu

    2015-01-01

    Derive lower leg injury risk functions using survival analysis and determine injury reference values (IRV) applicable to human mid-size male and small-size female anthropometries by conducting a meta-analysis of experimental data from different studies under axial impact loading to the foot-ankle-leg complex. Specimen-specific dynamic peak force, age, total body mass, and injury data were obtained from tests conducted by applying the external load to the dorsal surface of the foot of postmortem human subject (PMHS) foot-ankle-leg preparations. Calcaneus and/or tibia injuries, alone or in combination and with/without involvement of adjacent articular complexes, were included in the injury group. Injury and noninjury tests were included. Maximum axial loads recorded by a load cell attached to the proximal end of the preparation were used. Data were analyzed by treating force as the primary variable. Age was considered as the covariate. Data were censored based on the number of tests conducted on each specimen and whether it remained intact or sustained injury; that is, right, left, and interval censoring. The best fits from different distributions were based on the Akaike information criterion; mean and plus and minus 95% confidence intervals were obtained; and normalized confidence interval sizes (quality indices) were determined at 5, 10, 25, and 50% risk levels. The normalization was based on the mean curve. Using human-equivalent age as 45 years, data were normalized and risk curves were developed for the 50th and 5th percentile human size of the dummies. Out of the available 114 tests (76 fracture and 38 no injury) from 5 groups of experiments, survival analysis was carried out using 3 groups consisting of 62 tests (35 fracture and 27 no injury). Peak forces associated with 4 specific risk levels at 25, 45, and 65 years of age are given along with probability curves (mean and plus and minus 95% confidence intervals) for PMHS and normalized data applicable to

  2. Training health professionals to recruit into challenging randomized controlled trials improved confidence: the development of the QuinteT randomized controlled trial recruitment training intervention.

    PubMed

    Mills, Nicola; Gaunt, Daisy; Blazeby, Jane M; Elliott, Daisy; Husbands, Samantha; Holding, Peter; Rooshenas, Leila; Jepson, Marcus; Young, Bridget; Bower, Peter; Tudur Smith, Catrin; Gamble, Carrol; Donovan, Jenny L

    2018-03-01

    The objective of this study was to describe and evaluate a training intervention for recruiting patients to randomized controlled trials (RCTs), particularly for those anticipated to be difficult for recruitment. One of three training workshops was offered to surgeons and one to research nurses. Self-confidence in recruitment was measured through questionnaires before and up to 3 months after training; perceived impact of training on practice was assessed after. Data were analyzed using two-sample t-tests and supplemented with findings from the content analysis of free-text comments. Sixty-seven surgeons and 32 nurses attended. Self-confidence scores for all 10 questions increased after training [range of mean scores before 5.1-6.9 and after 6.9-8.2 (scale 0-10); all 95% confidence intervals are above 0 and all P-values <0.05]. Awareness of hidden challenges of recruitment following training was high: surgeons' mean score was 8.8 [standard deviation (SD), 1.2] and nurses' 8.4 (SD, 1.3) (scale 0-10); 50% (19/38) of surgeons and 40% (10/25) of nurses reported on a 4-point Likert scale that training had made "a lot" of difference to their RCT discussions. Analysis of free text revealed this was mostly in relation to how to convey equipoise, explain randomization, and manage treatment preferences. Surgeons and research nurses reported increased self-confidence in discussing RCTs with patients, a raised awareness of hidden challenges, and a positive impact on recruitment practice following QuinteT RCT Recruitment Training. Training will be made more available and evaluated in relation to recruitment rates and informed consent. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  3. [Sources of leader's confidence in organizations].

    PubMed

    Ikeda, Hiroshi; Furukawa, Hisataka

    2006-04-01

    The purpose of this study was to examine the sources of confidence that organization leaders had. As potential sources of the confidence, we focused on fulfillment of expectations made by self and others, reflection on good as well as bad job experiences, and awareness of job experiences in terms of commonality, differentiation, and multiple viewpoints. A questionnaire was administered to 170 managers of Japanese companies. Results were as follows: First, confidence in leaders was more strongly related to fulfillment of expectations made by self and others than reflection on and awareness of job experiences. Second, the confidence was weakly related to internal processing of job experiences, in the form of commonality awareness and reflection on good job experiences. And finally, years of managerial experiences had almost no relation to the confidence. These findings suggested that confidence in leaders was directly acquired from fulfillment of expectations made by self and others, rather than indirectly through internal processing of job experiences. Implications of the findings for leadership training were also discussed.

  4. Detectability of auditory signals presented without defined observation intervals

    NASA Technical Reports Server (NTRS)

    Watson, C. S.; Nichols, T. L.

    1976-01-01

    Ability to detect tones in noise was measured without defined observation intervals. Latency density functions were estimated for the first response following a signal and, separately, for the first response following randomly distributed instances of background noise. Detection performance was measured by the maximum separation between the cumulative latency density functions for signal-plus-noise and for noise alone. Values of the index of detectability, estimated by this procedure, were approximately those obtained with a 2-dB weaker signal and defined observation intervals. Simulation of defined- and non-defined-interval tasks with an energy detector showed that this device performs very similarly to the human listener in both cases.

  5. Confidence set inference with a prior quadratic bound

    NASA Technical Reports Server (NTRS)

    Backus, George E.

    1989-01-01

    In the uniqueness part of a geophysical inverse problem, the observer wants to predict all likely values of P unknown numerical properties z = (z_1, ..., z_P) of the earth from measurement of D other numerical properties y^0 = (y_1^0, ..., y_D^0), using full or partial knowledge of the statistical distribution of the random errors in y^0. The data space Y containing y^0 is D-dimensional, so when the model space X is infinite-dimensional the linear uniqueness problem usually is insoluble without prior information about the correct earth model x. If that information is a quadratic bound on x, Bayesian inference (BI) and stochastic inversion (SI) inject spurious structure into x, implied by neither the data nor the quadratic bound. Confidence set inference (CSI) provides an alternative inversion technique free of this objection. Confidence set inference is illustrated in the problem of estimating the geomagnetic field B at the core-mantle boundary (CMB) from components of B measured on or above the earth's surface.

  6. Confidence-Building Measures in Philippine Security.

    DTIC Science & Technology

    1998-05-01

    USAWC Strategy Research Project: Confidence-Building Measures in Philippine Security, by Lieutenant Colonel Ramon G. Santos, Philippine Army. U.S. Army War College, Carlisle Barracks, PA 17013-5050. Format: Strategy Research Project.

  7. Interpregnancy Interval and Adverse Pregnancy Outcomes: An Analysis of Successive Pregnancies.

    PubMed

    Hanley, Gillian E; Hutcheon, Jennifer A; Kinniburgh, Brooke A; Lee, Lily

    2017-03-01

    To examine the association between interpregnancy interval and maternal-neonate health when matching women to their successive pregnancies to control for differences in maternal risk factors and compare these results with traditional unmatched designs. We conducted a retrospective cohort study of 38,178 women with three or more deliveries (two or greater interpregnancy intervals) between 2000 and 2015 in British Columbia, Canada. We examined interpregnancy interval (0-5, 6-11, 12-17, 18-23 [reference], 24-59, and 60 months or greater) in relation to neonatal outcomes (preterm birth [less than 37 weeks of gestation], small-for-gestational-age birth [less than the 10th centile], use of neonatal intensive care, low birth weight [less than 2,500 g]) and maternal outcomes (gestational diabetes, beginning the subsequent pregnancy obese [body mass index 30 or greater], and preeclampsia-eclampsia). We used conditional logistic regression to compare interpregnancy intervals within the same mother and unconditional (unmatched) logistic regression to enable comparison with prior research. Analyses using the traditional unmatched design showed significantly increased risks associated with short interpregnancy intervals (eg, there were 232 preterm births [12.8%] in 0-5 months compared with 501 [8.2%] in the 18-23 months reference group; adjusted odds ratio [OR] for preterm birth 1.53, 95% confidence interval [CI] 1.35-1.73). However, these risks were eliminated in within-woman matched analyses (adjusted OR for preterm birth 0.85, 95% CI 0.71-1.02). Matched results indicated that short interpregnancy intervals were significantly associated with increased risk of gestational diabetes (adjusted OR 1.35, 95% CI 1.02-1.80 for 0-5 months) and beginning the subsequent pregnancy obese (adjusted OR 1.61, 95% CI 1.05-2.45 for 0-5 months and adjusted OR 1.43, 95% CI 1.10-1.87 for 6-11 months). Previously reported associations between short interpregnancy intervals and adverse neonatal

  8. Experimental congruence of interval scale production from paired comparisons and ranking for image evaluation

    NASA Astrophysics Data System (ADS)

    Handley, John C.; Babcock, Jason S.; Pelz, Jeff B.

    2003-12-01

    Image evaluation tasks are often conducted using paired comparisons or ranking. To elicit interval scales, both methods rely on Thurstone's Law of Comparative Judgment in which objects closer in psychological space are more often confused in preference comparisons by a putative discriminal random process. It is often debated whether paired comparisons and ranking yield the same interval scales. An experiment was conducted to assess scale production using paired comparisons and ranking. For this experiment a Pioneer Plasma Display and Apple Cinema Display were used for stimulus presentation. Observers performed rank order and paired comparisons tasks on both displays. For each of five scenes, six images were created by manipulating attributes such as lightness, chroma, and hue using six different settings. The intention was to simulate the variability from a set of digital cameras or scanners. Nineteen subjects (5 females, 14 males) ranging from 19 to 51 years of age participated in this experiment. Using a paired comparison model and a ranking model, scales were estimated for each display and image combination yielding ten scale pairs, ostensibly measuring the same psychological scale. The Bradley-Terry model was used for the paired comparisons data and the Bradley-Terry-Mallows model was used for the ranking data. Each model was fit using maximum likelihood estimation and assessed using likelihood ratio tests. Approximate 95% confidence intervals were also constructed using likelihood ratios. Model fits for paired comparisons were satisfactory for all scales except those from two image/display pairs; the ranking model fit uniformly well on all data sets. Arguing from overlapping confidence intervals, we conclude that paired comparisons and ranking produce no conflicting decisions regarding ultimate ordering of treatment preferences, but paired comparisons yield greater precision at the expense of lack-of-fit.
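
    The Bradley-Terry strengths underlying such paired-comparison scales can be fit with the classic Zermelo/MM iteration. A minimal sketch with a hypothetical win matrix; the Bradley-Terry-Mallows ranking variant and the likelihood-ratio confidence machinery the study uses are beyond this sketch.

```python
def bradley_terry(wins, n_iter=200):
    """Bradley-Terry strengths via the classic Zermelo/MM iteration.
    wins[i][j] = number of times object i was preferred to object j."""
    n = len(wins)
    p = [1.0] * n
    for _ in range(n_iter):
        new = []
        for i in range(n):
            w_i = sum(wins[i])  # total wins of object i
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            new.append(w_i / denom if denom > 0 else p[i])
        s = sum(new)
        p = [x * n / s for x in new]  # fix the arbitrary overall scale
    return p

# Hypothetical preference counts for three image renderings
p = bradley_terry([[0, 8, 9],
                   [2, 0, 7],
                   [1, 3, 0]])
```

    The logarithms of the fitted strengths give the interval scale; the iteration converges whenever the comparison graph is strongly connected.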

  9. Predictors and Prognostic Value of Worsening Renal Function During Admission in HFpEF Versus HFrEF: Data From the KorAHF (Korean Acute Heart Failure) Registry.

    PubMed

    Kang, Jeehoon; Park, Jin Joo; Cho, Young-Jin; Oh, Il-Young; Park, Hyun-Ah; Lee, Sang Eun; Kim, Min-Seok; Cho, Hyun-Jai; Lee, Hae-Young; Choi, Jin Oh; Hwang, Kyung-Kuk; Kim, Kye Hun; Yoo, Byung-Su; Kang, Seok-Min; Baek, Sang Hong; Jeon, Eun-Seok; Kim, Jae-Joong; Cho, Myeong-Chan; Chae, Shung Chull; Oh, Byung-Hee; Choi, Dong-Ju

    2018-03-13

    Worsening renal function (WRF) is associated with adverse outcomes in patients with heart failure. We investigated the predictors and prognostic value of WRF during admission in patients with preserved ejection fraction (HFpEF) versus those with reduced ejection fraction (HFrEF). A total of 5625 patients were enrolled in the KorAHF (Korean Acute Heart Failure) registry. WRF was defined as an absolute increase in creatinine of ≥0.3 mg/dL. Transient WRF was defined as recovery of creatinine at discharge, whereas persistent WRF was indicated by a nonrecovered creatinine level. HFpEF and HFrEF were defined as a left ventricular ejection fraction ≥50% and ≤40%, respectively. Among the total population, WRF occurred in 3101 patients (55.1%). By heart failure subgroup, WRF occurred more frequently in HFrEF (57.0% versus 51.3% in HFrEF and HFpEF, respectively; P<0.001). Prevalence of WRF increased as creatinine clearance decreased in both heart failure subgroups. Among various predictors of WRF, chronic renal failure was the strongest predictor. WRF was an independent predictor of adverse in-hospital outcomes (HFrEF: odds ratio, 2.75; 95% confidence interval, 1.50-5.02; P=0.001; HFpEF: odds ratio, 9.48; 95% confidence interval, 1.19-75.89; P=0.034) and 1-year mortality (HFrEF: hazard ratio, 1.41; 95% confidence interval, 1.12-1.78; P=0.004 versus HFpEF: hazard ratio, 1.72; 95% confidence interval, 1.23-2.42; P=0.002). Transient WRF was a risk factor for 1-year mortality, whereas persistent WRF had no additive risk compared to transient WRF. In patients with acute heart failure, WRF is an independent predictor of adverse in-hospital and follow-up outcomes in both HFrEF and HFpEF, though with a different effect size. URL: https://www.clinicaltrials.gov. Unique identifier: NCT01389843. © 2018 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.

  10. On-line confidence monitoring during decision making.

    PubMed

    Dotan, Dror; Meyniel, Florent; Dehaene, Stanislas

    2018-02-01

    Humans can readily assess their degree of confidence in their decisions. Two models of confidence computation have been proposed: post hoc computation using post-decision variables and heuristics, versus online computation using continuous assessment of evidence throughout the decision-making process. Here, we arbitrate between these theories by continuously monitoring finger movements during a manual sequential decision-making task. Analysis of finger kinematics indicated that subjects kept separate online records of evidence and confidence: finger deviation continuously reflected the ongoing accumulation of evidence, whereas finger speed continuously reflected the momentary degree of confidence. Furthermore, end-of-trial finger speed predicted the post-decisional subjective confidence rating. These data indicate that confidence is computed on-line, throughout the decision process. Speed-confidence correlations were previously interpreted as a post-decision heuristics, whereby slow decisions decrease subjective confidence, but our results suggest an adaptive mechanism that involves the opposite causality: by slowing down when unconfident, participants gain time to improve their decisions. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. New Estimates of Intergenerational Time Intervals for the Calculation of Age and Origins of Mutations

    PubMed Central

    Tremblay, Marc; Vézina, Hélène

    2000-01-01

    Summary Intergenerational time intervals are frequently used in human population-genetics studies concerned with the ages and origins of mutations. In most cases, mean intervals of 20 or 25 years are used, regardless of the demographic characteristics of the population under study. Although these characteristics may vary from prehistoric to historical times, we suggest that these values are probably too low and that the ages of some mutations may have been underestimated. Analyses were performed by using the BALSAC Population Register (Quebec, Canada), from which several intergenerational comparisons can be made. Family reconstitutions were used to measure interval lengths and variations in descending lineages. Various parameters were considered, such as spouse age at marriage, parental age, and reproduction levels. Mother-child and father-child intervals were compared. Intergenerational male and female intervals were also analyzed in 100 extended ascending genealogies. Results showed that a mean value of 30 years is a better estimate of intergenerational intervals than 20 or 25 years. As marked differences between male and female interval length were observed, specific values are proposed for mtDNA, autosomal, X-chromosomal, and Y-chromosomal loci. The applicability of these results for age estimates of mutations is discussed. PMID:10677323

  12. Impact of confidence number on accuracy of the SureSight Vision Screener.

    PubMed

    2010-02-01

    To assess the relation between the confidence number provided by the Welch Allyn SureSight Vision Screener and screening accuracy, and to determine whether repeated testing to achieve a higher confidence number improves screening accuracy in pre-school children. Lay and nurse screeners screened 1452 children enrolled in the Vision in Preschoolers (VIP) Phase II Study. All children also underwent a comprehensive eye examination. By using statistical comparison of proportions, we examined sensitivity and specificity for detecting any ocular condition targeted for detection in the VIP study and conditions grouped by severity and by type (amblyopia, strabismus, significant refractive error, and unexplained decreased visual acuity) among children who had confidence numbers ≤4 (retest necessary), 5 (retest if possible), or ≥6 (acceptable). Among the 687 (47.3%) children who had repeated testing by either lay or nurse screeners because of a low confidence number (<6) for one or both eyes in the initial testing, the same analyses were also conducted to compare results between the initial reading and the repeated test reading with the highest confidence number in the same child. These analyses were based on the failure criteria associated with 90% specificity for detecting any VIP condition in VIP Phase II. A lower confidence number category was associated with higher sensitivity (0.71, 0.65, and 0.59 for ≤4, 5, and ≥6, respectively; p = 0.04) but no statistical difference in specificity (0.85, 0.85, and 0.91; p = 0.07) for detecting any VIP-targeted condition. Children with any VIP-targeted condition were as likely to be detected using the initial confidence number reading as using the higher confidence number reading from repeated testing. A higher confidence number obtained during screening with the SureSight Vision Screener is not associated with better screening accuracy, and repeated testing to reach the manufacturer's recommended minimum value is not helpful.

  13. Binary Interval Search: a scalable algorithm for counting interval intersections.

    PubMed

    Layer, Ryan M; Skadron, Kevin; Robins, Gabriel; Hall, Ira M; Quinlan, Aaron R

    2013-01-01

    The comparison of diverse genomic datasets is fundamental to understand genome biology. Researchers must explore many large datasets of genome intervals (e.g. genes, sequence alignments) to place their experimental results in a broader context and to make new discoveries. Relationships between genomic datasets are typically measured by identifying intervals that intersect, that is, they overlap and thus share a common genome interval. Given the continued advances in DNA sequencing technologies, efficient methods for measuring statistically significant relationships between many sets of genomic features are crucial for future discovery. We introduce the Binary Interval Search (BITS) algorithm, a novel and scalable approach to interval set intersection. We demonstrate that BITS outperforms existing methods at counting interval intersections. Moreover, we show that BITS is intrinsically suited to parallel computing architectures, such as graphics processing units by illustrating its utility for efficient Monte Carlo simulations measuring the significance of relationships between sets of genomic intervals. https://github.com/arq5x/bits.
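    The counting strategy behind BITS can be illustrated with two binary searches per query: with the database intervals' start and end coordinates sorted independently, the number of intervals overlapping a query [qs, qe] is (number of starts ≤ qe) minus (number of ends < qs). A minimal Python sketch of this idea (not the authors' implementation, which targets CPUs and GPUs in C):

    ```python
    from bisect import bisect_left, bisect_right

    def count_intersections(db, queries):
        """Count, for each query interval, how many database intervals it
        intersects (closed intervals; overlap when start <= qe and end >= qs).

        BITS-style counting: an interval fails to overlap [qs, qe] only if it
        starts after qe or ends before qs, so two binary searches on the
        independently sorted start and end lists yield the overlap count.
        """
        starts = sorted(s for s, _ in db)
        ends = sorted(e for _, e in db)
        return [bisect_right(starts, qe) - bisect_left(ends, qs)
                for qs, qe in queries]
    ```

    For example, with database intervals (1,5), (4,10), (12,20), the queries (5,12) and (6,11) intersect 3 and 1 of them, respectively.
    
    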

  14. Pediatric reference intervals for random urine calcium, phosphorus and total protein.

    PubMed

    Slev, Patricia R; Bunker, Ashley M; Owen, William E; Roberts, William L

    2010-09-01

    The aim of this study was to establish age appropriate reference intervals for calcium (Ca), phosphorus (P) and total protein (UTP) in random urine samples. All analytes were measured using the Roche MODULAR P analyzer and normalized to creatinine (Cr). Our study cohort consisted of 674 boys and 728 girls between 7 and 17 years old (y.o.), which allowed us to determine the central 95% reference intervals with 90% confidence intervals by non-parametric analysis partitioned by both gender and 2-year age intervals for each analyte [i.e. boys in age group 7-9 years (7-9 boys); girls in age group 7-9 years (7-9 girls), etc.]. Results for the upper limits of the central 95% reference interval were: for Ca/Cr, 0.27 (16,17 y.o.) to 0.46 mg/mg (7-9 y.o.) for the girls and 0.26 (16,17 y.o.) to 0.43 mg/mg (7-9 y.o.) for the boys; for P/Cr, 0.85 (16,17 y.o.) to 1.44 mg/mg (7-9 y.o.) for the girls and 0.87 (16,17 y.o.) to 1.68 mg/mg (7-9 y.o.) for the boys; for UTP/Cr, 0.30 (7-9 y.o.) to 0.34 mg/mg (10-12 y.o.) for the girls and 0.19 (16,17, y.o.) to 0.26 mg/mg (13-15 y.o.) for the boys. Upper reference limits decreased with increasing age, and age was a statistically significant variable for all analytes. Eight separate age- and gender-specific reference intervals are proposed per analyte.
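    The nonparametric construction described here (a central 95% reference interval, with 90% confidence intervals around each limit) can be sketched as below. The simulated data and the bootstrap approach to the limit confidence intervals are illustrative assumptions, not taken from the study:

    ```python
    import numpy as np

    def reference_interval(x):
        """Central 95% nonparametric reference interval (2.5th/97.5th percentiles)."""
        return np.percentile(x, [2.5, 97.5])

    def limit_ci(x, q, conf=90, n_boot=2000, seed=0):
        """Bootstrap confidence interval for one reference limit (percentile q)."""
        rng = np.random.default_rng(seed)
        boot = [np.percentile(rng.choice(x, size=len(x), replace=True), q)
                for _ in range(n_boot)]
        tail = (100 - conf) / 2
        return np.percentile(boot, [tail, 100 - tail])

    # simulated positive, right-skewed values standing in for one
    # age/sex partition of a urine analyte/creatinine ratio
    rng = np.random.default_rng(1)
    values = rng.lognormal(mean=-1.6, sigma=0.5, size=300)
    lower, upper = reference_interval(values)
    lo_ci = limit_ci(values, 2.5)    # 90% CI around the lower limit
    hi_ci = limit_ci(values, 97.5)   # 90% CI around the upper limit
    ```

    The guideline literature also describes rank-order confidence intervals for the limits; the bootstrap used here is one common alternative.
    
    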

  15. Sample size, confidence, and contingency judgement.

    PubMed

    Clément, Mélanie; Mercier, Pierre; Pastò, Luigi

    2002-06-01

    According to statistical models, the acquisition function of contingency judgement is due to confidence increasing with sample size. According to associative models, the function reflects the accumulation of associative strength on which the judgement is based. Which view is right? Thirty university students assessed the relation between a fictitious medication and a symptom of skin discoloration in conditions that varied sample size (4, 6, 8 or 40 trials) and contingency (delta P = .20, .40, .60 or .80). Confidence was also collected. Contingency judgement was lower for smaller samples, while confidence level correlated inversely with sample size. This dissociation between contingency judgement and confidence contradicts the statistical perspective.

  16. The Effect of Adaptive Confidence Strategies in Computer-Assisted Instruction on Learning and Learner Confidence

    ERIC Educational Resources Information Center

    Warren, Richard Daniel

    2012-01-01

    The purpose of this research was to investigate the effects of including adaptive confidence strategies in instructionally sound computer-assisted instruction (CAI) on learning and learner confidence. Seventy-one general educational development (GED) learners recruited from various GED learning centers at community colleges in the southeast United…

  17. Evaluation of locally established reference intervals for hematology and biochemistry parameters in Western Kenya.

    PubMed

    Odhiambo, Collins; Oyaro, Boaz; Odipo, Richard; Otieno, Fredrick; Alemnji, George; Williamson, John; Zeh, Clement

    2015-01-01

    Important differences have been demonstrated in laboratory parameters from healthy persons in different geographical regions and populations, mostly driven by a combination of genetic, demographic, nutritional, and environmental factors. Despite this, European and North American derived laboratory reference intervals are used in African countries for patient management, clinical trial eligibility, and toxicity determination; which can result in misclassification of healthy persons as having laboratory abnormalities. An observational prospective cohort study known as the Kisumu Incidence Cohort Study (KICoS) was conducted to estimate the incidence of HIV seroconversion and identify determinants of successful recruitment and retention in preparation for an HIV vaccine/prevention trial among young adults and adolescents in western Kenya. Laboratory values generated from the KICoS were compared to published region-specific reference intervals and the 2004 NIH DAIDS toxicity tables used for the trial. About 1106 participants were screened for the KICoS between January 2007 and June 2010. Nine hundred and fifty-three participants aged 16 to 34 years, HIV-seronegative, clinically healthy, and non-pregnant were selected for this analysis. Median and 95% reference intervals were calculated for hematological and biochemistry parameters. When compared with both published region-specific reference values and the 2004 NIH DAIDS toxicity table, it was shown that the use of locally established reference intervals would have resulted in fewer participants classified as having abnormal hematological or biochemistry values compared to US derived reference intervals from DAIDS (10% classified as abnormal by local parameters vs. >40% by US DAIDS). Blood urea nitrogen was most often out of range if US based intervals were used: <10% abnormal by local intervals compared to >83% by US based reference intervals. Differences in reference intervals for hematological and biochemical

  18. Statistical regularities in the return intervals of volatility

    NASA Astrophysics Data System (ADS)

    Wang, F.; Weber, P.; Yamasaki, K.; Havlin, S.; Stanley, H. E.

    2007-01-01

    We discuss recent results concerning statistical regularities in the return intervals of volatility in financial markets. In particular, we show how the analysis of volatility return intervals, defined as the time between two volatilities larger than a given threshold, can help to get a better understanding of the behavior of financial time series. We find scaling in the distribution of return intervals for thresholds ranging over a factor of 25, from 0.6 to 15 standard deviations, and also for various time windows from one minute up to 390 min (an entire trading day). Moreover, these results are universal for different stocks, commodities, interest rates as well as currencies. We also analyze the memory in the return intervals which relates to the memory in the volatility and find two scaling regimes, ℓ<ℓ* with α1=0.64±0.02 and ℓ> ℓ* with α2=0.92±0.04; these exponent values are similar to results of Liu et al. for the volatility. As an application, we use the scaling and memory properties of the return intervals to suggest a possibly useful method for estimating risk.
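    Extracting return intervals, the gaps between successive threshold exceedances of a volatility series, is straightforward; a sketch with illustrative data (not the authors' code):

    ```python
    import numpy as np

    def return_intervals(volatility, threshold):
        """Return intervals: gaps (in samples) between successive points
        where the volatility series exceeds the given threshold."""
        exceed = np.flatnonzero(np.asarray(volatility) > threshold)
        return np.diff(exceed)
    ```

    For instance, the series [0.1, 2.0, 0.3, 0.2, 2.5, 3.0] with threshold 1.0 exceeds at indices 1, 4, 5, giving return intervals [3, 1]. The scaling analyses in the paper are then distributional statistics over such interval sequences.
    
    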

  19. Poor Positive Predictive Value of Lyme Disease Serologic Testing in an Area of Low Disease Incidence.

    PubMed

    Lantos, Paul M; Branda, John A; Boggan, Joel C; Chudgar, Saumil M; Wilson, Elizabeth A; Ruffin, Felicia; Fowler, Vance; Auwaerter, Paul G; Nigrovic, Lise E

    2015-11-01

    Lyme disease is diagnosed by 2-tiered serologic testing in patients with a compatible clinical illness, but the significance of positive test results in low-prevalence regions has not been investigated. We reviewed the medical records of patients who tested positive for Lyme disease with standardized 2-tiered serologic testing between 2005 and 2010 at a single hospital system in a region with little endemic Lyme disease. Based on clinical findings, we calculated the positive predictive value of Lyme disease serology. Next, we reviewed the outcome of serologic testing in patients with select clinical syndromes compatible with disseminated Lyme disease (arthritis, cranial neuropathy, or meningitis). During the 6-year study period 4723 patients were tested for Lyme disease, but only 76 (1.6%) had positive results by established laboratory criteria. Among 70 seropositive patients whose medical records were available for review, 12 (17%; 95% confidence interval, 9%-28%) were found to have Lyme disease (6 with documented travel to endemic regions). During the same time period, 297 patients with a clinical illness compatible with disseminated Lyme disease underwent 2-tiered serologic testing. Six of them (2%; 95% confidence interval, 0.7%-4.3%) were seropositive, 3 with documented travel and 1 who had an alternative diagnosis that explained the clinical findings. In this low-prevalence cohort, fewer than 20% of positive Lyme disease tests are obtained from patients with clinically likely Lyme disease. Positive Lyme disease test results may have little diagnostic value in this setting. © The Author 2015. Published by Oxford University Press on behalf of the Infectious Diseases Society of America. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
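    The binomial proportions with 95% confidence intervals reported here can be approximately reproduced with a standard interval; the abstract does not state which method the authors used, so the Wilson score interval below is an assumption:

    ```python
    from math import sqrt

    def wilson_ci(k, n, z=1.96):
        """Wilson score 95% confidence interval for a binomial proportion k/n."""
        p = k / n
        denom = 1 + z * z / n
        center = (p + z * z / (2 * n)) / denom
        half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
        return center - half, center + half

    # 12 of 70 seropositive patients had clinically likely Lyme disease
    lo, hi = wilson_ci(12, 70)  # roughly 10%-28%, close to the reported 9%-28%
    ```

    An exact (Clopper-Pearson) interval would match the published 9%-28% slightly more closely; the difference between the two methods is small at this sample size.
    
    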

  20. Binary Interval Search: a scalable algorithm for counting interval intersections

    PubMed Central

    Layer, Ryan M.; Skadron, Kevin; Robins, Gabriel; Hall, Ira M.; Quinlan, Aaron R.

    2013-01-01

    Motivation: The comparison of diverse genomic datasets is fundamental to understand genome biology. Researchers must explore many large datasets of genome intervals (e.g. genes, sequence alignments) to place their experimental results in a broader context and to make new discoveries. Relationships between genomic datasets are typically measured by identifying intervals that intersect, that is, they overlap and thus share a common genome interval. Given the continued advances in DNA sequencing technologies, efficient methods for measuring statistically significant relationships between many sets of genomic features are crucial for future discovery. Results: We introduce the Binary Interval Search (BITS) algorithm, a novel and scalable approach to interval set intersection. We demonstrate that BITS outperforms existing methods at counting interval intersections. Moreover, we show that BITS is intrinsically suited to parallel computing architectures, such as graphics processing units by illustrating its utility for efficient Monte Carlo simulations measuring the significance of relationships between sets of genomic intervals. Availability: https://github.com/arq5x/bits. Contact: arq5x@virginia.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23129298

  1. Estimating the Time Interval Between Exposure to the World Trade Center Disaster and Incident Diagnoses of Obstructive Airway Disease

    PubMed Central

    Glaser, Michelle S.; Webber, Mayris P.; Zeig-Owens, Rachel; Weakley, Jessica; Liu, Xiaoxue; Ye, Fen; Cohen, Hillel W.; Aldrich, Thomas K.; Kelly, Kerry J.; Nolan, Anna; Weiden, Michael D.; Prezant, David J.; Hall, Charles B.

    2014-01-01

    Respiratory disorders are associated with occupational and environmental exposures. The latency period between exposure and disease onset remains uncertain. The World Trade Center (WTC) disaster presents a unique opportunity to describe the latency period for obstructive airway disease (OAD) diagnoses. This prospective cohort study of New York City firefighters compared the timing and incidence of physician-diagnosed OAD relative to WTC exposure. Exposure was categorized by WTC arrival time as high (on the morning of September 11, 2001), moderate (after noon on September 11, 2001, or on September 12, 2001), or low (during September 13–24, 2001). We modeled relative rates and 95% confidence intervals of OAD incidence by exposure over the first 5 years after September 11, 2001, estimating the times of change in the relative rate with change point models. We observed a change point at 15 months after September 11, 2001. Before 15 months, the relative rate for the high- versus low-exposure group was 3.96 (95% confidence interval: 2.51, 6.26) and thereafter, it was 1.76 (95% confidence interval: 1.26, 2.46). Incident OAD was associated with WTC exposure for at least 5 years after September 11, 2001. There were higher rates of new-onset OAD among the high-exposure group during the first 15 months and, to a lesser extent, throughout follow-up. This difference in relative rate by exposure occurred despite full and free access to health care for all WTC-exposed firefighters, demonstrating the persistence of WTC-associated OAD risk. PMID:24980522

  2. Confidant Relations of the Aged.

    ERIC Educational Resources Information Center

    Tigges, Leann M.; And Others

    The confidant relationship is a qualitatively distinct dimension of the emotional support system of the aged, yet the composition of the confidant network has been largely neglected in research on aging. Persons (N=940) 60 years of age and older were interviewed about their socio-environmental setting. From the enumeration of their relatives,…

  3. Sex-specific reference intervals of hematologic and biochemical analytes in Sprague-Dawley rats using the nonparametric rank percentile method.

    PubMed

    He, Qili; Su, Guoming; Liu, Keliang; Zhang, Fangcheng; Jiang, Yong; Gao, Jun; Liu, Lida; Jiang, Zhongren; Jin, Minwu; Xie, Huiping

    2017-01-01

    Hematologic and biochemical analytes of Sprague-Dawley rats are commonly used to determine treatment-induced effects and to evaluate organ dysfunction in toxicological safety assessments, but reference intervals have not been well established for these analytes. Reference intervals as presently defined for these analytes have neither used internationally recommended statistical methods nor been stratified by sex. Thus, we aimed to establish sex-specific reference intervals for hematologic and biochemical parameters in Sprague-Dawley rats according to the Clinical and Laboratory Standards Institute C28-A3 and American Society for Veterinary Clinical Pathology guidelines. Hematology and biochemistry blood samples were collected from 500 healthy Sprague-Dawley rats (250 males and 250 females) in the control groups. We measured 24 hematologic analytes with the Sysmex XT-2100i analyzer and 9 biochemical analytes with the Olympus AU400 analyzer. We then determined statistically relevant sex partitions and calculated reference intervals, including corresponding 90% confidence intervals, using the nonparametric rank percentile method. We observed that most hematologic and biochemical analytes of Sprague-Dawley rats were significantly influenced by sex. Males had higher hemoglobin, hematocrit, red blood cell count, red cell distribution width, mean corpuscular volume, mean corpuscular hemoglobin, white blood cell count, neutrophils, lymphocytes, monocytes, percentage of neutrophils, percentage of monocytes, alanine aminotransferase, aspartate aminotransferase, and triglycerides compared to females. Females had higher mean corpuscular hemoglobin concentration, plateletcrit, platelet count, eosinophils, percentage of lymphocytes, percentage of eosinophils, creatinine, glucose, total cholesterol and urea compared to males. Sex partition was required for most hematologic and biochemical analytes in Sprague-Dawley rats. We established sex-specific reference

  4. Association of cardiac implantable electronic devices with survival in bifascicular block and prolonged PR interval on electrocardiogram.

    PubMed

    Moulki, Naeem; Kealhofer, Jessica V; Benditt, David G; Gravely, Amy; Vakil, Kairav; Garcia, Santiago; Adabag, Selcuk

    2018-06-16

    Bifascicular block and prolonged PR interval on the electrocardiogram (ECG) have been associated with complete heart block and sudden cardiac death. We sought to determine if cardiac implantable electronic devices (CIED) improve survival in these patients. We assessed survival in relation to CIED status among 636 consecutive patients with bifascicular block and prolonged PR interval on the ECG. In survival analyses, CIED was considered as a time-varying covariate. Average age was 76 ± 9 years, and 99% of the patients were men. A total of 167 (26%) underwent CIED (127 pacemaker only) implantation at baseline (n = 23) or during follow-up (n = 144). During 5.4 ± 3.8 years of follow-up, 83 (13%) patients developed complete or high-degree atrioventricular block and 375 (59%) died. Patients with a CIED had a longer survival compared to those without a CIED in the traditional, static analysis (log-rank p < 0.0001) but not when CIED was considered as a time-varying covariate (log-rank p = 0.76). In the multivariable model, patients with a CIED had a 34% lower risk of death (hazard ratio 0.66, 95% confidence interval 0.52-0.83; p = 0.001) than those without CIED in the traditional analysis but not in the time-varying covariate analysis (hazard ratio 1.05, 95% confidence interval 0.79-1.38; p = 0.76). Results did not change in the subgroup with a pacemaker only. Bifascicular block and prolonged PR interval on ECG are associated with a high incidence of complete atrioventricular block and mortality. However, CIED implantation does not have a significant influence on survival when time-varying nature of CIED implantation is considered.

  5. Ventricular Cycle Length Characteristics Estimative of Prolonged RR Interval during Atrial Fibrillation

    PubMed Central

    CIACCIO, EDWARD J.; BIVIANO, ANGELO B.; GAMBHIR, ALOK; EINSTEIN, ANDREW J.; GARAN, HASAN

    2014-01-01

    Background When atrial fibrillation (AF) is incessant, imaging during a prolonged ventricular RR interval may improve image quality. It was hypothesized that long RR intervals could be predicted from preceding RR values. Methods From the PhysioNet database, electrocardiogram RR intervals were obtained from 74 persistent AF patients. An RR interval lengthened by at least 250 ms beyond the immediately preceding RR interval (termed T0 and T1, respectively) was considered prolonged. A two-parameter scatterplot was used to predict the occurrence of a prolonged interval T0. The scatterplot parameters were: (1) RR variability (RRv) estimated as the average second derivative from 10 previous pairs of RR differences, T13–T2, and (2) Tm–T1, the difference between Tm, the mean from T13 to T2, and T1. For each patient, scatterplots were constructed using preliminary data from the first hour. The ranges of parameters 1 and 2 were adjusted to maximize the proportion of prolonged RR intervals within range. These constraints were used for prediction of prolonged RR in test data collected during the second hour. Results The mean prolonged event was 1.0 seconds in duration. Actual prolonged events were identified with a mean positive predictive value (PPV) of 80% in the test set. PPV was >80% in 36 of 74 patients. An average of 10.8 prolonged RR intervals per 60 minutes was correctly identified. Conclusions A method was developed to predict prolonged RR intervals using two parameters and prior statistical sampling for each patient. This or similar methodology may help improve cardiac imaging in many longstanding persistent AF patients. PMID:23998759
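    The two scatterplot parameters can be sketched as follows; the exact formulas (in particular the use of absolute second differences for RR variability) are one reading of the abstract, not the authors' code:

    ```python
    def rr_features(rr):
        """Two predictor features from the 13 most recent RR intervals
        (rr[0] = T13, oldest ... rr[12] = T1, most recent), in seconds.

        One interpretation of the abstract's definitions:
          1) RRv: average second difference over the 10 prior RR triples
          2) Tm - T1: mean of T13..T2 minus the most recent interval
        """
        if len(rr) != 13:
            raise ValueError("need the 13 most recent RR intervals")
        t1 = rr[-1]
        tm = sum(rr[:-1]) / 12.0  # mean of T13..T2
        second_diffs = [rr[i + 2] - 2 * rr[i + 1] + rr[i] for i in range(10)]
        rrv = sum(abs(d) for d in second_diffs) / 10.0
        return rrv, tm - t1
    ```

    A prolonged next interval T0 is then the event T0 ≥ T1 + 0.25 s; prediction amounts to checking whether (RRv, Tm - T1) falls inside the per-patient ranges tuned on the first hour of data.
    
    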

  6. [The effect of esmolol on corrected-QT interval, corrected-QT interval dispersion changes seen during anesthesia induction in hypertensive patients taking an angiotensin-converting enzyme inhibitor].

    PubMed

    Ceker, Zahit; Takmaz, Suna Akın; Baltaci, Bülent; Başar, Hülya

    2015-01-01

    The importance of minimizing the exaggerated sympatho-adrenergic responses and QT interval and QT interval dispersion changes that may develop due to laryngoscopy and tracheal intubation during anesthesia induction in the hypertensive patients is clear. Esmolol decreases the hemodynamic response to laryngoscopy and intubation. However, the effect of esmolol in decreasing the prolonged QT interval and QT interval dispersion as induced by laryngoscopy and intubation is controversial. We investigated the effect of esmolol on the hemodynamic, and corrected-QT interval and corrected-QT interval dispersion changes seen during anesthesia induction in hypertensive patients using angiotensin converting enzyme inhibitors. 60 ASA I-II patients, with essential hypertension using angiotensin converting enzyme inhibitors were included in the study. The esmolol group received esmolol at a bolus dose of 500mcg/kg followed by a 100mcg/kg/min infusion which continued until the 4th min after intubation. The control group received 0.9% saline similar to the esmolol group. The mean blood pressure, heart rate values and the electrocardiogram records were obtained as baseline values before the anesthesia, 5min after esmolol and saline administration, 3min after the induction and 30s, 2min and 4min after intubation. The corrected-QT interval was shorter in the esmolol group (p=0.012), the corrected-QT interval dispersion interval was longer in the control group (p=0.034) and the mean heart rate was higher in the control group (p=0.022) 30s after intubation. The risk of arrhythmia frequency was higher in the control group in the 4-min period following intubation (p=0.038). Endotracheal intubation was found to prolong corrected-QT interval and corrected-QT interval dispersion, and increase the heart rate during anesthesia induction with propofol in hypertensive patients using angiotensin converting enzyme inhibitors. These effects were prevented with esmolol (500mcg/kg bolus, followed by

  7. Pigeons' Choices between Fixed-Interval and Random-Interval Schedules: Utility of Variability?

    ERIC Educational Resources Information Center

    Andrzejewski, Matthew E.; Cardinal, Claudia D.; Field, Douglas P.; Flannery, Barbara A.; Johnson, Michael; Bailey, Kathleen; Hineline, Philip N.

    2005-01-01

    Pigeons' choosing between fixed-interval and random-interval schedules of reinforcement was investigated in three experiments using a discrete-trial procedure. In all three experiments, the random-interval schedule was generated by sampling a probability distribution at an interval (and in multiples of the interval) equal to that of the…

  8. Intuitive Feelings of Warmth and Confidence in Insight and Noninsight Problem Solving of Magic Tricks.

    PubMed

    Hedne, Mikael R; Norman, Elisabeth; Metcalfe, Janet

    2016-01-01

    The focus of the current study is on intuitive feelings of insight during problem solving and the extent to which such feelings are predictive of successful problem solving. We report the results from an experiment (N = 51) that applied a procedure where the to-be-solved problems were 32 short (15 s) video recordings of magic tricks. The procedure included metacognitive ratings similar to the "warmth ratings" previously used by Metcalfe and colleagues, as well as confidence ratings. At regular intervals during problem solving, participants indicated the perceived closeness to the correct solution. Participants also indicated directly whether each problem was solved by insight or not. Problems that people claimed were solved by insight were characterized by higher accuracy and higher confidence than noninsight solutions. There was no difference between the two types of solution in warmth ratings, however. Confidence ratings were more strongly associated with solution accuracy for noninsight than insight trials. Moreover, for insight trials the participants were more likely to repeat their incorrect solutions on a subsequent recognition test. The results have implications for understanding people's metacognitive awareness of the cognitive processes involved in problem solving. They also have general implications for our understanding of how intuition and insight are related.

  9. Intuitive Feelings of Warmth and Confidence in Insight and Noninsight Problem Solving of Magic Tricks

    PubMed Central

    Hedne, Mikael R.; Norman, Elisabeth; Metcalfe, Janet

    2016-01-01

    The focus of the current study is on intuitive feelings of insight during problem solving and the extent to which such feelings are predictive of successful problem solving. We report the results from an experiment (N = 51) that applied a procedure where the to-be-solved problems were 32 short (15 s) video recordings of magic tricks. The procedure included metacognitive ratings similar to the “warmth ratings” previously used by Metcalfe and colleagues, as well as confidence ratings. At regular intervals during problem solving, participants indicated the perceived closeness to the correct solution. Participants also indicated directly whether each problem was solved by insight or not. Problems that people claimed were solved by insight were characterized by higher accuracy and higher confidence than noninsight solutions. There was no difference between the two types of solution in warmth ratings, however. Confidence ratings were more strongly associated with solution accuracy for noninsight than insight trials. Moreover, for insight trials the participants were more likely to repeat their incorrect solutions on a subsequent recognition test. The results have implications for understanding people's metacognitive awareness of the cognitive processes involved in problem solving. They also have general implications for our understanding of how intuition and insight are related. PMID:27630598

  10. Confidence Leak in Perceptual Decision Making.

    PubMed

    Rahnev, Dobromir; Koizumi, Ai; McCurdy, Li Yan; D'Esposito, Mark; Lau, Hakwan

    2015-11-01

    People live in a continuous environment in which the visual scene changes on a slow timescale. It has been shown that to exploit such environmental stability, the brain creates a continuity field in which objects seen seconds ago influence the perception of current objects. What is unknown is whether a similar mechanism exists at the level of metacognitive representations. In three experiments, we demonstrated a robust intertask confidence leak: confidence in one's response on a given task or trial influenced confidence on the following task or trial. This confidence leak could not be explained by response priming or attentional fluctuations. Better ability to modulate confidence leak predicted higher capacity for metacognition as well as greater gray matter volume in the prefrontal cortex. A model based on normative principles from Bayesian inference explained the results by postulating that observers subjectively estimate the perceptual signal strength in a stable environment. These results point to the existence of a novel metacognitive mechanism mediated by regions in the prefrontal cortex. © The Author(s) 2015.

  11. Effect of High Intensity Interval Training on Cardiac Function in Children with Obesity: A Randomised Controlled Trial.

    PubMed

    Ingul, Charlotte B; Dias, Katrin A; Tjonna, Arnt E; Follestad, Turid; Hosseini, Mansoureh S; Timilsina, Anita S; Hollekim-Strand, Siri M; Ro, Torstein B; Davies, Peter S W; Cain, Peter A; Leong, Gary M; Coombes, Jeff S

    2018-02-13

    High intensity interval training (HIIT) confers superior cardiovascular health benefits to moderate intensity continuous training (MICT) in adults and may be efficacious for improving diminished cardiac function in obese children. The aim of this study was to compare the effects of HIIT, MICT and nutrition advice interventions on resting left ventricular (LV) peak systolic tissue velocity (S') in obese children. Ninety-nine obese children were randomised into one of three 12-week interventions: 1) HIIT [n = 33, 4 × 4 min bouts at 85-95% maximum heart rate (HRmax), 3 times/week] and nutrition advice; 2) MICT [n = 32, 44 min at 60-70% HRmax, 3 times/week] and nutrition advice; and 3) nutrition advice only (nutrition) [n = 34]. Twelve weeks of HIIT and MICT were equally efficacious, but superior to nutrition, for normalising resting LV S' in children with obesity (estimated mean difference 1.0 cm/s, 95% confidence interval 0.5 to 1.6 cm/s, P < 0.001; estimated mean difference 0.7 cm/s, 95% confidence interval 0.2 to 1.3 cm/s, P = 0.010, respectively). Twelve weeks of HIIT and MICT were superior to nutrition advice only for improving resting LV systolic function in obese children. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Extended International Normalized Ratio testing intervals for warfarin-treated patients.

    PubMed

    Barnes, G D; Kong, X; Cole, D; Haymart, B; Kline-Rogers, E; Almany, S; Dahu, M; Ekola, M; Kaatz, S; Kozlowski, J; Froehlich, J B

    2018-05-15

    Essentials: Warfarin typically requires International Normalized Ratio (INR) testing at least every 4 weeks. We implemented extended INR testing for stable warfarin patients in six anticoagulation clinics. Use of extended INR testing increased from 41.8% to 69.3% over the 3-year study. Use of extended INR testing appeared safe and effective. Background: A previous single-center randomized trial suggested that patients with stable International Normalized Ratio (INR) values could safely receive INR testing as infrequently as every 12 weeks. Objective: To test the success of implementation of an extended INR testing interval for stable warfarin patients in a practice-based, multicenter collaborative of anticoagulation clinics. Methods: At six anticoagulation clinics, patients were identified as being eligible for extended INR testing on the basis of prior INR value stability and minimal warfarin dose changes between 2014 and 2016. We assessed the frequency with which anticoagulation clinic providers recommended an extended INR testing interval (> 5 weeks) to eligible patients. We also explored safety outcomes for eligible patients, including next INR values, bleeding events, and emergency department visits. Results: At least one eligible period for extended INR testing was identified in 890 of 3362 (26.5%) warfarin-treated patients. Overall, the use of extended INR testing in eligible patients increased from 41.8% in the first quarter of 2014 to 69.3% in the fourth quarter of 2016. The proportion of out-of-range next INR values was similar between eligible patients who did and did not have an extended INR testing interval (27.3% versus 28.4%, respectively). The numbers of major bleeding events were not different between the two groups, but rates of clinically relevant non-major bleeding (0.02 per 100 patient-years versus 0.09 per 100 patient-years) and emergency department visits (0.07 per 100 patient-years versus 0.19 per 100 patient-years) were lower for

  13. Predictive value and efficiency of laboratory testing.

    PubMed

    Galen, R S

    1980-11-01

    Literature on determining reference values and reference intervals on "normal" or "healthy" individuals is abundant. It is impossible, however, to evaluate a data set of reference values and select a suitable reference interval that will be meaningful for the practice of medicine. The reference interval, no matter how derived statistically, tells us nothing about disease. This is the main reason the concepts of "normal values" have failed us and why "reference values" will prove similarly disappointing. By studying these same constituents in a variety of disease states as well, it will be possible to select "referent values" that will make the test procedure meaningful for diagnostic purposes. In order to obtain meaningful referent values for predicting disease, it is necessary to study not only the "healthy" reference population, but patients with the disease in question, and patients who are free of the disease in question but who have other diseases. Studies of this type are not frequently found for laboratory tests that are in common use today.

  14. Impact of Increasing Inter-pregnancy Interval on Maternal and Infant Health

    PubMed Central

    Wendt, Amanda; Gibbs, Cassandra M.; Peters, Stacey; Hogue, Carol J.

    2015-01-01

    Short inter-pregnancy intervals (IPIs) have been associated with adverse maternal and infant health outcomes in the literature. However, many studies in this area have been lacking in quality and appropriate control for confounders known to be associated with both short IPIs and poor outcomes. The objective of this systematic review was to assess this relationship using more rigorous criteria, based on GRADE (Grading of Recommendations Assessment, Development and Evaluation) methodology. We found too few higher-quality studies of the impact of IPIs (measured as the time between the birth of a previous child and conception of the next child) on maternal health to reach conclusions about maternal nutrition, morbidity or mortality. However, the evidence for infant effects justified meta-analyses. We found significant impacts of short IPIs for extreme preterm birth [<6 m adjusted odds ratio (aOR): 1.58 [95% confidence interval (CI) 1.40, 1.78], 6–11 m aOR: 1.23 [1.03, 1.46

  15. Targeting Low Career Confidence Using the Career Planning Confidence Scale

    ERIC Educational Resources Information Center

    McAuliffe, Garrett; Jurgens, Jill C.; Pickering, Worth; Calliotte, James; Macera, Anthony; Zerwas, Steven

    2006-01-01

    The authors describe the development and validation of a test of career planning confidence that makes possible the targeting of specific problem issues in employment counseling. The scale, developed using a rational process and the authors' experience with clients, was tested for criterion-related validity against 2 other measures. The scale…

  16. Confidence intervals for predicting lumber strength properties based on ratios of percentiles from two Weibull populations.

    Treesearch

    Richard A. Johnson; James W. Evans; David W. Green

    2003-01-01

    Ratios of strength properties of lumber are commonly used to calculate property values for standards. Although originally proposed in terms of means, ratios are being applied without regard to position in the distribution. It is now known that lumber strength properties are generally not normally distributed. Therefore, nonparametric methods are often used to derive...

  17. Forecast experiment: do temporal and spatial b value variations along the Calaveras fault portend M ≥ 4.0 earthquakes?

    USGS Publications Warehouse

    Parsons, Tom

    2007-01-01

    The power law distribution of earthquake magnitudes and frequencies is a fundamental scaling relationship used for forecasting. However, can its slope (b value) be used on individual faults as a stress indicator? Some have concluded that b values drop just before large shocks. Others suggested that temporally stable low b value zones identify future large-earthquake locations. This study assesses the frequency of b value anomalies portending M ≥ 4.0 shocks versus how often they do not. I investigated M ≥ 4.0 Calaveras fault earthquakes because there have been 25 over the 37-year duration of the instrumental catalog on the most active southern half of the fault. With that relatively large sample, I conducted retrospective time and space earthquake forecasts. I calculated temporal b value changes in 5-km-radius cylindrical volumes of crust that were significant at 90% confidence, but these changes were poor forecasters of M ≥ 4.0 earthquakes. M ≥ 4.0 events were as likely to happen at times of high b values as they were at low ones. However, I could not rule out a hypothesis that spatial b value anomalies portend M ≥ 4.0 events; of 20 M ≥ 4 shocks that could be studied, 6 to 8 (depending on calculation method) occurred where b values were significantly less than the spatial mean, 1 to 2 happened above the mean, and 10 to 13 occurred within 90% confidence intervals of the mean and were thus inconclusive. Thus spatial b value variation might be a useful forecast tool, but resolution is poor, even on seismically active faults.
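    The b value analyzed above is conventionally estimated with Aki's (1965) maximum-likelihood formula, whose standard error (b divided by the square root of the sample size) yields the kind of confidence bounds the study uses. A minimal sketch; the synthetic catalog, completeness magnitude, and 90% confidence level below are illustrative assumptions, not the study's Calaveras data.

```python
import math
import random

def b_value_mle(mags, m_c, dm=0.0):
    """Aki (1965) maximum-likelihood b-value for magnitudes >= m_c.

    Returns (b, sigma_b), where sigma_b = b / sqrt(N) is Aki's standard
    error. dm is the magnitude bin width (Utsu's binning correction);
    dm=0 treats magnitudes as continuous.
    """
    sample = [m for m in mags if m >= m_c]
    n = len(sample)
    mean_m = sum(sample) / n
    b = math.log10(math.e) / (mean_m - (m_c - dm / 2.0))
    return b, b / math.sqrt(n)

# Synthetic Gutenberg-Richter catalog with true b = 1.0 above m_c = 2.0.
random.seed(0)
m_c = 2.0
mags = [m_c + random.expovariate(math.log(10) * 1.0) for _ in range(5000)]
b, sb = b_value_mle(mags, m_c)
lo, hi = b - 1.645 * sb, b + 1.645 * sb  # ~90% confidence interval
```

    With 5000 events the estimate lands close to the true value of 1.0, which illustrates why the study's small per-volume samples make temporal b-value anomalies hard to resolve.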

  18. Potential confounding in the association between short birth intervals and increased neonatal, infant, and child mortality

    PubMed Central

    Perin, Jamie; Walker, Neff

    2015-01-01

    intervals less than 18 months, standard regression adjustment for confounding factors estimated a risk ratio for neonatal mortality of 2.28 (95% confidence interval: 2.18–2.37). This same effect estimated within mother is 1.57 (95% confidence interval: 1.52–1.63), a decline of almost one-third in the effect on neonatal mortality. Conclusions Neonatal, infant, and child mortality are strongly and significantly related to preceding birth interval, where births within a short interval of time after the previous birth have increased mortality. Previous analyses have demonstrated this relationship on average across all births; however, women who have short spaces between births are different from women with long spaces. Among women 35 years and older where a comparison of birth spaces within mother is possible, we find a much reduced although still significant effect of short birth spaces on child mortality. PMID:26562139
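    Risk ratios with 95% confidence intervals, like the 2.28 (2.18-2.37) reported above, are conventionally computed on the log scale. A sketch of that calculation; the event counts below are hypothetical, chosen only to reproduce a 2.28 point estimate, and are not the study's data.

```python
import math

def risk_ratio_ci(a, n1, b, n2, z=1.96):
    """Risk ratio for a/n1 exposed events vs b/n2 unexposed events,
    with a log-normal 95% confidence interval."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)  # SE of log(rr)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# Hypothetical counts, not the study's data:
# 228 deaths per 10000 short-interval births vs 100 per 10000 comparison births.
rr, lo, hi = risk_ratio_ci(228, 10000, 100, 10000)
```

    The interval is symmetric on the log scale, which is why published bounds such as 2.18-2.37 sit asymmetrically around the point estimate on the natural scale.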

  19. Potential confounding in the association between short birth intervals and increased neonatal, infant, and child mortality.

    PubMed

    Perin, Jamie; Walker, Neff

    2015-01-01

    regression adjustment for confounding factors estimated a risk ratio for neonatal mortality of 2.28 (95% confidence interval: 2.18-2.37). This same effect estimated within mother is 1.57 (95% confidence interval: 1.52-1.63), a decline of almost one-third in the effect on neonatal mortality. Neonatal, infant, and child mortality are strongly and significantly related to preceding birth interval, where births within a short interval of time after the previous birth have increased mortality. Previous analyses have demonstrated this relationship on average across all births; however, women who have short spaces between births are different from women with long spaces. Among women 35 years and older where a comparison of birth spaces within mother is possible, we find a much reduced although still significant effect of short birth spaces on child mortality.

  20. Optimal Wind Power Uncertainty Intervals for Electricity Market Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Ying; Zhou, Zhi; Botterud, Audun

    It is important to select an appropriate uncertainty level of the wind power forecast for power system scheduling and electricity market operation. Traditional methods hedge against a predefined level of wind power uncertainty, such as a specific confidence interval or uncertainty set, which leaves open the question of how best to select the appropriate uncertainty level. To bridge this gap, this paper proposes a model to optimize the forecast uncertainty intervals of wind power for power system scheduling problems, with the aim of achieving the best trade-off between economics and reliability. We then reformulate and linearize the model into a mixed-integer linear program (MILP) without strong assumptions on the shape of the probability distribution. To investigate the impacts on cost, reliability, and prices in an electricity market, we apply the proposed model to a two-settlement electricity market based on a six-bus test system and on a power system representing the U.S. state of Illinois. The results show that the proposed method not only helps to balance the economics and reliability of power system scheduling, but also helps to stabilize energy prices in electricity market operation.

  1. The 2009 Retirement Confidence Survey: economy drives confidence to record lows; many looking to work longer.

    PubMed

    Helman, Ruth; Copeland, Craig; VanDerhei, Jack

    2009-04-01

    RECORD LOW CONFIDENCE LEVELS: Workers who say they are very confident about having enough money for a comfortable retirement this year hit the lowest level in 2009 (13 percent) since the Retirement Confidence Survey started asking the question in 1993, continuing a two-year decline. Retirees also posted a new low in confidence about having a financially secure retirement, with only 20 percent now saying they are very confident (down from 41 percent in 2007). THE ECONOMY, INFLATION, COST OF LIVING ARE THE BIG CONCERNS: Not surprisingly, workers overall who have lost confidence over the past year about affording a comfortable retirement most often cite the recent economic uncertainty, inflation, and the cost of living as primary factors. In addition, certain negative experiences, such as job loss or a pay cut, loss of retirement savings, or an increase in debt, almost always contribute to loss of confidence among those who experience them. RETIREMENT EXPECTATIONS DELAYED: Workers apparently expect to work longer because of the economic downturn: 28 percent of workers in the 2009 RCS say the age at which they expect to retire has changed in the past year. Of those, the vast majority (89 percent) say that they have postponed retirement with the intention of increasing their financial security. Nevertheless, the median (mid-point) worker expects to retire at age 65, with 21 percent planning to push on into their 70s. The median retiree actually retired at age 62, and 47 percent of retirees say they retired sooner than planned. WORKING IN RETIREMENT: More workers are also planning to supplement their income in retirement by working for pay. The percentage of workers planning to work after they retire has increased to 72 percent in 2009 (up from 66 percent in 2007). This compares with 34 percent of retirees who report they actually worked for pay at some time during their retirement. GREATER WORRY ABOUT BASIC AND HEALTH EXPENSES: Workers who say they are very confident in

  2. Investigation of self-compassion, self-confidence and submissive behaviors of nursing students studying in different curriculums.

    PubMed

    Eraydın, Şahizer; Karagözoğlu, Şerife

    2017-07-01

    Today, nursing education, which educates the future members of the nursing profession, aims to instill in students high self-esteem, self-confidence and self-compassion, independence, assertiveness and the ability to establish good human relations. This aim can only be achieved through a contemporary curriculum supporting students in the educational process and enabling those in charge to make arrangements by taking the character and needs of each individual into account. The study aims to investigate self-compassion, self-confidence and submissive behaviours of undergraduate nursing students studying in different curriculums. This descriptive, cross-sectional, comparative study was carried out with the 1st- and 4th-year students of three schools, each of which has a different curriculum: conventional, integrated and Problem Based Learning (PBL). The study data were collected with the Self-Compassion Scale (SCS), Self-Confidence Scale (CS) and Submissive Acts Scale (SAS). The data were analyzed through frequency distribution, means, analysis of variance and the significance test for the difference between two means. The mean scores the participating students obtained from the Self-Compassion, Self-Confidence and Submissive Acts Scales were 3.31±0.56, 131.98±20.85 and 36.48±11.43, respectively. The integrated program students' mean self-compassion and self-confidence scores were statistically significantly higher, and their mean submissive behaviour scores lower, than those of the students studying in the other two programs (p<0.05). The analysis of the correlation between the mean scale scores revealed statistically significant relationships between the SCS and CS values (r=0.388, p<0.001), between the SCS and SAS values (r=-0.307, p<0.001) and between the CS and SAS values (r=-0.325, p<0.001). In line with the study results, it can be said that the participating nursing students tended to display moderate levels of

  3. Evaluation of CS (o-chlorobenzylidene malononitrile) concentrations during U.S. Army mask confidence training.

    PubMed

    Hout, Joseph J; Kluchinsky, Timothy; LaPuma, Peter T; White, Duvel W

    2011-10-01

    All soldiers in the U.S. Army are required to complete mask confidence training with o-chlorobenzylidene malononitrile (CS). To instill confidence in the protective capability of the military protective mask, CS is thermally dispersed in a room where soldiers wearing military protective masks are required to conduct various physical exercises, break the seal of their mask, speak, and remove their mask. Soldiers immediately feel the irritating effects of CS when the seal of the mask is broken, which reinforces the mask's ability to shield the soldier from airborne chemical hazards. In the study described in this article, the authors examined the CS concentration inside a mask confidence chamber operated in accordance with U.S. Army training guidelines. The daily average CS concentrations ranged from 2.33-3.29 mg/m3 and exceeded the threshold limit value ceiling, the recommended exposure limit ceiling, and the concentration deemed immediately dangerous to life and health. The minimum and maximum CS concentration used during mask confidence training should be evaluated.

  4. [Investigation of reference intervals of blood gas and acid-base analysis assays in China].

    PubMed

    Zhang, Lu; Wang, Wei; Wang, Zhiguo

    2015-10-01

    To investigate and analyze the upper and lower limits of reference intervals in blood gas and acid-base analysis assays and their sources. The data on reference intervals were collected from the first run of the 2014 External Quality Assessment (EQA) program in blood gas and acid-base analysis assays performed by the National Center for Clinical Laboratories (NCCL). All abnormal values and errors were eliminated. Statistical analysis was performed with SPSS 13.0 and Excel 2007 on the upper and lower limits and sources of reference intervals for 7 blood gas and acid-base analysis assays, i.e. pH value, partial pressure of carbon dioxide (PCO2), partial pressure of oxygen (PO2), Na+, K+, Ca2+ and Cl-. Values were further grouped by instrument system and the differences between groups were analyzed. There were 225 laboratories submitting information on the reference intervals they had been using. The three main sources of reference intervals were the National Guide to Clinical Laboratory Procedures [37.07% (400/1 079)], instructions of instrument manufacturers [31.23% (337/1 079)] and instructions of reagent manufacturers [23.26% (251/1 079)]. Approximately 35.1% (79/225) of the laboratories had validated the reference intervals they used. The differences in upper and lower limits for most assays were moderate, in both minimum and maximum (i.e. the upper limits of pH value were 7.00-7.45, the lower limits of Na+ were 130.00-156.00 mmol/L) and mean and median (i.e. the upper limits of K+ were 5.04 mmol/L and 5.10 mmol/L, the upper limits of PCO2 were 45.65 mmHg and 45.00 mmHg, 1 mmHg = 0.133 kPa), as well as in P2.5 and P97.5 between instrument system groups. The Kruskal-Wallis method showed that the P values for the upper and lower limits of all parameters were lower than 0.001, excepting the lower limits of Na+ with a P value of 0.029. It was shown by Mann-Whitney that statistical differences were found among instrument

  5. The Role of Defensive Confidence in Preference for Proattitudinal Information: How Believing That One Is Strong Can Sometimes Be a Defensive Weakness

    PubMed Central

    Albarracín, Dolores; Mitchell, Amy L.

    2016-01-01

    This series of studies identified individuals who chronically believe that they can successfully defend their attitudes from external attack and investigated the consequences of this individual difference for selective exposure to attitude-incongruent information and, ultimately, attitude change. Studies 1 and 2 validated a measure of defensive confidence as an individual difference that is unidimensional, distinct from other personality measures, reliable over a 2-week interval, and organized as a trait that generalizes across various personal and social issues. Studies 3 and 4 provided evidence that defensive confidence decreases preference for proattitudinal information, therefore inducing greater reception of counterattitudinal materials. Study 5 demonstrated that people who are high in defensive confidence are more likely to change their attitudes as a result of exposure to counterattitudinal information and examined the perceptions that mediate this important phenomenon. PMID:15536240

  6. Haematological and biochemical reference intervals for free-ranging brown bears (Ursus arctos) in Sweden

    PubMed Central

    2014-01-01

    Background Establishment of haematological and biochemical reference intervals is important to assess health of animals on individual and population level. Reference intervals for 13 haematological and 34 biochemical variables were established based on 88 apparently healthy free-ranging brown bears (39 males and 49 females) in Sweden. The animals were chemically immobilised by darting from a helicopter with a combination of medetomidine, tiletamine and zolazepam in April and May 2006–2012 in the county of Dalarna, Sweden. Venous blood samples were collected during anaesthesia for radio collaring and marking for ecological studies. For each of the variables, the reference interval was described based on the 95% confidence interval, and differences due to host characteristics sex and age were included if detected. To our knowledge, this is the first report of reference intervals for free-ranging brown bears in Sweden. Results The following variables were not affected by host characteristics: red blood cell, white blood cell, monocyte and platelet count, alanine transaminase, amylase, bilirubin, free fatty acids, glucose, calcium, chloride, potassium, and cortisol. Age differences were seen for the majority of the haematological variables, whereas sex influenced only mean corpuscular haemoglobin concentration, aspartate aminotransferase, lipase, lactate dehydrogenase, β-globulin, bile acids, triglycerides and sodium. Conclusions The biochemical and haematological reference intervals provided and the differences due to host factors age and gender can be useful for evaluation of health status in free-ranging European brown bears. PMID:25139149

  7. Engineering Student Self-Assessment through Confidence-Based Scoring

    ERIC Educational Resources Information Center

    Yuen-Reed, Gigi; Reed, Kyle B.

    2015-01-01

    A vital aspect of an answer is the confidence that goes along with it. Misstating the level of confidence one has in the answer can have devastating outcomes. However, confidence assessment is rarely emphasized during typical engineering education. The confidence-based scoring method described in this study encourages students to both think about…

  8. Clinician perceptions of personal safety and confidence to manage inpatient aggression in a forensic psychiatric setting.

    PubMed

    Martin, T; Daffern, M

    2006-02-01

    Inpatient mental health clinicians need to feel safe in the workplace. They also require confidence in their ability to work with aggressive patients, allowing the provision of therapeutic care while protecting themselves and other patients from psychological and physical harm. The authors initiated this study with the predetermined belief that a comprehensive and integrated organizational approach to inpatient aggression was required to support clinicians and that this approach increased confidence and staff perceptions of personal safety. To assess perceptions of personal safety and confidence, clinicians in a forensic psychiatric hospital were surveyed using an adapted version of the Confidence in Coping With Patient Aggression Instrument. In this study clinicians reported the hospital as safe. They reported confidence in their work with aggressive patients. The factors that most impacted on clinicians' confidence to manage aggression were colleagues' knowledge, experience and skill, management of aggression training, use of prevention and intervention strategies, teamwork and the staff profile. These results are considered with reference to an expanding literature on inpatient aggression. It is concluded that organizational resources, policies and frameworks support clinician perceptions of safety and confidence to manage inpatient aggression. However, how these are valued by clinicians and translated into practice at unit level needs ongoing attention.

  9. How Much Confidence Can We Have in EU-SILC? Complex Sample Designs and the Standard Error of the Europe 2020 Poverty Indicators

    ERIC Educational Resources Information Center

    Goedeme, Tim

    2013-01-01

    If estimates are based on samples, they should be accompanied by appropriate standard errors and confidence intervals. This is true for scientific research in general, and is even more important if estimates are used to inform and evaluate policy measures such as those aimed at attaining the Europe 2020 poverty reduction target. In this article I…

  10. Reporting numeric values of complete crowns. Part 1: Clinical preparation parameters.

    PubMed

    Tiu, Janine; Al-Amleh, Basil; Waddell, J Neil; Duncan, Warwick J

    2015-07-01

    No objective system for measuring clinical tooth preparations has been implemented. The purpose of this study was to use an objective measuring method to compare tooth preparations for ceramic crowns achieved clinically by general dentists with the values recommended in the literature. Two hundred thirty-six stone dies prepared for anterior and posterior complete ceramic crown restorations (IPS e.max Press; Ivoclar Vivadent) were collected from dental laboratories. The dies were scanned and analyzed using the coordinate geometry method. Cross-sectioned images were captured, and the average total occlusal convergence angle, margin width, and abutment height for each preparation were measured and presented with associated 95% confidence intervals. The average total occlusal convergence angles for each tooth type were above the recommended values reported in the literature. The average margin widths (0.40 to 0.83 mm) were below the minimum recommended values (1 to 1.5 mm). The tallest preparations were maxillary canines (5.25 mm), while the shortest were mandibular molars (1.87 mm). Complete crown preparations produced in general practice do not achieve the values recommended in the literature. However, these recommended values are not based on clinical trials, and the effects of the observed shortfalls on the clinical longevity of these restorations are not predictable. Copyright © 2015 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  11. Worse than enemies. The CEO's destructive confidant.

    PubMed

    Sulkowicz, Kerry J

    2004-02-01

    The CEO is often the most isolated and protected employee in the organization. Few leaders, even veteran CEOs, can do the job without talking to someone about their experiences, which is why most develop a close relationship with a trusted colleague, a confidant to whom they can tell their thoughts and fears. In his work with leaders, the author has found that many CEO-confidant relationships function very well. The confidants keep their leaders' best interests at heart. They derive their gratification vicariously, through the help they provide rather than through any personal gain, and they are usually quite aware that a person in their position can potentially abuse access to the CEO's innermost secrets. Unfortunately, almost as many confidants will end up hurting, undermining, or otherwise exploiting CEOs when the executives are at their most vulnerable. These confidants rarely make the headlines, but behind the scenes they do enormous damage to the CEO and to the organization as a whole. What's more, the leader is often the last one to know when or how the confidant relationship became toxic. The author has identified three types of destructive confidants. The reflector mirrors the CEO, constantly reassuring him that he is the "fairest CEO of them all." The insulator buffers the CEO from the organization, preventing critical information from getting in or out. And the usurper cunningly ingratiates himself with the CEO in a desperate bid for power. This article explores how the CEO-confidant relationship plays out with each type of adviser and suggests ways CEOs can avoid these destructive relationships.

  12. How to Fire a President: Voting "No Confidence" with Confidence

    ERIC Educational Resources Information Center

    Schmidt, Peter

    2009-01-01

    College faculties often use votes of "no confidence" to try to push out the leaders of their institutions. Many do so, however, without giving much thought to what such a vote actually means, whether they are using it appropriately, or how it will affect their campus--and their own future. Mae Kuykendall, a professor of law at Michigan State…

  13. The Diagnostic Value of Gastrin-17 Detection in Atrophic Gastritis

    PubMed Central

    Wang, Xu; Ling, Li; Li, Shanshan; Qin, Guiping; Cui, Wei; Li, Xiang; Ni, Hong

    2016-01-01

    Abstract A meta-analysis was performed to assess the diagnostic value of gastrin-17 (G-17) for the early detection of chronic atrophic gastritis (CAG). An extensive literature search was performed, with the aim of selecting publications that reported the accuracy of G-17 in predicting CAG, in the following databases: PubMed, Science Direct, Web of Science, Chinese Biological Medicine, Chinese National Knowledge Infrastructure, Wanfang, and VIP. To assess the diagnostic value of G-17, the following statistics were estimated and described: sensitivity, specificity, diagnostic odds ratios (DOR), summary receiver operating characteristic curves, area under the curve (AUC), and 95% confidence intervals (CIs). Thirteen studies that met the inclusion criteria were included in this meta-analysis, comprising 894 patients and 1950 controls. The pooled sensitivity and specificity of these studies were 0.48 (95% CI: 0.45–0.51) and 0.79 (95% CI: 0.77–0.81), respectively. The DOR was 5.93 (95% CI: 2.93–11.99), and the AUC was 0.82. G-17 may have potential diagnostic value because it has good specificity and a moderate DOR and AUC for CAG. However, more studies are needed to improve the sensitivity of this diagnostic tool in the future. PMID:27149493
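    Sensitivity, specificity, and diagnostic odds ratios of the kind pooled above each come with confidence intervals derived from 2x2 counts. A sketch using a Wilson score interval for a proportion and a log-normal interval for the DOR; the counts below are hypothetical, not the meta-analysis data.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% CI for a proportion (e.g. sensitivity)."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

def diagnostic_or(tp, fp, fn, tn, z=1.96):
    """Diagnostic odds ratio with a log-normal 95% CI."""
    dor = (tp * tn) / (fp * fn)
    se = math.sqrt(1 / tp + 1 / fp + 1 / fn + 1 / tn)  # SE of log(DOR)
    return dor, dor * math.exp(-z * se), dor * math.exp(z * se)

# Hypothetical 2x2 counts, not the paper's data.
sens_lo, sens_hi = wilson_ci(430, 894)  # 430 of 894 patients test positive
dor, dor_lo, dor_hi = diagnostic_or(430, 410, 464, 1540)
```

    In a real meta-analysis the per-study estimates would then be pooled (e.g. with a bivariate random-effects model); the sketch shows only the per-study interval arithmetic.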

  14. Public Loss of Confidence in the U.S. Government: Implications for Higher Education.

    ERIC Educational Resources Information Center

    Bogler, Ronit

    The unsatisfactory status of higher education in the United States has many explanations, such as the declining value of scholarship and academic ethos and the neglect of teaching obligations in favor of research duties. This paper posits another theory for the skepticism toward academic institutions: the general loss of confidence of the American…

  15. Programming with Intervals

    NASA Astrophysics Data System (ADS)

    Matsakis, Nicholas D.; Gross, Thomas R.

    Intervals are a new, higher-level primitive for parallel programming with which programmers directly construct the program schedule. Programs using intervals can be statically analyzed to ensure that they do not deadlock or contain data races. In this paper, we demonstrate the flexibility of intervals by showing how to use them to emulate common parallel control-flow constructs like barriers and signals, as well as higher-level patterns such as bounded-buffer producer-consumer. We have implemented intervals as a publicly available library for Java and Scala.
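    The intervals library itself targets Java and Scala, and its API is not reproduced here. As a language-neutral sketch of the bounded-buffer producer-consumer pattern the abstract cites, here is a version using only the Python standard library, where the bounded queue provides the blocking hand-off that intervals would express as scheduling constraints.

```python
import queue
import threading

def producer(buf, items):
    for x in items:
        buf.put(x)        # blocks while the bounded buffer is full
    buf.put(None)         # sentinel: no more items

def consumer(buf, out):
    while True:
        x = buf.get()     # blocks while the buffer is empty
        if x is None:
            break
        out.append(x)

buf = queue.Queue(maxsize=2)  # bound of 2 forces producer/consumer hand-off
out = []
t1 = threading.Thread(target=producer, args=(buf, list(range(5))))
t2 = threading.Thread(target=consumer, args=(buf, out))
t1.start(); t2.start(); t1.join(); t2.join()
```

    Unlike this lock-and-queue version, the point of intervals is that such schedules are constructed explicitly and can be statically checked for deadlock and data races.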

  16. Confidence as Bayesian Probability: From Neural Origins to Behavior.

    PubMed

    Meyniel, Florent; Sigman, Mariano; Mainen, Zachary F

    2015-10-07

    Research on confidence spreads across several sub-fields of psychology and neuroscience. Here, we explore how a definition of confidence as Bayesian probability can unify these viewpoints. This computational view entails that there are distinct forms in which confidence is represented and used in the brain, including distributional confidence, pertaining to neural representations of probability distributions, and summary confidence, pertaining to scalar summaries of those distributions. Summary confidence is, normatively, derived or "read out" from distributional confidence. Neural implementations of readout will trade off optimality versus flexibility of routing across brain systems, allowing confidence to serve diverse cognitive functions. Copyright © 2015 Elsevier Inc. All rights reserved.
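    The distinction drawn above between distributional and summary confidence can be sketched in a few lines: the posterior over hypotheses is the distributional representation, and a scalar read-out of it (here, the posterior probability of the chosen option) is the summary. The two-alternative likelihood and prior values are illustrative assumptions, not taken from the paper.

```python
def posterior(likelihoods, prior):
    """Distributional confidence: normalized posterior over hypotheses."""
    unnorm = [l * p for l, p in zip(likelihoods, prior)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

def summary_confidence(post):
    """Summary confidence: read out the probability of the MAP choice."""
    return max(post)

# Two-alternative example: likelihood of a noisy observation under each
# hypothesis, with a flat prior.
post = posterior([0.6, 0.2], [0.5, 0.5])
conf = summary_confidence(post)  # scalar summary read out from post
```

    The paper's point is that this read-out step can be implemented with varying fidelity across brain systems, trading optimality against flexibility of routing.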

  17. The Behavioral Economics of Choice and Interval Timing

    ERIC Educational Resources Information Center

    Jozefowiez, J.; Staddon, J. E. R.; Cerutti, D. T.

    2009-01-01

    The authors propose a simple behavioral economic model (BEM) describing how reinforcement and interval timing interact. The model assumes a Weber-law-compliant logarithmic representation of time. Associated with each represented time value are the payoffs that have been obtained for each possible response. At a given real time, the response with…

  18. Comparison of alternative MS/MS and bioinformatics approaches for confident phosphorylation site localization.

    PubMed

    Wiese, Heike; Kuhlmann, Katja; Wiese, Sebastian; Stoepel, Nadine S; Pawlas, Magdalena; Meyer, Helmut E; Stephan, Christian; Eisenacher, Martin; Drepper, Friedel; Warscheid, Bettina

    2014-02-07

    Over the past years, phosphoproteomics has advanced to a prime tool in signaling research. Since then, an enormous amount of information about in vivo protein phosphorylation events has been collected, providing a treasure trove for gaining a better understanding of the molecular processes involved in cell signaling. Yet, we still face the problem of how to achieve correct modification site localization. Here we use alternative fragmentation and different bioinformatics approaches for the identification and confident localization of phosphorylation sites. Phosphopeptide-enriched fractions were analyzed by multistage activation, collision-induced dissociation and electron transfer dissociation (ETD), yielding complementary phosphopeptide identifications. We further found that MASCOT, OMSSA and Andromeda each identified a distinct set of phosphopeptides, allowing the number of site assignments to be increased. The postsearch engine SLoMo provided confident phosphorylation site localization, whereas different versions of PTM-Score integrated in MaxQuant differed in performance. Based on high-resolution ETD and higher collisional dissociation (HCD) data sets from a large synthetic peptide and phosphopeptide reference library reported by Marx et al. [Nat. Biotechnol. 2013, 31 (6), 557-564], we show that an Andromeda/PTM-Score probability of 1 is required to provide a false localization rate (FLR) of 1% for HCD data, while 0.55 is sufficient for high-resolution ETD spectra. Additional analyses of HCD data demonstrated that for phosphotyrosine peptides and phosphopeptides containing two potential phosphorylation sites, PTM-Score probability cutoff values of <1 can be applied to ensure an FLR of 1%. Proper adjustment of localization probability cutoffs allowed us to significantly increase the number of confident sites with an FLR of <1%. Our findings underscore the need for the systematic assessment of FLRs for different score values to report confident modification sites.

  19. [Predictive value of breast imaging report and database system (BIRADS) to detect cancer in a reference regional hospital].

    PubMed

    Bellolio, Enrique; Pineda, Viviana; Burgos, María Eugenia; Iriarte, M José; Becker, Renato; Araya, Juan Carlos; Villaseca, Miguel; Mardones, Noldy

    2015-12-01

    To validate BIRADS in mammography, the American College of Radiology recommends that each center calculate its own predictive values. Our aim was to determine the predictive value of the BIRADS system in our center. All ultrasound-guided percutaneous needle biopsies performed at our center between 2006 and 2010 were reviewed. Predictive values, sensitivity, specificity and diagnostic accuracy of BIRADS were calculated with 95% confidence intervals. Of 1,313 biopsies available, 1,058 met the inclusion criteria. Fifty-eight percent of biopsies were performed on women whose mammograms were classified as BIRADS 4 or 5. The prevalence of cancer among mammograms classified as BIRADS 0 was 4%; for mammograms classified BIRADS 1, 2, 3, 4 and 5 it was 0, 3, 2.7, 17.7 and 72.4%, respectively. The positive and negative predictive values of the BIRADS classification were 55% and 92%, respectively. In our institution, BIRADS categories 4 and 5 have a high positive predictive value for detecting cancer, as in developed countries.
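
    Diagnostic-accuracy figures like these can be reproduced from a 2x2 table. Below is a minimal Python sketch using illustrative counts chosen to match the reported 55% PPV and 92% NPV (not the study's actual data), with Wilson score intervals for the 95% CIs:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical 2x2 counts (cancer on biopsy vs. suspicious BIRADS category);
# chosen only to reproduce the reported PPV/NPV, not taken from the paper.
tp, fp, fn, tn = 110, 90, 10, 115

ppv = tp / (tp + fp)   # 0.55
npv = tn / (tn + fn)   # 0.92
sens = tp / (tp + fn)
spec = tn / (tn + fp)

print(f"PPV = {ppv:.2f}, 95% CI {wilson_ci(tp, tp + fp)}")
print(f"NPV = {npv:.2f}, 95% CI {wilson_ci(tn, tn + fn)}")
```

    The Wilson interval is preferred over the simple normal approximation for proportions near 0 or 1, which is common with high NPVs.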

  20. The microcomputer scientific software series 2: general linear model--regression.

    Treesearch

    Harold M. Rauscher

    1983-01-01

    The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...
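
    The quantities such a regression program reports can be sketched for the simple linear case. A minimal Python example on synthetic data, using the large-sample normal approximation (z = 1.96) in place of exact t quantiles; this illustrates the calculations generically, not GLMR itself:

```python
import math, random

random.seed(1)

# Synthetic data: y = 2 + 3x + noise (illustrative only)
n = 200
xs = [random.uniform(0, 10) for _ in range(n)]
ys = [2 + 3 * x + random.gauss(0, 1.5) for x in xs]

xbar = sum(xs) / n
ybar = sum(ys) / n
sxx = sum((x - xbar) ** 2 for x in xs)
sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))

b1 = sxy / sxx                # slope estimate
b0 = ybar - b1 * xbar         # intercept estimate

resid = [y - (b0 + b1 * x) for x, y in zip(xs, ys)]
s2 = sum(r * r for r in resid) / (n - 2)   # residual variance

se_b1 = math.sqrt(s2 / sxx)
z = 1.96  # large-sample stand-in for the t quantile
print(f"slope = {b1:.3f}, 95% CI ({b1 - z*se_b1:.3f}, {b1 + z*se_b1:.3f})")

# 95% prediction interval around the predicted Y at a new x0
x0 = 5.0
yhat = b0 + b1 * x0
se_pred = math.sqrt(s2 * (1 + 1 / n + (x0 - xbar) ** 2 / sxx))
print(f"predicted Y at x0 = 5: {yhat:.2f} +/- {z * se_pred:.2f}")
```

    Note that the prediction interval (for a new observation) is wider than the confidence interval around the fitted mean, because it includes the residual variance term.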

  1. Integration of multiple biological features yields high confidence human protein interactome.

    PubMed

    Karagoz, Kubra; Sevimoglu, Tuba; Arga, Kazim Yalcin

    2016-08-21

    The biological function of a protein is usually determined by its physical interactions with other proteins. Protein-protein interactions (PPIs) are identified through various experimental methods and are stored in curated databases. The noisiness of the existing PPI data is evident, and it is essential that more reliable data be generated. Furthermore, the selection of a set of PPIs at different confidence levels might be necessary for many studies. Although different methodologies have been introduced to evaluate confidence scores for binary interactions, a highly reliable, almost complete PPI network of Homo sapiens has not yet been proposed. The quality and coverage of the human protein interactome need to be improved for use in various disciplines, especially in biomedicine. In the present work, we propose an unsupervised statistical approach to assign confidence scores to PPIs of H. sapiens. To achieve this goal, PPI data from six different databases were collected and a total of 295,288 non-redundant interactions between 15,950 proteins were acquired. The present scoring system included context information assigned to PPIs derived from eight biological attributes. A high confidence network, which included 147,923 binary interactions between 13,213 proteins, had scores greater than the cutoff value of 0.80, for which sensitivity, specificity, and coverage were 94.5%, 80.9%, and 82.8%, respectively. We compared the present scoring method with others for evaluation. Reducing the noise inherent in experimental PPIs via our scoring scheme increased the accuracy significantly. As demonstrated through the assessment of process and cancer subnetworks, this study allows researchers to construct and analyze context-specific networks via valid PPI sets and to easily extract subnetworks around proteins of interest at a specified confidence level. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Validation of the Vaccination Confidence Scale: A brief measure to identify parents at risk for refusing adolescent vaccines

    PubMed Central

    Reiter, Paul L.; Magnus, Brooke E.; McRee, Annie-Laurie; Dempsey, Amanda F.; Brewer, Noel T.

    2015-01-01

    Objective To support efforts to address vaccine hesitancy, we sought to validate a brief measure of vaccination confidence using a large, nationally representative sample of parents. Methods We analyzed weighted data from 9,018 parents who completed the 2010 National Immunization Survey-Teen, an annual, population-based telephone survey. Parents reported on the immunization history of a 13- to 17-year-old child in their households for vaccines including tetanus, diphtheria, and acellular pertussis (Tdap), meningococcal, and human papillomavirus (HPV) vaccines. For each vaccine, separate logistic regression models assessed associations between parents’ mean scores on the 8-item Vaccination Confidence Scale and vaccine refusal, vaccine delay, and vaccination status. We repeated analyses for the scale’s 4-item short form. Results One quarter of parents (24%) reported refusal of any vaccine, with refusal of specific vaccines ranging from 21% for HPV to 2% for Tdap. Using the full 8-item scale, vaccination confidence was negatively associated with measures of vaccine refusal and positively associated with measures of vaccination status. For example, refusal of any vaccine was more common among parents whose scale scores were medium (odds ratio [OR] = 2.08, 95% confidence interval [CI], 1.75–2.47) or low (OR = 4.61, 95% CI, 3.51–6.05) versus high. For the 4-item short form, scores were also consistently associated with vaccine refusal and vaccination status. Vaccination confidence was inconsistently associated with vaccine delay. Conclusions The Vaccination Confidence Scale shows promise as a tool for identifying parents at risk for refusing adolescent vaccines. The scale’s short form appears to offer comparable performance. PMID:26300368
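
    Odds ratios with 95% confidence intervals, as reported above, are conventionally computed on the log scale. A minimal sketch with hypothetical counts (refusers vs. non-refusers among low- vs. high-confidence parents; not the survey's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI for a 2x2 table [[a, b], [c, d]] via the log-OR method."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only
or_, lo, hi = odds_ratio_ci(60, 140, 25, 275)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

    Because log(OR) is approximately normal, the interval is symmetric on the log scale and asymmetric around the OR itself, as in the CIs quoted in the abstract.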

  3. Estimating the duration of geologic intervals from a small number of age determinations: A challenge common to petrology and paleobiology

    NASA Astrophysics Data System (ADS)

    Glazner, Allen F.; Sadler, Peter M.

    2016-12-01

    The duration of a geologic interval, such as the time over which a given volume of magma accumulated to form a pluton, or the lifespan of a large igneous province, is commonly determined from a relatively small number of geochronologic determinations (e.g., 4-10) within that interval. Such sample sets can underestimate the true length of the interval by a significant amount. For example, the average interval determined from a sample of size n = 5, drawn from a uniform random distribution, will underestimate the true interval by 50%. Even for n = 10, the average sample only captures ~80% of the interval. If the underlying distribution is known then a correction factor can be determined from theory or Monte Carlo analysis; for a uniform random distribution, this factor is (n + 1)/(n - 1). Systematic undersampling of interval lengths can have a large effect on calculated magma fluxes in plutonic systems. The problem is analogous to determining the duration of an extinct species from its fossil occurrences. Confidence interval statistics developed for species origination and extinction times are applicable to the onset and cessation of magmatic events.
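
    The correction factor can be checked by Monte Carlo simulation: the expected range of n points drawn uniformly on a unit interval is (n - 1)/(n + 1) of the true interval (about 80% for n = 10, matching the abstract), so multiplying the observed range by (n + 1)/(n - 1) removes the bias. A short sketch:

```python
import random

random.seed(0)

def mean_sample_range(n, trials=20000):
    """Average range (max - min) of n points drawn uniformly on [0, 1]."""
    total = 0.0
    for _ in range(trials):
        pts = [random.random() for _ in range(n)]
        total += max(pts) - min(pts)
    return total / trials

for n in (5, 10):
    raw = mean_sample_range(n)
    corrected = raw * (n + 1) / (n - 1)
    print(f"n={n:2d}: mean sample range = {raw:.3f}, "
          f"corrected estimate = {corrected:.3f} (true interval = 1)")
```

    For n = 5 the mean range converges to 4/6, and the corrected estimate converges to the true interval length of 1.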

  4. The effectiveness of collaborative problem based physics learning (CPBPL) model to improve student’s self-confidence on physics learning

    NASA Astrophysics Data System (ADS)

    Prahani, B. K.; Suprapto, N.; Suliyanah; Lestari, N. A.; Jauhariyah, M. N. R.; Admoko, S.; Wahyuni, S.

    2018-03-01

    In previous research, the Collaborative Problem Based Physics Learning (CPBPL) model was developed to improve students' science process skills, collaborative problem solving, and self-confidence on physics learning. This research aimed to analyze the effectiveness of the CPBPL model in improving students' self-confidence on physics learning. The research implemented a quasi-experimental design with 140 senior high school students divided into 4 groups. Data were collected through questionnaire, observation, and interview. Self-confidence was measured with the Self-Confidence Evaluation Sheet (SCES). The data were analyzed using the Wilcoxon test, n-gain, and the Kruskal-Wallis test. Results show that: (1) there is a significant improvement in students' self-confidence on physics learning (α = 5%), (2) the n-gain of students' self-confidence on physics learning is high, and (3) the average n-gain was consistent across all groups. It can be concluded that the CPBPL model is effective in improving students' self-confidence on physics learning.

  5. The thresholds for statistical and clinical significance – a five-step procedure for evaluation of intervention effects in randomised clinical trials

    PubMed Central

    2014-01-01

    Background Thresholds for statistical significance are insufficiently demonstrated by 95% confidence intervals or P-values when assessing results from randomised clinical trials. First, a P-value only shows the probability of getting a result assuming that the null hypothesis is true and does not reflect the probability of getting a result assuming an alternative hypothesis to the null hypothesis is true. Second, a confidence interval or a P-value showing significance may be caused by multiplicity. Third, statistical significance does not necessarily result in clinical significance. Therefore, assessment of intervention effects in randomised clinical trials deserves more rigour in order to become more valid. Methods Several methodologies for assessing the statistical and clinical significance of intervention effects in randomised clinical trials were considered. Balancing simplicity and comprehensiveness, a simple five-step procedure was developed. Results For a more valid assessment of results from a randomised clinical trial we propose the following five-steps: (1) report the confidence intervals and the exact P-values; (2) report Bayes factor for the primary outcome, being the ratio of the probability that a given trial result is compatible with a ‘null’ effect (corresponding to the P-value) divided by the probability that the trial result is compatible with the intervention effect hypothesised in the sample size calculation; (3) adjust the confidence intervals and the statistical significance threshold if the trial is stopped early or if interim analyses have been conducted; (4) adjust the confidence intervals and the P-values for multiplicity due to number of outcome comparisons; and (5) assess clinical significance of the trial results. Conclusions If the proposed five-step procedure is followed, this may increase the validity of assessments of intervention effects in randomised clinical trials. PMID:24588900
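
    Step (2)'s Bayes factor, for a normally distributed test statistic, reduces to a ratio of two normal densities: the density of the observed z under the null versus under the effect hypothesized in the sample size calculation. A minimal illustration with hypothetical z values, not from any particular trial:

```python
import math

def normal_pdf(x, mu, sigma=1.0):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def bayes_factor(z_obs, z_alt):
    """P(result | null) / P(result | hypothesized effect) for a z statistic."""
    return normal_pdf(z_obs, 0.0) / normal_pdf(z_obs, z_alt)

# Hypothetical example: observed z = 2.0; the design assumed an effect
# corresponding to z = 2.8 at the planned sample size.
bf = bayes_factor(2.0, 2.8)
print(f"Bayes factor (null vs. hypothesized effect) = {bf:.2f}")
```

    A Bayes factor well below 1 (here about 0.19) indicates the result is more compatible with the hypothesized effect than with the null, complementing the P-value as the five-step procedure suggests.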

  6. Frequency and Determinants of a Short-Interval Follow-up Recommendation After an Abnormal Screening Mammogram.

    PubMed

    Pelletier, Eric; Daigle, Jean-Marc; Defay, Fannie; Major, Diane; Guertin, Marie-Hélène; Brisson, Jacques

    2016-11-01

    After imaging assessment of an abnormal screening mammogram, a follow-up examination 6 months later is recommended to some women. Our aim was to identify which characteristics of lesions, women, and physicians are associated with such a short-interval follow-up recommendation in the Quebec Breast Cancer Screening Program. Between 1998 and 2008, 1,839,396 screening mammograms were performed and a total of 114,781 abnormal screens were assessed by imaging only. Multivariate analysis was done with multilevel Poisson regression models with robust variance and generalized linear mixed models. A short-interval follow-up was recommended in 26.7% of assessments with imaging only, representing 2.3% of all screens. The case-mix adjusted proportion of short-interval follow-up recommendations varied substantially across physicians (range: 4%-64%). Radiologists with high recall rates (≥15%) had a high proportion of short-interval follow-up recommendations (risk ratio: 1.82; 95% confidence interval: 1.35-2.45) compared to radiologists with low recall rates (<5%). The adjusted proportion of short-interval follow-up was high (22.8%) even when a previous mammogram was usually available. Short-interval follow-up recommendation at assessment is frequent in this Canadian screening program, even when a previous mammogram is available. Characteristics related to radiologists appear to be key determinants of short-interval follow-up recommendation, rather than characteristics of lesions or patient mix. Given that it can cause anxiety to women and adds pressure on the health system, it appears important to record and report short-interval follow-up and to identify ways to reduce its frequency. Short-interval follow-up recommendations should be considered when assessing the burden of mammography screening. Copyright © 2016 Canadian Association of Radiologists. Published by Elsevier Inc. All rights reserved.

  7. Sustaining Vaccine Confidence in the 21st Century

    PubMed Central

    Hardt, Karin; Schmidt-Ott, Ruprecht; Glismann, Steffen; Adegbola, Richard A.; Meurice, François P.

    2013-01-01

    Vaccination provides many health and economic benefits to individuals and society, and public support for immunization programs is generally high. However, the benefits of vaccines are often not fully valued when public discussions on vaccine safety, quality or efficacy arise, and the spread of misinformation via the internet and other media has the potential to undermine immunization programs. Factors associated with improved public confidence in vaccines include evidence-based decision-making procedures and recommendations, controlled processes for licensing and monitoring vaccine safety and effectiveness and disease surveillance. Community engagement with appropriate communication approaches for each audience is a key factor in building trust in vaccines. Vaccine safety/quality issues should be handled rapidly and transparently by informing and involving those most affected and those concerned with public health in effective ways. Openness and transparency in the exchange of information between industry and other stakeholders is also important. To maximize the safety of vaccines, and thus sustain trust in vaccines, partnerships are needed between public health sector stakeholders. Vaccine confidence can be improved through collaborations that ensure high vaccine uptake rates and that inform the public and other stakeholders of the benefits of vaccines and how vaccine safety is constantly assessed, assured and communicated. PMID:26344109

  8. Confidence in outcome estimates from systematic reviews used in informed consent.

    PubMed

    Fritz, Robert; Bauer, Janet G; Spackman, Sue S; Bains, Amanjyot K; Jetton-Rangel, Jeanette

    2016-12-01

    Evidence-based dentistry now guides informed consent, in which clinicians are obliged to provide patients with the most current best evidence, or best estimates of outcomes, of regimens, therapies, treatments, procedures, materials, and equipment or devices when developing personal oral health care treatment plans. Yet clinicians require that the estimates provided by systematic reviews be verified for validity and reliability, and contextualized as to performance competency, so that they may have confidence in explaining outcomes to patients in clinical practice. The purpose of this paper was to describe types of informed estimates from which clinicians may have confidence in their capacity to assist patients in competent decision-making, one of the most important concepts of informed consent. Using systematic review methodology, researchers provide clinicians with valid best estimates of outcomes regarding a subject of interest from best evidence. Best evidence is verified through critical appraisals using acceptable sampling methodology, either by scoring instruments (Timmer analysis) or checklist (GRADE), a Cochrane Collaboration standard that allows transparency in open reviews. These valid best estimates are then tested for reliability using large databases. Finally, valid and reliable best estimates are assessed for meaning using quantification of margins and uncertainties. Through manufacturer and researcher specifications, quantification of margins and uncertainties develops a performance competency continuum by which valid, reliable best estimates may be contextualized for their performance competency: at a lowest margin performance competency (structural failure), high margin performance competency (estimated true value of success), or clinically determined critical values (clinical failure). Informed consent may be achieved when clinicians are confident of their ability to provide useful and accurate best estimates of outcomes.

  9. Communication confidence in persons with aphasia.

    PubMed

    Babbitt, Edna M; Cherney, Leora R

    2010-01-01

    Communication confidence is a construct that has not been explored in the aphasia literature. Recently, national and international organizations have endorsed broader assessment methods that address quality of life and include participation, activity, and impairment domains as well as psychosocial areas. Individuals with aphasia encounter difficulties in all these areas on a daily basis in living with a communication disorder. Improvements are often reflected in narratives that are not typically included in standard assessments. This article illustrates how a new instrument measuring communication confidence might fit into a broad assessment framework and discusses the interaction of communication confidence, autonomy, and self-determination for individuals living with aphasia.

  10. System implications of the ambulance arrival-to-patient contact interval on response interval compliance.

    PubMed

    Campbell, J P; Gratton, M C; Salomone, J A; Lindholm, D J; Watson, W A

    1994-01-01

    In some emergency medical services (EMS) system designs, response time intervals are mandated with monetary penalties for noncompliance. These times are set with the goal of providing rapid, definitive patient care. The time interval of vehicle at scene-to-patient access (VSPA) has been measured, but its effect on response time interval compliance has not been determined. To determine the effect of the VSPA interval on the mandated code 1 (< 9 min) and code 2 (< 13 min) response time interval compliance in an urban, public-utility model system. A prospective, observational study used independent third-party riders to collect the VSPA interval for emergency life-threatening (code 1) and emergency nonlife-threatening (code 2) calls. The VSPA interval was added to the 9-1-1 call-to-dispatch and vehicle dispatch-to-scene intervals to determine the total time interval from call received until paramedic access to the patient (9-1-1 call-to-patient access). Compliance with the mandated response time intervals was determined using the traditional time intervals (9-1-1 call-to-scene) plus the VSPA time intervals (9-1-1 call-to-patient access). Chi-square was used to determine statistical significance. Of the 216 observed calls, 198 were matched to the traditional time intervals. Sixty-three were code 1, and 135 were code 2. Of the code 1 calls, 90.5% were compliant using 9-1-1 call-to-scene intervals dropping to 63.5% using 9-1-1 call-to-patient access intervals (p < 0.0005). Of the code 2 calls, 94.1% were compliant using 9-1-1 call-to-scene intervals. Compliance decreased to 83.7% using 9-1-1 call-to-patient access intervals (p = 0.012). The addition of the VSPA interval to the traditional time intervals impacts system response time compliance. Using 9-1-1 call-to-scene compliance as a basis for measuring system performance underestimates the time for the delivery of definitive care. This must be considered when response time interval compliances are defined.
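
    The compliance comparison reported above can be sketched with a Pearson chi-square statistic on a 2x2 table. The counts below are reconstructed from the reported percentages for code 1 calls (63 calls; 90.5% vs. 63.5% compliant under the two interval definitions) and are approximate:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Code 1 calls: compliant vs. not, under call-to-scene and call-to-patient-access
# definitions (counts reconstructed from the reported 90.5% and 63.5% of n = 63)
chi2 = chi_square_2x2(57, 6, 40, 23)
print(f"chi-square = {chi2:.2f}")  # about 12.9; compare to 3.84 for p < 0.05 at 1 df
```

    A statistic of roughly 12.9 on 1 degree of freedom is consistent with the reported p < 0.0005, illustrating how adding the VSPA interval shifts compliance significantly.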

  11. Corrected Confidence Bands for Functional Data Using Principal Components

    PubMed Central

    Goldsmith, J.; Greven, S.; Crainiceanu, C.

    2014-01-01

    Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. PMID:23003003

  12. Confidence and Competence with Mathematical Procedures

    ERIC Educational Resources Information Center

    Foster, Colin

    2016-01-01

    Confidence assessment (CA), in which students state alongside each of their answers a confidence level expressing how certain they are, has been employed successfully within higher education. However, it has not been widely explored with school pupils. This study examined how school mathematics pupils (N = 345) in five different secondary schools…

  13. "Yes, we can!" review on team confidence in sports.

    PubMed

    Fransen, Katrien; Mertens, Niels; Feltz, Deborah; Boen, Filip

    2017-08-01

    During the last decade, team confidence has received more and more attention in the sport psychology literature. Research has demonstrated that athletes who are more confident in their team's abilities exert more effort, set more challenging goals, are more resilient when facing adversities, and ultimately perform better. This article reviews the existing literature in order to provide more clarity in terms of the conceptualization and the operationalization of team confidence. We thereby distinguish between collective efficacy (i.e., process-oriented team confidence) and team outcome confidence (i.e., outcome-oriented team confidence). In addition, both the sources as well as the outcomes of team confidence will be discussed. Furthermore, we will go deeper into the dispersion of team confidence and we will evaluate the current guidelines on how to measure both types of team confidence. Building upon this base, the article then highlights interesting avenues for future research in order to further improve both our theoretical knowledge on team confidence and its application to the field. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. The confidence in health care and social services in northern Sweden--a comparison between reindeer-herding Sami and the non-Sami majority population.

    PubMed

    Daerga, Laila; Sjölander, Per; Jacobsson, Lars; Edin-Liljegren, Anette

    2012-08-01

    To investigate the confidence in primary health care, psychiatry and social services among the reindeer-herding Sami and the non-Sami population of northern Sweden. A semi-randomized, cross-sectional study design comprising 325 reindeer-herding Sami (171 men, 154 women) and a control population of 1,437 non-Sami (684 men, 753 women). A questionnaire on the confidence in primary health care, psychiatry, social services, and work colleagues was distributed to members of reindeer-herding families through the Sami communities and to the control population through the post. The relative risk for poor confidence was analyzed by calculating odds ratios with 95% confidence intervals adjusted for age and level of education. The confidence in primary health care and psychiatry was significantly lower among the reindeer-herding Sami compared with the control group. No differences were found between men and women in the reindeer-herding Sami population. In both the reindeer-herding Sami and the control population, younger people (≤ 48 years) reported significantly lower confidence in primary health care than older individuals (>48 years). A conceivable reason for the poor confidence in health care organizations reported by the reindeer-herding Sami is that they experience health care staff as poorly informed about reindeer husbandry and Sami culture, resulting in unsuitable or unrealistic treatment suggestions. The findings suggest that the poor confidence constitutes a significant obstacle of the reindeer-herding Sami to fully benefit from public health care services.

  15. Interval Training.

    ERIC Educational Resources Information Center

    President's Council on Physical Fitness and Sports, Washington, DC.

    Regardless of the type of physical activity used, interval training is simply repeated periods of physical stress interspersed with recovery periods during which activity of a reduced intensity is performed. During the recovery periods, the individual usually keeps moving and does not completely recover before the next exercise interval (e.g.,…

  16. Construction of prediction intervals for Palmer Drought Severity Index using bootstrap

    NASA Astrophysics Data System (ADS)

    Beyaztas, Ufuk; Bickici Arikan, Bugrayhan; Beyaztas, Beste Hamiye; Kahya, Ercan

    2018-04-01

    In this study, we propose an approach based on the residual-based bootstrap method to obtain valid prediction intervals using monthly, short-term (three-months) and mid-term (six-months) drought observations. The effects of North Atlantic and Arctic Oscillation indexes on the constructed prediction intervals are also examined. Performance of the proposed approach is evaluated for the Palmer Drought Severity Index (PDSI) obtained from Konya closed basin located in Central Anatolia, Turkey. The finite sample properties of the proposed method are further illustrated by an extensive simulation study. Our results revealed that the proposed approach is capable of producing valid prediction intervals for future PDSI values.
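
    The residual-based bootstrap idea can be sketched on a synthetic AR(1) series standing in for a drought index; this illustrates the general technique under simplifying assumptions, not the authors' exact procedure (which uses PDSI observations and climate indexes):

```python
import random

random.seed(42)

# Simulate an AR(1) series as an illustrative stand-in for monthly PDSI
phi_true, n = 0.7, 300
y = [0.0]
for _ in range(n - 1):
    y.append(phi_true * y[-1] + random.gauss(0, 1))

# Fit AR(1) by least squares (through the origin)
num = sum(y[t] * y[t - 1] for t in range(1, n))
den = sum(y[t - 1] ** 2 for t in range(1, n))
phi = num / den
resid = [y[t] - phi * y[t - 1] for t in range(1, n)]

# Residual-based bootstrap of the one-step-ahead forecast distribution:
# add a resampled residual to the point forecast, then take percentiles
B = 2000
forecasts = sorted(phi * y[-1] + random.choice(resid) for _ in range(B))
lo, hi = forecasts[int(0.025 * B)], forecasts[int(0.975 * B)]
print(f"point forecast = {phi * y[-1]:.2f}, 95% PI = ({lo:.2f}, {hi:.2f})")
```

    Resampling residuals rather than assuming a parametric error distribution is what makes the resulting prediction interval valid under non-normal errors.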

  17. [Establishing biological reference intervals of alanine transaminase for clinical laboratory stored database].

    PubMed

    Guo, Wei; Song, Binbin; Shen, Junfei; Wu, Jiong; Zhang, Chunyan; Wang, Beili; Pan, Baishen

    2015-08-25

    To establish an indirect reference interval based on alanine aminotransferase test results stored in a laboratory information system. All alanine aminotransferase results for outpatients and physical examinations stored in the laboratory information system of Zhongshan Hospital during 2014 were included. The original data were transformed using a Box-Cox transformation to obtain an approximate normal distribution. Outliers were identified and omitted using the Chauvenet and Tukey methods. The indirect reference intervals were obtained by applying both nonparametric and Hoffmann methods. The reference change value was used to determine the statistical significance of the observed differences between the calculated and published reference intervals. The indirect reference intervals for alanine aminotransferase were 12 to 41 U/L (male, outpatient), 12 to 48 U/L (male, physical examination), 9 to 32 U/L (female, outpatient), and 8 to 35 U/L (female, physical examination), respectively. The absolute differences compared with the direct results were all smaller than the reference change value of alanine aminotransferase. The Box-Cox transformation combined with the Hoffmann and Tukey methods is a simple and reliable technique that should be promoted and used by clinical laboratories.
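
    The nonparametric arm of this workflow can be sketched in Python on synthetic right-skewed data (the Hoffmann method is omitted here); a Box-Cox transform with lambda = 0 reduces to the log transform:

```python
import math, random

random.seed(7)

# Synthetic right-skewed "ALT" values (lognormal; illustrative only)
values = [math.exp(random.gauss(3.0, 0.35)) for _ in range(5000)]

# Box-Cox with lambda = 0 is the log transform
logs = sorted(math.log(v) for v in values)

# Tukey fences on the transformed data: drop points beyond
# Q1 - 1.5*IQR and Q3 + 1.5*IQR
q1 = logs[len(logs) // 4]
q3 = logs[3 * len(logs) // 4]
iqr = q3 - q1
kept = [x for x in logs if q1 - 1.5 * iqr <= x <= q3 + 1.5 * iqr]

# Nonparametric reference interval: central 95% of the cleaned data,
# back-transformed to the original scale
kept.sort()
lo = math.exp(kept[int(0.025 * len(kept))])
hi = math.exp(kept[int(0.975 * len(kept))])
print(f"indirect reference interval = {lo:.0f} to {hi:.0f} U/L")
```

    On these synthetic parameters the interval comes out near 10-40 U/L, in the same range as the intervals reported in the abstract; real laboratory data would of course drive the actual limits.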

  18. Sex differences in confidence influence patterns of conformity.

    PubMed

    Cross, Catharine P; Brown, Gillian R; Morgan, Thomas J H; Laland, Kevin N

    2017-11-01

    Lack of confidence in one's own ability can increase the likelihood of relying on social information. Sex differences in confidence have been extensively investigated in cognitive tasks, but implications for conformity have not been directly tested. Here, we tested the hypothesis that, in a task that shows sex differences in confidence, an indirect effect of sex on social information use will also be evident. Participants (N = 168) were administered a mental rotation (MR) task or a letter transformation (LT) task. After providing an answer, participants reported their confidence before seeing the responses of demonstrators and being allowed to change their initial answer. In the MR, but not the LT, task, women showed lower levels of confidence than men, and confidence mediated an indirect effect of sex on the likelihood of switching answers. These results provide novel, experimental evidence that confidence is a general explanatory mechanism underpinning susceptibility to social influences. Our results have implications for the interpretation of the wider literature on sex differences in conformity. © 2016 The British Psychological Society.

  19. Military Applicability of Interval Training for Health and Performance.

    PubMed

    Gibala, Martin J; Gagnon, Patrick J; Nindl, Bradley C

    2015-11-01

    Militaries from around the globe have predominantly used endurance training as their primary mode of aerobic physical conditioning, with historical emphasis placed on the long distance run. In contrast to this traditional approach, interval training is characterized by brief, intermittent bouts of intense exercise, separated by periods of lower intensity exercise or rest for recovery. Although hardly a novel concept, research over the past decade has shed new light on the potency of interval training to elicit physiological adaptations in a time-efficient manner. This work has largely focused on the benefits of low-volume interval training, which involves a relatively small total amount of exercise, as compared with the traditional high-volume approach historically favored by militaries. Studies that have directly compared interval and moderate-intensity continuous training have shown similar improvements in cardiorespiratory fitness and the capacity for aerobic energy metabolism, despite large differences in total exercise and training time commitment. Interval training can also be applied in a calisthenics manner to improve cardiorespiratory fitness and strength, and this approach could easily be incorporated into a military conditioning environment. Although interval training can elicit physiological changes in men and women, the potential for sex-specific differences in the adaptive response to interval training warrants further investigation. Additional work is needed to clarify adaptations occurring over the longer term; however, interval training deserves consideration from a military applicability standpoint as a time-efficient training strategy to enhance soldier health and performance. There is value for military leaders in identifying strategies that reduce the time required for exercise but nonetheless provide an effective training stimulus.

  20. The estimation of parameter compaction values for pavement subgrade stabilized with lime

    NASA Astrophysics Data System (ADS)

    Lubis, A. S.; Muis, Z. A.; Simbolon, C. A.

    2018-02-01

    The type of soil material, field control, maintenance, and availability of funds are several factors that must be considered in compaction of the pavement subgrade. Determining the compaction parameters in the laboratory requires considerable material, time, and funds, as well as reliable laboratory operators. If soil classification values can be used to estimate the compaction parameters of a subgrade material, it would save time, energy, material, and cost in executing this work. It also serves as a cross check of the work done by technicians in the laboratory. The study aims to estimate the compaction parameter values, i.e., maximum dry unit weight (γdmax) and optimum water content (Wopt), of subgrade soil stabilized with lime. The tests conducted in the soil mechanics laboratory determined the index properties (fines and Liquid Limit/LL) and the Standard Compaction Test values. Thirty soil samples with Plasticity Index (PI) > 10% were prepared with an additional 3% lime. Using the Goswami equations, the compaction parameter values can be estimated as γdmax = -0.1686 log G + 1.8434 and Wopt = 2.9178 log G + 17.086. From the validation calculation, there was a significant positive correlation between the laboratory compaction parameter values and the estimated values, with a 95% confidence interval indicating a strong relationship.
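
    The two regression equations above can be evaluated directly. A minimal sketch follows; the function name is hypothetical, G is the grouping term as defined in the study (not derived here), and a base-10 logarithm is assumed from the notation "log G".

    ```python
    import math

    def goswami_compaction(G):
        """Estimate compaction parameters of lime-stabilized subgrade
        from the Goswami regressions reported above.

        Returns (gamma_d_max, w_opt): maximum dry unit weight and
        optimum water content, in the units used by the study.
        """
        gamma_d_max = -0.1686 * math.log10(G) + 1.8434
        w_opt = 2.9178 * math.log10(G) + 17.086
        return gamma_d_max, w_opt
    ```

    Note the opposite signs of the two slopes: as G increases, the estimated maximum dry unit weight falls while the optimum water content rises, which is the usual trade-off between density and moisture demand in compaction curves.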