NASA Astrophysics Data System (ADS)
Olafsdottir, Kristin B.; Mudelsee, Manfred
2013-04-01
Estimation of Pearson's correlation coefficient between two time series, to evaluate the influence of one time-dependent variable on another, is one of the most frequently used statistical methods in climate sciences. Various methods are used to estimate a confidence interval to support the correlation point estimate. Many of them make strong mathematical assumptions regarding distributional shape and serial correlation, which are rarely met. More robust statistical methods are needed to increase the accuracy of the confidence intervals. Bootstrap confidence intervals are estimated in the Fortran 90 program PearsonT (Mudelsee, 2003), whose main intention was to obtain an accurate confidence interval for the correlation coefficient between two time series by taking into account the serial dependence of the process that generated the data. However, Monte Carlo experiments show that the coverage accuracy for smaller data sizes can be improved. Here we adapt the PearsonT program into a new version, called PearsonT3, by calibrating the confidence interval to increase the coverage accuracy. Calibration is a bootstrap resampling technique that essentially performs a second bootstrap loop, resampling from the bootstrap resamples. Like the non-calibrated bootstrap confidence intervals, it offers robustness against the data distribution. A pairwise moving block bootstrap is used to preserve the serial correlation of both time series. The calibration is applied to standard-error-based bootstrap Student's t confidence intervals. The performance of the calibrated confidence intervals is examined with Monte Carlo simulations and compared with that of confidence intervals without calibration, that is, PearsonT. The coverage accuracy is clearly better for the calibrated confidence intervals, with coverage error acceptably small (i.e., within a few percentage points) already for data sizes as small as 20. One form of climate time series is output from numerical models
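The pairwise moving block bootstrap mentioned above can be sketched in a few lines. The version below is the plain percentile interval without PearsonT3's calibration (second bootstrap) step, and the block length and other names are illustrative choices, not the program's actual defaults:

```python
import math
import random

def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def mbb_correlation_ci(x, y, block_len=4, n_boot=2000, alpha=0.05, seed=0):
    """Percentile interval for r from a pairwise moving block bootstrap:
    the *same* blocks of time indices are drawn for both series, so the
    serial dependence within blocks (and the cross-correlation between
    the series) survives resampling."""
    rng = random.Random(seed)
    n = len(x)
    starts = range(n - block_len + 1)
    n_blocks = math.ceil(n / block_len)
    r_boot = []
    for _ in range(n_boot):
        idx = []
        for _ in range(n_blocks):
            s = rng.choice(starts)
            idx.extend(range(s, s + block_len))
        idx = idx[:n]  # trim the last partial block
        r_boot.append(pearson_r([x[i] for i in idx], [y[i] for i in idx]))
    r_boot.sort()
    return (r_boot[int(alpha / 2 * n_boot)],
            r_boot[int((1 - alpha / 2) * n_boot) - 1])
```

In practice the block length would be chosen from the estimated persistence of the series; the fixed default here is purely for illustration.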
Bootstrapping Confidence Intervals for Robust Measures of Association.
ERIC Educational Resources Information Center
King, Jason E.
A Monte Carlo simulation study was conducted to determine the bootstrap correction formula yielding the most accurate confidence intervals for robust measures of association. Confidence intervals were generated via the percentile, adjusted, BC, and BC(a) bootstrap procedures and applied to the Winsorized, percentage bend, and Pearson correlation…
Evaluation of confidence intervals for a steady-state leaky aquifer model
Christensen, S.; Cooley, R.L.
1999-01-01
The fact that dependent variables of groundwater models are generally nonlinear functions of model parameters is shown to be a potentially significant factor in calculating accurate confidence intervals for both model parameters and functions of the parameters, such as the values of dependent variables calculated by the model. The Lagrangian method of Vecchia and Cooley [Vecchia, A.V. and Cooley, R.L., Water Resources Research, 1987, 23(7), 1237-1250] was used to calculate nonlinear Scheffe-type confidence intervals for the parameters and the simulated heads of a steady-state groundwater flow model covering 450 km2 of a leaky aquifer. The nonlinear confidence intervals are compared to corresponding linear intervals. As suggested by the significant nonlinearity of the regression model, linear confidence intervals are often not accurate. The commonly made assumption that widths of linear confidence intervals always underestimate the actual (nonlinear) widths was not correct. Results show that nonlinear effects can cause the nonlinear intervals to be asymmetric and either larger or smaller than the linear approximations. Prior information on transmissivities helps reduce the size of the confidence intervals, with the most notable effects occurring for the parameters on which there is prior information and for head values in parameter zones for which there is prior information on the parameters.
ERIC Educational Resources Information Center
Du, Yunfei
This paper discusses the impact of sampling error on the construction of confidence intervals around effect sizes. Sampling error affects the location and precision of confidence intervals. Meta-analytic resampling demonstrates that confidence intervals can haphazardly bounce around the true population parameter. Special software with graphical…
Confidence intervals for correlations when data are not normal.
Bishara, Anthony J; Hittner, James B
2017-02-01
With nonnormal data, the typical confidence interval of the correlation (Fisher z') may be inaccurate. The literature has been unclear as to which of several alternative methods should be used instead, and how extreme a violation of normality is needed to justify an alternative. Through Monte Carlo simulation, 11 confidence interval methods were compared, including Fisher z', two Spearman rank-order methods, the Box-Cox transformation, rank-based inverse normal (RIN) transformation, and various bootstrap methods. Nonnormality often distorted the Fisher z' confidence interval; for example, a nominal 95% confidence interval could have actual coverage as low as 68%. Increasing the sample size sometimes worsened this problem. Inaccurate Fisher z' intervals could be predicted by a sample kurtosis of at least 2, an absolute sample skewness of at least 1, or significant violations of normality hypothesis tests. Only the Spearman rank-order and RIN transformation methods were universally robust to nonnormality. Among the bootstrap methods, an observed imposed bootstrap came closest to accurate coverage, though it often resulted in an overly long interval. The results suggest that sample nonnormality can justify avoidance of the Fisher z' interval in favor of a more robust alternative. R code for the relevant methods is provided in supplementary materials.
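For reference, the default method that the study evaluates, the Fisher z' interval, takes only a few lines. The sketch below is the standard textbook construction, not the authors' supplementary R code:

```python
import math
from statistics import NormalDist

def fisher_z_ci(r, n, conf=0.95):
    """Confidence interval for a Pearson correlation via Fisher's z'
    transform. Relies on bivariate normality -- exactly the assumption
    the simulations above show can fail badly."""
    z = math.atanh(r)                        # Fisher z' transform
    se = 1.0 / math.sqrt(n - 3)              # approximate standard error
    zcrit = NormalDist().inv_cdf(0.5 + conf / 2)
    lo, hi = z - zcrit * se, z + zcrit * se
    return math.tanh(lo), math.tanh(hi)      # back-transform to r scale
```

Note the interval is asymmetric around r after back-transformation, which is the main advantage of working on the z' scale.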
Explorations in Statistics: Confidence Intervals
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2009-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This third installment of "Explorations in Statistics" investigates confidence intervals. A confidence interval is a range that we expect, with some level of confidence, to include the true value of a population parameter…
Minimax confidence intervals in geomagnetism
NASA Technical Reports Server (NTRS)
Stark, Philip B.
1992-01-01
The present paper uses theory of Donoho (1989) to find lower bounds on the lengths of optimally short fixed-length confidence intervals (minimax confidence intervals) for Gauss coefficients of the field of degree 1-12 using the heat flow constraint. The bounds on optimal minimax intervals are about 40 percent shorter than Backus' intervals: no procedure for producing fixed-length confidence intervals, linear or nonlinear, can give intervals shorter than about 60 percent the length of Backus' in this problem. While both methods rigorously account for the fact that core field models are infinite-dimensional, the application of the techniques to the geomagnetic problem involves approximations and counterfactual assumptions about the data errors, and so these results are likely to be extremely optimistic estimates of the actual uncertainty in Gauss coefficients.
Methods for the accurate estimation of confidence intervals on protein folding ϕ-values
Ruczinski, Ingo; Sosnick, Tobin R.; Plaxco, Kevin W.
2006-01-01
ϕ-Values provide an important benchmark for the comparison of experimental protein folding studies to computer simulations and theories of the folding process. Despite the growing importance of ϕ measurements, however, formulas to quantify the precision with which ϕ is measured have seen little significant discussion. Moreover, a commonly employed method for the determination of standard errors on ϕ estimates assumes that estimates of the changes in free energy of the transition and folded states are independent. Here we demonstrate that this assumption is usually incorrect and that this typically leads to the underestimation of ϕ precision. We derive an analytical expression for the precision of ϕ estimates (assuming linear chevron behavior) that explicitly takes this dependence into account. We also describe an alternative method that implicitly corrects for the effect. By simulating experimental chevron data, we show that both methods accurately estimate ϕ confidence intervals. We also explore the effects of the commonly employed techniques of calculating ϕ from kinetics estimated at non-zero denaturant concentrations and via the assumption of parallel chevron arms. We find that these approaches can produce significantly different estimates for ϕ (again, even for truly linear chevron behavior), indicating that they are not equivalent, interchangeable measures of transition state structure. Lastly, we describe a Web-based implementation of the above algorithms for general use by the protein folding community. PMID:17008714
Krishnamoorthy, K; Oral, Evrim
2017-12-01
A standardized likelihood ratio test (SLRT) for testing the equality of means of several log-normal distributions is proposed. The properties of the SLRT, an available modified likelihood ratio test (MLRT), and a generalized variable (GV) test are evaluated by Monte Carlo simulation and compared. Evaluation studies indicate that the SLRT is accurate even for small samples, whereas the MLRT can be quite liberal for some parameter values, and the GV test is in general conservative and less powerful than the SLRT. Furthermore, a closed-form approximate confidence interval for the common mean of several log-normal distributions is developed using the method of variance estimate recovery and compared with the generalized confidence interval with respect to coverage probabilities and precision. Simulation studies indicate that the proposed confidence interval is accurate and better than the generalized confidence interval in terms of coverage probabilities. The methods are illustrated using two examples.
O'Gorman, Thomas W
2018-05-01
In the last decade, it has been shown that an adaptive testing method could be used, along with the Robbins-Monro search procedure, to obtain confidence intervals that are often narrower than traditional confidence intervals. However, these confidence interval limits require a great deal of computation and some familiarity with stochastic search methods. We propose a method for estimating the limits of confidence intervals that uses only a few tests of significance. We compare these limits to those obtained by a lengthy Robbins-Monro stochastic search and find that the proposed method is nearly as accurate as the Robbins-Monro search. Adaptive confidence intervals that are produced by the proposed method are often narrower than traditional confidence intervals when the distributions are long-tailed, skewed, or bimodal. Moreover, the proposed method of estimating confidence interval limits is easy to understand, because it is based solely on the p-values from a few tests of significance.
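The underlying idea, a confidence interval as the set of parameter values that a level-alpha test does not reject, with each limit located by a cheap search on the p-value, can be illustrated with an ordinary z-test. This is only a toy stand-in for the adaptive tests discussed above, and all names are illustrative:

```python
import math
from statistics import NormalDist, mean, stdev

def pvalue_ztest(data, mu0):
    """Two-sided p-value of a z-test of H0: mean == mu0
    (a simple stand-in for whatever test is being inverted)."""
    z = (mean(data) - mu0) / (stdev(data) / math.sqrt(len(data)))
    return 2.0 * (1.0 - NormalDist().cdf(abs(z)))

def invert_test_ci(data, alpha=0.05, tol=1e-6):
    """Confidence limits as the mu0 where the p-value crosses alpha,
    located by bisection, so only a few dozen test evaluations are
    needed per limit."""
    m, s = mean(data), stdev(data)
    half = 10.0 * s  # bracket assumed wide enough to contain each limit

    def solve(lo, hi, rising):
        # p(mu0) rises toward the sample mean and falls past it
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if (pvalue_ztest(data, mid) < alpha) == rising:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    return solve(m - half, m, True), solve(m, m + half, False)
```

For this z-test the result coincides with the usual closed-form interval; the payoff of inversion comes when the test (as in the adaptive procedures above) has no closed-form interval.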
Terry, Leann; Kelley, Ken
2012-11-01
Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
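The accuracy-in-parameter-estimation logic can be shown in its simplest closed-form instance, planning n for a z-based interval on a mean with known sigma, rather than for a reliability coefficient as in the paper (which requires the corresponding sampling theory):

```python
import math
from statistics import NormalDist

def n_for_ci_width(sigma, width, conf=0.95):
    """Smallest n for which the z-based interval for a mean,
    xbar +/- z * sigma / sqrt(n), is no wider than `width`.
    A toy instance of the AIPE idea: choose n for precision,
    not for power."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    return math.ceil((2 * z * sigma / width) ** 2)
```

The paper's second method adds an assurance step on top of this: inflating n so the realized (random) width stays below the target with, say, 99% probability.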
The Distribution of the Product Explains Normal Theory Mediation Confidence Interval Estimation.
Kisbu-Sakarya, Yasemin; MacKinnon, David P; Miočević, Milica
2014-05-01
The distribution of the product has several useful applications. One of these applications is its use to form confidence intervals for the indirect effect as the product of 2 regression coefficients. The purpose of this article is to investigate how the moments of the distribution of the product explain normal theory mediation confidence interval coverage and imbalance. Values of the critical ratio for each random variable are used to demonstrate how the moments of the distribution of the product change across values of the critical ratio observed in research studies. Results of the simulation study showed that as skewness in absolute value increases, coverage decreases, and as skewness in absolute value and kurtosis increase, imbalance increases. The difference between testing the significance of the indirect effect using the normal theory versus the asymmetric distribution of the product is further illustrated with a real data example. This article is the first study to show the direct link between the distribution of the product and indirect effect confidence intervals, and it clarifies the results of previous simulation studies by showing why normal theory confidence intervals for indirect effects are often less accurate than those obtained from the asymmetric distribution of the product or from resampling methods.
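The Monte Carlo analogue of the distribution-of-the-product method makes the asymmetry easy to see: simulate the two coefficient estimates from their normal sampling distributions and take percentiles of the product. A minimal sketch, with illustrative names:

```python
import random

def monte_carlo_indirect_ci(a, se_a, b, se_b,
                            n_sim=50000, alpha=0.05, seed=0):
    """Asymmetric interval for the indirect effect a*b, obtained by
    simulating the product of the two (assumed normal) coefficient
    estimates rather than forcing a symmetric normal-theory interval
    on the product itself."""
    rng = random.Random(seed)
    prods = sorted(rng.gauss(a, se_a) * rng.gauss(b, se_b)
                   for _ in range(n_sim))
    return (prods[int(alpha / 2 * n_sim)],
            prods[int((1 - alpha / 2) * n_sim) - 1])
```

Unlike the normal-theory interval a*b +/- z*SE, the two tails here can (and generally do) have different lengths, which is the point the article makes about skewness and kurtosis of the product.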
Interpretation of Confidence Interval Facing the Conflict
ERIC Educational Resources Information Center
Andrade, Luisa; Fernández, Felipe
2016-01-01
As literature has reported, it is usual that university students in statistics courses, and even statistics teachers, interpret the confidence level associated with a confidence interval as the probability that the parameter value will be between the lower and upper interval limits. To confront this misconception, class activities have been…
ERIC Educational Resources Information Center
Hoekstra, Rink; Johnson, Addie; Kiers, Henk A. L.
2012-01-01
The use of confidence intervals (CIs) as an addition or as an alternative to null hypothesis significance testing (NHST) has been promoted as a means to make researchers more aware of the uncertainty that is inherent in statistical inference. Little is known, however, about whether presenting results via CIs affects how readers judge the…
Schweiger, Regev; Fisher, Eyal; Rahmani, Elior; Shenhav, Liat; Rosset, Saharon; Halperin, Eran
2018-06-22
Estimation of heritability is an important task in genetics. The use of linear mixed models (LMMs) to determine narrow-sense single-nucleotide polymorphism (SNP) heritability and related quantities has received much recent attention, due to its ability to account for variants with small effect sizes. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. The common way to report the uncertainty in REML estimation uses standard errors (SEs), which rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals (CIs). In addition, for larger data sets (e.g., tens of thousands of individuals), the construction of SEs itself may require considerable time, as it requires expensive matrix inversions and multiplications. Here, we present FIESTA (Fast confidence IntErvals using STochastic Approximation), a method for constructing accurate CIs. FIESTA is based on parametric bootstrap sampling and therefore avoids unjustified assumptions on the distribution of the heritability estimator. FIESTA uses stochastic approximation techniques, which accelerate the construction of CIs by several orders of magnitude compared with previous approaches, as well as with the analytical approximation used by SEs. FIESTA builds accurate CIs rapidly, requiring only several seconds even for data sets of tens of thousands of individuals, making it a very fast solution to the problem of building accurate CIs for heritability for all data set sizes.
Confidence Intervals for Effect Sizes: Applying Bootstrap Resampling
ERIC Educational Resources Information Center
Banjanovic, Erin S.; Osborne, Jason W.
2016-01-01
Confidence intervals for effect sizes (CIES) provide readers with an estimate of the strength of a reported statistic as well as the relative precision of the point estimate. These statistics offer more information and context than null hypothesis statistic testing. Although confidence intervals have been recommended by scholars for many years,…
Confidence intervals from single observations in forest research
Harry T. Valentine; George M. Furnival; Timothy G. Gregoire
1991-01-01
A procedure for constructing confidence intervals and testing hypotheses from a single trial or observation is reviewed. The procedure requires a prior, fixed estimate or guess of the outcome of an experiment or sampling. Two examples of applications are described: a confidence interval is constructed for the expected outcome of a systematic sampling of a forested tract...
Alternative Confidence Interval Methods Used in the Diagnostic Accuracy Studies
Erdoğan, Semra; Gülhan, Orekıcı Temel
2016-01-01
Background/Aim. It is necessary to decide whether newly improved methods are better than the standard or reference test. To decide whether a new diagnostic test is better than the gold standard test/imperfect standard test, the differences of the estimated sensitivity/specificity are calculated with the help of information obtained from samples. However, to generalize this value to the population, it should be given with confidence intervals. The aim of this study is to evaluate, on a clinical application, the confidence interval methods developed for the differences between two dependent sensitivity/specificity values. Materials and Methods. In this study, confidence interval methods such as Asymptotic Intervals, Conditional Intervals, Unconditional Intervals, Score Intervals, and Nonparametric Methods Based on Relative Effects Intervals are used. As the clinical application, data from the diagnostic study by Dickel et al. (2010) have been taken as a sample. Results. The results for the alternative confidence interval methods for Nickel Sulfate, Potassium Dichromate, and Lanolin Alcohol are given as a table. Conclusion. When choosing among the confidence interval methods, researchers have to consider whether the case to be compared involves a single ratio or differences between dependent binary ratios, the correlation coefficient between the rates in the two dependent ratios, and the sample sizes. PMID:27478491
Graphing within-subjects confidence intervals using SPSS and S-Plus.
Wright, Daniel B
2007-02-01
Within-subjects confidence intervals are often appropriate to report and to display. Loftus and Masson (1994) have reported methods to calculate these, and their use is becoming common. In the present article, procedures for calculating within-subjects confidence intervals in SPSS and S-Plus are presented (an R version is on the accompanying Web site). The procedure in S-Plus allows the user to report the bias corrected and adjusted bootstrap confidence intervals as well as the standard confidence intervals based on traditional methods. The presented code can be easily altered to fit the individual user's needs.
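As a rough illustration of why within-subjects intervals differ from ordinary ones, the sketch below uses subject centering (Cousineau-style normalization, a simpler relative of the Loftus and Masson approach; it is not the article's SPSS/S-Plus code, and `tcrit` must be supplied for the desired level and degrees of freedom):

```python
import math
from statistics import mean, stdev

def within_subject_cis(data, tcrit=2.0):
    """Per-condition intervals after removing between-subject
    differences: each subject's scores are centred on the grand mean,
    so stable individual offsets no longer inflate the error term.
    `data[s][c]` is the score of subject s in condition c."""
    n_subj = len(data)
    n_cond = len(data[0])
    grand = mean(x for row in data for x in row)
    normed = [[x - mean(row) + grand for x in row] for row in data]
    cis = []
    for c in range(n_cond):
        col = [normed[s][c] for s in range(n_subj)]
        m = mean(col)
        half = tcrit * stdev(col) / math.sqrt(n_subj)
        cis.append((m - half, m + half))
    return cis
```

With large stable differences between subjects but a consistent condition effect, these intervals shrink dramatically relative to ordinary between-subject intervals, which is exactly the situation they are designed for.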
Confidence Intervals for Error Rates Observed in Coded Communications Systems
NASA Astrophysics Data System (ADS)
Hamkins, J.
2015-05-01
We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful, if any, codeword errors are able to be simulated. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the lesser studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system.
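The moment-based idea for the BER interval can be sketched as follows: treat the per-codeword error counts as i.i.d. and form a normal-approximation interval from their first two sample moments, so that clustering of bit errors within codewords widens the interval as it should. This is a simplified stand-in, not the paper's exact estimator:

```python
import math

def ber_ci(errors_per_codeword, bits_per_codeword, zcrit=1.96):
    """Interval for BER when bit errors cluster within codewords.
    A naive binomial interval on the pooled bit count assumes
    independent bit errors and comes out too narrow; using the
    sample mean and variance of the per-codeword counts captures
    the clustering."""
    n = len(errors_per_codeword)
    m = sum(errors_per_codeword) / n
    var = sum((e - m) ** 2 for e in errors_per_codeword) / (n - 1)
    se = math.sqrt(var / n)
    lo = max(0.0, (m - zcrit * se) / bits_per_codeword)
    hi = (m + zcrit * se) / bits_per_codeword
    return lo, hi
```

For strongly clustered errors (e.g., most codewords clean, a few with many bit errors) this interval is several times wider than the independence-based one, matching the warning in the abstract.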
Confidence intervals for distinguishing ordinal and disordinal interactions in multiple regression.
Lee, Sunbok; Lei, Man-Kit; Brody, Gene H
2015-06-01
Distinguishing between ordinal and disordinal interaction in multiple regression is useful in testing many interesting theoretical hypotheses. Because the distinction is made based on the location of a crossover point of 2 simple regression lines, confidence intervals of the crossover point can be used to distinguish ordinal and disordinal interactions. This study examined 2 factors that need to be considered in constructing confidence intervals of the crossover point: (a) the assumption about the sampling distribution of the crossover point, and (b) the possibility of abnormally wide confidence intervals for the crossover point. A Monte Carlo simulation study was conducted to compare 6 different methods for constructing confidence intervals of the crossover point in terms of the coverage rate, the proportion of true values that fall to the left or right of the confidence intervals, and the average width of the confidence intervals. The methods include the reparameterization, delta, Fieller, basic bootstrap, percentile bootstrap, and bias-corrected accelerated bootstrap methods. The results of our Monte Carlo simulation study suggest that statistical inference using confidence intervals to distinguish ordinal and disordinal interaction requires sample sizes more than 500 to be able to provide sufficiently narrow confidence intervals to identify the location of the crossover point. (c) 2015 APA, all rights reserved.
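For the simplest case of a binary moderator, one of the compared methods, the percentile bootstrap, can be sketched directly: refit the two group regressions on resampled data and take percentiles of the implied crossover point. All names are illustrative, and the paper's other five methods are analytic rather than resampling-based:

```python
import random

def fit_line(x, y):
    """Least-squares intercept and slope of a simple regression."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def crossover_ci(x0, y0, x1, y1, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap interval for the crossover point of two
    simple regression lines (one per moderator group): the x where
    a0 + b0*x equals a1 + b1*x. Whether this interval lies inside the
    observed x range is what separates disordinal from ordinal
    interactions."""
    rng = random.Random(seed)
    pts = []
    for _ in range(n_boot):
        i0 = [rng.randrange(len(x0)) for _ in x0]
        i1 = [rng.randrange(len(x1)) for _ in x1]
        a0, b0 = fit_line([x0[i] for i in i0], [y0[i] for i in i0])
        a1, b1 = fit_line([x1[i] for i in i1], [y1[i] for i in i1])
        pts.append((a1 - a0) / (b0 - b1))  # intersection of the lines
    pts.sort()
    return (pts[int(alpha / 2 * n_boot)],
            pts[int((1 - alpha / 2) * n_boot) - 1])
```

The ratio form of the crossover point is why the interval can become abnormally wide: when the bootstrap slope difference b0 - b1 approaches zero, resampled crossover points explode, the issue the study flags for small samples.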
NASA Astrophysics Data System (ADS)
Vuye, Cedric; Vanlanduit, Steve; Guillaume, Patrick
2009-06-01
When optical measurements of the sound fields inside a glass tube, near the material under test, are used to estimate the reflection and absorption coefficients, confidence intervals can be determined along with these acoustical parameters. The sound fields are visualized using a scanning laser Doppler vibrometer (SLDV). In this paper the influence of different test signals on the quality of the results obtained with this technique is examined. The amount of data gathered during one measurement scan makes a thorough statistical analysis possible, leading to knowledge of the confidence intervals. A multi-sine constructed on the resonance frequencies of the test tube proves to be a very good alternative to the traditional periodic chirp. This signal offers the ability to obtain data for multiple frequencies in one measurement, without the danger of a low signal-to-noise ratio. The variability analysis in this paper clearly shows the advantages of the proposed multi-sine compared to the periodic chirp. The measurement procedure and the statistical analysis are validated by measuring the reflection ratio at a closed end and comparing the results with the theoretical value. Results of the testing of two building materials (an acoustic ceiling tile and linoleum) are presented and compared to supplier data.
Improved central confidence intervals for the ratio of Poisson means
NASA Astrophysics Data System (ADS)
Cousins, R. D.
The problem of confidence intervals for the ratio of two unknown Poisson means was "solved" decades ago, but a closer examination reveals that the standard solution is far from optimal from the frequentist point of view. We construct a more powerful set of central confidence intervals, each of which is a (typically proper) subinterval of the corresponding standard interval. They also provide upper and lower confidence limits which are more restrictive than the standard limits. The construction follows Neyman's original prescription, though discreteness of the Poisson distribution and the presence of a nuisance parameter (one of the unknown means) lead to slightly conservative intervals. Philosophically, the issue of the appropriateness of the construction method is similar to the issue of conditioning on the margins in 2×2 contingency tables. From a frequentist point of view, the new set maintains (over) coverage of the unknown true value of the ratio of means at each stated confidence level, even though the new intervals are shorter than the old intervals by any measure (except for two cases where they are identical). As an example, when the number 2 is drawn from each Poisson population, the 90% CL central confidence interval on the ratio of means is (0.169, 5.196), rather than (0.108, 9.245). In the cited literature, such confidence intervals have applications in numerous branches of pure and applied science, including agriculture, wildlife studies, manufacturing, medicine, reliability theory, and elementary particle physics.
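The "standard" central interval that the paper improves on comes from conditioning on the total count: given x + y, the count x is binomial with p = lambda_x/(lambda_x + lambda_y), so one applies Clopper-Pearson limits to p and maps back to the ratio via p/(1 - p). A self-contained sketch (assuming equal exposures and y > 0 so the upper limit is finite):

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k + 1))

def poisson_ratio_ci(x, y, conf=0.90):
    """Standard central interval for lambda_x / lambda_y via the
    conditional-binomial construction with Clopper-Pearson limits.
    This is the interval the new construction above shortens."""
    n, a = x + y, (1 - conf) / 2

    def solve(f, lo=0.0, hi=1.0):
        # bisection: f(p) is True to the left of the root
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if f(mid):
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # lower limit p_L solves P(X >= x | p) = a  (increasing in p)
    p_lo = solve(lambda p: 1 - binom_cdf(x - 1, n, p) < a)
    # upper limit p_U solves P(X <= x | p) = a  (decreasing in p)
    p_hi = solve(lambda p: binom_cdf(x, n, p) > a)
    return p_lo / (1 - p_lo), p_hi / (1 - p_hi)
```

For x = y = 2 at 90% confidence this reproduces the standard interval (0.108, 9.245) quoted above, against which the new construction gives (0.169, 5.196).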
On Some Confidence Intervals for Estimating the Mean of a Skewed Population
ERIC Educational Resources Information Center
Shi, W.; Kibria, B. M. Golam
2007-01-01
A number of methods are available in the literature to measure confidence intervals. Here, confidence intervals for estimating the population mean of a skewed distribution are considered. This note proposes two alternative confidence intervals, namely, Median t and Mad t, which are simple adjustments to the Student's t confidence interval. In…
ERIC Educational Resources Information Center
Weber, Deborah A.
Greater understanding and use of confidence intervals is central to changes in statistical practice (G. Cumming and S. Finch, 2001). Reliability coefficients and confidence intervals for reliability coefficients can be computed using a variety of methods. Estimating confidence intervals includes both central and noncentral distribution approaches.…
Modified Confidence Intervals for the Mean of an Autoregressive Process.
1985-08-01
There are several standard methods of setting confidence intervals in simulations, including the regenerative method, batch means, and time series methods. We focus on improved confidence intervals for the mean of an autoregressive process.
Asymptotic confidence intervals for the Pearson correlation via skewness and kurtosis.
Bishara, Anthony J; Li, Jiexiang; Nash, Thomas
2018-02-01
When bivariate normality is violated, the default confidence interval of the Pearson correlation can be inaccurate. Two new methods were developed based on the asymptotic sampling distribution of Fisher's z' under the general case where bivariate normality need not be assumed. In Monte Carlo simulations, the most successful of these methods relied on the Vale and Maurelli (1983, Psychometrika, 48, 465) family to approximate a distribution via the marginal skewness and kurtosis of the sample data. In Simulation 1, this method provided more accurate confidence intervals of the correlation in non-normal data, at least as compared to no adjustment of the Fisher z' interval, or to adjustment via the sample joint moments. In Simulation 2, this approximate distribution method performed favourably relative to common non-parametric bootstrap methods, but its performance was mixed relative to an observed imposed bootstrap and two other robust methods (PM1 and HC4). No method was completely satisfactory. An advantage of the approximate distribution method, though, is that it can be implemented even without access to raw data if sample skewness and kurtosis are reported, making the method particularly useful for meta-analysis. Supporting information includes R code. © 2017 The British Psychological Society.
Reporting Confidence Intervals and Effect Sizes: Collecting the Evidence
ERIC Educational Resources Information Center
Zientek, Linda Reichwein; Ozel, Z. Ebrar Yetkiner; Ozel, Serkan; Allen, Jeff
2012-01-01
Confidence intervals (CIs) and effect sizes are essential to encourage meta-analytic thinking and to accumulate research findings. CIs provide a range of plausible values for population parameters with a degree of confidence that the parameter is in that particular interval. CIs also give information about how precise the estimates are. Comparison…
Empirical likelihood-based confidence intervals for mean medical cost with censored data.
Jeyarajah, Jenny; Qin, Gengsheng
2017-11-10
In this paper, we propose empirical likelihood methods based on influence function and jackknife techniques for constructing confidence intervals for mean medical cost with censored data. We conduct a simulation study to compare the coverage probabilities and interval lengths of our proposed confidence intervals with those of the existing normal approximation-based confidence intervals and bootstrap confidence intervals. The proposed methods have better finite-sample performances than existing methods. Finally, we illustrate our proposed methods with a relevant example. Copyright © 2017 John Wiley & Sons, Ltd.
Introduction to Sample Size Choice for Confidence Intervals Based on "t" Statistics
ERIC Educational Resources Information Center
Liu, Xiaofeng Steven; Loudermilk, Brandon; Simpson, Thomas
2014-01-01
Sample size can be chosen to achieve a specified width in a confidence interval. The probability of obtaining a narrow width given that the confidence interval includes the population parameter is defined as the power of the confidence interval, a concept unfamiliar to many practitioners. This article shows how to utilize the Statistical Analysis…
Shieh, G
2013-12-01
The use of effect sizes and associated confidence intervals in all empirical research has been strongly emphasized by journal publication guidelines. To help advance theory and practice in the social sciences, this article describes an improved procedure for constructing confidence intervals of the standardized mean difference effect size between two independent normal populations with unknown and possibly unequal variances. The presented approach has advantages over the existing formula in both theoretical justification and computational simplicity. In addition, simulation results show that the suggested one- and two-sided confidence intervals are more accurate in achieving the nominal coverage probability. The proposed estimation method provides a feasible alternative to the most commonly used measure of Cohen's d and the corresponding interval procedure when the assumption of homogeneous variances is not tenable. To further improve the potential applicability of the suggested methodology, the sample size procedures for precise interval estimation of the standardized mean difference are also delineated. The desired precision of a confidence interval is assessed with respect to the control of expected width and to the assurance probability of interval width within a designated value. Supplementary computer programs are developed to aid in the usefulness and implementation of the introduced techniques.
An Introduction to Confidence Intervals for Both Statistical Estimates and Effect Sizes.
ERIC Educational Resources Information Center
Capraro, Mary Margaret
This paper summarizes methods of estimating confidence intervals, including classical intervals and intervals for effect sizes. The recent American Psychological Association (APA) Task Force on Statistical Inference report suggested that confidence intervals should always be reported, and the fifth edition of the APA "Publication Manual"…
Publication Bias in Meta-Analysis: Confidence Intervals for Rosenthal's Fail-Safe Number.
Fragkos, Konstantinos C; Tsagris, Michail; Frangos, Christos C
2014-01-01
The purpose of the present paper is to assess the efficacy of confidence intervals for Rosenthal's fail-safe number. Although Rosenthal's estimator is widely used by researchers, its statistical properties are largely unexplored. We first developed statistical theory which allowed us to produce confidence intervals for Rosenthal's fail-safe number. This was done by distinguishing whether the number of studies analysed in a meta-analysis is fixed or random; each case produces different variance estimators. For a given number of studies and a given distribution, we provide five variance estimators. Confidence intervals are examined with a normal approximation and a nonparametric bootstrap. The accuracy of the different confidence interval estimates was then tested by simulation under different distributional assumptions. The half normal distribution variance estimator has the best probability coverage. Finally, we provide a table of lower confidence intervals for Rosenthal's estimator.
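The nonparametric bootstrap examined in the abstract can be sketched generically. Here the statistic is the sample mean purely for illustration, since the fail-safe number's formula is not reproduced in the abstract:

```python
import random
import statistics

def bootstrap_percentile_ci(data, stat, n_boot=2000, conf=0.95, seed=1):
    """Nonparametric bootstrap percentile CI for an arbitrary statistic:
    resample the data with replacement, recompute the statistic on each
    resample, and read off the empirical quantiles of the replicates."""
    rng = random.Random(seed)
    n = len(data)
    reps = sorted(stat([rng.choice(data) for _ in range(n)])
                  for _ in range(n_boot))
    alpha = 1.0 - conf
    return reps[int(alpha / 2 * n_boot)], reps[int((1 - alpha / 2) * n_boot) - 1]

# illustrated with the mean; the paper applies the same resampling idea
# to Rosenthal's fail-safe number
data = [2.1, 2.5, 1.9, 3.0, 2.7, 2.2, 2.8, 2.4]
lo, hi = bootstrap_percentile_ci(data, statistics.mean)
```

Any estimator can be passed as `stat`, which is what makes the bootstrap attractive for statistics whose sampling distribution is analytically intractable.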
CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.
Cooley, Richard L.; Vecchia, Aldo V.
1987-01-01
A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.
Using an R Shiny to Enhance the Learning Experience of Confidence Intervals
ERIC Educational Resources Information Center
Williams, Immanuel James; Williams, Kelley Kim
2018-01-01
Many students find understanding confidence intervals difficult, especially because of the amalgamation of concepts such as confidence levels, standard error, point estimates and sample sizes. An R Shiny application was created to assist the learning process of confidence intervals using graphics and data from the US National Basketball…
Coefficient Alpha Bootstrap Confidence Interval under Nonnormality
ERIC Educational Resources Information Center
Padilla, Miguel A.; Divers, Jasmin; Newton, Matthew
2012-01-01
Three different bootstrap methods for estimating confidence intervals (CIs) for coefficient alpha were investigated. In addition, the bootstrap methods were compared with the most promising coefficient alpha CI estimation methods reported in the literature. The CI methods were assessed through a Monte Carlo simulation utilizing conditions…
Coefficient Omega Bootstrap Confidence Intervals: Nonnormal Distributions
ERIC Educational Resources Information Center
Padilla, Miguel A.; Divers, Jasmin
2013-01-01
The performance of the normal theory bootstrap (NTB), the percentile bootstrap (PB), and the bias-corrected and accelerated (BCa) bootstrap confidence intervals (CIs) for coefficient omega was assessed through a Monte Carlo simulation under conditions not previously investigated. Of particular interests were nonnormal Likert-type and binary items.…
Chua, Elizabeth F; Hannula, Deborah E; Ranganath, Charan
2012-01-01
It is generally believed that accuracy and confidence in one's memory are related, but there are many instances when they diverge. Accordingly, it is important to disentangle the factors that contribute to memory accuracy and confidence, especially those factors that contribute to confidence, but not accuracy. We used eye movements to separately measure the effects of fluent cue processing, the target recognition experience, and relative evidence assessment on recognition confidence and accuracy. Eye movements were monitored during a face-scene associative recognition task, in which participants first saw a scene cue, followed by a forced-choice recognition test for the associated face, with confidence ratings. Eye movement indices of the target recognition experience were largely indicative of accuracy, and showed a relationship to confidence for accurate decisions. In contrast, eye movements during the scene cue raised the possibility that more fluent cue processing was related to higher confidence for both accurate and inaccurate recognition decisions. In a second experiment we manipulated cue familiarity, and therefore cue fluency. Participants showed higher confidence for cue-target associations when the cue was more familiar, especially for incorrect responses. These results suggest that over-reliance on cue familiarity and under-reliance on the target recognition experience may lead to erroneous confidence.
Confidence Intervals for the Mean: To Bootstrap or Not to Bootstrap
ERIC Educational Resources Information Center
Calzada, Maria E.; Gardner, Holly
2011-01-01
The results of a simulation conducted by a research team involving undergraduate and high school students indicate that when data are symmetric the Student's "t" confidence interval for a mean is superior to the studied non-parametric bootstrap confidence intervals. When data are skewed and for sample sizes n greater than or equal to 10,…
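The classical Student's t interval that the simulation favoured for symmetric data is simply mean ± t·s/√n. A minimal sketch (the critical value is passed in explicitly to keep the example standard-library only):

```python
import math
import statistics

def t_interval(sample, tcrit):
    """Classical Student's t CI for a mean: mean ± tcrit * s / sqrt(n).
    The critical value tcrit must be supplied by the caller
    (e.g. 2.262 for df = 9 at 95%), since the standard library has no
    t-distribution quantile function."""
    n = len(sample)
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(n)   # s / sqrt(n)
    return m - tcrit * se, m + tcrit * se

sample = [4.8, 5.1, 4.9, 5.3, 5.0, 4.7, 5.2, 5.4, 4.6, 5.0]
lo, hi = t_interval(sample, 2.262)    # 95%, df = 9
```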
Toward Using Confidence Intervals to Compare Correlations
ERIC Educational Resources Information Center
Zou, Guang Yong
2007-01-01
Confidence intervals are widely accepted as a preferred way to present study results. They encompass significance tests and provide an estimate of the magnitude of the effect. However, comparisons of correlations still rely heavily on significance testing. The persistence of this practice is caused primarily by the lack of simple yet accurate…
The P Value Problem in Otolaryngology: Shifting to Effect Sizes and Confidence Intervals.
Vila, Peter M; Townsend, Melanie Elizabeth; Bhatt, Neel K; Kao, W Katherine; Sinha, Parul; Neely, J Gail
2017-06-01
There is a lack of reporting effect sizes and confidence intervals in the current biomedical literature. The objective of this article is to present a discussion of the recent paradigm shift encouraging the use of reporting effect sizes and confidence intervals. Although P values help to inform us about whether an effect exists due to chance, effect sizes inform us about the magnitude of the effect (clinical significance), and confidence intervals inform us about the range of plausible estimates for the general population mean (precision). Reporting effect sizes and confidence intervals is a necessary addition to the biomedical literature, and these concepts are reviewed in this article.
Confidence intervals in Flow Forecasting by using artificial neural networks
NASA Astrophysics Data System (ADS)
Panagoulia, Dionysia; Tsekouras, George
2014-05-01
One of the major inadequacies in the implementation of Artificial Neural Networks (ANNs) for flow forecasting is the development of confidence intervals, because the relevant estimation cannot be implemented directly, in contrast to classical forecasting methods. The variation in the ANN output is a measure of uncertainty in the model predictions based on the training data set. Different methods for uncertainty analysis, such as bootstrap, Bayesian and Monte Carlo methods, have already been proposed for hydrologic and geophysical models, while methods for confidence intervals, such as error output, re-sampling and multi-linear regression adapted to ANNs, have been used for power load forecasting [1-2]. The aim of this paper is to present the re-sampling method for ANN prediction models and to develop it for next-day flow forecasting. The re-sampling method is based on ascending sorting of the errors between real and predicted values for all input vectors. The cumulative sample distribution function of the prediction errors is calculated and the confidence intervals are estimated by keeping the intermediate values, rejecting the extreme values according to the desired confidence levels, and holding the intervals symmetrical in probability. To apply the confidence intervals, input vectors are used from the Mesochora catchment in western-central Greece. The ANN's training algorithm is the stochastic back-propagation process with decreasing functions of learning rate and momentum term, for which an optimization process is conducted over the crucial parameter values, such as the number of neurons, the kind of activation functions, the initial values and time parameters of learning rate and momentum term etc. Input variables are historical data of previous days, such as flows, nonlinearly weather-related temperatures and nonlinearly weather-related rainfalls, based on correlation analysis between the flow under prediction and each implicit input
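The re-sampling construction described above (sort the prediction errors, keep the central probability mass symmetrically, shift the new prediction by the remaining extreme errors) can be sketched as follows; the error values and variable names are hypothetical:

```python
def resampling_interval(errors, prediction, conf=0.90):
    """Re-sampling interval as described: sort the (real - predicted)
    errors, drop (1 - conf)/2 of the probability mass in each tail
    (symmetric in probability), and shift the new prediction by the
    surviving extreme errors. Nearest-rank quantiles are used here;
    the paper's exact interpolation rule may differ."""
    srt = sorted(errors)
    n = len(srt)
    alpha = (1.0 - conf) / 2.0
    lo_err = srt[round(alpha * n)]
    hi_err = srt[round((1.0 - alpha) * n) - 1]
    return prediction + lo_err, prediction + hi_err

# hypothetical validation errors (real - predicted flow, m^3/s)
errs = [-3.0, -1.5, -0.8, -0.2, 0.1, 0.4, 0.9, 1.6, 2.5, 4.0]
lo, hi = resampling_interval(errs, 50.0, conf=0.80)   # -> (48.5, 52.5)
```

Because the interval comes from the empirical error distribution, it inherits any asymmetry of the forecast errors rather than assuming Gaussian residuals.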
ERIC Educational Resources Information Center
Strazzeri, Kenneth Charles
2013-01-01
The purposes of this study were to investigate (a) undergraduate students' reasoning about the concepts of confidence intervals (b) undergraduate students' interactions with "well-designed" screencast videos on sampling distributions and confidence intervals, and (c) how screencast videos improve undergraduate students' reasoning ability…
The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution
NASA Astrophysics Data System (ADS)
Shin, H.; Heo, J.; Kim, T.; Jung, Y.
2007-12-01
The generalized logistic (GL) distribution has been widely used for frequency analysis. However, there are few studies of the confidence intervals that indicate the prediction accuracy of the GL distribution. In this paper, the estimation of the confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM), and the asymptotic variances of each quantile estimator are derived as functions of the sample size, return period, and parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived confidence intervals of quantiles. The results show that the relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase as the return period increases and decrease as the sample size increases. PWM performs better than the other methods in terms of RRMSE when the data are almost symmetric, while ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was applied to fit the distribution of annual maximum rainfall data. The results show little difference in the estimated quantiles between ML and PWM, but distinct differences for MOM.
Quantifying uncertainty on sediment loads using bootstrap confidence intervals
NASA Astrophysics Data System (ADS)
Slaets, Johanna I. F.; Piepho, Hans-Peter; Schmitter, Petra; Hilger, Thomas; Cadisch, Georg
2017-01-01
Load estimates are more informative than constituent concentrations alone, as they allow quantification of on- and off-site impacts of environmental processes concerning pollutants, nutrients and sediment, such as soil fertility loss, reservoir sedimentation and irrigation channel siltation. While statistical models used to predict constituent concentrations have been developed considerably over the last few years, measures of uncertainty on constituent loads are rarely reported. Loads are the product of two predictions, constituent concentration and discharge, integrated over a time period, which does not make it straightforward to produce a standard error or a confidence interval. In this paper, a linear mixed model is used to estimate sediment concentrations. A bootstrap method is then developed that accounts for the uncertainty in the concentration and discharge predictions, allowing temporal correlation in the constituent data, and can be used when data transformations are required. The method was tested for a small watershed in Northwest Vietnam for the period 2010-2011. The results showed that confidence intervals were asymmetric, with the highest uncertainty in the upper limit, and that a load of 6262 Mg year-1 had a 95 % confidence interval of (4331, 12 267) in 2010, and a load of 5543 Mg year-1 an interval of (3593, 8975) in 2011. Additionally, the approach demonstrated that direct estimates from the data were biased downwards compared to bootstrap median estimates. These results imply that constituent loads predicted from regression-type water quality models could frequently be underestimating sediment yields and their environmental impact.
Confidence intervals for expected moments algorithm flood quantile estimates
Cohn, Timothy A.; Lane, William L.; Stedinger, Jery R.
2001-01-01
Historical and paleoflood information can substantially improve flood frequency estimates if appropriate statistical procedures are properly applied. However, the Federal guidelines for flood frequency analysis, set forth in Bulletin 17B, rely on an inefficient “weighting” procedure that fails to take advantage of historical and paleoflood information. This has led researchers to propose several more efficient alternatives including the Expected Moments Algorithm (EMA), which is attractive because it retains Bulletin 17B's statistical structure (method of moments with the Log Pearson Type 3 distribution) and thus can be easily integrated into flood analyses employing the rest of the Bulletin 17B approach. The practical utility of EMA, however, has been limited because no closed‐form method has been available for quantifying the uncertainty of EMA‐based flood quantile estimates. This paper addresses that concern by providing analytical expressions for the asymptotic variance of EMA flood‐quantile estimators and confidence intervals for flood quantile estimates. Monte Carlo simulations demonstrate the properties of such confidence intervals for sites where a 25‐ to 100‐year streamgage record is augmented by 50 to 150 years of historical information. The experiments show that the confidence intervals, though not exact, should be acceptable for most purposes.
Applying Bootstrap Resampling to Compute Confidence Intervals for Various Statistics with R
ERIC Educational Resources Information Center
Dogan, C. Deha
2017-01-01
Background: Most of the studies in academic journals use p values to represent statistical significance. However, this is not a good indicator of practical significance. Although confidence intervals provide information about the precision of point estimation, they are, unfortunately, rarely used. The infrequent use of confidence intervals might…
Oono, Ryoko
2017-01-01
High-throughput sequencing technology has helped microbial community ecologists explore ecological and evolutionary patterns at unprecedented scales. The benefits of a large sample size still typically outweigh those of greater sequencing depths per sample for accurate estimations of ecological inferences. However, excluding or not sequencing rare taxa may mislead the answers to the questions 'how and why are communities different?' This study evaluates the confidence intervals of ecological inferences from high-throughput sequencing data of foliar fungal endophytes as case studies through a range of sampling efforts, sequencing depths, and taxonomic resolutions to understand how technical and analytical practices may affect our interpretations. Increasing sampling size reliably decreased confidence intervals across multiple community comparisons. However, the effects of sequencing depths on confidence intervals depended on how rare taxa influenced the dissimilarity estimates among communities and did not significantly decrease confidence intervals for all community comparisons. A comparison of simulated communities under random drift suggests that sequencing depths are important in estimating dissimilarities between microbial communities under neutral selective processes. Confidence interval analyses reveal important biases as well as biological trends in microbial community studies that otherwise may be ignored when communities are only compared for statistically significant differences.
Confidence Interval Coverage for Cohen's Effect Size Statistic
ERIC Educational Resources Information Center
Algina, James; Keselman, H. J.; Penfield, Randall D.
2006-01-01
Kelley compared three methods for setting a confidence interval (CI) around Cohen's standardized mean difference statistic: the noncentral-"t"-based, percentile (PERC) bootstrap, and biased-corrected and accelerated (BCA) bootstrap methods under three conditions of nonnormality, eight cases of sample size, and six cases of population…
Using Asymptotic Results to Obtain a Confidence Interval for the Population Median
ERIC Educational Resources Information Center
Jamshidian, M.; Khatoonabadi, M.
2007-01-01
Almost all introductory and intermediate level statistics textbooks include the topic of confidence interval for the population mean. Almost all these texts introduce the median as a robust measure of central tendency. Only a few of these books, however, cover inference on the population median and in particular confidence interval for the median.…
ERIC Educational Resources Information Center
Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi
2012-01-01
One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…
Improved confidence intervals when the sample is counted an integer times longer than the blank.
Potter, William Edward; Strzelczyk, Jadwiga Jodi
2011-05-01
Past computer solutions for confidence intervals in paired counting are extended to the case where the ratio of the sample count time to the blank count time is taken to be an integer, IRR. Previously, confidence intervals have been named Neyman-Pearson confidence intervals; more correctly they should have been named Neyman confidence intervals or simply confidence intervals. The technique utilized mimics a technique used by Pearson and Hartley to tabulate confidence intervals for the expected value of the discrete Poisson and Binomial distributions. The blank count and the contribution of the sample to the gross count are assumed to be Poisson distributed. The expected value of the blank count, in the sample count time, is assumed known. The net count, OC, is taken to be the gross count minus the product of IRR with the blank count. The probability density function (PDF) for the net count can be determined in a straightforward manner.
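The PDF of the net count OC = gross − IRR·blank can indeed be computed in a straightforward manner by summing over possible blank counts. A sketch under the abstract's Poisson assumptions (the truncation bound and all parameter names are ours):

```python
import math

def poisson_pmf(k, mu):
    # computed in log space to stay stable for large k
    return math.exp(-mu + k * math.log(mu) - math.lgamma(k + 1))

def net_count_pdf(oc, mu_signal, mu_blank, irr, bmax=60):
    """P(OC = oc) for the net count OC = G - IRR*B, where the blank
    count B ~ Poisson(mu_blank) and the gross count
    G ~ Poisson(mu_signal + irr * mu_blank), i.e. the blank contributes
    irr times its rate during the irr-times-longer sample count.
    Sums over blank counts up to bmax (a truncation we chose)."""
    mu_gross = mu_signal + irr * mu_blank
    total = 0.0
    for b in range(bmax + 1):
        g = oc + irr * b              # gross count implied by (oc, b)
        if g >= 0:
            total += poisson_pmf(b, mu_blank) * poisson_pmf(g, mu_gross)
    return total
```

Confidence intervals for the signal mean can then be built by inverting this distribution, in the spirit of the Pearson and Hartley tabulation the abstract mentions.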
Confidence Intervals for True Scores Using the Skew-Normal Distribution
ERIC Educational Resources Information Center
Garcia-Perez, Miguel A.
2010-01-01
A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…
Likelihood-based confidence intervals for estimating floods with given return periods
NASA Astrophysics Data System (ADS)
Martins, Eduardo Sávio P. R.; Clarke, Robin T.
1993-06-01
This paper discusses aspects of the calculation of likelihood-based confidence intervals for T-year floods, with particular reference to (1) the two-parameter gamma distribution; (2) the Gumbel distribution; (3) the two-parameter log-normal distribution, and other distributions related to the normal by Box-Cox transformations. Calculation of the confidence limits is straightforward using the Nelder-Mead algorithm with a constraint incorporated, although care is necessary to ensure convergence either of the Nelder-Mead algorithm, or of the Newton-Raphson calculation of maximum-likelihood estimates. Methods are illustrated using records from 18 gauging stations in the basin of the River Itajai-Acu, State of Santa Catarina, southern Brazil. A small and restricted simulation compared likelihood-based confidence limits with those given by use of the central limit theorem; for the same confidence probability, the confidence limits of the simulation were wider than those of the central limit theorem, which failed more frequently to contain the true quantile being estimated. The paper discusses possible applications of likelihood-based confidence intervals in other areas of hydrological analysis.
Rosenblum, Michael A; van der Laan, Mark J
2009-01-07
The validity of standard confidence intervals constructed in survey sampling is based on the central limit theorem. For small sample sizes, the central limit theorem may give a poor approximation, resulting in confidence intervals that are misleading. We discuss this issue and propose methods for constructing confidence intervals for the population mean tailored to small sample sizes. We present a simple approach for constructing confidence intervals for the population mean based on tail bounds for the sample mean that are correct for all sample sizes. Bernstein's inequality provides one such tail bound. The resulting confidence intervals have guaranteed coverage probability under much weaker assumptions than are required for standard methods. A drawback of this approach, as we show, is that these confidence intervals are often quite wide. In response to this, we present a method for constructing much narrower confidence intervals, which are better suited for practical applications, and that are still more robust than confidence intervals based on standard methods, when dealing with small sample sizes. We show how to extend our approaches to much more general estimation problems than estimating the sample mean. We describe how these methods can be used to obtain more reliable confidence intervals in survey sampling. As a concrete example, we construct confidence intervals using our methods for the number of violent deaths between March 2003 and July 2006 in Iraq, based on data from the study "Mortality after the 2003 invasion of Iraq: A cross sectional cluster sample survey," by Burnham et al. (2006).
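A Bernstein-type interval of the kind described can be sketched for bounded observations. This illustrates the general idea under assumed variance and range bounds; it is not the authors' exact construction:

```python
import math

def bernstein_ci(sample, var_bound, range_bound, conf=0.95):
    """Finite-sample CI for the mean of bounded data from Bernstein's
    inequality, P(|mean - mu| >= t) <= 2*exp(-n*t^2 / (2*v + 2*c*t/3)),
    with an assumed variance bound v and centred range bound
    |X - mu| <= c. Valid at every sample size, at the price of width."""
    n = len(sample)
    m = sum(sample) / n
    big_l = math.log(2.0 / (1.0 - conf))
    c = range_bound
    # set the tail bound equal to 1 - conf and solve the quadratic
    # n*t^2 - (2c/3)*L*t - 2*v*L = 0 for the positive root t
    b = -(2.0 * c / 3.0) * big_l
    d = -2.0 * var_bound * big_l
    t = (-b + math.sqrt(b * b - 4.0 * n * d)) / (2.0 * n)
    return m - t, m + t

# 20 observations in [0, 1]: variance <= 1/4 and |X - mu| <= 1
obs = [0.2, 0.4, 0.5, 0.6, 0.8] * 4
lo, hi = bernstein_ci(obs, var_bound=0.25, range_bound=1.0)
```

For comparison, a 95% normal-theory interval at n = 20 with variance 1/4 has half-width about 0.22 versus roughly 0.37 here, illustrating the width penalty for guaranteed coverage that the abstract discusses.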
Confidence Intervals from Realizations of Simulated Nuclear Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Younes, W.; Ratkiewicz, A.; Ressler, J. J.
2017-09-28
Various statistical techniques are discussed that can be used to assign a level of confidence in the prediction of models that depend on input data with known uncertainties and correlations. The particular techniques reviewed in this paper are: 1) random realizations of the input data using Monte-Carlo methods, 2) the construction of confidence intervals to assess the reliability of model predictions, and 3) resampling techniques to impose statistical constraints on the input data based on additional information. These techniques are illustrated with a calculation of the keff value, based on the 235U(n,f) and 239Pu(n,f) cross sections.
Likelihood-Based Confidence Intervals in Exploratory Factor Analysis
ERIC Educational Resources Information Center
Oort, Frans J.
2011-01-01
In exploratory or unrestricted factor analysis, all factor loadings are free to be estimated. In oblique solutions, the correlations between common factors are free to be estimated as well. The purpose of this article is to show how likelihood-based confidence intervals can be obtained for rotated factor loadings and factor correlations, by…
Fung, Tak; Keenan, Kevin
2014-01-01
The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥ 95%), a sample size of > 30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥ 98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥ 95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥ 95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.
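The paper's finite-population method is not reproduced here, but the link between sample size and allele-frequency precision can be sketched with a standard (infinite-population) Wilson score interval for a binomial proportion; the counts below are hypothetical.

```python
from math import sqrt

def wilson_interval(k, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion.

    k: observed copies of the allele in the sample
    n: total allele copies sampled (2 * number of diploid individuals)
    z: normal quantile (1.96 for ~95% confidence)
    """
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# hypothetical sample: 30 diploid individuals -> 60 allele copies, 18 carrying the allele
lo, hi = wilson_interval(18, 60)
```

Even at 30 individuals the interval spans well over 0.05 on either side of the sample frequency, consistent with the abstract's point that n > 30 is often needed for that precision.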
Four Bootstrap Confidence Intervals for the Binomial-Error Model.
ERIC Educational Resources Information Center
Lin, Miao-Hsiang; Hsiung, Chao A.
1992-01-01
Four bootstrap methods are identified for constructing confidence intervals for the binomial-error model. The extent to which similar results are obtained and the theoretical foundation of each method and its relevance and ranges of modeling the true score uncertainty are discussed. (SLD)
Profile-likelihood Confidence Intervals in Item Response Theory Models.
Chalmers, R Philip; Pek, Jolynn; Liu, Yang
2017-01-01
Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
A Comparison of Methods for Estimating Confidence Intervals for Omega-Squared Effect Size
ERIC Educational Resources Information Center
Finch, W. Holmes; French, Brian F.
2012-01-01
Effect size use has been increasing in the past decade in many research areas. Confidence intervals associated with effect sizes are encouraged to be reported. Prior work has investigated the performance of confidence interval estimation with Cohen's d. This study extends this line of work to the analysis of variance case with more than two…
Confidence intervals for a difference between lognormal means in cluster randomization trials.
Poirier, Julia; Zou, G Y; Koval, John
2017-04-01
Cluster randomization trials, in which intact social units are randomized to different interventions, have become popular in the last 25 years. Outcomes from these trials are in many cases positively skewed, following approximately lognormal distributions. When inference focuses on the difference between treatment arm arithmetic means, existing confidence interval procedures either make restrictive assumptions or are complex to implement. We approach this problem by assuming that log-transformed outcomes from each treatment arm follow a one-way random effects model. The treatment arm means are functions of multiple parameters for which separate confidence intervals are readily available, suggesting that the method of variance estimates recovery may be applied to obtain closed-form confidence intervals. A simulation study showed that this simple approach performs well at small sample sizes in terms of empirical coverage, relatively balanced tail errors, and interval widths, as compared to existing methods. The methods are illustrated using data arising from a cluster randomization trial investigating a critical pathway for the treatment of community-acquired pneumonia.
Robust Confidence Interval for a Ratio of Standard Deviations
ERIC Educational Resources Information Center
Bonett, Douglas G.
2006-01-01
Comparing variability of test scores across alternate forms, test conditions, or subpopulations is a fundamental problem in psychometrics. A confidence interval for a ratio of standard deviations is proposed that performs as well as the classic method with normal distributions and performs dramatically better with nonnormal distributions. A simple…
Quantitative imaging biomarkers: Effect of sample size and bias on confidence interval coverage.
Obuchowski, Nancy A; Bullen, Jennifer
2017-01-01
Introduction: Quantitative imaging biomarkers (QIBs) are being increasingly used in medical practice and clinical trials. An essential first step in the adoption of a quantitative imaging biomarker is the characterization of its technical performance, i.e. precision and bias, through one or more performance studies. Then, given the technical performance, a confidence interval for a new patient's true biomarker value can be constructed. Estimating bias and precision can be problematic because rarely are both estimated in the same study, precision studies are usually quite small, and bias cannot be measured when there is no reference standard. Methods: A Monte Carlo simulation study was conducted to assess factors affecting nominal coverage of confidence intervals for a new patient's quantitative imaging biomarker measurement and for change in the quantitative imaging biomarker over time. Factors considered include sample size for estimating bias and precision, effect of fixed and non-proportional bias, clustered data, and absence of a reference standard. Results: Technical performance studies of a quantitative imaging biomarker should include at least 35 test-retest subjects to estimate precision and 65 cases to estimate bias. Confidence intervals for a new patient's quantitative imaging biomarker measurement constructed under the no-bias assumption provide nominal coverage as long as the fixed bias is <12%. For confidence intervals of the true change over time, linearity must hold and the slope of the regression of the measurements vs. true values should be between 0.95 and 1.05. The regression slope can be assessed adequately as long as fixed multiples of the measurand can be generated. Even small non-proportional bias greatly reduces confidence interval coverage. Multiple lesions in the same subject can be treated as independent when estimating precision. Conclusion: Technical performance studies of quantitative imaging biomarkers require moderate sample sizes in
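The style of Monte Carlo coverage check described in this abstract can be sketched generically. The toy setup below (a z-type interval for a normal mean, with hypothetical parameter values) stands in for the paper's QIB-specific models; it simply counts how often the interval captures the true value.

```python
import random
from math import sqrt
from statistics import mean, stdev

def coverage(n_subjects, n_sims=2000, true_mu=10.0, sigma=1.0, z=1.96, seed=42):
    """Fraction of simulated z-type confidence intervals that contain true_mu."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        sample = [rng.gauss(true_mu, sigma) for _ in range(n_subjects)]
        m = mean(sample)
        se = stdev(sample) / sqrt(n_subjects)
        if m - z * se <= true_mu <= m + z * se:
            hits += 1
    return hits / n_sims

cov = coverage(35)  # 35 subjects, echoing the abstract's precision-study guideline
```

Comparing `cov` against the nominal 0.95 across different sample sizes or bias settings is the essence of such a simulation study.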
Bansal, Ravi; Staib, Lawrence H.; Laine, Andrew F.; Xu, Dongrong; Liu, Jun; Posecion, Lainie F.; Peterson, Bradley S.
2010-01-01
Images from different individuals typically cannot be registered precisely because anatomical features within the images differ across the people imaged and because the current methods for image registration have inherent technological limitations that interfere with perfect registration. Quantifying the inevitable error in image registration is therefore of crucial importance in assessing the effects that image misregistration may have on subsequent analyses in an imaging study. We have developed a mathematical framework for quantifying errors in registration by computing the confidence intervals of the estimated parameters (3 translations, 3 rotations, and 1 global scale) for the similarity transformation. The presence of noise in images and the variability in anatomy across individuals ensure that estimated registration parameters are always random variables. We assume a functional relation among intensities across voxels in the images, and we use the theory of nonlinear, least-squares estimation to show that the parameters are multivariate Gaussian distributed. We then use the covariance matrix of this distribution to compute the confidence intervals of the transformation parameters. These confidence intervals provide a quantitative assessment of the registration error across the images. Because transformation parameters are nonlinearly related to the coordinates of landmark points in the brain, we subsequently show that the coordinates of those landmark points are also multivariate Gaussian distributed. Using these distributions, we then compute the confidence intervals of the coordinates for landmark points in the image. Each of these confidence intervals in turn provides a quantitative assessment of the registration error at a particular landmark point. Because our method is computationally intensive, however, its current implementation is limited to assessing the error of the parameters in the similarity transformation across images. We assessed the
Closed-form confidence intervals for functions of the normal mean and standard deviation.
Donner, Allan; Zou, G Y
2012-08-01
Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.
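The core step of the variance-recovery approach described above, combining separate confidence limits for two parameters into a closed-form interval for their sum, can be sketched as follows. The component estimates and limits below are hypothetical placeholders, not values from the paper.

```python
from math import sqrt

def mover_sum(a, la, ua, b, lb, ub):
    """Method of variance estimates recovery (MOVER) interval for a + b,
    given point estimates a, b and their separate confidence limits
    (la, ua) and (lb, ub)."""
    lower = (a + b) - sqrt((a - la) ** 2 + (b - lb) ** 2)
    upper = (a + b) + sqrt((ua - a) ** 2 + (ub - b) ** 2)
    return lower, upper

# hypothetical components: a mean of 50 with CI (48, 52), and a z*SD term
# of 20 with CI (17, 24), combining into an interval for a normal percentile
lo, hi = mover_sum(50, 48, 52, 20, 17, 24)
```

Note how the recovered interval is asymmetric whenever the component limits are, which is exactly what symmetric Wald intervals for functions of the mean and standard deviation fail to capture.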
Confidence Intervals for Proportion Estimates in Complex Samples. Research Report. ETS RR-06-21
ERIC Educational Resources Information Center
Oranje, Andreas
2006-01-01
Confidence intervals are an important tool to indicate uncertainty of estimates and to give an idea of probable values of an estimate if a different sample from the population was drawn or a different sample of measures was used. Standard symmetric confidence intervals for proportion estimates based on a normal approximation can yield bounds…
ERIC Educational Resources Information Center
Tryon, Warren W.; Lewis, Charles
2009-01-01
Tryon presented a graphic inferential confidence interval (ICI) approach to analyzing two independent and dependent means for statistical difference, equivalence, replication, indeterminacy, and trivial difference. Tryon and Lewis corrected the reduction factor used to adjust descriptive confidence intervals (DCIs) to create ICIs and introduced…
Confidence Intervals for Weighted Composite Scores under the Compound Binomial Error Model
ERIC Educational Resources Information Center
Kim, Kyung Yong; Lee, Won-Chan
2018-01-01
Reporting confidence intervals with test scores helps test users make important decisions about examinees by providing information about the precision of test scores. Although a variety of estimation procedures based on the binomial error model are available for computing intervals for test scores, these procedures assume that items are randomly…
NASA Astrophysics Data System (ADS)
Salamat, Mona; Zare, Mehdi; Holschneider, Matthias; Zöller, Gert
2017-03-01
The problem of estimating the maximum possible earthquake magnitude m_max has attracted growing attention in recent years. Due to sparse data, the role of uncertainties becomes crucial. In this work, we determine the uncertainties related to the maximum magnitude in terms of confidence intervals. Using an earthquake catalog of Iran, m_max is estimated for different predefined levels of confidence in six seismotectonic zones. Assuming the doubly truncated Gutenberg-Richter distribution as a statistical model for earthquake magnitudes, confidence intervals for the maximum possible magnitude of earthquakes are calculated in each zone. While the lower limit of the confidence interval is the magnitude of the maximum observed event, the upper limit is calculated from the catalog and the statistical model. For this aim, we use both the original catalog, to which no declustering method was applied, and a declustered version of the catalog. Based on the study by Holschneider et al. (Bull Seismol Soc Am 101(4):1649-1659, 2011), the confidence interval for m_max is frequently unbounded, especially if high levels of confidence are required; in this case, no information is gained from the data. Therefore, we elaborate for which settings finite confidence intervals are obtained. In this work, Iran is divided into six seismotectonic zones, namely Alborz, Azerbaijan, Zagros, Makran, Kopet Dagh, and Central Iran. Although the confidence intervals calculated for the Central Iran and Zagros seismotectonic zones are acceptable for meaningful levels of confidence, the results for Kopet Dagh, Alborz, Azerbaijan, and Makran are less conclusive. The results indicate that estimating m_max from an earthquake catalog alone, for reasonable levels of confidence, is almost impossible.
Tarone, Aaron M; Foran, David R
2008-07-01
Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.
Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models
ERIC Educational Resources Information Center
Wagler, Amy E.
2014-01-01
Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…
ERIC Educational Resources Information Center
Zhang, Guangjian; Preacher, Kristopher J.; Luo, Shanhong
2010-01-01
This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis. Coverage performances of "SE"-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile…
Carnegie, Nicole Bohme
2011-04-15
The incidence of new infections is a key measure of the status of the HIV epidemic, but accurate measurement of incidence is often constrained by limited data. Karon et al. (Statist. Med. 2008; 27:4617–4633) developed a model to estimate the incidence of HIV infection from surveillance data with biologic testing for recent infection for newly diagnosed cases. This method has been implemented by public health departments across the United States and is behind the new national incidence estimates, which are about 40 per cent higher than previous estimates. We show that the delta method approximation given for the variance of the estimator is incomplete, leading to an inflated variance estimate. This contributes to the generation of overly conservative confidence intervals, potentially obscuring important differences between populations. We demonstrate via simulation that an innovative model-based bootstrap method using the specified model for the infection and surveillance process improves confidence interval coverage and adjusts for the bias in the point estimate. Confidence interval coverage is about 94–97 per cent after correction, compared with 96–99 per cent before. The simulated bias in the estimate of incidence ranges from −6.3 to +14.6 per cent under the original model but is consistently under 1 per cent after correction by the model-based bootstrap. In an application to data from King County, Washington in 2007 we observe correction of 7.2 per cent relative bias in the incidence estimate and a 66 per cent reduction in the width of the 95 per cent confidence interval using this method. We provide open-source software to implement the method that can also be extended for alternate models.
Robust misinterpretation of confidence intervals.
Hoekstra, Rink; Morey, Richard D; Rouder, Jeffrey N; Wagenmakers, Eric-Jan
2014-10-01
Null hypothesis significance testing (NHST) is undoubtedly the most common inferential technique used to justify claims in the social sciences. However, even staunch defenders of NHST agree that its outcomes are often misinterpreted. Confidence intervals (CIs) have frequently been proposed as a more useful alternative to NHST, and their use is strongly encouraged in the APA Manual. Nevertheless, little is known about how researchers interpret CIs. In this study, 120 researchers and 442 students-all in the field of psychology-were asked to assess the truth value of six particular statements involving different interpretations of a CI. Although all six statements were false, both researchers and students endorsed, on average, more than three statements, indicating a gross misunderstanding of CIs. Self-declared experience with statistics was not related to researchers' performance, and, even more surprisingly, researchers hardly outperformed the students, even though the students had not received any education on statistical inference whatsoever. Our findings suggest that many researchers do not know the correct interpretation of a CI. The misunderstandings surrounding p-values and CIs are particularly unfortunate because they constitute the main tools by which psychologists draw conclusions from data.
ERIC Educational Resources Information Center
Barnette, J. Jackson
2005-01-01
An Excel program developed to assist researchers in the determination and presentation of confidence intervals around commonly used score reliability coefficients is described. The software includes programs to determine confidence intervals for Cronbach's alpha, Pearson r-based coefficients such as those used in test-retest and alternate forms…
Donald B.K. English
2000-01-01
In this paper I use bootstrap procedures to develop confidence intervals for estimates of total industrial output generated per thousand tourist visits. Mean expenditures, estimated from replicated visitor expenditure data, included weights to correct for response bias. Impacts were estimated with IMPLAN. Ninety percent interval endpoints were 6 to 16 percent above or below the...
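A minimal percentile-bootstrap sketch of the general technique used in this record (hypothetical per-visit expenditure figures; the response-bias weighting and IMPLAN input-output modeling of the study are not reproduced):

```python
import random
from statistics import mean

def percentile_bootstrap_ci(data, stat=mean, n_boot=5000, alpha=0.10, seed=1):
    """Percentile bootstrap: resample with replacement, recompute the
    statistic, and take empirical quantiles of the bootstrap distribution."""
    rng = random.Random(seed)
    n = len(data)
    boot = sorted(stat([rng.choice(data) for _ in range(n)]) for _ in range(n_boot))
    return boot[int(alpha / 2 * n_boot)], boot[int((1 - alpha / 2) * n_boot) - 1]

# hypothetical expenditures per visit (dollars)
spend = [120, 95, 210, 60, 180, 75, 330, 140, 90, 110, 250, 85]
lo, hi = percentile_bootstrap_ci(spend)  # 90% interval, matching the study's level
```

Any downstream multiplier (such as output per dollar spent) can then be applied to `lo` and `hi` to carry the interval through to the impact estimate.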
Lai, Keke; Kelley, Ken
2011-06-01
In addition to evaluating a structural equation model (SEM) as a whole, often the model parameters are of interest and confidence intervals for those parameters are formed. Given a model with a good overall fit, it is entirely possible for the targeted effects of interest to have very wide confidence intervals, thus giving little information about the magnitude of the population targeted effects. With the goal of obtaining sufficiently narrow confidence intervals for the model parameters of interest, sample size planning methods for SEM are developed from the accuracy in parameter estimation approach. One method plans for the sample size so that the expected confidence interval width is sufficiently narrow. An extended procedure ensures that the obtained confidence interval will be no wider than desired, with some specified degree of assurance. A Monte Carlo simulation study was conducted that verified the effectiveness of the procedures in realistic situations. The methods developed have been implemented in the MBESS package in R so that they can be easily applied by researchers. © 2011 American Psychological Association
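The accuracy-in-parameter-estimation idea can be illustrated in its simplest form: choose n so that the expected width of an interval meets a target. The sketch below plans n for a single normal mean with a z-based interval; it is only a toy analogue of the SEM procedures implemented in MBESS, and the input values are hypothetical.

```python
from math import ceil

def n_for_ci_width(sigma, desired_width, z=1.96):
    """Smallest n such that the expected width of a z-based CI for a mean,
    2 * z * sigma / sqrt(n), does not exceed desired_width."""
    return ceil((2 * z * sigma / desired_width) ** 2)

# hypothetical planning values: population SD 15, target full CI width 5
n = n_for_ci_width(sigma=15.0, desired_width=5.0)
```

The "assurance" extension mentioned in the abstract would inflate this n further so that the realized (not just expected) width falls below the target with specified probability.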
Cooley, Richard L.
1993-01-01
A new method is developed to efficiently compute exact Scheffé-type confidence intervals for output (or other function of parameters) g(β) derived from a groundwater flow model. The method is general in that parameter uncertainty can be specified by any statistical distribution having a log probability density function (log pdf) that can be expanded in a Taylor series. However, for this study parameter uncertainty is specified by a statistical multivariate beta distribution that incorporates hydrogeologic information in the form of the investigator's best estimates of parameters and a grouping of random variables representing possible parameter values so that each group is defined by maximum and minimum bounds and an ordering according to increasing value. The new method forms the confidence intervals from maximum and minimum limits of g(β) on a contour of a linear combination of (1) the quadratic form for the parameters used by Cooley and Vecchia (1987) and (2) the log pdf for the multivariate beta distribution. Three example problems are used to compare characteristics of the confidence intervals for hydraulic head obtained using different weights for the linear combination. Different weights generally produced similar confidence intervals, whereas the method of Cooley and Vecchia (1987) often produced much larger confidence intervals.
Teach a Confidence Interval for the Median in the First Statistics Course
ERIC Educational Resources Information Center
Howington, Eric B.
2017-01-01
Few introductory statistics courses consider statistical inference for the median. This article argues in favour of adding a confidence interval for the median to the first statistics course. Several methods suitable for introductory statistics students are identified and briefly reviewed.
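One distribution-free construction commonly taught at this level (a standard textbook method, not necessarily one of those the article reviews) inverts the Binomial(n, 1/2) distribution of the count of observations below the median to pick order statistics as interval endpoints:

```python
from math import comb

def median_ci(x, conf=0.95):
    """Distribution-free confidence interval for the median: endpoints are
    the d-th smallest and d-th largest observations, where d is the largest
    integer with P(Binomial(n, 1/2) < d) <= (1 - conf) / 2."""
    xs = sorted(x)
    n = len(xs)
    alpha = 1 - conf
    cum, d = 0.0, 0
    for i in range(n):
        nxt = cum + comb(n, i) * 0.5 ** n
        if nxt > alpha / 2:
            break
        cum, d = nxt, i + 1
    if d == 0:
        # sample too small to achieve the requested confidence
        return xs[0], xs[-1]
    return xs[d - 1], xs[n - d]

lo, hi = median_ci(list(range(1, 12)))  # 11 ordered values 1..11
```

For n = 11 this selects the 2nd and 10th order statistics, and the guaranteed coverage is at least 95% for any continuous distribution, which is what makes the method attractive in a first course.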
H. T. Schreuder; M. S. Williams
2000-01-01
In simulation sampling from forest populations using sample sizes of 20, 40, and 60 plots respectively, confidence intervals based on the bootstrap (accelerated, percentile, and t-distribution based) were calculated and compared with those based on the classical t confidence intervals for mapped populations and subdomains within those populations. A 68.1 ha mapped...
Cooley, Richard L.
1993-01-01
Calibration data (observed values corresponding to model-computed values of dependent variables) are incorporated into a general method of computing exact Scheffé-type confidence intervals analogous to the confidence intervals developed in part 1 (Cooley, this issue) for a function of parameters derived from a groundwater flow model. Parameter uncertainty is specified by a distribution of parameters conditioned on the calibration data. This distribution was obtained as a posterior distribution by applying Bayes' theorem to the hydrogeologically derived prior distribution of parameters from part 1 and a distribution of differences between the calibration data and corresponding model-computed dependent variables. Tests show that the new confidence intervals can be much smaller than the intervals of part 1 because the prior parameter variance-covariance structure is altered so that combinations of parameters that give poor model fit to the data are unlikely. The confidence intervals of part 1 and the new confidence intervals can be effectively employed in a sequential method of model construction whereby new information is used to reduce confidence interval widths at each stage.
Jackson, Dan; Bowden, Jack
2016-09-07
Confidence intervals for the between study variance are useful in random-effects meta-analyses because they quantify the uncertainty in the corresponding point estimates. Methods for calculating these confidence intervals have been developed that are based on inverting hypothesis tests using generalised heterogeneity statistics. Whilst, under the random effects model, these new methods furnish confidence intervals with the correct coverage, the resulting intervals are usually very wide, making them uninformative. We discuss a simple strategy for obtaining 95 % confidence intervals for the between-study variance with a markedly reduced width, whilst retaining the nominal coverage probability. Specifically, we consider the possibility of using methods based on generalised heterogeneity statistics with unequal tail probabilities, where the tail probability used to compute the upper bound is greater than 2.5 %. This idea is assessed using four real examples and a variety of simulation studies. Supporting analytical results are also obtained. Our results provide evidence that using unequal tail probabilities can result in shorter 95 % confidence intervals for the between-study variance. We also show some further results for a real example that illustrates how shorter confidence intervals for the between-study variance can be useful when performing sensitivity analyses for the average effect, which is usually the parameter of primary interest. We conclude that using unequal tail probabilities when computing 95 % confidence intervals for the between-study variance, when using methods based on generalised heterogeneity statistics, can result in shorter confidence intervals. We suggest that those who find the case for using unequal tail probabilities convincing should use the '1-4 % split', where greater tail probability is allocated to the upper confidence bound. The 'width-optimal' interval that we present deserves further investigation.
NASA Technical Reports Server (NTRS)
Rutledge, Charles K.
1988-01-01
The validity of applying chi-square based confidence intervals to far-field acoustic flyover spectral estimates was investigated. Simulated data, using a Kendall series and experimental acoustic data from the NASA/McDonnell Douglas 500E acoustics test, were analyzed. Statistical significance tests to determine the equality of distributions of the simulated and experimental data relative to theoretical chi-square distributions were performed. Bias and uncertainty errors associated with the spectral estimates were easily identified from the data sets. A model relating the uncertainty and bias errors to the estimates resulted, which aided in determining the appropriateness of the chi-square distribution based confidence intervals. Such confidence intervals were appropriate for nontonally associated frequencies of the experimental data but were inappropriate for tonally associated estimate distributions. The inappropriateness at the tonally associated frequencies was indicated by the presence of bias error and nonconformity of the distributions to the theoretical chi-square distribution. A technique for determining appropriate confidence intervals at the tonally associated frequencies was suggested.
Stanley, Jeffrey R.; Adkins, Joshua N.; Slysz, Gordon W.; Monroe, Matthew E.; Purvine, Samuel O.; Karpievitch, Yuliya V.; Anderson, Gordon A.; Smith, Richard D.; Dabney, Alan R.
2011-01-01
Current algorithms for quantifying peptide identification confidence in the accurate mass and time (AMT) tag approach assume that the AMT tags themselves have been correctly identified. However, there is uncertainty in the identification of AMT tags, as this is based on matching LC-MS/MS fragmentation spectra to peptide sequences. In this paper, we incorporate confidence measures for the AMT tag identifications into the calculation of probabilities for correct matches to an AMT tag database, resulting in a more accurate overall measure of identification confidence for the AMT tag approach. The method is referred to as Statistical Tools for AMT tag Confidence (STAC). STAC additionally provides a Uniqueness Probability (UP) to help distinguish between multiple matches to an AMT tag and a method to calculate an overall false discovery rate (FDR). STAC is freely available for download as both a command line and a Windows graphical application. PMID:21692516
Procedures for estimating confidence intervals for selected method performance parameters.
McClure, F D; Lee, J K
2001-01-01
Procedures for estimating confidence intervals (CIs) for the repeatability variance (σr²), reproducibility variance (σR² = σL² + σr²), laboratory component (σL²), and their corresponding standard deviations σr, σR, and σL, respectively, are presented. In addition, CIs for the ratio of the repeatability component to the reproducibility variance (σr²/σR²) and the ratio of the laboratory component to the reproducibility variance (σL²/σR²) are also presented.
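The point estimates underlying these variance components (repeatability, laboratory, and reproducibility) come from one-way ANOVA mean squares in a collaborative study, and can be sketched as below. The mean squares are hypothetical; the paper's CI procedures themselves are not reproduced.

```python
def variance_components(ms_between, ms_within, n_per_lab):
    """Point estimates of the variance components from one-way ANOVA
    mean squares in a collaborative (interlaboratory) study."""
    var_r = ms_within                                       # repeatability variance
    var_L = max((ms_between - ms_within) / n_per_lab, 0.0)  # laboratory component
    var_R = var_r + var_L                                   # reproducibility variance
    return var_r, var_L, var_R

# hypothetical mean squares: between-lab 9.0, within-lab 3.0, 2 replicates per lab
vr, vL, vR = variance_components(9.0, 3.0, 2)
```

The ratios discussed in the abstract follow directly, e.g. `vr / vR` for the repeatability share of the reproducibility variance.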
NASA Astrophysics Data System (ADS)
Ulizio, T. P.; Bilbrey, C.; Stoyanoff, N.; Dixon, J. L.
2015-12-01
Mass wasting events are geologic hazards that impact human life and property across a variety of landscapes. These movements can be triggered by tectonic activity, anomalous precipitation events, or both, acting to decrease the factor of safety ratio on a hillslope to the point of failure. There exists an active hazard landscape in the West Boulder River drainage of Park Co., MT in which the mechanisms of slope failure are unknown. It is known that the region has not seen significant tectonic activity within the last decade, leaving anomalous precipitation events as the likely trigger for slope failures in the landscape. Precipitation can be delivered to a landscape via rainfall or snow; it was the aim of this study to determine the precipitation delivery mechanism most likely responsible for movements in the West Boulder drainage following the Jungle Wildfire of 2006. Data were compiled from four SNOTEL sites in the surrounding area, spanning 33 years, focusing on, but not limited to: maximum snow water equivalent (SWE) values in a water year, median SWE values on the date on which maximum SWE was recorded in a water year, the total precipitation accumulated in a water year, etc. Means were computed and 99% confidence intervals were constructed around these means. Recurrence intervals and exceedance probabilities were computed for maximum SWE values and total precipitation accumulated in a water year to determine water years with anomalous precipitation. It was determined that the water year 2010-2011 received an anomalously high amount of SWE, and snow melt in the spring of this water year likely triggered recent mass wasting movements. This finding is further supported by Google Earth imagery showing movements between 2009 and 2011. The return interval for the maximum SWE value in 2010-11 at the Placer Basin SNOTEL site was 34 years, while return intervals at the Box Canyon and Monument Peak SNOTEL sites were 17.5 and 17 years, respectively. Max SWE values lie outside the
WASP (Write a Scientific Paper) using Excel - 6: Standard error and confidence interval.
Grech, Victor
2018-03-01
The calculation of descriptive statistics includes the calculation of standard error and confidence interval, an inevitable component of data analysis in inferential statistics. This paper provides pointers as to how to do this in Microsoft Excel™. Copyright © 2018 Elsevier B.V. All rights reserved.
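The quantities the paper computes in Excel (SE as the sample standard deviation over the square root of n, and a t-based interval around the mean) can be sketched in a few lines. The data and the critical t value are illustrative; in Excel the same result comes from `STDEV.S`, `SQRT`, `COUNT`, and `CONFIDENCE.T`:

```python
# Standard error of the mean and a 95% t-based confidence interval.
# Illustrative data only; t_crit = 2.365 is the two-sided t(0.975, df = 7).
import math
import statistics

def mean_ci(data, t_crit):
    """Return (mean, standard error, lower, upper) for the sample mean."""
    n = len(data)
    m = statistics.mean(data)
    se = statistics.stdev(data) / math.sqrt(n)  # standard error of the mean
    return m, se, m - t_crit * se, m + t_crit * se

data = [4.1, 5.0, 3.8, 4.6, 5.2, 4.4, 4.9, 4.3]
m, se, lo, hi = mean_ci(data, t_crit=2.365)
print(f"mean={m:.2f}  SE={se:.3f}  95% CI=({lo:.2f}, {hi:.2f})")
```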
Another look at confidence intervals: Proposal for a more relevant and transparent approach
NASA Astrophysics Data System (ADS)
Biller, Steven D.; Oser, Scott M.
2015-02-01
The behaviors of various confidence/credible interval constructions are explored, particularly in the region of low event numbers where methods diverge most. We highlight a number of challenges, such as the treatment of nuisance parameters, and common misconceptions associated with such constructions. An informal survey of the literature suggests that confidence intervals are not always defined in relevant ways and are too often misinterpreted and/or misapplied. This can lead to seemingly paradoxical behaviors and flawed comparisons regarding the relevance of experimental results. We therefore conclude that there is a need for a more pragmatic strategy which recognizes that, while it is critical to objectively convey the information content of the data, there is also a strong desire to derive bounds on model parameter values and a natural instinct to interpret things this way. Accordingly, we attempt to put aside philosophical biases in favor of a practical view to propose a more transparent and self-consistent approach that better addresses these issues.
Spacecraft utility and the development of confidence intervals for criticality of anomalies
NASA Technical Reports Server (NTRS)
Williams, R. E.
1980-01-01
The concept of spacecraft utility, a measure of its performance in orbit, is discussed and its formulation is described. Performance is defined in terms of the malfunctions that occur and the criticality to the mission of these malfunctions. Different approaches to establishing average or expected values of criticality are discussed and confidence intervals are developed for parameters used in the computation of utility.
ERIC Educational Resources Information Center
Zientek, Linda Reichwein; Yetkiner, Z. Ebrar; Thompson, Bruce
2010-01-01
The authors report the contextualization of effect sizes within mathematics anxiety research, and more specifically within research using the Mathematics Anxiety Rating Scale (MARS) and the MARS for Adolescents (MARS-A). The effect sizes from 45 studies were characterized by graphing confidence intervals (CIs) across studies involving (a) adults…
Performing Contrast Analysis in Factorial Designs: From NHST to Confidence Intervals and Beyond
ERIC Educational Resources Information Center
Wiens, Stefan; Nilsson, Mats E.
2017-01-01
Because of the continuing debates about statistics, many researchers may feel confused about how to analyze and interpret data. Current guidelines in psychology advocate the use of effect sizes and confidence intervals (CIs). However, researchers may be unsure about how to extract effect sizes from factorial designs. Contrast analysis is helpful…
SIMREL: Software for Coefficient Alpha and Its Confidence Intervals with Monte Carlo Studies
ERIC Educational Resources Information Center
Yurdugul, Halil
2009-01-01
This article describes SIMREL, a software program designed for the simulation of alpha coefficients and the estimation of its confidence intervals. SIMREL runs on two alternatives. In the first one, if SIMREL is run for a single data file, it performs descriptive statistics, principal components analysis, and variance analysis of the item scores…
Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan
2016-04-01
Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. However, confidence intervals constructed using the new approach combined with Smith
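The one-way analysis of variance estimator underlying Smith's large-sample approach can be sketched for equal-sized clusters: ICC = (MSB − MSW) / (MSB + (m − 1)·MSW). This is a generic illustration with made-up binary cluster data, not the trial data or the confidence interval construction itself:

```python
# One-way ANOVA estimator of the intraclass correlation for k clusters of
# common size m. The binary outcomes below are illustrative only.
import statistics

def anova_icc(clusters):
    """ICC = (MSB - MSW) / (MSB + (m - 1) * MSW) for equal cluster size m."""
    m = len(clusters[0])                   # common cluster size
    k = len(clusters)
    means = [statistics.mean(c) for c in clusters]
    grand = statistics.mean(means)
    msb = m * sum((mu - grand) ** 2 for mu in means) / (k - 1)
    msw = sum(sum((y - mu) ** 2 for y in c)
              for c, mu in zip(clusters, means)) / (k * (m - 1))
    return (msb - msw) / (msb + (m - 1) * msw)

clusters = [[1, 1, 0, 1], [0, 0, 1, 0], [1, 0, 1, 1], [0, 1, 0, 0]]
print(f"ICC = {anova_icc(clusters):.3f}")
```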
Confidence intervals for the first crossing point of two hazard functions.
Cheng, Ming-Yen; Qiu, Peihua; Tan, Xianming; Tu, Dongsheng
2009-12-01
The phenomenon of crossing hazard rates is common in clinical trials with time-to-event endpoints. Many methods have been proposed for testing equality of hazard functions against a crossing-hazards alternative. However, relatively few approaches are available in the literature for point or interval estimation of the crossing time point. This paper considers the problem of constructing confidence intervals for the first crossing time point of two hazard functions. After reviewing a recent procedure based on Cox proportional hazards modeling with a Box-Cox transformation of the time to event, a nonparametric procedure using a kernel smoothing estimate of the hazard ratio is proposed. Both procedures are evaluated by Monte Carlo simulations and applied to two clinical trial datasets.
NASA Astrophysics Data System (ADS)
Lilly, P.; Yanai, R. D.; Buckley, H. L.; Case, B. S.; Woollons, R. C.; Holdaway, R. J.; Johnson, J.
2016-12-01
Calculations of forest biomass and elemental content require many measurements and models, each contributing uncertainty to the final estimates. While sampling error is commonly reported, based on replicate plots, error due to uncertainty in the regression used to estimate biomass from tree diameter is usually not quantified. Some published estimates of uncertainty due to the regression models have used the uncertainty in the prediction of individuals, ignoring uncertainty in the mean, while others have propagated uncertainty in the mean while ignoring individual variation. Using the simple case of the calcium concentration of sugar maple leaves, we compare the variation among individuals (the standard deviation) to the uncertainty in the mean (the standard error) and illustrate the declining importance in the prediction of individual concentrations as the number of individuals increases. For allometric models, the analogous statistics are the prediction interval (or the residual variation in the model fit) and the confidence interval (describing the uncertainty in the best fit model). The effect of propagating these two sources of error is illustrated using the mass of sugar maple foliage. The uncertainty in individual tree predictions was large for plots with few trees; for plots with 30 trees or more, the uncertainty in individuals was less important than the uncertainty in the mean. Authors of previously published analyses have reanalyzed their data to show the magnitude of these two sources of uncertainty in scales ranging from experimental plots to entire countries. The most correct analysis will take both sources of uncertainty into account, but for practical purposes, country-level reports of uncertainty in carbon stocks, as required by the IPCC, can ignore the uncertainty in individuals. Ignoring the uncertainty in the mean will lead to exaggerated estimates of confidence in estimates of forest biomass and carbon and nutrient contents.
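The central contrast above, between the standard deviation (variation among individuals) and the standard error (uncertainty in the mean), can be illustrated with simulated values. This is a generic sketch, not the authors' sugar maple data; the point is that SD stays roughly constant while SE shrinks like 1/√n:

```python
# Standard deviation of individuals vs. standard error of the mean as the
# sample size grows. Simulated "concentration" values; illustrative only.
import math
import random
import statistics

random.seed(42)

def sd_vs_se(n, mu=5.0, sigma=1.0):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    sd = statistics.stdev(sample)   # spread of individuals: roughly constant
    se = sd / math.sqrt(n)          # uncertainty in the mean: shrinks ~ 1/sqrt(n)
    return sd, se

for n in (5, 30, 300):
    sd, se = sd_vs_se(n)
    print(f"n={n:4d}  SD={sd:.3f}  SE={se:.3f}")
```

This mirrors the abstract's conclusion: once plots contain roughly 30 trees or more, the uncertainty in individual predictions matters less than the uncertainty in the mean.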
Statistical variability and confidence intervals for planar dose QA pass rates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, Daniel W.; Nelms, Benjamin E.; Attwood, Kristopher
Purpose: The most common metric for comparing measured to calculated dose, such as for pretreatment quality assurance of intensity-modulated photon fields, is a pass rate (%) generated using percent difference (%Diff), distance-to-agreement (DTA), or some combination of the two (e.g., gamma evaluation). For many dosimeters, the grid of analyzed points corresponds to an array with a low areal density of point detectors. In these cases, the pass rates for any given comparison criteria are not absolute but exhibit statistical variability that is a function, in part, of the detector sampling geometry. In this work, the authors analyze the statistics of various methods commonly used to calculate pass rates and propose methods for establishing confidence intervals for pass rates obtained with low-density arrays. Methods: Dose planes were acquired for 25 prostate and 79 head and neck intensity-modulated fields via diode array and electronic portal imaging device (EPID), and matching calculated dose planes were created via a commercial treatment planning system. Pass rates for each dose plane pair (both centered to the beam central axis) were calculated with several common comparison methods: %Diff/DTA composite analysis and gamma evaluation, using absolute dose comparison with both local and global normalization. Specialized software was designed to selectively sample the measured EPID response (very high data density) down to discrete points to simulate low-density measurements. The software was used to realign the simulated detector grid at many simulated positions with respect to the beam central axis, thereby altering the low-density sampled grid. Simulations were repeated with 100 positional iterations using a 1 detector/cm² uniform grid, a 2 detector/cm² uniform grid, and similar random detector grids. For each simulation, %/DTA composite pass rates were calculated with various %Diff/DTA criteria and for both local and global %Diff normalization
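The simplest of the comparison metrics discussed, a locally normalized percent-difference pass rate with no DTA or gamma component, can be sketched as follows. The dose values and the 3% criterion are illustrative, not data from the study:

```python
# Percent-difference pass rate between measured and calculated dose points,
# with local normalization (divide by the calculated value at each point).
# Illustrative dose values; a real QA comparison would add a DTA/gamma test.

def pass_rate(measured, calculated, pct=3.0):
    """Percentage of points where |m - c| / c * 100 <= pct."""
    passed = sum(1 for m, c in zip(measured, calculated)
                 if abs(m - c) / c * 100 <= pct)
    return 100.0 * passed / len(measured)

measured   = [2.00, 1.95, 1.80, 2.10, 1.50, 0.98, 1.02, 2.05]
calculated = [2.02, 2.00, 1.85, 2.04, 1.52, 1.00, 1.00, 1.98]
print(f"pass rate = {pass_rate(measured, calculated):.1f}%")
```

The paper's point is that with a sparse detector array, such a pass rate varies with where the grid happens to sit, which motivates the confidence intervals it develops.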
Optimal and Most Exact Confidence Intervals for Person Parameters in Item Response Theory Models
ERIC Educational Resources Information Center
Doebler, Anna; Doebler, Philipp; Holling, Heinz
2013-01-01
The common way to calculate confidence intervals for item response theory models is to assume that the standardized maximum likelihood estimator for the person parameter [theta] is normally distributed. However, this approximation is often inadequate for short and medium test lengths. As a result, the coverage probabilities fall below the given…
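The standard (Wald) interval the article critiques is θ̂ ± 1.96·SE with SE = 1/√I(θ̂), where I is the test information. A sketch for the Rasch model with hypothetical item difficulties:

```python
# Normal-approximation confidence interval for a person parameter theta in
# the Rasch model: SE = 1 / sqrt(test information). Item difficulties are
# hypothetical; for short tests this approximation can undercover.
import math

def rasch_information(theta, difficulties):
    """Test information I(theta) = sum of p_i * (1 - p_i) over Rasch items."""
    info = 0.0
    for b in difficulties:
        p = 1 / (1 + math.exp(-(theta - b)))
        info += p * (1 - p)
    return info

difficulties = [-1.5, -0.5, 0.0, 0.5, 1.0, 1.5]
theta_hat = 0.3
se = 1 / math.sqrt(rasch_information(theta_hat, difficulties))
print(f"theta = {theta_hat}, 95% CI "
      f"({theta_hat - 1.96 * se:.2f}, {theta_hat + 1.96 * se:.2f})")
```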
Rocco, Paolo; Cilurzo, Francesco; Minghetti, Paola; Vistoli, Giulio; Pedretti, Alessandro
2017-10-01
The data presented in this article are related to the article titled "Molecular Dynamics as a tool for in silico screening of skin permeability" (Rocco et al., 2017) [1]. Knowledge of the confidence interval and maximum theoretical value of the correlation coefficient r can prove useful to estimate the reliability of developed predictive models, in particular when there is great variability in compiled experimental datasets. In this Data in Brief article, data from purposely designed numerical simulations are presented to show how much the maximum r value is worsened by increasing the data uncertainty. The corresponding confidence interval of r is determined by using the Fisher r → Z transform.
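The Fisher r → Z transform mentioned above gives an approximate interval for a correlation: Z = atanh(r) is roughly normal with standard error 1/√(n − 3), and the interval endpoints are back-transformed with tanh. A minimal sketch with illustrative numbers:

```python
# 95% confidence interval for a Pearson correlation via the Fisher r -> Z
# transform; 1.96 is the two-sided 95% normal quantile. Illustrative r and n.
import math

def fisher_ci(r, n, z_crit=1.96):
    """CI for a correlation r estimated from n pairs (requires n > 3)."""
    z = math.atanh(r)                   # Fisher transform
    se = 1.0 / math.sqrt(n - 3)
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi) # back-transform to the r scale

lo, hi = fisher_ci(r=0.60, n=50)
print(f"r = 0.60, n = 50 -> 95% CI ({lo:.3f}, {hi:.3f})")
```

Note the interval is asymmetric about r on the correlation scale, which is exactly why the transform is used.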
NASA Astrophysics Data System (ADS)
Zhang, Li
With the deregulation of the electric power market in New England, an independent system operator (ISO) has been separated from the New England Power Pool (NEPOOL). The ISO provides a regional spot market, with bids on various electricity-related products and services submitted by utilities and independent power producers. A utility can bid on the spot market and buy or sell electricity via bilateral transactions. Good estimation of market clearing prices (MCP) will help utilities and independent power producers determine bidding and transaction strategies with low risks, and this is crucial for utilities to compete in the deregulated environment. MCP prediction, however, is difficult since bidding strategies used by participants are complicated and MCP is a non-stationary process. The main objective of this research is to provide efficient short-term load and MCP forecasting and corresponding confidence interval estimation methodologies. In this research, the complexity of load and MCP with other factors is investigated, and neural networks are used to model the complex relationship between input and output. With improved learning algorithm and on-line update features for load forecasting, a neural network based load forecaster was developed, and has been in daily industry use since summer 1998 with good performance. MCP is volatile because of the complexity of market behaviors. In practice, neural network based MCP predictors usually have a cascaded structure, as several key input factors need to be estimated first. In this research, the uncertainties involved in a cascaded neural network structure for MCP prediction are analyzed, and prediction distribution under the Bayesian framework is developed. A fast algorithm to evaluate the confidence intervals by using the memoryless Quasi-Newton method is also developed. The traditional back-propagation algorithm for neural network learning needs to be improved since MCP is a non-stationary process. The extended Kalman
Intervals for posttest probabilities: a comparison of 5 methods.
Mossman, D; Berger, J O
2001-01-01
Several medical articles discuss methods of constructing confidence intervals for single proportions and the likelihood ratio, but scant attention has been given to the systematic study of intervals for the posterior odds, or the positive predictive value, of a test. The authors describe 5 methods of constructing confidence intervals for posttest probabilities when estimates of sensitivity, specificity, and the pretest probability of a disorder are derived from empirical data. They then evaluate each method to determine how well the intervals' coverage properties correspond to their nominal value. When the estimates of pretest probabilities, sensitivity, and specificity are derived from more than 80 subjects and are not close to 0 or 1, all methods generate intervals with appropriate coverage properties. When these conditions are not met, however, the best-performing method is an objective Bayesian approach implemented by a simple simulation using a spreadsheet. Physicians and investigators can generate accurate confidence intervals for posttest probabilities in small-sample situations using the objective Bayesian approach.
Rahn, Anne C; Backhus, Imke; Fuest, Franz; Riemann-Lorenz, Karin; Köpke, Sascha; van de Roemer, Adrianus; Mühlhauser, Ingrid; Heesen, Christoph
2016-09-20
Presentation of confidence intervals alongside information about treatment effects can support informed treatment choices in people with multiple sclerosis. We aimed to develop and pilot-test different written patient information materials explaining confidence intervals in people with relapsing-remitting multiple sclerosis. Further, a questionnaire on comprehension of confidence intervals was developed and piloted. We developed different patient information versions aiming to explain confidence intervals. We used an illustrative example to test three different approaches: (1) short version, (2) "average weight" version and (3) "worm prophylaxis" version. Interviews were conducted using think-aloud and teach-back approaches to test feasibility and analysed using qualitative content analysis. To assess comprehension of confidence intervals, a six-item multiple choice questionnaire was developed and tested in a pilot randomised controlled trial using the online survey software UNIPARK. Here, the average weight version (intervention group) was tested against a standard patient information version on confidence intervals (control group). People with multiple sclerosis were invited to take part using existing mailing-lists of people with multiple sclerosis in Germany and were randomised using the UNIPARK algorithm. Participants were blinded towards group allocation. Primary endpoint was comprehension of confidence intervals, assessed with the six-item multiple choice questionnaire with six points representing perfect knowledge. Feasibility of the patient information versions was tested with 16 people with multiple sclerosis. For the pilot randomised controlled trial, 64 people with multiple sclerosis were randomised (intervention group: n = 36; control group: n = 28). More questions were answered correctly in the intervention group compared to the control group (mean 4.8 vs 3.8, mean difference 1.1 (95 % CI 0.42-1.69), p = 0.002). The questionnaire
Lambert, Ronald J W; Mytilinaios, Ioannis; Maitland, Luke; Brown, Angus M
2012-08-01
This study describes a method to obtain parameter confidence intervals from the fitting of non-linear functions to experimental data, using the SOLVER and Analysis ToolPaK Add-In of the Microsoft Excel spreadsheet. Previously we have shown that Excel can fit complex multiple functions to biological data, obtaining values equivalent to those returned by more specialized statistical or mathematical software. However, a disadvantage of using the Excel method was the inability to return confidence intervals for the computed parameters or the correlations between them. Using a simple Monte-Carlo procedure within the Excel spreadsheet (without recourse to programming), SOLVER can provide parameter estimates (up to 200 at a time) for multiple 'virtual' data sets, from which the required confidence intervals and correlation coefficients can be obtained. The general utility of the method is exemplified by applying it to the analysis of the growth of Listeria monocytogenes, the growth inhibition of Pseudomonas aeruginosa by chlorhexidine and the further analysis of the electrophysiological data from the compound action potential of the rodent optic nerve. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
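The article's Monte Carlo idea, refit many "virtual" data sets and read parameter confidence intervals off the percentiles, can be sketched without Excel. To keep the sketch self-contained it uses a closed-form straight-line fit with residual resampling; the same loop applies to any model one can refit (in the paper, via SOLVER):

```python
# Monte Carlo parameter confidence intervals by residual resampling:
# build virtual data sets from fitted values plus resampled residuals,
# refit each, and take percentiles of the refitted parameters.
# Straight-line fit and data are illustrative stand-ins for SOLVER models.
import random
import statistics

def fit_line(xs, ys):
    """Closed-form least-squares slope and intercept."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [2.1, 3.9, 6.2, 7.8, 10.1, 12.2, 13.8, 16.3]
slope, intercept = fit_line(xs, ys)
residuals = [y - (slope * x + intercept) for x, y in zip(xs, ys)]

rng = random.Random(7)
slopes = []
for _ in range(1000):   # 1000 virtual data sets
    virtual = [slope * x + intercept + rng.choice(residuals) for x in xs]
    slopes.append(fit_line(xs, virtual)[0])
slopes.sort()
print(f"slope={slope:.3f}  95% CI=({slopes[25]:.3f}, {slopes[974]:.3f})")
```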
Zhang, Zhiyong; Yuan, Ke-Hai
2015-01-01
Cronbach’s coefficient alpha is a widely used reliability measure in social, behavioral, and education sciences. It is reported in nearly every study that involves measuring a construct through multiple items. With non-tau-equivalent items, McDonald’s omega has been used as a popular alternative to alpha in the literature. Traditional estimation methods for alpha and omega often implicitly assume that data are complete and normally distributed. This study proposes robust procedures to estimate both alpha and omega as well as corresponding standard errors and confidence intervals from samples that may contain potential outlying observations and missing values. The influence of outlying observations and missing data on the estimates of alpha and omega is investigated through two simulation studies. Results show that the newly developed robust method yields substantially improved alpha and omega estimates as well as better coverage rates of confidence intervals than the conventional nonrobust method. An R package coefficientalpha is developed and demonstrated to obtain robust estimates of alpha and omega. PMID:29795870
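The estimator at the center of the abstract, Cronbach's alpha, is α = k/(k − 1) · (1 − Σ item variances / variance of total scores). A plain (nonrobust, complete-data) sketch with illustrative item scores, not the robust procedure the paper develops:

```python
# Cronbach's coefficient alpha from an items-by-respondents score matrix.
# Item scores are illustrative; this is the conventional estimator, without
# the robustness to outliers and missing data discussed above.
import statistics

def cronbach_alpha(item_scores):
    """item_scores: list of items, each a list of the same respondents' scores."""
    k = len(item_scores)
    totals = [sum(person) for person in zip(*item_scores)]
    item_var = sum(statistics.variance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var / statistics.variance(totals))

items = [
    [3, 4, 2, 5, 4, 3, 4, 5],
    [2, 4, 3, 5, 3, 3, 4, 4],
    [3, 5, 2, 4, 4, 2, 5, 5],
]
print(f"alpha = {cronbach_alpha(items):.3f}")
```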
NASA Technical Reports Server (NTRS)
Grimes-Ledesma, Lorie; Murthy, Pappu L. N.; Phoenix, S. Leigh; Glaser, Ronald
2007-01-01
In conjunction with a recent NASA Engineering and Safety Center (NESC) investigation of flight worthiness of Kevlar Overwrapped Composite Pressure Vessels (COPVs) on board the Orbiter, two stress rupture life prediction models were proposed independently by Phoenix and by Glaser. In this paper, the use of these models to determine the system reliability of 24 COPVs currently in service on board the Orbiter is discussed. The models are briefly described, compared to each other, and model parameters and parameter uncertainties are also reviewed to understand confidence in reliability estimation as well as the sensitivities of these parameters in influencing overall predicted reliability levels. Differences and similarities in the various models will be compared via stress rupture reliability curves (stress ratio vs. lifetime plots). Also outlined will be the differences in the underlying model premises, and predictive outcomes. Sources of error and sensitivities in the models will be examined and discussed based on sensitivity analysis and confidence interval determination. Confidence interval results and their implications will be discussed for the models by Phoenix and Glaser.
2014-01-01
Background Meta-regression is becoming increasingly used to model study level covariate effects. However this type of statistical analysis presents many difficulties and challenges. Here two methods for calculating confidence intervals for the magnitude of the residual between-study variance in random effects meta-regression models are developed. A further suggestion for calculating credible intervals using informative prior distributions for the residual between-study variance is presented. Methods Two recently proposed and, under the assumptions of the random effects model, exact methods for constructing confidence intervals for the between-study variance in random effects meta-analyses are extended to the meta-regression setting. The use of Generalised Cochran heterogeneity statistics is extended to the meta-regression setting and a Newton-Raphson procedure is developed to implement the Q profile method for meta-analysis and meta-regression. WinBUGS is used to implement informative priors for the residual between-study variance in the context of Bayesian meta-regressions. Results Results are obtained for two contrasting examples, where the first example involves a binary covariate and the second involves a continuous covariate. Intervals for the residual between-study variance are wide for both examples. Conclusions Statistical methods, and R computer software, are available to compute exact confidence intervals for the residual between-study variance under the random effects model for meta-regression. These frequentist methods are almost as easily implemented as their established counterparts for meta-analysis. Bayesian meta-regressions are also easily performed by analysts who are comfortable using WinBUGS. Estimates of the residual between-study variance in random effects meta-regressions should be routinely reported and accompanied by some measure of their uncertainty. Confidence and/or credible intervals are well-suited to this purpose. PMID:25196829
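The between-study variance whose interval estimation the paper addresses is usually point-estimated first. As background, here is the DerSimonian-Laird moment estimator built from Cochran's Q for a plain meta-analysis (no covariate); the effect sizes are illustrative, and the paper's exact Q-profile intervals go beyond this:

```python
# Cochran's Q and the DerSimonian-Laird moment estimate of the between-study
# variance tau^2 for a random effects meta-analysis. Illustrative inputs
# (e.g., log odds ratios with within-study variances).

def dersimonian_laird(effects, variances):
    """Return (Q, tau2) given study effect estimates and within-study variances."""
    w = [1 / v for v in variances]
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # truncate at zero
    return q, tau2

effects = [0.70, 0.05, 0.55, -0.10, 0.45]
variances = [0.04, 0.06, 0.05, 0.03, 0.08]
q, tau2 = dersimonian_laird(effects, variances)
print(f"Q = {q:.2f}, tau^2 = {tau2:.4f}")
```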
Reliability and Confidence Interval Analysis of a CMC Turbine Stator Vane
NASA Technical Reports Server (NTRS)
Murthy, Pappu L. N.; Gyekenyesi, John P.; Mital, Subodh K.
2008-01-01
an economical manner. The methods to accurately determine the service life of an engine component with associated variability have become increasingly difficult. This results, in part, from the complex missions which are now routinely considered during the design process. These missions include large variations of multi-axial stresses and temperatures experienced by critical engine parts. There is a need for a convenient design tool that can accommodate various loading conditions induced by engine operating environments, and material data with their associated uncertainties to estimate the minimum predicted life of a structural component. A probabilistic composite micromechanics technique in combination with woven composite micromechanics, structural analysis and Fast Probability Integration (FPI) techniques has been used to evaluate the maximum stress and its probabilistic distribution in a CMC turbine stator vane. Furthermore, input variables causing scatter are identified and ranked based upon their sensitivity magnitude. Since the measured data for the ceramic matrix composite properties is very limited, obtaining a probabilistic distribution with their corresponding parameters is difficult. In case of limited data, confidence bounds are essential to quantify the uncertainty associated with the distribution. Usually 90 and 95% confidence intervals are computed for material properties. Failure properties are then computed with the confidence bounds. Best estimates and the confidence bounds on the best estimate of the cumulative probability function for R-S (strength - stress) are plotted. The methodologies and the results from these analyses will be discussed in the presentation.
Performing Contrast Analysis in Factorial Designs: From NHST to Confidence Intervals and Beyond
Wiens, Stefan; Nilsson, Mats E.
2016-01-01
Because of the continuing debates about statistics, many researchers may feel confused about how to analyze and interpret data. Current guidelines in psychology advocate the use of effect sizes and confidence intervals (CIs). However, researchers may be unsure about how to extract effect sizes from factorial designs. Contrast analysis is helpful because it can be used to test specific questions of central interest in studies with factorial designs. It weighs several means and combines them into one or two sets that can be tested with t tests. The effect size produced by a contrast analysis is simply the difference between means. The CI of the effect size informs directly about direction, hypothesis exclusion, and the relevance of the effects of interest. However, any interpretation in terms of precision or likelihood requires the use of likelihood intervals or credible intervals (Bayesian). These various intervals and even a Bayesian t test can be obtained easily with free software. This tutorial reviews these methods to guide researchers in answering the following questions: When I analyze mean differences in factorial designs, where can I find the effects of central interest, and what can I learn about their effect sizes? PMID:29805179
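The contrast machinery described above, weighting cell means with coefficients that sum to zero and attaching a CI to the resulting mean difference, can be sketched with a pooled-variance estimate. Data, weights, and the critical t value are illustrative, and equal cell variances are assumed:

```python
# A contrast across design cells: estimate = sum of w_i * mean_i with the
# weights summing to zero, plus a pooled-variance t confidence interval.
# Illustrative data: two treatment cells vs. one control, weights (0.5, 0.5, -1).
import math
import statistics

def contrast(groups, weights, t_crit):
    """Return (estimate, lower, upper) for the contrast sum(w_i * mean_i)."""
    means = [statistics.mean(g) for g in groups]
    est = sum(w * m for w, m in zip(weights, means))
    df = sum(len(g) - 1 for g in groups)
    pooled = sum((len(g) - 1) * statistics.variance(g) for g in groups) / df
    se = math.sqrt(pooled * sum(w ** 2 / len(g) for w, g in zip(weights, groups)))
    return est, est - t_crit * se, est + t_crit * se

groups = [[5.1, 6.0, 5.5, 6.2], [5.8, 6.4, 6.1, 5.9], [4.2, 4.8, 4.5, 5.0]]
est, lo, hi = contrast(groups, [0.5, 0.5, -1.0], t_crit=2.262)  # t(0.975, df=9)
print(f"contrast = {est:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

As the tutorial emphasizes, the effect size here is just a difference between (weighted) means, and its CI conveys direction and plausible magnitude directly.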
Perez, Anne E; Haskell, Neal H; Wells, Jeffrey D
2014-08-01
Carrion insect succession patterns have long been used to estimate the postmortem interval (PMI) during a death investigation. However, no published carrion succession study included sufficient replication to calculate a confidence interval about a PMI estimate based on occurrence data. We exposed 53 pig carcasses (16±2.5 kg), near the likely minimum needed for such statistical analysis, at a site in north-central Indiana, USA, over three consecutive summer seasons. Insects and Collembola were sampled daily from each carcass for a total of 14 days, by which time each was skeletonized. The criteria for judging a life stage of a given species to be potentially useful for succession-based PMI estimation were (1) nonreoccurrence (observed during a single period of presence on a corpse), and (2) being found in a sufficiently large proportion of carcasses to support a PMI confidence interval. For this data set that proportion threshold is 45/53. Of the 266 species collected and identified, none was nonreoccurring, in that each showed at least a gap of one day on a single carcass. If the definition of nonreoccurrence is relaxed to include such a single one-day gap, the larval forms of Necrophila americana, Fannia scalaris, Cochliomyia macellaria, Phormia regina, and Lucilia illustris satisfied these two criteria. Adults of Creophilus maxillosus, Necrobia ruficollis, and Necrodes surinamensis were common and showed only a few, single-day gaps in occurrence. C. maxillosus, P. regina, and L. illustris displayed exceptional forensic utility in that they were observed on every carcass. Although these observations were made at a single site during one season of the year, the species we found to be useful have large geographic ranges. We suggest that future carrion insect succession research focus only on a limited set of species with high potential forensic utility so as to reduce sample effort per carcass and thereby enable increased experimental replication. Copyright © 2014 Elsevier Ireland
Palmer, Matthew A; Brewer, Neil; Weber, Nathan; Nagesh, Ambika
2013-03-01
Prior research points to a meaningful confidence-accuracy (CA) relationship for positive identification decisions. However, there are theoretical grounds for expecting that different aspects of the CA relationship (calibration, resolution, and over/underconfidence) might be undermined in some circumstances. This research investigated whether the CA relationship for eyewitness identification decisions is affected by three, forensically relevant variables: exposure duration, retention interval, and divided attention at encoding. In Study 1 (N = 986), a field experiment, we examined the effects of exposure duration (5 s vs. 90 s) and retention interval (immediate testing vs. a 1-week delay) on the CA relationship. In Study 2 (N = 502), we examined the effects of attention during encoding on the CA relationship by reanalyzing data from a laboratory experiment in which participants viewed a stimulus video under full or divided attention conditions and then attempted to identify two targets from separate lineups. Across both studies, all three manipulations affected identification accuracy. The central analyses concerned the CA relation for positive identification decisions. For the manipulations of exposure duration and retention interval, overconfidence was greater in the more difficult conditions (shorter exposure; delayed testing) than the easier conditions. Only the exposure duration manipulation influenced resolution (which was better for 5 s than 90 s), and only the retention interval manipulation affected calibration (which was better for immediate testing than delayed testing). In all experimental conditions, accuracy and diagnosticity increased with confidence, particularly at the upper end of the confidence scale. Implications for theory and forensic settings are discussed.
Badenes-Ribera, Laura; Frias-Navarro, Dolores; Pascual-Soler, Marcos; Monterde-I-Bort, Héctor
2016-11-01
The statistical reform movement and the American Psychological Association (APA) advocate the use of effect-size estimators and their confidence intervals, as well as the interpretation of the clinical significance of the findings. A survey was conducted in which academic psychologists were asked about their behavior in designing and carrying out their studies. The sample was composed of 472 participants (45.8% men). The mean number of years as a university professor was 13.56 (SD = 9.27). The use of effect-size estimators is becoming generalized, as is the consideration of meta-analytic studies. However, several inadequate practices still persist. A traditional model of methodological behavior based on statistical significance tests is maintained, reflected in the predominance of Cohen's d and the unadjusted R²/η², which are not robust to outliers, departures from normality, or violations of statistical assumptions, and in the under-reporting of confidence intervals for effect-size statistics. The paper concludes with recommendations for improving statistical practice.
Confidence interval or p-value?: part 4 of a series on evaluation of scientific publications.
du Prel, Jean-Baptist; Hommel, Gerhard; Röhrig, Bernd; Blettner, Maria
2009-05-01
An understanding of p-values and confidence intervals is necessary for the evaluation of scientific articles. This article will inform the reader of the meaning and interpretation of these two statistical concepts. The uses of these two statistical concepts and the differences between them are discussed on the basis of a selective literature search concerning the methods employed in scientific articles. P-values in scientific studies are used to determine whether a null hypothesis formulated before the performance of the study is to be accepted or rejected. In exploratory studies, p-values enable the recognition of any statistically noteworthy findings. Confidence intervals provide information about a range in which the true value lies with a certain degree of probability, as well as about the direction and strength of the demonstrated effect. This enables conclusions to be drawn about the statistical plausibility and clinical relevance of the study findings. It is often useful for both statistical measures to be reported in scientific articles, because they provide complementary types of information.
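The complementarity described here can be made concrete: for a Wald-type analysis, the 95% confidence interval excludes the null value exactly when the two-sided p-value falls below 0.05. A small sketch with invented numbers:

```python
import math

# Illustrative one-sample z-test: sample mean, null value, standard error.
mean, mu0, se = 2.6, 2.0, 0.25

z = (mean - mu0) / se
# Two-sided p-value from the standard normal distribution.
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# 95% confidence interval for the mean.
ci = (mean - 1.96 * se, mean + 1.96 * se)
excludes_null = not (ci[0] <= mu0 <= ci[1])
print(round(p, 4), ci, excludes_null)
```

The p-value says only how incompatible the data are with the null; the CI additionally shows the direction and plausible size of the effect, which is the extra information the article highlights.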
NASA Astrophysics Data System (ADS)
Blaauw, Maarten; Heuvelink, Gerard B. M.; Mauquoy, Dmitri; van der Plicht, Johannes; van Geel, Bas
2003-06-01
14C wiggle-match dating (WMD) of peat deposits uses the non-linear relationship between 14C age and calendar age to match the shape of a sequence of closely spaced peat 14C dates with the 14C calibration curve. A numerical approach to WMD enables the quantitative assessment of various possible wiggle-match solutions and of calendar year confidence intervals for sequences of 14C dates. We assess the assumptions, advantages, and limitations of the method. Several case-studies show that WMD results in more precise chronologies than when individual 14C dates are calibrated. WMD is most successful during periods with major excursions in the 14C calibration curve (e.g., in one case WMD could narrow down confidence intervals from 230 to 36 yr).
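A toy version of the numerical wiggle-matching idea: with a constant deposition rate, the sample calendar ages are a common start age plus fixed spacings, and the start age is found by minimizing the chi-square misfit to the calibration curve. The calibration curve here is a synthetic stand-in; a real analysis would use the IntCal curve and a proper deposition model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the 14C calibration curve (a wiggle on a trend).
def calcurve(cal_age):
    return cal_age + 40.0 * np.sin(cal_age / 30.0)

# A peat core: samples at fixed calendar-year spacings (constant deposition).
spacing = np.arange(0, 200, 20)
true_start = 1000.0
obs_sigma = 25.0
obs = calcurve(true_start + spacing) + rng.normal(0, obs_sigma, spacing.size)

# Grid search over the start age: chi-square misfit to the curve.
grid = np.arange(800.0, 1200.0, 1.0)
chi2 = np.array([np.sum((obs - calcurve(s + spacing)) ** 2) / obs_sigma ** 2
                 for s in grid])
best = grid[np.argmin(chi2)]

# Rough 95% calendar-age interval: start ages whose chi-square lies
# within 3.84 (chi-square, 1 df, for the single fitted parameter)
# of the minimum.
inside = grid[chi2 <= chi2.min() + 3.84]
print(best, inside.min(), inside.max())
```

Sequences that span a large excursion of the curve constrain the offset tightly, which is the mechanism behind the narrowed confidence intervals reported above.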
'Exact' Two-Sided Confidence Intervals on Nonnegative Linear Combinations of Variances.
1980-07-01
Graybill, Franklin A. (Colorado State University); Wang, Chih-Ming (SPSS Inc.). Office of Naval Research report. The paper derives two-sided confidence intervals on nonnegative linear combinations of variances, introducing what is called the Modified Large Sample (MLS) confidence interval.
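The 'exact' intervals of this report belong to the modified large-sample (MLS) family. A hedged sketch of one common statement of the MLS bounds as given in the variance-components literature; the expressions here are my paraphrase, not necessarily the report's exact formulas:

```python
import numpy as np
from scipy.stats import chi2

def mls_interval(c, s2, df, alpha=0.05):
    """Two-sided modified large-sample (MLS) interval for
    theta = sum(c_i * sigma_i^2), with c_i >= 0, built from
    independent variance estimates s2_i on df_i degrees of freedom."""
    c, s2, df = map(np.asarray, (c, s2, df))
    theta = np.sum(c * s2)
    H = df / chi2.ppf(alpha / 2, df) - 1        # upper-limit factors
    G = 1 - df / chi2.ppf(1 - alpha / 2, df)    # lower-limit factors
    upper = theta + np.sqrt(np.sum((c * s2 * H) ** 2))
    lower = theta - np.sqrt(np.sum((c * s2 * G) ** 2))
    return lower, theta, upper

# Invented example: theta = 1.0*sigma1^2 + 0.5*sigma2^2.
lo, est, hi = mls_interval(c=[1.0, 0.5], s2=[4.0, 9.0], df=[10, 15])
print(round(lo, 3), est, round(hi, 3))
```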
ERIC Educational Resources Information Center
Odgaard, Eric C.; Fowler, Robert L.
2010-01-01
Objective: In 2005, the "Journal of Consulting and Clinical Psychology" ("JCCP") became the first American Psychological Association (APA) journal to require statistical measures of clinical significance, plus effect sizes (ESs) and associated confidence intervals (CIs), for primary outcomes (La Greca, 2005). As this represents the single largest…
Zhang, Zhiyong; Yuan, Ke-Hai
2016-06-01
Cronbach's coefficient alpha is a widely used reliability measure in social, behavioral, and education sciences. It is reported in nearly every study that involves measuring a construct through multiple items. With non-tau-equivalent items, McDonald's omega has been used as a popular alternative to alpha in the literature. Traditional estimation methods for alpha and omega often implicitly assume that data are complete and normally distributed. This study proposes robust procedures to estimate both alpha and omega as well as corresponding standard errors and confidence intervals from samples that may contain potential outlying observations and missing values. The influence of outlying observations and missing data on the estimates of alpha and omega is investigated through two simulation studies. Results show that the newly developed robust method yields substantially improved alpha and omega estimates as well as better coverage rates of confidence intervals than the conventional nonrobust method. An R package coefficientalpha is developed and demonstrated to obtain robust estimates of alpha and omega.
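A minimal sketch of coefficient alpha with a nonparametric bootstrap CI over persons. This is a plain, non-robust illustration of the quantities being estimated, not the authors' robust procedure or their coefficientalpha package; the data-generating model is invented:

```python
import numpy as np

def cronbach_alpha(X):
    """Coefficient alpha for an n persons x k items score matrix."""
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
n, k = 200, 5
# Hypothetical items: a common factor plus unit-variance noise.
f = rng.normal(0, 1, n)
X = f[:, None] + rng.normal(0, 1, (n, k))

alpha = cronbach_alpha(X)

# Nonparametric bootstrap over persons (rows).
boot = np.array([cronbach_alpha(X[rng.integers(0, n, n)])
                 for _ in range(1000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(alpha, 3), round(lo, 3), round(hi, 3))
```

Under this model the population alpha is 5/4 * (1 - 10/30) ≈ 0.83, so the estimate and interval should land near that value.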
Self-confidence and affect responses to short-term sprint interval training.
Selmi, Walid; Rebai, Haithem; Chtara, Mokhtar; Naceur, Abdelmajid; Sahli, Sonia
2018-05-01
The study aimed to investigate the effects of repeated sprint (RS) training on somatic anxiety (SA), cognitive anxiety (CA), self-confidence (SC), rating of perceived exertion (RPE) and repeated sprint ability (RSA) indicators in elite young soccer players. Thirty elite soccer players in the first football league (age: 17.8±0.9 years) volunteered to participate in this study. They were randomly assigned to one of two groups: a repeated sprint training group (RST-G; n=15) and a control group (CON-G; n=15). RST-G participated in 6 weeks of intensive training based on RS (6×(20+20 m) runs, with a 20 s passive recovery interval between sprints, 3 times/week). Before and after the 6-week intervention, all participants performed a RSA test and completed a Competitive Scale Anxiety Inventory (CSAI-2) and the RPE. After training, RST-G showed a very significant (p<0.001) improvement in RSA total time performance relative to controls. Despite the faster sprint pace, the RPE also decreased significantly (p<0.005) in RST-G, their self-confidence was significantly greater (p<0.01), and the cognitive (p<0.01) and somatic (p<0.001) components of their anxiety state decreased. When practiced regularly, short bouts of sprint exercises improve anaerobic performance, with a reduction in anxiety state and an increase in SC that may boost competitive performance. Copyright © 2018 Elsevier Inc. All rights reserved.
Paula, T O M; Marinho, C D; Amaral Júnior, A T; Peternelli, L A; Gonçalves, L S A
2013-06-27
The objective of this study was to determine the optimal number of repetitions to be used in competition trials of popcorn traits related to production and quality, including grain yield and expansion capacity. The experiments were conducted in 3 environments representative of the north and northwest regions of the State of Rio de Janeiro with 10 Brazilian genotypes of popcorn, consisting of 4 commercial hybrids (IAC 112, IAC 125, Zélia, and Jade), 4 improved varieties (BRS Ângela, UFVM-2 Barão de Viçosa, Beija-flor, and Viçosa) and 2 experimental populations (UNB2U-C3 and UNB2U-C4). The experimental design utilized was a randomized complete block design with 7 repetitions. The bootstrap method was employed to obtain samples of all of the possible combinations within the 7 blocks. Subsequently, the confidence intervals of the parameters of interest were calculated for all simulated data sets. The optimal number of repetitions for each trait was taken as the number at which all of the estimates of the parameters in question fell within the confidence interval. The estimates of the number of repetitions varied according to the parameter estimated, variable evaluated, and environment cultivated, ranging from 2 to 7. Only the expansion capacity trait in the Colégio Agrícola environment (for residual variance and coefficient of variation) and the number of ears per plot in the Itaocara environment (for coefficient of variation) needed 7 repetitions to fall within the confidence interval. Thus, for the 3 studies conducted, we can conclude that 6 repetitions are optimal for obtaining high experimental precision.
Results from a NIST-EPA Interagency Agreement on Understanding Systematic Measurement Error in Thermal-Optical Analysis for PM Black Carbon Using Response Surfaces and Surface Confidence Intervals will be presented at the American Association for Aerosol Research (AAAR) 24th Annu...
Landes, Reid D.; Lensing, Shelly Y.; Kodell, Ralph L.; Hauer-Jensen, Martin
2014-01-01
The dose of a substance that causes death in P% of a population is called an LDP, where LD stands for lethal dose. In radiation research, a common LDP of interest is the radiation dose that kills 50% of the population by a specified time, i.e., lethal dose 50 or LD50. When comparing LD50 between two populations, relative potency is the parameter of interest. In radiation research, this is commonly known as the dose reduction factor (DRF). Unfortunately, statistical inference on dose reduction factor is seldom reported. We illustrate how to calculate confidence intervals for dose reduction factor, which may then be used for statistical inference. Further, most dose reduction factor experiments use hundreds, rather than tens of animals. Through better dosing strategies and the use of a recently available sample size formula, we also show how animal numbers may be reduced while maintaining high statistical power. The illustrations center on realistic examples comparing LD50 values between a radiation countermeasure group and a radiation-only control. We also provide easy-to-use spreadsheets for sample size calculations and confidence interval calculations, as well as SAS® and R code for the latter. PMID:24164553
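A sketch of estimating an LD50 and its delta-method confidence interval from dose-lethality data via a logistic fit. The dose design, parameters, and simulated counts are invented for illustration; a DRF would then be the ratio of two such LD50s, with a CI from, e.g., Fieller's method or the delta method on the log ratio:

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_ld50(dose, deaths, n, iters=50):
    """Logistic dose-response fit by Newton-Raphson (IRLS);
    returns LD50 = -b0/b1 and its delta-method standard error."""
    X = np.column_stack([np.ones_like(dose), dose])
    beta = np.zeros(2)
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        W = n * p * (1 - p)
        grad = X.T @ (deaths - n * p)          # score vector
        hess = X.T @ (W[:, None] * X)          # observed information
        beta += np.linalg.solve(hess, grad)
    cov = np.linalg.inv(hess)
    b0, b1 = beta
    ld50 = -b0 / b1
    # Delta method: gradient of -b0/b1 with respect to (b0, b1).
    g = np.array([-1 / b1, b0 / b1 ** 2])
    se = np.sqrt(g @ cov @ g)
    return ld50, se

# Simulated experiment: 5 dose groups of 100 animals, true LD50 = 50.
dose = np.array([20., 35., 50., 65., 80.])
n = np.full(5, 100)
p_true = 1 / (1 + np.exp(-0.08 * (dose - 50.0)))
deaths = rng.binomial(n, p_true)

ld50, se = fit_ld50(dose, deaths, n)
ci = (ld50 - 1.96 * se, ld50 + 1.96 * se)
print(round(ld50, 1), ci)
```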
Using confidence intervals to evaluate the focus alignment of spectrograph detector arrays.
Sawyer, Travis W; Hawkins, Kyle S; Damento, Michael
2017-06-20
High-resolution spectrographs extract detailed spectral information of a sample and are frequently used in astronomy, laser-induced breakdown spectroscopy, and Raman spectroscopy. These instruments employ dispersive elements such as prisms and diffraction gratings to spatially separate different wavelengths of light, which are then detected by a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) detector array. Precise alignment along the optical axis (focus position) of the detector array is critical to maximize the instrumental resolution; however, traditional approaches of scanning the detector through focus lack a quantitative measure of precision, limiting the repeatability and relying on one's experience. Here we propose a method to evaluate the focus alignment of spectrograph detector arrays by establishing confidence intervals to measure the alignment precision. We show that propagation of uncertainty can be used to estimate the variance in an alignment, thus providing a quantitative and repeatable means to evaluate the precision and confidence of an alignment. We test the approach by aligning the detector array of a prototype miniature echelle spectrograph. The results indicate that the procedure effectively quantifies alignment precision, enabling one to objectively determine when an alignment has reached an acceptable level. This quantitative approach also provides a foundation for further optimization, including automated alignment. Furthermore, the procedure introduced here can be extended to other alignment techniques that rely on numerically fitting data to a model, providing a general framework for evaluating the precision of alignment methods.
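The core idea of attaching a confidence interval to an alignment can be sketched as follows: fit a through-focus quadratic, locate its vertex (best focus), and propagate the fit covariance to the vertex position. The scan data are synthetic and this is my illustration of uncertainty propagation, not the authors' procedure:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated through-focus scan: spot size vs. detector position,
# approximately quadratic near best focus, with measurement noise.
z = np.linspace(-1.0, 1.0, 21)            # stage positions (mm)
true_focus = 0.15
spot = 5.0 + 30.0 * (z - true_focus) ** 2 + rng.normal(0, 0.5, z.size)

# Quadratic fit y = a z^2 + b z + c with parameter covariance.
coef, cov = np.polyfit(z, spot, 2, cov=True)
a, b, c = coef
focus = -b / (2 * a)                      # vertex of the parabola

# Propagation of uncertainty: gradient of -b/(2a) w.r.t. (a, b, c).
g = np.array([b / (2 * a ** 2), -1 / (2 * a), 0.0])
se = np.sqrt(g @ cov @ g)
ci = (focus - 1.96 * se, focus + 1.96 * se)
print(round(focus, 3), round(se, 4), ci)
```

The width of this interval gives the quantitative, repeatable measure of alignment precision the abstract argues for, in place of judging focus by eye.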
NASA Technical Reports Server (NTRS)
Murphy, Patrick Charles
1985-01-01
An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The algorithm was developed for airplane parameter estimation problems but is well suited for most nonlinear, multivariable, dynamic systems. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort. MNRES determines the sensitivities with less computational effort than using either a finite-difference method or integrating the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, thus eliminating algorithm reformulation with each new model and providing flexibility to use model equations in any format that is convenient. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. It is observed that the degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. The CR bounds were found to be close to the bounds determined by the search when the degree of nonlinearity was small. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels for the parameter confidence limits. The primary utility of the measure, however, was found to be in predicting the degree of agreement between Cramer-Rao bounds and search estimates.
CI2 for creating and comparing confidence-intervals for time-series bivariate plots.
Mullineaux, David R
2017-02-01
Currently no method exists for calculating and comparing the confidence-intervals (CI) for the time-series of a bivariate plot. The study's aim was to develop 'CI2' as a method to calculate the CI on time-series bivariate plots, and to identify if the CI between two bivariate time-series overlap. The test data were the knee and ankle angles from 10 healthy participants running on a motorised standard-treadmill and non-motorised curved-treadmill. For a recommended 10+ trials, CI2 involved calculating 95% confidence-ellipses at each time-point, then taking as the CI the points on the ellipses that were perpendicular to the direction vector between the means of two adjacent time-points. Consecutive pairs of CI created convex quadrilaterals, and any overlap of these quadrilaterals at the same time or ±1 frame as a time-lag calculated using cross-correlations, indicated where the two time-series differed. CI2 showed no group differences between left and right legs on both treadmills, but the same legs between treadmills for all participants showed differences of less knee extension on the curved-treadmill before heel-strike. To improve and standardise the use of CI2 it is recommended to remove outlier time-series, use 95% confidence-ellipses, and scale the ellipse by the fixed Chi-square value as opposed to the sample-size dependent F-value. For practical use, and to aid in standardisation or future development of CI2, Matlab code is provided. CI2 provides an effective method to quantify the CI of bivariate plots, and to explore the differences in CI between two bivariate time-series. Copyright © 2016 Elsevier B.V. All rights reserved.
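The chi-square-scaled 95% confidence ellipse recommended above can be sketched via the Mahalanobis distance: a point lies inside the ellipse exactly when its squared Mahalanobis distance from the mean is below the chi-square quantile. Synthetic data for a single time-point; this is not the authors' Matlab code:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(5)

# One time-point of a bivariate series: e.g. knee vs. ankle angle
# across trials (invented mean and covariance).
pts = rng.multivariate_normal([20.0, 10.0], [[4.0, 1.5], [1.5, 2.0]], 1000)

mean = pts.mean(axis=0)
cov = np.cov(pts.T)

# Scale by the fixed chi-square value (2 df), as recommended,
# rather than the sample-size dependent F value.
scale = chi2.ppf(0.95, df=2)

# Squared Mahalanobis distance of each point from the mean.
d2 = np.einsum('ij,jk,ik->i', pts - mean, np.linalg.inv(cov), pts - mean)
coverage = float(np.mean(d2 <= scale))
print(round(coverage, 3))
```

Repeating this at every time-point and connecting adjacent perpendicular CI points yields the quadrilaterals whose overlap CI2 tests.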
Raykov, Tenko; Zinbarg, Richard E
2011-05-01
A confidence interval construction procedure for the proportion of explained variance by a hierarchical, general factor in a multi-component measuring instrument is outlined. The method provides point and interval estimates for the proportion of total scale score variance that is accounted for by the general factor, which could be viewed as common to all components. The approach may also be used for testing composite (one-tailed) or simple hypotheses about this proportion, and is illustrated with a pair of examples. ©2010 The British Psychological Society.
ERIC Educational Resources Information Center
Ruscio, John; Mullen, Tara
2012-01-01
It is good scientific practice to the report an appropriate estimate of effect size and a confidence interval (CI) to indicate the precision with which a population effect was estimated. For comparisons of 2 independent groups, a probability-based effect size estimator (A) that is equal to the area under a receiver operating characteristic curve…
Brand, Andrew; Bradley, Michael T
2016-02-01
Confidence interval (CI) widths were calculated for reported Cohen's d standardized effect sizes and examined in two automated surveys of published psychological literature. The first survey reviewed 1,902 articles from Psychological Science. The second survey reviewed a total of 5,169 articles from across the following four APA journals: Journal of Abnormal Psychology, Journal of Applied Psychology, Journal of Experimental Psychology: Human Perception and Performance, and Developmental Psychology. The median CI width for d was greater than 1 in both surveys. Hence, CI widths were, as Cohen (1994) speculated, embarrassingly large. Additional exploratory analyses revealed that CI widths varied across psychological research areas and that CI widths were not discernibly decreasing over time. The theoretical implications of these findings are discussed along with ways of reducing the CI widths and thus improving precision of effect size estimation.
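The reported widths can be reproduced from the common large-sample standard error for d: with a typical 30 participants per group and d = 0.5, the 95% CI width already exceeds 1. A small sketch (the approximation below is the usual asymptotic SE, not the exact noncentral-t method):

```python
import math

def d_ci_width(d, n1, n2, z=1.96):
    """Approximate 95% CI width for Cohen's d using the common
    large-sample standard error."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return 2 * z * se

width = d_ci_width(d=0.5, n1=30, n2=30)
print(round(width, 2))
```

Because the SE shrinks roughly as 1/sqrt(n), halving a CI width requires about four times the sample size, which is why the surveyed widths change so slowly over time.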
Odgaard, Eric C; Fowler, Robert L
2010-06-01
In 2005, the Journal of Consulting and Clinical Psychology (JCCP) became the first American Psychological Association (APA) journal to require statistical measures of clinical significance, plus effect sizes (ESs) and associated confidence intervals (CIs), for primary outcomes (La Greca, 2005). As this represents the single largest editorial effort to improve statistical reporting practices in any APA journal in at least a decade, in this article we investigate the efficacy of that change. All intervention studies published in JCCP in 2003, 2004, 2007, and 2008 were reviewed. Each article was coded for method of clinical significance, type of ES, and type of associated CI, broken down by statistical test (F, t, chi-square, r/R(2), and multivariate modeling). By 2008, clinical significance compliance was 75% (up from 31%), with 94% of studies reporting some measure of ES (reporting improved for individual statistical tests ranging from eta(2) = .05 to .17, with reasonable CIs). Reporting of CIs for ESs also improved, although only to 40%. Also, the vast majority of reported CIs used approximations, which become progressively less accurate for smaller sample sizes and larger ESs (cf. Algina & Keselman, 2003). Changes are near asymptote for ESs and clinical significance, but CIs lag behind. As CIs for ESs are required for primary outcomes, we show how to compute CIs for the vast majority of ESs reported in JCCP, with an example of how to use CIs for ESs as a method to assess clinical significance.
Filgueiras, Paulo R; Terra, Luciana A; Castro, Eustáquio V R; Oliveira, Lize M S L; Dias, Júlio C M; Poppi, Ronei J
2015-09-01
This paper aims to estimate the temperature equivalent to 10% (T10%), 50% (T50%) and 90% (T90%) of distilled volume in crude oils using (1)H NMR and support vector regression (SVR). Confidence intervals for the predicted values were calculated using a boosting-type ensemble method in a procedure called ensemble support vector regression (eSVR). The estimated confidence intervals obtained by eSVR were compared with previously accepted calculations from partial least squares (PLS) models and a boosting-type ensemble applied in the PLS method (ePLS). By using the proposed boosting strategy, it was possible to identify outliers in the T10% property dataset. The eSVR procedure improved the accuracy of the distillation temperature predictions in relation to standard PLS, ePLS and SVR. For T10%, a root mean square error of prediction (RMSEP) of 11.6°C was obtained in comparison with 15.6°C for PLS, 15.1°C for ePLS and 28.4°C for SVR. The RMSEPs for T50% were 24.2°C, 23.4°C, 22.8°C and 14.4°C for PLS, ePLS, SVR and eSVR, respectively. For T90%, the values of RMSEP were 39.0°C, 39.9°C and 39.9°C for PLS, ePLS, SVR and eSVR, respectively. The confidence intervals calculated by the proposed boosting methodology presented acceptable values for the three properties analyzed; however, they were lower than those calculated by the standard methodology for PLS. Copyright © 2015 Elsevier B.V. All rights reserved.
Zhang, Fanghong; Miyaoka, Etsuo; Huang, Fuping; Tanaka, Yutaka
2015-01-01
The problem of establishing noninferiority of a new treatment relative to a standard (control) treatment with ordinal categorical data is discussed. A measure of treatment effect is used and a method of specifying the noninferiority margin for the measure is provided. Two Z-type test statistics are proposed where the estimation of variance is constructed under the shifted null hypothesis using U-statistics. Furthermore, the confidence interval and the sample size formula are given based on the proposed test statistics. The proposed procedure is applied to a dataset from a clinical trial. A simulation study is conducted to compare the performance of the proposed test statistics with that of the existing ones, and the results show that the proposed test statistics are better in terms of the deviation from nominal level and the power.
Standard Errors and Confidence Intervals of Norm Statistics for Educational and Psychological Tests.
Oosterhuis, Hannah E M; van der Ark, L Andries; Sijtsma, Klaas
2016-11-14
Norm statistics allow for the interpretation of scores on psychological and educational tests, by relating the test score of an individual test taker to the test scores of individuals belonging to the same gender, age, or education groups, et cetera. Given the uncertainty due to sampling error, one would expect researchers to report standard errors for norm statistics. In practice, standard errors are seldom reported; they are either unavailable or derived under strong distributional assumptions that may not be realistic for test scores. We derived standard errors for four norm statistics (standard deviation, percentile ranks, stanine boundaries and Z-scores) under the mild assumption that the test scores are multinomially distributed. A simulation study showed that the standard errors were unbiased and that corresponding Wald-based confidence intervals had good coverage. Finally, we discuss the possibilities for applying the standard errors in practical test use in education and psychology. The procedure is provided via the R function check.norms, which is available in the mokken package.
Qin, Gengsheng; Davis, Angela E; Jing, Bing-Yi
2011-06-01
For a continuous-scale diagnostic test, it is often of interest to find the range of the sensitivity of the test at the cut-off that yields a desired specificity. In this article, we first define a profile empirical likelihood ratio for the sensitivity of a continuous-scale diagnostic test and show that its limiting distribution is a scaled chi-square distribution. We then propose two new empirical likelihood-based confidence intervals for the sensitivity of the test at a fixed level of specificity by using the scaled chi-square distribution. Simulation studies are conducted to compare the finite sample performance of the newly proposed intervals with the existing intervals for the sensitivity in terms of coverage probability. A real example is used to illustrate the application of the recommended methods.
Boiret, Mathieu; Meunier, Loïc; Ginot, Yves-Michel
2011-02-20
A near infrared (NIR) method was developed for determination of tablet potency of active pharmaceutical ingredient (API) in a complex coated tablet matrix. The calibration set contained samples from laboratory and production scale batches. The reference values were obtained by high performance liquid chromatography (HPLC) and partial least squares (PLS) regression was used to establish a model. The model was challenged by calculating tablet potency of two external test sets. Root mean square errors of prediction were respectively equal to 2.0% and 2.7%. To use this model with a second spectrometer from the production field, a calibration transfer method called piecewise direct standardisation (PDS) was used. After the transfer, the root mean square error of prediction of the first test set was 2.4% compared to 4.0% without transferring the spectra. A statistical technique using bootstrap of PLS residuals was used to estimate confidence intervals of tablet potency calculations. This method requires an optimised PLS model, selection of the bootstrap number and determination of the risk. In the case of a chemical analysis, the tablet potency value will be included within the confidence interval calculated by the bootstrap method. An easy to use graphical interface was developed to easily determine if the predictions, surrounded by minimum and maximum values, are within the specifications defined by the regulatory organisation. Copyright © 2010 Elsevier B.V. All rights reserved.
Harari, Gil
2014-01-01
Statistic significance, also known as p-value, and CI (Confidence Interval) are common statistics measures and are essential for the statistical analysis of studies in medicine and life sciences. These measures provide complementary information about the statistical probability and conclusions regarding the clinical significance of study findings. This article is intended to describe the methodologies, compare between the methods, assert their suitability for the different needs of study results analysis and to explain situations in which each method should be used.
Accurate and consistent automatic seismocardiogram annotation without concurrent ECG.
Laurin, A; Khosrow-Khavar, F; Blaber, A P; Tavakolian, Kouhyar
2016-09-01
Seismocardiography (SCG) is the measurement of vibrations in the sternum caused by the beating of the heart. Precise cardiac mechanical timings that are easily obtained from SCG are critically dependent on accurate identification of fiducial points. So far, SCG annotation has relied on concurrent ECG measurements. An algorithm capable of annotating SCG without the use of any other concurrent measurement was designed. We subjected 18 participants to graded lower body negative pressure. We collected ECG and SCG, obtained R peaks from the former, and annotated the latter by hand, using these identified peaks. We also annotated the SCG automatically. We compared the isovolumic moment timings obtained by hand to those obtained using our algorithm. Mean ± confidence interval of the percentage of accurately annotated cardiac cycles were [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text] for levels of negative pressure 0, -20, -30, -40, and -50 mmHg. LF/HF ratios, the relative power of low-frequency variations to high-frequency variations in heart beat intervals, obtained from isovolumic moments were also compared to those obtained from R peaks. The mean differences ± confidence interval were [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text] for increasing levels of negative pressure. The accuracy and consistency of the algorithm enables the use of SCG as a stand-alone heart monitoring tool in healthy individuals at rest, and could serve as a basis for an eventual application in pathological cases.
Maignen, François; Hauben, Manfred; Dogné, Jean-Michel
2017-01-01
Background: The lower bound of the 95% confidence interval of measures of disproportionality (Lower95CI) is widely used in signal detection. Masking is a statistical issue by which true signals of disproportionate reporting are hidden by the presence of other medicines. The primary objective of our study is to develop and validate a mathematical framework for assessing the masking effect of Lower95CI. Methods: We have developed our new algorithm based on the masking ratio (MR) developed for the measures of disproportionality. A MR for the Lower95CI (MRCI) is proposed. A simulation study to validate this algorithm was also conducted. Results: We have established the existence of a very close mathematical relation between MR and MRCI. For a given drug–event pair, the same product will be responsible for the highest masking effect with the measure of disproportionality and its Lower95CI. The extent of masking is likely to be very similar across the two methods. An important proportion of identical drug–event associations affected by the presence of an important masking effect is revealed by the unmasking exercise, whether the proportional reporting ratio (PRR) or its confidence interval are used. Conclusion: The detection of the masking effect of Lower95CI can be automated. The real benefits of this unmasking in terms of new true-positive signals (rate of true-positive/false-positive) or time gained by the revealing of signals using this method have not been fully assessed. These benefits should be demonstrated in the context of prospective studies. PMID:28845231
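The Lower95CI used for signal detection can be sketched concretely for the proportional reporting ratio (PRR): compute the PRR from the drug-event 2x2 table and take the lower bound of its 95% CI on the log scale. The counts below are invented:

```python
import math

def prr_lower95(a, b, c, d):
    """PRR and the lower bound of its 95% CI, using the standard
    contingency-table layout: a = target drug & event of interest,
    b = target drug & other events, c = other drugs & event,
    d = other drugs & other events."""
    prr = (a / (a + b)) / (c / (c + d))
    # Standard error of ln(PRR).
    se_log = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    return prr, math.exp(math.log(prr) - 1.96 * se_log)

prr, lower95 = prr_lower95(a=25, b=975, c=50, d=8950)
print(round(prr, 2), round(lower95, 2))
```

A common signal criterion is lower95 > 1; masking occurs when heavily reported other drugs inflate c and so depress both the PRR and this lower bound.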
Zhang, Fang; Wagner, Anita K; Soumerai, Stephen B; Ross-Degnan, Dennis
2009-02-01
Interrupted time series (ITS) is a strong quasi-experimental research design, which is increasingly applied to estimate the effects of health services and policy interventions. We describe and illustrate two methods for estimating confidence intervals (CIs) around absolute and relative changes in outcomes calculated from segmented regression parameter estimates. We used multivariate delta and bootstrapping methods (BMs) to construct CIs around relative changes in level and trend, and around absolute changes in outcome based on segmented linear regression analyses of time series data corrected for autocorrelated errors. Using previously published time series data, we estimated CIs around the effect of prescription alerts for interacting medications with warfarin on the rate of prescriptions per 10,000 warfarin users per month. Both the multivariate delta method (MDM) and the BM produced similar results. BM is preferred for calculating CIs of relative changes in outcomes of time series studies, because it does not require large sample sizes when parameter estimates are obtained correctly from the model. Caution is needed when sample size is small.
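A sketch of segmented regression for an ITS design with a residual-bootstrap CI for the relative level change. The series is simulated, and this simple bootstrap ignores autocorrelation, which the paper's corrected analyses account for:

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated monthly series: baseline trend, then a level drop of -10
# at the intervention (month 24 of 48), plus a small trend change.
t = np.arange(48)
post = (t >= 24).astype(float)
t_post = np.where(t >= 24, t - 24, 0.0)
y = 100 + 0.5 * t - 10 * post + 0.2 * t_post + rng.normal(0, 2, 48)

# Segmented regression: intercept, baseline trend, level change, trend change.
X = np.column_stack([np.ones(48), t, post, t_post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

def rel_level_change(b):
    """Relative change: level shift over the counterfactual
    baseline prediction at the first post-intervention point."""
    counterfactual = b[0] + b[1] * 24
    return b[2] / counterfactual

est = rel_level_change(beta)

# Residual bootstrap CI (a simple alternative to the delta method).
boot = []
for _ in range(1000):
    yb = X @ beta + rng.choice(resid, 48, replace=True)
    bb, *_ = np.linalg.lstsq(X, yb, rcond=None)
    boot.append(rel_level_change(bb))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(est, 3), round(lo, 3), round(hi, 3))
```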
Devenish Nelson, Eleanor S.; Harris, Stephen; Soulsbury, Carl D.; Richards, Shane A.; Stephens, Philip A.
2010-01-01
Background Demographic models are widely used in conservation and management, and their parameterisation often relies on data collected for other purposes. When underlying data lack clear indications of associated uncertainty, modellers often fail to account for that uncertainty in model outputs, such as estimates of population growth. Methodology/Principal Findings We applied a likelihood approach to infer uncertainty retrospectively from point estimates of vital rates. Combining this with resampling techniques and projection modelling, we show that confidence intervals for population growth estimates are easy to derive. We used similar techniques to examine the effects of sample size on uncertainty. Our approach is illustrated using data on the red fox, Vulpes vulpes, a predator of ecological and cultural importance, and the most widespread extant terrestrial mammal. We show that uncertainty surrounding estimated population growth rates can be high, even for relatively well-studied populations. Halving that uncertainty typically requires a quadrupling of sampling effort. Conclusions/Significance Our results compel caution when comparing demographic trends between populations without accounting for uncertainty. Our methods will be widely applicable to demographic studies of many species. PMID:21049049
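The projection-plus-resampling recipe can be sketched as follows: propagate resampled vital rates through a Leslie matrix and read off percentiles of the dominant eigenvalue. The fox-like rates, assumed sample sizes, and fecundity noise model are all invented for illustration, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(7)

def growth_rate(survival, fecundity):
    """Dominant eigenvalue (lambda) of a simple 3-age-class Leslie matrix."""
    L = np.zeros((3, 3))
    L[0, :] = fecundity
    L[1, 0], L[2, 1] = survival[0], survival[1]
    return np.max(np.abs(np.linalg.eigvals(L)))

# Hypothetical point estimates with assumed sample sizes (the paper's
# point: infer the uncertainty such estimates usually lack).
surv_hat = np.array([0.55, 0.70])     # age-class survival rates
surv_n = np.array([120, 80])          # animals monitored per class
fec_hat = np.array([0.0, 2.0, 2.5])   # mean female offspring per female

lam = growth_rate(surv_hat, fec_hat)

# Parametric resampling: binomial for survival, multiplicative
# lognormal noise on fecundity (an assumption for illustration).
boot = []
for _ in range(2000):
    s = rng.binomial(surv_n, surv_hat) / surv_n
    f = fec_hat * rng.lognormal(0.0, 0.1, 3)
    boot.append(growth_rate(s, f))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(lam, 3), round(lo, 3), round(hi, 3))
```

Rerunning this with quadrupled sample sizes roughly halves the interval width, matching the sampling-effort result described above.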
Soulakova, Julia N; Bright, Brianna C
2013-01-01
A large-sample problem of illustrating noninferiority of an experimental treatment over a referent treatment for binary outcomes is considered. The methods of illustrating noninferiority involve constructing the lower two-sided confidence bound for the difference between binomial proportions corresponding to the experimental and referent treatments and comparing it with the negative value of the noninferiority margin. The three considered methods, Anbar, Falk-Koch, and Reduced Falk-Koch, handle the comparison in an asymmetric way, that is, only the referent proportion out of the two, experimental and referent, is directly involved in the expression for the variance of the difference between two sample proportions. Five continuity corrections (including zero) are considered with respect to each approach. The key properties of the corresponding methods are evaluated via simulations. First, the uncorrected two-sided confidence intervals can, potentially, have smaller coverage probability than the nominal level even for moderately large sample sizes, for example, 150 per group. Next, the 15 testing methods are discussed in terms of their Type I error rate and power. In the settings with a relatively small referent proportion (about 0.4 or smaller), the Anbar approach with Yates' continuity correction is recommended for balanced designs and the Falk-Koch method with Yates' correction is recommended for unbalanced designs. For relatively moderate (about 0.6) and large (about 0.8 or greater) referent proportion, the uncorrected Reduced Falk-Koch method is recommended, although in this case, all methods tend to be over-conservative. These results are expected to be used in the design stage of a noninferiority study when asymmetric comparisons are envisioned. Copyright © 2013 John Wiley & Sons, Ltd.
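A generic sketch of the noninferiority comparison described above: a Wald lower two-sided confidence bound for the difference in proportions, compared against the negative margin. This uses the plain symmetric Wald variance without continuity correction, not the specific Anbar or Falk-Koch expressions:

```python
import math

def noninferior(x_exp, n_exp, x_ref, n_ref, margin, z=1.96):
    """Lower two-sided 95% Wald confidence bound for p_exp - p_ref;
    noninferiority is claimed when the bound exceeds -margin."""
    p_e, p_r = x_exp / n_exp, x_ref / n_ref
    se = math.sqrt(p_e * (1 - p_e) / n_exp + p_r * (1 - p_r) / n_ref)
    lower = (p_e - p_r) - z * se
    return lower, lower > -margin

# Invented trial: 150 per arm, 10-point margin.
lower, ok = noninferior(x_exp=115, n_exp=150, x_ref=120, n_ref=150,
                        margin=0.10)
print(round(lower, 3), ok)
```

Here the bound falls below -0.10, so noninferiority is not shown; the asymmetric methods studied in the paper replace the variance term with expressions driven by the referent proportion alone.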
Paek, Insu
2015-01-01
The effect of guessing on the point estimate of coefficient alpha has been studied in the literature, but the impact of guessing and its interactions with other test characteristics on the interval estimators for coefficient alpha has not been fully investigated. This study examined the impact of guessing and its interactions with other test characteristics on four confidence interval (CI) procedures for coefficient alpha in terms of coverage rate (CR), length, and the degree of asymmetry of CI estimates. In addition, interval estimates of coefficient alpha when data follow the essentially tau-equivalent condition were investigated as a supplement to the case of dichotomous data with examinee guessing. For dichotomous data with guessing, the results did not reveal salient negative effects of guessing and its interactions with other test characteristics (sample size, test length, coefficient alpha levels) on CR and the degree of asymmetry, but the effect of guessing was salient as a main effect and an interaction effect with sample size on the length of the CI estimates, making longer CI estimates as guessing increases, especially when combined with a small sample size. Other important effects (e.g., CI procedures on CR) are also discussed. PMID:29795863
Dong, Tuochuan; Kang, Le; Hutson, Alan; Xiong, Chengjie; Tian, Lili
2014-03-01
Although most of the statistical methods for diagnostic studies focus on disease processes with binary disease status, many diseases can be naturally classified into three ordinal diagnostic categories, that is, normal, early stage, and fully diseased. For such diseases, the volume under the ROC surface (VUS) is the most commonly used index of diagnostic accuracy. Because the early disease stage is most likely the optimal time window for therapeutic intervention, the sensitivity to the early diseased stage has been suggested as another diagnostic measure. For the purpose of comparing the diagnostic abilities in early disease detection between two markers, it is of interest to estimate the confidence interval of the difference between sensitivities to the early diseased stage. In this paper, we present both parametric and non-parametric methods for this purpose. An extensive simulation study is carried out for a variety of settings for the purpose of evaluating and comparing the performance of the proposed methods. A real example of Alzheimer's disease (AD) is analyzed using the proposed approaches. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Identifying the bad guy in a lineup using confidence judgments under deadline pressure.
Brewer, Neil; Weber, Nathan; Wootton, David; Lindsay, D Stephen
2012-10-01
Eyewitness-identification tests often culminate in witnesses not picking the culprit or identifying innocent suspects. We tested a radical alternative to the traditional lineup procedure used in such tests. Rather than making a positive identification, witnesses made confidence judgments under a short deadline about whether each lineup member was the culprit. We compared this deadline procedure with the traditional sequential-lineup procedure in three experiments with retention intervals ranging from 5 min to 1 week. A classification algorithm that identified confidence criteria that optimally discriminated accurate from inaccurate decisions revealed that decision accuracy was 24% to 66% higher under the deadline procedure than under the traditional procedure. Confidence profiles across lineup stimuli were more informative than were identification decisions about the likelihood that an individual witness recognized the culprit or correctly recognized that the culprit was not present. Large differences between the maximum and the next-highest confidence value signaled very high accuracy. Future support for this procedure across varied conditions would highlight a viable alternative to the problematic lineup procedures that have traditionally been used by law enforcement.
Confidence intervals for single-case effect size measures based on randomization test inversion.
Michiels, Bart; Heyvaert, Mieke; Meulders, Ann; Onghena, Patrick
2017-02-01
In the current paper, we present a method to construct nonparametric confidence intervals (CIs) for single-case effect size measures in the context of various single-case designs. We use the relationship between a two-sided statistical hypothesis test at significance level α and a 100 (1 - α) % two-sided CI to construct CIs for any effect size measure θ that contain all point null hypothesis θ values that cannot be rejected by the hypothesis test at significance level α. This method of hypothesis test inversion (HTI) can be employed using a randomization test as the statistical hypothesis test in order to construct a nonparametric CI for θ. We will refer to this procedure as randomization test inversion (RTI). We illustrate RTI in a situation in which θ is the unstandardized and the standardized difference in means between two treatments in a completely randomized single-case design. Additionally, we demonstrate how RTI can be extended to other types of single-case designs. Finally, we discuss a few challenges for RTI as well as possibilities when using the method with other effect size measures, such as rank-based nonoverlap indices. Supplementary to this paper, we provide easy-to-use R code, which allows the user to construct nonparametric CIs according to the proposed method.
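A toy version of the inversion recipe for the completely randomized case might look as follows. This is an illustrative sketch, not the authors' supplementary R code; the function names and the user-supplied grid of candidate effect sizes are assumptions. It enumerates all assignments and keeps every shift value theta that the level-alpha randomization test fails to reject:

```python
import itertools
import statistics

def randomization_p(scores, labels):
    """Two-sided randomization-test p-value for the difference in means
    between conditions A and B, enumerating all reassignments."""
    n = len(scores)
    k = labels.count("A")
    def mean_diff(a_idx):
        a_set = set(a_idx)
        a = [scores[i] for i in a_set]
        b = [scores[i] for i in range(n) if i not in a_set]
        return statistics.mean(a) - statistics.mean(b)
    observed = abs(mean_diff([i for i, l in enumerate(labels) if l == "A"]))
    hits, total = 0, 0
    for comb in itertools.combinations(range(n), k):
        total += 1
        if abs(mean_diff(comb)) >= observed - 1e-12:
            hits += 1
    return hits / total

def rti_interval(scores, labels, grid, alpha=0.05):
    """Randomization test inversion: keep every theta on `grid` that the
    test on theta-shifted A-condition data fails to reject at level alpha."""
    keep = [theta for theta in grid
            if randomization_p([s - theta if l == "A" else s
                                for s, l in zip(scores, labels)],
                               labels) > alpha]
    return (min(keep), max(keep)) if keep else None
```

Because the randomization distribution is discrete, the achievable confidence level is limited by the number of possible assignments, which is why small designs yield coarse intervals.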
Yin, Jingjing; Nakas, Christos T; Tian, Lili; Reiser, Benjamin
2018-03-01
This article explores both existing and new methods for the construction of confidence intervals for differences of indices of diagnostic accuracy of competing pairs of biomarkers in three-class classification problems and fills the methodological gaps for both parametric and non-parametric approaches in the receiver operating characteristic surface framework. The most widely used such indices are the volume under the receiver operating characteristic surface and the generalized Youden index. We describe implementation of all methods and offer insight regarding the appropriateness of their use through a large simulation study with different distributional and sample size scenarios. Methods are illustrated using data from the Alzheimer's Disease Neuroimaging Initiative study, where assessment of cognitive function naturally results in a three-class classification setting.
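For orientation, the empirical VUS is simply the fraction of cross-group triples whose marker values are ordered correctly (chance level is 1/6 for three classes). A minimal sketch with an assumed function name, counting ties as failures:

```python
from itertools import product

def empirical_vus(normal, early, full):
    """Empirical volume under the ROC surface: the proportion of
    (normal, early, fully diseased) triples whose marker values are in
    strictly increasing order. Ties count against the marker here."""
    triples = list(product(normal, early, full))
    ok = sum(1 for a, b, c in triples if a < b < c)
    return ok / len(triples)
```

A perfectly separating marker attains VUS = 1; heavy overlap between adjacent classes pulls the index toward the 1/6 chance level.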
Neutron multiplicity counting: Confidence intervals for reconstruction parameters
Verbeke, Jerome M.
2016-03-09
From nuclear materials accountability to homeland security, the need for improved nuclear material detection, assay, and authentication has grown over the past decades. Starting in the 1940s, neutron multiplicity counting techniques have enabled quantitative evaluation of masses and multiplications of fissile materials. In this paper, we propose a new method to compute uncertainties on these parameters using a model-based sequential Bayesian processor, resulting in credible regions in the fissile material mass and multiplication space. These uncertainties will enable us to evaluate quantitatively proposed improvements to the theoretical fission chain model. Additionally, because the processor can calculate uncertainties in real time, it is a useful tool in applications such as portal monitoring: monitoring can stop as soon as a preset confidence of non-threat is reached.
NASA Technical Reports Server (NTRS)
Murphy, P. C.
1986-01-01
An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. With the fitted surface, sensitivity information can be updated at each iteration with less computational effort than that required by either a finite-difference method or integration of the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, and thus provides flexibility to use model equations in any convenient format. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. The degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels and to predict the degree of agreement between CR bounds and search estimates.
Assessing Mediational Models: Testing and Interval Estimation for Indirect Effects.
Biesanz, Jeremy C; Falk, Carl F; Savalei, Victoria
2010-08-06
Theoretical models specifying indirect or mediated effects are common in the social sciences. An indirect effect exists when an independent variable's influence on the dependent variable is mediated through an intervening variable. Classic approaches to assessing such mediational hypotheses (Baron & Kenny, 1986; Sobel, 1982) have in recent years been supplemented by computationally intensive methods such as bootstrapping, the distribution of the product methods, and hierarchical Bayesian Markov chain Monte Carlo (MCMC) methods. These different approaches for assessing mediation are illustrated using data from Dunn, Biesanz, Human, and Finn (2007). However, little is known about how these methods perform relative to each other, particularly in more challenging situations, such as with data that are incomplete and/or nonnormal. This article presents an extensive Monte Carlo simulation evaluating a host of approaches for assessing mediation. We examine Type I error rates, power, and coverage. We study normal and nonnormal data as well as complete and incomplete data. In addition, we adapt a method, recently proposed in the statistical literature, that does not rely on confidence intervals (CIs) to test the null hypothesis of no indirect effect. The results suggest that the new inferential method, the partial posterior p value, slightly outperforms existing ones in terms of maintaining Type I error rates while maximizing power, especially with incomplete data. Among confidence interval approaches, the bias-corrected accelerated (BCa) bootstrapping approach often has inflated Type I error rates and inconsistent coverage and is not recommended. In contrast, the bootstrapped percentile confidence interval and the hierarchical Bayesian MCMC method perform best overall, maintaining Type I error rates, exhibiting reasonable power, and producing stable and accurate coverage rates.
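The percentile bootstrap favored in this evaluation is straightforward to sketch for a single-mediator model. The code below is an illustrative stand-in, not the authors' implementation: all names are assumed, and the b path is obtained by Frisch-Waugh residualization instead of a full multiple-regression routine:

```python
import random
import statistics

def ols_slope(x, y):
    """Least-squares slope of y on x."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def residualize(x, y):
    """Residuals of y after a simple regression on x."""
    s = ols_slope(x, y)
    mx, my = statistics.mean(x), statistics.mean(y)
    return [b - my - s * (a - mx) for a, b in zip(x, y)]

def indirect_effect(x, m, y):
    """a*b: a from regressing M on X; b via Frisch-Waugh, regressing
    Y-residuals (given X) on M-residuals (given X)."""
    a = ols_slope(x, m)
    b = ols_slope(residualize(x, m), residualize(x, y))
    return a * b

def percentile_ci(x, m, y, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap interval for the indirect effect a*b."""
    rng = random.Random(seed)
    n = len(x)
    boots = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        boots.append(indirect_effect([x[i] for i in idx],
                                     [m[i] for i in idx],
                                     [y[i] for i in idx]))
    boots.sort()
    return boots[int(n_boot * alpha / 2)], boots[int(n_boot * (1 - alpha / 2)) - 1]
```

The interval endpoints are simply the empirical 2.5th and 97.5th percentiles of the resampled indirect effects, which is what distinguishes the percentile method from the BCa variant criticized above.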
Estimation and confidence intervals for empirical mixing distributions
Link, W.A.; Sauer, J.R.
1995-01-01
Questions regarding collections of parameter estimates can frequently be expressed in terms of an empirical mixing distribution (EMD). This report discusses empirical Bayes estimation of an EMD, with emphasis on the construction of interval estimates. Estimation of the EMD is accomplished by substitution of estimates of prior parameters in the posterior mean of the EMD. This procedure is examined in a parametric model (the normal-normal mixture) and in a semi-parametric model. In both cases, the empirical Bayes bootstrap of Laird and Louis (1987, Journal of the American Statistical Association 82, 739-757) is used to assess the variability of the estimated EMD arising from the estimation of prior parameters. The proposed methods are applied to a meta-analysis of population trend estimates for groups of birds.
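The parametric (normal-normal) posterior-mean step can be sketched as follows. This illustrative code uses a simple method-of-moments prior-variance estimate and assumed names, and omits the empirical Bayes bootstrap that the report uses to assess variability from estimating the prior parameters:

```python
import statistics

def eb_posterior_means(estimates, ses):
    """Normal-normal empirical Bayes sketch: method-of-moments estimate
    of the prior variance, then posterior means shrinking each point
    estimate toward the grand mean in proportion to its precision."""
    m = statistics.mean(estimates)
    # prior variance = observed spread minus average sampling variance
    tau2 = max(0.0, statistics.variance(estimates)
               - statistics.mean([s * s for s in ses]))
    return [m + (tau2 / (tau2 + s * s)) * (e - m) if tau2 + s * s > 0 else m
            for e, s in zip(estimates, ses)]
```

When sampling noise dominates the between-estimate spread, the estimated prior variance collapses to zero and every posterior mean shrinks all the way to the grand mean; with precise estimates, almost no shrinkage occurs.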
Garg, Harish
2013-03-01
The main objective of the present paper is to propose a methodology for analyzing the behavior of complex repairable industrial systems. In real-life situations, it is difficult to find optimal design policies for MTBF (mean time between failures), MTTR (mean time to repair) and related costs by utilizing available resources and uncertain data. For this, an availability-cost optimization model has been constructed for determining the optimal design parameters for improving the system design efficiency. The uncertainties in the data related to each component of the system are estimated with the help of fuzzy and statistical methodology in the form of triangular fuzzy numbers. Using these data, the various reliability parameters, which affect the system performance, are obtained in the form of the fuzzy membership function by the proposed confidence interval based fuzzy Lambda-Tau (CIBFLT) methodology. The computed results by CIBFLT are compared with the existing fuzzy Lambda-Tau methodology. Sensitivity analysis on the system MTBF has also been addressed. The methodology has been illustrated through a case study of a washing unit, the main part of the paper industry. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
Ishwaran, Hemant; Lu, Min
2018-06-04
Random forests are a popular nonparametric tree ensemble procedure with broad applications to data analysis. While its widespread popularity stems from its prediction performance, an equally important feature is that it provides a fully nonparametric measure of variable importance (VIMP). A current limitation of VIMP, however, is that no systematic method exists for estimating its variance. As a solution, we propose a subsampling approach that can be used to estimate the variance of VIMP and for constructing confidence intervals. The method is general enough that it can be applied to many useful settings, including regression, classification, and survival problems. Using extensive simulations, we demonstrate the effectiveness of the subsampling estimator and in particular find that the delete-d jackknife variance estimator, a close cousin, is especially effective under low subsampling rates due to its bias correction properties. These 2 estimators are highly competitive when compared with the .164 bootstrap estimator, a modified bootstrap procedure designed to deal with ties in out-of-sample data. Most importantly, subsampling is computationally fast, thus making it especially attractive for big data settings. Copyright © 2018 John Wiley & Sons, Ltd.
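The subsampling estimator has a generic form that is easy to sketch: compute the statistic on many size-b subsamples drawn without replacement and rescale their spread by b/n. In the paper the statistic is a forest's variable importance; the sketch below substitutes an arbitrary callable, and all names are illustrative:

```python
import random
import statistics

def subsample_variance(data, stat, b, n_sub=500, seed=7):
    """Subsampling variance estimate for a root-n statistic: the
    variance of `stat` over size-b subsamples (drawn without
    replacement) is rescaled by b/n. Valid when b is small relative
    to n; here `stat` stands in for a forest's VIMP."""
    rng = random.Random(seed)
    n = len(data)
    vals = [stat(rng.sample(data, b)) for _ in range(n_sub)]
    return (b / n) * statistics.variance(vals)
```

Because each subsample statistic is cheap relative to refitting on full bootstrap samples, this rescaling trick is what makes the approach attractive for big data, as the abstract notes.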
Confidence Intervals for Laboratory Sonic Boom Annoyance Tests
NASA Technical Reports Server (NTRS)
Rathsam, Jonathan; Christian, Andrew
2016-01-01
Commercial supersonic flight is currently forbidden over land because sonic booms have historically caused unacceptable annoyance levels in overflown communities. NASA is providing data and expertise to noise regulators as they consider relaxing the ban for future quiet supersonic aircraft. One deliverable NASA will provide is a predictive model for indoor annoyance to aid in setting an acceptable quiet sonic boom threshold. A laboratory study was conducted to determine how indoor vibrations caused by sonic booms affect annoyance judgments. The test method required finding the point of subjective equality (PSE) between sonic boom signals that cause vibrations and signals not causing vibrations played at various amplitudes. This presentation focuses on a few statistical techniques for estimating the interval around the PSE. The techniques examined are the Delta Method, Parametric and Nonparametric Bootstrapping, and Bayesian Posterior Estimation.
Weighted regression analysis and interval estimators
Donald W. Seegrist
1974-01-01
A method is given for deriving the weighted least squares estimators for the parameters of a multiple regression model. Confidence intervals for expected values and prediction intervals for the means of future samples are also given.
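For a single predictor the weighted least-squares estimators reduce to weighted means and weighted cross-products. A minimal sketch with assumed names; the weights would typically be inverse variances of the observations:

```python
def weighted_least_squares(x, y, w):
    """Weighted least-squares fit of y = b0 + b1*x with weights w
    (typically inverse variances). Returns (b0, b1)."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    sxy = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    b1 = sxy / sxx
    return my - b1 * mx, b1
```

Fitting data that lie exactly on a line recovers the intercept and slope regardless of the weights, which is a convenient sanity check for any implementation.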
Elming, H; Holm, E; Jun, L; Torp-Pedersen, C; Køber, L; Kircshoff, M; Malik, M; Camm, J
1998-09-01
To evaluate the prognostic value of the QT interval and QT interval dispersion in total and in cardiovascular mortality, as well as in cardiac morbidity, in a general population. The QT interval was measured in all leads from a standard 12-lead ECG in a random sample of 1658 women and 1797 men aged 30-60 years. QT interval dispersion was calculated from the maximal difference between QT intervals in any two leads. All-cause mortality over 13 years, and cardiovascular mortality as well as cardiac morbidity over 11 years, were the main outcome parameters. Subjects with a prolonged QT interval (430 ms or more) or prolonged QT interval dispersion (80 ms or more) were at higher risk of cardiovascular death and cardiac morbidity than subjects whose QT interval was less than 360 ms, or whose QT interval dispersion was less than 30 ms. Cardiovascular death relative risk ratios, adjusted for age, gender, myocardial infarct, angina pectoris, diabetes mellitus, arterial hypertension, smoking habits, serum cholesterol level, and heart rate, were 2.9 for the QT interval (95% confidence interval 1.1-7.8) and 4.4 for QT interval dispersion (95% confidence interval 1.0-19.1). Fatal and non-fatal cardiac morbidity relative risk ratios were similar, at 2.7 (95% confidence interval 1.4-5.5) for the QT interval and 2.2 (95% confidence interval 1.1-4.0) for QT interval dispersion. Prolongation of the QT interval and QT interval dispersion independently affected the prognosis of cardiovascular mortality and cardiac fatal and non-fatal morbidity in a general population over 11 years.
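The dispersion measure defined above (maximal between-lead QT difference) and the study's risk thresholds are simple to express in code; helper names below are assumed, not from the study:

```python
def qt_dispersion(qt_by_lead):
    """QT interval dispersion: maximal difference between QT intervals
    measured in any two ECG leads (ms). None marks unmeasurable leads."""
    vals = [q for q in qt_by_lead if q is not None]
    return max(vals) - min(vals)

def high_risk_flags(qt_max, dispersion):
    """Thresholds from the study above: QT of 430 ms or more, or
    dispersion of 80 ms or more, marked the higher-risk groups."""
    return qt_max >= 430, dispersion >= 80
```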
Lung Sliding Identification Is Less Accurate in the Left Hemithorax.
Piette, Eric; Daoust, Raoul; Lambert, Jean; Denault, André
2017-02-01
The aim of our study was to compare the accuracy of lung sliding identification for the left and right hemithoraxes, using prerecorded short US sequences, in a group of physicians with mixed clinical and US training. A total of 140 US sequences of a complete respiratory cycle were recorded in the operating room. Each sequence was divided in two, yielding 140 sequences of present lung sliding and 140 sequences of absent lung sliding. Of these 280 sequences, 40 were randomly repeated to assess intraobserver variability, for a total of 320 sequences. Descriptive data, the mean accuracy of each participant, as well as the rate of correct answers for each of the original 280 sequences were tabulated and compared for different subgroups of clinical and US training. A video with examples of present and absent lung sliding and a lung pulse was shown before testing. Two sessions were planned to facilitate the participation of 75 clinicians. In the first group, the rate of accurate lung sliding identification was lower in the left hemithorax than in the right (67.0% [interquartile range (IQR), 43.0-83.0] versus 80.0% [IQR, 57.0-95.0]; P < .001). In the second group, the rate of accurate lung sliding identification was also lower in the left hemithorax than in the right (76.3% [IQR, 42.9-90.9] versus 88.7% [IQR, 63.1-96.9]; P = .001). Mean accuracy rates were 67.5% (95% confidence interval, 65.7-69.4) in the first group and 73.1% (95% confidence interval, 70.7-75.5) in the second (P < .001). Lung sliding identification seems less accurate in the left hemithorax when using a short US examination. This study was done on recorded US sequences and should be repeated in a live clinical situation to confirm our results. © 2016 by the American Institute of Ultrasound in Medicine.
The Logic of Summative Confidence
ERIC Educational Resources Information Center
Gugiu, P. Cristian
2007-01-01
The constraints of conducting evaluations in real-world settings often necessitate the implementation of less than ideal designs. Unfortunately, the standard method for estimating the precision of a result (i.e., confidence intervals [CI]) cannot be used for evaluative conclusions that are derived from multiple indicators, measures, and data…
ERIC Educational Resources Information Center
Inzunsa Cazares, Santiago
2016-01-01
This article presents the results of a qualitative research with a group of 15 university students of social sciences on informal inferential reasoning developed in a computer environment on concepts involved in the confidence intervals. The results indicate that students developed a correct reasoning about sampling variability and visualized…
Adjusted Wald Confidence Interval for a Difference of Binomial Proportions Based on Paired Data
ERIC Educational Resources Information Center
Bonett, Douglas G.; Price, Robert M.
2012-01-01
Adjusted Wald intervals for binomial proportions in one-sample and two-sample designs have been shown to perform about as well as the best available methods. The adjusted Wald intervals are easy to compute and have been incorporated into introductory statistics courses. An adjusted Wald interval for paired binomial proportions is proposed here and…
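A common way to build an adjusted Wald interval for paired proportions is to add a small count to each cell of the paired 2x2 table before applying the Wald formula. The sketch below uses an Agresti-Min style +0.5 per cell, which may differ in detail from the Bonett-Price adjustment proposed in the article; names are illustrative:

```python
from statistics import NormalDist

def paired_adjusted_wald(n11, n12, n21, n22, alpha=0.05, add=0.5):
    """Wald CI for the difference of paired proportions p1 - p2 after
    adding `add` to every cell of the 2x2 table (Agresti-Min style;
    the Bonett-Price adjustment differs in detail). n12 and n21 are
    the discordant counts, which drive the difference."""
    n = n11 + n12 + n21 + n22 + 4 * add
    p12 = (n12 + add) / n
    p21 = (n21 + add) / n
    d = p12 - p21
    se = ((p12 + p21 - d * d) / n) ** 0.5
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return d - z * se, d + z * se
```

With equal discordant counts the interval is symmetric about zero, since only the discordant cells contribute to the estimated difference.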
Memory conformity affects inaccurate memories more than accurate memories.
Wright, Daniel B; Villalba, Daniella K
2012-01-01
After controlling for initial confidence, inaccurate memories were shown to be more easily distorted than accurate memories. In two experiments groups of participants viewed 50 stimuli and were then presented with these stimuli plus 50 fillers. During this test phase participants reported their confidence that each stimulus was originally shown. This was followed by computer-generated responses from a bogus participant. After being exposed to this response participants again rated the confidence of their memory. The computer-generated responses systematically distorted participants' responses. Memory distortion depended on initial memory confidence, with uncertain memories being more malleable than confident memories. This effect was moderated by whether the participant's memory was initially accurate or inaccurate. Inaccurate memories were more malleable than accurate memories. The data were consistent with a model describing two types of memory (i.e., recollective and non-recollective memories), which differ in how susceptible these memories are to memory distortion.
Comparing interval estimates for small sample ordinal CFA models
Natesan, Prathiba
2015-01-01
Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased, that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.
R package to estimate intracluster correlation coefficient with confidence interval for binary data.
Chakraborty, Hrishikesh; Hossain, Akhtar
2018-03-01
The Intracluster Correlation Coefficient (ICC) is a major parameter of interest in cluster randomized trials that measures the degree to which responses within the same cluster are correlated. There are several types of ICC estimators and their confidence intervals (CIs) suggested in the literature for binary data. Studies have compared relative weaknesses and advantages of ICC estimators as well as their CIs for binary data and suggested situations where one is advantageous in practical research. The commonly used statistical computing systems currently facilitate estimation of only a very few variants of the ICC and its CI. To address the limitations of current statistical packages, we developed an R package, ICCbin, to facilitate estimating the ICC and its CI for binary responses using different methods. The ICCbin package is designed to provide estimates of the ICC in 16 different ways, including analysis of variance methods, moments based estimation, direct probabilistic methods, correlation based estimation, and a resampling method. The CI of the ICC is estimated using 5 different methods. It also generates cluster binary data using an exchangeable correlation structure. The ICCbin package provides two functions for users. The function rcbin() generates cluster binary data and the function iccbin() estimates the ICC and its CI. Users can choose the appropriate ICC and CI estimates from the wide selection in the output. The R package ICCbin presents very flexible and easy-to-use ways to generate cluster binary data and to estimate the ICC and its CI for binary responses using different methods. The package ICCbin is freely available for use with R from the CRAN repository (https://cran.r-project.org/package=ICCbin). We believe that this package can be a very useful tool for researchers to design cluster randomized trials with a binary outcome. Copyright © 2017 Elsevier B.V. All rights reserved.
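One of the analysis-of-variance estimators that packages like ICCbin implement can be written directly for binary clustered data. The sketch below is illustrative, not the package's code; the function name is assumed:

```python
def anova_icc(clusters):
    """One-way ANOVA estimator of the intracluster correlation for
    binary responses; `clusters` is a list of 0/1 response lists.
    Uses the standard between/within mean-square contrast with the
    n0 adjustment for unequal cluster sizes."""
    k = len(clusters)
    n_i = [len(c) for c in clusters]
    N = sum(n_i)
    y_i = [sum(c) for c in clusters]
    Y = sum(y_i)
    n0 = (N - sum(ni ** 2 for ni in n_i) / N) / (k - 1)
    ss_between = sum(yi ** 2 / ni for yi, ni in zip(y_i, n_i)) - Y ** 2 / N
    ss_within = Y - sum(yi ** 2 / ni for yi, ni in zip(y_i, n_i))
    msb = ss_between / (k - 1)
    msw = ss_within / (N - k)
    return (msb - msw) / (msb + (n0 - 1) * msw)
```

Clusters that are internally homogeneous but differ from each other drive the estimate toward 1, while within-cluster disagreement pushes it toward (or below) zero.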
Confidence Intervals for Squared Semipartial Correlation Coefficients: The Effect of Nonnormality
ERIC Educational Resources Information Center
Algina, James; Keselman, H. J.; Penfield, Randall D.
2010-01-01
The increase in the squared multiple correlation coefficient ([delta]R[superscript 2]) associated with a variable in a regression equation is a commonly used measure of importance in regression analysis. Algina, Keselman, and Penfield found that intervals based on asymptotic principles were typically very inaccurate, even though the sample size…
Five-Year Risk of Interval-Invasive Second Breast Cancer
Buist, Diana S. M.; Houssami, Nehmat; Dowling, Emily C.; Halpern, Elkan F.; Gazelle, G. Scott; Lehman, Constance D.; Henderson, Louise M.; Hubbard, Rebecca A.
2015-01-01
Background: Earlier detection of second breast cancers after primary breast cancer (PBC) treatment improves survival, yet mammography is less accurate in women with prior breast cancer. The purpose of this study was to examine women presenting clinically with second breast cancers after negative surveillance mammography (interval cancers), and to estimate the five-year risk of interval-invasive second cancers for women with varying risk profiles. Methods: We evaluated a prospective cohort of 15 114 women with 47 717 surveillance mammograms diagnosed with stage 0-II unilateral PBC from 1996 through 2008 at facilities in the Breast Cancer Surveillance Consortium. We used discrete time survival models to estimate the association between odds of an interval-invasive second breast cancer and candidate predictors, including demographic, PBC, and imaging characteristics. All statistical tests were two-sided. Results: The cumulative incidence of second breast cancers after five years was 54.4 per 1000 women, with 325 surveillance-detected and 138 interval-invasive second breast cancers. The five-year risk of interval-invasive second cancer for women with referent category characteristics was 0.60%. For women with the most and least favorable profiles, the five-year risk ranged from 0.07% to 6.11%. Multivariable modeling identified grade II PBC (odds ratio [OR] = 1.95, 95% confidence interval [CI] = 1.15 to 3.31), treatment with lumpectomy without radiation (OR = 3.27, 95% CI = 1.91 to 5.62), interval PBC presentation (OR = 2.01, 95% CI = 1.28 to 3.16), and heterogeneously dense breasts on mammography (OR = 1.54, 95% CI = 1.01 to 2.36) as independent predictors of interval-invasive second breast cancers. Conclusions: PBC diagnosis and treatment characteristics contribute to variation in subsequent-interval second breast cancer risk. Consideration of these factors may be useful in developing tailored post-treatment imaging surveillance plans. PMID:25904721
Liu, Xiaofeng Steven
2011-05-01
The use of covariates is commonly believed to reduce the unexplained error variance and the standard error for the comparison of treatment means, but the reduction in the standard error is neither guaranteed nor uniform over different sample sizes. The covariate mean differences between the treatment conditions can inflate the standard error of the covariate-adjusted mean difference and can actually produce a larger standard error for the adjusted mean difference than that for the unadjusted mean difference. When the covariate observations are conceived of as randomly varying from one study to another, the covariate mean differences can be related to a Hotelling's T(2) . Using this Hotelling's T(2) statistic, one can always find a minimum sample size to achieve a high probability of reducing the standard error and confidence interval width for the adjusted mean difference. ©2010 The British Psychological Society.
Addante, Richard J.; Ranganath, Charan; Yonelinas, Andrew P.
2012-01-01
Recollection is typically associated with high recognition confidence and accurate source memory. However, subjects sometimes make accurate source memory judgments even for items that are not confidently recognized, and it is not known whether these responses are based on recollection or some other memory process. In the current study, we measured event-related potentials (ERPs) while subjects made item and source memory confidence judgments in order to determine whether recollection supported accurate source recognition responses for items that were not confidently recognized. In line with previous studies, we found that recognition memory was associated with two ERP effects: an early on-setting FN400 effect, and a later parietal old-new effect [Late Positive Component (LPC)], which have been associated with familiarity and recollection, respectively. The FN400 increased gradually with item recognition confidence, whereas the LPC was only observed for highly confident recognition responses. The LPC was also related to source accuracy, but only for items that had received a high confidence item recognition response; accurate source judgments to items that were less confidently recognized did not exhibit the typical ERP correlate of recollection or familiarity, but rather showed a late, broadly distributed negative ERP difference. The results indicate that accurate source judgments of episodic context can occur even when recollection fails. PMID:22548808
An interval model updating strategy using interval response surface models
NASA Astrophysics Data System (ADS)
Fang, Sheng-En; Zhang, Qiu-Hu; Ren, Wei-Xin
2015-08-01
Stochastic model updating provides an effective way of handling uncertainties existing in real-world structures. In general, probabilistic theories, fuzzy mathematics or interval analyses are involved in the solution of inverse problems. In practice, however, probability distributions or membership functions of structural parameters are often unavailable due to insufficient information about a structure. In such cases an interval model updating procedure offers the advantage of problem simplification, since only the upper and lower bounds of parameters and responses are sought. To this end, this study develops a new concept of interval response surface models for the purpose of efficiently implementing the interval model updating procedure. The interval overestimation that frequently results from interval arithmetic can be largely avoided, leading to accurate estimation of parameter intervals. Meanwhile, the establishment of the interval inverse problem is greatly simplified, with a corresponding saving of computational costs. By this means a relatively simple and cost-efficient interval updating process can be achieved. Lastly, the feasibility and reliability of the developed method have been verified against a numerical mass-spring system and against a set of experimentally tested steel plates.
Miedema, H M; Oudshoorn, C G
2001-01-01
We present a model of the distribution of noise annoyance with the mean varying as a function of the noise exposure. Day-night level (DNL) and day-evening-night level (DENL) were used as noise descriptors. Because the entire annoyance distribution has been modeled, any annoyance measure that summarizes this distribution can be calculated from the model. We fitted the model to data from noise annoyance studies for aircraft, road traffic, and railways separately. Polynomial approximations of relationships implied by the model for the combinations of the following exposure and annoyance measures are presented: DNL or DENL, and percentage "highly annoyed" (cutoff at 72 on a scale of 0-100), percentage "annoyed" (cutoff at 50 on a scale of 0-100), or percentage (at least) "a little annoyed" (cutoff at 28 on a scale of 0-100). These approximations are very good, and they are easier to use for practical calculations than the model itself, because the model involves a normal distribution. Our results are based on the same data set that was used earlier to establish relationships between DNL and percentage highly annoyed. In this paper we provide better estimates of the confidence intervals due to the improved model of the relationship between annoyance and noise exposure. Moreover, relationships using descriptors other than DNL and percentage highly annoyed, which are presented here, have not been established earlier on the basis of a large dataset. PMID:11335190
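The core calculation in the model described above is the share of a normal annoyance distribution lying above a cutoff on the 0-100 scale. The sketch below uses illustrative values for the mean and spread of the annoyance distribution, not the fitted exposure-response coefficients from the study; only the cutoffs (72, 50, 28) come from the abstract.

```python
from math import erf, sqrt

def pct_above(cutoff, mean, sd):
    """Percentage of a Normal(mean, sd) annoyance distribution lying above
    a cutoff on the 0-100 annoyance scale (normal survival function)."""
    z = (cutoff - mean) / sd
    return 100.0 * 0.5 * (1.0 - erf(z / sqrt(2.0)))

# Illustrative (not fitted) distribution: mean annoyance 45, spread 25.
pct_HA = pct_above(72, mean=45.0, sd=25.0)  # "highly annoyed", cutoff 72
pct_A  = pct_above(50, mean=45.0, sd=25.0)  # "annoyed", cutoff 50
pct_LA = pct_above(28, mean=45.0, sd=25.0)  # "a little annoyed", cutoff 28
```

Because the cutoffs are nested, any fixed annoyance distribution necessarily yields %HA < %A < %LA, which is why modeling the whole distribution lets any of these summary measures be derived from one fit.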
Conny, J M; Norris, G A; Gould, T R
2009-03-09
Thermal-optical transmission (TOT) analysis measures black carbon (BC) in atmospheric aerosol on a fibrous filter. The method pyrolyzes organic carbon (OC) and employs laser light absorption to distinguish BC from the pyrolyzed OC; however, the instrument does not necessarily separate the two physically. In addition, a comprehensive temperature protocol for the analysis based on the Beer-Lambert Law remains elusive. Here, empirical response-surface modeling was used to show how the temperature protocol in TOT analysis can be modified to distinguish pyrolyzed OC from BC based on the Beer-Lambert Law. We determined the apparent specific absorption cross sections for pyrolyzed OC (sigma(Char)) and BC (sigma(BC)), which accounted for individual absorption enhancement effects within the filter. Response-surface models of these cross sections were derived from a three-factor central-composite factorial experimental design: temperature and duration of the high-temperature step in the helium phase, and the heating increase in the helium-oxygen phase. The response surface for sigma(BC), which varied with instrument conditions, revealed a ridge indicating the correct conditions for OC pyrolysis in helium. The intersection of the sigma(BC) and sigma(Char) surfaces indicated the conditions where the cross sections were equivalent, satisfying an important assumption upon which the method relies. 95% confidence interval surfaces defined a confidence region for a range of pyrolysis conditions. Analyses of wintertime samples from Seattle, WA revealed a temperature between 830 degrees C and 850 degrees C as most suitable for the helium high-temperature step lasting 150 s. However, a temperature as low as 750 degrees C could not be rejected statistically.
Destination memory accuracy and confidence in younger and older adults.
Johnson, Tara L; Jefferson, Susan C
2018-01-01
Background/Study Context: Nascent research on destination memory (remembering to whom we tell particular information) suggested that older adults have deficits in destination memory and are more confident in inaccurate responses than younger adults. This study assessed the effects of age, attentional resources, and mental imagery on destination memory accuracy and confidence in younger and older adults. Using a computer format, participants told facts to pictures of famous people in one of four conditions (control, self-focus, refocus, imagery). Older adults had lower destination memory accuracy than younger adults, driven by a higher level of false alarms. Whereas younger adults were more confident in accurate answers, older adults were more confident in inaccurate answers. Accuracy across participants was lowest when attention was directed internally but improved significantly when mental imagery was used. Importantly, the age-related differences in false alarms and high-confidence inaccurate answers disappeared when imagery was used. Older adults are more likely than younger adults to commit destination memory errors and are less accurate in related confidence judgments. Furthermore, the use of associative memory strategies may help improve destination memory across age groups, improve the accuracy of confidence judgments in older adults, and decrease age-related destination memory impairment, particularly in young-old adults.
Confidence bounds for normal and lognormal distribution coefficients of variation
Steve Verrill
2003-01-01
This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...
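As a companion to the abstract above: one generic way to interval-estimate a coefficient of variation, distinct from both the exact approach and the approximations the paper compares, is a percentile bootstrap. The sketch below is a minimal illustration under that assumption, on synthetic lognormal data.

```python
import numpy as np

def bootstrap_cv_ci(x, n_boot=4000, alpha=0.05, seed=1):
    """Percentile-bootstrap confidence interval for the coefficient of
    variation (sd/mean) of a positive-valued sample."""
    x = np.asarray(x, float)
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(x), size=(n_boot, len(x)))  # resample indices
    samples = x[idx]
    cvs = samples.std(axis=1, ddof=1) / samples.mean(axis=1)
    return np.quantile(cvs, alpha / 2), np.quantile(cvs, 1 - alpha / 2)

# Synthetic positive data whose true CV is about 0.20.
rng = np.random.default_rng(0)
x = rng.lognormal(mean=3.0, sigma=0.2, size=50)
cv_hat = x.std(ddof=1) / x.mean()
lo, hi = bootstrap_cv_ci(x)
```

The paper's finding that approximate intervals degrade for large CVs and small samples is exactly the regime where a resampling interval like this one also becomes noisy, so the choice of method matters most there.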
Tekin, Eylul; Roediger, Henry L
2017-01-01
Researchers use a wide range of confidence scales when measuring the relationship between confidence and accuracy in reports from memory, with the highest number usually representing the greatest confidence (e.g., 4-point, 20-point, and 100-point scales). The assumption seems to be that the range of the scale has little bearing on the confidence-accuracy relationship. In two old/new recognition experiments, we directly investigated this assumption using word lists (Experiment 1) and faces (Experiment 2) by employing 4-, 5-, 20-, and 100-point scales. Using confidence-accuracy characteristic (CAC) plots, we asked whether confidence ratings would yield similar CAC plots, indicating comparability in use of the scales. For the comparisons, we divided 100-point and 20-point scales into bins of either four or five and asked, for example, whether confidence ratings of 4, 16-20, and 76-100 would yield similar values. The results show that, for both types of material, the different scales yield similar CAC plots. Notably, when subjects express high confidence, regardless of which scale they use, they are likely to be very accurate (even though they studied 100 words and 50 faces in each list in 2 experiments). The scales seem convertible from one to the other, and choice of scale range probably does not affect research into the relationship between confidence and accuracy. High confidence indicates high accuracy in recognition in the present experiments.
New Approaches to Robust Confidence Intervals for Location: A Simulation Study.
1984-06-01
obtain a denominator for the test statistic. Those statistics based on location estimates derived from Hampel’s redescending influence function or v...defined an influence function for a test in terms of the behavior of its P-values when the data are sampled from a model distribution modified by point...proposal could be used for interval estimation as well as hypothesis testing, the extension is immediate. Once an influence function has been defined
Concept analysis: confidence/self-confidence.
Perry, Patricia
2011-01-01
Confidence and self-confidence are crucial elements in nursing education and practice. Nurse educators should have an understanding of the concept of confidence in order to support nursing students in their learning of technical and nontechnical skills. With the aim of facilitating trusted care of patients in the healthcare setting, nursing professionals must exhibit confidence, and, as such, clarification and analysis of its meaning is necessary. The purpose of this analysis is to provide clarity to the meaning of the concept confidence/self-confidence, while gaining a more comprehensive understanding of its attributes, antecedents, and consequences. Walker and Avant's eight-step method of concept analysis was utilized as the framework of the analysis process, with model, contrary, borderline, and related cases presented along with identified attributes, antecedents, consequences, and empirical referents. By understanding both the individualized development of confidence among prelicensure nursing students and the role of the nurse educator in the development of confident nursing practice, nurse educators can assist students in the development of confidence and competency. Future research surrounding the nature and development of confidence/self-confidence in prelicensure nursing students experiencing human patient simulation sessions would help educators further promote the development of confidence. © 2011 Wiley Periodicals, Inc.
Confidence limit calculation for antidotal potency ratio derived from lethal dose 50
Manage, Ananda; Petrikovics, Ilona
2013-01-01
AIM: To describe confidence interval calculation for antidotal potency ratios using the bootstrap method. METHODS: The nonparametric bootstrap method, introduced by Efron, can easily be adapted to construct confidence intervals in situations like this. The bootstrap is a resampling method in which bootstrap samples are obtained by resampling from the original sample. RESULTS: The described confidence interval calculation using the bootstrap method does not require the sampling distribution of the antidotal potency ratio. This can be a substantial help for toxicologists, who are directed to employ the Dixon up-and-down method, with its lower number of animals, to determine lethal dose 50 values for characterizing the investigated toxic molecules and, eventually, the antidotal protection afforded by the test antidotal systems. CONCLUSION: The described method can serve as a useful tool in various other applications. The simplicity of the method makes the calculation easy to perform with most programming software packages. PMID:25237618
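A percentile-bootstrap interval for a potency ratio can be sketched as below. This is a simplified stand-in for illustration only: it treats each LD50 as the sample median of observed tolerance doses rather than as a Dixon up-and-down estimate, and the dose data are synthetic.

```python
import numpy as np

def apr_bootstrap_ci(treated, control, n_boot=4000, alpha=0.05, seed=2):
    """Percentile-bootstrap CI for an antidotal potency ratio, here taken
    as median(treated tolerances) / median(control tolerances)."""
    treated = np.asarray(treated, float)
    control = np.asarray(control, float)
    rng = np.random.default_rng(seed)
    ratios = np.empty(n_boot)
    for b in range(n_boot):
        t = rng.choice(treated, size=len(treated), replace=True)
        c = rng.choice(control, size=len(control), replace=True)
        ratios[b] = np.median(t) / np.median(c)
    return np.quantile(ratios, alpha / 2), np.quantile(ratios, 1 - alpha / 2)

# Synthetic tolerance doses: the antidote group tolerates higher doses.
rng = np.random.default_rng(0)
control = rng.lognormal(np.log(10.0), 0.3, size=12)  # LD50 near 10 units
treated = rng.lognormal(np.log(25.0), 0.3, size=12)  # antidote raises LD50
ratio_hat = np.median(treated) / np.median(control)
lo, hi = apr_bootstrap_ci(treated, control)
```

As the abstract notes, no sampling distribution for the ratio has to be assumed; the interval comes entirely from the empirical spread of the resampled ratios.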
Five-year risk of interval-invasive second breast cancer.
Lee, Janie M; Buist, Diana S M; Houssami, Nehmat; Dowling, Emily C; Halpern, Elkan F; Gazelle, G Scott; Lehman, Constance D; Henderson, Louise M; Hubbard, Rebecca A
2015-07-01
Earlier detection of second breast cancers after primary breast cancer (PBC) treatment improves survival, yet mammography is less accurate in women with prior breast cancer. The purpose of this study was to examine women presenting clinically with second breast cancers after negative surveillance mammography (interval cancers), and to estimate the five-year risk of interval-invasive second cancers for women with varying risk profiles. We evaluated a prospective cohort of 15 114 women with 47 717 surveillance mammograms diagnosed with stage 0-II unilateral PBC from 1996 through 2008 at facilities in the Breast Cancer Surveillance Consortium. We used discrete time survival models to estimate the association between odds of an interval-invasive second breast cancer and candidate predictors, including demographic, PBC, and imaging characteristics. All statistical tests were two-sided. The cumulative incidence of second breast cancers after five years was 54.4 per 1000 women, with 325 surveillance-detected and 138 interval-invasive second breast cancers. The five-year risk of interval-invasive second cancer for women with referent category characteristics was 0.60%. For women with the most and least favorable profiles, the five-year risk ranged from 0.07% to 6.11%. Multivariable modeling identified grade II PBC (odds ratio [OR] = 1.95, 95% confidence interval [CI] = 1.15 to 3.31), treatment with lumpectomy without radiation (OR = 3.27, 95% CI = 1.91 to 5.62), interval PBC presentation (OR = 2.01, 95% CI 1.28 to 3.16), and heterogeneously dense breasts on mammography (OR = 1.54, 95% CI = 1.01 to 2.36) as independent predictors of interval-invasive second breast cancers. PBC diagnosis and treatment characteristics contribute to variation in subsequent-interval second breast cancer risk. Consideration of these factors may be useful in developing tailored post-treatment imaging surveillance plans. © The Author 2015. Published by Oxford University Press. All rights reserved
Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc
2015-01-01
In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the ‘P’ value, explain the importance of ‘confidence intervals’, and clarify the importance of including both values in a paper. PMID:25878958
Variation in polyp size estimation among endoscopists and impact on surveillance intervals.
Chaptini, Louis; Chaaya, Adib; Depalma, Fedele; Hunter, Krystal; Peikin, Steven; Laine, Loren
2014-10-01
Accurate estimation of polyp size is important because it is used to determine the surveillance interval after polypectomy. Objective: To evaluate the variation and accuracy in polyp size estimation among endoscopists and the impact on surveillance intervals after polypectomy. Design: Web-based survey. Participants: A total of 873 members of the American Society for Gastrointestinal Endoscopy. Participants watched video recordings of 4 polypectomies and were asked to estimate the polyp sizes. Main outcome measures: Proportion of participants with polyp size estimates within 20% of the correct measurement and the frequency of incorrect surveillance intervals based on inaccurate size estimates. Results: Polyp size estimates were within 20% of the correct value for 1362 (48%) of 2812 estimates (range 39%-59% for the 4 polyps). Polyp size was overestimated by >20% in 889 estimates (32%, range 15%-49%) and underestimated by >20% in 561 (20%, range 4%-46%) estimates. Incorrect surveillance intervals because of overestimation or underestimation occurred in 272 (10%) of the 2812 estimates (range 5%-14%). Participants in a private practice setting overestimated the size of 3 or all 4 polyps by >20% more often than participants in an academic setting (difference = 7%; 95% confidence interval, 1%-11%). Limitations: Survey design with the use of video clips. Conclusions: Substantial overestimation and underestimation of polyp size occurs with visual estimation, leading to incorrect surveillance intervals in 10% of cases. Our findings support routine use of measurement tools to improve polyp size estimates. Copyright © 2014 American Society for Gastrointestinal Endoscopy. Published by Elsevier Inc. All rights reserved.
The Relationship Between Eyewitness Confidence and Identification Accuracy: A New Synthesis.
Wixted, John T; Wells, Gary L
2017-05-01
The U.S. legal system increasingly accepts the idea that the confidence expressed by an eyewitness who identified a suspect from a lineup provides little information as to the accuracy of that identification. There was a time when this pessimistic assessment was entirely reasonable because of the questionable eyewitness-identification procedures that police commonly employed. However, after more than 30 years of eyewitness-identification research, our understanding of how to properly conduct a lineup has evolved considerably, and the time seems ripe to ask how eyewitness confidence informs accuracy under more pristine testing conditions (e.g., initial, uncontaminated memory tests using fair lineups, with no lineup administrator influence, and with an immediate confidence statement). Under those conditions, mock-crime studies and police department field studies have consistently shown that, for adults, (a) confidence and accuracy are strongly related and (b) high-confidence suspect identifications are remarkably accurate. However, when certain non-pristine testing conditions prevail (e.g., when unfair lineups are used), the accuracy of even a high-confidence suspect ID is seriously compromised. Unfortunately, some jurisdictions have not yet made reforms that would create pristine testing conditions and, hence, our conclusions about the reliability of high-confidence identifications cannot yet be applied to those jurisdictions. However, understanding the information value of eyewitness confidence under pristine testing conditions can help the criminal justice system to simultaneously achieve both of its main objectives: to exonerate the innocent (by better appreciating that initial, low-confidence suspect identifications are error prone) and to convict the guilty (by better appreciating that initial, high-confidence suspect identifications are surprisingly accurate under proper testing conditions).
Crutzen, Rik; Peters, Gjalt-Jorn Ygram; Noijen, Judith
2017-01-01
When developing an intervention aimed at behavior change, one of the crucial steps in the development process is to select the most relevant social-cognitive determinants. These determinants can be seen as the buttons one needs to push to establish behavior change. Insight into these determinants is needed to select behavior change methods (i.e., general behavior change techniques that are applied in an intervention) in the development process. Therefore, a study on determinants is often conducted as formative research in the intervention development process. Ideally, all relevant determinants identified in such a study are addressed by an intervention. However, when developing a behavior change intervention, there are limits in terms of, for example, resources available for intervention development and the amount of content that participants of an intervention can be exposed to. Hence, it is important to select those determinants that are most relevant to the target behavior as these determinants should be addressed in an intervention. The aim of the current paper is to introduce a novel approach to select the most relevant social-cognitive determinants and use them in intervention development. This approach is based on visualization of confidence intervals for the means and correlation coefficients for all determinants simultaneously. This visualization facilitates comparison, which is necessary when making selections. By means of a case study on the determinants of using a high dose of 3,4-methylenedioxymethamphetamine (commonly known as ecstasy), we illustrate this approach. We provide a freely available tool to facilitate the analyses needed in this approach.
ASTRAL, DRAGON and SEDAN scores predict stroke outcome more accurately than physicians.
Ntaios, G; Gioulekas, F; Papavasileiou, V; Strbian, D; Michel, P
2016-11-01
ASTRAL, SEDAN and DRAGON scores are three well-validated scores for stroke outcome prediction. We investigated whether these scores predict stroke outcome more accurately than physicians interested in stroke. Physicians interested in stroke were invited to an online anonymous survey to provide outcome estimates in randomly allocated structured scenarios of recent real-life stroke patients. Their estimates were compared to the scores' predictions in the same scenarios. An estimate was considered accurate if it was within the 95% confidence interval of the actual outcome. In all, 244 participants from 32 different countries responded, assessing 720 real scenarios and 2636 outcomes. The majority of physicians' estimates were inaccurate (1422/2636, 53.9%). 400 (56.8%) of physicians' estimates about the percentage probability of 3-month modified Rankin score (mRS) > 2 were accurate compared with 609 (86.5%) of ASTRAL score estimates (P < 0.0001). 394 (61.2%) of physicians' estimates about the percentage probability of post-thrombolysis symptomatic intracranial haemorrhage were accurate compared with 583 (90.5%) of SEDAN score estimates (P < 0.0001). 160 (24.8%) of physicians' estimates about the post-thrombolysis 3-month percentage probability of mRS 0-2 were accurate compared with 240 (37.3%) of DRAGON score estimates (P < 0.0001). 260 (40.4%) of physicians' estimates about the percentage probability of post-thrombolysis mRS 5-6 were accurate compared with 518 (80.4%) of DRAGON score estimates (P < 0.0001). ASTRAL, DRAGON and SEDAN scores predict the outcome of acute ischaemic stroke patients with higher accuracy than physicians interested in stroke. © 2016 EAN.
TIME-INTERVAL MEASURING DEVICE
Gross, J.E.
1958-04-15
An electronic device for measuring the time interval between two control pulses is presented. The device incorporates part of a previous approach to time measurement, in that pulses from a constant-frequency oscillator are counted during the interval between the control pulses. To reduce the possible counting error caused by the counter gating circuit operating at various points in the pulse cycle, the described device provides means for successively delaying the pulses by a fraction of the pulse period, so that a final delay of one period is obtained, and means for counting the pulses before and after each stage of delay during the time interval. A plurality of totals is thereby obtained, which may be averaged and multiplied by the pulse period to obtain an accurate time-interval measurement.
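The averaging scheme described in this patent abstract can be simulated numerically. The sketch below is an idealized model (instantaneous gating, no jitter, assumed parameter values): a single gated count can be off by up to one pulse period depending on gate phase, while averaging counts taken at successive sub-period delays recovers the interval to within one delay step.

```python
def count_pulses(interval, period, phase):
    """Number of oscillator pulse edges inside a gate of length `interval`,
    when the first edge occurs `phase` seconds after the gate opens."""
    if phase >= interval:
        return 0
    return int((interval - phase) // period) + 1

def averaged_interval(interval, period, phase, n_delays):
    """Average the counts obtained with n successive delays of period/n,
    then multiply by the pulse period (the patent's averaging scheme)."""
    counts = [count_pulses(interval, period,
                           (phase + k * period / n_delays) % period)
              for k in range(n_delays)]
    return period * sum(counts) / n_delays

period = 1e-6        # 1 MHz oscillator, assumed for illustration
interval = 37.35e-6  # true interval to be measured
single = period * count_pulses(interval, period, 0.4e-6)    # one-shot count
averaged = averaged_interval(interval, period, 0.4e-6, 10)  # 10-step average
```

With these numbers the one-shot measurement quantizes to 37.0 microseconds, while the ten-delay average lands within a tenth of a period of the true 37.35 microseconds.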
Overconfidence in Interval Estimates: What Does Expertise Buy You?
ERIC Educational Resources Information Center
McKenzie, Craig R. M.; Liersch, Michael J.; Yaniv, Ilan
2008-01-01
People's 90% subjective confidence intervals typically contain the true value about 50% of the time, indicating extreme overconfidence. Previous results have been mixed regarding whether experts are as overconfident as novices. Experiment 1 examined interval estimates from information technology (IT) professionals and UC San Diego (UCSD) students…
The Sense of Confidence during Probabilistic Learning: A Normative Account.
Meyniel, Florent; Schlunegger, Daniel; Dehaene, Stanislas
2015-06-01
Learning in a stochastic environment consists of estimating a model from a limited amount of noisy data, and is therefore inherently uncertain. However, many classical models reduce the learning process to the updating of parameter estimates and neglect the fact that learning is also frequently accompanied by a variable "feeling of knowing" or confidence. The characteristics and the origin of these subjective confidence estimates thus remain largely unknown. Here we investigate whether, during learning, humans not only infer a model of their environment, but also derive an accurate sense of confidence from their inferences. In our experiment, humans estimated the transition probabilities between two visual or auditory stimuli in a changing environment, and reported their mean estimate and their confidence in this report. To formalize the link between both kinds of estimate and assess their accuracy in comparison to a normative reference, we derive the optimal inference strategy for our task. Our results indicate that subjects accurately track the likelihood that their inferences are correct. Learning and estimating confidence in what has been learned appear to be two intimately related abilities, suggesting that they arise from a single inference process. We show that human performance matches several properties of the optimal probabilistic inference. In particular, subjective confidence is impacted by environmental uncertainty, both at the first level (uncertainty in stimulus occurrence given the inferred stochastic characteristics) and at the second level (uncertainty due to unexpected changes in these stochastic characteristics). Confidence also increases appropriately with the number of observations within stable periods. Our results support the idea that humans possess a quantitative sense of confidence in their inferences about abstract non-sensory parameters of the environment. This ability cannot be reduced to simple heuristics, it seems instead a core
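A schematic, stationary-environment version of the inference described above can be written with a Beta posterior over a transition probability, using the posterior spread as an inverse measure of confidence. This is a simplified sketch with assumed counts, not the authors' full model, which additionally tracks unexpected changes in the environment's statistics.

```python
from math import sqrt

def beta_posterior(n_trans, n_other, a=1.0, b=1.0):
    """Posterior mean and sd over a transition probability p(stim2 | stim1)
    after observing n_trans such transitions out of n_trans + n_other
    opportunities, starting from a Beta(a, b) prior."""
    a_post = a + n_trans
    b_post = b + n_other
    total = a_post + b_post
    mean = a_post / total
    var = a_post * b_post / (total ** 2 * (total + 1))
    return mean, sqrt(var)

m5, sd5 = beta_posterior(3, 2)      # after 5 observations
m50, sd50 = beta_posterior(30, 20)  # after 50 observations, same rate
```

The posterior mean barely moves between the two cases, but its spread shrinks, mirroring the abstract's point that confidence should increase with the number of observations within stable periods.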
García-Pérez, Miguel A.; Alcalá-Quintana, Rocío
2016-01-01
Hoekstra et al. (Psychonomic Bulletin & Review, 2014, 21:1157–1164) surveyed the interpretation of confidence intervals (CIs) by first-year students, master students, and researchers with six items expressing misinterpretations of CIs. They asked respondents to answer all items, computed the number of items endorsed, and concluded that misinterpretation of CIs is robust across groups. Their design may have produced this outcome artifactually, for reasons that we describe. This paper first discusses the two interpretations of CIs and, hence, why misinterpretation cannot be inferred from endorsement of some of the items. Next, a re-analysis of Hoekstra et al.'s data reveals some puzzling differences between first-year and master students that demand further investigation. For that purpose, we designed a replication study with an extended questionnaire including two additional items that express correct interpretations of CIs (to compare endorsement of correct vs. nominally incorrect interpretations), and we asked master students to indicate which items they would have omitted had they had the option (to distinguish deliberate from uninformed endorsement caused by the forced-response format). Results showed that incognizant first-year students endorsed correct and nominally incorrect items identically, revealing that the two item types are not differentially attractive superficially; in contrast, master students were distinctively more prone to endorsing correct items when their uninformed responses were removed, although they admitted to nescience more often than might have been expected. Implications for teaching practices are discussed. PMID:27458424
Istaces, Nicolas; Gulbis, Béatrice
2015-07-01
Personalized ranges of liver fibrosis serum biomarkers such as FibroTest or hyaluronic acid could be used for early detection of fibrotic changes in patients with progressive chronic liver disease. Our aim was to generate reliable biological variation estimates for these two biomarkers with confidence intervals for within-subject biological variation and reference change value. Nine fasting healthy volunteers and 66 chronic liver disease patients were included. Biological variation estimates were calculated for FibroTest in healthy volunteers, and for hyaluronic acid in healthy volunteers and chronic liver disease patients stratified by etiology and liver fibrosis stage. In healthy volunteers, within-subject biological coefficient of variation (with 95% confidence intervals) and index of individuality were 20% (16%-28%) and 0.6 for FibroTest and 34% (27%-47%) and 0.79 for hyaluronic acid, respectively. Overall hyaluronic acid within-subject biological coefficient of variation was similar among non-alcoholic fatty liver disease and chronic hepatitis C with 41% (34%-52%) and 45% (39%-55%), respectively, in contrast to chronic hepatitis B with 170% (140%-215%). Hyaluronic acid within-subject biological coefficients of variation were similar between F0-F1, F2 and F3 liver fibrosis stages in non-alcoholic fatty liver disease with 34% (25%-49%), 41% (31%-59%) and 34% (23%-62%), respectively, and in chronic hepatitis C with 34% (27%-47%), 33% (26%-45%) and 38% (27%-65%), respectively. However, corresponding hyaluronic acid indexes of individuality were lower in the higher fibrosis stages. Non-overlapping confidence intervals of biological variation estimates allowed us to detect significant differences regarding hyaluronic acid biological variation between chronic liver disease subgroups. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
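The reference change value mentioned in this abstract is conventionally computed as RCV = sqrt(2) · z · sqrt(CVa^2 + CVi^2), combining analytical (CVa) and within-subject biological (CVi) variation. The sketch below uses the abstract's CVi of 45% for hyaluronic acid in chronic hepatitis C; the analytical CV of 5% is an assumed illustrative value, not taken from the study.

```python
from math import sqrt

def reference_change_value(cv_analytical, cv_within, z=1.96):
    """Reference change value (%): the minimum percentage difference between
    two serial results that exceeds the combined analytical and within-subject
    biological variation at the given z (1.96 for a 95% two-sided level)."""
    return sqrt(2.0) * z * sqrt(cv_analytical ** 2 + cv_within ** 2)

# CVi = 45% from the abstract; CVa = 5% is an assumed value.
rcv = reference_change_value(cv_analytical=5.0, cv_within=45.0)
```

With a within-subject CV this large, two serial hyaluronic acid results must differ by well over 100% before the change can be called significant, which is why reliable CVi estimates with confidence intervals matter for serial monitoring.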
Doubly Bayesian Analysis of Confidence in Perceptual Decision-Making.
Aitchison, Laurence; Bang, Dan; Bahrami, Bahador; Latham, Peter E
2015-10-01
Humans stand out from other animals in that they are able to explicitly report on the reliability of their internal operations. This ability, which is known as metacognition, is typically studied by asking people to report their confidence in the correctness of some decision. However, the computations underlying confidence reports remain unclear. In this paper, we present a fully Bayesian method for directly comparing models of confidence. Using a visual two-interval forced-choice task, we tested whether confidence reports reflect heuristic computations (e.g. the magnitude of sensory data) or Bayes optimal ones (i.e. how likely a decision is to be correct given the sensory data). In a standard design in which subjects were first asked to make a decision, and only then gave their confidence, subjects were mostly Bayes optimal. In contrast, in a less-commonly used design in which subjects indicated their confidence and decision simultaneously, they were roughly equally likely to use the Bayes optimal strategy or to use a heuristic but suboptimal strategy. Our results suggest that, while people's confidence reports can reflect Bayes optimal computations, even a small unusual twist or additional element of complexity can prevent optimality.
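The Bayes optimal confidence the abstract refers to, the probability that a decision is correct given the sensory data, can be sketched for a two-interval task with equal-variance Gaussian noise. The signal mean and noise level below are illustrative assumptions, not parameters from the study.

```python
import math

def bayes_confidence(x1: float, x2: float, mu: float = 1.0,
                     sigma: float = 1.0) -> tuple[int, float]:
    """Bayes-optimal decision and confidence for a 2IFC task.

    The signal adds mean `mu` to one interval; both observations carry
    Gaussian noise with sd `sigma`, and the two intervals are equally
    likely a priori.  Confidence is the posterior probability that the
    chosen interval contained the signal."""
    d = mu * (x1 - x2) / sigma ** 2        # log posterior odds for interval 1
    p1 = 1.0 / (1.0 + math.exp(-d))
    return (1, p1) if p1 >= 0.5 else (2, 1.0 - p1)

choice, conf = bayes_confidence(1.2, -0.3)
print(choice, round(conf, 3))  # → 1 0.818
```

A heuristic observer might instead report confidence from the magnitude of the chosen sample alone; comparing the two rules on simulated trials is exactly the kind of model comparison the paper formalizes.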
Brown, Angus M
2010-04-01
The objective of the method described in this paper is to develop a spreadsheet template for comparing multiple sample means. An initial analysis of variance (ANOVA) test on the data returns the test statistic F. If F is larger than the critical F value drawn from the F distribution at the appropriate degrees of freedom, convention dictates rejection of the null hypothesis and allows subsequent multiple comparison testing to determine where the inequalities between the sample means lie. A variety of multiple comparison methods are described that return the 95% confidence intervals for differences between means using an inclusive pairwise comparison of the sample means. 2009 Elsevier Ireland Ltd. All rights reserved.
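The ANOVA-then-pairwise-CI workflow described above can be sketched in plain Python rather than a spreadsheet. The data are invented, the hard-coded critical t of 2.447 (two-sided 95%, 6 within-group df) is an assumption that must match the design, and the intervals are unadjusted (Fisher LSD style) rather than one of the paper's specific multiple-comparison corrections.

```python
from itertools import combinations
from statistics import mean

def one_way_anova(*groups):
    """Return (F, df_between, df_within, ms_within) for a one-way ANOVA."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w, ss_within / df_w

def pairwise_ci(groups, ms_within, t_crit):
    """Unadjusted 95% CIs for all pairwise differences between group means;
    t_crit must be the two-sided critical t for the within-group df."""
    out = {}
    for (i, a), (j, b) in combinations(enumerate(groups), 2):
        diff = mean(a) - mean(b)
        se = (ms_within * (1 / len(a) + 1 / len(b))) ** 0.5
        out[(i, j)] = (diff - t_crit * se, diff + t_crit * se)
    return out

g1, g2, g3 = [4, 5, 6], [7, 8, 9], [4, 6, 5]
F, df_b, df_w, msw = one_way_anova(g1, g2, g3)
print(round(F, 1))                                   # → 9.0
print(pairwise_ci([g1, g2, g3], msw, t_crit=2.447))  # CIs excluding 0 flag a difference
```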
Cortical reinstatement and the confidence and accuracy of source memory.
Thakral, Preston P; Wang, Tracy H; Rugg, Michael D
2015-04-01
Cortical reinstatement refers to the overlap between neural activity elicited during the encoding and the subsequent retrieval of an episode, and is held to reflect retrieved mnemonic content. Previous findings have demonstrated that reinstatement effects reflect the quality of retrieved episodic information as this is operationalized by the accuracy of source memory judgments. The present functional magnetic resonance imaging (fMRI) study investigated whether reinstatement-related activity also co-varies with the confidence of accurate source judgments. Participants studied pictures of objects along with their visual or spoken names. At test, they first discriminated between studied and unstudied pictures and then, for each picture judged as studied, they also judged whether it had been paired with a visual or auditory name, using a three-point confidence scale. Accuracy of source memory judgments, and hence the quality of the source-specifying information, was greater for high than for low confidence judgments. Modality-selective retrieval-related activity (reinstatement effects) also co-varied with the confidence of the corresponding source memory judgment. The findings indicate that the quality of the information supporting accurate judgments of source memory is indexed by the relative magnitude of content-selective, retrieval-related neural activity. Copyright © 2015 Elsevier Inc. All rights reserved.
Is photometry an accurate and reliable method to assess boar semen concentration?
Camus, A; Camugli, S; Lévêque, C; Schmitt, E; Staub, C
2011-02-01
Sperm concentration assessment is a key point to ensure an appropriate sperm number per dose in species subjected to artificial insemination (AI). The aim of the present study was to evaluate the accuracy and reliability of two commercially available photometers, AccuCell™ and AccuRead™, pre-calibrated for boar semen, in comparison to UltiMate™ boar version 12.3D, NucleoCounter SP100 and the Thoma hemacytometer. For each type of instrument, concentration was measured on 34 boar semen samples in quadruplicate, and agreement between measurements and instruments was evaluated. Accuracy for both photometers was illustrated by the mean percentage difference from the general mean: -0.6% and 0.5% for AccuCell™ and AccuRead™, respectively; no significant differences were found between instruments or between measurements across all equipment. Repeatability for both photometers was 1.8% and 3.2% for AccuCell™ and AccuRead™, respectively. Small differences were observed between instruments (confidence interval 3%) except when the hemacytometer was used as a reference. Even though the hemacytometer is considered worldwide as the gold standard, it was the most variable instrument (confidence interval 7.1%). The conclusion is that routine photometry measures of raw semen concentration are reliable, accurate and precise using AccuRead™ or AccuCell™. There are multiple steps in semen processing that can induce sperm loss and therefore increase differences between theoretical and real sperm numbers in doses. Potential biases that depend on the workflow but not on the initial photometric measure of semen concentration are discussed. Copyright © 2011 Elsevier Inc. All rights reserved.
Extended score interval in the assessment of basic surgical skills.
Acosta, Stefan; Sevonius, Dan; Beckman, Anders
2015-01-01
The Basic Surgical Skills course uses an assessment score interval of 0-3. An extended score interval, 1-6, was proposed by the Swedish steering committee of the course. The aim of this study was to analyze the trainee scores in the current 0-3 scored version compared to a proposed 1-6 scored version. Sixteen participants, seven females and nine males, were evaluated in the current and proposed assessment forms by instructors, observers, and learners themselves during the first and second day. In each assessment form, 17 tasks were assessed. The inter-rater reliability between the current and the proposed score sheets was evaluated with the intraclass correlation (ICC) with 95% confidence intervals (CI). The distribution of scores for 'knot tying' at the last time point and 'bowel anastomosis side to side' given by the instructors in the current assessment form showed that the highest score was given in 31% and 62% of cases, respectively. No ceiling effects were found in the proposed assessment form. The overall ICC between the current and proposed score sheets after assessment by the instructors increased from 0.38 (95% CI 0.77-0.78) on Day 1 to 0.83 (95% CI 0.51-0.94) on Day 2. A clear ceiling effect of scores was demonstrated in the current assessment form, questioning its validity. The proposed score sheet provides more accurate scores and seems to be a better feedback instrument for learning technical surgical skills in the Basic Surgical Skills course.
How complete and accurate is meningococcal disease notification?
Breen, E; Ghebrehewet, S; Regan, M; Thomson, A P J
2004-12-01
Effective public health control of meningococcal disease (meningococcal meningitis and septicaemia) is dependent on complete, accurate and speedy notification. Using capture-recapture techniques, this study assesses the completeness, accuracy and timeliness of meningococcal notification in a health authority. The completeness of meningococcal disease notification was 94.8% (95% confidence interval 93.2% to 96.2%); 91.2% of cases in 2001 were notified within 24 hours of diagnosis, but 28.0% of notifications in 2001 were false positives. Clinical staff need to be aware of the public health implications of a notification of meningococcal disease, and of failure of, or delay in, notification. Incomplete or delayed notification not only leads to inaccurate data collection but also means that important public health measures may not be taken. A clinical diagnosis of meningococcal disease should be carefully discussed between the clinician and the consultant in communicable disease control (CCDC). Otherwise, prophylaxis may be given unnecessarily, disease incidence inflated, and the benefits of control measures underestimated. Consultants in communicable disease control (CCDCs), in conjunction with clinical staff, should de-notify meningococcal disease if the diagnosis changes.
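Two-source capture-recapture completeness estimation of the kind used above can be sketched with the Chapman estimator. The notification and laboratory counts below are hypothetical illustrations, not the study's data.

```python
def chapman_estimate(n1: int, n2: int, m: int) -> float:
    """Chapman's nearly unbiased two-source capture-recapture estimate of
    total cases, from source sizes n1 and n2 and their overlap m."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical counts: 95 notified cases, 90 laboratory reports, 87 in both
total = chapman_estimate(95, 90, 87)
completeness = 95 / total   # fraction of estimated true cases notified
print(round(total, 1), round(completeness, 3))  # → 98.3 0.967
```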
Feng, Dai; Svetnik, Vladimir; Coimbra, Alexandre; Baumgartner, Richard
2014-01-01
The intraclass correlation coefficient (ICC) with fixed raters or, equivalently, the concordance correlation coefficient (CCC) for continuous outcomes is a widely accepted aggregate index of agreement in settings with a small number of raters. Quantifying the precision of the CCC by constructing its confidence interval (CI) is important in early drug development applications, in particular in the qualification of biomarker platforms. In recent years, several new methods have been proposed for the construction of CIs for the CCC, but a comprehensive comparison has not been attempted. The methods comprised the delta method, jackknifing with and without Fisher's Z-transformation, and Bayesian methods with vague priors. In this study, we carried out a simulation study, with data simulated from a multivariate normal as well as a heavier-tailed distribution (t-distribution with 5 degrees of freedom), to compare the state-of-the-art methods for assigning a CI to the CCC. When the data are normally distributed, jackknifing with Fisher's Z-transformation (JZ) tended to provide superior coverage, and the difference between it and the closest competitor, the Bayesian method with the Jeffreys prior, was in general minimal. For the nonnormal data, the jackknife methods, especially the JZ method, provided the coverage probabilities closest to the nominal level, in contrast to the others, which yielded overly liberal coverage. Approaches based upon the delta method and the Bayesian method with conjugate prior generally provided slightly narrower intervals and larger lower bounds than the others, though this was offset by their poor coverage. Finally, we illustrated the utility of the CIs for the CCC in an example of a wake after sleep onset (WASO) biomarker, which is frequently used in clinical sleep studies of drugs for the treatment of insomnia.
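The JZ method favoured above, jackknifing the CCC on Fisher's z scale and back-transforming, can be sketched as follows. The paired measurements are made-up illustrative data, and the 1.96 critical value assumes a normal approximation for the pseudo-values.

```python
import math
from statistics import mean

def ccc(x, y):
    """Lin's concordance correlation coefficient."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    sx2 = sum((a - mx) ** 2 for a in x) / len(x)
    sy2 = sum((b - my) ** 2 for b in y) / len(y)
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

def ccc_jackknife_z_ci(x, y, z_crit=1.96):
    """Jackknife CI for the CCC computed on Fisher's z scale (the JZ
    method), then back-transformed with tanh."""
    n = len(x)
    z_full = math.atanh(ccc(x, y))
    pseudo = [n * z_full - (n - 1) * math.atanh(
        ccc(x[:i] + x[i + 1:], y[:i] + y[i + 1:])) for i in range(n)]
    z_bar = mean(pseudo)
    se = (sum((p - z_bar) ** 2 for p in pseudo) / (n * (n - 1))) ** 0.5
    return math.tanh(z_bar - z_crit * se), math.tanh(z_bar + z_crit * se)

# Illustrative paired readings from two raters
x = [10.1, 11.2, 9.8, 12.0, 10.6, 11.5, 9.9, 10.8]
y = [10.3, 11.0, 9.6, 12.3, 10.4, 11.8, 10.2, 10.9]
lo, hi = ccc_jackknife_z_ci(x, y)
print(round(ccc(x, y), 3), (round(lo, 3), round(hi, 3)))
```

The z transform keeps the interval inside (-1, 1), which is one reason the JZ variant holds coverage better than the untransformed jackknife near high agreement.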
Reliable prediction intervals with regression neural networks.
Papadopoulos, Harris; Haralambous, Haris
2011-10-01
This paper proposes an extension to conventional regression neural networks (NNs) for replacing the point predictions they produce with prediction intervals that satisfy a required level of confidence. Our approach follows a novel machine learning framework, called Conformal Prediction (CP), for assigning reliable confidence measures to predictions without assuming anything more than that the data are independent and identically distributed (i.i.d.). We evaluate the proposed method on four benchmark datasets and on the problem of predicting Total Electron Content (TEC), which is an important parameter in trans-ionospheric links; for the latter we use a dataset of more than 60000 TEC measurements collected over a period of 11 years. Our experimental results show that the prediction intervals produced by our method are both well calibrated and tight enough to be useful in practice. Copyright © 2011 Elsevier Ltd. All rights reserved.
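The split (inductive) variant of Conformal Prediction can be sketched around any point predictor under the i.i.d. assumption the abstract states. Here a least-squares line stands in for the regression NN, and the synthetic data and 90% confidence level are assumptions for illustration.

```python
import math
import random

def fit_line(xs, ys):
    """Least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def conformal_interval(x_new, train, calib, alpha=0.1):
    """Split conformal prediction interval: the half-width is the
    ceil((1-alpha)(n+1))-th smallest absolute residual on the held-out
    calibration set, giving >= 1 - alpha coverage for i.i.d. data."""
    slope, intercept = fit_line(*train)
    resid = sorted(abs(y - (slope * x + intercept)) for x, y in zip(*calib))
    k = min(len(resid) - 1, math.ceil((1 - alpha) * (len(resid) + 1)) - 1)
    pred = slope * x_new + intercept
    return pred - resid[k], pred + resid[k]

random.seed(0)
xs = [i / 10 for i in range(100)]
ys = [2 * x + 1 + random.gauss(0, 0.2) for x in xs]
train, calib = (xs[:60], ys[:60]), (xs[60:], ys[60:])
lo, hi = conformal_interval(5.0, train, calib, alpha=0.1)
print(round(lo, 2), round(hi, 2))  # a tight interval around the true value 11
```

The calibration quantile is what makes the interval "well calibrated and tight" in the paper's sense: coverage is guaranteed by exchangeability, and width shrinks with the residual scale.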
Assured Human-Autonomy Interaction through Machine Self-Confidence
NASA Astrophysics Data System (ADS)
Aitken, Matthew
Autonomous systems employ many layers of approximations in order to operate in increasingly uncertain and unstructured environments. The complexity of these systems makes it hard for a user to understand the system's capabilities, especially if the user is not an expert. However, if autonomous systems are to be used efficiently, their users must trust them appropriately. The purpose of this work is to implement and assess an 'assurance' that an autonomous system can provide to the user to elicit appropriate trust. Specifically, the autonomous system's perception of its own capabilities is reported to the user as the self-confidence assurance. The self-confidence assurance should allow the user to more quickly and accurately assess the autonomous system's capabilities, generating appropriate trust in the autonomous system. First, this research defines self-confidence and discusses what the self-confidence assurance is attempting to communicate to the user. Then it provides a framework for computing the autonomous system's self-confidence as a function of self-confidence factors which correspond to individual elements in the autonomous system's process. In order to explore this idea, self-confidence is implemented on an autonomous system that uses a mixed observability Markov decision process model to solve a pursuit-evasion problem on a road network. Particular focus is given to the implementation of a factor assessing the goodness of the autonomy's expected performance. This work highlights some of the issues and considerations in the design of appropriate metrics for the self-confidence factors, and provides the basis for future research on computing self-confidence in autonomous systems.
The integrated model of sport confidence: a canonical correlation and mediational analysis.
Koehn, Stefan; Pearce, Alan J; Morris, Tony
2013-12-01
The main purpose of the study was to examine crucial parts of Vealey's (2001) integrated framework hypothesizing that sport confidence is a mediating variable between sources of sport confidence (including achievement, self-regulation, and social climate) and athletes' affect in competition. The sample consisted of 386 athletes, who completed the Sources of Sport Confidence Questionnaire, Trait Sport Confidence Inventory, and Dispositional Flow Scale-2. Canonical correlation analysis revealed a confidence-achievement dimension underlying flow. Bias-corrected bootstrap confidence intervals in AMOS 20.0 were used in examining mediation effects between source domains and dispositional flow. Results showed that sport confidence partially mediated the relationship between achievement and self-regulation domains and flow, whereas no significant mediation was found for social climate. On a subscale level, full mediation models emerged for achievement and flow dimensions of challenge-skills balance, clear goals, and concentration on the task at hand.
Doré, Bruce P; Meksin, Robert; Mather, Mara; Hirst, William; Ochsner, Kevin N
2016-06-01
In the aftermath of a national tragedy, important decisions are predicated on judgments of the emotional significance of the tragedy in the present and future. Research in affective forecasting has largely focused on ways in which people fail to make accurate predictions about the nature and duration of feelings experienced in the aftermath of an event. Here we ask a related but understudied question: can people forecast how they will feel in the future about a tragic event that has already occurred? We found that people were strikingly accurate when predicting how they would feel about the September 11 attacks over 1-, 2-, and 7-year prediction intervals. Although people slightly under- or overestimated their future feelings at times, they nonetheless showed high accuracy in forecasting (a) the overall intensity of their future negative emotion, and (b) the relative degree of different types of negative emotion (i.e., sadness, fear, or anger). Using a path model, we found that the relationship between forecasted and actual future emotion was partially mediated by current emotion and remembered emotion. These results extend theories of affective forecasting by showing that emotional responses to an event of ongoing national significance can be predicted with high accuracy, and by identifying current and remembered feelings as independent sources of this accuracy. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Lyons-Amos, Mark; Padmadas, Sabu S; Durrant, Gabriele B
2014-08-11
To test the contraceptive confidence hypothesis in a modern context. The hypothesis is that women using effective or modern contraceptive methods have increased contraceptive confidence and hence a shorter interval between marriage and first birth than users of ineffective or traditional methods. We extend the hypothesis to incorporate the role of abortion, arguing that it acts as a substitute for contraception in the study context. Moldova, a country in South-East Europe. Moldova exhibits high use of traditional contraceptive methods and abortion compared with other European countries. Data are from a secondary analysis of the 2005 Moldovan Demographic and Health Survey, a nationally representative sample survey. 5377 unmarried women were selected. The outcome measure was the interval between marriage and first birth. This was modelled using a piecewise-constant hazard regression, with abortion and contraceptive method types as primary variables along with relevant sociodemographic controls. Women with high contraceptive confidence (modern method users) have a higher cumulative hazard of first birth 36 months following marriage (0.88 (0.87 to 0.89)) compared with women with low contraceptive confidence (traditional method users, cumulative hazard: 0.85 (0.84 to 0.85)). This is consistent with the contraceptive confidence hypothesis. There is a higher cumulative hazard of first birth among women with low (0.80 (0.79 to 0.80)) and moderate abortion propensities (0.76 (0.75 to 0.77)) than women with no abortion propensity (0.73 (0.72 to 0.74)) 24 months after marriage. Effective contraceptive use tends to increase contraceptive confidence and is associated with a shorter interval between marriage and first birth. Increased use of abortion also tends to increase contraceptive confidence and shorten birth duration, although this effect is non-linear: women with a very high use of abortion tend to have lengthy intervals between marriage and first birth. Published by
Testing 40 Predictions from the Transtheoretical Model Again, with Confidence
ERIC Educational Resources Information Center
Velicer, Wayne F.; Brick, Leslie Ann D.; Fava, Joseph L.; Prochaska, James O.
2013-01-01
Testing Theory-based Quantitative Predictions (TTQP) represents an alternative to traditional Null Hypothesis Significance Testing (NHST) procedures and is more appropriate for theory testing. The theory generates explicit effect size predictions and these effect size estimates, with related confidence intervals, are used to test the predictions.…
Pullin, A N; Pairis-Garcia, M D; Campbell, B J; Campler, M R; Proudfoot, K L
2017-11-01
When considering methodologies for collecting behavioral data, continuous sampling provides the most complete and accurate data set, whereas instantaneous sampling can provide similar results and also increase the efficiency of data collection. However, instantaneous time intervals require validation to ensure accurate estimation of the data. Therefore, the objective of this study was to validate scan sampling intervals for lambs housed in a feedlot environment. Feeding, lying, standing, drinking, locomotion, and oral manipulation were measured on 18 crossbred lambs housed in an indoor feedlot facility for 14 h (0600-2000 h). Data from continuous sampling were compared with data from instantaneous scan sampling intervals of 5, 10, 15, and 20 min using a linear regression analysis. Three criteria determined whether a time interval accurately estimated behaviors: 1) R² ≥ 0.90, 2) slope not statistically different from 1 (P > 0.05), and 3) intercept not statistically different from 0 (P > 0.05). Estimations for lying behavior were accurate up to 20-min intervals, whereas feeding and standing behaviors were accurate only at 5-min intervals (i.e., met all 3 regression criteria). Drinking, locomotion, and oral manipulation demonstrated poor associations for all tested intervals. The results from this study suggest that a 5-min instantaneous sampling interval will accurately estimate lying, feeding, and standing behaviors for lambs housed in a feedlot, whereas continuous sampling is recommended for the remaining behaviors. This methodology will contribute toward the efficiency, accuracy, and transparency of future behavioral data collection in lamb behavior research.
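The interval-validation procedure above, regressing scan-sample estimates on continuous observations and checking R², slope, and intercept, can be sketched as follows. The behavioral counts are invented for illustration; the significance tests on slope and intercept are omitted for brevity.

```python
def regression_validation(continuous, scans):
    """Slope, intercept and R^2 of scan-sample estimates regressed on
    continuous observations, for checking criteria like those in the
    abstract (R^2 >= 0.90, slope ~ 1, intercept ~ 0)."""
    n = len(continuous)
    mx = sum(continuous) / n
    my = sum(scans) / n
    sxx = sum((x - mx) ** 2 for x in continuous)
    sxy = sum((x - mx) * (y - my) for x, y in zip(continuous, scans))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2
                 for x, y in zip(continuous, scans))
    ss_tot = sum((y - my) ** 2 for y in scans)
    return slope, intercept, 1 - ss_res / ss_tot

# Hypothetical minutes lying per hour: continuous record vs 5-min scans
cont = [42, 55, 38, 60, 47, 51, 35, 58]
scan = [40, 55, 40, 60, 45, 50, 35, 55]
slope, intercept, r2 = regression_validation(cont, scan)
print(round(slope, 2), round(intercept, 1), round(r2, 3))  # → 0.94 2.2 0.972
```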
Chen, Chen Hsiu; Kuo, Su Ching; Tang, Siew Tzuh
2017-05-01
No systematic meta-analysis is available on the prevalence of cancer patients' accurate prognostic awareness and differences in accurate prognostic awareness by publication year, region, assessment method, and service received. To examine the prevalence of advanced/terminal cancer patients' accurate prognostic awareness and differences in accurate prognostic awareness by publication year, region, assessment method, and service received. Systematic review and meta-analysis. MEDLINE, Embase, The Cochrane Library, CINAHL, and PsycINFO were systematically searched on accurate prognostic awareness in adult patients with advanced/terminal cancer (1990-2014). Pooled prevalences were calculated for accurate prognostic awareness by a random-effects model. Differences in weighted estimates of accurate prognostic awareness were compared by meta-regression. In total, 34 articles were retrieved for systematic review and meta-analysis. At best, only about half of advanced/terminal cancer patients accurately understood their prognosis (49.1%; 95% confidence interval: 42.7%-55.5%; range: 5.4%-85.7%). Accurate prognostic awareness was independent of service received and publication year, but highest in Australia, followed by East Asia, North America, and southern Europe and the United Kingdom (67.7%, 60.7%, 52.8%, and 36.0%, respectively; p = 0.019). Accurate prognostic awareness was higher by clinician assessment than by patient report (63.2% vs 44.5%, p < 0.001). Less than half of advanced/terminal cancer patients accurately understood their prognosis, with significant variations by region and assessment method. Healthcare professionals should thoroughly assess advanced/terminal cancer patients' preferences for prognostic information and engage them in prognostic discussion early in the cancer trajectory, thus facilitating their accurate prognostic awareness and the quality of end-of-life care decision-making.
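Random-effects pooling of prevalences, of the kind this meta-analysis reports, is commonly done with the DerSimonian-Laird estimator; a sketch on the logit scale follows. The four study counts below are hypothetical, and the abstract does not state which scale or estimator the authors used.

```python
import math

def dl_pooled_prevalence(events, totals, z=1.96):
    """DerSimonian-Laird random-effects pooling of prevalences on the
    logit scale; returns (pooled prevalence, CI lower, CI upper)."""
    y = [math.log(e / (n - e)) for e, n in zip(events, totals)]
    v = [1 / e + 1 / (n - e) for e, n in zip(events, totals)]  # logit variances
    w = [1 / vi for vi in v]
    y_fix = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_fix) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)        # between-study variance
    w_re = [1 / (vi + tau2) for vi in v]
    mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se = (1 / sum(w_re)) ** 0.5
    expit = lambda t: 1 / (1 + math.exp(-t))
    return expit(mu), expit(mu - z * se), expit(mu + z * se)

# Hypothetical per-study counts of patients with accurate awareness
p, lo, hi = dl_pooled_prevalence([30, 55, 20, 70], [60, 100, 50, 120])
print(round(p, 3), round(lo, 3), round(hi, 3))
```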
Parks, Nathan A.; Gannon, Matthew A.; Long, Stephanie M.; Young, Madeleine E.
2016-01-01
Analysis of event-related potential (ERP) data includes several steps to ensure that ERPs meet an appropriate level of signal quality. One such step, subject exclusion, rejects subject data if ERP waveforms fail to meet an appropriate level of signal quality. Subject exclusion is an important quality control step in the ERP analysis pipeline as it ensures that statistical inference is based only upon those subjects exhibiting clear evoked brain responses. This critical quality control step is most often performed simply through visual inspection of subject-level ERPs by investigators. Such an approach is qualitative, subjective, and susceptible to investigator bias, as there are no standards as to what constitutes an ERP of sufficient signal quality. Here, we describe a standardized and objective method for quantifying waveform quality in individual subjects and establishing criteria for subject exclusion. The approach uses bootstrap resampling of ERP waveforms (from a pool of all available trials) to compute a signal-to-noise ratio confidence interval (SNR-CI) for individual subject waveforms. The lower bound of this SNR-CI (SNRLB) yields an effective and objective measure of signal quality as it ensures that ERP waveforms statistically exceed a desired signal-to-noise criterion. SNRLB provides a quantifiable metric of individual subject ERP quality and eliminates the need for subjective evaluation of waveform quality by the investigator. We detail the SNR-CI methodology, establish the efficacy of employing this approach with Monte Carlo simulations, and demonstrate its utility in practice when applied to ERP datasets. PMID:26903849
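The SNR lower confidence bound described above can be sketched with a trial-level bootstrap. The windows, the RMS-based SNR definition, and the synthetic half-sine "ERP" below are illustrative assumptions, not the paper's exact pipeline.

```python
import math
import random
from statistics import mean

def snr_lower_bound(trials, n_boot=500, alpha=0.05,
                    noise_win=slice(0, 50), signal_win=slice(50, 100), seed=1):
    """Lower bound of a bootstrap SNR confidence interval for an averaged
    waveform: resample trials with replacement, average them, and take SNR
    as the RMS of a signal window over the RMS of a prestimulus window."""
    rng = random.Random(seed)
    rms = lambda seg: mean(v ** 2 for v in seg) ** 0.5
    def snr(sample):
        avg = [mean(t[i] for t in sample) for i in range(len(sample[0]))]
        return rms(avg[signal_win]) / rms(avg[noise_win])
    boots = sorted(snr([rng.choice(trials) for _ in range(len(trials))])
                   for _ in range(n_boot))
    return boots[int(alpha * n_boot)]

# Synthetic "subject": 30 trials of a half-sine ERP in Gaussian noise
gen = random.Random(0)
template = [0.0] * 50 + [math.sin(math.pi * i / 49) for i in range(50)]
trials = [[v + gen.gauss(0, 0.5) for v in template] for _ in range(30)]
snr_lb = snr_lower_bound(trials)
print(snr_lb > 2.0)  # subject kept if SNR_LB exceeds the chosen criterion
```

A subject whose SNR_LB falls below the criterion would be excluded, replacing the visual inspection step the abstract criticizes.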
Microvascular anastomosis simulation using a chicken thigh model: Interval versus massed training.
Schoeff, Stephen; Hernandez, Brian; Robinson, Derek J; Jameson, Mark J; Shonka, David C
2017-11-01
To compare the effectiveness of massed versus interval training when teaching otolaryngology residents microvascular suturing on a validated microsurgical model. Otolaryngology residents were placed into interval (n = 7) or massed (n = 7) training groups. The interval group performed three separate 30-minute practice sessions separated by at least 1 week, and the massed group performed a single 90-minute practice session. Both groups viewed a video demonstration and recorded a pretest prior to the first training session. A post-test was administered following the last practice session. At an academic medical center, 14 otolaryngology residents were assigned using stratified randomization to interval or massed training. Blinded evaluators graded performance using a validated microvascular Objective Structured Assessment of Technical Skill tool. The tool is comprised of two major components: task-specific score (TSS) and global rating scale (GRS). Participants also received pre- and poststudy surveys to compare subjective confidence in multiple aspects of microvascular skill acquisition. Overall, all residents showed increased TSS and GRS on post- versus pretest. After completion of training, the interval group had a statistically significant increase in both TSS and GRS, whereas the massed group's increase was not significant. Residents in both groups reported significantly increased levels of confidence after completion of the study. Self-directed learning using a chicken thigh artery model may benefit microsurgical skills, competence, and confidence for resident surgeons. Interval training results in significant improvement in early development of microvascular anastomosis skills, whereas massed training does not. NA. Laryngoscope, 127:2490-2494, 2017. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.
Automated Interval velocity picking for Atlantic Multi-Channel Seismic Data
NASA Astrophysics Data System (ADS)
Singh, Vishwajit
2016-04-01
This paper describes the challenges in developing and testing a fully automated routine for measuring interval velocities from multi-channel seismic data. Several approaches are employed to build an interactive algorithm that picks interval velocities for 1000-5000 continuous normal moveout (NMO) corrected gathers, replacing the interpreter's effort of manually picking coherent reflections. The detailed steps and pitfalls of picking interval velocities from seismic reflection time measurements are described for these approaches. The key ingredients used at the velocity analysis stage are the semblance grid and a starting model of interval velocity. Basin-hopping optimization is employed to drive the misfit function toward local minima. A SLiding-Overlapping Window (SLOW) algorithm is designed to mitigate the non-linearity and ill-posedness of the root-mean-square velocity inversion. Synthetic data case studies address the performance of the velocity picker, generating models that fit the semblance peaks well. A similar linear relationship between average depth and reflection time for the synthetic model and the estimated models supports using the picked interval velocities as the starting model for full waveform inversion, to project a more accurate velocity structure of the subsurface. The challenges can be categorized as (1) building an accurate starting model for projecting a more accurate velocity structure of the subsurface, and (2) improving the computational cost of the algorithm by pre-calculating the semblance grid to make auto picking more feasible.
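One standard step in such a pipeline, converting picked RMS (stacking) velocities into interval velocities, is the Dix equation. The sketch below shows only that conversion with made-up picks; it is not the paper's SLOW algorithm.

```python
import math

def dix_interval_velocities(t0, v_rms):
    """Dix conversion of picked RMS velocities to interval velocities.
    t0: zero-offset two-way times (s); v_rms: RMS velocities (m/s);
    both ordered from shallow to deep."""
    v_int = [v_rms[0]]  # first layer: interval velocity equals the RMS pick
    for i in range(1, len(t0)):
        num = v_rms[i] ** 2 * t0[i] - v_rms[i - 1] ** 2 * t0[i - 1]
        v_int.append(math.sqrt(num / (t0[i] - t0[i - 1])))
    return v_int

times = [0.5, 1.0, 1.6]          # s, hypothetical picks
vrms = [1500.0, 1800.0, 2100.0]  # m/s
print([round(v) for v in dix_interval_velocities(times, vrms)])  # → [1500, 2057, 2522]
```

The differencing in the numerator is what makes the conversion ill-posed for noisy picks (small time separations amplify errors), motivating the windowed, regularized approaches the abstract describes.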
Flint, Mark; Matthews, Beren J; Limpus, Colin J; Mills, Paul C
2015-01-01
Biochemical and haematological parameters are increasingly used to diagnose disease in green sea turtles. Specific clinical pathology tools, such as plasma protein electrophoresis analysis, are now being used more frequently to improve our ability to diagnose disease in the live animal. Plasma protein reference intervals were calculated from 55 clinically healthy green sea turtles using pulsed field electrophoresis to determine pre-albumin, albumin, α-, β- and γ-globulin concentrations. The estimated reference intervals were then compared with data profiles from clinically unhealthy turtles admitted to a local wildlife hospital to assess the validity of the derived intervals and identify the clinically useful plasma protein fractions. Eighty-six per cent [19 of 22 (95% confidence interval (CI) 65-97%)] of clinically unhealthy turtles had values outside the derived reference intervals, including the following: total protein [six of 22 turtles or 27% (95% CI 11-50%)], pre-albumin [two of five, 40% (95% CI 5-85%)], albumin [13 of 22, 59% (95% CI 36-79%)], total albumin [13 of 22, 59% (95% CI 36-79%)], α- [10 of 22, 45% (95% CI 24-68%)], β- [two of 10, 20% (95% CI 3-56%)], γ- [one of 10, 10% (95% CI 0.3-45%)], β-γ-globulin [one of 12, 8% (95% CI 0.2-38%)] and total globulin [five of 22, 23% (95% CI 8-45%)]. Plasma protein electrophoresis shows promise as an accurate adjunct tool to identify a disease state in marine turtles. This study presents the first reference interval for plasma protein electrophoresis in the Indo-Pacific green sea turtle.
Estimation of parameters of dose volume models and their confidence limits
NASA Astrophysics Data System (ADS)
van Luijk, P.; Delvigne, T. C.; Schilstra, C.; Schippers, J. M.
2003-07-01
Predictions of the normal-tissue complication probability (NTCP) for the ranking of treatment plans are based on fits of dose-volume models to clinical and/or experimental data. In the literature several different fit methods are used. In this work, frequently used methods and techniques for fitting NTCP models to dose-response data to establish dose-volume effects are discussed. The techniques are tested for their usability with dose-volume data and NTCP models. Different methods to estimate the confidence intervals of the model parameters are part of this study. From a critical-volume (CV) model with biologically realistic parameters, a primary dataset was generated, serving as the reference for this study and describable by the NTCP model. The CV model was fitted to this dataset. From the resulting parameters and the CV model, 1000 secondary datasets were generated by Monte Carlo simulation. All secondary datasets were fitted to obtain 1000 parameter sets of the CV model. Thus the 'real' spread in fit results due to statistical spread in the data is obtained, and this was compared with estimates of the confidence intervals obtained by different methods applied to the primary dataset. The confidence limits of the parameters of one dataset were estimated in three ways: from the covariance matrix, with the jackknife method, and directly from the likelihood landscape. These results were compared with the spread of the parameters obtained from the secondary parameter sets. For the estimation of confidence intervals on NTCP predictions, three methods were tested. Firstly, propagation of errors using the covariance matrix was used. Secondly, the meaning of the width of a bundle of curves that resulted from parameters within the one-standard-deviation region in the likelihood space was investigated. Thirdly, many parameter sets and their likelihoods were used to create a likelihood-weighted probability distribution of the NTCP. It is concluded that for the
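The secondary-dataset procedure (generate datasets from the fitted model by Monte Carlo, refit each, and read the parameter spread) can be sketched with a simple two-parameter logistic dose-response curve standing in for the CV model. The doses, sample sizes, and parameter values are illustrative, not the paper's setup.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
doses = np.linspace(40.0, 80.0, 9)    # dose points (Gy), illustrative
n_per_dose = 30                        # subjects per dose level
d50_true, k_true = 60.0, 5.0           # "realistic" stand-in parameters

def ntcp(d, d50, k):
    """Two-parameter logistic dose-response curve."""
    return 1.0 / (1.0 + np.exp(-(d - d50) / k))

def simulate(d50, k):
    """Draw binomial complication counts from the model (Monte Carlo)."""
    return rng.binomial(n_per_dose, ntcp(doses, d50, k))

def fit(events):
    """Maximum-likelihood fit of (d50, k) to observed complication counts."""
    def nll(theta):
        p = np.clip(ntcp(doses, *theta), 1e-9, 1.0 - 1e-9)
        return -np.sum(events * np.log(p) + (n_per_dose - events) * np.log(1.0 - p))
    return minimize(nll, x0=[55.0, 4.0], method="Nelder-Mead").x

primary = simulate(d50_true, k_true)   # the "primary dataset"
d50_hat, k_hat = fit(primary)

# Secondary datasets: regenerate from the fitted model and refit each one;
# the spread of the refitted parameters estimates the confidence interval.
fits = np.array([fit(simulate(d50_hat, k_hat)) for _ in range(200)])
ci_d50 = np.percentile(fits[:, 0], [2.5, 97.5])
```

The percentile spread of the refitted parameters is the "real" spread against which covariance-matrix, jackknife, and likelihood-landscape intervals can be compared.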
Ishikawa, Joji; Ishikawa, Shizukiyo; Kario, Kazuomi
2015-03-01
We attempted to evaluate whether subjects who exhibit prolonged corrected QT (QTc) interval (≥440 ms in men and ≥460 ms in women) on ECG, with and without ECG-diagnosed left ventricular hypertrophy (ECG-LVH; Cornell product, ≥244 mV×ms), are at increased risk of stroke. Among the 10 643 subjects, there were a total of 375 stroke events during the follow-up period (128.7±28.1 months; 114 142 person-years). The subjects with prolonged QTc interval (hazard ratio, 2.13; 95% confidence interval, 1.22-3.73) had an increased risk of stroke even after adjustment for ECG-LVH (hazard ratio, 1.71; 95% confidence interval, 1.22-2.40). When we stratified the subjects into those with neither a prolonged QTc interval nor ECG-LVH, those with a prolonged QTc interval but without ECG-LVH, and those with ECG-LVH, multivariate-adjusted Cox proportional hazards analysis demonstrated that the subjects with prolonged QTc intervals but not ECG-LVH (1.2% of all subjects; incidence, 10.7%; hazard ratio, 2.70, 95% confidence interval, 1.48-4.94) and those with ECG-LVH (incidence, 7.9%; hazard ratio, 1.83; 95% confidence interval, 1.31-2.57) had an increased risk of stroke events, compared with those with neither a prolonged QTc interval nor ECG-LVH. In conclusion, prolonged QTc interval was associated with stroke risk even among patients without ECG-LVH in the general population. © 2014 American Heart Association, Inc.
Base excess is an accurate predictor of elevated lactate in ED septic patients.
Montassier, Emmanuel; Batard, Eric; Segard, Julien; Hardouin, Jean-Benoît; Martinage, Arnaud; Le Conte, Philippe; Potel, Gille
2012-01-01
Prior studies showed that lactate is a useful marker in sepsis. However, lactate is often not routinely drawn or rapidly available in the emergency department (ED). The study aimed to determine whether base excess (BE), widely and rapidly available in the ED, could be used as a surrogate marker for elevated lactate in ED septic patients. This was a prospective, observational cohort study. From March 2009 to March 2010, consecutive patients 18 years or older who presented to the ED with suspected severe sepsis were enrolled in the study. Lactate and BE measurements were performed. We defined, a priori, a clinically significant lactate to be greater than 3 mmol/L and BE less than -4 mmol/L. A total of 224 patients were enrolled in the study. The average BE was -4.5 mmol/L (SD, 4.9) and the average lactate was 3.5 mmol/L (SD, 2.9). The sensitivity of a BE less than -4 mmol/L in predicting elevated lactate greater than 3 mmol/L was 91.1% (95% confidence interval, 85.5%-96.6%) and the specificity was 88.6% (95% confidence interval, 83.0%-94.2%). The area under the curve was 0.95. Base excess is an accurate marker for the prediction of elevated lactate in the ED. The measurement of BE, obtained in a few minutes in the ED, provides a quick and reliable method, similar to the electrocardiogram at triage for patients with chest pain, to determine the patients with sepsis who need early aggressive resuscitation. Copyright © 2012 Elsevier Inc. All rights reserved.
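Sensitivity and specificity with Wald-style 95% confidence intervals of the kind reported above can be computed from a 2x2 table. The counts below are hypothetical, not the study's data.

```python
import math

def prop_ci(k, n, z=1.96):
    """Wald 95% confidence interval for a proportion k/n, clipped to [0, 1]."""
    p = k / n
    half = z * math.sqrt(p * (1.0 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical 2x2 counts (lactate > 3 mmol/L as the reference standard).
tp, fn = 92, 9     # elevated lactate: BE < -4 caught 92 of 101
tn, fp = 109, 14   # normal lactate: BE >= -4 in 109 of 123

sens, s_lo, s_hi = prop_ci(tp, tp + fn)   # sensitivity and its CI
spec, sp_lo, sp_hi = prop_ci(tn, tn + fp) # specificity and its CI
```

The Wald interval is the simplest choice; Wilson or exact intervals behave better near 0% or 100%.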
Selective attention and subjective confidence calibration.
Schoenherr, Jordan R; Leth-Steensen, Craig; Petrusic, William M
2010-02-01
In the present experiments, failures of selective visual attention were invoked using the B. A. Eriksen and C. W. Eriksen (1974) flanker task. On each trial, a three-letter stimulus array was flashed briefly, followed by a mask. The identity of the two flanking letters was response congruent, neutral, or incongruent with the identity of the middle target letter. On half of the trials, confidence ratings were obtained after each response. In the first three experiments, participants were highly overconfident in the accuracy of their responding to incongruent flanker stimulus arrays. In a final experiment, presenting a prestimulus target location cue greatly reduced both selective attention failure and overconfidence. The findings demonstrate that participants are often unaware of such selective attention failures and provide support for the notion that, in these cases, decisional processing is driven largely by the identities of the incongruent flankers. In addition, responding was invariably slower and sometimes more accurate when confidence was required than when it was not required, demonstrating that the need to provide posttrial confidence reports can affect decisional processing. Moreover, there was some evidence that the presence of neutral contextual flanking information can slow responding, suggesting that such nondiagnostic information can, indeed, contribute to decisional processing.
The Influence of Endogenous and Exogenous Spatial Attention on Decision Confidence.
Kurtz, Phillipp; Shapcott, Katharine A; Kaiser, Jochen; Schmiedt, Joscha T; Schmid, Michael C
2017-07-25
Spatial attention allows us to make more accurate decisions about events in our environment. Decision confidence is thought to be intimately linked to the decision making process as confidence ratings are tightly coupled to decision accuracy. While both spatial attention and decision confidence have been subjected to extensive research, surprisingly little is known about the interaction between these two processes. Since attention increases performance it might be expected that confidence would also increase. However, two studies investigating the effects of endogenous attention on decision confidence found contradictory results. Here we investigated the effects of two distinct forms of spatial attention on decision confidence; endogenous attention and exogenous attention. We used an orientation-matching task, comparing the two attention conditions (endogenous and exogenous) to a control condition without directed attention. Participants performed better under both attention conditions than in the control condition. Higher confidence ratings than the control condition were found under endogenous attention but not under exogenous attention. This finding suggests that while attention can increase confidence ratings, it must be voluntarily deployed for this increase to take place. We discuss possible implications of this relative overconfidence found only during endogenous attention with respect to the theoretical background of decision confidence.
NASA Astrophysics Data System (ADS)
Rouillon, M.; Taylor, M. P.; Dong, C.
2016-12-01
This research assesses the advantages of integrating field-portable X-ray fluorescence (pXRF) technology to reduce the risk and increase the confidence of decision making in metal-contaminated site assessments. Metal-contaminated sites are often highly heterogeneous and require a high sampling density to accurately characterize the distribution and concentration of contaminants. Current regulatory assessment approaches rely on a small number of samples processed using standard wet-chemistry methods. In New South Wales (NSW), Australia, the current notification trigger for characterizing metal-contaminated sites requires the upper 95% confidence interval of the site mean to equal or exceed the relevant guideline. The method's low 'minimum' sampling requirements can misclassify sites due to the heterogeneous nature of soil contamination, leading to inaccurate decision making. To address this issue, we propose integrating in-field pXRF analysis with the established sampling method to overcome these sampling limitations. This approach increases the minimum sampling resolution and reduces the 95% CI of the site mean. In-field pXRF analysis at contamination hotspots enhances sample resolution efficiently and without the need to return to the site. In this study, the current and proposed pXRF site assessment methods are compared at five heterogeneous metal-contaminated sites by analysing the spatial distribution of contaminants, the 95% confidence intervals of the site means, and the sampling and analysis uncertainty associated with each method. Finally, an analysis of the costs associated with both the current and proposed methods is presented to demonstrate the advantages of incorporating pXRF into metal-contaminated site assessments. The data show that pXRF-integrated site assessments allow faster, cost-efficient characterisation of metal-contaminated sites with greater confidence for decision making.
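How a larger pXRF-enabled sample count narrows the 95% confidence interval of a site mean follows directly from the standard-error formula. The standard deviation and sample counts below are hypothetical, chosen only to show the scaling.

```python
import math

def ci_halfwidth(sd, n, z=1.96):
    """Half-width of an approximate 95% CI of the site mean
    (normal z-quantile, simple random sampling)."""
    return z * sd / math.sqrt(n)

# Hypothetical soil-lead standard deviation (mg/kg) and sample counts.
sd = 400.0
few = ci_halfwidth(sd, 13)    # sparse, lab-only sampling
many = ci_halfwidth(sd, 65)   # pXRF-boosted sampling density
# Five times as many samples shrinks the half-width by a factor of sqrt(5).
```

For small n, a Student's t quantile should replace the 1.96 used here.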
Do physiotherapy staff record treatment time accurately? An observational study.
Bagley, Pam; Hudson, Mary; Green, John; Forster, Anne; Young, John
2009-09-01
To assess the reliability of duration of treatment time measured by physiotherapy staff in early-stage stroke patients. Comparison of physiotherapy staff's recording of treatment sessions and video recording. Rehabilitation stroke unit in a general hospital. Thirty-nine stroke patients without trunk control or who were unable to stand with an erect trunk without the support of two therapists recruited to a randomized trial evaluating the Oswestry Standing Frame. Twenty-six physiotherapy staff who were involved in patient treatment. Contemporaneous recording by physiotherapy staff of treatment time (in minutes) compared with video recording. Intraclass correlation with 95% confidence interval and the Bland and Altman method for assessing agreement by calculating the mean difference (standard deviation; 95% confidence interval), reliability coefficient and 95% limits of agreement for the differences between the measurements. The mean duration (standard deviation, SD) of treatment time recorded by physiotherapy staff was 32 (11) minutes compared with 25 (9) minutes as evidenced in the video recording. The mean difference (SD) was -6 (9) minutes (95% confidence interval (CI) -9 to -3). The reliability coefficient was 18 minutes and the 95% limits of agreement were -24 to 12 minutes. Intraclass correlation coefficient for agreement between the two methods was 0.50 (95% CI 0.12 to 0.73). Physiotherapy staff's recording of duration of treatment time was not reliable and was systematically greater than the video recording.
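The agreement statistics used above (mean difference, SD of differences, and 95% limits of agreement) follow the Bland and Altman method and can be computed as sketched below. The paired minutes are invented for illustration, not the study's data.

```python
import numpy as np

def bland_altman(a, b, z=1.96):
    """Mean difference, SD of differences, and 95% limits of agreement
    for two measurement methods applied to the same subjects."""
    d = np.asarray(a, float) - np.asarray(b, float)
    mean_diff = d.mean()
    sd = d.std(ddof=1)
    return mean_diff, sd, (mean_diff - z * sd, mean_diff + z * sd)

# Hypothetical paired treatment times (minutes): video vs. therapist-recorded.
video =    [25, 30, 26, 31, 22, 30]
recorded = [30, 35, 28, 40, 25, 33]
md, sd, (lo, hi) = bland_altman(video, recorded)
# A negative mean difference means recorded times exceed the video times,
# i.e. a systematic over-recording like the one the study reports.
```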
Probability assessment with response times and confidence in perception and knowledge.
Petrusic, William M; Baranski, Joseph V
2009-02-01
In both a perceptual and a general knowledge comparison task, participants categorized the time they took to decide, selecting one of six categories ordered from "Slow" to "Fast". Subsequently, they rated confidence on a six-category scale ranging from "50%" to "100%". Participants were able to accurately scale their response times, thus enabling the treatment of the response time (RT) categories as potential confidence categories. Probability assessment analyses of RTs revealed indices of over/underconfidence, calibration, and resolution, each subject to the "hard-easy" effect, comparable to those obtained with the actual confidence ratings. However, in both the perceptual and knowledge domains, resolution (i.e., the ability to use the confidence categories to distinguish correct from incorrect decisions) was significantly better with confidence ratings than with RT categorization. Generally, comparable results were obtained with scaling of the objective RTs, although subjective categorization of RTs provided probability assessment indices superior to those obtained from objective RTs. Taken together, the findings do not support the view that confidence arises from a scaling of decision time.
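The over/underconfidence, calibration, and resolution indices mentioned above are standard probability-assessment quantities and can be sketched as follows; the grouping assumes confidence (or an RT category mapped to a probability) is reported on a discrete scale.

```python
import numpy as np

def probability_assessment_indices(conf, correct):
    """Over/underconfidence, calibration, and resolution computed over
    discrete confidence categories (Murphy-style decomposition)."""
    conf = np.asarray(conf, float)
    correct = np.asarray(correct, float)
    n = len(conf)
    base = correct.mean()              # overall proportion correct
    overconf = conf.mean() - base      # > 0 indicates overconfidence
    cal = res = 0.0
    for c in np.unique(conf):
        m = conf == c
        acc = correct[m].mean()
        w = m.sum() / n
        cal += w * (c - acc) ** 2      # calibration: rating vs. accuracy
        res += w * (acc - base) ** 2   # resolution: categories separate outcomes
    return overconf, cal, res
```

Lower calibration and higher resolution are better; the paper's finding is that RT categories yield worse resolution than explicit confidence ratings.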
Predictor sort sampling and one-sided confidence bounds on quantiles
Steve Verrill; Victoria L. Herian; David W. Green
2002-01-01
Predictor sort experiments attempt to make use of the correlation between a predictor that can be measured prior to the start of an experiment and the response variable that we are investigating. Properly designed and analyzed, they can reduce necessary sample sizes, increase statistical power, and reduce the lengths of confidence intervals. However, if the non-random...
Confidence estimation for quantitative photoacoustic imaging
NASA Astrophysics Data System (ADS)
Gröhl, Janek; Kirchner, Thomas; Maier-Hein, Lena
2018-02-01
Quantification of photoacoustic (PA) images is one of the major challenges currently being addressed in PA research. Tissue properties can be quantified by correcting the recorded PA signal with an estimation of the corresponding fluence. Fluence estimation itself, however, is an ill-posed inverse problem which usually needs simplifying assumptions to be solved with state-of-the-art methods. These simplifications, as well as noise and artifacts in PA images reduce the accuracy of quantitative PA imaging (PAI). This reduction in accuracy is often localized to image regions where the assumptions do not hold true. This impedes the reconstruction of functional parameters when averaging over entire regions of interest (ROI). Averaging over a subset of voxels with a high accuracy would lead to an improved estimation of such parameters. To achieve this, we propose a novel approach to the local estimation of confidence in quantitative reconstructions of PA images. It makes use of conditional probability densities to estimate confidence intervals alongside the actual quantification. It encapsulates an estimation of the errors introduced by fluence estimation as well as signal noise. We validate the approach using Monte Carlo generated data in combination with a recently introduced machine learning-based approach to quantitative PAI. Our experiments show at least a two-fold improvement in quantification accuracy when evaluating on voxels with high confidence instead of thresholding signal intensity.
NASA Astrophysics Data System (ADS)
Sugiura, T.; Hirata, H.; Hand, J. W.; van Leeuwen, J. M. J.; Mizushina, S.
2011-10-01
Clinical trials of hypothermic brain treatment for newborn babies are currently hindered by the difficulty of measuring deep brain temperatures. One possible method for noninvasive and continuous temperature monitoring that is completely passive and inherently safe is passive microwave radiometry (MWR). We have developed a five-band microwave radiometer system with a single dual-polarized rectangular waveguide antenna operating within the 1-4 GHz range, and a method for retrieving the temperature profile from five radiometric brightness temperatures. This paper addresses (1) the temperature calibration of the five microwave receivers, (2) a measurement experiment using a phantom model that mimics the temperature profile in a newborn baby, and (3) the feasibility of noninvasive monitoring of deep brain temperatures. Temperature resolutions were 0.103, 0.129, 0.138, 0.105 and 0.111 K for the 1.2, 1.65, 2.3, 3.0 and 3.6 GHz receivers, respectively. The precision of temperature estimation (2σ confidence interval) was about 0.7°C at a 5-cm depth from the phantom surface. Accuracy, defined as the difference between the temperature estimated by this system and the temperature measured by a thermocouple at a depth of 5 cm, was about 2°C. The current result is not yet satisfactory for clinical application, which requires better than 1°C for both precision and accuracy at a depth of 5 cm. Since a couple of possible causes of this inaccuracy have been identified, we believe that the system can take a step closer to the clinical application of MWR for hypothermic rescue treatment.
NASA Astrophysics Data System (ADS)
Li, Yi; Xu, Yan Long
2018-05-01
When the dependence of a function on uncertain variables is non-monotonic over an interval, the interval of the function obtained by the classic interval extension based on the first-order Taylor series will exhibit significant errors. In order to reduce these errors, an improved form of the first-order Taylor interval extension is developed here that takes the monotonicity of the function into account. Two typical mathematical examples are given to illustrate the methodology. The vibration of a beam with lumped masses is studied to demonstrate the usefulness of the method in a practical application; the only required input data are the function value at the central point of the interval and the sensitivity and deviation of the function. The results of the above examples show that the interval of the function given by the method developed in this paper is more accurate than the one obtained by the classic method.
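The error of the classic first-order Taylor interval extension on a non-monotonic function, which the improved format targets, can be demonstrated against a brute-force reference. The function and interval below are arbitrary examples, not the paper's beam problem.

```python
import numpy as np

def taylor_interval(f, grad, xc, dx):
    """Classic first-order Taylor interval extension about the midpoint xc:
    f(xc) +/- |f'(xc)| * dx."""
    half = abs(grad(xc)) * dx
    return f(xc) - half, f(xc) + half

# Non-monotonic example: cos on [-1.0, 1.4] contains its maximum at x = 0.
f = np.cos
grad = lambda x: -np.sin(x)
xc, dx = 0.2, 1.2

lo_t, hi_t = taylor_interval(f, grad, xc, dx)

# Dense-sampling reference for the true range of the function.
xs = np.linspace(xc - dx, xc + dx, 10001)
lo_ref, hi_ref = f(xs).min(), f(xs).max()
# The Taylor lower bound misses the true minimum cos(1.4) by a wide margin,
# the kind of error a monotonicity-aware extension reduces.
```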
The influence of interpregnancy interval on infant mortality.
McKinney, David; House, Melissa; Chen, Aimin; Muglia, Louis; DeFranco, Emily
2017-03-01
-<6 months (adjusted relative risk, 1.32; 95% confidence interval, 1.17-1.49), followed by interpregnancy intervals of 6-<12 months (adjusted relative risk, 1.16; 95% confidence interval, 1.04-1.30). Analysis stratified by maternal race revealed similar findings. Attributable risk calculation showed that 24.2% of infant mortalities following intervals of 0-<6 months and 14.1% following intervals of 6-<12 months are attributable to the short interpregnancy interval. By avoiding short interpregnancy intervals of ≤12 months, we estimate that in the state of Ohio 31 infant mortalities (20 white and 8 black) per year could have been prevented and the infant mortality rate could have been reduced from 7.2 to 7.0 during this time frame. An interpregnancy interval of 12-60 months (1-5 years) between birth and conception of the next pregnancy is associated with the lowest risk of infant mortality. Public health initiatives and provider counseling to optimize birth spacing have the potential to significantly reduce infant mortality for both white and black mothers. Copyright © 2017 Elsevier Inc. All rights reserved.
Calibrating GPS With TWSTFT For Accurate Time Transfer
2008-12-01
40th Annual Precise Time and Time Interval (PTTI) Meeting, p. 577: Calibrating GPS With TWSTFT for Accurate Time Transfer, Z. Jiang and... The primary time transfer techniques are GPS and TWSTFT (Two-Way Satellite Time and Frequency Transfer, TW for short). 83% of UTC time links are...
Vaccination Confidence and Parental Refusal/Delay of Early Childhood Vaccines.
Gilkey, Melissa B; McRee, Annie-Laurie; Magnus, Brooke E; Reiter, Paul L; Dempsey, Amanda F; Brewer, Noel T
2016-01-01
To support efforts to address parental hesitancy towards early childhood vaccination, we sought to validate the Vaccination Confidence Scale using data from a large, population-based sample of U.S. parents. We used weighted data from 9,354 parents who completed the 2011 National Immunization Survey. Parents reported on the immunization history of a 19- to 35-month-old child in their households. Healthcare providers then verified children's vaccination status for vaccines including measles, mumps, and rubella (MMR), varicella, and seasonal flu. We used separate multivariable logistic regression models to assess associations between parents' mean scores on the 8-item Vaccination Confidence Scale and vaccine refusal, vaccine delay, and vaccination status. A substantial minority of parents reported a history of vaccine refusal (15%) or delay (27%). Vaccination confidence was negatively associated with refusal of any vaccine (odds ratio [OR] = 0.58, 95% confidence interval [CI], 0.54-0.63) as well as refusal of MMR, varicella, and flu vaccines specifically. Negative associations between vaccination confidence and measures of vaccine delay were more moderate, including delay of any vaccine (OR = 0.81, 95% CI, 0.76-0.86). Vaccination confidence was positively associated with having received vaccines, including MMR (OR = 1.53, 95% CI, 1.40-1.68), varicella (OR = 1.54, 95% CI, 1.42-1.66), and flu vaccines (OR = 1.32, 95% CI, 1.23-1.42). Vaccination confidence was consistently associated with early childhood vaccination behavior across multiple vaccine types. Our findings support expanding the application of the Vaccination Confidence Scale to measure vaccination beliefs among parents of young children.
NASA Technical Reports Server (NTRS)
Cooksy, A. L.; Saykally, R. J.; Brown, J. M.; Evenson, K. M.
1986-01-01
Accurate values are presented for the fine-structure intervals in the 3P ground state of neutral atomic C-12 and C-13 as obtained from laser magnetic resonance spectroscopy. The rigorous analysis of C-13 hyperfine structure, the measurement of resonant fields for C-12 transitions at several additional far-infrared laser frequencies, and the increased precision of the C-12 measurements, permit significant improvement in the evaluation of these energies relative to earlier work. These results will expedite the direct and precise measurement of these transitions in interstellar sources and should assist in the determination of the interstellar C-12/C-13 abundance ratio.
On how the brain decodes vocal cues about speaker confidence.
Jiang, Xiaoming; Pell, Marc D
2015-05-01
In speech communication, listeners must accurately decode vocal cues that refer to the speaker's mental state, such as their confidence or 'feeling of knowing'. However, the time course and neural mechanisms associated with online inferences about speaker confidence are unclear. Here, we used event-related potentials (ERPs) to examine the temporal neural dynamics underlying a listener's ability to infer speaker confidence from vocal cues during speech processing. We recorded listeners' real-time brain responses while they evaluated statements wherein the speaker's tone of voice conveyed one of three levels of confidence (confident, close-to-confident, unconfident) or were spoken in a neutral manner. Neural responses time-locked to event onset show that the perceived level of speaker confidence could be differentiated at distinct time points during speech processing: unconfident expressions elicited a weaker P2 than all other expressions of confidence (or neutral-intending utterances), whereas close-to-confident expressions elicited a reduced negative response in the 330-500 msec and 550-740 msec time window. Neutral-intending expressions, which were also perceived as relatively confident, elicited a more delayed, larger sustained positivity than all other expressions in the 980-1270 msec window for this task. These findings provide the first piece of evidence of how quickly the brain responds to vocal cues signifying the extent of a speaker's confidence during online speech comprehension; first, a rough dissociation between unconfident and confident voices occurs as early as 200 msec after speech onset. At a later stage, further differentiation of the exact level of speaker confidence (i.e., close-to-confident, very confident) is evaluated via an inferential system to determine the speaker's meaning under current task settings. These findings extend three-stage models of how vocal emotion cues are processed in speech comprehension (e.g., Schirmer & Kotz, 2006) by
Addressing the vaccine confidence gap.
Larson, Heidi J; Cooper, Louis Z; Eskola, Juhani; Katz, Samuel L; Ratzan, Scott
2011-08-06
Vaccines--often lauded as one of the greatest public health interventions--are losing public confidence. Some vaccine experts have referred to this decline in confidence as a crisis. We discuss some of the characteristics of the changing global environment that are contributing to increased public questioning of vaccines, and outline some of the specific determinants of public trust. Public decision making related to vaccine acceptance is neither driven by scientific nor economic evidence alone, but is also driven by a mix of psychological, sociocultural, and political factors, all of which need to be understood and taken into account by policy and other decision makers. Public trust in vaccines is highly variable and building trust depends on understanding perceptions of vaccines and vaccine risks, historical experiences, religious or political affiliations, and socioeconomic status. Although provision of accurate, scientifically based evidence on the risk-benefit ratios of vaccines is crucial, it is not enough to redress the gap between current levels of public confidence in vaccines and levels of trust needed to ensure adequate and sustained vaccine coverage. We call for more research not just on individual determinants of public trust, but on what mix of factors are most likely to sustain public trust. The vaccine community demands rigorous evidence on vaccine efficacy and safety and technical and operational feasibility when introducing a new vaccine, but has been negligent in demanding equally rigorous research to understand the psychological, social, and political factors that affect public trust in vaccines. Copyright © 2011 Elsevier Ltd. All rights reserved.
Chevaliez, Stéphane; Dubernet, Fabienne; Dauvillier, Claude; Hézode, Christophe; Pawlotsky, Jean-Michel
2017-06-01
Sensitive and accurate hepatitis C virus (HCV) RNA detection and quantification are essential for the management of chronic hepatitis C therapy. Currently available platforms and assays are usually batched and require at least 5 hours of work to complete the analyses. The aim of this study was to evaluate the ability of the newly developed Aptima HCV Quant Dx assay, which eliminates the need for batch processing and automates all aspects of nucleic acid testing in a single step, to accurately detect and quantify HCV RNA in a large series of patients infected with different HCV genotypes. The limit of detection was estimated to be 2.3 IU/mL. The specificity of the assay was 98.6% (95% confidence interval: 96.1%-99.5%). Intra-assay and inter-assay coefficients of variation ranged from 0.09% to 5.61% and 1.05% to 3.65%, respectively. The study of serum specimens from patients infected with HCV genotypes 1 to 6 showed a satisfactory relationship between HCV RNA levels measured by the Aptima HCV Quant Dx assay and both real-time PCR comparators (the Abbott RealTime HCV and Cobas AmpliPrep/Cobas TaqMan HCV Test, version 2.0, assays). In conclusion, the new Aptima HCV Quant Dx assay is rapid, sensitive, reasonably specific and reproducible, and accurately quantifies HCV RNA in serum samples from patients with chronic HCV infection, including patients on antiviral treatment. The Aptima HCV Quant Dx assay can thus be confidently used to detect and quantify HCV RNA in both clinical trials with new anti-HCV drugs and clinical practice in Europe and the US. Copyright © 2017 Elsevier B.V. All rights reserved.
Decision time and confidence predict choosers' identification performance in photographic showups
Sauerland, Melanie; Sagana, Anna; Sporer, Siegfried L.; Wixted, John T.
2018-01-01
In vast contrast to the multitude of lineup studies that report on the link between decision time, confidence, and identification accuracy, only a few studies looked at these associations for showups, with results varying widely across studies. We therefore set out to test the individual and combined value of decision time and post-decision confidence for diagnosing the accuracy of positive showup decisions using confidence-accuracy characteristic curves and Bayesian analyses. Three-hundred-eighty-four participants viewed a stimulus event and were subsequently presented with two showups which could be target-present or target-absent. As expected, we found a negative decision time-accuracy and a positive post-decision confidence-accuracy correlation for showup selections. Confidence-accuracy characteristic curves demonstrated the expected additive effect of combining both postdictors. Likewise, Bayesian analyses, taking into account all possible target-presence base rate values showed that fast and confident identification decisions were more diagnostic than slow or less confident decisions, with the combination of both being most diagnostic for postdicting accurate and inaccurate decisions. The postdictive value of decision time and post-decision confidence was higher when the prior probability that the suspect is the perpetrator was high compared to when the prior probability that the suspect is the perpetrator was low. The frequent use of showups in practice emphasizes the importance of these findings for court proceedings. Overall, these findings support the idea that courts should have most trust in showup identifications that were made fast and confidently, and least in showup identifications that were made slowly and with low confidence. PMID:29346394
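The Bayesian diagnosticity argument above, combining a target-presence base rate with identification rates for perpetrators and for innocent suspects, can be sketched as below. All rates are hypothetical, chosen only to illustrate the comparison between fast/confident and slow/hesitant identifications.

```python
def posterior_guilt(base_rate, p_id_given_perp, p_id_given_innocent):
    """Posterior probability that an identified suspect is the perpetrator,
    given a positive showup identification (Bayes' rule)."""
    hit = base_rate * p_id_given_perp
    false_alarm = (1.0 - base_rate) * p_id_given_innocent
    return hit / (hit + false_alarm)

# Hypothetical identification rates for fast+confident vs. slow/hesitant IDs.
fast_confident = posterior_guilt(0.5, 0.60, 0.05)
slow_hesitant = posterior_guilt(0.5, 0.20, 0.15)
# At the same base rate, a fast, confident choice is far more diagnostic
# of guilt than a slow, hesitant one.
```

Sweeping `base_rate` over [0, 1] reproduces the paper's point that postdictive value depends on the prior probability that the suspect is the perpetrator.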
Vaccination Confidence and Parental Refusal/Delay of Early Childhood Vaccines
Gilkey, Melissa B.; McRee, Annie-Laurie; Magnus, Brooke E.; Reiter, Paul L.; Dempsey, Amanda F.; Brewer, Noel T.
2016-01-01
Objective To support efforts to address parental hesitancy towards early childhood vaccination, we sought to validate the Vaccination Confidence Scale using data from a large, population-based sample of U.S. parents. Methods We used weighted data from 9,354 parents who completed the 2011 National Immunization Survey. Parents reported on the immunization history of a 19- to 35-month-old child in their households. Healthcare providers then verified children’s vaccination status for vaccines including measles, mumps, and rubella (MMR), varicella, and seasonal flu. We used separate multivariable logistic regression models to assess associations between parents’ mean scores on the 8-item Vaccination Confidence Scale and vaccine refusal, vaccine delay, and vaccination status. Results A substantial minority of parents reported a history of vaccine refusal (15%) or delay (27%). Vaccination confidence was negatively associated with refusal of any vaccine (odds ratio [OR] = 0.58, 95% confidence interval [CI], 0.54–0.63) as well as refusal of MMR, varicella, and flu vaccines specifically. Negative associations between vaccination confidence and measures of vaccine delay were more moderate, including delay of any vaccine (OR = 0.81, 95% CI, 0.76–0.86). Vaccination confidence was positively associated with having received vaccines, including MMR (OR = 1.53, 95% CI, 1.40–1.68), varicella (OR = 1.54, 95% CI, 1.42–1.66), and flu vaccines (OR = 1.32, 95% CI, 1.23–1.42). Conclusions Vaccination confidence was consistently associated with early childhood vaccination behavior across multiple vaccine types. Our findings support expanding the application of the Vaccination Confidence Scale to measure vaccination beliefs among parents of young children. PMID:27391098
Confidence Preserving Machine for Facial Action Unit Detection
Zeng, Jiabei; Chu, Wen-Sheng; De la Torre, Fernando; Cohn, Jeffrey F.; Xiong, Zhang
2016-01-01
Facial action unit (AU) detection from video has been a long-standing problem in automated facial expression analysis. While progress has been made, accurate detection of facial AUs remains challenging due to ubiquitous sources of errors, such as inter-personal variability, pose, and low-intensity AUs. In this paper, we refer to samples causing such errors as hard samples, and the remaining as easy samples. To address learning with the hard samples, we propose the Confidence Preserving Machine (CPM), a novel two-stage learning framework that combines multiple classifiers following an “easy-to-hard” strategy. During the training stage, CPM learns two confident classifiers. Each classifier focuses on separating easy samples of one class from all else, and thus preserves confidence on predicting each class. During the testing stage, the confident classifiers provide “virtual labels” for easy test samples. Given the virtual labels, we propose a quasi-semi-supervised (QSS) learning strategy to learn a person-specific (PS) classifier. The QSS strategy employs a spatio-temporal smoothness that encourages similar predictions for samples within a spatio-temporal neighborhood. In addition, to further improve detection performance, we introduce two CPM extensions: iCPM that iteratively augments training samples to train the confident classifiers, and kCPM that kernelizes the original CPM model to promote nonlinearity. Experiments on four spontaneous datasets GFT [15], BP4D [56], DISFA [42], and RU-FACS [3] illustrate the benefits of the proposed CPM models over baseline methods and state-of-the-art semisupervised learning and transfer learning methods. PMID:27479964
ERIC Educational Resources Information Center
Palmer, Matthew A.; Brewer, Neil; Weber, Nathan; Nagesh, Ambika
2013-01-01
Prior research points to a meaningful confidence-accuracy (CA) relationship for positive identification decisions. However, there are theoretical grounds for expecting that different aspects of the CA relationship (calibration, resolution, and over/underconfidence) might be undermined in some circumstances. This research investigated whether the…
Abbott, Eduardo F; Serrano, Valentina P; Rethlefsen, Melissa L; Pandian, T K; Naik, Nimesh D; West, Colin P; Pankratz, V Shane; Cook, David A
2018-02-01
To characterize reporting of P values, confidence intervals (CIs), and statistical power in health professions education research (HPER) through manual and computerized analysis of published research reports. The authors searched PubMed, Embase, and CINAHL in May 2016, for comparative research studies. For manual analysis of abstracts and main texts, they randomly sampled 250 HPER reports published in 1985, 1995, 2005, and 2015, and 100 biomedical research reports published in 1985 and 2015. Automated computerized analysis of abstracts included all HPER reports published 1970-2015. In the 2015 HPER sample, P values were reported in 69/100 abstracts and 94 main texts. CIs were reported in 6 abstracts and 22 main texts. Most P values (≥77%) were ≤.05. Across all years, 60/164 two-group HPER studies had ≥80% power to detect a between-group difference of 0.5 standard deviations. From 1985 to 2015, the proportion of HPER abstracts reporting a CI did not change significantly (odds ratio [OR] 2.87; 95% CI 1.04, 7.88) whereas that of main texts reporting a CI increased (OR 1.96; 95% CI 1.39, 2.78). Comparison with biomedical studies revealed similar reporting of P values, but more frequent use of CIs in biomedicine. Automated analysis of 56,440 HPER abstracts found 14,867 (26.3%) reporting a P value, 3,024 (5.4%) reporting a CI, and increased reporting of P values and CIs from 1970 to 2015. P values are ubiquitous in HPER, CIs are rarely reported, and most studies are underpowered. Most reported P values would be considered statistically significant.
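The power finding above (how many two-group studies reach ≥80% power for a 0.5 SD difference) can be illustrated with a quick normal-approximation sketch; an exact noncentral-t calculation would differ slightly at small n.

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_two_group(n_per_group, d=0.5, z_crit=1.959963984540054):
    """Approximate power of a two-sample comparison to detect a
    standardized mean difference d (z-test approximation to the t-test,
    two-sided alpha = .05)."""
    ncp = d * math.sqrt(n_per_group / 2)  # noncentrality of the test statistic
    return phi(ncp - z_crit)
```

Under this approximation, detecting d = 0.5 with 80% power requires roughly 64 participants per group, while groups of 20 yield only about 35% power.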
Small sample mediation testing: misplaced confidence in bootstrapped confidence intervals.
Koopman, Joel; Howe, Michael; Hollenbeck, John R; Sin, Hock-Peng
2015-01-01
Bootstrapping is an analytical tool commonly used in psychology to test the statistical significance of the indirect effect in mediation models. Bootstrapping proponents have particularly advocated for its use for samples of 20-80 cases. This advocacy has been heeded, especially in the Journal of Applied Psychology, as researchers are increasingly utilizing bootstrapping to test mediation with samples in this range. We discuss reasons to be concerned with this escalation, and in a simulation study focused specifically on this range of sample sizes, we demonstrate not only that bootstrapping has insufficient statistical power to provide a rigorous hypothesis test in most conditions but also that bootstrapping has a tendency to exhibit an inflated Type I error rate. We then extend our simulations to investigate an alternative empirical resampling method as well as a Bayesian approach and demonstrate that they exhibit comparable statistical power to bootstrapping in small samples without the associated inflated Type I error. Implications for researchers testing mediation hypotheses in small samples are presented. For researchers wishing to use these methods in their own research, we have provided R syntax in the online supplemental materials. (c) 2015 APA, all rights reserved.
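A minimal sketch of the percentile-bootstrap interval for an indirect effect, the procedure the abstract discusses. For brevity the b path is a simple-regression slope rather than the usual slope controlling for X, and the toy data are invented.

```python
import random
import statistics

def percentile_bootstrap_ci(xs, ms, ys, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for the indirect effect a*b in a simple
    X -> M -> Y mediation model (simplified path estimates)."""
    rng = random.Random(seed)
    n = len(xs)

    def slope(u, v):
        mu, mv = statistics.fmean(u), statistics.fmean(v)
        sxy = sum((a - mu) * (b - mv) for a, b in zip(u, v))
        sxx = sum((a - mu) ** 2 for a in u)
        return sxy / sxx

    estimates = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # resample cases with replacement
        bx = [xs[i] for i in idx]
        bm = [ms[i] for i in idx]
        by = [ys[i] for i in idx]
        estimates.append(slope(bx, bm) * slope(bm, by))
    estimates.sort()
    return (estimates[int(n_boot * alpha / 2)],
            estimates[int(n_boot * (1 - alpha / 2)) - 1])

# Toy data with a strong indirect effect (a ~ 2, b ~ 3, so a*b ~ 6).
xs = list(range(50))
ms = [2 * x + ((x * 7) % 5 - 2) for x in xs]
ys = [3 * m + ((m * 3) % 7 - 3) for m in ms]
ci_lo, ci_hi = percentile_bootstrap_ci(xs, ms, ys)
```

The effect is deemed significant when the interval excludes zero; the article's point is that with n in the 20-80 range this test is underpowered and its Type I error rate can be inflated.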
Fast and confident: postdicting eyewitness identification accuracy in a field study.
Sauerland, Melanie; Sporer, Siegfried L
2009-03-01
The combined postdictive value of postdecision confidence, decision time, and Remember-Know-Familiar (RKF) judgments as markers of identification accuracy was evaluated with 10 targets and 720 participants. In a pedestrian area, passers-by were asked for directions. Identifications were made from target-absent or target-present lineups. Fast (optimum time boundary at 6 seconds) and confident (optimum confidence boundary at 90%) witnesses were highly accurate, whereas slow and nonconfident witnesses were highly inaccurate. Although this combination of postdictors was clearly superior to using either postdictor by itself, these combinations refer only to a subsample of choosers. Know answers were associated with higher identification performance than Familiar answers, with no difference between Remember and Know answers. The results of participants' post hoc decision time estimates paralleled those with measured decision times. To explore decision strategies of nonchoosers, three subgroups were formed according to the reasons given for rejecting the lineup. Nonchoosers indicating that the target had simply been absent made faster and more confident decisions than nonchoosers stating lack of confidence or lack of memory. There were no significant differences in identification performance across nonchooser groups. (PsycINFO Database Record (c) 2009 APA, all rights reserved).
ERIC Educational Resources Information Center
Brewer, Neil; Wells, Gary L.
2006-01-01
Discriminating accurate from mistaken eyewitness identifications is a major issue facing criminal justice systems. This study examined whether eyewitness confidence assists such decisions under a variety of conditions using a confidence-accuracy (CA) calibration approach. Participants (N = 1,200) viewed a simulated crime and attempted 2 separate…
Brain networks for confidence weighting and hierarchical inference during probabilistic learning.
Meyniel, Florent; Dehaene, Stanislas
2017-05-09
Learning is difficult when the world fluctuates randomly and ceaselessly. Classical learning algorithms, such as the delta rule with constant learning rate, are not optimal. Mathematically, the optimal learning rule requires weighting prior knowledge and incoming evidence according to their respective reliabilities. This "confidence weighting" implies the maintenance of an accurate estimate of the reliability of what has been learned. Here, using fMRI and an ideal-observer analysis, we demonstrate that the brain's learning algorithm relies on confidence weighting. While in the fMRI scanner, human adults attempted to learn the transition probabilities underlying an auditory or visual sequence, and reported their confidence in those estimates. They knew that these transition probabilities could change simultaneously at unpredicted moments, and therefore that the learning problem was inherently hierarchical. Subjective confidence reports tightly followed the predictions derived from the ideal observer. In particular, subjects managed to attach distinct levels of confidence to each learned transition probability, as required by Bayes-optimal inference. Distinct brain areas tracked the likelihood of new observations given current predictions, and the confidence in those predictions. Both signals were combined in the right inferior frontal gyrus, where they operated in agreement with the confidence-weighting model. This brain region also presented signatures of a hierarchical process that disentangles distinct sources of uncertainty. Together, our results provide evidence that the sense of confidence is an essential ingredient of probabilistic learning in the human brain, and that the right inferior frontal gyrus hosts a confidence-based statistical learning algorithm for auditory and visual sequences.
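The contrast the authors draw between a constant-learning-rate delta rule and confidence weighting can be illustrated with a Beta-Bernoulli ideal observer, in which prior and evidence are weighted by the amount of evidence accumulated. This is a generic stationary sketch, not the hierarchical change-point model of the paper.

```python
def delta_rule(observations, lr=0.1, p0=0.5):
    """Constant-learning-rate delta rule: every observation moves the
    estimate by the same fixed fraction of the prediction error."""
    p = p0
    for o in observations:
        p += lr * (o - p)
    return p

def beta_bernoulli(observations, a0=1.0, b0=1.0):
    """Ideal observer for a stationary Bernoulli rate: the posterior
    mean weights prior pseudo-counts (a0, b0) and data by their
    amounts; total evidence a + b serves as a crude confidence proxy."""
    a, b = a0, b0
    for o in observations:
        a += o
        b += 1 - o
    return a / (a + b), a + b

obs = [1] * 8 + [0] * 2          # 8 "hits" out of 10 observations
est_delta = delta_rule(obs)
est_mean, est_conf = beta_bernoulli(obs)
```

The delta rule forgets at a fixed rate regardless of how much has been learned, while the Bayesian observer's confidence grows with evidence, which is the "accurate estimate of reliability" the abstract refers to.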
A Regions of Confidence Based Approach to Enhance Segmentation with Shape Priors.
Appia, Vikram V; Ganapathy, Balaji; Abufadel, Amer; Yezzi, Anthony; Faber, Tracy
2010-01-18
We propose an improved region-based segmentation model with shape priors that uses labels of confidence/interest to exclude the influence of certain regions in the image that may not provide useful information for segmentation. These could be regions in the image which are expected to have weak, missing, or corrupted edges, or they could be regions which the user is not interested in segmenting but that are part of the object being segmented. In the training datasets, along with the manual segmentations we also generate an auxiliary map indicating these regions of low confidence/interest. Since all the training images are acquired under similar conditions, we can train our algorithm to estimate these regions as well. Based on this training, we generate a map which indicates the regions in the image that are likely to contain no useful information for segmentation. We then use a parametric model to represent the segmenting curve as a combination of shape priors obtained by representing the training data as a collection of signed distance functions. We minimize an objective energy functional to evolve the global parameters that are used to represent the curve. We vary the influence each pixel has on the evolution of these parameters based on the confidence/interest label. When we use these labels to indicate the regions with low confidence, the regions containing accurate edges will have a dominant role in the evolution of the curve, and the segmentation in the low-confidence regions will be approximated based on the training data. Since our model evolves global parameters, it improves the segmentation even in the regions with accurate edges, because we eliminate the influence of the low-confidence regions which may mislead the final segmentation. Similarly, when we use the labels to indicate the regions which are not of importance, we obtain a better segmentation of the object in the regions we are interested in.
High Frequency QRS ECG Accurately Detects Cardiomyopathy
NASA Technical Reports Server (NTRS)
Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds
2005-01-01
High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 ± 6.1%, mean ± SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific, and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P < 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 ± 11.5 vs. 41.5 ± 13.6 mV, respectively, P < 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of ≥40 points and ≥445 ms, respectively. In conclusion 12-lead HF QRS ECG employing
An approach for sample size determination of average bioequivalence based on interval estimation.
Chiang, Chieh; Hsiao, Chin-Fu
2017-03-30
In 1992, the US Food and Drug Administration declared that two drugs demonstrate average bioequivalence (ABE) if the log-transformed mean difference of pharmacokinetic responses lies in (-0.223, 0.223). The most widely used approach for assessing ABE is the two one-sided tests procedure. More specifically, ABE is concluded when a 100(1 - 2α)% confidence interval for the mean difference falls within (-0.223, 0.223). Bioequivalence studies are usually conducted with a crossover design. However, when the half-life of a drug is long, a parallel design for the bioequivalence study may be preferred. In this study, a two-sided interval estimate - such as Satterthwaite's, Cochran-Cox's, or Howe's approximation - is used for assessing parallel ABE. We show that the asymptotic joint distribution of the lower and upper confidence limits is bivariate normal, and thus the sample size can be calculated based on the asymptotic power so that the confidence interval falls within (-0.223, 0.223). Simulation studies also show that the proposed method achieves sufficient empirical power. A real example is provided to illustrate the proposed method. Copyright © 2017 John Wiley & Sons, Ltd.
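The CI-inclusion rule and a rough sample-size calculation can be sketched as follows. The normal-approximation formula is a stand-in for the Satterthwaite/Cochran-Cox/Howe-type intervals analyzed in the paper, and all numeric inputs are illustrative.

```python
import math

ABE_LO, ABE_HI = -0.223, 0.223  # FDA log-scale equivalence margins

def abe_by_ci(mean_diff, se, z=1.6448536269514722):
    """Conclude ABE when the 100(1 - 2a)% CI (a = .05, i.e. a 90% CI)
    for the log-scale mean difference lies entirely inside the margins."""
    lo, hi = mean_diff - z * se, mean_diff + z * se
    return ABE_LO < lo and hi < ABE_HI

def n_per_arm(sigma, true_diff=0.0, z_a=1.6448536269514722,
              z_b=0.8416212335729143):
    """Rough parallel-design per-arm sample size for 80% power under a
    normal approximation (the paper's interval estimates would give
    somewhat different numbers)."""
    margin = ABE_HI - abs(true_diff)
    return math.ceil(2 * (sigma * (z_a + z_b) / margin) ** 2)
```

For example, with a small observed difference (0.05) and SE 0.05, the 90% CI sits inside the margins and ABE is concluded, whereas SE 0.2 fails; and for sigma = 0.25 with truly equal means, the approximation suggests roughly 16 subjects per arm.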
Gaskin, Cadeyrn J; Happell, Brenda
2014-05-01
To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Statistical review. Papers published in the 2011 volumes of the 10 highest ranked nursing journals, based on their 5-year impact factors. Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR]=.24-.71), .98 (IQR=.85-1.00), and 1.00 (IQR=1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR=.26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), Chi-squared tests (8%), and Fisher's exact tests (7%), and not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers. The use, reporting, and interpretation of inferential statistics in nursing research need substantial
Higgins, Victoria; Truong, Dorothy; Woroch, Amy; Chan, Man Khun; Tahmasebi, Houman; Adeli, Khosrow
2018-03-01
Evidence-based reference intervals (RIs) are essential to accurately interpret pediatric laboratory test results. To fill gaps in pediatric RIs, the Canadian Laboratory Initiative on Pediatric Reference Intervals (CALIPER) project developed an age- and sex-specific pediatric RI database based on healthy pediatric subjects. Originally established for Abbott ARCHITECT assays, CALIPER RIs were transferred to assays on Beckman, Roche, Siemens, and Ortho analytical platforms. This study provides transferred reference intervals for 29 biochemical assays for the Ortho VITROS 5600 Chemistry System (Ortho). Based on Clinical Laboratory Standards Institute (CLSI) guidelines, a method comparison analysis was performed by measuring approximately 200 patient serum samples using Abbott and Ortho assays. The equation of the line of best fit was calculated and the appropriateness of the linear model was assessed. This equation was used to transfer RIs from Abbott to Ortho assays. Transferred RIs were verified using 84 healthy pediatric serum samples from the CALIPER cohort. RIs for most chemistry analytes transferred successfully from Abbott to Ortho assays. Calcium and CO2 did not meet the statistical criteria for transference (r² < 0.70). Of the 32 transferred reference intervals, 29 were successfully verified, with approximately 90% of results from reference samples falling within the transferred confidence limits. Transferred RIs for total bilirubin, magnesium, and LDH did not meet verification criteria and are not reported. This study broadens the utility of the CALIPER pediatric RI database to laboratories using Ortho VITROS 5600 biochemical assays. Clinical laboratories should verify CALIPER reference intervals for their specific analytical platform and local population, as recommended by CLSI. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
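The transference step (fit a line of best fit from the method comparison, then map the RI limits through it) and the ~90% verification rule can be sketched generically; all data values below are invented.

```python
def fit_line(xs, ys):
    """Ordinary least-squares line of best fit, returning (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def transfer_interval(lower, upper, slope, intercept):
    """Map the original assay's RI limits through the comparison line."""
    return slope * lower + intercept, slope * upper + intercept

def verify(samples, lo, hi, threshold=0.90):
    """Accept the transferred RI if ~90% of healthy reference samples
    fall within the new limits (CLSI-style verification)."""
    return sum(lo <= s <= hi for s in samples) / len(samples) >= threshold

# Invented comparison data: the new platform reads twice the old, plus 0.1.
slope, intercept = fit_line([1, 2, 3, 4], [2.1, 4.1, 6.1, 8.1])
new_lo, new_hi = transfer_interval(10, 20, slope, intercept)
```

A real transference would also test the fit of the linear model (the r² < 0.70 exclusion above) before trusting the mapped limits.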
Vleugels, Jasper L A; Dijkgraaf, Marcel G W; Hazewinkel, Yark; Wanders, Linda K; Fockens, Paul; Dekker, Evelien
2018-05-01
Real-time differentiation of diminutive polyps (1-5 mm) during endoscopy could replace histopathology analysis. According to guidelines, implementation of optical diagnosis into routine practice would require it to identify rectosigmoid neoplastic lesions with a negative predictive value (NPV) of more than 90%, using histologic findings as a reference, and agreement with histology-based surveillance intervals for more than 90% of cases. We performed a prospective study with 39 endoscopists accredited to perform colonoscopies on participants with positive results from fecal immunochemical tests in the Bowel Cancer Screening Program at 13 centers in the Netherlands. Endoscopists were trained in optical diagnosis using a validated module (Workgroup serrAted polypS and Polyposis). After meeting predefined performance thresholds in the training program, the endoscopists started a 1-year program (continuation phase) in which they performed narrow band imaging analyses during colonoscopies of participants in the screening program and predicted histological findings with confidence levels. The endoscopists were randomly assigned to groups that received feedback or no feedback on the accuracy of their predictions. Primary outcome measures were endoscopists' abilities to identify rectosigmoid neoplastic lesions (using histology as a reference) with NPVs of 90% or more, and selecting surveillance intervals that agreed with those determined by histology for at least 90% of cases. Of 39 endoscopists initially trained, 27 (69%) completed the training program. During the continuation phase, these 27 endoscopists performed 3144 colonoscopies in which 4504 diminutive polyps were removed. The endoscopists identified neoplastic lesions with a pooled NPV of 90.8% (95% confidence interval 88.6-92.6); their proposed surveillance intervals agreed with those determined by histologic analysis for 95.4% of cases (95% confidence interval 94.0-96.6). Findings did not differ between the group
Discriminative confidence estimation for probabilistic multi-atlas label fusion.
Benkarim, Oualid M; Piella, Gemma; González Ballester, Miguel Angel; Sanroma, Gerard
2017-12-01
Quantitative neuroimaging analyses often rely on the accurate segmentation of anatomical brain structures. In contrast to manual segmentation, automatic methods offer reproducible outputs and provide scalability to study large databases. Among existing approaches, multi-atlas segmentation has recently shown to yield state-of-the-art performance in automatic segmentation of brain images. It consists in propagating the labelmaps from a set of atlases to the anatomy of a target image using image registration, and then fusing these multiple warped labelmaps into a consensus segmentation on the target image. Accurately estimating the contribution of each atlas labelmap to the final segmentation is a critical step for the success of multi-atlas segmentation. Common approaches to label fusion either rely on local patch similarity, probabilistic statistical frameworks or a combination of both. In this work, we propose a probabilistic label fusion framework based on atlas label confidences computed at each voxel of the structure of interest. Maximum likelihood atlas confidences are estimated using a supervised approach, explicitly modeling the relationship between local image appearances and segmentation errors produced by each of the atlases. We evaluate different spatial pooling strategies for modeling local segmentation errors. We also present a novel type of label-dependent appearance features based on atlas labelmaps that are used during confidence estimation to increase the accuracy of our label fusion. Our approach is evaluated on the segmentation of seven subcortical brain structures from the MICCAI 2013 SATA Challenge dataset and the hippocampi from the ADNI dataset. Overall, our results indicate that the proposed label fusion framework achieves superior performance to state-of-the-art approaches in the majority of the evaluated brain structures and shows more robustness to registration errors. Copyright © 2017 Elsevier B.V. All rights reserved.
Face distinctiveness and delayed testing: differential effects on performance and confidence.
Metzger, Mitchell M
2006-04-01
The author investigated the effect of delayed testing on participants' memory for distinctive and typical faces. Participants viewed distinctive and typical faces and were then tested for recognition immediately or after a delay of 3, 6, or 12 weeks. Consistent with prior research, analysis of the sensitivity measure (d') showed that participants performed better on distinctive than on typical faces, and memory performance declined with longer retention intervals between study and testing. Furthermore, the superior performance on distinctive faces had vanished by the 12-week test. Contrary to the d' data, however, an analysis of confidence scores indicated that participants were still significantly more confident on trials depicting distinctive faces, even with a 12-week delay between study and recognition testing.
Novak, Kerri L.; Jacob, Deepti; Kaplan, Gilaad G.; Boyce, Emma; Ghosh, Subrata; Ma, Irene; Lu, Cathy; Wilson, Stephanie; Panaccione, Remo
2016-01-01
Background. Approaches to distinguish inflammatory bowel disease (IBD) from noninflammatory disease that are noninvasive, accurate, and readily available are desirable. Such approaches may decrease time to diagnosis and better utilize limited endoscopic resources. The aim of this study was to evaluate the diagnostic accuracy for gastroenterologist performed point of care ultrasound (POCUS) in the detection of luminal inflammation relative to gold standard ileocolonoscopy. Methods. A prospective, single-center study was conducted on convenience sample of patients presenting with symptoms of diarrhea and/or abdominal pain. Patients were offered POCUS prior to having ileocolonoscopy. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) with 95% confidence intervals (CI), as well as likelihood ratios, were calculated. Results. Fifty-eight patients were included in this study. The overall sensitivity, specificity, PPV, and NPV were 80%, 97.8%, 88.9%, and 95.7%, respectively, with positive and negative likelihood ratios (LR) of 36.8 and 0.20. Conclusion. POCUS can accurately be performed at the bedside to detect transmural inflammation of the intestine. This noninvasive approach may serve to expedite diagnosis, improve allocation of endoscopic resources, and facilitate initiation of appropriate medical therapy. PMID:27446838
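All of the reported metrics follow from a 2x2 table against the ileocolonoscopy reference. The counts below are invented for illustration (not the study's raw data), and a Wilson score interval stands in for whatever CI method the authors used.

```python
import math

def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV, and likelihood ratios from a
    2x2 diagnostic table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "lr_pos": sens / (1 - spec),
        "lr_neg": (1 - sens) / spec,
    }

def wilson_ci(k, n, z=1.959963984540054):
    """Wilson score 95% CI for a proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# Hypothetical counts: 8 true positives, 1 false positive, 2 false negatives, 44 true negatives.
m = diagnostic_metrics(tp=8, fp=1, fn=2, tn=44)
```

With these made-up counts, sensitivity is 0.80 and the positive likelihood ratio is 36, the same order as the study's figures; the Wilson interval shows how wide a 95% CI is at such small counts.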
Prediction Interval Development for Wind-Tunnel Balance Check-Loading
NASA Technical Reports Server (NTRS)
Landman, Drew; Toro, Kenneth G.; Commo, Sean A.; Lynn, Keith C.
2014-01-01
Results from the Facility Analysis Verification and Operational Reliability project revealed a critical gap in capability in ground-based aeronautics research applications. Without a standardized process for check-loading the wind-tunnel balance or the model system, the quality of the aerodynamic force data collected varied significantly between facilities. A prediction interval is required in order to confirm a check-loading. The prediction interval provides an expected upper and lower bound on balance load prediction at a given confidence level. A method has been developed which accounts for sources of variability due to calibration and check-load application. The prediction interval calculation method and a case study demonstrating its use are provided. Validation of the methods is demonstrated for the case study based on the probability of capture of confirmation points.
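The core regression ingredient of such a prediction interval is the textbook OLS form, yhat +/- t * s * sqrt(1 + x0' (X'X)^-1 x0); the FAVOR method's exact partitioning of calibration and check-load variance is more detailed. A sketch under that simplification (the simulated loads and the 95% critical value t = 2.048 for 28 degrees of freedom are illustrative assumptions):

```python
import numpy as np

def ols_prediction_interval(X, y, x0, t_crit):
    """Two-sided OLS prediction interval for a new observation at x0
    (rows of X include an intercept column)."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - p)                          # residual variance
    se = np.sqrt(s2 * (1.0 + x0 @ np.linalg.inv(X.T @ X) @ x0))
    yhat = x0 @ beta
    return yhat - t_crit * se, yhat + t_crit * se

# Simulated check-loading: applied load x vs. measured balance response y.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, 30)
X = np.column_stack([np.ones_like(x), x])
y = 2.0 + 0.5 * x + rng.normal(0.0, 0.1, 30)
lo, hi = ols_prediction_interval(X, y, np.array([1.0, 5.0]), t_crit=2.048)
print(lo, hi)
```

A check-load confirmation point falling inside (lo, hi) is consistent with the calibration at the stated confidence level.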
Is Gaydar Affected by Attitudes Toward Homosexuality? Confidence, Labeling Bias, and Accuracy.
Brewer, Gayle; Lyons, Minna
2017-01-01
Previous research has largely ignored the relationship between sexual orientation judgment accuracy, confidence, and attitudes toward homosexuality. In an online study, participants (N = 269) judged the sexual orientation of homosexual and heterosexual targets presented via a series of facial photographs. Participants also indicated their confidence in each judgment and completed the Modern Homonegativity Scale (Morrison & Morrison, 2002). We found that (1) homosexual men and heterosexual women were more accurate when judging photographs of women as opposed to photographs of men, and (2) in heterosexual men, negative attitudes toward homosexual men predicted confidence and bias when rating men's photographs. Findings indicate that homosexual men and heterosexual women are similar in terms of accuracy in judging women's sexuality. Further, especially in men, homophobia is associated with cognitive biases in labeling other men but does not have a relationship with increased accuracy.
Interpregnancy interval and risk of autistic disorder.
Gunnes, Nina; Surén, Pål; Bresnahan, Michaeline; Hornig, Mady; Lie, Kari Kveim; Lipkin, W Ian; Magnus, Per; Nilsen, Roy Miodini; Reichborn-Kjennerud, Ted; Schjølberg, Synnve; Susser, Ezra Saul; Øyen, Anne-Siri; Stoltenberg, Camilla
2013-11-01
A recent California study reported increased risk of autistic disorder in children conceived within a year after the birth of a sibling. We assessed the association between interpregnancy interval and risk of autistic disorder using nationwide registry data on pairs of singleton full siblings born in Norway. We defined interpregnancy interval as the time from birth of the first-born child to conception of the second-born child in a sibship. The outcome of interest was autistic disorder in the second-born child. Analyses were restricted to sibships in which the second-born child was born in 1990-2004. Odds ratios (ORs) were estimated by fitting ordinary logistic models and logistic generalized additive models. The study sample included 223,476 singleton full-sibling pairs. In sibships with interpregnancy intervals <9 months, 0.25% of the second-born children had autistic disorder, compared with 0.13% in the reference category (≥ 36 months). For interpregnancy intervals shorter than 9 months, the adjusted OR of autistic disorder in the second-born child was 2.18 (95% confidence interval 1.42-3.26). The risk of autistic disorder in the second-born child was also increased for interpregnancy intervals of 9-11 months in the adjusted analysis (OR = 1.71 [95% CI = 1.07-2.64]). Consistent with a previous report from California, interpregnancy intervals shorter than 1 year were associated with increased risk of autistic disorder in the second-born child. A possible explanation is depletion of micronutrients in mothers with closely spaced pregnancies.
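Odds ratios of this kind come from a 2x2 table, with the Wald confidence interval formed on the log-odds scale. A sketch with hypothetical counts chosen only to mirror the reported rates (0.25% vs. 0.13%); the registry's actual cell counts are not given in the abstract:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table [[a, b], [c, d]] with a 95% Wald CI
    computed on the log-odds scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 30/12,000 exposed cases vs. 55/45,000 reference cases.
print(odds_ratio_ci(a=30, b=11970, c=55, d=44945))
```

An interval excluding 1 (as for the reported 2.18, CI 1.42-3.26) indicates a statistically significant association at the 5% level.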
Smalheiser, Neil R; McDonagh, Marian S; Yu, Clement; Adams, Clive E; Davis, John M; Yu, Philip S
2015-01-01
Objective: For many literature review tasks, including systematic review (SR) and other aspects of evidence-based medicine, it is important to know whether an article describes a randomized controlled trial (RCT). Current manual annotation is not complete or flexible enough for the SR process. In this work, highly accurate machine learning predictive models were built that include confidence predictions of whether an article is an RCT. Materials and Methods: The LibSVM classifier was used with forward selection of potential feature sets on a large human-related subset of MEDLINE to create a classification model requiring only the citation, abstract, and MeSH terms for each article. Results: The model achieved an area under the receiver operating characteristic curve of 0.973 and mean squared error of 0.013 on the held-out year 2011 data. Accurate confidence estimates were confirmed on a manually reviewed set of test articles. A second model not requiring MeSH terms was also created, and performs almost as well. Discussion: Both models accurately rank and predict article RCT confidence. Using the model and the manually reviewed samples, it is estimated that about 8000 (3%) additional RCTs can be identified in MEDLINE, and that 5% of articles tagged as RCTs in MEDLINE may not be identified. Conclusion: Retagging human-related studies with a continuously valued RCT confidence is potentially more useful for article ranking and review than a simple yes/no prediction. The automated RCT tagging tool should offer significant savings of time and effort during the process of writing SRs, and is a key component of a multistep text mining pipeline that we are building to streamline SR workflow. In addition, the model may be useful for identifying errors in MEDLINE publication types. The RCT confidence predictions described here have been made available to users as a web service with a user query form front end at: http
A modified Wald interval for the area under the ROC curve (AUC) in diagnostic case-control studies
2014-01-01
Background: The area under the receiver operating characteristic (ROC) curve, referred to as the AUC, is an appropriate measure for describing the overall accuracy of a diagnostic test or a biomarker in early phase trials without having to choose a threshold. There are many approaches for estimating the confidence interval for the AUC. However, all are relatively complicated to implement. Furthermore, many approaches perform poorly for large AUC values or small sample sizes. Methods: The AUC is actually a probability. So we propose a modified Wald interval for a single proportion, which can be calculated on a pocket calculator. We performed a simulation study to compare this modified Wald interval (without and with continuity correction) with other intervals regarding coverage probability and statistical power. Results: The main result is that the proposed modified Wald intervals maintain and exploit the type I error much better than the intervals of Agresti-Coull, Wilson, and Clopper-Pearson. The interval suggested by Bamber, the Mann-Whitney interval without transformation, and also the interval of the binormal AUC are very liberal. For small sample sizes the Wald interval with continuity correction has a coverage probability comparable to the LT interval and higher power. For large sample sizes the results of the LT interval and of the Wald interval without continuity correction are comparable. Conclusions: If individual patient data are not available, but only the estimated AUC and the total sample size, the modified Wald intervals can be recommended as confidence intervals for the AUC. For small sample sizes the continuity correction should be used. PMID:24552686
A modified Wald interval for the area under the ROC curve (AUC) in diagnostic case-control studies.
Kottas, Martina; Kuss, Oliver; Zapf, Antonia
2014-02-19
The area under the receiver operating characteristic (ROC) curve, referred to as the AUC, is an appropriate measure for describing the overall accuracy of a diagnostic test or a biomarker in early phase trials without having to choose a threshold. There are many approaches for estimating the confidence interval for the AUC. However, all are relatively complicated to implement. Furthermore, many approaches perform poorly for large AUC values or small sample sizes. The AUC is actually a probability. So we propose a modified Wald interval for a single proportion, which can be calculated on a pocket calculator. We performed a simulation study to compare this modified Wald interval (without and with continuity correction) with other intervals regarding coverage probability and statistical power. The main result is that the proposed modified Wald intervals maintain and exploit the type I error much better than the intervals of Agresti-Coull, Wilson, and Clopper-Pearson. The interval suggested by Bamber, the Mann-Whitney interval without transformation, and also the interval of the binormal AUC are very liberal. For small sample sizes the Wald interval with continuity correction has a coverage probability comparable to the LT interval and higher power. For large sample sizes the results of the LT interval and of the Wald interval without continuity correction are comparable. If individual patient data are not available, but only the estimated AUC and the total sample size, the modified Wald intervals can be recommended as confidence intervals for the AUC. For small sample sizes the continuity correction should be used.
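The single-proportion Wald interval that the paper modifies can be sketched as below. The authors' specific modification (how the sample size enters the variance) is described in the paper itself; this shows only the textbook form, with the continuity correction they recommend for small samples. The AUC value and n here are hypothetical:

```python
import math

def wald_interval(p_hat, n, z=1.96, continuity=False):
    """Wald CI for a single proportion, clipped to [0, 1]; here the
    estimated AUC plays the role of the proportion."""
    se = math.sqrt(p_hat * (1.0 - p_hat) / n)
    cc = 0.5 / n if continuity else 0.0          # continuity correction
    return max(0.0, p_hat - z * se - cc), min(1.0, p_hat + z * se + cc)

print(wald_interval(0.85, n=60))                    # estimated AUC, total n
print(wald_interval(0.85, n=60, continuity=True))   # wider, for small samples
```

As the abstract notes, everything here can be evaluated on a pocket calculator; the continuity-corrected interval is strictly wider.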
Linkage disequilibrium interval mapping of quantitative trait loci.
Boitard, Simon; Abdallah, Jihad; de Rochambeau, Hubert; Cierco-Ayrolles, Christine; Mangin, Brigitte
2006-03-16
For many years gene mapping studies have been performed through linkage analyses based on pedigree data. Recently, linkage disequilibrium methods based on unrelated individuals have been advocated as powerful tools to refine estimates of gene location. Many strategies have been proposed to deal with simply inherited disease traits. However, locating quantitative trait loci is statistically more challenging and considerable research is needed to provide robust and computationally efficient methods. Under a three-locus Wright-Fisher model, we derived approximate expressions for the expected haplotype frequencies in a population. We considered haplotypes comprising one trait locus and two flanking markers. Using these theoretical expressions, we built a likelihood-maximization method, called HAPim, for estimating the location of a quantitative trait locus. For each postulated position, the method only requires information from the two flanking markers. Over a wide range of simulation scenarios it was found to be more accurate than a two-marker composite likelihood method. It also performed as well as identity by descent methods, whilst being valuable in a wider range of populations. Our method makes efficient use of marker information, and can be valuable for fine mapping purposes. Its performance is increased if multiallelic markers are available. Several improvements can be developed to account for more complex evolution scenarios or provide robust confidence intervals for the location estimates.
ERIC Educational Resources Information Center
Gu, Fei; Skorupski, William P.; Hoyle, Larry; Kingston, Neal M.
2011-01-01
Ramsay-curve item response theory (RC-IRT) is a nonparametric procedure that estimates the latent trait using splines, and no distributional assumption about the latent trait is required. For item parameters of the two-parameter logistic (2-PL), three-parameter logistic (3-PL), and polytomous IRT models, RC-IRT can provide more accurate estimates…
The description of a method for accurately estimating creatinine clearance in acute kidney injury.
Mellas, John
2016-05-01
Acute kidney injury (AKI) is a common and serious condition encountered in hospitalized patients. The severity of kidney injury is defined by the RIFLE, AKIN, and KDIGO criteria, which attempt to establish the degree of renal impairment. The KDIGO guidelines state that the creatinine clearance should be measured whenever possible in AKI and that the serum creatinine concentration and creatinine clearance remain the best clinical indicators of renal function. Neither the RIFLE, AKIN, nor KDIGO criteria estimate actual creatinine clearance. Furthermore, there are no accepted methods for accurately estimating creatinine clearance (K) in AKI. The present study describes a unique method for estimating K in AKI using urine creatinine excretion over an established time interval (E), an estimate of creatinine production over the same time interval (P), and the estimated static glomerular filtration rate (sGFR) at time zero, utilizing the CKD-EPI formula. Using these variables, estimated creatinine clearance (Ke) = (E/P) * sGFR. The method was tested for validity using simulated patients, where actual creatinine clearance (Ka) was compared to Ke in several patients, both male and female, and of various ages, body weights, and degrees of renal impairment. These measurements were made at several serum creatinine concentrations in an attempt to determine the accuracy of this method in the non-steady state. In addition, E/P and Ke were calculated in hospitalized patients with AKI seen in nephrology consultation by the author. In these patients the accuracy of the method was determined by looking at the following metrics: E/P > 1, E/P < 1, and E = P, in an attempt to predict progressive azotemia, recovering azotemia, or stabilization in the level of azotemia, respectively. In addition, it was determined whether Ke < 10 ml/min agreed with Ka and whether patients with AKI on renal replacement therapy could safely terminate dialysis if Ke was greater than 5 ml/min. In the simulated patients there
NASA Technical Reports Server (NTRS)
Kraft, Ralph P.; Burrows, David N.; Nousek, John A.
1991-01-01
Two different methods, classical and Bayesian, for determining confidence intervals involving Poisson-distributed data are compared. Particular consideration is given to cases where the number of counts observed is small and is comparable to the mean number of background counts. Reasons for preferring the Bayesian over the classical method are given. Tables of confidence limits calculated by the Bayesian method are provided for quick reference.
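The Bayesian construction can be sketched numerically: with n observed counts and known mean background b, the posterior for the source rate s under a flat prior on s >= 0 is proportional to exp(-(s+b)) * (s+b)^n. The equal-tailed interval below is an illustrative choice; the paper tabulates a specific interval definition, so its bounds will differ somewhat:

```python
import math

def poisson_credible_interval(n, b, cl=0.90, s_max=40.0, steps=20000):
    """Equal-tailed Bayesian credible interval for a source rate s, given
    n observed counts and known mean background b (numerical sketch)."""
    ds = s_max / steps
    log_nfact = math.lgamma(n + 1)
    grid = [i * ds for i in range(steps + 1)]
    dens = [math.exp(-(s + b) + n * math.log(s + b) - log_nfact)
            if s + b > 0 else 0.0 for s in grid]
    total = sum(dens) * ds                      # normalize over s >= 0
    acc, cdf = 0.0, []
    for d in dens:
        acc += d * ds / total
        cdf.append(acc)
    lo = next(s for s, c in zip(grid, cdf) if c >= (1 - cl) / 2)
    hi = next(s for s, c in zip(grid, cdf) if c >= 1 - (1 - cl) / 2)
    return lo, hi

# 5 total counts with an expected background of 3: the interval stays
# physical (s >= 0), which is the case the paper emphasizes.
print(poisson_credible_interval(n=5, b=3.0))
```

This is exactly the low-count regime where the classical construction can misbehave and the Bayesian limits remain non-negative.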
Lin, Chen-Yen; Halabi, Susan
2017-01-01
We propose a minimand perturbation method to derive the confidence regions for the regularized estimators for the Cox's proportional hazards model. Although the regularized estimation procedure produces a more stable point estimate, it remains challenging to provide an interval estimator or an analytic variance estimator for the associated point estimate. Based on the sandwich formula, the current variance estimator provides a simple approximation, but its finite sample performance is not entirely satisfactory. Besides, the sandwich formula can only provide variance estimates for the non-zero coefficients. In this article, we present a generic description for the perturbation method and then introduce a computation algorithm using the adaptive least absolute shrinkage and selection operator (LASSO) penalty. Through simulation studies, we demonstrate that our method can better approximate the limiting distribution of the adaptive LASSO estimator and produces more accurate inference compared with the sandwich formula. The simulation results also indicate the possibility of extending the applications to the adaptive elastic-net penalty. We further demonstrate our method using data from a phase III clinical trial in prostate cancer.
Arsenault, Lisa N; Xu, Kathleen; Taveras, Elsie M; Hacker, Karen A
2014-01-01
Successful childhood obesity interventions frequently focus on behavioral modification and involve parents or family members. Parental confidence in supporting behavior change may be an element of successful family-based prevention efforts. We aimed to determine whether parents' own obesity-related behaviors were related to their confidence in supporting their child's achievement of obesity-related behavioral goals. Cross-sectional analyses of data collected at baseline of a randomized control trial testing a treatment intervention for obese children (n = 787) in primary care settings (n = 14). Five obesity-related behaviors (physical activity, screen time, sugar-sweetened beverage, sleep duration, fast food) were self-reported by parents for themselves and their child. Behaviors were dichotomized on the basis of achievement of behavioral goals. Five confidence questions asked how confident the parent was in helping their child achieve each goal. Logistic regression modeling high confidence was conducted with goal achievement and demographics as independent variables. Parents achieving physical activity or sleep duration goals were significantly more likely to be highly confident in supporting their child's achievement of those goals (physical activity, odds ratio 1.76; 95% confidence interval 1.19-2.60; sleep, odds ratio 1.74; 95% confidence interval 1.09-2.79) independent of sociodemographic variables and child's current behavior. Parental achievements of TV watching and fast food goals were also associated with confidence, but significance was attenuated after child's behavior was included in models. Parents' own obesity-related behaviors are factors that may affect their confidence to support their child's behavior change. Providers seeking to prevent childhood obesity should address parent/family behaviors as part of their obesity prevention strategies. Copyright © 2014 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.
Optimal Measurement Interval for Emergency Department Crowding Estimation Tools.
Wang, Hao; Ojha, Rohit P; Robinson, Richard D; Jackson, Bradford E; Shaikh, Sajid A; Cowden, Chad D; Shyamanand, Rath; Leuck, JoAnna; Schrader, Chet D; Zenarosa, Nestor R
2017-11-01
Emergency department (ED) crowding is a barrier to timely care. Several crowding estimation tools have been developed to facilitate early identification of and intervention for crowding. Nevertheless, the ideal frequency is unclear for measuring ED crowding by using these tools. Short intervals may be resource intensive, whereas long ones may not be suitable for early identification. Therefore, we aim to assess whether outcomes vary by measurement interval for 4 crowding estimation tools. Our eligible population included all patients between July 1, 2015, and June 30, 2016, who were admitted to the JPS Health Network ED, which serves an urban population. We generated 1-, 2-, 3-, and 4-hour ED crowding scores for each patient, using 4 crowding estimation tools (National Emergency Department Overcrowding Scale [NEDOCS], Severely Overcrowded, Overcrowded, and Not Overcrowded Estimation Tool [SONET], Emergency Department Work Index [EDWIN], and ED Occupancy Rate). Our outcomes of interest included ED length of stay (minutes) and left without being seen or eloped within 4 hours. We used accelerated failure time models to estimate interval-specific time ratios and corresponding 95% confidence limits for length of stay, in which the 1-hour interval was the reference. In addition, we used binomial regression with a log link to estimate risk ratios (RRs) and corresponding confidence limit for left without being seen. Our study population comprised 117,442 patients. The time ratios for length of stay were similar across intervals for each crowding estimation tool (time ratio=1.37 to 1.30 for NEDOCS, 1.44 to 1.37 for SONET, 1.32 to 1.27 for EDWIN, and 1.28 to 1.23 for ED Occupancy Rate). The RRs of left without being seen differences were also similar across intervals for each tool (RR=2.92 to 2.56 for NEDOCS, 3.61 to 3.36 for SONET, 2.65 to 2.40 for EDWIN, and 2.44 to 2.14 for ED Occupancy Rate). Our findings suggest limited variation in length of stay or left without being
Development of an Interval Management Algorithm Using Ground Speed Feedback for Delayed Traffic
NASA Technical Reports Server (NTRS)
Barmore, Bryan E.; Swieringa, Kurt A.; Underwood, Matthew C.; Abbott, Terence; Leonard, Robert D.
2016-01-01
One of the goals of NextGen is to enable frequent use of Optimized Profile Descents (OPD) for aircraft, even during periods of peak traffic demand. NASA is currently testing three new technologies that enable air traffic controllers to use speed adjustments to space aircraft during arrival and approach operations. This will allow an aircraft to remain close to their OPD. During the integration of these technologies, it was discovered that, due to a lack of accurate trajectory information for the leading aircraft, Interval Management aircraft were exhibiting poor behavior. NASA's Interval Management algorithm was modified to address the impact of inaccurate trajectory information and a series of studies were performed to assess the impact of this modification. These studies show that the modification provided some improvement when the Interval Management system lacked accurate trajectory information for the leading aircraft.
Brewer, Neil; Wells, Gary L
2006-03-01
Discriminating accurate from mistaken eyewitness identifications is a major issue facing criminal justice systems. This study examined whether eyewitness confidence assists such decisions under a variety of conditions using a confidence-accuracy (CA) calibration approach. Participants (N = 1,200) viewed a simulated crime and attempted 2 separate identifications from 8-person target-present or target-absent lineups. Confidence and accuracy were calibrated for choosers (but not nonchoosers) for both targets under all conditions. Lower overconfidence was associated with higher diagnosticity, lower target-absent base rates, and shorter identification latencies. Although researchers agree that courtroom expressions of confidence are uninformative, our findings indicate that confidence assessments obtained immediately after a positive identification can provide a useful guide for investigators about the likely accuracy of an identification.
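Confidence-accuracy calibration compares stated confidence with obtained accuracy, overall and within confidence bins; overconfidence is the amount by which mean confidence exceeds accuracy. A sketch with hypothetical data (not the study's):

```python
def calibration_stats(confidences, correct):
    """Overconfidence and per-bin calibration.

    confidences -- judgments on a 0-1 scale; correct -- 0/1 outcomes.
    Overconfidence > 0 means stated confidence exceeds obtained accuracy.
    """
    mean_conf = sum(confidences) / len(confidences)
    accuracy = sum(correct) / len(correct)
    bins = {}
    for c, k in zip(confidences, correct):
        bins.setdefault(round(c, 1), []).append(k)
    per_bin_accuracy = {b: sum(v) / len(v) for b, v in sorted(bins.items())}
    return mean_conf - accuracy, per_bin_accuracy

# Hypothetical identifications: perfect calibration would make each
# bin's accuracy equal to its confidence level.
conf = [0.9, 0.9, 0.7, 0.7, 0.5, 0.5]
corr = [1, 1, 1, 0, 1, 0]
print(calibration_stats(conf, corr))
```

In a full calibration analysis these per-bin accuracies are plotted against confidence, as in the choosers/nonchoosers comparisons above.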
Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A
2015-05-01
Biomechanical processes are often manifested as one-dimensional (1D) trajectories. It has been shown that 1D confidence intervals (CIs) are biased when based on 0D statistical procedures, and the non-parametric 1D bootstrap CI has emerged in the Biomechanics literature as a viable solution. The primary purpose of this paper was to clarify that, for 1D biomechanics datasets, the distinction between 0D and 1D methods is much more important than the distinction between parametric and non-parametric procedures. A secondary purpose was to demonstrate that a parametric equivalent to the 1D bootstrap exists in the form of a random field theory (RFT) correction for multiple comparisons. To emphasize these points we analyzed six datasets consisting of force and kinematic trajectories in one-sample, paired, two-sample and regression designs. Results showed, first, that the 1D bootstrap and other 1D non-parametric CIs were qualitatively identical to RFT CIs, and all were very different from 0D CIs. Second, 1D parametric and 1D non-parametric hypothesis testing results were qualitatively identical for all six datasets. Last, we highlight the limitations of 1D CIs by demonstrating that they are complex, design-dependent, and thus non-generalizable. These results suggest that (i) analyses of 1D data based on 0D models of randomness are generally biased unless one explicitly identifies 0D variables before the experiment, and (ii) parametric and non-parametric 1D hypothesis testing provide an unambiguous framework for analysis when one's hypothesis explicitly or implicitly pertains to whole 1D trajectories. Copyright © 2015 Elsevier Ltd. All rights reserved.
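The pointwise (0D) versus simultaneous (1D) distinction can be illustrated with a max-statistic bootstrap: resample subjects and use the bootstrap distribution of the maximum standardized deviation across time nodes. This is a sketch of the simultaneous idea only, not the paper's exact RFT or published bootstrap procedure:

```python
import numpy as np

def bootstrap_ci_1d(trajectories, alpha=0.05, n_boot=1000, seed=0):
    """Simultaneous bootstrap CI for a mean 1D trajectory via the
    bootstrap distribution of the max absolute t-like deviation."""
    rng = np.random.default_rng(seed)
    J, Q = trajectories.shape                     # subjects x time nodes
    mean = trajectories.mean(axis=0)
    sem = trajectories.std(axis=0, ddof=1) / np.sqrt(J)
    zmax = np.empty(n_boot)
    for i in range(n_boot):
        samp = trajectories[rng.integers(0, J, J)]  # resample subjects
        m = samp.mean(axis=0)
        s = samp.std(axis=0, ddof=1) / np.sqrt(J)
        zmax[i] = np.max(np.abs((m - mean) / s))
    z_crit = np.quantile(zmax, 1 - alpha)         # simultaneous critical value
    return mean - z_crit * sem, mean + z_crit * sem

# Synthetic force-like trajectories: 20 subjects, 101 time nodes.
rng = np.random.default_rng(1)
data = np.sin(np.linspace(0, np.pi, 101)) + rng.normal(0, 0.2, (20, 101))
lo, hi = bootstrap_ci_1d(data)
print(lo.shape, hi.shape)
```

Because z_crit controls the maximum deviation over all nodes at once, the band is wider than a pointwise 0D CI, which is the bias the paper describes.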
Favazzo, Lacey; Willford, John D.; Watson, Rachel M.
2014-01-01
Knowledge surveys are a type of confidence survey in which students rate their confidence in their ability to answer questions rather than answering the questions. These surveys have been discussed as a tool to evaluate student in-class or curriculum-wide learning. However, disagreement exists as to whether confidence is actually an accurate measure of knowledge. With the concomitant goals of assessing content-based learning objectives and addressing this disagreement, we present herein a pretest/posttest knowledge survey study that demonstrates a significant difference in correctness on graded test questions at different levels of reported confidence over a multi-semester timeframe. Questions were organized by Bloom's taxonomy, allowing for the data collected to further provide statistical analyses on strengths and deficits in various levels of Bloom's reasoning with regard to mean correctness. Collectively, students showed increasing confidence and correctness in all levels of thought but struggled with synthesis-level questions. However, when students were only asked to rate confidence and not answer the accompanying test questions, they reported significantly higher confidence than the control group, which was asked to do both. This indicates that when students do not attempt to answer questions, they have significantly greater confidence in their ability to answer those questions. Additionally, when students rate only confidence without answering the question, resolution across Bloom's levels of reasoning is lost. Based upon our findings, knowledge surveys can be an effective tool for assessment of both breadth and depth of knowledge, but may require students to answer questions in addition to rating confidence to provide the most accurate data. PMID:25574291
Abstract: Inference and Interval Estimation for Indirect Effects With Latent Variable Models.
Falk, Carl F; Biesanz, Jeremy C
2011-11-30
Models specifying indirect effects (or mediation) and structural equation modeling are both popular in the social sciences. Yet relatively little research has compared methods that test for indirect effects among latent variables and provided precise estimates of the effectiveness of different methods. This simulation study provides an extensive comparison of methods for constructing confidence intervals and for making inferences about indirect effects with latent variables. We compared the percentile (PC) bootstrap, bias-corrected (BC) bootstrap, bias-corrected accelerated (BCa) bootstrap, likelihood-based confidence intervals (Neale & Miller, 1997), partial posterior predictive (Biesanz, Falk, and Savalei, 2010), and joint significance tests based on Wald tests or likelihood ratio tests. All models included three reflective latent variables representing the independent, dependent, and mediating variables. The design included the following fully crossed conditions: (a) sample size: 100, 200, and 500; (b) number of indicators per latent variable: 3 versus 5; (c) reliability per set of indicators: .7 versus .9; (d) and 16 different path combinations for the indirect effect (α = 0, .14, .39, or .59; and β = 0, .14, .39, or .59). Simulations were performed using a WestGrid cluster of 1680 3.06 GHz Intel Xeon processors running R and OpenMx. Results based on 1,000 replications per cell and 2,000 resamples per bootstrap method indicated that the BC and BCa bootstrap methods have inflated Type I error rates. Likelihood-based confidence intervals and the PC bootstrap emerged as methods that adequately control Type I error and have good coverage rates.
ERIC Educational Resources Information Center
Roderer, Thomas; Roebers, Claudia M.
2010-01-01
In the present study, primary school children's ability to give accurate confidence judgments (CJ) was addressed, with a special focus on uncertainty monitoring. In order to investigate the effects of memory retrieval processes on monitoring judgments, item difficulty in a vocabulary learning task (Japanese symbols) was manipulated. Moreover, as a…
Austin, Zubin
2013-01-01
Background: Despite the changing role of the pharmacist in patient-centred practice, pharmacists anecdotally reported little confidence in their clinical decision-making skills and do not feel responsible for their patients. Observational findings have suggested these trends within the profession, but there is a paucity of evidence to explain why. We conducted an exploratory study with an objective to identify reasons for the lack of responsibility and/or confidence in various pharmacy practice settings. Methods: Pharmacist interviews were conducted via written response, face-to-face or telephone. Seven questions were asked on the topic of responsibility and confidence as it applies to pharmacy practice and how pharmacists think these themes differ in medicine. Interview transcripts were analyzed and divided by common theme. Quotations to support these themes are presented. Results: Twenty-nine pharmacists were asked to participate, and 18 responded (62% response rate). From these interviews, 6 themes were identified as barriers to confidence and responsibility: hierarchy of the medical system, role definitions, evolution of responsibility, ownership of decisions for confidence building, quality and consequences of mentorship and personality traits upon admission. Discussion: We identified 6 potential barriers to the development of pharmacists’ self-confidence and responsibility. These findings have practical applicability for educational research, future curriculum changes, experiential learning structure and pharmacy practice. Due to bias and the limitations of this form of exploratory research and small sample size, evidence should be interpreted cautiously. Conclusion: Pharmacists feel neither responsible nor confident for their clinical decisions due to social, educational, experiential and personal reasons. Can Pharm J 2013;146:155-161. PMID:23795200
ERIC Educational Resources Information Center
McCabe, David P.; Soderstrom, Nicholas C.
2011-01-01
Five experiments were conducted to examine whether the nature of the information that is monitored during prospective metamemory judgments affected the relative accuracy of those judgments. We compared item-by-item judgments of learning (JOLs), which involved participants determining how confident they were that they would remember studied items,…
Darrington, Richard T; Jiao, Jim
2004-04-01
Rapid and accurate stability prediction is essential to pharmaceutical formulation development. Commonly used stability prediction methods include monitoring parent drug loss at intended storage conditions or initial rate determination of degradants under accelerated conditions. Monitoring parent drug loss at the intended storage condition does not provide a rapid and accurate stability assessment because often <0.5% drug loss is all that can be observed in a realistic time frame, while the accelerated initial rate method in conjunction with extrapolation of rate constants using the Arrhenius or Eyring equations often introduces large errors in shelf-life prediction. In this study, the shelf life prediction of a model pharmaceutical preparation utilizing sensitive high-performance liquid chromatography-mass spectrometry (LC/MS) to directly quantitate degradant formation rates at the intended storage condition is proposed. This method was compared to traditional shelf life prediction approaches in terms of time required to predict shelf life and associated error in shelf life estimation. Results demonstrated that the proposed LC/MS method using initial rates analysis provided significantly improved confidence intervals for the predicted shelf life and required less overall time and effort to obtain the stability estimation compared to the other methods evaluated. Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association.
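At its core, the initial-rates approach fits degradant formation against time and projects to the specification limit. A sketch with hypothetical data and a zero-order rate assumption (the paper's analysis additionally propagates the regression uncertainty into shelf-life confidence intervals):

```python
import math

def initial_rate_shelf_life(times, degradant_pct, spec_limit=0.5):
    """Least-squares initial rate of degradant formation (zero-order
    assumed) and the projected time to reach a specification limit."""
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(degradant_pct) / n
    sxx = sum((t - tbar) ** 2 for t in times)
    slope = sum((t - tbar) * (y - ybar)
                for t, y in zip(times, degradant_pct)) / sxx
    intercept = ybar - slope * tbar
    sse = sum((y - (intercept + slope * t)) ** 2
              for t, y in zip(times, degradant_pct))
    se_slope = math.sqrt(sse / (n - 2) / sxx)    # standard error of the rate
    shelf_life = (spec_limit - intercept) / slope
    return slope, se_slope, shelf_life

# Hypothetical degradant levels (%) measured monthly at the storage condition.
print(initial_rate_shelf_life([0, 1, 2, 3, 4], [0.000, 0.026, 0.049, 0.076, 0.100]))
```

The advantage described in the abstract comes from measuring the degradant directly and sensitively (LC/MS), so the rate, and hence the projection, is estimated precisely without Arrhenius extrapolation.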
Pereira, Gavin; Jacoby, Peter; de Klerk, Nicholas; Stanley, Fiona J
2014-01-01
Objective To re-evaluate the causal effect of interpregnancy interval on adverse birth outcomes, on the basis that previous studies relying on between mother comparisons may have inadequately adjusted for confounding by maternal risk factors. Design Retrospective cohort study using conditional logistic regression (matching two intervals per mother so each mother acts as her own control) to model the incidence of adverse birth outcomes as a function of interpregnancy interval; additional unconditional logistic regression with adjustment for confounders enabled comparison with the unmatched design of previous studies. Setting Perth, Western Australia, 1980-2010. Participants 40 441 mothers who each delivered three liveborn singleton neonates. Main outcome measures Preterm birth (<37 weeks), small for gestational age birth (<10th centile of birth weight by sex and gestational age), and low birth weight (<2500 g). Results Within mother analysis of interpregnancy intervals indicated a much weaker effect of short intervals on the odds of preterm birth and low birth weight compared with estimates generated using a traditional between mother analysis. The traditional unmatched design estimated an adjusted odds ratio for an interpregnancy interval of 0-5 months (relative to the reference category of 18-23 months) of 1.41 (95% confidence interval 1.31 to 1.51) for preterm birth, 1.26 (1.15 to 1.37) for low birth weight, and 0.98 (0.92 to 1.06) for small for gestational age birth. In comparison, the matched design showed a much weaker effect of short interpregnancy interval on preterm birth (odds ratio 1.07, 0.86 to 1.34) and low birth weight (1.03, 0.79 to 1.34), and the effect for small for gestational age birth remained small (1.08, 0.87 to 1.34). Both the unmatched and matched models estimated a high odds of small for gestational age birth and low birth weight for long interpregnancy intervals (longer than 59 months), but the estimated effect of long interpregnancy
Approaches for the accurate definition of geological time boundaries
NASA Astrophysics Data System (ADS)
Schaltegger, Urs; Baresel, Björn; Ovtcharova, Maria; Goudemand, Nicolas; Bucher, Hugo
2015-04-01
Which strategies lead to the most precise and accurate date of a given geological boundary? Geological units are usually defined by the occurrence of characteristic taxa and hence boundaries between these geological units correspond to dramatic faunal and/or floral turnovers and they are primarily defined using first or last occurrences of index species, or ideally by the separation interval between two consecutive, characteristic associations of fossil taxa. These boundaries need to be defined in a way that enables their worldwide recognition and correlation across different stratigraphic successions, using tools as different as bio-, magneto-, and chemo-stratigraphy, and astrochronology. Sedimentary sequences can be dated in numerical terms by applying high-precision chemical-abrasion, isotope-dilution, thermal-ionization mass spectrometry (CA-ID-TIMS) U-Pb age determination to zircon (ZrSiO4) in intercalated volcanic ashes. But, though volcanic activity is common in geological history, ashes are not necessarily close to the boundary we would like to date precisely and accurately. In addition, U-Pb zircon data sets may be very complex and difficult to interpret in terms of the age of ash deposition. To overcome these difficulties we use a multi-proxy approach we applied to the precise and accurate dating of the Permo-Triassic and Early-Middle Triassic boundaries in South China. a) Dense sampling of ashes across the critical time interval and a sufficiently large number of analysed zircons per ash sample can guarantee the recognition of all system complexities. Geochronological datasets from U-Pb dating of volcanic zircon may indeed combine effects of i) post-crystallization Pb loss from percolation of hydrothermal fluids (even using chemical abrasion), with ii) age dispersion from prolonged residence of earlier crystallized zircon in the magmatic system. As a result, U-Pb dates of individual zircons are both apparently younger and older than the depositional age
Assessing Interval Estimation Methods for Hill Model ...
The Hill model of concentration-response is ubiquitous in toxicology, perhaps because its parameters directly relate to biologically significant metrics of toxicity such as efficacy and potency. Point estimates of these parameters obtained through least squares regression or maximum likelihood are commonly used in high-throughput risk assessment, but such estimates typically fail to include reliable information concerning confidence in (or precision of) the estimates. To address this issue, we examined methods for assessing uncertainty in Hill model parameter estimates derived from concentration-response data. In particular, using a sample of ToxCast concentration-response data sets, we applied four methods for obtaining interval estimates that are based on asymptotic theory, bootstrapping (two varieties), and Bayesian parameter estimation, and then compared the results. These interval estimation methods generally did not agree, so we devised a simulation study to assess their relative performance. We generated simulated data by constructing four statistical error models capable of producing concentration-response data sets comparable to those observed in ToxCast. We then applied the four interval estimation methods to the simulated data and compared the actual coverage of the interval estimates to the nominal coverage (e.g., 95%) in order to quantify performance of each of the methods in a variety of cases (i.e., different values of the true Hill model paramet
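The bootstrap variety of interval estimation compared above can be sketched as follows. This is a minimal illustration, not the ToxCast pipeline: the Hill function parameterization, the concentration grid, and the noise level are all assumptions chosen for demonstration.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def hill(c, top, ac50, n):
    # Hill concentration-response model: response = top * c^n / (ac50^n + c^n)
    return top * c**n / (ac50**n + c**n)

# Hypothetical concentration grid and simulated noisy responses
conc = np.logspace(-2, 2, 9)
resp = hill(conc, 100.0, 1.0, 1.5) + rng.normal(0.0, 5.0, conc.size)

# Least-squares point estimates of (top, AC50, n)
popt, _ = curve_fit(hill, conc, resp, p0=[100.0, 1.0, 1.0], maxfev=10000)

# Case-resampling bootstrap: refit on resampled (conc, resp) pairs
boot = []
for _ in range(500):
    idx = rng.integers(0, conc.size, conc.size)
    try:
        b, _ = curve_fit(hill, conc[idx], resp[idx], p0=popt, maxfev=10000)
        boot.append(b)
    except (RuntimeError, ValueError):
        continue  # skip resamples where the fit fails to converge
boot = np.asarray(boot)

lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
for name, est, l, h in zip(["top", "AC50", "n"], popt, lo, hi):
    print(f"{name}: {est:.2f}, 95% percentile CI [{l:.2f}, {h:.2f}]")
```

Percentile-bootstrap intervals such as these are one of the two bootstrap varieties a study like this could compare against asymptotic and Bayesian intervals.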
ESTABLISHMENT OF A FIBRINOGEN REFERENCE INTERVAL IN ORNATE BOX TURTLES (TERRAPENE ORNATA ORNATA).
Parkinson, Lily; Olea-Popelka, Francisco; Klaphake, Eric; Dadone, Liza; Johnston, Matthew
2016-09-01
This study sought to establish a reference interval for fibrinogen in healthy ornate box turtles (Terrapene ornata ornata). A total of 48 turtles were enrolled, with 42 turtles deemed to be noninflammatory, thus fitting the inclusion criteria, and utilized to estimate a fibrinogen reference interval. Turtles were excluded based upon physical examination and blood work abnormalities. A Shapiro-Wilk normality test indicated that the noninflammatory turtle fibrinogen values were normally distributed (Gaussian distribution), with an average of 108 mg/dl and a 95% confidence interval of the mean of 97.9-117 mg/dl. Those turtles excluded from the reference interval because of abnormalities affecting their health did not have significantly different fibrinogen values (P = 0.313). A reference interval for healthy ornate box turtles was calculated. Further investigation into the utility of fibrinogen measurement for clinical usage in ornate box turtles is warranted.
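Note that the 95% confidence interval of the mean quoted above is much narrower than a 95% reference interval for individual values. A minimal sketch of both computations; the simulated values below mimic the reported mean and are not the actual turtle data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical fibrinogen values (mg/dl), drawn to mimic the reported mean
fib = rng.normal(108.0, 25.0, 42)

n, mean, sd = fib.size, fib.mean(), fib.std(ddof=1)

# Shapiro-Wilk normality check, as in the abstract
w_stat, p_norm = stats.shapiro(fib)

# 95% confidence interval of the MEAN (narrow; where the population mean lies)
tcrit = stats.t.ppf(0.975, n - 1)
ci = (mean - tcrit * sd / np.sqrt(n), mean + tcrit * sd / np.sqrt(n))

# 95% reference interval (wide; where ~95% of healthy individuals lie)
ri = (mean - 1.96 * sd, mean + 1.96 * sd)

print(f"mean {mean:.1f} mg/dl, CI of mean [{ci[0]:.1f}, {ci[1]:.1f}], "
      f"reference interval [{ri[0]:.1f}, {ri[1]:.1f}]")
```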
Xie, Bin; Yan, Xianfeng
2017-01-01
Purpose. The aim of this study was to compare the effects of high-intensity interval training (INTERVAL) and moderate-intensity continuous training (CONTINUOUS) on aerobic capacity in cardiac patients. Methods. A meta-analysis identified by searching the PubMed, Cochrane Library, EMBASE, and Web of Science databases from inception through December 2016 compared the effects of INTERVAL and CONTINUOUS among cardiac patients. Results. Twenty-one studies involving 736 participants with cardiac diseases were included. Compared with CONTINUOUS, INTERVAL was associated with greater improvement in peak VO2 (mean difference 1.76 mL/kg/min, 95% confidence interval 1.06 to 2.46 mL/kg/min, p < 0.001) and VO2 at AT (mean difference 0.90 mL/kg/min, 95% confidence interval 0.0 to 1.72 mL/kg/min, p = 0.03). No significant difference between the INTERVAL and CONTINUOUS groups was observed in terms of peak heart rate, peak minute ventilation, VE/VCO2 slope and respiratory exchange ratio, body mass, systolic or diastolic blood pressure, triglyceride or low- or high-density lipoprotein cholesterol level, flow-mediated dilation, or left ventricular ejection fraction. Conclusions. This study showed that INTERVAL improves aerobic capacity more effectively than does CONTINUOUS in cardiac patients. Further studies with larger samples are needed to confirm our observations. PMID:28386556
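The fixed-effect inverse-variance pooling of mean differences used in meta-analyses like this one can be sketched as follows; the per-study values below are invented for illustration and are not the trial data from the review:

```python
import numpy as np

def pooled_mean_difference(md, ci_lo, ci_hi, z=1.96):
    """Fixed-effect inverse-variance pooling of per-study mean
    differences, given each study's 95% confidence interval."""
    md, ci_lo, ci_hi = map(np.asarray, (md, ci_lo, ci_hi))
    se = (ci_hi - ci_lo) / (2 * z)      # back out standard errors from the CIs
    w = 1.0 / se**2                     # inverse-variance weights
    pooled = (w * md).sum() / w.sum()
    se_pooled = np.sqrt(1.0 / w.sum())
    return pooled, pooled - z * se_pooled, pooled + z * se_pooled

# Hypothetical peak VO2 mean differences (mL/kg/min) from three studies
est, lo, hi = pooled_mean_difference(
    md=[1.5, 2.0, 1.2], ci_lo=[0.5, 0.8, -0.2], ci_hi=[2.5, 3.2, 2.6])
print(f"pooled MD {est:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

In practice a random-effects model would also be considered when between-study heterogeneity is present.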
Exact intervals and tests for the median when one sample value is possibly an outlier
NASA Technical Reports Server (NTRS)
Keller, G. J.; Walsh, J. E.
1973-01-01
Available are independent observations (continuous data) believed to be a random sample. Desired are distribution-free confidence intervals and significance tests for the population median. However, there is the possibility that either the smallest or the largest observation is an outlier. Then, use of a procedure for rejection of an outlying observation might seem appropriate. Such a procedure would consider that two alternative situations are possible and would select one of them: either (1) the n observations are truly a random sample, or (2) an outlier exists and its removal leaves a random sample of size n-1. For either situation, confidence intervals and tests are desired for the median of the population yielding the random sample. Unfortunately, satisfactory rejection procedures of a distribution-free nature do not seem to be available. Moreover, all rejection procedures impose undesirable conditional effects on the observations and can select the wrong one of the two situations. It is found that two-sided intervals and tests based on two symmetrically located order statistics (not the largest and smallest) of the n observations remain valid under either situation.
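The exact, distribution-free coverage of an interval between two symmetric order statistics follows from the binomial distribution: each observation falls below the median with probability 1/2, so the count below the median is Binomial(n, 1/2). A sketch of the coverage computation, with the sample size chosen for illustration:

```python
import math

def median_ci_coverage(n, k):
    # Exact coverage of [X_(k), X_(n+1-k)] for the population median:
    # B = #{observations below the median} ~ Binomial(n, 1/2), and the
    # interval covers the median exactly when k <= B <= n - k.
    return sum(math.comb(n, i) for i in range(k, n - k + 1)) * 0.5 ** n

# For n = 20, scan the symmetric order-statistic pairs
n = 20
for k in range(1, n // 2 + 1):
    print(f"k = {k}: coverage = {median_ci_coverage(n, k):.4f}")
```

For n = 20, the interval from the 6th-smallest to the 6th-largest observation has exact coverage of about 95.9%, regardless of the underlying continuous distribution.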
Effect Sizes and their Intervals: The Two-Level Repeated Measures Case
ERIC Educational Resources Information Center
Algina, James; Keselman, H. J.; Penfield, Randall D.
2005-01-01
Probability coverage for eight different confidence intervals (CIs) of measures of effect size (ES) in a two-level repeated measures design was investigated. The CIs and measures of ES differed with regard to whether they used least squares or robust estimates of central tendency and variability, whether the end critical points of the interval…
Fransen, K; Steffens, N K; Haslam, S A; Vanbeselaere, N; Vande Broek, G; Boen, F
2016-12-01
The present research examines the impact of leaders' confidence in their team on the team confidence and performance of their teammates. In an experiment involving newly assembled soccer teams, we manipulated the team confidence expressed by the team leader (high vs neutral vs low) and assessed team members' responses and performance as they unfolded during a competition (i.e., in a first baseline session and a second test session). Our findings pointed to team confidence contagion such that when the leader had expressed high (rather than neutral or low) team confidence, team members perceived their team to be more efficacious and were more confident in the team's ability to win. Moreover, leaders' team confidence affected individual and team performance such that teams led by a highly confident leader performed better than those led by a less confident leader. Finally, the results supported a hypothesized mediational model in showing that the effect of leaders' confidence on team members' team confidence and performance was mediated by the leader's perceived identity leadership and members' team identification. In conclusion, the findings of this experiment suggest that leaders' team confidence can enhance members' team confidence and performance by fostering members' identification with the team. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Sauerland, Melanie; Sagana, Anna; Sporer, Siegfried L
2012-10-01
While recent research has shown that the accuracy of positive identification decisions can be assessed via confidence and decision times, gauging lineup rejections has been less successful. The current study focused on 2 different aspects which are inherent in lineup rejections. First, we hypothesized that decision times and confidence ratings should be postdictive of identification rejections if they refer to a single lineup member only. Second, we hypothesized that dividing nonchoosers according to the reasons they provided for their decisions can serve as a useful postdictor for nonchoosers' accuracy. To test these assumptions, we used (1) 1-person lineups (showups) in order to obtain confidence and response time measures referring to a single lineup member, and (2) asked nonchoosers about their reasons for making a rejection. Three hundred and eighty-four participants were asked to identify 2 different persons after watching 1 of 2 stimulus films. The results supported our hypotheses. Nonchoosers' postdecision confidence ratings were well-calibrated. Likewise, we successfully established optimum time and confidence boundaries for nonchoosers. Finally, combinations of postdictors increased the number of accurate classifications compared with individual postdictors. PsycINFO Database Record (c) 2012 APA, all rights reserved.
Exact nonparametric confidence bands for the survivor function.
Matthews, David
2013-10-12
A method to produce exact simultaneous confidence bands for the empirical cumulative distribution function that was first described by Owen, and subsequently corrected by Jager and Wellner, is the starting point for deriving exact nonparametric confidence bands for the survivor function of any positive random variable. We invert a nonparametric likelihood test of uniformity, constructed from the Kaplan-Meier estimator of the survivor function, to obtain simultaneous lower and upper bands for the function of interest with specified global confidence level. The method involves calculating a null distribution and associated critical value for each observed sample configuration. However, Noe recursions and the Van Wijngaarden-Decker-Brent root-finding algorithm provide the necessary tools for efficient computation of these exact bounds. Various aspects of the effect of right censoring on these exact bands are investigated, using as illustrations two observational studies of survival experience among non-Hodgkin's lymphoma patients and a much larger group of subjects with advanced lung cancer enrolled in trials within the North Central Cancer Treatment Group. Monte Carlo simulations confirm the merits of the proposed method of deriving simultaneous interval estimates of the survivor function across the entire range of the observed sample. This research was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada. It was begun while the author was visiting the Department of Statistics, University of Auckland, and completed during a subsequent sojourn at the Medical Research Council Biostatistics Unit in Cambridge. The support of both institutions, in addition to that of NSERC and the University of Waterloo, is greatly appreciated.
Raising Confident Kids (KidsHealth / For Parents)
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2015-01-01
A latent variable modeling procedure that can be used to evaluate intraclass correlation coefficients in two-level settings with discrete response variables is discussed. The approach is readily applied when the purpose is to furnish confidence intervals at prespecified confidence levels for these coefficients in setups with binary or ordinal…
Dynamic visual noise reduces confidence in short-term memory for visual information.
Kemps, Eva; Andrade, Jackie
2012-05-01
Previous research has shown effects of the visual interference technique, dynamic visual noise (DVN), on visual imagery, but not on visual short-term memory, unless retention of precise visual detail is required. This study tested the prediction that DVN does also affect retention of gross visual information, specifically by reducing confidence. Participants performed a matrix pattern memory task with three retention interval interference conditions (DVN, static visual noise and no interference control) that varied from trial to trial. At recall, participants indicated whether or not they were sure of their responses. As in previous research, DVN did not impair recall accuracy or latency on the task, but it did reduce recall confidence relative to static visual noise and no interference. We conclude that DVN does distort visual representations in short-term memory, but standard coarse-grained recall measures are insensitive to these distortions.
Confidence to cook vegetables and the buying habits of Australian households.
Winkler, Elisabeth; Turrell, Gavin
2009-10-01
Cooking skills are emphasized in nutrition promotion but their distribution among population subgroups and relationship to dietary behavior is researched by few population-based studies. This study examined the relationships between confidence to cook, sociodemographic characteristics, and household vegetable purchasing. This cross-sectional study of 426 randomly selected households in Brisbane, Australia, used a validated questionnaire to assess household vegetable purchasing habits and the confidence to cook of the person who most often prepares food for these households. The mutually adjusted odds ratios (ORs) of lacking confidence to cook were assessed across a range of demographic subgroups using multiple logistic regression models. Similarly, mutually adjusted mean vegetable purchasing scores were calculated using multiple linear regression for different population groups and for respondents with varying confidence levels. Lacking confidence to cook using a variety of techniques was more common among respondents with less education (OR 3.30; 95% confidence interval [CI] 1.01 to 10.75) and was less common among respondents who lived with minors (OR 0.22; 95% CI 0.09 to 0.53) and other adults (OR 0.43; 95% CI 0.24 to 0.78). Lack of confidence to prepare vegetables was associated with being male (OR 2.25; 95% CI 1.24 to 4.08), low education (OR 6.60; 95% CI 2.08 to 20.91), lower household income (OR 2.98; 95% CI 1.02 to 8.72) and living with other adults (OR 0.53; 95% CI 0.29 to 0.98). Households bought a greater variety of vegetables on a regular basis when the main chef was confident to prepare them (difference: 18.60; 95% CI 14.66 to 22.54), older (difference: 8.69; 95% CI 4.92 to 12.47), lived with at least one other adult (difference: 5.47; 95% CI 2.82 to 8.12) or at least one minor (difference: 2.86; 95% CI 0.17 to 5.55). Cooking skills may contribute to socioeconomic dietary differences, and may be a useful strategy for promoting fruit and vegetable
Magnetic Resonance Fingerprinting with short relaxation intervals.
Amthor, Thomas; Doneva, Mariya; Koken, Peter; Sommer, Karsten; Meineke, Jakob; Börnert, Peter
2017-09-01
The aim of this study was to investigate a technique for improving the performance of Magnetic Resonance Fingerprinting (MRF) in repetitive sampling schemes, in particular for 3D MRF acquisition, by shortening relaxation intervals between MRF pulse train repetitions. A calculation method for MRF dictionaries adapted to short relaxation intervals and non-relaxed initial spin states is presented, based on the concept of stationary fingerprints. The method is applicable to many different k-space sampling schemes in 2D and 3D. For accuracy analysis, T1 and T2 values of a phantom are determined by single-slice Cartesian MRF for different relaxation intervals and are compared with quantitative reference measurements. The relevance of slice profile effects is also investigated in this case. To further illustrate the capabilities of the method, an application to in-vivo spiral 3D MRF measurements is demonstrated. The proposed computation method enables accurate parameter estimation even for the shortest relaxation intervals, as investigated for different sampling patterns in 2D and 3D. In 2D Cartesian measurements, we achieved a scan acceleration of more than a factor of two, while maintaining acceptable accuracy: the largest T1 values of a sample set deviated from their reference values by 0.3% (longest relaxation interval) and 2.4% (shortest relaxation interval). The largest T2 values showed systematic deviations of up to 10% for all relaxation intervals, which is discussed. The influence of slice profile effects for multislice acquisition is shown to become increasingly relevant for short relaxation intervals. In 3D spiral measurements, a scan time reduction of 36% was achieved, maintaining the quality of in-vivo T1 and T2 maps. Reducing the relaxation interval between MRF sequence repetitions using stationary fingerprint dictionaries is a feasible method to improve the scan efficiency of MRF sequences. The method enables fast implementations of 3D spatially
Brinkman, David J; Tichelaar, Jelle; van Agtmael, Michiel A; de Vries, Theo P G M; Richir, Milan C
2015-07-01
The objective of this study was to investigate the relationship between students' self-reported confidence and their objectively assessed competence in prescribing. We assessed the competence in several prescribing skills of 403 fourth-year medical students at the VU University Medical Center, the Netherlands, in a formative simulated examination on a 10-point scale (1 = very low; 10 = very high). Afterwards, the students were asked to rate their confidence in performing each of the prescribing skills on a 5-point Likert scale (1 = very unsure; 5 = very confident). Their assessments were then compared with their self-confidence ratings. Students' overall prescribing performance was adequate (7.0 ± 0.8), but they lacked confidence in 2 essential prescribing skills. Overall, there was a weak positive correlation (r = 0.2, P < .01, 95%CI 0.1-0.3) between reported confidence and actual competence. Therefore, this study suggests that self-reported confidence is not an accurate measure of prescribing competence, and that students lack insight into their own strengths and weaknesses in prescribing. Future studies should focus on developing validated and reliable instruments so that students can assess their prescribing skills. © 2015, The American College of Clinical Pharmacology.
Age-dependent biochemical quantities: an approach for calculating reference intervals.
Bjerner, J
2007-01-01
A parametric method is often preferred when calculating reference intervals for biochemical quantities, as non-parametric methods are less efficient and require more observations/study subjects. Parametric methods are complicated, however, because of three commonly encountered features. First, biochemical quantities seldom display a Gaussian distribution, and there must either be a transformation procedure to obtain such a distribution or a more complex distribution has to be used. Second, biochemical quantities are often dependent on a continuous covariate, exemplified by rising serum concentrations of MUC1 (episialin, CA15.3) with increasing age. Third, outliers often exert substantial influence on parametric estimations and therefore need to be excluded before calculations are made. The International Federation of Clinical Chemistry (IFCC) currently recommends that confidence intervals be calculated for the reference centiles obtained. However, common statistical packages allowing for the adjustment of a continuous covariate do not make this calculation. In the method described in the current study, Tukey's fence is used to eliminate outliers and two-stage transformations (modulus-exponential-normal) in order to render Gaussian distributions. Fractional polynomials are employed to model functions for mean and standard deviations dependent on a covariate, and the model is selected by maximum likelihood. Confidence intervals are calculated for the fitted centiles by combining parameter estimation and sampling uncertainties. Finally, the elimination of outliers was made dependent on covariates by reiteration. Though a good knowledge of statistical theory is needed when performing the analysis, the current method is rewarding because the results are of practical use in patient care.
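The Tukey's fence step used above for outlier elimination can be sketched as follows; the data values are illustrative, not from the study:

```python
import numpy as np

def tukey_fence(x, k=1.5):
    # Flag values outside [Q1 - k*IQR, Q3 + k*IQR] as outliers
    x = np.asarray(x, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    inside = (x >= q1 - k * iqr) & (x <= q3 + k * iqr)
    return x[inside], x[~inside]

vals = np.array([3.1, 2.9, 3.4, 3.0, 3.2, 9.8, 2.8])
kept, outliers = tukey_fence(vals)
print(kept, outliers)  # 9.8 lies above the upper fence and is flagged
```

In the method described, this elimination is reiterated so that the fences themselves become dependent on the covariate.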
How accurate are resting energy expenditure prediction equations in obese trauma and burn patients?
Stucky, Chee-Chee H; Moncure, Michael; Hise, Mary; Gossage, Clint M; Northrop, David
2008-01-01
While the prevalence of obesity continues to increase in our society, outdated resting energy expenditure (REE) prediction equations may overpredict energy requirements in obese patients. Accurate feeding is essential since overfeeding has been demonstrated to adversely affect outcomes. The first objective was to compare REE calculated by prediction equations to the measured REE in obese trauma and burn patients. Our hypothesis was that an equation using fat-free mass would give a more accurate prediction. The second objective was to consider the effect of a commonly used injury factor on the predicted REE. A retrospective chart review was performed on 28 patients. REE was measured using indirect calorimetry and compared with the Harris-Benedict and Cunningham equations, and an equation using type II diabetes as a factor. Statistical analyses used were paired t test, +/-95% confidence interval, and the Bland-Altman method. Measured average REE in trauma and burn patients was 21.37 +/- 5.26 and 21.81 +/- 3.35 kcal/kg/d, respectively. Harris-Benedict underpredicted REE in trauma and burn patients to the least extent, while the Cunningham equation underpredicted REE in both populations to the greatest extent. Using an injury factor of 1.2, Cunningham continued to underestimate REE in both populations, while the Harris-Benedict and Diabetic equations overpredicted REE in both populations. The measured average REE is significantly less than current guidelines. This finding suggests that a hypocaloric regimen is worth considering for ICU patients. Also, if an injury factor of 1.2 is incorporated in certain equations, patients may be given too many calories.
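The Harris-Benedict prediction and the injury-factor adjustment discussed above can be sketched as follows. The coefficients are the commonly cited original ones and should be verified against a clinical reference before any use; the patient values are hypothetical.

```python
def harris_benedict_ree(weight_kg, height_cm, age_yr, sex, injury_factor=1.0):
    """Resting energy expenditure (kcal/day) from the original
    Harris-Benedict equations, coefficients as commonly cited."""
    if sex == "male":
        ree = 66.5 + 13.75 * weight_kg + 5.003 * height_cm - 6.775 * age_yr
    else:
        ree = 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_yr
    return ree * injury_factor

# Hypothetical 100 kg, 180 cm, 40-year-old male trauma patient
base = harris_benedict_ree(100, 180, 40, "male")                      # ~2071 kcal/day
adjusted = harris_benedict_ree(100, 180, 40, "male", injury_factor=1.2)
print(f"baseline {base:.0f} kcal/day, with injury factor 1.2: {adjusted:.0f}")
```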
Change in Breast Cancer Screening Intervals Since the 2009 USPSTF Guideline.
Wernli, Karen J; Arao, Robert F; Hubbard, Rebecca A; Sprague, Brian L; Alford-Teaster, Jennifer; Haas, Jennifer S; Henderson, Louise; Hill, Deidre; Lee, Christoph I; Tosteson, Anna N A; Onega, Tracy
2017-08-01
In 2009, the U.S. Preventive Services Task Force (USPSTF) recommended biennial mammography for women aged 50-74 years and shared decision-making for women aged 40-49 years for breast cancer screening. We evaluated changes in mammography screening interval after the 2009 recommendations. We conducted a prospective cohort study of women aged 40-74 years who received 821,052 screening mammograms between 2006 and 2012 using data from the Breast Cancer Surveillance Consortium. We compared changes in screening intervals and stratified intervals based on whether the mammogram at the end of the interval occurred before or after the 2009 recommendation. Differences in mean interval length by woman-level characteristics were compared using linear regression. The mean interval (in months) minimally decreased after the 2009 USPSTF recommendations. Among women aged 40-49 years, the mean interval decreased from 17.2 months to 17.1 months (difference -0.16%, 95% confidence interval [CI] -0.30 to -0.01). Similar small reductions were seen for most age groups. The largest change in interval length in the post-USPSTF period was declines among women with a first-degree family history of breast cancer (difference -0.68%, 95% CI -0.82 to -0.54) or a 5-year breast cancer risk ≥2.5% (difference -0.58%, 95% CI -0.73 to -0.44). The 2009 USPSTF recommendation did not lengthen the average mammography interval among women routinely participating in mammography screening. Future studies should evaluate whether breast cancer screening intervals lengthen toward biennial intervals following new national 2016 breast cancer screening recommendations, particularly among women less than 50 years of age.
Time-variant random interval natural frequency analysis of structures
NASA Astrophysics Data System (ADS)
Wu, Binhua; Wu, Di; Gao, Wei; Song, Chongmin
2018-02-01
This paper presents a new robust method, the unified interval Chebyshev-based random perturbation method, to tackle the hybrid random interval structural natural frequency problem. In the proposed approach, the random perturbation method is implemented to furnish the statistical features (i.e., mean and standard deviation), and a Chebyshev surrogate model strategy is incorporated to formulate the statistical information of natural frequency with regard to the interval inputs. The comprehensive analysis framework combines the strengths of both methods so that computational cost is dramatically reduced. The presented method is thus capable of investigating the day-to-day time-variant natural frequency of structures accurately and efficiently under concrete intrinsic creep effect with probabilistic and interval uncertain variables. The extreme bounds of the mean and standard deviation of natural frequency are captured through the optimization strategy embedded within the analysis procedure. Three numerical examples, progressing in both structure type and uncertainty variables, are presented to demonstrate the applicability, accuracy and efficiency of the proposed method.
Aagten-Murphy, David; Cappagli, Giulia; Burr, David
2014-03-01
Expert musicians are able to time their actions accurately and consistently during a musical performance. We investigated how musical expertise influences the ability to reproduce auditory intervals and how this generalises across different techniques and sensory modalities. We first compared various reproduction strategies and interval lengths, to examine the effects in general and to optimise experimental conditions for testing the effect of music, and found that the effects were robust and consistent across different paradigms. Focussing on a 'ready-set-go' paradigm, subjects reproduced time intervals drawn from distributions varying in total length (176, 352 or 704 ms) or in the number of discrete intervals within the total length (3, 5, 11 or 21 discrete intervals). Overall, Musicians performed more veridically than Non-Musicians, and all subjects reproduced auditory-defined intervals more accurately than visually-defined intervals. However, Non-Musicians, particularly with visual stimuli, consistently exhibited a substantial and systematic regression towards the mean interval. When subjects judged intervals from distributions of longer total length they tended to regress more towards the mean, while the ability to discriminate between discrete intervals within the distribution had little influence on subject error. These results are consistent with a Bayesian model that minimizes reproduction errors by incorporating a central tendency prior weighted by the subject's own temporal precision relative to the current distribution of intervals. Finally, a strong correlation was observed between all durations of formal musical training and total reproduction errors in both modalities (accounting for 30% of the variance). Taken together these results demonstrate that formal musical training improves temporal reproduction, and that this improvement transfers from audition to vision. They further demonstrate the flexibility of sensorimotor mechanisms in adapting to
Feng, Dai; Cortese, Giuliana; Baumgartner, Richard
2017-12-01
The receiver operating characteristic (ROC) curve is frequently used as a measure of accuracy of continuous markers in diagnostic tests. The area under the ROC curve (AUC) is arguably the most widely used summary index for the ROC curve. Although the small sample size scenario is common in medical tests, a comprehensive study of small sample size properties of various methods for the construction of the confidence/credible interval (CI) for the AUC has been by and large missing in the literature. In this paper, we describe and compare 29 non-parametric and parametric methods for the construction of the CI for the AUC when the number of available observations is small. The methods considered include not only those that have been widely adopted, but also those that have been less frequently mentioned or, to our knowledge, never applied to the AUC context. To compare different methods, we carried out a simulation study with data generated from binormal models with equal and unequal variances and from exponential models with various parameters and with equal and unequal small sample sizes. We found that the larger the true AUC value and the smaller the sample size, the larger the discrepancy among the results of different approaches. When the model is correctly specified, the parametric approaches tend to outperform the non-parametric ones. Moreover, in the non-parametric domain, we found that a method based on the Mann-Whitney statistic is in general superior to the others. We further elucidate potential issues and provide possible solutions, along with general guidance on CI construction for the AUC when the sample size is small. Finally, we illustrate the utility of different methods through real life examples.
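A Mann-Whitney-based AUC estimate can be sketched as follows, here paired with the Hanley-McNeil asymptotic standard error. This is only one asymptotic variant, not necessarily the specific non-parametric method the authors favoured, and the marker data are simulated:

```python
import numpy as np

def auc_mann_whitney(pos, neg):
    # AUC as the normalized Mann-Whitney U statistic; ties count 1/2
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (pos.size * neg.size)

def auc_ci_hanley_mcneil(pos, neg, z=1.96):
    # Hanley-McNeil (1982) asymptotic standard error for the AUC
    a = auc_mann_whitney(pos, neg)
    n1, n2 = len(pos), len(neg)
    q1, q2 = a / (2 - a), 2 * a**2 / (1 + a)
    var = (a * (1 - a) + (n1 - 1) * (q1 - a**2) + (n2 - 1) * (q2 - a**2)) / (n1 * n2)
    se = np.sqrt(var)
    return a, max(0.0, a - z * se), min(1.0, a + z * se)

rng = np.random.default_rng(2)
pos = rng.normal(1.0, 1.0, 15)   # simulated marker values, diseased group
neg = rng.normal(0.0, 1.0, 15)   # simulated marker values, healthy group
a, lo, hi = auc_ci_hanley_mcneil(pos, neg)
print(f"AUC {a:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

As the paper's simulations suggest, such asymptotic intervals degrade for extreme true AUC values and very small samples, which motivates the comparison against bootstrap and Bayesian alternatives.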
The idiosyncratic nature of confidence
Navajas, Joaquin; Hindocha, Chandni; Foda, Hebah; Keramati, Mehdi; Latham, Peter E; Bahrami, Bahador
2017-01-01
Confidence is the ‘feeling of knowing’ that accompanies decision making. Bayesian theory proposes that confidence is a function solely of the perceived probability of being correct. Empirical research has suggested, however, that different individuals may perform different computations to estimate confidence from uncertain evidence. To test this hypothesis, we collected confidence reports in a task where subjects made categorical decisions about the mean of a sequence. We found that for most individuals, confidence did indeed reflect the perceived probability of being correct. However, in approximately half of them, confidence also reflected a different probabilistic quantity: the perceived uncertainty in the estimated variable. We found that the contribution of both quantities was stable over weeks. We also observed that the influence of the perceived probability of being correct was stable across two tasks, one perceptual and one cognitive. Overall, our findings provide a computational interpretation of individual differences in human confidence. PMID:29152591
Forensic use of the Greulich and Pyle atlas: prediction intervals and relevance.
Chaumoitre, K; Saliba-Serre, B; Adalian, P; Signoli, M; Leonetti, G; Panuel, M
2017-03-01
The Greulich and Pyle (GP) atlas is one of the most frequently used methods of bone age (BA) estimation. Our aim is to assess its accuracy and to calculate the prediction intervals at 95% for forensic use. The study was conducted on a multi-ethnic sample of 2614 individuals (1423 boys and 1191 girls) referred to the university hospital of Marseille (France) for simple injuries. Hand radiographs were analysed using the GP atlas. Reliability of the GP atlas and agreement between BA and chronological age (CA) were assessed, and prediction intervals at 95% were calculated. The repeatability was excellent and the reproducibility was good. Pearson's linear correlation coefficient between CA and BA was 0.983. The mean difference between BA and CA was -0.18 years (boys) and 0.06 years (girls). The prediction interval at 95% for CA was given for each GP category and ranged between 1.2 and more than 4.5 years. The GP atlas is a reproducible and repeatable method that is still accurate for the present population, with a high correlation between BA and CA. The prediction intervals at 95% are wide, reflecting individual variability, and should be known when the method is used in forensic cases. • The GP atlas is still accurate at the present time. • There is a high correlation between bone age and chronological age. • Individual variability must be known when GP is used in forensic cases. • Prediction intervals (95%) are large: around 4 years beyond age 10.
Memory for time and place contributes to enhanced confidence in memories for emotional events
Rimmele, Ulrike; Davachi, Lila; Phelps, Elizabeth A.
2012-01-01
Emotion strengthens the subjective sense of remembering. However, these confidently remembered emotional memories have not been found to be more accurate for some types of contextual details. We investigated whether the subjective sense of recollecting negative stimuli is coupled with enhanced memory accuracy for three specific types of central contextual details using the remember/know paradigm and confidence ratings. Our results indicate that the subjective sense of remembering is indeed coupled with better recollection of spatial location and temporal context. In contrast, we found a double-dissociation between the subjective sense of remembering and memory accuracy for colored dots placed in the conceptual center of negative and neutral scenes. These findings show that the enhanced subjective recollective experience for negative stimuli reliably indicates objective recollection for spatial location and temporal context, but not for other types of details, whereas for neutral stimuli, the subjective sense of remembering is coupled with all the types of details assessed. Translating this finding to flashbulb memories, we found that, over time, more participants correctly remembered the location where they learned about the terrorist attacks on 9/11 than any other canonical feature. Likewise, participants’ confidence was higher in their memory for location vs. other canonical features. These findings indicate that the strong recollective experience of a negative event corresponds to an accurate memory for some kinds of contextual details, but not other kinds. This discrepancy provides further evidence that the subjective sense of remembering negative events is driven by a different mechanism than the subjective sense of remembering neutral events. PMID:22642353
Neural correlates of metacognitive ability and of feeling confident: a large-scale fMRI study.
Molenberghs, Pascal; Trautwein, Fynn-Mathis; Böckler, Anne; Singer, Tania; Kanske, Philipp
2016-12-01
One important aspect of metacognition is the ability to accurately evaluate one's performance. People vary widely in their metacognitive ability and in general are too confident when evaluating their performance. This often leads to poor decision making with potentially disastrous consequences. To further our understanding of the neural underpinnings of these processes, this fMRI study investigated inter-individual differences in metacognitive ability and effects of trial-by-trial variation in subjective feelings of confidence when making metacognitive assessments. Participants (N = 308) evaluated their performance in a high-level social and cognitive reasoning task. The results showed that higher metacognitive accuracy was associated with a decrease in activation in the anterior medial prefrontal cortex, an area previously linked to metacognition on perception and memory. Moreover, the feeling of confidence about one's choices was associated with an increase of activation in reward, memory and motor related areas including bilateral striatum and hippocampus, while less confidence was associated with activation in areas linked with negative affect and uncertainty, including dorsomedial prefrontal and bilateral orbitofrontal cortex. This might indicate that positive affect is related to higher confidence thereby biasing metacognitive decisions towards overconfidence. In support, behavioural analyses revealed that increased confidence was associated with lower metacognitive accuracy. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
The Use of One-Sample Prediction Intervals for Estimating CO2 Scrubber Canister Durations
2012-10-01
Grade and 812 D-Grade Sofnolime. Definitions: according to Devore, a CI (confidence interval) refers to a parameter, or population characteristic, whose value is fixed but unknown to us. In contrast, a future value of Y is not a parameter but instead a random variable; for this
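The CI-versus-prediction-interval distinction quoted from Devore can be made concrete with the usual normal-theory formulas. The canister-duration numbers below are invented for illustration, and 2.262 is the 97.5% Student's t quantile for n - 1 = 9 degrees of freedom:

```python
import math
import statistics

def mean_ci_and_prediction_interval(sample, t_crit):
    """Normal-theory 95% CI for the mean (a fixed parameter) and 95%
    prediction interval for a single future observation (a random Y)."""
    n = len(sample)
    xbar = statistics.mean(sample)
    s = statistics.stdev(sample)
    ci_half = t_crit * s / math.sqrt(n)          # shrinks toward 0 as n grows
    pi_half = t_crit * s * math.sqrt(1 + 1 / n)  # never shrinks below t_crit * s
    return ((xbar - ci_half, xbar + ci_half),
            (xbar - pi_half, xbar + pi_half))

# Hypothetical canister durations (hours)
durations = [41.2, 39.8, 43.1, 40.5, 42.0, 38.9, 44.2, 40.1, 41.7, 42.6]
ci, pi = mean_ci_and_prediction_interval(durations, t_crit=2.262)
```

The prediction interval is always wider than the CI by the factor sqrt(n + 1), which is why it, rather than a CI, is the appropriate tool for bounding the duration of the next canister.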
Levy, A R; Perry, J; Nicholls, A R; Larkin, D; Davies, J
2015-01-01
This study explored the mediating role of sport confidence upon (1) the sources of sport confidence-performance relationship and (2) the imagery-performance relationship. Participants were 157 competitive athletes who completed state measures of confidence level/sources, imagery type and performance within one hour after competition. Among the current sample, confirmatory factor analysis revealed appropriate support for the nine-factor SSCQ and the five-factor SIQ. Mediational analysis revealed that sport confidence had a mediating influence upon the achievement source of confidence-performance relationship. In addition, both cognitive and motivational imagery types were found to be important sources of confidence, as sport confidence mediated the imagery type-performance relationship. Findings indicated that athletes who construed confidence from their own achievements and report multiple images on a more frequent basis are likely to benefit from enhanced levels of state sport confidence and subsequent performance.
ERIC Educational Resources Information Center
Wagstaff, David A.; Elek, Elvira; Kulis, Stephen; Marsiglia, Flavio
2009-01-01
A nonparametric bootstrap was used to obtain an interval estimate of Pearson's "r," and test the null hypothesis that there was no association between 5th grade students' positive substance use expectancies and their intentions to not use substances. The students were participating in a substance use prevention program in which the unit of…
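A pairwise (case-resampling) percentile bootstrap for Pearson's r, of the general kind described above, might look like this. The resampling scheme, replicate count, and data are generic illustrative choices, not details taken from the study:

```python
import math
import random

def pearson_r(x, y):
    """Plain Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def bootstrap_r_ci(x, y, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for Pearson's r: resample (x, y) PAIRS
    with replacement, so the association within each case is kept."""
    rng = random.Random(seed)
    n = len(x)
    reps = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        reps.append(pearson_r([x[i] for i in idx], [y[i] for i in idx]))
    reps.sort()
    lo = reps[int(alpha / 2 * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return pearson_r(x, y), lo, hi
```

Because the interval is read off the empirical distribution of replicates, it needs no normality assumption about the expectancy and intention scores, which is the method's appeal in studies like this one.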
ERIC Educational Resources Information Center
Paek, Insu
2016-01-01
The effect of guessing on the point estimate of coefficient alpha has been studied in the literature, but the impact of guessing and its interactions with other test characteristics on the interval estimators for coefficient alpha has not been fully investigated. This study examined the impact of guessing and its interactions with other test…
Neurophysiology of perceived confidence.
Graziano, Martin; Parra, Lucas C; Sigman, Mariano
2010-01-01
In a partial report paradigm, subjects observe during a brief presentation a cluttered field and after some time - typically ranging from 100 ms to a second - are asked to report a subset of the presented elements. A vast buffer of information is transiently available to be broadcasted which, if not retrieved in time, fades rapidly without reaching consciousness. An interesting feature of this experiment is that objective performance and subjective confidence are decoupled. This makes the paradigm an ideal vehicle to understand the brain dynamics of the construction of confidence. Here we report a high-density EEG experiment in which we infer elements of the EEG response which are indicative of subjective confidence. We find that an early response during encoding partially correlates with perceived confidence. However, the bulk of the weight of subjective confidence is determined during a late, N400-like waveform, during the retrieval stage. This shows that we can find markers of access to internal, subjective states that are uncoupled from objective response and stimulus properties of the task, and we propose that this can be used with decoding methods of EEG to infer subjective mental states.
Confidence in Altman-Bland plots: a critical review of the method of differences.
Ludbrook, John
2010-02-01
1. Altman and Bland argue that the virtue of plotting differences against averages in method-comparison studies is that 95% confidence limits for the differences can be constructed. These allow authors and readers to judge whether one method of measurement could be substituted for another. 2. The technique is often misused. So I have set out, by statistical argument and worked examples, to advise pharmacologists and physiologists how best to construct these limits. 3. First, construct a scattergram of differences on averages, then calculate the line of best fit for the linear regression of differences on averages. If the slope of the regression is shown to differ from zero, there is proportional bias. 4. If there is no proportional bias and if the scatter of differences is uniform (homoscedasticity), construct 'classical' 95% confidence limits. 5. If there is proportional bias yet homoscedasticity, construct hyperbolic 95% confidence limits (prediction interval) around the line of best fit. 6. If there is proportional bias and the scatter of values for differences increases progressively as the average values increase (heteroscedasticity), log-transform the raw values from the two methods and replot differences against averages. If this eliminates proportional bias and heteroscedasticity, construct 'classical' 95% confidence limits. Otherwise, construct horizontal V-shaped 95% confidence limits around the line of best fit of differences on averages or around the weighted least products line of best fit to the original data. 7. In designing a method-comparison study, consult a qualified biostatistician, obey the rules of randomization and make replicate observations.
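Steps 3-4 of Ludbrook's recipe (regress differences on averages to check for proportional bias, then, under homoscedasticity, construct the 'classical' 95% limits) can be sketched as follows; the 1.96 multiplier is the usual normal approximation:

```python
import statistics

def altman_bland_summary(method_a, method_b):
    """Differences-vs-averages summary: the slope of differences
    regressed on averages (a proportional-bias check) and the
    'classical' 95% limits, mean difference +/- 1.96 SD."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    avgs = [(a + b) / 2 for a, b in zip(method_a, method_b)]
    md = statistics.mean(diffs)
    ma = statistics.mean(avgs)
    # Least-squares slope of differences on averages
    sxy = sum((v - ma) * (d - md) for v, d in zip(avgs, diffs))
    sxx = sum((v - ma) ** 2 for v in avgs)
    slope = sxy / sxx
    sd = statistics.stdev(diffs)
    return slope, (md - 1.96 * sd, md + 1.96 * sd)
```

A near-zero slope supports using the classical limits; a slope clearly different from zero signals proportional bias, for which Ludbrook prescribes the hyperbolic prediction-interval limits instead.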
Perri, Amanda M.; O’Sullivan, Terri L.; Harding, John C.S.; Wood, R. Darren; Friendship, Robert M.
2017-01-01
The evaluation of pig hematology and biochemistry parameters is rarely done largely due to the costs associated with laboratory testing and labor, and the limited availability of reference intervals needed for interpretation. Within-herd and between-herd biological variation of these values also make it difficult to establish reference intervals. Regardless, baseline reference intervals are important to aid veterinarians in the interpretation of blood parameters for the diagnosis and treatment of diseased swine. The objective of this research was to provide reference intervals for hematology and biochemistry parameters of 3-week-old commercial nursing piglets in Ontario. A total of 1032 pigs lacking clinical signs of disease from 20 swine farms were sampled for hematology and iron panel evaluation, with biochemistry analysis performed on a subset of 189 randomly selected pigs. The 95% reference interval, mean, median, range, and 90% confidence intervals were calculated for each parameter. PMID:28373729
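The core computation behind such 95% reference intervals is the pair of 2.5th and 97.5th percentiles of values from animals without clinical signs. A minimal non-parametric version (linear interpolation between closest ranks, one of several accepted percentile definitions, and not necessarily the estimator used in this study) is:

```python
def reference_interval(values, lower_p=2.5, upper_p=97.5):
    """Non-parametric 95% reference interval: the central 95% of
    healthy-population values, i.e. the 2.5th and 97.5th percentiles
    (linear interpolation between closest ranks)."""
    xs = sorted(values)
    n = len(xs)

    def pct(p):
        k = (n - 1) * p / 100     # fractional rank of the percentile
        f = int(k)
        c = min(f + 1, n - 1)
        return xs[f] + (k - f) * (xs[c] - xs[f])

    return pct(lower_p), pct(upper_p)
```

With roughly 1000 hematology samples, as here, the tail percentiles are estimated from dozens of observations each, which is why guidelines favour large reference populations before quoting such limits.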
Exploring separable components of institutional confidence.
Hamm, Joseph A; PytlikZillig, Lisa M; Tomkins, Alan J; Herian, Mitchel N; Bornstein, Brian H; Neeley, Elizabeth M
2011-01-01
Despite its contemporary and theoretical importance in numerous social scientific disciplines, institutional confidence research is limited by a lack of consensus regarding the distinctions and relationships among related constructs (e.g., trust, confidence, legitimacy, distrust, etc.). This study examined four confidence-related constructs that have been used in studies of trust/confidence in the courts: dispositional trust, trust in institutions, obligation to obey the law, and cynicism. First, the separability of the four constructs was examined by exploratory factor analyses. Relationships among the constructs were also assessed. Next, multiple regression analyses were used to explore each construct's independent contribution to confidence in the courts. Finally, a second study replicated the first study and also examined the stability of the institutional confidence constructs over time. Results supported the hypothesized separability of, and correlations among, the four confidence-related constructs. The extent to which the constructs independently explained the observed variance in confidence in the courts differed as a function of the specific operationalization of confidence in the courts and the individual predictor measures. Implications for measuring institutional confidence and future research directions are discussed. Copyright © 2011 John Wiley & Sons, Ltd.
González-García, Nadia; Rendón, Pablo L
2017-05-23
The neural correlates of consonance and dissonance perception have been widely studied, but not the neural correlates of consonance and dissonance production. The most straightforward manner of musical production is singing, but, from an imaging perspective, it still presents more challenges than listening because it involves motor activity. The accurate singing of musical intervals requires integration between auditory feedback processing and vocal motor control in order to correctly produce each note. This protocol presents a method that permits the monitoring of neural activations associated with the vocal production of consonant and dissonant intervals. Four musical intervals, two consonant and two dissonant, are used as stimuli, both for an auditory discrimination test and a task that involves first listening to and then reproducing given intervals. Participants, all female vocal students at the conservatory level, were studied using functional Magnetic Resonance Imaging (fMRI) during the performance of the singing task, with the listening task serving as a control condition. In this manner, the activity of both the motor and auditory systems was observed, and a measure of vocal accuracy during the singing task was also obtained. Thus, the protocol can also be used to track activations associated with singing different types of intervals or with singing the required notes more accurately. The results indicate that singing dissonant intervals requires greater participation of the neural mechanisms responsible for the integration of external feedback from the auditory and sensorimotor systems than does singing consonant intervals.
Li, L; Feng, D X; Wu, J
2016-10-01
It is a difficult problem of forensic medicine to accurately estimate the post-mortem interval. Entomological approach has been regarded as an effective way to estimate the post-mortem interval. The developmental biology of carrion-breeding flies has an important position at the post-mortem interval estimation. Phorid flies are tiny and occur as the main or even the only insect evidence in relatively enclosed environments. This paper reviews the research progress of carrion-breeding phorid flies for estimating post-mortem interval in forensic medicine which includes their roles, species identification and age determination of immatures. Copyright© by the Editorial Department of Journal of Forensic Medicine.
Confidence Sharing: An Economic Strategy for Efficient Information Flows in Animal Groups
Korman, Amos; Greenwald, Efrat; Feinerman, Ofer
2014-01-01
Social animals may share information to obtain a more complete and accurate picture of their surroundings. However, physical constraints on communication limit the flow of information between interacting individuals in a way that can cause an accumulation of errors and deteriorated collective behaviors. Here, we theoretically study a general model of information sharing within animal groups. We take an algorithmic perspective to identify efficient communication schemes that are, nevertheless, economic in terms of communication, memory and individual internal computation. We present a simple and natural algorithm in which each agent compresses all information it has gathered into a single parameter that represents its confidence in its behavior. Confidence is communicated between agents by means of active signaling. We motivate this model by novel and existing empirical evidences for confidence sharing in animal groups. We rigorously show that this algorithm competes extremely well with the best possible algorithm that operates without any computational constraints. We also show that this algorithm is minimal, in the sense that further reduction in communication may significantly reduce performances. Our proofs rely on the Cramér-Rao bound and on our definition of a Fisher Channel Capacity. We use these concepts to quantify information flows within the group which are then used to obtain lower bounds on collective performance. The abstract nature of our model makes it rigorously solvable and its conclusions highly general. Indeed, our results suggest confidence sharing as a central notion in the context of animal communication. PMID:25275649
DOE Office of Scientific and Technical Information (OSTI.GOV)
McLoughlin, Kevin
2016-01-11
This report describes the design and implementation of an algorithm for estimating relative microbial abundances, together with confidence limits, using data from metagenomic DNA sequencing. For the background behind this project and a detailed discussion of our modeling approach for metagenomic data, we refer the reader to our earlier technical report, dated March 4, 2014. Briefly, we described a fully Bayesian generative model for paired-end sequence read data, incorporating the effects of the relative abundances, the distribution of sequence fragment lengths, fragment position bias, sequencing errors and variations between the sampled genomes and the nearest reference genomes. A distinctive feature of our modeling approach is the use of a Chinese restaurant process (CRP) to describe the selection of genomes to be sampled, and thus the relative abundances. The CRP component is desirable for fitting abundances to reads that may map ambiguously to multiple targets, because it naturally leads to sparse solutions that select the best representative from each set of nearly equivalent genomes.
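For readers unfamiliar with the Chinese restaurant process mentioned here, a minimal simulation of its 'rich-get-richer' assignment rule (an illustration of the prior itself, not the report's abundance model) is:

```python
import random

def crp_assignments(n_customers, alpha, seed=0):
    """Chinese restaurant process: customer i joins an existing table
    with probability proportional to its occupancy, or opens a new
    table with probability proportional to alpha (rich get richer)."""
    rng = random.Random(seed)
    counts = []   # customers seated at each table
    labels = []   # table chosen by each customer
    for i in range(n_customers):
        r = rng.uniform(0, i + alpha)   # total weight = i + alpha
        acc = 0.0
        for t, c in enumerate(counts):
            acc += c
            if r < acc:
                counts[t] += 1
                labels.append(t)
                break
        else:                           # r fell in the alpha-weighted slice
            counts.append(1)
            labels.append(len(counts) - 1)
    return labels, counts

labels, counts = crp_assignments(200, alpha=1.0)
```

The number of occupied tables grows only logarithmically with the number of customers, which is the sparsity property the report exploits when assigning ambiguous reads to genomes.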
Parent's confidence as a caregiver.
Raines, Deborah A; Brustad, Judith
2012-06-01
The purpose of this study was to describe the parent's self-reported confidence as a caregiver. The specific research questions were as follows: • What is the parent's perceived level of confidence when performing infant caregiving activities in the neonatal intensive care unit (NICU)? • What is the parent's projected level of confidence about performing infant caregiving activities on the first day at home? Participants were parents of infants with an anticipated discharge date within 5 days. Inclusion criteria were as follows: parent at least 18 years of age, infant's discharge destination is home with the parent, parent will have primary responsibility for the infant after discharge, and the infant's length of stay in the NICU was a minimum of 10 days. Descriptive, survey research. Participants perceived themselves to be confident in all but 2 caregiving activities when caring for their infants in the NICU, but parents projected a change in their level of confidence in their ability to independently complete infant care activities at home. When comparing the self-reported level of confidence in the NICU and the projected level of confidence at home, the levels of confidence decreased for 5 items, increased for 8 items, and remained unchanged for 2 items. All of the items with a decrease in score were the items with the lowest score when performed in the NICU. All of these low-scoring items are caregiving activities that are unique to the post-NICU status of the infant. Interestingly, the parent's projected level of confidence increased for the 8 items focused on handling and interacting with the infant. The findings of this research provide evidence that nurses may need to rethink when parents become active participants in their infant's medical-based caregiving activities.
Brett, Benjamin L; Smyk, Nathan; Solomon, Gary; Baughman, Brandon C; Schatz, Philip
2016-08-18
The ImPACT (Immediate Post-Concussion Assessment and Cognitive Testing) neurocognitive testing battery is a widely used tool for the assessment and management of sports-related concussion. Research on the stability of ImPACT in high school athletes at 1- and 2-year intervals has been inconsistent, requiring further investigation. We documented 1-, 2-, and 3-year test-retest reliability of repeated ImPACT baseline assessments in a sample of high school athletes, using multiple statistical methods for examining stability. A total of 1,510 high school athletes completed baseline cognitive testing using the online ImPACT test battery at three time periods of approximately 1- (N = 250), 2- (N = 1146), and 3-year (N = 114) intervals. No participant sustained a concussion between assessments. Intraclass correlation coefficients (ICCs) ranged in composite scores from 0.36 to 0.90 and showed little change as intervals between assessments increased. Reliable change indices and regression-based measures (RBMs) examining the test-retest stability demonstrated a lack of significant change in composite scores across the various time intervals, with very few cases (0%-6%) falling outside of 95% confidence intervals. The results suggest ImPACT composite scores remain considerably stable across 1-, 2-, and 3-year test-retest intervals in high school athletes, when considering both ICCs and RBMs. Annually ascertaining baseline scores continues to be optimal for ensuring accurate and individualized management of injury for concussed athletes. For instances in which more recent baselines are not available (1-2 years), clinicians should seek to utilize more conservative range estimates in determining the presence of clinically meaningful change in cognitive performance. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Yang, Huiqin; Thompson, Carl; Bland, Martin
2012-12-01
Apparent overconfidence and underconfidence in clinicians making clinical judgements could be a feature of evaluative research designs that fail to accurately represent clinical environments. To test the effect of improved realism of clinical judgement tasks on confidence calibration performance of nurses and student nurses. A comparative confidence calibration analysis. The study was conducted in a large university of Northern England. Ninety-seven participants rated their confidence - using a scale that ranged from 0 (no confidence) to 100 (totally confident) on dichotomous clinical judgements of critical event risk. The judgements were in response to 25 paper-based and 25 higher fidelity scenarios using a computerised patient simulator and clinical equipment. Scenarios, and judgement criteria of 'correctness', were generated from real patient cases. Using a series of calibration measures (calibration, resolution and over/underconfidence), participants' confidence was calibrated against the proportion of correct judgements. The calibration measures generated by the paper-based and high fidelity clinical simulation conditions were compared. Participants made significantly less accurate clinical judgements of risk in the high fidelity clinical simulations compared to the paper simulations (P=0.0002). They were significantly less confident in high fidelity clinical simulations than paper simulations (P=0.03). However, there was no significant difference of over/underconfidence for participants between the two simulated settings (P=0.06). Participants were no better calibrated in the high fidelity clinical simulations than paper simulations, P=0.85. Likewise, participants had no better ability of discriminating correct judgements from incorrect judgements as measured by the resolution statistic in high fidelity clinical simulations than paper simulations, P=0.76. Improving the realism of simulated judgement tasks led to reduced confidence and judgement accuracy in
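Confidence calibration designs of this kind compare stated confidence with the achieved proportion of correct judgements. A simplified sketch (confidences rescaled to [0, 1], coarse 0.1-wide bins; not the study's exact statistics, which also include a resolution measure) is:

```python
def calibration_stats(confidences, correct):
    """confidences: subjective forecasts rescaled to [0, 1];
    correct: matching 0/1 outcomes.  Returns over/underconfidence
    (mean confidence minus proportion correct; > 0 = overconfident)
    and a calibration score (bin-weighted squared gap; 0 is perfect)."""
    n = len(confidences)
    over = sum(confidences) / n - sum(correct) / n
    bins = {}
    for f, o in zip(confidences, correct):
        bins.setdefault(round(f, 1), []).append(o)   # 0.1-wide bins
    calib = sum(len(v) * (k - sum(v) / len(v)) ** 2
                for k, v in bins.items()) / n
    return over, calib
```

On this convention, the study's finding of no calibration difference between paper and high-fidelity conditions means the calibration score, not the raw accuracy or confidence, was statistically indistinguishable across settings.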
Dobolyi, David G; Dodson, Chad S
2013-12-01
Confidence judgments for eyewitness identifications play an integral role in determining guilt during legal proceedings. Past research has shown that confidence in positive identifications is strongly associated with accuracy. Using a standard lineup recognition paradigm, we investigated accuracy using signal detection and ROC analyses, along with the tendency to choose a face with both simultaneous and sequential lineups. We replicated past findings of reduced rates of choosing with sequential as compared to simultaneous lineups, but notably found an accuracy advantage in favor of simultaneous lineups. Moreover, our analysis of the confidence-accuracy relationship revealed two key findings. First, we observed a sequential mistaken identification overconfidence effect: despite an overall reduction in false alarms, confidence for false alarms that did occur was higher with sequential lineups than with simultaneous lineups, with no differences in confidence for correct identifications. This sequential mistaken identification overconfidence effect is an expected byproduct of the use of a more conservative identification criterion with sequential than with simultaneous lineups. Second, we found a steady drop in confidence for mistaken identifications (i.e., foil identifications and false alarms) from the first to the last face in sequential lineups, whereas confidence in and accuracy of correct identifications remained relatively stable. Overall, we observed that sequential lineups are both less accurate and produce higher confidence false identifications than do simultaneous lineups. Given the increasing prominence of sequential lineups in our legal system, our data argue for increased scrutiny and possibly a wholesale reevaluation of this lineup format. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Increasing Product Confidence-Shifting Paradigms.
Phillips, Marla; Kashyap, Vishal; Cheung, Mee-Shew
2015-01-01
Leaders in the pharmaceutical, medical device, and food industries expressed a unilateral concern over product confidence throughout the total product lifecycle, an unsettling fact for these leaders to manage given that their products affect the lives of millions of people each year. Fueled by the heparin incident of intentional adulteration in 2008, initial efforts for increasing product confidence were focused on improving the confidence of incoming materials, with a belief that supplier performance must be the root cause. As in the heparin case, concern over supplier performance extended deep into the supply chain to include suppliers of the suppliers-which is often a blind spot for pharmaceutical, device, and food manufacturers. Resolved to address the perceived lack of supplier performance, these U.S. Food and Drug Administration (FDA)-regulated industries began to adopt the supplier relationship management strategy, developed by the automotive industry, that emphasizes "management" of suppliers for the betterment of the manufacturers. Current product and supplier management strategies, however, have not led to a significant improvement in product confidence. As a result of the enduring concern by industry leaders over the lack of product confidence, Xavier University launched the Integrity of Supply Initiative in 2012 with a team of industry leaders and FDA officials. Through a methodical research approach, data generated by the pharmaceutical, medical device, and food manufacturers surprisingly pointed to themselves as a source of the lack of product confidence, and revealed that manufacturers either unknowingly increase the potential for error or can control/prevent many aspects of product confidence failure. It is only through this paradigm shift that manufacturers can work collaboratively with their suppliers as equal partners, instead of viewing their suppliers as "lesser" entities needing to be controlled. The basis of this shift provides manufacturers
Modeling of a Robust Confidence Band for the Power Curve of a Wind Turbine.
Hernandez, Wilmar; Méndez, Alfredo; Maldonado-Correa, Jorge L; Balleteros, Francisco
2016-12-07
Having an accurate model of the power curve of a wind turbine allows us to better monitor its operation and plan storage capacity. Since wind speed and direction are of a highly stochastic nature, the forecasting of the power generated by the wind turbine is of the same nature as well. In this paper, a method for obtaining a robust confidence band containing the power curve of a wind turbine under test conditions is presented. Here, the confidence band is bounded by two curves which are estimated using parametric statistical inference techniques. However, the observations used for carrying out the statistical analysis are obtained with the binning method, and in each bin the outliers are eliminated by a censorship process based on robust statistical techniques. Then, the observations that are not outliers are divided into observation sets. Finally, both the power curve of the wind turbine and the two curves that define the robust confidence band are estimated from each of these observation sets.
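The bin-then-censor procedure described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the bin width, the median/MAD censorship rule, and the normal-theory band (mean ± z·s per bin) are all assumptions standing in for the paper's parametric techniques.

```python
import numpy as np

def robust_power_curve(wind_speed, power, bin_width=0.5, n_mad=3.0, z=1.96):
    """Bin observations by wind speed, censor per-bin outliers with a
    median/MAD rule, then form a parametric (normal) confidence band
    around the binned power curve. Illustrative sketch only."""
    edges = np.arange(wind_speed.min(), wind_speed.max() + bin_width, bin_width)
    centers, lower, curve, upper = [], [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        p = power[(wind_speed >= lo) & (wind_speed < hi)]
        if len(p) < 5:          # skip sparsely populated bins
            continue
        med = np.median(p)
        mad = 1.4826 * np.median(np.abs(p - med))   # robust sigma estimate
        keep = p[np.abs(p - med) <= n_mad * mad] if mad > 0 else p
        m, s = keep.mean(), keep.std(ddof=1)
        centers.append(0.5 * (lo + hi))
        curve.append(m)
        lower.append(m - z * s)
        upper.append(m + z * s)
    return (np.array(centers), np.array(lower),
            np.array(curve), np.array(upper))
```

Feeding in raw SCADA-style (wind speed, power) pairs yields three curves: the estimated power curve and the two curves bounding the band.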
Shoukri, Mohamed M; Elkum, Nasser; Walter, Stephen D
2006-01-01
Background In this paper we propose the use of the within-subject coefficient of variation as an index of a measurement's reliability. For continuous variables and based on its maximum likelihood estimation we derive a variance-stabilizing transformation and discuss confidence interval construction within the framework of a one-way random effects model. We investigate sample size requirements for the within-subject coefficient of variation for continuous and binary variables. Methods We investigate the validity of the approximate normal confidence interval by Monte Carlo simulations. In designing a reliability study, a crucial issue is the balance between the number of subjects to be recruited and the number of repeated measurements per subject. We discuss efficiency of estimation and cost considerations for the optimal allocation of the sample resources. The approach is illustrated by an example on Magnetic Resonance Imaging (MRI). We also discuss the issue of sample size estimation for dichotomous responses with two examples. Results For the continuous variable, we found that the variance-stabilizing transformation improves the asymptotic coverage probabilities of the confidence interval for the within-subject coefficient of variation. The maximum likelihood estimation and sample size estimation based on a pre-specified width of the confidence interval are novel contributions to the literature for the binary variable. Conclusion Using the sample size formulas, we hope to help clinical epidemiologists and practicing statisticians to efficiently design reliability studies using the within-subject coefficient of variation, whether the variable of interest is continuous or binary. PMID:16686943
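A minimal sketch of the within-subject coefficient of variation under a one-way random-effects model may help fix ideas. The estimator below (square root of the within-subject mean square, divided by the grand mean) is a standard moment-based version; it is not necessarily the maximum likelihood estimator the paper derives.

```python
import numpy as np

def within_subject_cv(data):
    """data: 2-D array, rows = subjects, columns = repeated measurements.
    Returns the within-subject coefficient of variation, estimated as
    sqrt(within-subject mean square) / grand mean (one-way random
    effects; moment-based sketch, not the paper's exact MLE)."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    subject_means = data.mean(axis=1)
    # Within-subject mean square: pooled variance of the replicates
    # about each subject's own mean, with n*(k-1) degrees of freedom.
    ms_within = ((data - subject_means[:, None]) ** 2).sum() / (n * (k - 1))
    return np.sqrt(ms_within) / data.mean()
```

For, say, subjects measured four times each with a within-subject standard deviation of 5 around typical values near 100, the estimate comes out near 0.05.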
Decoded fMRI neurofeedback can induce bidirectional confidence changes within single participants
Cortese, Aurelio; Amano, Kaoru; Koizumi, Ai; Lau, Hakwan; Kawato, Mitsuo
2017-01-01
Neurofeedback studies using real-time functional magnetic resonance imaging (rt-fMRI) have recently incorporated the multi-voxel pattern decoding approach, allowing for fMRI to serve as a tool to manipulate fine-grained neural activity embedded in voxel patterns. Because of its tremendous potential for clinical applications, certain questions regarding decoded neurofeedback (DecNef) must be addressed. Specifically, can the same participants learn to induce neural patterns in opposite directions in different sessions? If so, how does previous learning affect subsequent induction effectiveness? These questions are critical because neurofeedback effects can last for months, but the short- to mid-term dynamics of such effects are unknown. Here we employed a within-subjects design, where participants underwent two DecNef training sessions to induce behavioural changes of opposing directionality (up or down regulation of perceptual confidence in a visual discrimination task), with the order of training counterbalanced across participants. Behavioral results indicated that the manipulation was strongly influenced by the order and the directionality of neurofeedback training. We applied nonlinear mathematical modeling to parametrize four main consequences of DecNef: main effect of change in confidence, strength of down-regulation of confidence relative to up-regulation, maintenance of learning effects, and anterograde learning interference. Modeling results revealed that DecNef successfully induced bidirectional confidence changes in different sessions within single participants. Furthermore, the effect of up- compared to down-regulation was more prominent, and confidence changes (regardless of the direction) were largely preserved even after a week-long interval. Lastly, the effect of the second session was markedly diminished as compared to the effect of the first session, indicating strong anterograde learning interference. These results are interpreted in the framework
NASA Technical Reports Server (NTRS)
Lo, Ching F.
1999-01-01
The integration of Radial Basis Function Networks and Back Propagation Neural Networks with Multiple Linear Regression has been accomplished to map nonlinear response surfaces over a wide range of independent variables in the process of the Modern Design of Experiments. The integrated method can estimate precision intervals, including confidence and prediction intervals. The power of the innovative method has been demonstrated by applying it to a set of wind tunnel test data in constructing a response surface and estimating its precision intervals.
The QT Interval and Risk of Incident Atrial Fibrillation
Mandyam, Mala C.; Soliman, Elsayed Z.; Alonso, Alvaro; Dewland, Thomas A.; Heckbert, Susan R.; Vittinghoff, Eric; Cummings, Steven R.; Ellinor, Patrick T.; Chaitman, Bernard R.; Stocke, Karen; Applegate, William B.; Arking, Dan E.; Butler, Javed; Loehr, Laura R.; Magnani, Jared W.; Murphy, Rachel A.; Satterfield, Suzanne; Newman, Anne B.; Marcus, Gregory M.
2013-01-01
BACKGROUND Abnormal atrial repolarization is important in the development of atrial fibrillation (AF), but no direct measurement is available in clinical medicine. OBJECTIVE To determine whether the QT interval, a marker of ventricular repolarization, could be used to predict incident AF. METHODS We examined a prolonged QT corrected by the Framingham formula (QTFram) as a predictor of incident AF in the Atherosclerosis Risk in Communities (ARIC) study. The Cardiovascular Health Study (CHS) and Health, Aging, and Body Composition (Health ABC) study were used for validation. Secondary predictors included QT duration as a continuous variable, a short QT interval, and QT intervals corrected by other formulae. RESULTS Among 14,538 ARIC participants, a prolonged QTFram predicted a roughly two-fold increased risk of AF (hazard ratio [HR] 2.05, 95% confidence interval [CI] 1.42–2.96, p<0.001). No substantive attenuation was observed after adjustment for age, race, sex, study center, body mass index, hypertension, diabetes, coronary disease, and heart failure. The findings were validated in CHS and Health ABC and were similar across various QT correction methods. Also in ARIC, each 10-ms increase in QTFram was associated with an increased unadjusted (HR 1.14, 95% CI 1.10–1.17, p<0.001) and adjusted (HR 1.11, 95% CI 1.07–1.14, p<0.001) risk of AF. Findings regarding a short QT were inconsistent across cohorts. CONCLUSIONS A prolonged QT interval is associated with an increased risk of incident AF. PMID:23872693
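The abstract does not spell out the Framingham correction it applies; the widely cited linear form is sketched below as an assumption drawn from the general literature rather than from this paper: QTc = QT + 154·(1 − RR), with QT and QTc in milliseconds and the RR interval in seconds.

```python
def qtc_framingham(qt_ms, rr_s):
    """Framingham linear QT correction (commonly cited form):
    QTc = QT + 154 * (1 - RR), QT/QTc in ms, RR interval in seconds.
    For a heart rate HR in beats/min, RR = 60.0 / HR."""
    return qt_ms + 154.0 * (1.0 - rr_s)
```

At 60 beats/min (RR = 1 s) the correction vanishes; at faster rates (RR < 1 s) QTc is shifted upward relative to the measured QT.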
Serial binary interval ratios improve rhythm reproduction.
Wu, Xiang; Westanmo, Anders; Zhou, Liang; Pan, Junhao
2013-01-01
Musical rhythm perception is a natural human ability that involves complex cognitive processes. Rhythm refers to the organization of events in time, and musical rhythms have an underlying hierarchical metrical structure. The metrical structure induces the feeling of a beat and the extent to which a rhythm induces the feeling of a beat is referred to as its metrical strength. Binary ratios are the most frequent interval ratio in musical rhythms. Rhythms with hierarchical binary ratios are better discriminated and reproduced than rhythms with hierarchical non-binary ratios. However, it remains unclear whether a superiority of serial binary over non-binary ratios in rhythm perception and reproduction exists. In addition, how different types of serial ratios influence the metrical strength of rhythms remains to be elucidated. The present study investigated serial binary vs. non-binary ratios in a reproduction task. Rhythms formed with exclusively binary (1:2:4:8), non-binary integer (1:3:5:6), and non-integer (1:2.3:5.3:6.4) ratios were examined within a constant meter. The results showed that the 1:2:4:8 rhythm type was more accurately reproduced than the 1:3:5:6 and 1:2.3:5.3:6.4 rhythm types, and the 1:2.3:5.3:6.4 rhythm type was more accurately reproduced than the 1:3:5:6 rhythm type. Further analyses showed that reproduction performance was better predicted by the distribution pattern of event occurrences within an inter-beat interval, than by the coincidence of events with beats, or the magnitude and complexity of interval ratios. Whereas rhythm theories and empirical data emphasize the role of the coincidence of events with beats in determining metrical strength and predicting rhythm performance, the present results suggest that rhythm processing may be better understood when the distribution pattern of event occurrences is taken into account. These results provide new insights into the mechanisms underlying musical rhythm perception.
A new automatic blood pressure kit auscultates for accurate reading with a smartphone
Wu, Hongjun; Wang, Bingjian; Zhu, Xinpu; Chu, Guang; Zhang, Zhi
2016-01-01
The widely used oscillometric automated blood pressure (BP) monitor has been continuously questioned on its accuracy. A novel BP kit named Accutension, which adopts the Korotkoff auscultation method, was then devised. Accutension works with a miniature microphone, a pressure sensor, and a smartphone. The BP values are automatically displayed on the smartphone screen through the installed App. Data recorded in the phone can be played back and reconfirmed after measurement. They can also be uploaded and saved to the iCloud. The accuracy and consistency of this novel electronic auscultatory sphygmomanometer were preliminarily verified here. Thirty-two subjects were included and 82 qualified readings were obtained. The mean differences ± SD for systolic and diastolic BP readings between Accutension and mercury sphygmomanometer were 0.87 ± 2.86 and −0.94 ± 2.93 mm Hg. Agreements between Accutension and mercury sphygmomanometer were highly significant for systolic (ICC = 0.993, 95% confidence interval (CI): 0.989–0.995) and diastolic (ICC = 0.987, 95% CI: 0.979–0.991) readings. In conclusion, Accutension worked accurately based on our pilot study data. The difference was acceptable. ICC and Bland–Altman plot charts showed good agreements with manual measurements. Systolic readings of Accutension were slightly higher than those of manual measurement, while diastolic readings were slightly lower. One possible reason is that Accutension captures the first and the last Korotkoff sounds more sensitively than the human ear during manual measurement and avoids missed sounds, so that it might be more accurate than the traditional mercury sphygmomanometer. By documenting and analyzing trends in BP values, Accutension helps in the management of hypertension and therefore contributes to mobile health services. PMID:27512876
The Relationship between Confidence and Self-Concept--Towards a Model of Response Confidence
ERIC Educational Resources Information Center
Kroner, Stephan; Biermann, Antje
2007-01-01
According to Stankov [Stankov, L. (2000). Complexity, metacognition and fluid intelligence. Intelligence, 28, 121-143.] response confidence in cognitive tests reflects a trait on the boundary of personality and abilities. However, several studies failed in relating confidence scores to other known traits, including self-concept. A model of…
Investigating the Genetic Architecture of the PR Interval Using Clinical Phenotypes.
Mosley, Jonathan D; Shoemaker, M Benjamin; Wells, Quinn S; Darbar, Dawood; Shaffer, Christian M; Edwards, Todd L; Bastarache, Lisa; McCarty, Catherine A; Thompson, Will; Chute, Christopher G; Jarvik, Gail P; Crosslin, David R; Larson, Eric B; Kullo, Iftikhar J; Pacheco, Jennifer A; Peissig, Peggy L; Brilliant, Murray H; Linneman, James G; Witte, John S; Denny, Josh C; Roden, Dan M
2017-04-01
One potential use for the PR interval is as a biomarker of disease risk. We hypothesized that quantifying the shared genetic architectures of the PR interval and a set of clinical phenotypes would identify genetic mechanisms contributing to PR variability and identify diseases associated with a genetic predictor of PR variability. We used ECG measurements from the ARIC study (Atherosclerosis Risk in Communities; n=6731 subjects) and 63 genetically modulated diseases from the eMERGE network (Electronic Medical Records and Genomics; n=12,978). We measured pairwise genetic correlations (rG) between PR phenotypes (PR interval, PR segment, P-wave duration) and each of the 63 phenotypes. The PR segment was genetically correlated with atrial fibrillation (rG=-0.88; P=0.0009). An analysis of metabolic phenotypes in ARIC also showed that the P wave was genetically correlated with waist circumference (rG=0.47; P=0.02). A genetically predicted PR interval phenotype based on 645,714 single-nucleotide polymorphisms was associated with atrial fibrillation (odds ratio=0.89 per SD change; 95% confidence interval, 0.83-0.95; P=0.0006). The differing pattern of associations among the PR phenotypes is consistent with analyses that show that the genetic correlation between the P wave and PR segment was not significantly different from 0 (rG=-0.03 [0.16]). The genetic architecture of the PR interval comprises modulators of atrial fibrillation risk and obesity. © 2017 American Heart Association, Inc.
Cust, Anne E; Armstrong, Bruce K; Smith, Ben J; Chau, Josephine; van der Ploeg, Hidde P; Bauman, Adrian
2009-05-01
Self-reported confidence ratings have been used in other research disciplines as a tool to assess data quality, and may be useful in epidemiologic studies. We examined whether self-reported confidence in recall of physical activity was a predictor of the validity and retest reliability of physical activity measures from the European Prospective Investigation into Cancer and Nutrition (EPIC) past-year questionnaire and the International Physical Activity Questionnaire (IPAQ) last-7-day questionnaire. During 2005-2006 in Sydney, Australia, 97 men and 80 women completed both questionnaires at baseline and at 10 months and wore an accelerometer as an objective comparison measure for three 7-day periods during the same timeframe. Participants rated their confidence in recalling physical activity for each question using a 5-point scale and were dichotomized at the median confidence value. Participants in the high-confidence group had higher validity and repeatability coefficients than those in the low-confidence group for most comparisons. The differences were most apparent for validity of IPAQ moderate activity: Spearman correlation rho = 0.34 (95% confidence interval [CI] = 0.08 to 0.55) and 0.01 (-0.17 to 0.20) for high- and low-confidence groups, respectively; and repeatability of EPIC household activity: rho = 0.81 (0.72 to 0.87) and 0.63 (0.48 to 0.74), respectively, and IPAQ vigorous activity: rho = 0.58 (0.43 to 0.70) and 0.29 (0.07 to 0.49), respectively. Women were less likely than men to report high recall confidence of past-year activity (adjusted odds ratio = 0.38; 0.18 to 0.80). Confidence ratings could be useful as indicators of recall accuracy (i.e., validity and repeatability) of physical activity measures, and possibly for detecting differential measurement error and identifying questionnaire items that require improvement.
Motor onset and diagnosis in Huntington disease using the diagnostic confidence level.
Liu, Dawei; Long, Jeffrey D; Zhang, Ying; Raymond, Lynn A; Marder, Karen; Rosser, Anne; McCusker, Elizabeth A; Mills, James A; Paulsen, Jane S
2015-12-01
Huntington disease (HD) is a neurodegenerative disorder characterized by motor dysfunction, cognitive deterioration, and psychiatric symptoms, with progressive motor impairments being a prominent feature. The primary objectives of this study are to delineate the disease course of motor function in HD, to provide estimates of the onset of motor impairments and motor diagnosis, and to examine the effects of genetic and demographic variables on the progression of motor impairments. Data from an international multisite, longitudinal observational study of 905 prodromal HD participants with cytosine-adenine-guanine (CAG) repeats of at least 36 and with at least two visits during the follow-up period from 2001 to 2012 were examined for changes in the diagnostic confidence level from the Unified Huntington's Disease Rating Scale. HD progression from unimpaired to impaired motor function, as well as the progression from motor impairment to diagnosis, was associated with the linear effect of age and CAG repeat length. Specifically, for every 1-year increase in age, the risk of transition in diagnostic confidence level increased by 11% (95% CI 7-15%), and for every one-repeat increase in CAG length, the risk of transition in diagnostic confidence level increased by 47% (95% CI 27-69%). Findings show that CAG repeat length and age influence both the likelihood of the first onset of motor impairment and the age at diagnosis. Results suggest that more accurate estimates of HD onset age can be obtained by incorporating the current status of the diagnostic confidence level into predictive models.
Iron Metabolism Genes, Low-Level Lead Exposure, and QT Interval
Park, Sung Kyun; Hu, Howard; Wright, Robert O.; Schwartz, Joel; Cheng, Yawen; Sparrow, David; Vokonas, Pantel S.; Weisskopf, Marc G.
2009-01-01
Background Cumulative exposure to lead has been shown to be associated with depression of electrocardiographic conduction, such as QT interval (time from start of the Q wave to end of the T wave). Because iron can enhance the oxidative effects of lead, we examined whether polymorphisms in iron metabolism genes [hemochromatosis (HFE), transferrin (TF) C2, and heme oxygenase-1 (HMOX-1)] increase susceptibility to the effects of lead on QT interval in 613 community-dwelling older men. Methods We used standard 12-lead electrocardiograms, K-shell X-ray fluorescence, and graphite furnace atomic absorption spectrometry to measure QT interval, bone lead, and blood lead levels, respectively. Results A one-interquartile-range increase in tibia lead level (13 μg/g) was associated with an 11.35-msec [95% confidence interval (CI), 4.05–18.65 msec] and a 6.81-msec (95% CI, 1.67–11.95 msec) increase in the heart-rate–corrected QT interval among persons carrying long HMOX-1 alleles and at least one copy of an HFE variant, respectively, but had no effect in persons with short and middle HMOX-1 alleles and the wild-type HFE genotype. The lengthening of the heart-rate–corrected QT interval with higher tibia lead and blood lead became more pronounced as the total number (0 vs. 1 vs. ≥2) of gene variants increased (tibia, p-trend = 0.01; blood, p-trend = 0.04). This synergy seems to be driven by a joint effect between HFE variant and HMOX-1 L alleles. Conclusion We found evidence that gene variants related to iron metabolism increase the impact of low-level lead exposure on QT-interval prolongation. This is the first such report, so these results should be interpreted cautiously and need to be independently verified. PMID:19165391
Kiran, Ravi P; Attaluri, Vikram; Hammel, Jeff; Church, James
2013-05-01
The ability to accurately predict postoperative mortality is expected to improve preoperative decisions for elderly patients considered for colorectal surgery. Patients undergoing colorectal surgery were identified from the National Surgical Quality Improvement Program database (2005-2007) and stratified as elderly (>70 years) and nonelderly (<70 years). Univariate analyses of preoperative risk factors, 30-day mortality, and morbidity were performed on 70% of the population. A nomogram for mortality was created and tested on the remaining 30%. Of 30,900 colorectal cases, 10,750 were elderly (>70 years). Mortality increased steadily with age (0.5% every 5 years) and at a faster rate (1.2% every 5 years) after 70 years, which defined "elderly" in this study. Elderly (mean age: 78.4 years) and nonelderly patients (52.8 years) had mortality of 7.6% versus 2.0% and morbidity of 32.8% versus 25.7%, respectively. Elderly patients had greater preoperative comorbidities, including chronic obstructive pulmonary disease (10.5% vs 3.8%), diabetes (18.7% vs 11.1%), and renal insufficiency (1.7% vs 1.3%). A multivariate model for 30-day mortality and a nomogram were created. Increasing age was associated with mortality [age >70 years: odds ratio (OR) = 2.0 (95% confidence interval (CI): 1.7-2.4); >85 years: OR = 4.3 (95% CI: 3.3-5.5)]. The nomogram accurately predicted mortality, including very high risk (>50% mortality), with a concordance index of 0.89. Colorectal surgery in elderly patients is associated with significantly higher mortality. This novel nomogram that predicts postoperative mortality may facilitate preoperative treatment decisions.
Sakamoto, Takuya; Imasaka, Ryohei; Taki, Hirofumi; Sato, Toru; Yoshioka, Mototaka; Inoue, Kenichi; Fukuda, Takeshi; Sakai, Hiroyuki
2016-04-01
The objectives of this paper are to propose a method that can accurately estimate the human heart rate (HR) using an ultrawideband (UWB) radar system, and to determine the performance of the proposed method through measurements. The proposed method uses the feature points of a radar signal to estimate the HR efficiently and accurately. Fourier- and periodicity-based methods are inappropriate for estimation of instantaneous HRs in real time because heartbeat waveforms are highly variable, even within the beat-to-beat interval. We define six radar waveform features that enable correlation processing to be performed quickly and accurately. In addition, we propose a feature topology signal that is generated from a feature sequence without using amplitude information. This feature topology signal is used to find unreliable feature points, and thus, to suppress inaccurate HR estimates. Measurements were taken using UWB radar, while simultaneously performing electrocardiography measurements in an experiment that was conducted on nine participants. The proposed method achieved an average root-mean-square error in the interbeat interval of 7.17 ms for the nine participants. The results demonstrate the effectiveness and accuracy of the proposed method. The significance of this study for biomedical research is that the proposed method will be useful in the realization of a remote vital signs monitoring system that enables accurate estimation of HR variability, which has been used in various clinical settings for the treatment of conditions such as diabetes and arterial hypertension.
Confidence in critical care nursing.
Evans, Jeanne; Bell, Jennifer L; Sweeney, Annemarie E; Morgan, Jennifer I; Kelly, Helen M
2010-10-01
The purpose of the study was to gain an understanding of the nursing phenomenon, confidence, from the experience of nurses in the nursing subculture of critical care. Leininger's theory of cultural care diversity and universality guided this qualitative descriptive study. Questions derived from the sunrise model were used to elicit nurses' perspectives about cultural and social structures that exist within the critical care nursing subculture and the influence that these factors have on confidence. Twenty-eight critical care nurses from a large Canadian healthcare organization participated in semistructured interviews about confidence. Five themes arose from the descriptions provided by the participants. The three themes, tenuously navigating initiation rituals, deliberately developing holistic supportive relationships, and assimilating clinical decision-making rules were identified as social and cultural factors related to confidence. The remaining two themes, preserving a sense of security despite barriers and accommodating to diverse challenges, were identified as environmental factors related to confidence. Practice and research implications within the culture of critical care nursing are discussed in relation to each of the themes.
Rastogi, L.; Dash, K.; Arunachalam, J.
2013-01-01
The quantitative analysis of glutathione (GSH) is important in different fields like medicine, biology, and biotechnology. Accurate quantitative measurements of this analyte have been hampered by the lack of well characterized reference standards. The proposed procedure is intended to provide an accurate and definitive method for the quantitation of GSH for reference measurements. Measurement of the stoichiometrically existing sulfur content in purified GSH offers an approach for its quantitation, and calibration through an appropriately characterized reference material (CRM) for sulfur would provide a methodology for the certification of GSH quantity that is traceable to the SI (International System of Units). The inductively coupled plasma optical emission spectrometry (ICP-OES) approach negates the need for any sample digestion. The sulfur content of the purified GSH is quantitatively converted into sulfate ions by microwave-assisted UV digestion in the presence of hydrogen peroxide prior to ion chromatography (IC) measurements. The measurement of sulfur by ICP-OES and IC (as sulfate) using the "high performance" methodology could be useful for characterizing primary calibration standards and certified reference materials with low uncertainties. The relative expanded uncertainties (% U) expressed at a 95% confidence interval for ICP-OES analyses varied from 0.1% to 0.3%, while in the case of IC, they were between 0.2% and 1.2%. The described methods are more suitable for characterizing primary calibration standards and certifying reference materials of GSH than for routine measurements. PMID:29403814
Confidence Region of Least Squares Solution for Single-Arc Observations
NASA Astrophysics Data System (ADS)
Principe, G.; Armellin, R.; Lewis, H.
2016-09-01
The total number of active satellites, rocket bodies, and debris larger than 10 cm is currently about 20,000. Considering all resident space objects larger than 1 cm, this rises to an estimated minimum of 500,000 objects. Latest-generation sensor networks will be able to detect small-size objects, producing millions of observations per day. Due to observability constraints, it is likely that long gaps between observations will occur for small objects. This requires determining the space object (SO) orbit and accurately describing the associated uncertainty when observations are acquired on a single arc. The aim of this work is to revisit the classical least squares method, taking advantage of the high order Taylor expansions enabled by differential algebra. In particular, the high order expansion of the residuals with respect to the state is used to implement an arbitrary order least squares solver, avoiding the typical approximations of differential correction methods. In addition, the same expansions are used to accurately characterize the confidence region of the solution, going beyond the classical Gaussian distributions. The properties and performances of the proposed method are discussed using optical observations of objects in LEO, HEO, and GEO.
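As a point of comparison, the classical first-order machinery that this work generalizes can be sketched for a two-parameter linear model: the covariance of the least squares solution follows from the residual variance, and the Gaussian confidence region is an ellipse defined by a chi-square quantile. The closed-form quantile below holds only for two parameters, and the whole construction is exactly the Gaussian approximation the differential-algebra approach goes beyond.

```python
import numpy as np

def lsq_confidence_region(A, y, level=0.95):
    """Classical linear least squares with a first-order Gaussian
    confidence region -- the baseline generalized by arbitrary-order
    differential-algebra methods. Restricted to two parameters so
    the chi-square quantile has a closed form."""
    m, p = A.shape
    assert p == 2, "closed-form chi2 quantile below assumes 2 parameters"
    theta = np.linalg.lstsq(A, y, rcond=None)[0]
    r = y - A @ theta
    sigma2 = (r @ r) / (m - p)               # residual variance estimate
    cov = sigma2 * np.linalg.inv(A.T @ A)    # first-order covariance
    q = -2.0 * np.log(1.0 - level)           # chi2 quantile, 2 dof
    cov_inv = np.linalg.inv(cov)

    def inside(candidate):
        """True if `candidate` lies within the level-confidence ellipse."""
        d = np.asarray(candidate) - theta
        return float(d @ cov_inv @ d) <= q

    return theta, cov, inside
```

The `inside` test is the Mahalanobis-distance membership check for the confidence ellipse; a higher-order method replaces the quadratic form with a Taylor-expanded residual map.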
Reference intervals for selected serum biochemistry analytes in cheetahs Acinonyx jubatus.
Hudson-Lamb, Gavin C; Schoeman, Johan P; Hooijberg, Emma H; Heinrich, Sonja K; Tordiffe, Adrian S W
2016-02-26
Published haematologic and serum biochemistry reference intervals are very scarce for captive cheetahs and even more so for free-ranging cheetahs. The current study was performed to establish reference intervals for selected serum biochemistry analytes in cheetahs. Baseline serum biochemistry analytes were analysed from 66 healthy Namibian cheetahs. Samples were collected from 30 captive cheetahs at the AfriCat Foundation and 36 free-ranging cheetahs from central Namibia. The effects of captivity status, age, sex, and haemolysis score on the tested serum analytes were investigated. The biochemistry analytes that were measured were sodium, potassium, magnesium, chloride, urea, and creatinine. The 90% confidence interval of the reference limits was obtained using the non-parametric bootstrap method. Reference intervals were preferentially determined by the non-parametric method and were as follows: sodium (128 mmol/L - 166 mmol/L), potassium (3.9 mmol/L - 5.2 mmol/L), magnesium (0.8 mmol/L - 1.2 mmol/L), chloride (97 mmol/L - 130 mmol/L), urea (8.2 mmol/L - 25.1 mmol/L) and creatinine (88 µmol/L - 288 µmol/L). Reference intervals from the current study were compared with International Species Information System values for cheetahs and found to be narrower. Moreover, age, sex and haemolysis score had no significant effect on the serum analytes in this study. Separate reference intervals for captive and free-ranging cheetahs were also determined. Captive cheetahs had higher urea values, most likely due to dietary factors. This study is the first to establish reference intervals for serum biochemistry analytes in cheetahs according to international guidelines. These results can be used for future health and disease assessments in both captive and free-ranging cheetahs.
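The non-parametric approach described, a central reference interval with bootstrap confidence intervals around each reference limit, can be sketched as follows. The percentile choices and bootstrap settings are illustrative defaults in the spirit of the guideline approach, not the study's exact procedure.

```python
import numpy as np

def reference_interval(values, low=2.5, high=97.5, n_boot=2000,
                       ci=90, seed=0):
    """Non-parametric reference interval (central 95% by default) with
    percentile-bootstrap confidence intervals around each limit."""
    values = np.asarray(values, dtype=float)
    rng = np.random.default_rng(seed)
    lo_hat, hi_hat = np.percentile(values, [low, high])
    # Re-estimate both limits on each bootstrap resample of the data.
    boot = np.array([
        np.percentile(rng.choice(values, size=values.size, replace=True),
                      [low, high])
        for _ in range(n_boot)
    ])
    tail = (100 - ci) / 2
    lo_ci = np.percentile(boot[:, 0], [tail, 100 - tail])
    hi_ci = np.percentile(boot[:, 1], [tail, 100 - tail])
    return (lo_hat, tuple(lo_ci)), (hi_hat, tuple(hi_ci))
```

Applied to, say, serum sodium measurements, this returns each reference limit together with its 90% confidence interval, mirroring the reporting style of the abstract.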
A model for developing disability confidence.
Lindsay, Sally; Cancelliere, Sara
2017-05-15
Many clinicians, educators, and employers lack disability confidence, which can affect their interactions with, and inclusion of, people with disabilities. Our objective was to explore how disability confidence developed among youth who volunteered with children who have a disability. We conducted 30 in-depth interviews with youth aged 15-25 (16 without a disability, 14 with disabilities). We analyzed our data using an interpretive, qualitative, thematic approach. We identified four main themes that led to the progression of disability confidence: (1) "disability discomfort," referring to lacking knowledge about disability and experiencing unease around people with disabilities; (2) "reaching beyond comfort zone," where participants increased their understanding of disability and became sensitized to difference; (3) "broadened perspectives," where youth gained exposure to people with disabilities and challenged common misperceptions and stereotypes; and (4) "disability confidence," which includes having knowledge of people with disabilities and inclusive, positive attitudes towards them. Volunteering is one avenue that can help develop disability confidence. Youth with and without disabilities reported a similar process of developing disability confidence; however, there were nuances between the two groups. Implications for Rehabilitation: The development of disability confidence is important for enhancing the social inclusion of people with disabilities. Volunteering with people who have a disability, or a disability different from their own, can help to develop disability confidence, which involves positive attitudes, empathy, and appropriate communication skills. Clinicians, educators, and employers should consider promoting work with people with disabilities through avenues such as volunteering or service learning to gain disability confidence.
Epidemiology and the law: courts and confidence intervals.
Christoffel, T; Teret, S P
1991-01-01
Beginning with the swine flu litigation of the early 1980s, epidemiological evidence has played an increasingly prominent role in helping the nation's courts deal with alleged causal connections between plaintiffs' diseases or other harm and exposure to specific noxious agents (such as asbestos, toxic waste, radiation, and pharmaceuticals). Judicial reliance on epidemiology has highlighted the contrast between the nature of scientific proof and of legal proof. Epidemiologists need to recognize and understand the growing involvement of their profession in complex tort litigation.
Estimation of Confidence Intervals for Multiplication and Efficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verbeke, J
2009-07-17
Helium-3 tubes are used to detect thermal neutrons by charge collection using the ³He(n,p) reaction. By analyzing the time sequence of neutrons detected by these tubes, one can determine important features about the constitution of a measured object: some materials such as Cf-252 emit several neutrons simultaneously, while others such as uranium and plutonium isotopes multiply the number of neutrons to form bursts. This translates into unmistakable signatures. To determine the type of materials measured, one compares the measured count distribution with the one generated by a theoretical fission chain model. When the neutron background is negligible, the theoretical count distributions can be completely characterized by a pair of parameters, the multiplication M and the detection efficiency ε. While the optimal pair of M and ε can be determined by existing codes such as BigFit, the uncertainty on these parameters has not yet been fully studied. The purpose of this work is to precisely compute the uncertainties on the parameters M and ε, given the uncertainties in the count distribution. By considering different lengths of time tagged data, we will determine how the uncertainties on M and ε vary with the different count distributions.
Confidence Intervals for Omega Coefficient: Proposal for Calculus.
Ventura-León, José Luis
2018-01-01
Reliability is understood as a metric property of the scores of a measurement instrument. The omega coefficient (ω) has recently come into use for estimating reliability. However, measurement is never exact, owing to the influence of random error; for this reason it is necessary to compute and report the confidence interval (CI), which allows the true value to be located within a range of measurement. In this context, the article proposes a way to estimate the CI using the bootstrap method and, to facilitate this procedure, provides R code (free software) so that the calculations can be performed in a user-friendly way. It is hoped that the article will be of help to researchers in the health sciences.
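The article supplies R code for a percentile bootstrap CI around omega; a minimal Python analogue is sketched below. The item data are simulated, and the loadings come from a crude one-factor approximation (first eigenvector of the correlation matrix) rather than a full factor analysis fit, so treat this as an illustration of the bootstrap step only.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated scores for a hypothetical 6-item scale (not real questionnaire data).
n_resp, n_items = 300, 6
factor = rng.normal(size=(n_resp, 1))
items = 0.7 * factor + 0.5 * rng.normal(size=(n_resp, n_items))

def omega(data):
    """McDonald's omega, with loadings roughly approximated by the first
    eigenvector of the correlation matrix (a sketch, not a full FA fit)."""
    r = np.corrcoef(data, rowvar=False)
    vals, vecs = np.linalg.eigh(r)               # eigenvalues in ascending order
    loadings = np.sqrt(vals[-1]) * np.abs(vecs[:, -1])
    uniquenesses = np.clip(1.0 - loadings ** 2, 0.0, None)
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + uniquenesses.sum())

def omega_ci(data, n_boot=1000, ci=0.95):
    """Percentile bootstrap CI for omega: resample respondents with replacement."""
    stats = [omega(data[rng.integers(0, len(data), size=len(data))])
             for _ in range(n_boot)]
    tail = 100.0 * (1.0 - ci) / 2.0
    return np.percentile(stats, [tail, 100.0 - tail])
```

Resampling whole respondents (rows) preserves the inter-item correlation structure within each bootstrap replicate, which is what the percentile interval relies on.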
CIMP status of interval colon cancers: another piece to the puzzle.
Arain, Mustafa A; Sawhney, Mandeep; Sheikh, Shehla; Anway, Ruth; Thyagarajan, Bharat; Bond, John H; Shaukat, Aasma
2010-05-01
Colon cancers diagnosed in the interval after a complete colonoscopy may occur due to limitations of colonoscopy or due to the development of new tumors, possibly reflecting molecular and environmental differences in tumorigenesis resulting in rapid tumor growth. In a previous study from our group, interval cancers (colon cancers diagnosed within 5 years of a complete colonoscopy) were almost four times more likely to demonstrate microsatellite instability (MSI) than non-interval cancers. In this study we extended our molecular analysis to compare the CpG island methylator phenotype (CIMP) status of interval and non-interval colorectal cancers and investigate the relationship between the CIMP and MSI pathways in the pathogenesis of interval cancers. We searched our institution's cancer registry for interval cancers, defined as colon cancers that developed within 5 years of a complete colonoscopy. These were frequency matched in a 1:2 ratio by age and sex to patients with non-interval cancers (defined as colon cancers diagnosed on a patient's first recorded colonoscopy). Archived cancer specimens for all subjects were retrieved and tested for CIMP gene markers. The MSI status of subjects identified between 1989 and 2004 was known from our previous study. Tissue specimens of newly identified cases and controls (between 2005 and 2006) were tested for MSI. There were 1,323 cases of colon cancer diagnosed over the 17-year study period, of which 63 were identified as having interval cancer and matched to 131 subjects with non-interval cancer. Study subjects were almost all Caucasian men. CIMP was present in 57% of interval cancers compared to 33% of non-interval cancers (P=0.004). As shown previously, interval cancers were more likely than non-interval cancers to occur in the proximal colon (63% vs. 39%; P=0.002) and to have MSI (29% vs. 11%; P=0.004). In a multivariable logistic regression model, proximal location (odds ratio (OR) 1.85; 95% confidence interval (CI) 1
Accurate forced-choice recognition without awareness of memory retrieval.
Voss, Joel L; Baym, Carol L; Paller, Ken A
2008-06-01
Recognition confidence and the explicit awareness of memory retrieval commonly accompany accurate responding in recognition tests. Memory performance in recognition tests is widely assumed to measure explicit memory, but the generality of this assumption is questionable. Indeed, whether recognition in nonhumans is always supported by explicit memory is highly controversial. Here we identified circumstances wherein highly accurate recognition was unaccompanied by hallmark features of explicit memory. When memory for kaleidoscopes was tested using a two-alternative forced-choice recognition test with similar foils, recognition was enhanced by an attentional manipulation at encoding known to degrade explicit memory. Moreover, explicit recognition was most accurate when the awareness of retrieval was absent. These dissociations between accuracy and phenomenological features of explicit memory are consistent with the notion that correct responding resulted from experience-dependent enhancements of perceptual fluency with specific stimuli--the putative mechanism for perceptual priming effects in implicit memory tests. This mechanism may contribute to recognition performance in a variety of frequently-employed testing circumstances. Our results thus argue for a novel view of recognition, in that analyses of its neurocognitive foundations must take into account the potential for both (1) recognition mechanisms allied with implicit memory and (2) recognition mechanisms allied with explicit memory.
Forecasting overhaul or replacement intervals based on estimated system failure intensity
NASA Astrophysics Data System (ADS)
Gannon, James M.
1994-12-01
System reliability can be expressed in terms of the pattern of failure events over time. Assuming a nonhomogeneous Poisson process and Weibull intensity function for complex repairable system failures, the degree of system deterioration can be approximated. Maximum likelihood estimators (MLEs) for the system Rate of Occurrence of Failure (ROCOF) function are presented. Evaluating the integral of the ROCOF over annual usage intervals yields the expected number of annual system failures. By associating a cost of failure with the expected number of failures, budget and program policy decisions can be made based on expected future maintenance costs. Monte Carlo simulation is used to estimate the range and the distribution of the net present value and internal rate of return of alternative cash flows, based on the distributions of the cost inputs and the confidence intervals of the MLEs.
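The core computation above, MLEs for a power-law (Weibull-intensity) NHPP and the integral of the ROCOF over a usage interval, can be sketched as follows. The failure times are invented for illustration, and the standard time-truncated Crow-AMSAA estimators are assumed.

```python
import math

def power_law_mle(failure_times, T):
    """Time-truncated MLEs for the power-law NHPP with ROCOF
    lambda(t) = (beta/eta) * (t/eta)**(beta - 1), observed on (0, T]."""
    n = len(failure_times)
    beta = n / sum(math.log(T / t) for t in failure_times)
    eta = T / n ** (1.0 / beta)
    return beta, eta

def expected_failures(beta, eta, t1, t2):
    """Integral of the ROCOF over [t1, t2]: the expected failure count."""
    return (t2 / eta) ** beta - (t1 / eta) ** beta

# Hypothetical failure times (operating hours), observation truncated at T = 1000 h.
times = [120.0, 310.0, 480.0, 610.0, 700.0, 820.0, 910.0, 980.0]
beta, eta = power_law_mle(times, T=1000.0)
next_interval = expected_failures(beta, eta, 1000.0, 2000.0)
```

A fitted beta above 1 indicates an increasing ROCOF, i.e. deterioration; multiplying `next_interval` by a cost per failure gives the expected-maintenance-cost figure the abstract feeds into budget decisions.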
Zhang, Gao-Ming; Guo, Xu-Xiao; Ma, Xiao-Bo; Zhang, Guo-Ming
2016-12-12
BACKGROUND The aim of this study was to calculate 95% reference intervals and double-sided limits of serum alpha-fetoprotein (AFP) and carcinoembryonic antigen (CEA) according to the CLSI EP28-A3 guideline. MATERIAL AND METHODS Serum AFP and CEA values were measured in samples from 26 000 healthy subjects in the Shuyang area receiving general health checkups. The 95% reference intervals and upper limits were calculated by using MedCalc. RESULTS We provided continuous reference intervals from 20 to 90 years of age for AFP and CEA. The reference intervals were: AFP, 1.31-7.89 ng/ml (males) and 1.01-7.10 ng/ml (females); CEA, 0.51-4.86 ng/ml (males) and 0.35-3.45 ng/ml (females). AFP and CEA were significantly positively correlated with age in both males (r=0.196 and r=0.198) and females (r=0.121 and r=0.197). CONCLUSIONS Different races or populations and different detection systems may result in different reference intervals for AFP and CEA. Continuous reference intervals across age are more accurate than age-group intervals.
Estimation of postmortem interval through albumin in CSF by simple dye binding method.
Parmar, Ankita K; Menon, Shobhana K
2015-12-01
Estimation of postmortem interval is a very important question in some medicolegal investigations. For precise estimation of the postmortem interval, a method is needed that can give an accurate estimate. Bromocresol green (BCG) is a simple dye-binding method widely used in routine practice. Application of this method in forensic practice may bring revolutionary changes. In this study, cerebrospinal fluid was aspirated by cisternal puncture from 100 autopsies. A study was carried out on the concentration of albumin with respect to postmortem interval. After death, albumin present in CSF undergoes changes; by 72 h after death, the concentration of albumin had decreased to 0.012 mM, and this decrease was linear from 2 h to 72 h. An important relationship was found between albumin concentration and postmortem interval, with an error of ±1-4 h. The study concludes that CSF albumin can be a useful and significant parameter in estimation of the postmortem interval.
Decoded fMRI neurofeedback can induce bidirectional confidence changes within single participants.
Cortese, Aurelio; Amano, Kaoru; Koizumi, Ai; Lau, Hakwan; Kawato, Mitsuo
2017-04-01
Neurofeedback studies using real-time functional magnetic resonance imaging (rt-fMRI) have recently incorporated the multi-voxel pattern decoding approach, allowing for fMRI to serve as a tool to manipulate fine-grained neural activity embedded in voxel patterns. Because of its tremendous potential for clinical applications, certain questions regarding decoded neurofeedback (DecNef) must be addressed. Specifically, can the same participants learn to induce neural patterns in opposite directions in different sessions? If so, how does previous learning affect subsequent induction effectiveness? These questions are critical because neurofeedback effects can last for months, but the short- to mid-term dynamics of such effects are unknown. Here we employed a within-subjects design, where participants underwent two DecNef training sessions to induce behavioural changes of opposing directionality (up or down regulation of perceptual confidence in a visual discrimination task), with the order of training counterbalanced across participants. Behavioral results indicated that the manipulation was strongly influenced by the order and the directionality of neurofeedback training. We applied nonlinear mathematical modeling to parametrize four main consequences of DecNef: main effect of change in confidence, strength of down-regulation of confidence relative to up-regulation, maintenance of learning effects, and anterograde learning interference. Modeling results revealed that DecNef successfully induced bidirectional confidence changes in different sessions within single participants. Furthermore, the effect of up- compared to down-regulation was more prominent, and confidence changes (regardless of the direction) were largely preserved even after a week-long interval. Lastly, the effect of the second session was markedly diminished as compared to the effect of the first session, indicating strong anterograde learning interference. These results are interpreted in the framework
Risch, Martin; Nydegger, Urs; Risch, Lorenz
2017-01-01
In clinical practice, laboratory results are often important for making diagnostic, therapeutic, and prognostic decisions. Interpreting individual results relies on accurate reference intervals and decision limits. Despite the considerable amount of resources in clinical medicine spent on elderly patients, accurate reference intervals for the elderly are rarely available. The SENIORLAB study set out to determine reference intervals in the elderly by investigating a large variety of laboratory parameters in clinical chemistry, hematology, and immunology. The SENIORLAB study is an observational, prospective cohort study. Subjectively healthy residents of Switzerland aged 60 years and older were included for baseline examination (n = 1467), where anthropometric measurements were taken, medical history was reviewed, and a fasting blood sample was drawn under optimal preanalytical conditions. More than 110 laboratory parameters were measured, and a biobank was set up. The study participants are followed up every 3 to 5 years for quality of life, morbidity, and mortality. The primary aim is to evaluate different laboratory parameters at age-related reference intervals. The secondary aims of this study include the following: identify associations between different parameters, identify diagnostic characteristics to diagnose different circumstances, identify the prevalence of occult disease in subjectively healthy individuals, and identify the prognostic factors for the investigated outcomes, including mortality. To obtain better grounds to justify clinical decisions, specific reference intervals for laboratory parameters of the elderly are needed. Reference intervals are obtained from healthy individuals. A major obstacle when obtaining reference intervals in the elderly is the definition of health in seniors because individuals without any medical condition and any medication are rare in older adulthood. Reference intervals obtained from such individuals cannot be
The dose delivery effect of the different Beam ON interval in FFF SBRT: TrueBEAM
NASA Astrophysics Data System (ADS)
Tawonwong, T.; Suriyapee, S.; Oonsiri, S.; Sanghangthum, T.; Oonsiri, P.
2016-03-01
The purpose of this study is to determine the dose delivery effect of different Beam ON intervals in Flattening Filter Free Stereotactic Body Radiation Therapy (FFF-SBRT). Three 10MV-FFF SBRT plans (2 half-rotating Rapid Arc arcs, 9 to 10 Gray/fraction) were selected and irradiated at three different intervals (100%, 50% and 25%) using the RPM gating system. Plan verification was performed with the ArcCHECK for gamma analysis and an ionization chamber for point dose measurement. The dose delivery time of each interval was recorded. For gamma analysis (2% and 2 mm criteria), the average percent pass of all plans for the 100%, 50% and 25% intervals was 86.1±3.3%, 86.0±3.0% and 86.1±3.3%, respectively. For point dose measurement, the average ratios of each interval to the treatment planning dose were 1.012±0.015, 1.011±0.014 and 1.011±0.013 for the 100%, 50% and 25% intervals, respectively. The average dose delivery time increased from 74.3±5.0 seconds for the 100% interval to 154.3±12.6 and 347.9±20.3 seconds for the 50% and 25% intervals, respectively. Dose delivery quality was the same across the different Beam ON intervals in FFF-SBRT on the TrueBEAM. While the 100% interval represents the breath-hold treatment technique, free-breathing treatment using the RPM gating system can be delivered with confidence.
Achieving high confidence protein annotations in a sea of unknowns
NASA Astrophysics Data System (ADS)
Timmins-Schiffman, E.; May, D. H.; Noble, W. S.; Nunn, B. L.; Mikan, M.; Harvey, H. R.
2016-02-01
Increased sensitivity of mass spectrometry (MS) technology allows deep and broad insight into community functional analyses. Metaproteomics holds the promise to reveal functional responses of natural microbial communities, whereas metagenomics alone can only hint at potential functions. The complex datasets resulting from ocean MS have the potential to inform diverse realms of the biological, chemical, and physical ocean sciences, yet the extent of bacterial functional diversity and redundancy has not been fully explored. To take advantage of these impressive datasets, we need a clear bioinformatics pipeline for metaproteomics peptide identification and annotation with a database that can provide confident identifications. Researchers must consider whether it is sufficient to leverage the vast quantities of available ocean sequence data or if they must invest in site-specific metagenomic sequencing. We have sequenced, to our knowledge, the first western arctic metagenomes from the Bering Strait and the Chukchi Sea. We have addressed the long-standing question: Is a metagenome required to accurately complete metaproteomics and assess the biological distribution of metabolic functions controlling nutrient acquisition in the ocean? Two different protein databases were constructed from 1) a site-specific metagenome and 2) subarctic/arctic groups available in NCBI's non-redundant database. Multiple proteomic search strategies were employed, against each individual database and against both databases combined, to determine the algorithm and approach that yielded the best balance of high sensitivity and confident identification. Results yielded over 8200 confidently identified proteins. Our comparison of these results allows us to quantify the utility of investing resources in a metagenome versus using the constantly expanding and immediately available public databases for metaproteomic studies.
[Sources of leader's confidence in organizations].
Ikeda, Hiroshi; Furukawa, Hisataka
2006-04-01
The purpose of this study was to examine the sources of confidence that organization leaders had. As potential sources of the confidence, we focused on fulfillment of expectations made by self and others, reflection on good as well as bad job experiences, and awareness of job experiences in terms of commonality, differentiation, and multiple viewpoints. A questionnaire was administered to 170 managers of Japanese companies. Results were as follows: First, confidence in leaders was more strongly related to fulfillment of expectations made by self and others than to reflection on and awareness of job experiences. Second, the confidence was weakly related to internal processing of job experiences, in the form of commonality awareness and reflection on good job experiences. And finally, years of managerial experience had almost no relation to the confidence. These findings suggested that confidence in leaders was directly acquired from fulfillment of expectations made by self and others, rather than indirectly through internal processing of job experiences. Implications of the findings for leadership training were also discussed.
Confidence-Building Measures in Philippine Security.
1998-05-01
USAWC Strategy Research Project. Author: Lieutenant Colonel Ramon Santos, Philippine Army. Title: Confidence-Building Measures in Philippine Security. U.S. Army War College, Carlisle Barracks, PA 17013-5050.
Interpregnancy Interval and Adverse Pregnancy Outcomes: An Analysis of Successive Pregnancies.
Hanley, Gillian E; Hutcheon, Jennifer A; Kinniburgh, Brooke A; Lee, Lily
2017-03-01
To examine the association between interpregnancy interval and maternal-neonate health when matching women to their successive pregnancies to control for differences in maternal risk factors and compare these results with traditional unmatched designs. We conducted a retrospective cohort study of 38,178 women with three or more deliveries (two or greater interpregnancy intervals) between 2000 and 2015 in British Columbia, Canada. We examined interpregnancy interval (0-5, 6-11, 12-17, 18-23 [reference], 24-59, and 60 months or greater) in relation to neonatal outcomes (preterm birth [less than 37 weeks of gestation], small-for-gestational-age birth [less than the 10th centile], use of neonatal intensive care, low birth weight [less than 2,500 g]) and maternal outcomes (gestational diabetes, beginning the subsequent pregnancy obese [body mass index 30 or greater], and preeclampsia-eclampsia). We used conditional logistic regression to compare interpregnancy intervals within the same mother and unconditional (unmatched) logistic regression to enable comparison with prior research. Analyses using the traditional unmatched design showed significantly increased risks associated with short interpregnancy intervals (eg, there were 232 preterm births [12.8%] in 0-5 months compared with 501 [8.2%] in the 18-23 months reference group; adjusted odds ratio [OR] for preterm birth 1.53, 95% confidence interval [CI] 1.35-1.73). However, these risks were eliminated in within-woman matched analyses (adjusted OR for preterm birth 0.85, 95% CI 0.71-1.02). Matched results indicated that short interpregnancy intervals were significantly associated with increased risk of gestational diabetes (adjusted OR 1.35, 95% CI 1.02-1.80 for 0-5 months) and beginning the subsequent pregnancy obese (adjusted OR 1.61, 95% CI 1.05-2.45 for 0-5 months and adjusted OR 1.43, 95% CI 1.10-1.87 for 6-11 months). Previously reported associations between short interpregnancy intervals and adverse neonatal
NASA Astrophysics Data System (ADS)
Handley, John C.; Babcock, Jason S.; Pelz, Jeff B.
2003-12-01
Image evaluation tasks are often conducted using paired comparisons or ranking. To elicit interval scales, both methods rely on Thurstone's Law of Comparative Judgment, in which objects closer in psychological space are more often confused in preference comparisons by a putative discriminal random process. It is often debated whether paired comparisons and ranking yield the same interval scales. An experiment was conducted to assess scale production using paired comparisons and ranking. For this experiment a Pioneer Plasma Display and an Apple Cinema Display were used for stimulus presentation. Observers performed rank order and paired comparisons tasks on both displays. For each of five scenes, six images were created by manipulating attributes such as lightness, chroma, and hue using six different settings. The intention was to simulate the variability of a set of digital cameras or scanners. Nineteen subjects (5 females, 14 males), ranging from 19-51 years of age, participated in this experiment. Using a paired comparison model and a ranking model, scales were estimated for each display and image combination, yielding ten scale pairs, ostensibly measuring the same psychological scale. The Bradley-Terry model was used for the paired comparisons data and the Bradley-Terry-Mallows model was used for the ranking data. Each model was fit using maximum likelihood estimation and assessed using likelihood ratio tests. Approximate 95% confidence intervals were also constructed using likelihood ratios. Model fits for paired comparisons were satisfactory for all scales except those from two image/display pairs; the ranking model fit uniformly well on all data sets. Arguing from overlapping confidence intervals, we conclude that paired comparisons and ranking produce no conflicting decisions regarding the ultimate ordering of treatment preferences, but paired comparisons yield greater precision at the expense of lack of fit.
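A Bradley-Terry fit of the kind used for the paired comparisons data can be sketched with the standard minorization-maximization (MM) updates. The win matrix below is invented for illustration, and the likelihood-ratio machinery from the abstract would sit on top of a fit like this.

```python
import numpy as np

def bradley_terry(wins, n_iter=500):
    """Fit Bradley-Terry worth parameters by the classic MM algorithm.
    wins[i, j] = number of times item i was preferred over item j."""
    k = wins.shape[0]
    games = wins + wins.T                  # comparisons made between each pair
    p = np.ones(k)                         # initial worths
    for _ in range(n_iter):
        for i in range(k):
            denom = sum(games[i, j] / (p[i] + p[j]) for j in range(k) if j != i)
            p[i] = wins[i].sum() / denom   # MM update: total wins over weighted games
        p /= p.sum()                       # fix the arbitrary scale
    return p

# Invented preference counts for three stimuli; item 0 is preferred most often.
wins = np.array([[0, 8, 9],
                 [2, 0, 7],
                 [1, 3, 0]], dtype=float)
worths = bradley_terry(wins)
```

The normalized worths give the interval-scale positions; in the study, likelihood-ratio profiles around these estimates yield the approximate 95% confidence intervals used to compare the two displays.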
On-line confidence monitoring during decision making.
Dotan, Dror; Meyniel, Florent; Dehaene, Stanislas
2018-02-01
Humans can readily assess their degree of confidence in their decisions. Two models of confidence computation have been proposed: post hoc computation using post-decision variables and heuristics, versus online computation using continuous assessment of evidence throughout the decision-making process. Here, we arbitrate between these theories by continuously monitoring finger movements during a manual sequential decision-making task. Analysis of finger kinematics indicated that subjects kept separate online records of evidence and confidence: finger deviation continuously reflected the ongoing accumulation of evidence, whereas finger speed continuously reflected the momentary degree of confidence. Furthermore, end-of-trial finger speed predicted the post-decisional subjective confidence rating. These data indicate that confidence is computed on-line, throughout the decision process. Speed-confidence correlations were previously interpreted as a post-decision heuristic, whereby slow decisions decrease subjective confidence, but our results suggest an adaptive mechanism that involves the opposite causality: by slowing down when unconfident, participants gain time to improve their decisions.
Binary Interval Search: a scalable algorithm for counting interval intersections.
Layer, Ryan M; Skadron, Kevin; Robins, Gabriel; Hall, Ira M; Quinlan, Aaron R
2013-01-01
The comparison of diverse genomic datasets is fundamental to understanding genome biology. Researchers must explore many large datasets of genome intervals (e.g. genes, sequence alignments) to place their experimental results in a broader context and to make new discoveries. Relationships between genomic datasets are typically measured by identifying intervals that intersect, that is, they overlap and thus share a common genome interval. Given the continued advances in DNA sequencing technologies, efficient methods for measuring statistically significant relationships between many sets of genomic features are crucial for future discovery. We introduce the Binary Interval Search (BITS) algorithm, a novel and scalable approach to interval set intersection. We demonstrate that BITS outperforms existing methods at counting interval intersections. Moreover, we show that BITS is intrinsically suited to parallel computing architectures, such as graphics processing units, by illustrating its utility for efficient Monte Carlo simulations measuring the significance of relationships between sets of genomic intervals. https://github.com/arq5x/bits.
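The two-binary-search idea behind BITS can be sketched in a few lines: an interval fails to intersect the query only if it starts after the query ends or ends before the query starts, so a count needs just two searches over independently sorted start and end coordinates. This is the core counting trick only, not the full BITS implementation; closed intervals are assumed here, and BED-style half-open coordinates would shift the comparisons.

```python
from bisect import bisect_left, bisect_right

def count_intersections(query, starts, ends):
    """Count database intervals intersecting the closed query [qs, qe].
    `starts` and `ends` hold the interval starts and ends, each sorted
    independently; only two binary searches are needed per query."""
    qs, qe = query
    n = len(starts)
    start_after = n - bisect_right(starts, qe)  # intervals starting after the query ends
    end_before = bisect_left(ends, qs)          # intervals ending before the query starts
    return n - start_after - end_before

# Toy database of three intervals: [1, 5], [4, 10], [20, 30]
starts = sorted([1, 4, 20])
ends = sorted([5, 10, 30])
```

Because each query is independent and touches only two sorted arrays, many queries parallelize trivially, which is why the abstract highlights GPU-friendly Monte Carlo significance testing.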
Pediatric reference intervals for random urine calcium, phosphorus and total protein.
Slev, Patricia R; Bunker, Ashley M; Owen, William E; Roberts, William L
2010-09-01
The aim of this study was to establish age appropriate reference intervals for calcium (Ca), phosphorus (P) and total protein (UTP) in random urine samples. All analytes were measured using the Roche MODULAR P analyzer and normalized to creatinine (Cr). Our study cohort consisted of 674 boys and 728 girls between 7 and 17 years old (y.o.), which allowed us to determine the central 95% reference intervals with 90% confidence intervals by non-parametric analysis partitioned by both gender and 2-year age intervals for each analyte [i.e. boys in age group 7-9 years (7-9 boys); girls in age group 7-9 years (7-9 girls), etc.]. Results for the upper limits of the central 95% reference interval were: for Ca/Cr, 0.27 (16,17 y.o.) to 0.46 mg/mg (7-9 y.o.) for the girls and 0.26 (16,17 y.o.) to 0.43 mg/mg (7-9 y.o.) for the boys; for P/Cr, 0.85 (16,17 y.o.) to 1.44 mg/mg (7-9 y.o.) for the girls and 0.87 (16,17 y.o.) to 1.68 mg/mg (7-9 y.o.) for the boys; for UTP/Cr, 0.30 (7-9 y.o.) to 0.34 mg/mg (10-12 y.o.) for the girls and 0.19 (16,17, y.o.) to 0.26 mg/mg (13-15 y.o.) for the boys. Upper reference limits decreased with increasing age, and age was a statistically significant variable for all analytes. Eight separate age- and gender-specific reference intervals are proposed per analyte.
NASA Technical Reports Server (NTRS)
Sadeh, D.; Shannon, D. C.; Abboud, S.; Akselrod, S.; Cohen, R. J.
1987-01-01
The ability of the autonomic nervous system to alter the QT interval in response to heart rate changes is essential to cardiovascular control. An accurate way to determine the relation between QT intervals and their corresponding RR intervals is described. A computer algorithm measures the RR intervals by digitally filtering and cross-correlating the QRS sections of consecutive waveforms. The QT interval is calculated by choosing a section of the ECG that includes the T wave and cross-correlating it with all the consecutive T waves. At least 4000 pairs of QT-RR intervals are computed for each subject, and a best-fit correlation function determines the relation between the QT and RR intervals. This technique makes it possible to establish a precise correlation between RR and QT in order to distinguish between control and SIDS babies.
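The cross-correlation step described above, sliding a template (a QRS or T-wave section) along the signal to find where consecutive waveforms line up, can be sketched with NumPy. The waveform here is synthetic; a real ECG pipeline would add digital filtering and baseline correction first.

```python
import numpy as np

def best_lag(template, signal):
    """Offset at which the template best matches the signal,
    found by sliding cross-correlation."""
    corr = np.correlate(signal, template, mode="valid")
    return int(np.argmax(corr))

rng = np.random.default_rng(1)
template = np.sin(np.linspace(0.0, np.pi, 40))  # idealized T-wave-like bump
signal = 0.05 * rng.normal(size=300)            # baseline noise
signal[130:170] += template                     # embed the wave at sample 130
lag = best_lag(template, signal)                # recovers an offset near 130
```

Applying this to successive beats gives the sample offsets between consecutive QRS complexes (the RR intervals) and between aligned T waves (the QT intervals), from which the QT-RR pairs are assembled.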
Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient
ERIC Educational Resources Information Center
Krishnamoorthy, K.; Xia, Yanping
2008-01-01
The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…
Sample size, confidence, and contingency judgement.
Clément, Mélanie; Mercier, Pierre; Pastò, Luigi
2002-06-01
According to statistical models, the acquisition function of contingency judgement is due to confidence increasing with sample size. According to associative models, the function reflects the accumulation of associative strength on which the judgement is based. Which view is right? Thirty university students assessed the relation between a fictitious medication and a symptom of skin discoloration in conditions that varied sample size (4, 6, 8 or 40 trials) and contingency (delta P = .20, .40, .60 or .80). Confidence was also collected. Contingency judgement was lower for smaller samples, while confidence level correlated inversely with sample size. This dissociation between contingency judgement and confidence contradicts the statistical perspective.
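The contingency parameter delta P manipulated above is conventionally computed from a 2x2 table of cue/outcome frequencies; a minimal sketch (the a-d cell labels are the standard convention, not taken from the paper):

```python
def delta_p(a, b, c, d):
    """delta P = P(outcome | cue) - P(outcome | no cue), from a 2x2 table:
    a = cue & outcome,    b = cue & no outcome,
    c = no cue & outcome, d = no cue & no outcome."""
    return a / (a + b) - c / (c + d)
```

For example, 8 medication trials with 6 symptom occurrences versus 8 no-medication trials with 1 occurrence gives delta P = 0.75 - 0.125 = 0.625.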
ERIC Educational Resources Information Center
Warren, Richard Daniel
2012-01-01
The purpose of this research was to investigate the effects of including adaptive confidence strategies in instructionally sound computer-assisted instruction (CAI) on learning and learner confidence. Seventy-one general educational development (GED) learners recruited from various GED learning centers at community colleges in the southeast United…
Is fear perception special? Evidence at the level of decision-making and subjective confidence.
Koizumi, Ai; Mobbs, Dean; Lau, Hakwan
2016-11-01
Fearful faces are believed to be prioritized in visual perception. However, it is unclear whether the processing of low-level facial features alone can facilitate such prioritization or whether higher-level mechanisms also contribute. We examined potential biases for fearful face perception at the levels of perceptual decision-making and perceptual confidence. We controlled for lower-level visual processing capacity by titrating luminance contrasts of backward masks, and the emotional intensity of fearful, angry and happy faces. Under these conditions, participants showed liberal biases in perceiving a fearful face, in both detection and discrimination tasks. This effect was stronger among individuals with reduced density in dorsolateral prefrontal cortex, a region linked to perceptual decision-making. Moreover, participants reported higher confidence when they accurately perceived a fearful face, suggesting that fearful faces may have privileged access to consciousness. Together, the results suggest that mechanisms in the prefrontal cortex contribute to making fearful face perception special. © The Author (2016). Published by Oxford University Press.
Kleijn, Roelco J.; van Winden, Wouter A.; Ras, Cor; van Gulik, Walter M.; Schipper, Dick; Heijnen, Joseph J.
2006-01-01
In this study we developed a new method for accurately determining the pentose phosphate pathway (PPP) split ratio, an important metabolic parameter in the primary metabolism of a cell. This method is based on simultaneous feeding of unlabeled glucose and trace amounts of [U-13C]gluconate, followed by measurement of the mass isotopomers of the intracellular metabolites surrounding the 6-phosphogluconate node. The gluconate tracer method was used with a penicillin G-producing chemostat culture of the filamentous fungus Penicillium chrysogenum. For comparison, a 13C-labeling-based metabolic flux analysis (MFA) was performed for glycolysis and the PPP of P. chrysogenum. For the first time mass isotopomer measurements of 13C-labeled primary metabolites are reported for P. chrysogenum and used for a 13C-based MFA. Estimation of the PPP split ratio of P. chrysogenum at a growth rate of 0.02 h−1 yielded comparable values for the gluconate tracer method and the 13C-based MFA method, 51.8% and 51.1%, respectively. A sensitivity analysis of the estimated PPP split ratios showed that the 95% confidence interval was almost threefold smaller for the gluconate tracer method than for the 13C-based MFA method (46.0 to 56.5% and 40.0 to 63.5%, respectively). From these results we concluded that the gluconate tracer method permits accurate determination of the PPP split ratio but provides no information about the remaining cellular metabolism, while the 13C-based MFA method permits estimation of multiple fluxes but provides a less accurate estimate of the PPP split ratio. PMID:16820467
Binary Interval Search: a scalable algorithm for counting interval intersections
Layer, Ryan M.; Skadron, Kevin; Robins, Gabriel; Hall, Ira M.; Quinlan, Aaron R.
2013-01-01
Motivation: The comparison of diverse genomic datasets is fundamental to understand genome biology. Researchers must explore many large datasets of genome intervals (e.g. genes, sequence alignments) to place their experimental results in a broader context and to make new discoveries. Relationships between genomic datasets are typically measured by identifying intervals that intersect, that is, they overlap and thus share a common genome interval. Given the continued advances in DNA sequencing technologies, efficient methods for measuring statistically significant relationships between many sets of genomic features are crucial for future discovery. Results: We introduce the Binary Interval Search (BITS) algorithm, a novel and scalable approach to interval set intersection. We demonstrate that BITS outperforms existing methods at counting interval intersections. Moreover, we show that BITS is intrinsically suited to parallel computing architectures, such as graphics processing units by illustrating its utility for efficient Monte Carlo simulations measuring the significance of relationships between sets of genomic intervals. Availability: https://github.com/arq5x/bits. Contact: arq5x@virginia.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23129298
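The core of the binary-search idea — counting intersections with two binary searches per query against presorted arrays of database starts and ends, instead of walking the intervals — can be sketched as follows (closed intervals assumed; this is an illustration of the counting principle, not the BITS implementation itself):

```python
import bisect

def count_intersections(db, queries):
    """Count query/database interval intersections using two sorted
    arrays. An interval fails to intersect [qs, qe] only if it ends
    before qs or starts after qe, so subtract both counts."""
    starts = sorted(s for s, e in db)
    ends = sorted(e for s, e in db)
    n = len(db)
    total = 0
    for qs, qe in queries:
        ends_before = bisect.bisect_left(ends, qs)                 # e < qs
        starts_after = n - bisect.bisect_right(starts, qe)         # s > qe
        total += n - ends_before - starts_after
    return total
```

Each query costs two O(log n) searches, which is what makes the approach scale to millions of intervals and map naturally onto parallel hardware.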
Glaser, Michelle S.; Webber, Mayris P.; Zeig-Owens, Rachel; Weakley, Jessica; Liu, Xiaoxue; Ye, Fen; Cohen, Hillel W.; Aldrich, Thomas K.; Kelly, Kerry J.; Nolan, Anna; Weiden, Michael D.; Prezant, David J.; Hall, Charles B.
2014-01-01
Respiratory disorders are associated with occupational and environmental exposures. The latency period between exposure and disease onset remains uncertain. The World Trade Center (WTC) disaster presents a unique opportunity to describe the latency period for obstructive airway disease (OAD) diagnoses. This prospective cohort study of New York City firefighters compared the timing and incidence of physician-diagnosed OAD relative to WTC exposure. Exposure was categorized by WTC arrival time as high (on the morning of September 11, 2001), moderate (after noon on September 11, 2001, or on September 12, 2001), or low (during September 13–24, 2001). We modeled relative rates and 95% confidence intervals of OAD incidence by exposure over the first 5 years after September 11, 2001, estimating the times of change in the relative rate with change point models. We observed a change point at 15 months after September 11, 2001. Before 15 months, the relative rate for the high- versus low-exposure group was 3.96 (95% confidence interval: 2.51, 6.26) and thereafter, it was 1.76 (95% confidence interval: 1.26, 2.46). Incident OAD was associated with WTC exposure for at least 5 years after September 11, 2001. There were higher rates of new-onset OAD among the high-exposure group during the first 15 months and, to a lesser extent, throughout follow-up. This difference in relative rate by exposure occurred despite full and free access to health care for all WTC-exposed firefighters, demonstrating the persistence of WTC-associated OAD risk. PMID:24980522
Confidant Relations of the Aged.
ERIC Educational Resources Information Center
Tigges, Leann M.; And Others
The confidant relationship is a qualitatively distinct dimension of the emotional support system of the aged, yet the composition of the confidant network has been largely neglected in research on aging. Persons (N=940) 60 years of age and older were interviewed about their socio-environmental setting. From the enumeration of their relatives,…
A microbial clock provides an accurate estimate of the postmortem interval in a mouse model system
Metcalf, Jessica L; Wegener Parfrey, Laura; Gonzalez, Antonio; Lauber, Christian L; Knights, Dan; Ackermann, Gail; Humphrey, Gregory C; Gebert, Matthew J; Van Treuren, Will; Berg-Lyons, Donna; Keepers, Kyle; Guo, Yan; Bullard, James; Fierer, Noah; Carter, David O; Knight, Rob
2013-01-01
Establishing the time since death is critical in every death investigation, yet existing techniques are susceptible to a range of errors and biases. For example, forensic entomology is widely used to assess the postmortem interval (PMI), but errors can range from days to months. Microbes may provide a novel method for estimating PMI that avoids many of these limitations. Here we show that postmortem microbial community changes are dramatic, measurable, and repeatable in a mouse model system, allowing PMI to be estimated within approximately 3 days over 48 days. Our results provide a detailed understanding of bacterial and microbial eukaryotic ecology within a decomposing corpse system and suggest that microbial community data can be developed into a forensic tool for estimating PMI. DOI: http://dx.doi.org/10.7554/eLife.01104.001 PMID:24137541
He, Qili; Su, Guoming; Liu, Keliang; Zhang, Fangcheng; Jiang, Yong; Gao, Jun; Liu, Lida; Jiang, Zhongren; Jin, Minwu; Xie, Huiping
2017-01-01
Hematologic and biochemical analytes of Sprague-Dawley rats are commonly used to determine effects that were induced by treatment and to evaluate organ dysfunction in toxicological safety assessments, but reference intervals have not been well established for these analytes. Reference intervals as presently defined for these analytes in Sprague-Dawley rats have neither used internationally recommended statistical methods nor been stratified by sex. Thus, we aimed to establish sex-specific reference intervals for hematologic and biochemical parameters in Sprague-Dawley rats according to the Clinical and Laboratory Standards Institute C28-A3 and American Society for Veterinary Clinical Pathology guidelines. Hematology and biochemistry blood samples were collected from 500 healthy Sprague-Dawley rats (250 males and 250 females) in the control groups. We measured 24 hematologic analytes with the Sysmex XT-2100i analyzer and 9 biochemical analytes with the Olympus AU400 analyzer. We then determined statistically relevant sex partitions and calculated reference intervals, including corresponding 90% confidence intervals, using the nonparametric rank percentile method. We observed that most hematologic and biochemical analytes of Sprague-Dawley rats were significantly influenced by sex. Males had higher hemoglobin, hematocrit, red blood cell count, red cell distribution width, mean corpuscular volume, mean corpuscular hemoglobin, white blood cell count, neutrophils, lymphocytes, monocytes, percentage of neutrophils, percentage of monocytes, alanine aminotransferase, aspartate aminotransferase, and triglycerides compared to females. Females had higher mean corpuscular hemoglobin concentration, plateletcrit, platelet count, eosinophils, percentage of lymphocytes, percentage of eosinophils, creatinine, glucose, total cholesterol and urea compared to males. Sex partition was required for most hematologic and biochemical analytes in Sprague-Dawley rats. We established sex-specific reference
Moulki, Naeem; Kealhofer, Jessica V; Benditt, David G; Gravely, Amy; Vakil, Kairav; Garcia, Santiago; Adabag, Selcuk
2018-06-16
Bifascicular block and prolonged PR interval on the electrocardiogram (ECG) have been associated with complete heart block and sudden cardiac death. We sought to determine if cardiac implantable electronic devices (CIED) improve survival in these patients. We assessed survival in relation to CIED status among 636 consecutive patients with bifascicular block and prolonged PR interval on the ECG. In survival analyses, CIED was considered as a time-varying covariate. Average age was 76 ± 9 years, and 99% of the patients were men. A total of 167 (26%) underwent CIED (127 pacemaker only) implantation at baseline (n = 23) or during follow-up (n = 144). During 5.4 ± 3.8 years of follow-up, 83 (13%) patients developed complete or high-degree atrioventricular block and 375 (59%) died. Patients with a CIED had a longer survival compared to those without a CIED in the traditional, static analysis (log-rank p < 0.0001) but not when CIED was considered as a time-varying covariate (log-rank p = 0.76). In the multivariable model, patients with a CIED had a 34% lower risk of death (hazard ratio 0.66, 95% confidence interval 0.52-0.83; p = 0.001) than those without CIED in the traditional analysis but not in the time-varying covariate analysis (hazard ratio 1.05, 95% confidence interval 0.79-1.38; p = 0.76). Results did not change in the subgroup with a pacemaker only. Bifascicular block and prolonged PR interval on ECG are associated with a high incidence of complete atrioventricular block and mortality. However, CIED implantation does not have a significant influence on survival when time-varying nature of CIED implantation is considered.
The 2012 Retirement Confidence Survey: job insecurity, debt weigh on retirement confidence, savings.
Helman, Ruth; Copeland, Craig; VanDerhei, Jack
2012-03-01
Americans' confidence in their ability to retire comfortably is stagnant at historically low levels. Just 14 percent are very confident they will have enough money to live comfortably in retirement (statistically equivalent to the low of 13 percent measured in 2011 and 2009). Employment insecurity looms large: Forty-two percent identify job uncertainty as the most pressing financial issue facing most Americans today. Worker confidence about having enough money to pay for medical expenses and long-term care expenses in retirement remains well below their confidence levels for paying basic expenses. Many workers report they have virtually no savings and investments. In total, 60 percent of workers report that the total value of their household's savings and investments, excluding the value of their primary home and any defined benefit plans, is less than $25,000. Twenty-five percent of workers in the 2012 Retirement Confidence Survey say the age at which they expect to retire has changed in the past year. In 1991, 11 percent of workers said they expected to retire after age 65, and by 2012 that has grown to 37 percent. Regardless of those retirement age expectations, and consistent with prior RCS findings, half of current retirees surveyed say they left the work force unexpectedly due to health problems, disability, or changes at their employer, such as downsizing or closure. Those already in retirement tend to express higher levels of confidence than current workers about several key financial aspects of retirement. Retirees report they are significantly more reliant on Social Security as a major source of their retirement income than current workers expect to be. Although 56 percent of workers expect to receive benefits from a defined benefit plan in retirement, only 33 percent report that they and/or their spouse currently have such a benefit with a current or previous employer. More than half of workers (56 percent) report they and/or their spouse have not tried
The Time Is Up: Compression of Visual Time Interval Estimations of Bimodal Aperiodic Patterns
Duarte, Fabiola; Lemus, Luis
2017-01-01
The ability to estimate time intervals subserves many of our behaviors and perceptual experiences. However, it is not clear how aperiodic (AP) stimuli affect our perception of time intervals across sensory modalities. To address this question, we evaluated the human capacity to discriminate between two acoustic (A), visual (V) or audiovisual (AV) time intervals of trains of scattered pulses. We first measured the periodicity of those stimuli and then sought for correlations with the accuracy and reaction times (RTs) of the subjects. We found that, for all time intervals tested in our experiment, the visual system consistently perceived AP stimuli as being shorter than the periodic (P) ones. In contrast, such a compression phenomenon was not apparent during auditory trials. Our conclusions are: first, the subjects exposed to P stimuli are more likely to measure their durations accurately. Second, perceptual time compression occurs for AP visual stimuli. Lastly, AV discriminations are determined by A dominance rather than by AV enhancement. PMID:28848406
Pigeons' Choices between Fixed-Interval and Random-Interval Schedules: Utility of Variability?
ERIC Educational Resources Information Center
Andrzejewski, Matthew E.; Cardinal, Claudia D.; Field, Douglas P.; Flannery, Barbara A.; Johnson, Michael; Bailey, Kathleen; Hineline, Philip N.
2005-01-01
Pigeons' choosing between fixed-interval and random-interval schedules of reinforcement was investigated in three experiments using a discrete-trial procedure. In all three experiments, the random-interval schedule was generated by sampling a probability distribution at an interval (and in multiples of the interval) equal to that of the…
Hedne, Mikael R; Norman, Elisabeth; Metcalfe, Janet
2016-01-01
The focus of the current study is on intuitive feelings of insight during problem solving and the extent to which such feelings are predictive of successful problem solving. We report the results from an experiment (N = 51) that applied a procedure where the to-be-solved problems were 32 short (15 s) video recordings of magic tricks. The procedure included metacognitive ratings similar to the "warmth ratings" previously used by Metcalfe and colleagues, as well as confidence ratings. At regular intervals during problem solving, participants indicated the perceived closeness to the correct solution. Participants also indicated directly whether each problem was solved by insight or not. Problems that people claimed were solved by insight were characterized by higher accuracy and higher confidence than noninsight solutions. There was no difference between the two types of solution in warmth ratings, however. Confidence ratings were more strongly associated with solution accuracy for noninsight than insight trials. Moreover, for insight trials the participants were more likely to repeat their incorrect solutions on a subsequent recognition test. The results have implications for understanding people's metacognitive awareness of the cognitive processes involved in problem solving. They also have general implications for our understanding of how intuition and insight are related.
Confidence Leak in Perceptual Decision Making.
Rahnev, Dobromir; Koizumi, Ai; McCurdy, Li Yan; D'Esposito, Mark; Lau, Hakwan
2015-11-01
People live in a continuous environment in which the visual scene changes on a slow timescale. It has been shown that to exploit such environmental stability, the brain creates a continuity field in which objects seen seconds ago influence the perception of current objects. What is unknown is whether a similar mechanism exists at the level of metacognitive representations. In three experiments, we demonstrated a robust intertask confidence leak-that is, confidence in one's response on a given task or trial influencing confidence on the following task or trial. This confidence leak could not be explained by response priming or attentional fluctuations. Better ability to modulate confidence leak predicted higher capacity for metacognition as well as greater gray matter volume in the prefrontal cortex. A model based on normative principles from Bayesian inference explained the results by postulating that observers subjectively estimate the perceptual signal strength in a stable environment. These results point to the existence of a novel metacognitive mechanism mediated by regions in the prefrontal cortex. © The Author(s) 2015.
Ren, Kun; Jihong, Qu
2014-01-01
Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop various reasonable plans to schedule power generation efficiently. However, future data such as wind power output and power load cannot be accurately predicted, and the complex multiobjective scheduling model is nonlinear; an accurate solution to such a problem is therefore very difficult to achieve. This paper presents an interval programming model with a 2-step optimization algorithm to solve multiobjective dispatching. Initially, we represented the future data as interval numbers and simplified the objective function to a linear programming problem to search for feasible, preliminary solutions with which to construct the Pareto set. Then the simulated annealing method was used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performed reasonably well in terms of both operating efficiency and precision. PMID:24895663
Ingul, Charlotte B; Dias, Katrin A; Tjonna, Arnt E; Follestad, Turid; Hosseini, Mansoureh S; Timilsina, Anita S; Hollekim-Strand, Siri M; Ro, Torstein B; Davies, Peter S W; Cain, Peter A; Leong, Gary M; Coombes, Jeff S
2018-02-13
High intensity interval training (HIIT) confers superior cardiovascular health benefits to moderate intensity continuous training (MICT) in adults and may be efficacious for improving diminished cardiac function in obese children. The aim of this study was to compare the effects of HIIT, MICT and nutrition advice interventions on resting left ventricular (LV) peak systolic tissue velocity (S') in obese children. Ninety-nine obese children were randomised into one of three 12-week interventions, 1) HIIT [n = 33, 4 × 4 min bouts at 85-95% maximum heart rate (HRmax), 3 times/week] and nutrition advice, 2) MICT [n = 32, 44 min at 60-70% HRmax, 3 times/week] and nutrition advice, and 3) nutrition advice only (nutrition) [n = 34]. Twelve weeks of HIIT and MICT were equally efficacious, but superior to nutrition, for normalising resting LV S' in children with obesity (estimated mean difference 1.0 cm/s, 95% confidence interval 0.5 to 1.6 cm/s, P < 0.001; estimated mean difference 0.7 cm/s, 95% confidence interval 0.2 to 1.3 cm/s, P = 0.010, respectively). Twelve weeks of HIIT and MICT were superior to nutrition advice only for improving resting LV systolic function in obese children. Copyright © 2017 Elsevier Inc. All rights reserved.
Impact of Increasing Inter-pregnancy Interval on Maternal and Infant Health
Wendt, Amanda; Gibbs, Cassandra M.; Peters, Stacey; Hogue, Carol J.
2015-01-01
Short inter-pregnancy intervals (IPIs) have been associated with adverse maternal and infant health outcomes in the literature. However, many studies in this area have been lacking in quality and appropriate control for confounders known to be associated with both short IPIs and poor outcomes. The objective of this systematic review was to assess this relationship using more rigorous criteria, based on GRADE (Grading of Recommendations Assessment, Development and Evaluation) methodology. We found too few higher-quality studies of the impact of IPIs (measured as the time between the birth of a previous child and conception of the next child) on maternal health to reach conclusions about maternal nutrition, morbidity or mortality. However, the evidence for infant effects justified meta-analyses. We found significant impacts of short IPIs for extreme preterm birth [<6 m adjusted odds ratio (aOR): 1.58 [95% confidence interval (CI) 1.40, 1.78], 6–11 m aOR: 1.23 [1.03, 1.46
Targeting Low Career Confidence Using the Career Planning Confidence Scale
ERIC Educational Resources Information Center
McAuliffe, Garrett; Jurgens, Jill C.; Pickering, Worth; Calliotte, James; Macera, Anthony; Zerwas, Steven
2006-01-01
The authors describe the development and validation of a test of career planning confidence that makes possible the targeting of specific problem issues in employment counseling. The scale, developed using a rational process and the authors' experience with clients, was tested for criterion-related validity against 2 other measures. The scale…
Scribbans, T D; Berg, K; Narazaki, K; Janssen, I; Gurd, B J
2015-09-01
There is currently little information regarding the ability of metabolic prediction equations to accurately predict oxygen uptake and exercise intensity from heart rate (HR) during intermittent sport. The purpose of the present study was to develop and cross-validate equations appropriate for accurately predicting oxygen cost (VO2) and energy expenditure from HR during intermittent sport participation. Eleven healthy adult males (19.9±1.1 yrs) were recruited to establish the relationship between %VO2peak and %HRmax during low-intensity steady state endurance (END), moderate-intensity interval (MOD) and high-intensity interval exercise (HI), as performed on a cycle ergometer. Three equations (END, MOD, and HI) for predicting %VO2peak based on %HRmax were developed. HR and VO2 were directly measured during basketball games (6 male, 20.8±1.0 yrs; 6 female, 20.0±1.3 yrs) and volleyball drills (12 female; 20.8±1.0 yrs). Comparisons were made between measured and predicted VO2 and energy expenditure using the 3 equations developed and 2 previously published equations. The END and MOD equations accurately predicted VO2 and energy expenditure, while the HI equation underestimated, and the previously published equations systematically overestimated, VO2 and energy expenditure. Intermittent sport VO2 and energy expenditure can be accurately predicted from heart rate data using either the END (%VO2peak = %HRmax x 1.008 - 17.17) or MOD (%VO2peak = %HRmax x 1.2 - 32) equations. These 2 simple equations provide an accessible and cost-effective method for accurate estimation of exercise intensity and energy expenditure during intermittent sport.
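The two recommended equations can be applied directly; the function name and interface below are illustrative, but the coefficients are the ones reported in the abstract.

```python
def predict_vo2_pct(hr_pct, mode="END"):
    """Predict %VO2peak from %HRmax with the reported equations:
    END (steady state endurance): %VO2peak = %HRmax * 1.008 - 17.17
    MOD (moderate-intensity interval): %VO2peak = %HRmax * 1.2 - 32"""
    slope, intercept = {"END": (1.008, 17.17), "MOD": (1.2, 32.0)}[mode]
    return hr_pct * slope - intercept
```

For example, at 80% of HRmax the END equation predicts roughly 63% of VO2peak and the MOD equation roughly 64%; multiplying the predicted VO2 by measured body mass and an energy equivalent then yields energy expenditure.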
Perin, Jamie; Walker, Neff
2015-01-01
intervals less than 18 months, standard regression adjustment for confounding factors estimated a risk ratio for neonatal mortality of 2.28 (95% confidence interval: 2.18–2.37). This same effect estimated within mother is 1.57 (95% confidence interval: 1.52–1.63), a decline of almost one-third in the effect on neonatal mortality. Conclusions: Neonatal, infant, and child mortality are strongly and significantly related to preceding birth interval, where births within a short interval of time after the previous birth have increased mortality. Previous analyses have demonstrated this relationship on average across all births; however, women who have short spaces between births are different from women with long spaces. Among women 35 years and older where a comparison of birth spaces within mother is possible, we find a much reduced although still significant effect of short birth spaces on child mortality. PMID:26562139
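Confidence intervals for risk ratios like those quoted above are conventionally formed on the log scale. The study's own variance computation is not given, so the sketch below only illustrates the standard log-scale construction, including the common trick of recovering the implied standard error from a published interval.

```python
import math

def se_from_ci(lo, hi, z=1.96):
    """Standard error of log(RR) implied by a reported 95% CI."""
    return (math.log(hi) - math.log(lo)) / (2 * z)

def ratio_ci(rr, se, z=1.96):
    """95% confidence interval for a ratio estimate,
    computed as exp(log(rr) +/- z * se)."""
    return (math.exp(math.log(rr) - z * se),
            math.exp(math.log(rr) + z * se))
```

For the reported risk ratio of 2.28 (2.18–2.37), the implied standard error of log RR is about 0.021, which is why the interval is nearly symmetric around the point estimate.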
ERIC Educational Resources Information Center
Zakszeski, Brittany N.; Hojnoski, Robin L.; Wood, Brenna K.
2017-01-01
Classroom engagement is important to young children's academic and social development. Accurate methods of capturing this behavior are needed to inform and evaluate intervention efforts. This study compared the accuracy of interval durations (i.e., 5 s, 10 s, 15 s, 20 s, 30 s, and 60 s) of momentary time sampling (MTS) in approximating the…
Specific Immunoglobulin (Ig) G Reference Intervals for Common Food, Insect, and Mold Allergens.
Martins, Thomas B; Bandhauer, Michael E; Wilcock, Diane M; Hill, Harry R; Slev, Patricia R
2016-12-01
The clinical utility of serum IgG measurement in the diagnosis of allergy and food-induced hypersensitivity has been largely discredited. Recent studies, however, have shown that specific IgG can inhibit IgE mediated allergies, and may play a role in allergen specific desensitization. Accurate reference intervals for IgG specific allergens have not been widely established and are needed for better interpretation of serum antibody concentrations. In this study we established 64 IgG reference intervals for 48 common food allergens, 5 venoms, and 11 molds. Specific IgG concentrations were determined employing an automated fluorescent enzyme immunoassay on serum samples from 130 normal adults (65 males and 65 females), age range 18-69 y, mean 37.3 y. The lower reference interval limit for all allergens tested (n=64) was <2 mcg/mL. The median upper reference interval value for all 64 allergens was 12.9 mcg/mL, with Tuna (f40) having the lowest upper interval limit at 3.8 mcg/mL, and the mold Setomelanomma rostrate (m8) demonstrating the highest upper interval limit at 131 mcg/mL. The considerable variation observed among the upper reference interval limits emphasizes the need for the establishment of allergen specific ranges for IgG. These newly established ranges should be a useful aid for clinicians in the interpretation of laboratory serum IgG results. © 2016 by the Association of Clinical Scientists, Inc.
Optimal Wind Power Uncertainty Intervals for Electricity Market Operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Ying; Zhou, Zhi; Botterud, Audun
It is important to select an appropriate uncertainty level of the wind power forecast for power system scheduling and electricity market operation. Traditional methods hedge against a predefined level of wind power uncertainty, such as a specific confidence interval or uncertainty set, which leaves open the question of how best to select the appropriate uncertainty level. To bridge this gap, this paper proposes a model to optimize the forecast uncertainty intervals of wind power for power system scheduling problems, with the aim of achieving the best trade-off between economics and reliability. We then reformulate and linearize the model into a mixed-integer linear programming (MILP) problem without strong assumptions on the shape of the probability distribution. To investigate the impacts on cost, reliability, and prices in an electricity market, we apply the proposed model to a two-settlement electricity market based on a six-bus test system and on a power system representing the U.S. state of Illinois. The results show that the proposed method not only helps to balance the economics and reliability of power system scheduling, but also helps to stabilize energy prices in electricity market operation.
Helman, Ruth; Copeland, Craig; VanDerhei, Jack
2009-04-01
RECORD LOW CONFIDENCE LEVELS: Workers who say they are very confident about having enough money for a comfortable retirement this year hit the lowest level in 2009 (13 percent) since the Retirement Confidence Survey started asking the question in 1993, continuing a two-year decline. Retirees also posted a new low in confidence about having a financially secure retirement, with only 20 percent now saying they are very confident (down from 41 percent in 2007). THE ECONOMY, INFLATION, COST OF LIVING ARE THE BIG CONCERNS: Not surprisingly, workers overall who have lost confidence over the past year about affording a comfortable retirement most often cite the recent economic uncertainty, inflation, and the cost of living as primary factors. In addition, certain negative experiences, such as job loss or a pay cut, loss of retirement savings, or an increase in debt, almost always contribute to loss of confidence among those who experience them. RETIREMENT EXPECTATIONS DELAYED: Workers apparently expect to work longer because of the economic downturn: 28 percent of workers in the 2009 RCS say the age at which they expect to retire has changed in the past year. Of those, the vast majority (89 percent) say that they have postponed retirement with the intention of increasing their financial security. Nevertheless, the median (mid-point) worker expects to retire at age 65, with 21 percent planning to push on into their 70s. The median retiree actually retired at age 62, and 47 percent of retirees say they retired sooner than planned. WORKING IN RETIREMENT: More workers are also planning to supplement their income in retirement by working for pay. The percentage of workers planning to work after they retire has increased to 72 percent in 2009 (up from 66 percent in 2007). This compares with 34 percent of retirees who report they actually worked for pay at some time during their retirement. GREATER WORRY ABOUT BASIC AND HEALTH EXPENSES: Workers who say they are very confident in
Albarracín, Dolores; Mitchell, Amy L.
2016-01-01
This series of studies identified individuals who chronically believe that they can successfully defend their attitudes from external attack and investigated the consequences of this individual difference for selective exposure to attitude-incongruent information and, ultimately, attitude change. Studies 1 and 2 validated a measure of defensive confidence as an individual difference that is unidimensional, distinct from other personality measures, reliable over a 2-week interval, and organized as a trait that generalizes across various personal and social issues. Studies 3 and 4 provided evidence that defensive confidence decreases preference for proattitudinal information, therefore inducing greater reception of counterattitudinal materials. Study 5 demonstrated that people who are high in defensive confidence are more likely to change their attitudes as a result of exposure to counterattitudinal information and examined the perceptions that mediate this important phenomenon. PMID:15536240
2014-01-01
Background Establishment of haematological and biochemical reference intervals is important to assess health of animals on individual and population level. Reference intervals for 13 haematological and 34 biochemical variables were established based on 88 apparently healthy free-ranging brown bears (39 males and 49 females) in Sweden. The animals were chemically immobilised by darting from a helicopter with a combination of medetomidine, tiletamine and zolazepam in April and May 2006–2012 in the county of Dalarna, Sweden. Venous blood samples were collected during anaesthesia for radio collaring and marking for ecological studies. For each of the variables, the reference interval was described based on the 95% confidence interval, and differences due to host characteristics sex and age were included if detected. To our knowledge, this is the first report of reference intervals for free-ranging brown bears in Sweden. Results The following variables were not affected by host characteristics: red blood cell, white blood cell, monocyte and platelet count, alanine transaminase, amylase, bilirubin, free fatty acids, glucose, calcium, chloride, potassium, and cortisol. Age differences were seen for the majority of the haematological variables, whereas sex influenced only mean corpuscular haemoglobin concentration, aspartate aminotransferase, lipase, lactate dehydrogenase, β-globulin, bile acids, triglycerides and sodium. Conclusions The biochemical and haematological reference intervals provided and the differences due to host factors age and gender can be useful for evaluation of health status in free-ranging European brown bears. PMID:25139149
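The nonparametric route to such reference intervals takes the central 95% of values from healthy individuals, i.e. the 2.5th and 97.5th percentiles. A minimal sketch, using simulated analyte values rather than the bears' measurements:

```python
import random

def reference_interval(values, low_pct=2.5, high_pct=97.5):
    """Nonparametric central 95% reference interval from healthy-subject values."""
    s = sorted(values)

    def pct(p):
        # percentile by linear interpolation between order statistics
        k = (len(s) - 1) * p / 100
        f = int(k)
        c = min(f + 1, len(s) - 1)
        return s[f] + (s[c] - s[f]) * (k - f)

    return pct(low_pct), pct(high_pct)

random.seed(0)
# 88 hypothetical haemoglobin values (g/L), standing in for the bear cohort
haemoglobin = [random.gauss(140, 10) for _ in range(88)]
lo, hi = reference_interval(haemoglobin)
print(round(lo, 1), round(hi, 1))
```

In practice, sex- and age-stratified intervals of the kind the study reports are computed by applying the same calculation within each subgroup.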
Engineering Student Self-Assessment through Confidence-Based Scoring
ERIC Educational Resources Information Center
Yuen-Reed, Gigi; Reed, Kyle B.
2015-01-01
A vital aspect of an answer is the confidence that goes along with it. Misstating the level of confidence one has in the answer can have devastating outcomes. However, confidence assessment is rarely emphasized during typical engineering education. The confidence-based scoring method described in this study encourages students to both think about…
Nouretdinov, Ilia; Costafreda, Sergi G; Gammerman, Alexander; Chervonenkis, Alexey; Vovk, Vladimir; Vapnik, Vladimir; Fu, Cynthia H Y
2011-05-15
There is rapidly accumulating evidence that the application of machine learning classification to neuroimaging measurements may be valuable for the development of diagnostic and prognostic prediction tools in psychiatry. However, current methods do not produce a measure of the reliability of the predictions. Knowing the risk of the error associated with a given prediction is essential for the development of neuroimaging-based clinical tools. We propose a general probabilistic classification method to produce measures of confidence for magnetic resonance imaging (MRI) data. We describe the application of transductive conformal predictor (TCP) to MRI images. TCP generates the most likely prediction and a valid measure of confidence, as well as the set of all possible predictions for a given confidence level. We present the theoretical motivation for TCP, and we have applied TCP to structural and functional MRI data in patients and healthy controls to investigate diagnostic and prognostic prediction in depression. We verify that TCP predictions are as accurate as those obtained with more standard machine learning methods, such as support vector machine, while providing the additional benefit of a valid measure of confidence for each prediction. Copyright © 2010 Elsevier Inc. All rights reserved.
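The mechanics of a conformal predictor can be sketched compactly: for each candidate label, the new point's nonconformity score is compared with the calibration scores of that class to obtain a p-value, and the prediction set keeps every label whose p-value exceeds the significance level alpha. The 1-D toy data and the distance-to-class-mean nonconformity measure below are illustrative assumptions, not scores derived from MRI classifiers.

```python
import statistics

def conformal_predict(train, x_new, labels, alpha=0.3):
    """Conformal prediction sketch: keep every label whose conformal
    p-value (vs. that label's calibration scores) exceeds alpha."""
    kept = set()
    for lab in labels:
        pts = [x for x, y in train if y == lab]
        mu = statistics.mean(pts)
        scores = [abs(x - mu) for x in pts]   # nonconformity: distance to class mean
        s_new = abs(x_new - mu)
        # p-value: fraction of calibration scores at least as nonconforming
        p = (sum(1 for s in scores if s >= s_new) + 1) / (len(scores) + 1)
        if p > alpha:
            kept.add(lab)
    return kept

train = [(1.0, 'A'), (1.2, 'A'), (0.9, 'A'),
         (3.0, 'B'), (3.2, 'B'), (2.9, 'B')]
print(conformal_predict(train, 1.1, {'A', 'B'}))  # → {'A'}
```

A singleton prediction set corresponds to a confident prediction at the chosen level; an ambiguous point returns several labels, which is the valid-confidence behavior the abstract describes.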
ERIC Educational Resources Information Center
Goedeme, Tim
2013-01-01
If estimates are based on samples, they should be accompanied by appropriate standard errors and confidence intervals. This is true for scientific research in general, and is even more important if estimates are used to inform and evaluate policy measures such as those aimed at attaining the Europe 2020 poverty reduction target. In this article I…
Abou El Hassan, Mohamed; Stoianov, Alexandra; Araújo, Petra A T; Sadeghieh, Tara; Chan, Man Khun; Chen, Yunqi; Randell, Edward; Nieuwesteeg, Michelle; Adeli, Khosrow
2015-11-01
The CALIPER program has established a comprehensive database of pediatric reference intervals using largely the Abbott ARCHITECT biochemical assays. To expand clinical application of CALIPER reference standards, the present study is aimed at transferring CALIPER reference intervals from the Abbott ARCHITECT to Beckman Coulter AU assays. Transference of CALIPER reference intervals was performed based on the CLSI guidelines C28-A3 and EP9-A2. The new reference intervals were directly verified using up to 100 reference samples from the healthy CALIPER cohort. We found a strong correlation between Abbott ARCHITECT and Beckman Coulter AU biochemical assays, allowing the transference of the vast majority (94%; 30 out of 32 assays) of CALIPER reference intervals previously established using Abbott assays. Transferred reference intervals were, in general, similar to previously published CALIPER reference intervals, with some exceptions. Most of the transferred reference intervals were sex-specific and were verified using healthy reference samples from the CALIPER biobank based on CLSI criteria. It is important to note that the comparisons performed between the Abbott and Beckman Coulter assays make no assumptions as to assay accuracy or which system is more correct/accurate. The majority of CALIPER reference intervals were transferrable to Beckman Coulter AU assays, allowing the establishment of a new database of pediatric reference intervals. This further expands the utility of the CALIPER database to clinical laboratories using the AU assays; however, each laboratory should validate these intervals for their analytical platform and local population as recommended by the CLSI. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Yu, Chanki; Lee, Sang Wook
2016-05-20
We present a reliable and accurate global optimization framework for estimating parameters of isotropic analytical bidirectional reflectance distribution function (BRDF) models. This approach is based on a branch and bound strategy with linear programming and interval analysis. Conventional local optimization is often very inefficient for BRDF estimation since its fitting quality is highly dependent on initial guesses due to the nonlinearity of analytical BRDF models. The algorithm presented in this paper employs L1-norm error minimization to estimate BRDF parameters in a globally optimal way and interval arithmetic to derive our feasibility problem and lower bounding function. Our method is developed for the Cook-Torrance model but with several normal distribution functions such as the Beckmann, Berry, and GGX functions. Experiments have been carried out to validate the presented method using 100 isotropic materials from the MERL BRDF database, and our experimental results demonstrate that the L1-norm minimization provides a more accurate and reliable solution than the L2-norm minimization.
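The paper's global branch-and-bound machinery aside, the motivation for the L1 norm can be seen with a much simpler sketch: fitting a slope by brute-force search under both norms when one observation is corrupted. The model y ≈ a·x and the grid search are illustrative stand-ins, not the BRDF fitting procedure.

```python
def fit_slope(xs, ys, norm):
    """Fit y ≈ a*x by brute-force search over candidate slopes,
    minimizing either the L1 or the L2 residual norm."""
    best_a, best_err = None, float('inf')
    for i in range(3001):
        a = i / 1000            # candidate slopes 0.000 .. 3.000
        if norm == 1:
            err = sum(abs(y - a * x) for x, y in zip(xs, ys))
        else:
            err = sum((y - a * x) ** 2 for x, y in zip(xs, ys))
        if err < best_err:
            best_a, best_err = a, err
    return best_a

xs = [1, 2, 3, 4, 5]
ys = [1.0, 2.0, 3.0, 4.0, 20.0]   # last observation is an outlier
print(fit_slope(xs, ys, 1))  # → 1.0, the L1 fit resists the outlier
print(fit_slope(xs, ys, 2))  # the L2 fit is dragged toward the outlier (≈ 2.36)
```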
Worse than enemies. The CEO's destructive confidant.
Sulkowicz, Kerry J
2004-02-01
The CEO is often the most isolated and protected employee in the organization. Few leaders, even veteran CEOs, can do the job without talking to someone about their experiences, which is why most develop a close relationship with a trusted colleague, a confidant to whom they can tell their thoughts and fears. In his work with leaders, the author has found that many CEO-confidant relationships function very well. The confidants keep their leaders' best interests at heart. They derive their gratification vicariously, through the help they provide rather than through any personal gain, and they are usually quite aware that a person in their position can potentially abuse access to the CEO's innermost secrets. Unfortunately, almost as many confidants will end up hurting, undermining, or otherwise exploiting CEOs when the executives are at their most vulnerable. These confidants rarely make the headlines, but behind the scenes they do enormous damage to the CEO and to the organization as a whole. What's more, the leader is often the last one to know when or how the confidant relationship became toxic. The author has identified three types of destructive confidants. The reflector mirrors the CEO, constantly reassuring him that he is the "fairest CEO of them all." The insulator buffers the CEO from the organization, preventing critical information from getting in or out. And the usurper cunningly ingratiates himself with the CEO in a desperate bid for power. This article explores how the CEO-confidant relationship plays out with each type of adviser and suggests ways CEOs can avoid these destructive relationships.
Alves, Gelio; Yu, Yi-Kuo
2016-09-01
There is a growing trend for biomedical researchers to extract evidence and draw conclusions from mass spectrometry based proteomics experiments, the cornerstone of which is peptide identification. Inaccurate assignments of peptide identification confidence thus may have far-reaching and adverse consequences. Although some peptide identification methods report accurate statistics, they have been limited to certain types of scoring function. The extreme value statistics based method, while more general in the scoring functions it allows, demands accurate parameter estimates and requires, at least in its original design, excessive computational resources. Improving the parameter estimate accuracy and reducing the computational cost for this method has two advantages: it provides another feasible route to accurate significance assessment, and it could provide reliable statistics for scoring functions yet to be developed. We have formulated and implemented an efficient algorithm for calculating the extreme value statistics for peptide identification applicable to various scoring functions, bypassing the need for searching large random databases. The source code, implemented in C++ on a Linux system, is available for download at ftp://ftp.ncbi.nlm.nih.gov/pub/qmbp/qmbp_ms/RAId/RAId_Linux_64Bit yyu@ncbi.nlm.nih.gov Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the US.
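Extreme value statistics for best-match scores are commonly modeled with a Gumbel distribution; a minimal sketch fits its parameters by the method of moments to simulated decoy-search scores and converts a new score into a significance estimate. The synthetic score model below is an assumption, and the paper's algorithm estimates the parameters differently (without searching random databases).

```python
import math, random

EULER = 0.57721566  # Euler-Mascheroni constant

def gumbel_fit(scores):
    """Method-of-moments fit of a Gumbel distribution to best-match scores."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / n
    beta = math.sqrt(6 * var) / math.pi
    mu = mean - EULER * beta
    return mu, beta

def gumbel_pvalue(score, mu, beta):
    """P(best random match >= score) under the fitted Gumbel model."""
    return 1 - math.exp(-math.exp(-(score - mu) / beta))

random.seed(42)
# Hypothetical best scores from 500 searches of small random (decoy) databases
null_scores = [max(random.gauss(10, 2) for _ in range(50)) for _ in range(500)]
mu, beta = gumbel_fit(null_scores)
print(gumbel_pvalue(20.0, mu, beta))  # a high-scoring match gets a small p-value
```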
How to Fire a President: Voting "No Confidence" with Confidence
ERIC Educational Resources Information Center
Schmidt, Peter
2009-01-01
College faculties often use votes of "no confidence" to try to push out the leaders of their institutions. Many do so, however, without giving much thought to what such a vote actually means, whether they are using it appropriately, or how it will affect their campus--and their own future. Mae Kuykendall, a professor of law at Michigan State…
NASA Astrophysics Data System (ADS)
Matsakis, Nicholas D.; Gross, Thomas R.
Intervals are a new, higher-level primitive for parallel programming with which programmers directly construct the program schedule. Programs using intervals can be statically analyzed to ensure that they do not deadlock or contain data races. In this paper, we demonstrate the flexibility of intervals by showing how to use them to emulate common parallel control-flow constructs like barriers and signals, as well as higher-level patterns such as bounded-buffer producer-consumer. We have implemented intervals as a publicly available library for Java and Scala.
Confidence as Bayesian Probability: From Neural Origins to Behavior.
Meyniel, Florent; Sigman, Mariano; Mainen, Zachary F
2015-10-07
Research on confidence spreads across several sub-fields of psychology and neuroscience. Here, we explore how a definition of confidence as Bayesian probability can unify these viewpoints. This computational view entails that there are distinct forms in which confidence is represented and used in the brain, including distributional confidence, pertaining to neural representations of probability distributions, and summary confidence, pertaining to scalar summaries of those distributions. Summary confidence is, normatively, derived or "read out" from distributional confidence. Neural implementations of readout will trade off optimality versus flexibility of routing across brain systems, allowing confidence to serve diverse cognitive functions. Copyright © 2015 Elsevier Inc. All rights reserved.
Reiter, Paul L.; Magnus, Brooke E.; McRee, Annie-Laurie; Dempsey, Amanda F.; Brewer, Noel T.
2015-01-01
Objective To support efforts to address vaccine hesitancy, we sought to validate a brief measure of vaccination confidence using a large, nationally representative sample of parents. Methods We analyzed weighted data from 9,018 parents who completed the 2010 National Immunization Survey-Teen, an annual, population-based telephone survey. Parents reported on the immunization history of a 13- to 17-year-old child in their households for vaccines including tetanus, diphtheria, and acellular pertussis (Tdap), meningococcal, and human papillomavirus (HPV) vaccines. For each vaccine, separate logistic regression models assessed associations between parents’ mean scores on the 8-item Vaccination Confidence Scale and vaccine refusal, vaccine delay, and vaccination status. We repeated analyses for the scale’s 4-item short form. Results One quarter of parents (24%) reported refusal of any vaccine, with refusal of specific vaccines ranging from 21% for HPV to 2% for Tdap. Using the full 8-item scale, vaccination confidence was negatively associated with measures of vaccine refusal and positively associated with measures of vaccination status. For example, refusal of any vaccine was more common among parents whose scale scores were medium (odds ratio [OR] = 2.08, 95% confidence interval [CI], 1.75–2.47) or low (OR = 4.61, 95% CI, 3.51–6.05) versus high. For the 4-item short form, scores were also consistently associated with vaccine refusal and vaccination status. Vaccination confidence was inconsistently associated with vaccine delay. Conclusions The Vaccination Confidence Scale shows promise as a tool for identifying parents at risk for refusing adolescent vaccines. The scale’s short form appears to offer comparable performance. PMID:26300368
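Odds ratios with Wald 95% confidence intervals of the kind reported here come straight from a 2×2 table; the counts below are hypothetical, not the survey's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a, b = refusers / non-refusers among low-confidence parents;
    c, d = refusers / non-refusers among high-confidence parents."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts (not the NIS-Teen data)
print(odds_ratio_ci(40, 60, 15, 100))  # OR ≈ 4.44, 95% CI ≈ (2.26, 8.72)
```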
Wu, Hongjun; Wang, Bingjian; Zhu, Xinpu; Chu, Guang; Zhang, Zhi
2016-08-01
The widely used oscillometric automated blood pressure (BP) monitor has been continually questioned on its accuracy. A novel BP kit named Accutension, which adopts the Korotkoff auscultation method, was therefore devised. Accutension works with a miniature microphone, a pressure sensor, and a smartphone. The BP values are automatically displayed on the smartphone screen through the installed App. Data recorded in the phone can be played back and reconfirmed after measurement. They can also be uploaded and saved to the iCloud. The accuracy and consistency of this novel electronic auscultatory sphygmomanometer were preliminarily verified here. Thirty-two subjects were included and 82 qualified readings were obtained. The mean differences ± SD for systolic and diastolic BP readings between Accutension and the mercury sphygmomanometer were 0.87 ± 2.86 and -0.94 ± 2.93 mm Hg. Agreements between Accutension and the mercury sphygmomanometer were highly significant for systolic (ICC = 0.993, 95% confidence interval (CI): 0.989-0.995) and diastolic (ICC = 0.987, 95% CI: 0.979-0.991) readings. In conclusion, Accutension worked accurately based on our pilot study data. The difference was acceptable. ICC and Bland-Altman plot charts showed good agreement with manual measurements. Systolic readings of Accutension were slightly higher than those of manual measurement, while diastolic readings were slightly lower. One possible reason is that Accutension captures the first and last Korotkoff sounds more sensitively than the human ear during manual measurement and avoids missed sounds, so it may be more accurate than the traditional mercury sphygmomanometer. By documenting and analyzing trends in BP values, Accutension helps with the management of hypertension and thereby contributes to mobile health services.
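The agreement statistics quoted (mean difference ± SD and Bland-Altman limits) reduce to a short computation on paired readings; the eight pairs below are hypothetical, not the study's 82 readings.

```python
import statistics

def limits_of_agreement(device, reference):
    """Bland-Altman analysis: bias (mean difference) and 95% limits of
    agreement between two measurement methods."""
    diffs = [d - r for d, r in zip(device, reference)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired systolic readings (mm Hg)
device =    [121, 118, 135, 142, 128, 119, 131, 140]
reference = [120, 119, 133, 141, 129, 117, 130, 138]
bias, lo, hi = limits_of_agreement(device, reference)
print(round(bias, 2), round(lo, 2), round(hi, 2))  # → 0.88 -1.57 3.32
```

A small bias with narrow limits of agreement is what supports the study's conclusion that the differences between devices are acceptable.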
Confidence in outcome estimates from systematic reviews used in informed consent.
Fritz, Robert; Bauer, Janet G; Spackman, Sue S; Bains, Amanjyot K; Jetton-Rangel, Jeanette
2016-12-01
Evidence-based dentistry now guides informed consent, in which clinicians are obliged to provide patients with the most current, best evidence, or best estimates of outcomes, of regimens, therapies, treatments, procedures, materials, and equipment or devices when developing personal oral health care treatment plans. Yet clinicians require that the estimates provided by systematic reviews be verified for validity and reliability, and contextualized as to performance competency, so that clinicians may have confidence in explaining outcomes to patients in clinical practice. The purpose of this paper was to describe types of informed estimates from which clinicians may have confidence in their capacity to assist patients in competent decision-making, one of the most important concepts of informed consent. Using systematic review methodology, researchers provide clinicians with valid best estimates of outcomes regarding a subject of interest from best evidence. Best evidence is verified through critical appraisals using acceptable sampling methodology, either by scoring instruments (Timmer analysis) or by checklist (GRADE), a Cochrane Collaboration standard that allows transparency in open reviews. These valid best estimates are then tested for reliability using large databases. Finally, valid and reliable best estimates are assessed for meaning using quantification of margins and uncertainties. Through manufacturer and researcher specifications, quantification of margins and uncertainties develops a performance competency continuum by which valid, reliable best estimates may be contextualized for their performance competency: at a lowest margin performance competency (structural failure), high margin performance competency (estimated true value of success), or clinically determined critical values (clinical failure). Informed consent may be achieved when clinicians are confident of their ability to provide useful and accurate best estimates of outcomes regarding
NASA Astrophysics Data System (ADS)
Glazner, Allen F.; Sadler, Peter M.
2016-12-01
The duration of a geologic interval, such as the time over which a given volume of magma accumulated to form a pluton, or the lifespan of a large igneous province, is commonly determined from a relatively small number of geochronologic determinations (e.g., 4-10) within that interval. Such sample sets can underestimate the true length of the interval by a significant amount. For example, the average interval determined from a sample of size n = 5, drawn from a uniform random distribution, will underestimate the true interval by 50%. Even for n = 10, the average sample only captures ˜80% of the interval. If the underlying distribution is known then a correction factor can be determined from theory or Monte Carlo analysis; for a uniform random distribution, this factor is
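Under the simplest model, the n dated samples fall uniformly at random within the true interval, and the expected fraction of the interval spanned by their range is (n-1)/(n+1); a quick Monte Carlo check reproduces the roughly 80% figure quoted for n = 10. The uniform model is the assumption stated in the text.

```python
import random

def mean_captured_fraction(n, trials=20000, seed=0):
    """Monte Carlo estimate of the expected fraction of a unit interval
    spanned by the range of n uniform random dates; theory: (n-1)/(n+1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        dates = [rng.random() for _ in range(n)]
        total += max(dates) - min(dates)
    return total / trials

for n in (5, 10):
    # n, simulated fraction, theoretical (n-1)/(n+1)
    print(n, round(mean_captured_fraction(n), 3), round((n - 1) / (n + 1), 3))
```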
Pelletier, Eric; Daigle, Jean-Marc; Defay, Fannie; Major, Diane; Guertin, Marie-Hélène; Brisson, Jacques
2016-11-01
After imaging assessment of an abnormal screening mammogram, a follow-up examination 6 months later is recommended to some women. Our aim was to identify which characteristics of lesions, women, and physicians are associated with such a short-interval follow-up recommendation in the Quebec Breast Cancer Screening Program. Between 1998 and 2008, 1,839,396 screening mammograms were performed and a total of 114,781 abnormal screens were assessed by imaging only. Multivariate analysis was done with multilevel Poisson regression models with robust variance and generalized linear mixed models. A short-interval follow-up was recommended in 26.7% of assessments with imaging only, representing 2.3% of all screens. Case-mix adjusted proportion of short-interval follow-up recommendations varied substantially across physicians (range: 4%-64%). Radiologists with high recall rates (≥15%) had a high proportion of short-interval follow-up recommendations (risk ratio: 1.82; 95% confidence interval: 1.35-2.45) compared to radiologists with low recall rates (<5%). The adjusted proportion of short-interval follow-up was high (22.8%) even when a previous mammogram was usually available. Short-interval follow-up recommendation at assessment is frequent in this Canadian screening program, even when a previous mammogram is available. Characteristics related to radiologists appear to be key determinants of short-interval follow-up recommendation, rather than characteristics of lesions or patient mix. Given that it can cause anxiety to women and adds pressure on the health system, it appears important to record and report short-interval follow-up and to identify ways to reduce its frequency. Short-interval follow-up recommendations should be considered when assessing the burden of mammography screening. Copyright © 2016 Canadian Association of Radiologists. Published by Elsevier Inc. All rights reserved.
Modified Dempster-Shafer approach using an expected utility interval decision rule
NASA Astrophysics Data System (ADS)
Cheaito, Ali; Lecours, Michael; Bosse, Eloi
1999-03-01
The combination operation of the conventional Dempster-Shafer algorithm has a tendency to increase exponentially the number of propositions involved in bodies of evidence by creating new ones. The aim of this paper is to explore a 'modified Dempster-Shafer' approach of fusing identity declarations emanating from different sources, which include a number of radars, IFF, and ESM systems, in order to limit the explosion of the number of propositions. We use a non-ad hoc decision rule based on the expected utility interval to select the most probable object in a comprehensive Platform Data Base containing all the possible identity values that a potential target may take. We study the effect of the redistribution of the confidence levels of the eliminated propositions which otherwise overload the real-time data fusion system; these eliminated confidence levels can in particular be assigned to ignorance, or uniformly added to the remaining propositions and to ignorance. A scenario has been selected to demonstrate the performance of our modified Dempster-Shafer method of evidential reasoning.
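Dempster's rule of combination and the belief-plausibility interval that underlies an expected-utility-interval decision rule can be sketched in a few lines; the two-hypothesis frame and mass values below are hypothetical, far smaller than a real platform database.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass functions keyed by frozensets."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # mass assigned to the empty set
    k = 1.0 - conflict                       # normalization constant
    return {s: v / k for s, v in combined.items()}

def belief_plausibility(m, hypothesis):
    """Support interval [Bel, Pl] for a hypothesis, the ingredient of
    expected-utility-interval decision rules."""
    bel = sum(v for s, v in m.items() if s <= hypothesis)
    pl = sum(v for s, v in m.items() if s & hypothesis)
    return bel, pl

F, H = frozenset({'friend'}), frozenset({'hostile'})
FH = F | H                                    # the full frame (ignorance)
radar = {F: 0.6, FH: 0.4}                     # radar evidence for 'friend'
esm = {H: 0.3, FH: 0.7}                       # ESM evidence for 'hostile'
m = dempster_combine(radar, esm)
print(belief_plausibility(m, F))              # Bel(F) ≈ 0.51, Pl(F) ≈ 0.85
```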
Parameter identification for structural dynamics based on interval analysis algorithm
NASA Astrophysics Data System (ADS)
Yang, Chen; Lu, Zixing; Yang, Zhenyu; Liang, Ke
2018-04-01
A parameter identification method using an interval analysis algorithm for structural dynamics is presented in this paper. The proposed uncertain identification method is investigated using the central difference method and an ARMA system. With the help of the fixed memory least square method and the matrix inverse lemma, a set-membership identification technology is applied to obtain the best estimation of the identified parameters in a tight and accurate region. To overcome the lack of sufficient statistical descriptions of the uncertain parameters, this paper treats uncertainties as non-probabilistic intervals. As long as the bounds of the uncertainties are known, this algorithm can obtain not only the center estimations of the parameters, but also the bounds of the errors. To improve the efficiency of the proposed method, a time-saving algorithm is presented using a recursive formula. Finally, to verify the accuracy of the proposed method, two numerical examples are presented and evaluated by three identification criteria.
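The non-probabilistic treatment propagates guaranteed bounds instead of distributions; a minimal interval-arithmetic sketch (a hypothetical mass-stiffness example, not the paper's set-membership algorithm) shows how bounded measurements yield both a center estimate and error bounds.

```python
class Interval:
    """Minimal interval arithmetic: track guaranteed bounds, not statistics."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __mul__(self, other):
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    @property
    def center(self):
        return (self.lo + self.hi) / 2

    def __repr__(self):
        return f"[{self.lo:.3f}, {self.hi:.3f}]"

# Natural frequency and mass known only to within bounds (hypothetical values)
omega = Interval(9.9, 10.1)   # rad/s
mass = Interval(1.95, 2.05)   # kg
k = mass * omega * omega      # stiffness k = m*omega^2 with guaranteed bounds
print(k, round(k.center, 1))
```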
An absolute interval scale of order for point patterns
Protonotarios, Emmanouil D.; Baum, Buzz; Johnston, Alan; Hunter, Ginger L.; Griffin, Lewis D.
2014-01-01
Human observers readily make judgements about the degree of order in planar arrangements of points (point patterns). Here, based on pairwise ranking of 20 point patterns by degree of order, we have been able to show that judgements of order are highly consistent across individuals and the dimension of order has an interval scale structure spanning roughly 10 just-noticeable differences (jnd) between disorder and order. We describe a geometric algorithm that estimates order to an accuracy of half a jnd by quantifying the variability of the size and shape of spaces between points. The algorithm is 70% more accurate than the best available measures. By anchoring the output of the algorithm so that Poisson point processes score on average 0, perfect lattices score 10 and unit steps correspond closely to jnds, we construct an absolute interval scale of order. We demonstrate its utility in biology by using this scale to quantify order during the development of the pattern of bristles on the dorsal thorax of the fruit fly. PMID:25079866
Communication confidence in persons with aphasia.
Babbitt, Edna M; Cherney, Leora R
2010-01-01
Communication confidence is a construct that has not been explored in the aphasia literature. Recently, national and international organizations have endorsed broader assessment methods that address quality of life and include participation, activity, and impairment domains as well as psychosocial areas. Individuals with aphasia encounter difficulties in all these areas on a daily basis in living with a communication disorder. Improvements are often reflected in narratives that are not typically included in standard assessments. This article illustrates how a new instrument measuring communication confidence might fit into a broad assessment framework and discusses the interaction of communication confidence, autonomy, and self-determination for individuals living with aphasia.
Campbell, J P; Gratton, M C; Salomone, J A; Lindholm, D J; Watson, W A
1994-01-01
In some emergency medical services (EMS) system designs, response time intervals are mandated with monetary penalties for noncompliance. These times are set with the goal of providing rapid, definitive patient care. The time interval of vehicle at scene-to-patient access (VSPA) has been measured, but its effect on response time interval compliance has not been determined. To determine the effect of the VSPA interval on the mandated code 1 (< 9 min) and code 2 (< 13 min) response time interval compliance in an urban, public-utility model system. A prospective, observational study used independent third-party riders to collect the VSPA interval for emergency life-threatening (code 1) and emergency nonlife-threatening (code 2) calls. The VSPA interval was added to the 9-1-1 call-to-dispatch and vehicle dispatch-to-scene intervals to determine the total time interval from call received until paramedic access to the patient (9-1-1 call-to-patient access). Compliance with the mandated response time intervals was determined using the traditional time intervals (9-1-1 call-to-scene) plus the VSPA time intervals (9-1-1 call-to-patient access). Chi-square was used to determine statistical significance. Of the 216 observed calls, 198 were matched to the traditional time intervals. Sixty-three were code 1, and 135 were code 2. Of the code 1 calls, 90.5% were compliant using 9-1-1 call-to-scene intervals dropping to 63.5% using 9-1-1 call-to-patient access intervals (p < 0.0005). Of the code 2 calls, 94.1% were compliant using 9-1-1 call-to-scene intervals. Compliance decreased to 83.7% using 9-1-1 call-to-patient access intervals (p = 0.012). The addition of the VSPA interval to the traditional time intervals impacts system response time compliance. Using 9-1-1 call-to-scene compliance as a basis for measuring system performance underestimates the time for the delivery of definitive care. This must be considered when response time interval compliances are defined.
Corrected Confidence Bands for Functional Data Using Principal Components
Goldsmith, J.; Greven, S.; Crainiceanu, C.
2014-01-01
Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. PMID:23003003
Ward, Darren F.; Anderson, Dean P.; Barron, Mandy C.
2016-01-01
Effective detection plays an important role in the surveillance and management of invasive species. Invasive ants are very difficult to eradicate and are prone to imperfect detection because of their small size and cryptic nature. Here we demonstrate the use of spatially explicit surveillance models to estimate the probability that Argentine ants (Linepithema humile) have been eradicated from an offshore island site, given their absence across four surveys and three surveillance methods, conducted since ant control was applied. The probability of eradication increased sharply as each survey was conducted. Using all surveys and surveillance methods combined, the overall median probability of eradication of Argentine ants was 0.96. There was a high level of confidence in this result, with a high Credible Interval Value of 0.87. Our results demonstrate the value of spatially explicit surveillance models for the likelihood of eradication of Argentine ants. We argue that such models are vital to give confidence in eradication programs, especially from highly valued conservation areas such as offshore islands. PMID:27721491
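The way the probability of eradication "increased sharply as each survey was conducted" follows the usual Bayesian absence-of-evidence logic. The scalar sketch below uses a hypothetical prior (0.5) and per-survey detection probability (0.6), not values from the study, and ignores the spatially explicit structure of the actual model.

```python
# Toy Bayesian update: P(eradicated) after successive negative surveys.
# Prior and detection probability are hypothetical, not from the paper.

def update_eradication(prior, detection_prob):
    """Posterior P(eradicated) after one survey that found no ants."""
    p_absent = prior                                  # ants gone: survey surely negative
    p_missed = (1 - prior) * (1 - detection_prob)     # ants present but missed
    return p_absent / (p_absent + p_missed)

p = 0.5
for survey in range(4):          # four negative surveys, as in the study design
    p = update_eradication(p, 0.6)
print(round(p, 3))               # -> 0.975
```

Each negative survey shrinks the "present but missed" branch, so the posterior climbs toward 1.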
Confidence and Competence with Mathematical Procedures
ERIC Educational Resources Information Center
Foster, Colin
2016-01-01
Confidence assessment (CA), in which students state alongside each of their answers a confidence level expressing how certain they are, has been employed successfully within higher education. However, it has not been widely explored with school pupils. This study examined how school mathematics pupils (N = 345) in five different secondary schools…
"Yes, we can!" review on team confidence in sports.
Fransen, Katrien; Mertens, Niels; Feltz, Deborah; Boen, Filip
2017-08-01
During the last decade, team confidence has received more and more attention in the sport psychology literature. Research has demonstrated that athletes who are more confident in their team's abilities exert more effort, set more challenging goals, are more resilient when facing adversities, and ultimately perform better. This article reviews the existing literature in order to provide more clarity in terms of the conceptualization and the operationalization of team confidence. We thereby distinguish between collective efficacy (i.e., process-oriented team confidence) and team outcome confidence (i.e., outcome-oriented team confidence). In addition, both the sources as well as the outcomes of team confidence will be discussed. Furthermore, we will go deeper into the dispersion of team confidence and we will evaluate the current guidelines on how to measure both types of team confidence. Building upon this base, the article then highlights interesting avenues for future research in order to further improve both our theoretical knowledge on team confidence and its application to the field. Copyright © 2017 Elsevier Ltd. All rights reserved.
Daerga, Laila; Sjölander, Per; Jacobsson, Lars; Edin-Liljegren, Anette
2012-08-01
To investigate the confidence in primary health care, psychiatry and social services among the reindeer-herding Sami and the non-Sami population of northern Sweden. A semi-randomized, cross-sectional study design comprising 325 reindeer-herding Sami (171 men, 154 women) and a control population of 1,437 non-Sami (684 men, 753 women). A questionnaire on the confidence in primary health care, psychiatry, social services, and work colleagues was distributed to members of reindeer-herding families through the Sami communities and to the control population through the post. The relative risk for poor confidence was analyzed by calculating odds ratios with 95% confidence intervals adjusted for age and level of education. The confidence in primary health care and psychiatry was significantly lower among the reindeer-herding Sami compared with the control group. No differences were found between men and women in the reindeer-herding Sami population. In both the reindeer-herding Sami and the control population, younger people (≤ 48 years) reported significantly lower confidence in primary health care than older individuals (> 48 years). A conceivable reason for the poor confidence in health care organizations reported by the reindeer-herding Sami is that they experience health care staff as poorly informed about reindeer husbandry and Sami culture, resulting in unsuitable or unrealistic treatment suggestions. The findings suggest that this poor confidence constitutes a significant obstacle to the reindeer-herding Sami fully benefiting from public health care services.
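The odds-ratio-with-95%-CI analysis described above can be sketched with a standard Wald interval on the log odds ratio. The 2x2 counts below are invented for illustration, and the abstract's adjustment for age and education is omitted.

```python
# Sketch: odds ratio with a Wald 95% CI for a 2x2 exposure/outcome table.
# Counts are hypothetical, not the study's data.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for the table [[a, b], [c, d]]."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical: 120/325 Sami vs. 300/1437 controls report poor confidence.
or_, lo, hi = odds_ratio_ci(120, 325 - 120, 300, 1437 - 300)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

An interval excluding 1 corresponds to the "significantly lower confidence" finding.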
González-García, Nadia; González, Martha A; Rendón, Pablo L
2016-07-15
Relationships between musical pitches are described as either consonant, when associated with a pleasant and harmonious sensation, or dissonant, when associated with an inharmonious feeling. The accurate singing of musical intervals requires communication between auditory feedback processing and vocal motor control (i.e. audio-vocal integration) to ensure that each note is produced correctly. The objective of this study is to investigate the neural mechanisms through which trained musicians produce consonant and dissonant intervals. We utilized 4 musical intervals (specifically, an octave, a major seventh, a fifth, and a tritone) as the main stimuli for auditory discrimination testing, and we used the same interval tasks to assess vocal accuracy in a group of musicians (11 subjects, all female vocal students at conservatory level). The intervals were chosen so as to test for differences in recognition and production of consonant and dissonant intervals, as well as narrow and wide intervals. The subjects were studied using fMRI during performance of the interval tasks; the control condition consisted of passive listening. Singing dissonant intervals as opposed to singing consonant intervals led to an increase in activation in several regions, most notably the primary auditory cortex, the primary somatosensory cortex, the amygdala, the left putamen, and the right insula. Singing wide intervals as opposed to singing narrow intervals resulted in the activation of the right anterior insula. Moreover, we also observed a correlation between singing in tune and brain activity in the premotor cortex, and a positive correlation between training and activation of primary somatosensory cortex, primary motor cortex, and premotor cortex during singing. When singing dissonant intervals, a higher degree of training correlated with the right thalamus and the left putamen. Our results indicate that singing dissonant intervals requires greater involvement of neural mechanisms
ERIC Educational Resources Information Center
President's Council on Physical Fitness and Sports, Washington, DC.
Regardless of the type of physical activity used, interval training is simply repeated periods of physical stress interspersed with recovery periods during which activity of a reduced intensity is performed. During the recovery periods, the individual usually keeps moving and does not completely recover before the next exercise interval (e.g.,…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zapata, Francisco; Kreinovich, Vladik; Joslyn, Cliff A.
2013-08-01
To make a decision, we need to compare the values of quantities. In many practical situations, we know the values with interval uncertainty. In such situations, we need to compare intervals. Allen's algebra describes all possible relations between intervals on the real line, and ordering relations between such intervals are well studied. In this paper, we extend this description to intervals in an arbitrary partially ordered set (poset). In particular, we explicitly describe ordering relations between intervals that generalize relations between points. As auxiliary results, we provide a logical interpretation of the relation between intervals, and extend the results about interval graphs to intervals over posets.
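For reference, Allen's thirteen relations in the classical totally ordered case can be coded directly; the paper's contribution is extending these to posets, which this sketch does not attempt.

```python
# Sketch: Allen's relations between closed intervals on the real line.

def allen_relation(a, b):
    """Return Allen's relation of interval a = (a1, a2) to b = (b1, b2)."""
    a1, a2 = a
    b1, b2 = b
    if a2 < b1: return "before"
    if a2 == b1: return "meets"
    if a1 == b1 and a2 == b2: return "equals"
    if a1 == b1: return "starts" if a2 < b2 else "started-by"
    if a2 == b2: return "finishes" if a1 > b1 else "finished-by"
    if b1 < a1 and a2 < b2: return "during"
    if a1 < b1 and b2 < a2: return "contains"
    if a1 < b1 < a2 < b2: return "overlaps"
    if b1 < a1 < b2 < a2: return "overlapped-by"
    if b2 < a1: return "after"
    return "met-by"                      # remaining case: b2 == a1

print(allen_relation((1, 3), (3, 5)))    # meets
print(allen_relation((1, 4), (2, 6)))    # overlaps
print(allen_relation((2, 3), (1, 5)))    # during
```

In a poset, some pairs of endpoints are incomparable, so these thirteen cases no longer exhaust the possibilities; that is exactly the gap the paper addresses.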
Baxter, Suzanne D.; Hitchcock, David B.; Royer, Julie A.; Smith, Albert F.; Guinn, Caroline H.
2017-01-01
We examined reporting accuracy by meal component (beverage, bread, breakfast meat, combination entrée, condiment, dessert, entrée, fruit, vegetable) with validation-study data on 455 fourth-grade children (mean age = 9.92 ± 0.41 years) observed eating school meals and randomized to one of eight dietary recall conditions (two retention intervals [short, long] crossed with four prompts [forward, meal-name, open, reverse]). Accuracy category (match [observed and reported], omission [observed but unreported], intrusion [unobserved but reported]) was a polytomous nominal item response variable. We fit a multilevel cumulative logit model with item variables meal component and serving period (breakfast, lunch) and child variables retention interval, prompt and sex. Significant accuracy category predictors were meal component (p < 0.0003), retention interval (p < 0.0003), meal-component × serving-period (p < 0.0003) and meal-component × retention-interval (p = 0.001). The relationship of meal component and accuracy category was much stronger for lunch than breakfast. For lunch, beverages were matches more often, omissions much less often and intrusions more often than expected under independence; fruits and desserts were omissions more often. For the meal-component × retention-interval interaction, for the short retention interval, beverages were intrusions much more often but combination entrées and condiments were intrusions less often; for the long retention interval, beverages were matches more often and omissions less often but fruits were matches less often. Accuracy for each meal component appeared better with the short than long retention interval. For lunch and for the short retention interval, children’s reporting was most accurate for entrée and combination entrée meal components, whereas it was least accurate for vegetable and fruit meal components. Results have implications for conclusions of studies and interventions assessed with dietary recalls
Mammalian choices: combining fast-but-inaccurate and slow-but-accurate decision-making systems.
Trimmer, Pete C; Houston, Alasdair I; Marshall, James A R; Bogacz, Rafal; Paul, Elizabeth S; Mendl, Mike T; McNamara, John M
2008-10-22
Empirical findings suggest that the mammalian brain has two decision-making systems that act at different speeds. We represent the faster system using standard signal detection theory. We represent the slower (but more accurate) cortical system as the integration of sensory evidence over time until a certain level of confidence is reached. We then consider how two such systems should be combined optimally for a range of information linkage mechanisms. We conclude with some performance predictions that will hold if our representation is realistic.
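The slower, more accurate system described above (integration of sensory evidence until a confidence level is reached) is commonly modeled as a bounded random walk. The sketch below is a generic illustration with made-up drift and bound values, not the authors' model: raising the confidence bound trades speed for accuracy.

```python
# Toy bounded-evidence-accumulation model: integrate noisy evidence
# until |total| reaches a confidence bound. Parameters are illustrative.
import random

def decide(drift, bound, rng):
    """Random-walk evidence integration; returns (correct, steps)."""
    total, steps = 0.0, 0
    while abs(total) < bound:
        total += rng.gauss(drift, 1.0)   # noisy evidence; true answer is +
        steps += 1
    return total > 0, steps

def accuracy_and_speed(bound, trials=2000, seed=1):
    rng = random.Random(seed)
    results = [decide(0.2, bound, rng) for _ in range(trials)]
    acc = sum(c for c, _ in results) / trials
    mean_steps = sum(s for _, s in results) / trials
    return acc, mean_steps

fast = accuracy_and_speed(bound=1.0)     # fast but error-prone system
slow = accuracy_and_speed(bound=4.0)     # slow but accurate system
print(fast, slow)
```

The high-bound system takes many more samples per decision but is right far more often, mirroring the fast/slow division in the abstract.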
Sex differences in confidence influence patterns of conformity.
Cross, Catharine P; Brown, Gillian R; Morgan, Thomas J H; Laland, Kevin N
2017-11-01
Lack of confidence in one's own ability can increase the likelihood of relying on social information. Sex differences in confidence have been extensively investigated in cognitive tasks, but implications for conformity have not been directly tested. Here, we tested the hypothesis that, in a task that shows sex differences in confidence, an indirect effect of sex on social information use will also be evident. Participants (N = 168) were administered a mental rotation (MR) task or a letter transformation (LT) task. After providing an answer, participants reported their confidence before seeing the responses of demonstrators and being allowed to change their initial answer. In the MR, but not the LT, task, women showed lower levels of confidence than men, and confidence mediated an indirect effect of sex on the likelihood of switching answers. These results provide novel, experimental evidence that confidence is a general explanatory mechanism underpinning susceptibility to social influences. Our results have implications for the interpretation of the wider literature on sex differences in conformity. © 2016 The British Psychological Society.
Measuring the EMS patient access time interval and the impact of responding to high-rise buildings.
Morrison, Laurie J; Angelini, Mark P; Vermeulen, Marian J; Schwartz, Brian
2005-01-01
To measure the patient access time interval and characterize its contribution to the total emergency medical services (EMS) response time interval; to compare the patient access time intervals for patients located three or more floors above ground with those less than three floors above or below ground, and specifically in the apartment subgroup; and to identify barriers that significantly impede EMS access to patients in high-rise apartments. An observational study of all patients treated by an emergency medical technician-paramedic (EMT-P) crew was conducted using a trained independent observer to collect time intervals and identify potential barriers to access. Of 118 observed calls, 25 (21%) originated from patients three or more floors above ground. The overall median and 90th percentile (95% confidence interval) patient access time intervals were 1.61 (1.27, 1.91) and 3.47 (3.08, 4.05) minutes, respectively. The median interval was 2.73 (2.22, 3.03) minutes among calls from patients located three or more stories above ground compared with 1.25 (1.07, 1.55) minutes among those at lower levels. The patient access time interval represented 23.5% of the total EMS response time interval among calls originating less than three floors above or below ground and 32.2% of those located three or more stories above ground. The most frequently encountered barriers to access included security code entry requirements, lack of directional signs, and inability to fit the stretcher into the elevator. The patient access time interval is significantly long and represents a substantial component of the total EMS response time interval, especially among ambulance calls originating three or more floors above ground. A number of barriers appear to contribute to delayed paramedic access.
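The median and 90th-percentile summaries used in studies like this one can be sketched with a simple nearest-rank percentile. The sample times below are invented, and the study's exact percentile convention and confidence-interval method are not reproduced.

```python
# Sketch: nearest-rank percentile for interval summaries.
# The access times (minutes) are invented for illustration.

def percentile(data, p):
    """Nearest-rank percentile (0 < p <= 100)."""
    xs = sorted(data)
    k = max(0, int(round(p / 100 * len(xs))) - 1)
    return xs[k]

access_min = [0.9, 1.2, 1.4, 1.6, 1.7, 2.1, 2.7, 3.0, 3.4, 3.5]
print(percentile(access_min, 50), percentile(access_min, 90))  # 1.7 3.4
```

Reporting the 90th percentile alongside the median, as the study does, captures the long right tail of access times that compliance rules care about.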
Diaconis, Persi; Holmes, Susan; Janson, Svante
2015-01-01
We work out a graph limit theory for dense interval graphs. The theory developed departs from the usual description of a graph limit as a symmetric function W (x, y) on the unit square, with x and y uniform on the interval (0, 1). Instead, we fix a W and change the underlying distribution of the coordinates x and y. We find choices such that our limits are continuous. Connections to random interval graphs are given, including some examples. We also show a continuity result for the chromatic number and clique number of interval graphs. Some results on uniqueness of the limit description are given for general graph limits. PMID:26405368
Beyond hypercorrection: remembering corrective feedback for low-confidence errors.
Griffiths, Lauren; Higham, Philip A
2018-02-01
Correcting errors based on corrective feedback is essential to successful learning. Previous studies have found that corrections to high-confidence errors are better remembered than low-confidence errors (the hypercorrection effect). The aim of this study was to investigate whether corrections to low-confidence errors can also be successfully retained in some cases. Participants completed an initial multiple-choice test consisting of control, trick and easy general-knowledge questions, rated their confidence after answering each question, and then received immediate corrective feedback. After a short delay, they were given a cued-recall test consisting of the same questions. In two experiments, we found high-confidence errors to control questions were better corrected on the second test compared to low-confidence errors - the typical hypercorrection effect. However, low-confidence errors to trick questions were just as likely to be corrected as high-confidence errors. Most surprisingly, we found that memory for the feedback and original responses, not confidence or surprise, were significant predictors of error correction. We conclude that for some types of material, there is an effortful process of elaboration and problem solving prior to making low-confidence errors that facilitates memory of corrective feedback.
Pediatric reference intervals for alkaline phosphatase.
Zierk, Jakob; Arzideh, Farhad; Haeckel, Rainer; Cario, Holger; Frühwald, Michael C; Groß, Hans-Jürgen; Gscheidmeier, Thomas; Hoffmann, Reinhard; Krebs, Alexander; Lichtinghagen, Ralf; Neumann, Michael; Ruf, Hans-Georg; Steigerwald, Udo; Streichert, Thomas; Rascher, Wolfgang; Metzler, Markus; Rauh, Manfred
2017-01-01
Interpretation of alkaline phosphatase activity in children is challenging due to extensive changes with growth and puberty leading to distinct sex- and age-specific dynamics. Continuous percentile charts from birth to adulthood allow accurate consideration of these dynamics and seem reasonable for an analyte as closely linked to growth as alkaline phosphatase. However, the ethical and practical challenges unique to pediatric reference intervals have restricted the creation of such percentile charts, resulting in limitations when clinical decisions are based on alkaline phosphatase activity. We applied an indirect method to generate percentile charts for alkaline phosphatase activity using clinical laboratory data collected during the clinical care of patients. A total of 361,405 samples from 124,440 patients from six German tertiary care centers and one German laboratory service provider measured between January 2004 and June 2015 were analyzed. Measurement of alkaline phosphatase activity was performed on Roche Cobas analyzers using the IFCC's photometric method. We created percentile charts for alkaline phosphatase activity in girls and boys from birth to 18 years which can be used as reference intervals. Additionally, data tables of age- and sex-specific percentile values allow the incorporation of these results into laboratory information systems. The percentile charts provided enable the appropriate differential diagnosis of changes in alkaline phosphatase activity due to disease and changes due to physiological development. After local validation, integration of the provided percentile charts into result reporting facilitates precise assessment of alkaline phosphatase dynamics in pediatrics.
Maternal Confidence for Physiologic Childbirth: A Concept Analysis.
Neerland, Carrie E
2018-06-06
Confidence is a term often used in research literature and consumer media in relation to birth, but maternal confidence has not been clearly defined, especially as it relates to physiologic labor and birth. The aim of this concept analysis was to define maternal confidence in the context of physiologic labor and childbirth. Rodgers' evolutionary method was used to identify attributes, antecedents, and consequences of maternal confidence for physiologic birth. Databases searched included Ovid MEDLINE, CINAHL, PsycINFO, and Sociological Abstracts from the years 1995 to 2015. A total of 505 articles were retrieved, using the search terms pregnancy, obstetric care, prenatal care, and self-efficacy and the keyword confidence. Articles were identified for in-depth review and inclusion based on whether the term confidence was used or assessed in relationship to labor and/or birth. In addition, a hand search of the reference lists of the selected articles was performed. Twenty-four articles were reviewed in this concept analysis. We define maternal confidence for physiologic birth as a woman's belief that physiologic birth can be achieved, based on her view of birth as a normal process and her belief in her body's innate ability to birth, which is supported by social support, knowledge, and information founded on a trusted relationship with a maternity care provider in an environment where the woman feels safe. This concept analysis advances the concept of maternal confidence for physiologic birth and provides new insight into how women's confidence for physiologic birth might be enhanced during the prenatal period. Further investigation of confidence for physiologic birth across different cultures is needed to identify cultural differences in constructions of the concept. © 2018 by the American College of Nurse-Midwives.
Assessing Confidence in Pliocene Sea Surface Temperatures to Evaluate Predictive Models
NASA Technical Reports Server (NTRS)
Dowsett, Harry J.; Robinson, Marci M.; Haywood, Alan M.; Hill, Daniel J.; Dolan, Aisling. M.; Chan, Wing-Le; Abe-Ouchi, Ayako; Chandler, Mark A.; Rosenbloom, Nan A.; Otto-Bliesner, Bette L.;
2012-01-01
In light of mounting empirical evidence that planetary warming is well underway, the climate research community looks to palaeoclimate research for a ground-truthing measure with which to test the accuracy of future climate simulations. Model experiments that attempt to simulate climates of the past serve to identify both similarities and differences between two climate states and, when compared with simulations run by other models and with geological data, to identify model-specific biases. Uncertainties associated with both the data and the models must be considered in such an exercise. The most recent period of sustained global warmth similar to what is projected for the near future occurred about 3.3–3.0 million years ago, during the Pliocene epoch. Here, we present Pliocene sea surface temperature data, newly characterized in terms of level of confidence, along with initial experimental results from four climate models. We conclude that, in terms of sea surface temperature, models are in good agreement with estimates of Pliocene sea surface temperature in most regions except the North Atlantic. Our analysis indicates that the discrepancy between the Pliocene proxy data and model simulations in the mid-latitudes of the North Atlantic, where models underestimate warming shown by our highest-confidence data, may provide a new perspective and insight into the predictive abilities of these models in simulating a past warm interval in Earth history. This is important because the Pliocene has a number of parallels to present predictions of late twenty-first century climate.
Accurate metacognition for visual sensory memory representations.
Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F
2014-04-01
The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception.
Anomalous Evidence, Confidence Change, and Theory Change.
Hemmerich, Joshua A; Van Voorhis, Kellie; Wiley, Jennifer
2016-08-01
A novel experimental paradigm that measured theory change and confidence in participants' theories was used in three experiments to test the effects of anomalous evidence. Experiment 1 varied the amount of anomalous evidence to see if "dose size" made incremental changes in confidence toward theory change. Experiment 2 varied whether anomalous evidence was convergent (of multiple types) or replicating (similar finding repeated). Experiment 3 varied whether participants were provided with an alternative theory that explained the anomalous evidence. All experiments showed that participants' confidence changes were commensurate with the amount of anomalous evidence presented, and that larger decreases in confidence predicted theory changes. Convergent evidence and the presentation of an alternative theory led to larger confidence change. Convergent evidence also caused more theory changes. Even when people do not change theories, factors pertinent to the evidence and alternative theories decrease their confidence in their current theory and move them incrementally closer to theory change. Copyright © 2015 Cognitive Science Society, Inc.
The self-consistency model of subjective confidence.
Koriat, Asher
2012-01-01
How do people monitor the correctness of their answers? A self-consistency model is proposed for the process underlying confidence judgments and their accuracy. In answering a 2-alternative question, participants are assumed to retrieve a sample of representations of the question and base their confidence on the consistency with which the chosen answer is supported across representations. Confidence is modeled by analogy to the calculation of statistical level of confidence (SLC) in testing hypotheses about a population and represents the participant's assessment of the likelihood that a new sample will yield the same choice. Assuming that participants draw representations from a commonly shared item-specific population of representations, predictions were derived regarding the function relating confidence to inter-participant consensus and intra-participant consistency for the more preferred (majority) and the less preferred (minority) choices. The predicted pattern was confirmed for several different tasks. The confidence-accuracy relationship was shown to be a by-product of the consistency-correctness relationship: It is positive because the answers that are consistently chosen are generally correct, but negative when the wrong answers tend to be favored. The overconfidence bias stems from the reliability-validity discrepancy: Confidence monitors reliability (or self-consistency), but its accuracy is evaluated in calibration studies against correctness. Simulation and empirical results suggest that response speed is a frugal cue for self-consistency, and its validity depends on the validity of self-consistency in predicting performance. Another mnemonic cue, accessibility, which is the overall amount of information that comes to mind, makes an added, independent contribution. Self-consistency and accessibility may correspond to the 2 parameters that affect SLC: sample variance and sample size.
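The sampling story above can be illustrated with a minimal simulation (my assumptions, not Koriat's code): each answer draws a small sample of representations, the chosen answer is the majority, and confidence is the proportion of the sample agreeing with that choice. The per-item support probability (0.7) and sample size (7) are made-up parameters.

```python
# Toy self-consistency simulation: confidence = majority proportion
# within a sample of retrieved representations. Parameters are invented.
import random

def answer_item(p_correct_rep, k=7, rng=random):
    """Sample k representations; each supports the correct answer w.p. p."""
    votes = sum(rng.random() < p_correct_rep for _ in range(k))
    chose_correct = votes > k // 2       # majority choice (k odd, no ties)
    support = votes if chose_correct else k - votes
    return chose_correct, support / k    # (correct?, confidence)

rng = random.Random(0)
trials = [answer_item(0.7, rng=rng) for _ in range(5000)]
acc = sum(c for c, _ in trials) / len(trials)
mean_conf = sum(conf for _, conf in trials) / len(trials)
print(round(acc, 2), round(mean_conf, 2))
```

When representations mostly favor the correct answer, consistently chosen answers are usually right, so confidence tracks accuracy; flipping the support probability below 0.5 would reproduce the model's predicted negative confidence-accuracy relation.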
USDA-ARS?s Scientific Manuscript database
Accurate spatially distributed estimates of evapotranspiration (ET) derived from remotely sensed data are critical to a broad range of practical and operational applications. However, due to lengthy return intervals and cloud cover, data acquisition is not continuous over time. To fill the data gaps...
Chosen interval methods for solving linear interval systems with special type of matrix
NASA Astrophysics Data System (ADS)
Szyszka, Barbara
2013-10-01
The paper is devoted to chosen direct interval methods for solving linear interval systems with special type of matrix. This kind of matrix: band matrix with a parameter, from finite difference problem is obtained. Such linear systems occur while solving one dimensional wave equation (Partial Differential Equations of hyperbolic type) by using the central difference interval method of the second order. Interval methods are constructed so as the errors of method are enclosed in obtained results, therefore presented linear interval systems contain elements that determining the errors of difference method. The chosen direct algorithms have been applied for solving linear systems because they have no errors of method. All calculations were performed in floating-point interval arithmetic.
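The floating-point interval arithmetic such methods rely on can be sketched as follows; true directed (outward) rounding, which a faithful implementation needs so that results enclose the errors, is omitted here for brevity.

```python
# Minimal interval-arithmetic sketch (no directed rounding).

class Interval:
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Endpoints add monotonically.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The product range is spanned by the four endpoint products.
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

a = Interval(1.0, 2.0)
b = Interval(-3.0, 4.0)
print(a + b)   # [-2.0, 6.0]
print(a * b)   # [-6.0, 8.0]
```

With outward rounding added at each endpoint operation, solving the band system with a direct method yields intervals guaranteed to enclose the difference-method errors, as the abstract describes.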
Trust, confidence, and the 2008 global financial crisis.
Earle, Timothy C
2009-06-01
The 2008 global financial crisis has been compared to a "once-in-a-century credit tsunami," a disaster in which the loss of trust and confidence played key precipitating roles and the recovery from which will require the restoration of these crucial factors. Drawing on the analogy between the financial crisis and environmental and technological hazards, recent research on the role of trust and confidence in the latter is used to provide a perspective on the former. Whereas "trust" and "confidence" are used interchangeably and without explicit definition in most discussions of the financial crisis, this perspective uses the TCC model of cooperation to clearly distinguish between the two and to demonstrate how this distinction can lead to an improved understanding of the crisis. The roles of trust and confidence-both in precipitation and in possible recovery-are discussed for each of the three major sets of actors in the crisis, the regulators, the banks, and the public. The roles of trust and confidence in the larger context of risk management are also examined; trust being associated with political approaches, confidence with technical. Finally, the various stances that government can take with regard to trust-such as supportive or skeptical-are considered. Overall, it is argued that a clear understanding of trust and confidence and a close examination of the specific, concrete circumstances of a crisis-revealing when either trust or confidence is appropriate-can lead to useful insights for both recovery and prevention of future occurrences.
Basic Confidence Predictors of Career Decision-Making Self-Efficacy
ERIC Educational Resources Information Center
Paulsen, Alisa M.; Betz, Nancy E.
2004-01-01
The extent to which Basic Confidence Scales predicted career decision-making self-efficacy was studied in a sample of 627 undergraduate students. Six confidence variables accounted for 49% of the variance in career decision-making self-efficacy. Leadership confidence was the most important, but confidence in science, mathematics, writing, using…
Fetal sex determination in twin pregnancies using cell free fetal DNA analysis.
Milan, Miguel; Mateu, Emilia; Blesa, David; Clemente-Ciscar, Monica; Simon, Carlos
2018-04-23
We sought to develop an accurate sex classification method in twin pregnancies using data obtained from a standard commercial non-invasive prenatal test. A total of 706 twin pregnancies were included in this retrospective analytical data study. Normalized chromosome values for chromosomes X and Y were used and adapted into a sex-score to predict fetal sex in each fetus, and results were compared with the clinical outcome at birth. Outcome information at birth for sex chromosomes was available for 232 twin pregnancies. From these, a total of 173 twin pregnancies with a Y chromosome identified in non-invasive pregnancy testing were used for the development of a predictive model. Global accuracy for sex classification in the testing set with 51 samples was 0.98 (95% confidence interval [0.90,0.99]), with a specificity and sensitivity of 1 (95% confidence interval [0.82,1.00]) and 0.97 (95% confidence interval [0.84,0.99]), respectively. While non-invasive prenatal testing is a screening method and confirmatory results must be obtained by ultrasound or genetic diagnosis, the sex-score determination presented herein offers an accurate and useful approach to characterizing fetus sex in twin pregnancies in a non-invasive manner early on in pregnancy. © 2018 John Wiley & Sons, Ltd.
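The proportion-with-95%-CI figures reported above can be sketched with the Wilson score interval, a common choice for small samples; whether the authors used Wilson specifically is an assumption, and the counts below are illustrative.

```python
# Sketch: Wilson score 95% CI for a classification proportion.
# Counts are illustrative, not the study's exact confusion matrix.
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# e.g. 50 of 51 test-set fetuses correctly classified (hypothetical)
lo, hi = wilson_ci(50, 51)
print(round(lo, 2), round(hi, 2))
```

Note that for 50/51 the lower bound lands near 0.90, matching the scale of the interval reported for the 51-sample testing set.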
Confidence level estimation in multi-target classification problems
NASA Astrophysics Data System (ADS)
Chang, Shi; Isaacs, Jason; Fu, Bo; Shin, Jaejeong; Zhu, Pingping; Ferrari, Silvia
2018-04-01
This paper presents an approach for estimating the confidence level in automatic multi-target classification performed by an imaging sensor on an unmanned vehicle. An automatic target recognition algorithm comprised of a deep convolutional neural network in series with a support vector machine classifier detects and classifies targets based on the image matrix. The joint posterior probability mass function of target class, features, and classification estimates is learned from labeled data, and recursively updated as additional images become available. Based on the learned joint probability mass function, the approach presented in this paper predicts the expected confidence level of future target classifications, prior to obtaining new images. The proposed approach is tested with a set of simulated sonar image data. The numerical results show that the estimated confidence level provides a close approximation to the actual confidence level value determined a posteriori, i.e. after the new image is obtained by the on-board sensor. Therefore, the expected confidence level function presented in this paper can be used to adaptively plan the path of the unmanned vehicle so as to optimize the expected confidence levels and ensure that all targets are classified with satisfactory confidence after the path is executed.
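The prediction step described above, computing the expected confidence of a Bayes classifier from a learned joint pmf before any new image arrives, can be sketched with a toy discrete model (the class and feature names and all probability values here are invented for illustration; the paper's actual pmf is over target classes, image features, and classification estimates):

```python
# Toy joint pmf p(class, feature); all names and numbers are invented
p = {
    ("mine", "bright"): 0.30, ("mine", "dim"): 0.10,
    ("rock", "bright"): 0.15, ("rock", "dim"): 0.45,
}

def expected_confidence(pmf):
    """Expected max-posterior confidence of a Bayes classifier,
    averaged over features, before any new observation is made."""
    feats = {f for (_, f) in pmf}
    total = 0.0
    for f in feats:
        p_f = sum(v for (c, g), v in pmf.items() if g == f)       # p(feature)
        posts = [v / p_f for (c, g), v in pmf.items() if g == f]  # p(class | feature)
        total += p_f * max(posts)  # contribution of the winning class
    return total

print(round(expected_confidence(p), 3))  # 0.75
```

The a-posteriori confidence after observing a feature is the corresponding max-posterior term; the expectation above averages those terms over the feature distribution, which is the sense in which the confidence level can be predicted before the image is obtained.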
49 CFR 1103.23 - Confidences of a client.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 8 2010-10-01 2010-10-01 false Confidences of a client. 1103.23 Section 1103.23... Responsibilities Toward A Client § 1103.23 Confidences of a client. (a) The practitioner's duty to preserve his client's confidence outlasts the practitioner's employment by the client, and this duty extends to the...
Mammography interval and breast cancer mortality in women over the age of 75.
Simon, Michael S; Wassertheil-Smoller, Sylvia; Thomson, Cynthia A; Ray, Roberta M; Hubbell, F Allan; Lessin, Lawrence; Lane, Dorothy S; Kuller, Lew H
2014-11-01
The purpose of this study is to evaluate the relationship between mammography interval and breast cancer mortality among older women with breast cancer. The study population included 1,914 women diagnosed with invasive breast cancer at age 75 or later during their participation in the Women's Health Initiative, with an average follow-up of 4.4 years (3.1 SD). Cause of death was based on medical record review. Mammography interval was defined as the time between the last self-reported mammogram 7 or more months prior to diagnosis, and the date of diagnosis. Multivariable adjusted hazard ratios (HR) and 95 % confidence intervals (CIs) for breast cancer mortality and all-cause mortality were computed from Cox proportional hazards analyses. Prior mammograms were reported by 73.0 % of women from 7 months to ≤2 years of diagnosis (referent group), 19.4 % (>2 to <5 years), and 7.5 % (≥5 years or no prior mammogram). Women with the longest versus shortest intervals had more poorly differentiated (28.5 % vs. 22.7 %), advanced stage (25.7 % vs. 22.9 %), and estrogen receptor negative tumors (20.9 % vs. 13.1 %). Compared to the referent group, women with intervals of >2 to <5 years or ≥5 years had an increased risk of breast cancer mortality (HR 1.62, 95 % CI 1.03-2.54) and (HR 2.80, 95 % CI 1.57-5.00), respectively, p trend = 0.0002. There was no significant relationship between mammography interval and other causes of death. These results suggest a continued role for screening mammography among women 75 years of age and older.
Food skills confidence and household gatekeepers' dietary practices.
Burton, Melissa; Reid, Mike; Worsley, Anthony; Mavondo, Felix
2017-01-01
Household food gatekeepers have the potential to influence the food attitudes and behaviours of family members, as they are mainly responsible for food-related tasks in the home. The aim of this study was to determine the role of gatekeepers' confidence in food-related skills and nutrition knowledge on food practices in the home. An online survey was completed by 1059 Australian dietary gatekeepers selected from the Global Market Insite (GMI) research database. Participants responded to questions about food acquisition and preparation behaviours, the home eating environment, perceptions and attitudes towards food, and demographics. Two-step cluster analysis was used to identify groups based on confidence regarding food skills and nutrition knowledge. Chi-square tests and one-way ANOVAs were used to compare the groups on the dependent variables. Three groups were identified: low confidence, moderate confidence and high confidence. Gatekeepers in the highest confidence group were significantly more likely to report lower body mass index (BMI), and indicate higher importance of fresh food products, vegetable prominence in meals, product information use, meal planning, perceived behavioural control and overall diet satisfaction. Gatekeepers in the lowest confidence group were significantly more likely to indicate more perceived barriers to healthy eating, report more time constraints and more impulse purchasing practices, and higher convenience ingredient use. Other smaller associations were also found. Household food gatekeepers with high food skills confidence were more likely to engage in several healthy food practices, while those with low food skills confidence were more likely to engage in unhealthy food practices. Food education strategies aimed at building food-skills and nutrition knowledge will enable current and future gatekeepers to make healthier food decisions for themselves and for their families. Copyright © 2016 Elsevier Ltd. All rights reserved.
Waugh, E J; Badley, E M; Borkhoff, C M; Croxford, R; Davis, A M; Dunn, S; Gignac, M A; Jaglal, S B; Sale, J; Hawker, G A
2016-03-01
The purpose of this study is to examine the perceptions of primary care physicians (PCPs) regarding indications, contraindications, risks and benefits of total joint arthroplasty (TJA) and their confidence in selecting patients for referral for TJA. PCPs recruited from among those providing care to participants in an established community cohort with hip or knee osteoarthritis (OA). Self-completed questionnaires were used to collect demographic and practice characteristics and perceptions about TJA. Confidence in referring appropriate patients for TJA was measured on a scale from 1 to 10; respondents scoring in the lowest tertile were considered to have 'low confidence'. Descriptive analyses were conducted and multiple logistic regression was used to determine key predictors of low confidence. 212 PCPs participated (58% response rate) (65% aged 50+ years, 45% female, 77% >15 years of practice). Perceptions about TJA were highly variable but on average, PCPs perceived that a typical surgical candidate would have moderate pain and disability, identified few absolute contraindications to TJA, and overestimated both the effectiveness and risks of TJA. On average, PCPs indicated moderate confidence in deciding who to refer. Independent predictors of low confidence were female physicians (OR = 2.18, 95% confidence interval (CI): 1.06-4.46) and reporting a 'lack of clarity about surgical indications' (OR = 3.54, 95% CI: 1.87-6.66). Variability in perceptions and lack of clarity about surgical indications underscore the need for decision support tools to inform PCP - patient decision making regarding referral for TJA. Copyright © 2015 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Dinsmore, Daniel L.; Parkinson, Meghan M.
2013-01-01
Although calibration has been widely studied, questions remain about how best to capture confidence ratings, how to calculate continuous variable calibration indices, and on what exactly students base their reported confidence ratings. Undergraduates in a research methods class completed a prior knowledge assessment, two sets of readings and…
True and false memories, parietal cortex, and confidence judgments
Urgolites, Zhisen J.; Smith, Christine N.
2015-01-01
Recent studies have asked whether activity in the medial temporal lobe (MTL) and the neocortex can distinguish true memory from false memory. A frequent complication has been that the confidence associated with correct memory judgments (true memory) is typically higher than the confidence associated with incorrect memory judgments (false memory). Accordingly, it has often been difficult to know whether a finding is related to memory confidence or memory accuracy. In the current study, participants made recognition memory judgments with confidence ratings in response to previously studied scenes and novel scenes. The left hippocampus and 16 other brain regions distinguished true and false memories when confidence ratings were different for the two conditions. Only three regions (all in the parietal cortex) distinguished true and false memories when confidence ratings were equated. These findings illustrate the utility of taking confidence ratings into account when identifying brain regions associated with true and false memories. Neural correlates of true and false memories are most easily interpreted when confidence ratings are similar for the two kinds of memories. PMID:26472645
Patients and medical statistics. Interest, confidence, and ability.
Woloshin, Steven; Schwartz, Lisa M; Welch, H Gilbert
2005-11-01
People are increasingly presented with medical statistics. There are no existing measures to assess their level of interest or confidence in using medical statistics. To develop 2 new measures, the STAT-interest and STAT-confidence scales, and assess their reliability and validity. Survey with retest after approximately 2 weeks. Two hundred and twenty-four people were recruited from advertisements in local newspapers, an outpatient clinic waiting area, and a hospital open house. We developed and revised 5 items on interest in medical statistics and 3 on confidence understanding statistics. Study participants were mostly college graduates (52%); 25% had a high school education or less. The mean age was 53 (range 20 to 84) years. Most paid attention to medical statistics (6% paid no attention). The mean (SD) STAT-interest score was 68 (17) and ranged from 15 to 100. Confidence in using statistics was also high: the mean (SD) STAT-confidence score was 65 (19) and ranged from 11 to 100. STAT-interest and STAT-confidence scores were moderately correlated (r=.36, P<.001). Both scales demonstrated good test-retest repeatability (r=.60, .62, respectively), internal consistency reliability (Cronbach's alpha=0.70 and 0.78), and usability (individual item nonresponse ranged from 0% to 1.3%). Scale scores correlated only weakly with scores on a medical data interpretation test (r=.15 and .26, respectively). The STAT-interest and STAT-confidence scales are usable and reliable. Interest and confidence were only weakly related to the ability to actually use data.
Confidence and memory: assessing positive and negative correlations.
Roediger, Henry L; DeSoto, K Andrew
2014-01-01
The capacity to learn and remember surely evolved to help animals solve problems in their quest to reproduce and survive. In humans we assume that metacognitive processes also evolved, so that we know when to trust what we remember (i.e., when we have high confidence in our memories) and when not to (when we have low confidence). However this latter feature has been questioned by researchers, with some finding a high correlation between confidence and accuracy in reports from memory and others finding little to no correlation. In two experiments we report a recognition memory paradigm that, using the same materials (categorised lists), permits the study of positive correlations, zero correlations, and negative correlations between confidence and accuracy within the same procedure. We had subjects study words from semantic categories with the five items most frequently produced in norms omitted from the list; later, subjects were given an old/new recognition test and made confidence ratings on their judgements. Although the correlation between confidence and accuracy for studied items was generally positive, the correlation for the five omitted items was negative in some methods of analysis. We pinpoint the similarity between lures and targets as creating inversions between confidence and accuracy in memory. We argue that, while confidence is generally a useful indicant of accuracy in reports from memory, in certain environmental circumstances even adaptive processes can foster illusions of memory. Thus understanding memory illusions is similar to understanding perceptual illusions: Processes that are usually adaptive can go awry under certain circumstances.
Yu, Jingkai; Finley, Russell L
2009-01-01
High-throughput experimental and computational methods are generating a wealth of protein-protein interaction data for a variety of organisms. However, data produced by current state-of-the-art methods include many false positives, which can hinder the analyses needed to derive biological insights. One way to address this problem is to assign confidence scores that reflect the reliability and biological significance of each interaction. Most previously described scoring methods use a set of likely true positives to train a model to score all interactions in a dataset. A single positive training set, however, may be biased and not representative of true interaction space. We demonstrate a method to score protein interactions by utilizing multiple independent sets of training positives to reduce the potential bias inherent in using a single training set. We used a set of benchmark yeast protein interactions to show that our approach outperforms other scoring methods. Our approach can also score interactions across data types, which makes it more widely applicable than many previously proposed methods. We applied the method to protein interaction data from both Drosophila melanogaster and Homo sapiens. Independent evaluations show that the resulting confidence scores accurately reflect the biological significance of the interactions.
Petersen, Christian C; Mistlberger, Ralph E
2017-08-01
The mechanisms that enable mammals to time events that recur at 24-h intervals (circadian timing) and at arbitrary intervals in the seconds-to-minutes range (interval timing) are thought to be distinct at the computational and neurobiological levels. Recent evidence that disruption of circadian rhythmicity by constant light (LL) abolishes interval timing in mice challenges this assumption and suggests a critical role for circadian clocks in short interval timing. We sought to confirm and extend this finding by examining interval timing in rats in which circadian rhythmicity was disrupted by long-term exposure to LL or by chronic intake of 25% D2O. Adult, male Sprague-Dawley rats were housed in a light-dark (LD) cycle or in LL until free-running circadian rhythmicity was markedly disrupted or abolished. The rats were then trained and tested on 15- and 30-sec peak-interval procedures, with water restriction used to motivate task performance. Interval timing was found to be unimpaired in LL rats, but a weak circadian activity rhythm was apparently rescued by the training procedure, possibly due to binge feeding that occurred during the 15-min water access period that followed training each day. A second group of rats in LL were therefore restricted to 6 daily meals scheduled at 4-h intervals. Despite a complete absence of circadian rhythmicity in this group, interval timing was again unaffected. To eliminate all possible temporal cues, we tested a third group of rats in LL by using a pseudo-randomized schedule. Again, interval timing remained accurate. Finally, rats tested in LD received 25% D2O in place of drinking water. This markedly lengthened the circadian period and caused a failure of LD entrainment but did not disrupt interval timing. These results indicate that interval timing in rats is resistant to disruption by manipulations of circadian timekeeping previously shown to impair interval timing in mice.
Alturki, Reem; Schandelmaier, Stefan; Olu, Kelechi Kalu; von Niederhäusern, Belinda; Agarwal, Arnav; Frei, Roy; Bhatnagar, Neera; Hooft, Lotty; von Elm, Erik; Briel, Matthias
2017-01-01
discontinued (vs. other label) in corresponding trial registry records improved over time (adjusted odds ratio 1.16 per year, confidence interval 1.04-1.30) and was possibly associated with industry sponsorship (2.01, 0.99-4.07) but unlikely with multicenter status (0.81, 0.32-2.04) or sample size (1.07, 0.89-1.29). Less than half of published discontinued RCTs were accurately labelled as discontinued in corresponding registry records. One-third of registry records provided a reason for discontinuation. Current trial status information in registries should be viewed with caution. Copyright © 2016 Elsevier Inc. All rights reserved.
Miller, William C; Deathe, A Barry; Speechley, Mark
2003-05-01
To evaluate the internal consistency, test-retest reliability, and construct validity of the Activities-specific Balance Confidence (ABC) Scale among people who have a lower-limb amputation. Retest design. A university-affiliated outpatient amputee clinic in Ontario. Two samples of individuals who have unilateral transtibial and transfemoral amputation. Sample 1 (n=54) was a consecutive and sample 2 (n=329) a convenience sample of all members of the clinic population. Not applicable. Repeated application of the ABC Scale, a 16-item questionnaire that assesses confidence in performing various mobility-related tasks. Correlation to test hypothesized relationships between the ABC Scale and the 2-minute walk (2MWT) and the timed up-and-go (TUG) tests; and assessment of the ability of the ABC Scale to discriminate among groups based on amputation cause, amputation level, mobility device use, automatic stepping ability, wearing time, stair climbing ability, and walking distance. Test-retest reliability (intraclass correlation coefficient) of the ABC Scale was .91 (95% confidence interval [CI], .84-.95) with individual item test-retest coefficients ranging from .53 to .87. Internal consistency, measured by Cronbach alpha, was .95. Hypothesized associations with the 2MWT and TUG test were observed with correlations of .72 (95% CI, .56-.84) and -.70 (95% CI, -.82 to -.53), respectively. The ABC Scale discriminated between all groups except those based on amputation level. Balance confidence, as measured by the ABC Scale, is a construct that provides unique information potentially useful to clinicians who provide amputee rehabilitation. The ABC Scale is reliable, with strong support for validity. Study of the scale's responsiveness is recommended.
Normal probability plots with confidence.
Chantarangsi, Wanpen; Liu, Wei; Bretz, Frank; Kiatsupaibul, Seksan; Hayter, Anthony J; Wan, Fang
2015-01-01
Normal probability plots are widely used as a statistical tool for assessing whether an observed simple random sample is drawn from a normally distributed population. The users, however, have to judge subjectively, if no objective rule is provided, whether the plotted points fall close to a straight line. In this paper, we focus on how a normal probability plot can be augmented by intervals for all the points so that, if the population distribution is normal, then all the points should fall into the corresponding intervals simultaneously with probability 1-α. These simultaneous 1-α probability intervals therefore provide an objective means of judging whether the plotted points fall close to the straight line: the plotted points fall close to the straight line if and only if all the points fall into the corresponding intervals. The powers of several normal probability plot based (graphical) tests and the most popular nongraphical Anderson-Darling and Shapiro-Wilk tests are compared by simulation. Based on this comparison, recommendations are given in Section 3 on which graphical tests should be used in what circumstances. An example is provided to illustrate the methods. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
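The simultaneous-interval idea can be sketched with a generic Monte Carlo calibration (a plain-Python illustration, not the authors' exact construction): compute pointwise intervals for each order statistic of a standard normal sample, then widen them until the whole envelope simultaneously covers 1-α of simulated sorted samples.

```python
import random

random.seed(0)
n, M, alpha = 20, 2000, 0.05

# Reference distribution: M sorted samples of size n from N(0, 1)
sims = [sorted(random.gauss(0, 1) for _ in range(n)) for _ in range(M)]

def envelope(level):
    """Pointwise (1 - level) interval for each order statistic."""
    lo, hi = [], []
    for i in range(n):
        col = sorted(s[i] for s in sims)
        lo.append(col[int(level / 2 * M)])
        hi.append(col[int((1 - level / 2) * M) - 1])
    return lo, hi

# Shrink the per-point level until the band covers 1 - alpha simultaneously
level = alpha
while True:
    lo, hi = envelope(level)
    cover = sum(
        all(lo[i] <= s[i] <= hi[i] for i in range(n)) for s in sims
    ) / M
    if cover >= 1 - alpha:
        break
    level /= 1.5

# A sorted observed sample whose points all lie inside [lo, hi]
# is consistent with normality at the 1 - alpha level.
```

In a real plot the band would be drawn around the ideal straight line of the probability plot; library quantile functions (e.g., from scipy or statsmodels) could replace the stdlib pieces, and the paper's exact interval construction may differ from this calibration loop.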
Gallo, David A; Cramer, Stefanie J; Wong, Jessica T; Bennett, David A
2012-07-01
Alzheimer's disease (AD) can impair metacognition in addition to more basic cognitive functions like memory. However, while global metacognitive inaccuracies are well documented (i.e., low deficit awareness, or anosognosia), the evidence is mixed regarding the effects of AD on local or task-based metacognitive judgments. Here we investigated local metacognition with respect to the confidence-accuracy relationship in episodic memory (i.e., metamemory). AD and control participants studied pictures of common objects and their verbal labels, and then took forced-choice picture recollection tests using the verbal labels as retrieval cues. We found that item-based confidence judgments discriminated between accurate and inaccurate recollection responses in both groups, implicating relatively spared metamemory in AD. By contrast, there was evidence for global metacognitive deficiencies, as AD participants underestimated the severity of their everyday problems compared to an informant's assessment. Within the AD group, individual differences in global metacognition were related to recollection accuracy, and global metacognition for everyday memory problems was related to task-based metacognitive accuracy. These findings suggest that AD can spare the confidence-accuracy relationship in recollection tasks, and that global and local metacognition measures tap overlapping neuropsychological processes. Copyright © 2012 Elsevier Ltd. All rights reserved.
The KFM, A Homemade Yet Accurate and Dependable Fallout Meter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kearny, C.H.
The KFM is a homemade fallout meter that can be made using only materials, tools, and skills found in millions of American homes. It is an accurate and dependable electroscope-capacitor. The KFM, in conjunction with its attached table and a watch, is designed for use as a rate meter. Its attached table relates observed differences in the separations of its two leaves (before and after exposures at the listed time intervals) to the dose rates during exposures of these time intervals. In this manner dose rates from 30 mR/hr up to 43 R/hr can be determined with an accuracy of ±25%. A KFM can be charged with any one of the three expedient electrostatic charging devices described. Due to the use of anhydrite (made by heating gypsum from wallboard) inside a KFM and the expedient "dry-bucket" in which it can be charged when the air is very humid, this instrument can always be charged and used to obtain accurate measurements of gamma radiation no matter how high the relative humidity. The heart of this report is the step-by-step illustrated instructions for making and using a KFM. These instructions have been improved after each successive field test. The majority of the untrained test families, adequately motivated by cash bonuses offered for success and guided only by these written instructions, have succeeded in making and using a KFM. NOTE: "The KFM, A Homemade Yet Accurate and Dependable Fallout Meter" was published as an Oak Ridge National Laboratory report in 1979. Some of the materials originally suggested for suspending the leaves of the Kearny Fallout Meter (KFM) are no longer available. Because of changes in the manufacturing process, other materials (e.g., sewing thread, unwaxed dental floss) may not have the insulating capability to work properly. Oak Ridge National Laboratory has not tested any of the suggestions provided in the preface of the report, but they have been used by other groups. When using these instructions, the builder can verify
High-intensity cycle interval training improves cycling and running performance in triathletes.
Etxebarria, Naroa; Anson, Judith M; Pyne, David B; Ferguson, Richard A
2014-01-01
Effective cycle training for triathlon is a challenge for coaches. We compared the effects of two variants of cycle high-intensity interval training (HIT) on triathlon-specific cycling and running. Fourteen moderately-trained male triathletes (VO2peak 58.7 ± 8.1 mL kg(-1) min(-1); mean ± SD) completed on separate occasions a maximal incremental test (VO2peak and maximal aerobic power), 16 × 20 s cycle sprints and a 1-h triathlon-specific cycle followed immediately by a 5 km run time trial. Participants were then pair-matched and assigned randomly to either a long high-intensity interval training (LONG) (6-8 × 5 min efforts) or short high-intensity interval training (SHORT) (9-11 × 10, 20 and 40 s efforts) HIT cycle training intervention. Six training sessions were completed over 3 weeks before participants repeated the baseline testing. Both groups had an ∼7% increase in VO2peak (SHORT 7.3%, ±4.6%; mean, ±90% confidence limits; LONG 7.5%, ±1.7%). There was a moderate improvement in mean power for both the SHORT (10.3%, ±4.4%) and LONG (10.7%, ±6.8%) groups during the last eight 20-s sprints. There was a small to moderate decrease in heart rate, blood lactate and perceived exertion in both groups during the 1-h triathlon-specific cycling but only the LONG group had a substantial decrease in the subsequent 5-km run time (64, ±59 s). Moderately-trained triathletes should use both short and long high-intensity intervals to improve cycling physiology and performance. Longer 5-min intervals on the bike are more likely to benefit 5 km running performance.
Reference interval computation: which method (not) to choose?
Pavlov, Igor Y; Wilson, Andrew R; Delgado, Julio C
2012-07-11
When different methods are applied to reference interval (RI) calculation the results can sometimes be substantially different, especially for small reference groups. If there are no reliable RI data available, there is no way to confirm which method generates results closest to the true RI. We randomly drew samples obtained from a public database for 33 markers. For each sample, RIs were calculated by bootstrapping, parametric, and Box-Cox transformed parametric methods. Results were compared to the values of the population RI. For approximately half of the 33 markers, results of all 3 methods were within 3% of the true reference value. For other markers, parametric results were either unavailable or deviated considerably from the true values. The transformed parametric method was more accurate than bootstrapping for a sample size of 60, very close to bootstrapping for a sample size of 120, but in some cases unavailable. We recommend against using parametric calculations to determine RIs. The transformed parametric method utilizing the Box-Cox transformation would be the preferable way of calculating RIs, if the data satisfy a normality test. If not, bootstrapping is always available, and is almost as accurate and precise as the transformed parametric method. Copyright © 2012 Elsevier B.V. All rights reserved.
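The contrast the authors draw can be sketched on simulated data (the distribution, its parameters, and the 120-observation sample size are illustrative; their transformed parametric method would additionally apply a Box-Cox transformation before the mean ± z·SD computation):

```python
import random
import statistics
from statistics import NormalDist

random.seed(1)
data = [random.gauss(100.0, 10.0) for _ in range(120)]  # hypothetical analyte values

# Parametric RI: mean +/- z * SD, valid only if the data pass a normality test
z = NormalDist().inv_cdf(0.975)
m, s = statistics.fmean(data), statistics.stdev(data)
ri_param = (m - z * s, m + z * s)

# Nonparametric bootstrap RI: average the 2.5th/97.5th percentiles over resamples
B, lo_b, hi_b = 1000, [], []
for _ in range(B):
    resample = random.choices(data, k=len(data))
    qs = statistics.quantiles(resample, n=40, method="inclusive")  # 2.5% steps
    lo_b.append(qs[0])    # 2.5th percentile estimate
    hi_b.append(qs[-1])   # 97.5th percentile estimate
ri_boot = (statistics.fmean(lo_b), statistics.fmean(hi_b))
```

On normal data like this both approaches land near the true 2.5th and 97.5th percentiles; the abstract's point is that on skewed real markers the untransformed parametric RI can deviate badly, while the bootstrap remains available and nearly as precise as the Box-Cox parametric route.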
Building Scientific Confidence in the Development and ...
Building Scientific Confidence in the Development and Evaluation of Read-Across Using Tox21 Approaches: a case-study slide presentation at the GlobalChem conference and workshop in Washington, DC.
Preservice Educators' Confidence in Addressing Sexuality Education
ERIC Educational Resources Information Center
Wyatt, Tammy Jordan
2009-01-01
This study examined 328 preservice educators' level of confidence in addressing four sexuality education domains and 21 sexuality education topics. Significant differences in confidence levels across the four domains were found for gender, academic major, sexuality education philosophy, and sexuality education knowledge. Preservice educators…
Simultaneous confidence sets for several effective doses.
Tompsett, Daniel M; Biedermann, Stefanie; Liu, Wei
2018-04-03
Construction of simultaneous confidence sets for several effective doses currently relies on inverting the Scheffé type simultaneous confidence band, which is known to be conservative. We develop novel methodology to make the simultaneous coverage closer to its nominal level, for both two-sided and one-sided simultaneous confidence sets. Our approach is shown to be considerably less conservative than the current method, and is illustrated with an example on modeling the effect of smoking status and serum triglyceride level on the probability of the recurrence of a myocardial infarction. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Multiple confidence estimates as indices of eyewitness memory.
Sauer, James D; Brewer, Neil; Weber, Nathan
2008-08-01
Eyewitness identification decisions are vulnerable to various influences on witnesses' decision criteria that contribute to false identifications of innocent suspects and failures to choose perpetrators. An alternative procedure using confidence estimates to assess the degree of match between novel and previously viewed faces was investigated. Classification algorithms were applied to participants' confidence data to determine when a confidence value or pattern of confidence values indicated a positive response. Experiment 1 compared confidence group classification accuracy with a binary decision control group's accuracy on a standard old-new face recognition task and found superior accuracy for the confidence group for target-absent trials but not for target-present trials. Experiment 2 used a face mini-lineup task and found reduced target-present accuracy offset by large gains in target-absent accuracy. Using a standard lineup paradigm, Experiments 3 and 4 also found improved classification accuracy for target-absent lineups and, with a more sophisticated algorithm, for target-present lineups. This demonstrates the accessibility of evidence for recognition memory decisions and points to a more sensitive index of memory quality than is afforded by binary decisions.
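The abstract does not specify the classification algorithms used, but a minimal hypothetical rule illustrates how a pattern of match-confidence values could be turned into a lineup decision (the function name, criterion, and margin values are invented for illustration only):

```python
def classify_lineup(confidences, criterion=80, margin=20):
    """Hypothetical rule: return the index of the face with the highest
    match confidence only if it exceeds a criterion AND stands clear of
    the runner-up by a margin; otherwise reject the lineup (return None)."""
    ranked = sorted(range(len(confidences)),
                    key=lambda i: confidences[i], reverse=True)
    best, runner = ranked[0], ranked[1]
    if confidences[best] >= criterion and \
            confidences[best] - confidences[runner] >= margin:
        return best   # positive identification
    return None       # lineup rejection

# One clear standout -> identify; a close runner-up or a weak best -> reject
assert classify_lineup([90, 40, 30, 20]) == 0
assert classify_lineup([90, 85, 30, 20]) is None
```

A rule of this shape makes concrete why such classifiers can gain accuracy on target-absent lineups: a binary chooser forced to pick may select the best-matching innocent face, whereas the confidence pattern (no standout above criterion) supports a rejection.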
What Teachers Should Know About the Bootstrap: Resampling in the Undergraduate Statistics Curriculum
Hesterberg, Tim C.
2015-01-01
Bootstrapping has enormous potential in statistics education and practice, but there are subtle issues and ways to go wrong. For example, the common combination of nonparametric bootstrapping and bootstrap percentile confidence intervals is less accurate than using t-intervals for small samples, though more accurate for larger samples. My goals in this article are to provide a deeper understanding of bootstrap methods—how they work, when they work or not, and which methods work better—and to highlight pedagogical issues. Supplementary materials for this article are available online. [Received December 2014. Revised August 2015] PMID:27019512
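Hesterberg's small-sample comparison can be illustrated with a stdlib-only sketch of the two intervals for a mean (the t quantile is hardcoded for df = 14; a stats library such as scipy would supply it exactly, and the skewed sample here is simulated for illustration):

```python
import random
import statistics

random.seed(2)
sample = [random.expovariate(1 / 10) for _ in range(15)]  # small, skewed sample
n, m = len(sample), statistics.fmean(sample)
se = statistics.stdev(sample) / n ** 0.5

# Classic t-interval for the mean (97.5% t quantile for df = 14)
t14 = 2.145
t_ci = (m - t14 * se, m + t14 * se)

# Bootstrap percentile interval: 2.5th/97.5th percentiles of resampled means
boots = sorted(statistics.fmean(random.choices(sample, k=n))
               for _ in range(2000))
pct_ci = (boots[49], boots[1949])
```

For samples this small the percentile interval tends to be even shorter than the t-interval and undercovers, which is the article's warning; for larger samples the ranking reverses, and methods such as bootstrap-t close the gap.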
Corrected confidence bands for functional data using principal components.
Goldsmith, J; Greven, S; Crainiceanu, C
2013-03-01
Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. Copyright © 2013, The International Biometric Society.
Hypercorrection of high-confidence errors in the classroom.
Carpenter, Shana K; Haynes, Cynthia L; Corral, Daniel; Yeung, Kam Leung
2018-05-19
People often have erroneous knowledge about the world that is firmly entrenched in memory and endorsed with high confidence. Although strong errors in memory would seem difficult to "un-learn," evidence suggests that errors are more likely to be corrected through feedback when they are originally endorsed with high confidence compared to low confidence. This hypercorrection effect has been predominantly studied in laboratory settings with general knowledge (i.e., trivia) questions, however, and has not been systematically explored in authentic classroom contexts. In the current study, college students in an introductory horticulture class answered questions about the course content, rated their confidence in their answers, received feedback of the correct answers, and then later completed a posttest. Results revealed a significant hypercorrection effect, along with a tendency for students with higher prior knowledge of the material to express higher confidence in, and in turn more effective correction of, their error responses.
On the accurate estimation of gap fraction during daytime with digital cover photography
NASA Astrophysics Data System (ADS)
Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.
2015-12-01
Digital cover photography (DCP) has emerged as an indirect method to obtain gap fraction accurately. Thus far, however, subjective choices, such as the camera relative exposure value (REV) and the threshold applied to the image histogram, have hindered accurate computation of gap fraction. Here we propose a novel method that enables us to measure gap fraction accurately during daytime under various sky conditions with DCP. The novel method computes gap fraction from a single unsaturated raw DCP image, corrected for scattering effects by canopies, together with a sky image reconstructed from the same raw image. To test the sensitivity of the derived gap fraction to diverse REVs, solar zenith angles and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies at REVs from 0 to -5. The novel method showed little variation in gap fraction across REVs in both dense and sparse canopies and across a diverse range of solar zenith angles. The perforated panel experiment, which was used to test the accuracy of the estimated gap fraction, confirmed that the novel method yielded accurate and consistent gap fractions across different hole sizes, gap fractions and solar zenith angles. These findings highlight that the novel method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful in monitoring LAI precisely and validating satellite remote sensing LAI products efficiently.
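At its core, gap fraction is the share of sky pixels in a classified canopy image. The toy function below is a deliberate simplification (a fixed global brightness threshold over a 2-D pixel grid, with hypothetical names), not the paper's raw-image, scattering-corrected method:

```python
def gap_fraction(image, threshold):
    """Gap fraction = sky pixels / total pixels.

    `image` is a 2-D grid (list of rows) of brightness values; pixels
    brighter than `threshold` are classified as sky (canopy gaps).
    """
    total = sky = 0
    for row in image:
        for px in row:
            total += 1
            if px > threshold:
                sky += 1
    return sky / total

# Tiny illustrative "photo": one dark canopy pixel, three bright sky pixels.
gf = gap_fraction([[0, 255], [255, 255]], threshold=128)
```

The paper's contribution is precisely to remove the subjective `threshold` choice; this sketch only shows what quantity is being estimated.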
Confidence regions of planar cardiac vectors
NASA Technical Reports Server (NTRS)
Dubin, S.; Herr, A.; Hunt, P.
1980-01-01
A method for plotting the confidence regions of vectorial data obtained in electrocardiology is presented. The 90%, 95% and 99% confidence regions of cardiac vectors represented in a plane are obtained in the form of an ellipse centered at coordinates corresponding to the means of a sample selected at random from a bivariate normal distribution. An example of such a plot for the frontal plane QRS mean electrical axis for 80 horses is also presented.
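For a bivariate normal, the p-level confidence ellipse is the set of points whose Mahalanobis distance from the mean does not exceed the corresponding chi-square quantile with 2 degrees of freedom. A minimal sketch with hypothetical helper names (note: for the region of the sample *mean*, rather than of a single observation, the covariance would additionally be divided by the sample size, and strictly an F quantile via Hotelling's T-squared replaces the chi-square factor):

```python
import math

# Chi-square quantiles with 2 degrees of freedom: the p-level ellipse of a
# bivariate normal is {x : (x - mu)^T S^(-1) (x - mu) <= chi2_p}.
CHI2_2DF = {0.90: 4.605, 0.95: 5.991, 0.99: 9.210}

def confidence_ellipse(mean, cov, p=0.95):
    """Return (center, semi_major, semi_minor, angle_rad) of the p-level
    ellipse for a 2x2 covariance matrix [[sxx, sxy], [sxy, syy]]."""
    sxx, sxy, syy = cov[0][0], cov[0][1], cov[1][1]
    # Closed-form eigenvalues of the symmetric 2x2 covariance matrix.
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    lam1, lam2 = tr / 2.0 + disc, tr / 2.0 - disc
    scale = CHI2_2DF[p]
    angle = 0.5 * math.atan2(2.0 * sxy, sxx - syy)  # major-axis orientation
    return mean, math.sqrt(scale * lam1), math.sqrt(scale * lam2), angle
```

Plotting the 90%, 95% and 99% ellipses then just means calling this three times with the sample mean and covariance of the planar QRS vectors.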
Confidence-Based Feature Acquisition
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; desJardins, Marie; MacGlashan, James
2010-01-01
Confidence-based Feature Acquisition (CFA) is a novel, supervised learning method for acquiring missing feature values when there is missing data at both training (learning) and test (deployment) time. To train a machine learning classifier, data is encoded with a series of input features describing each item. In some applications, the training data may have missing values for some of the features, which can be acquired at a given cost. A relevant JPL example is that of the Mars rover exploration in which the features are obtained from a variety of different instruments, with different power consumption and integration time costs. The challenge is to decide which features will lead to increased classification performance and are therefore worth acquiring (paying the cost). To solve this problem, CFA, which is made up of two algorithms (CFA-train and CFA-predict), has been designed to greedily minimize total acquisition cost (during training and testing) while aiming for a specific accuracy level (specified as a confidence threshold). With this method, it is assumed that there is a nonempty subset of features that are free; that is, every instance in the data set includes these features initially for zero cost. It is also assumed that the feature acquisition (FA) cost associated with each feature is known in advance, and that the FA cost for a given feature is the same for all instances. Finally, CFA requires that the base-level classifiers produce not only a classification, but also a confidence (or posterior probability).
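The test-time loop of such a scheme can be sketched as a greedy acquire-until-confident procedure. The skeleton below is an illustrative simplification, not the published CFA-predict algorithm: starting from the free features, it buys the cheapest remaining feature until the classifier's confidence reaches the threshold. The toy classifier and all names are hypothetical:

```python
def acquire_until_confident(instance, predict_with, costs, free_features,
                            threshold):
    """Greedy sketch in the spirit of CFA-predict (simplified).

    instance: feature -> value for every feature (acquisition is simulated
    by revealing values to the classifier); costs: payable feature -> cost.
    Returns (label, confidence, total cost spent).
    """
    acquired = set(free_features)
    spent = 0.0
    label, conf = predict_with(acquired, instance)
    for feat in sorted(costs, key=costs.get):  # cheapest-first ordering
        if conf >= threshold:
            break
        if feat in acquired:
            continue
        acquired.add(feat)
        spent += costs[feat]
        label, conf = predict_with(acquired, instance)
    return label, conf, spent

def toy_predict(acquired, instance):
    """Toy posterior: confidence grows with the evidence acquired."""
    score = sum(instance[f] for f in acquired)
    return ("positive" if score > 0 else "negative"), 1 - 0.5 ** (1 + score)

label, conf, spent = acquire_until_confident(
    {"a": 1, "b": 1, "c": 1}, toy_predict,
    costs={"b": 1.0, "c": 5.0}, free_features={"a"}, threshold=0.8,
)
```

With these toy numbers the free feature alone gives confidence 0.75, so the loop buys only the cheap feature "b" and stops, never paying for "c".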
Hodgkins, Glenn A.; Stewart, Gregory J.; Cohn, Timothy A.; Dudley, Robert W.
2007-01-01
Large amounts of rain fell on southern Maine from the afternoon of April 15, 2007, to the afternoon of April 16, 2007, causing substantial damage to houses, roads, and culverts. This report provides an estimate of the peak flows on two rivers in southern Maine--the Mousam River and the Little Ossipee River--because of their severe flooding. The April 2007 estimated peak flow of 9,230 ft3/s at the Mousam River near West Kennebunk had a recurrence interval between 100 and 500 years; 95-percent confidence limits for this flow ranged from 25 years to greater than 500 years. The April 2007 estimated peak flow of 8,220 ft3/s at the Little Ossipee River near South Limington had a recurrence interval between 100 and 500 years; 95-percent confidence limits for this flow ranged from 50 years to greater than 500 years.
Assessing confidence in Pliocene sea surface temperatures to evaluate predictive models
Dowsett, Harry J.; Robinson, Marci M.; Haywood, Alan M.; Hill, Daniel J.; Dolan, Aisling M.; Stoll, Danielle K.; Chan, Wing-Le; Abe-Ouchi, Ayako; Chandler, Mark A.; Rosenbloom, Nan A.; Otto-Bliesner, Bette L.; Bragg, Fran J.; Lunt, Daniel J.; Foley, Kevin M.; Riesselman, Christina R.
2012-01-01
In light of mounting empirical evidence that planetary warming is well underway, the climate research community looks to palaeoclimate research for a ground-truthing measure with which to test the accuracy of future climate simulations. Model experiments that attempt to simulate climates of the past serve to identify both similarities and differences between two climate states and, when compared with simulations run by other models and with geological data, to identify model-specific biases. Uncertainties associated with both the data and the models must be considered in such an exercise. The most recent period of sustained global warmth similar to what is projected for the near future occurred about 3.3–3.0 million years ago, during the Pliocene epoch. Here, we present Pliocene sea surface temperature data, newly characterized in terms of level of confidence, along with initial experimental results from four climate models. We conclude that, in terms of sea surface temperature, models are in good agreement with estimates of Pliocene sea surface temperature in most regions except the North Atlantic. Our analysis indicates that the discrepancy between the Pliocene proxy data and model simulations in the mid-latitudes of the North Atlantic, where models underestimate warming shown by our highest-confidence data, may provide a new perspective and insight into the predictive abilities of these models in simulating a past warm interval in Earth history. This is important because the Pliocene has a number of parallels to present predictions of late twenty-first century climate.
Frontline nurse managers' confidence and self-efficacy.
Van Dyk, Jennifer; Siedlecki, Sandra L; Fitzpatrick, Joyce J
2016-05-01
This study was focused on determining relationships between confidence levels and self-efficacy among nurse managers. Frontline nurse managers have a pivotal role in delivering high-quality patient care while managing the associated costs and resources. The competency and skill of nurse managers affect every aspect of patient care and staff well-being as nurse managers are largely responsible for creating work environments in which clinical nurses are able to provide high-quality, patient-centred, holistic care. A descriptive, correlational survey design was used; 85 nurse managers participated. Years in a formal leadership role and confidence scores were found to be significant predictors of self-efficacy scores. Experience as a nurse manager is an important component of confidence and self-efficacy. There is a need to develop educational programmes for nurse managers to enhance their self-confidence and self-efficacy, and to maintain experienced nurse managers in the role. © 2016 John Wiley & Sons Ltd.
Weighting Mean and Variability during Confidence Judgments
de Gardelle, Vincent; Mamassian, Pascal
2015-01-01
Humans can not only perform some visual tasks with great precision, they can also judge how good they are in these tasks. However, it remains unclear how observers produce such metacognitive evaluations, and how these evaluations might be dissociated from the performance in the visual task. Here, we hypothesized that some stimulus variables could affect confidence judgments above and beyond their impact on performance. In a motion categorization task on moving dots, we manipulated the mean and the variance of the motion directions, to obtain a low-mean low-variance condition and a high-mean high-variance condition with matched performances. Critically, in terms of confidence, observers were not indifferent between these two conditions. Observers exhibited marked preferences, which were heterogeneous across individuals, but stable within each observer when assessed one week later. Thus, confidence and performance are dissociable and observers’ confidence judgments put different weights on the stimulus variables that limit performance. PMID:25793275
Gait consistency over a 7-day interval in people with Parkinson's disease.
Urquhart, D M; Morris, M E; Iansek, R
1999-06-01
To evaluate the consistency of temporal and spatial parameters of the walking pattern in subjects with idiopathic Parkinson's disease (PD) over a 7-day interval during the "on" phase of the levodopa medication cycle. Walking patterns were measured on a 12-meter walkway at the Kingston Gait Laboratory, Cheltenham, using a computerized stride analyzer. Sixteen subjects (7 women, 9 men) with PD were recruited from the Movement Disorders Clinic at Kingston Centre. Speed of walking, stride length, cadence, and the percentage of the walking cycle spent in the double limb support phase of gait were measured, together with the level of disability as indexed by the modified Webster scale. Product-moment correlation coefficients and intraclass correlation coefficients (ICC 2,1) for repeat measures over a 7-day interval were high for speed (r = .90; ICC = .93), cadence (r = .90; ICC = .86), and stride length (r = 1.00; ICC = .97) and moderate for double limb support duration after removal of outliers (r = .75; ICC = .73); 95% confidence intervals for the change scores were within clinically acceptable limits for all variables. The mean modified Webster score was 11.4 on the first day and 10.1 seven days later. The gait pattern and level of disability in subjects with PD without severe motor fluctuations remained stable over a 1-week period when optimal medication prevailed.
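ICC(2,1), the agreement measure used above, comes from a two-way ANOVA decomposition of a subjects-by-sessions table. A self-contained sketch under the standard Shrout-Fleiss definition (two-way random effects, absolute agreement, single measurement); the function name is illustrative:

```python
def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `data` is a list of rows, one per subject; columns are the repeated
    sessions (e.g. day 1 and day 7 gait speed).
    """
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # sessions
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```

Because ICC(2,1) measures absolute agreement, a constant offset between sessions lowers it even when the rank order of subjects is perfectly preserved.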
Confidence and self-attribution bias in an artificial stock market
Bertella, Mario A.; Pires, Felipe R.; Rego, Henio H. A.; Vodenska, Irena; Stanley, H. Eugene
2017-01-01
Using an agent-based model we examine the dynamics of stock price fluctuations and their rates of return in an artificial financial market composed of fundamentalist and chartist agents with and without confidence. We find that chartist agents who are confident generate higher price and rate of return volatilities than those who are not. We also find that kurtosis and skewness are lower in our simulation study of agents who are not confident. We show that the stock price and confidence index—both generated by our model—are cointegrated and that stock price affects confidence index but confidence index does not affect stock price. We next compare the results of our model with the S&P 500 index and its respective stock market confidence index using cointegration and Granger tests. As in our model, we find that stock prices drive their respective confidence indices, but that the opposite relationship, i.e., the assumption that confidence indices drive stock prices, is not significant. PMID:28231255
Haidar, Ziad A; Papanna, Ramesha; Sibai, Baha M; Tatevian, Nina; Viteri, Oscar A; Vowels, Patricia C; Blackwell, Sean C; Moise, Kenneth J
2017-08-01
(14.2 ± 3.8) were higher in morbidly adherent placenta (P < .001); (4) areas under the receiver-operating characteristic curve for the vascular and vascular flow indices were 0.99 and 0.97, respectively; (5) the vascular index ≥21 predicted morbidly adherent placenta with a sensitivity of 95% (95% confidence interval, 88.2-96.9), a specificity of 91% (95% confidence interval, 87.5-92.4), a 92% positive predictive value (95% confidence interval, 85.5-94.3), a 90% negative predictive value (95% confidence interval, 79.9-95.3), a positive likelihood ratio of 10.55 (95% confidence interval, 7.06-12.75), and a negative likelihood ratio of 0.05 (95% confidence interval, 0.03-0.13); and (6) for the severe morbidly adherent placenta, 2-dimensional ultrasound had a sensitivity of 33.3% (95% confidence interval, 11.3-64.6), a specificity of 81.8% (95% confidence interval, 47.8-96.8), a positive predictive value of 66.7% (95% confidence interval, 24.1-94.1), a negative predictive value of 52.9% (95% confidence interval, 28.5-76.1), a positive likelihood ratio of 1.83 (95% confidence interval, 0.41-8.11), and a negative likelihood ratio of 0.81 (95% confidence interval, 0.52-1.26). A vascular index ≥31 predicted the diagnosis of a severe morbidly adherent placenta with a 100% sensitivity (95% confidence interval, 72-100), a 90% specificity (95% confidence interval, 81.7-93.8), an 88% positive predictive value (95% confidence interval, 55.0-91.3), a 100% negative predictive value (95% confidence interval, 90.9-100), a positive likelihood ratio of 10.0 (95% confidence interval, 3.93-16.13), and a negative likelihood ratio of 0 (95% confidence interval, 0-0.34). Intrarater and interrater agreements were 94% (P < .001) and 93% (P < .001), respectively. The vascular index accurately predicts the morbidly adherent placenta in patients with placenta previa. In addition, 3-dimensional power Doppler vascular and vascular flow indices were more predictive of severe
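All of the quoted screening statistics derive from a 2x2 table of test result against disease status. A sketch computing them, with Wilson score intervals for the proportions (function names are illustrative, and the abstract does not state which CI method the authors used):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, predictive values, and likelihood ratios
    from true/false positive and negative counts."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "lr_plus": sens / (1 - spec),    # how much a positive test helps
        "lr_minus": (1 - sens) / spec,   # how much a negative test helps
        "sens_ci": wilson_ci(tp, tp + fn),
        "spec_ci": wilson_ci(tn, tn + fp),
    }

# Illustrative counts chosen to match sensitivity 95% and specificity 91%.
m = diagnostic_metrics(tp=95, fp=9, fn=5, tn=91)
```

Note how LR+ = 0.95 / 0.09, about 10.6, falls right in the range the abstract reports for the vascular index.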
Examining Response Confidence in Multiple Text Tasks
ERIC Educational Resources Information Center
List, Alexandra; Alexander, Patricia A.
2015-01-01
Students' confidence in their responses to a multiple text-processing task and their justifications for those confidence ratings were investigated. Specifically, 215 undergraduates responded to two academic questions, differing by type (i.e., discrete and open-ended) and by domain (i.e., developmental psychology and astrophysics), using a digital…
Accurate determination of imaging modality using an ensemble of text- and image-based classifiers.
Kahn, Charles E; Kalpathy-Cramer, Jayashree; Lam, Cesar A; Eldredge, Christina E
2012-02-01
Imaging modality can aid retrieval of medical images for clinical practice, research, and education. We evaluated whether an ensemble classifier could outperform its constituent individual classifiers in determining the modality of figures from radiology journals. Seventeen automated classifiers analyzed 77,495 images from two radiology journals. Each classifier assigned one of eight imaging modalities--computed tomography, graphic, magnetic resonance imaging, nuclear medicine, positron emission tomography, photograph, ultrasound, or radiograph--to each image based on visual and/or textual information. Three physicians determined the modality of 5,000 randomly selected images as a reference standard. A "Simple Vote" ensemble classifier assigned each image to the modality that received the greatest number of individual classifiers' votes. A "Weighted Vote" classifier weighted each individual classifier's vote based on performance over a training set. For each image, this classifier's output was the imaging modality that received the greatest weighted vote score. We measured precision, recall, and F score (the harmonic mean of precision and recall) for each classifier. Individual classifiers' F scores ranged from 0.184 to 0.892. The simple vote and weighted vote classifiers correctly assigned 4,565 images (F score, 0.913; 95% confidence interval, 0.905-0.921) and 4,672 images (F score, 0.934; 95% confidence interval, 0.927-0.941), respectively. The weighted vote classifier performed significantly better than all individual classifiers. An ensemble classifier correctly determined the imaging modality of 93% of figures in our sample. The imaging modality of figures published in radiology journals can be determined with high accuracy, which will improve systems for image retrieval.
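The two ensemble rules described above are straightforward to express. The sketch below uses hypothetical names, with the F score (harmonic mean of precision and recall) as the per-classifier weight, in the spirit of the article's "Weighted Vote":

```python
from collections import defaultdict

def f_score(precision, recall):
    """F score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

def weighted_vote(predictions, weights):
    """predictions: classifier name -> predicted modality label;
    weights: classifier name -> weight (e.g. its F score on a training set).
    Returns the label with the greatest total weighted vote."""
    scores = defaultdict(float)
    for name, label in predictions.items():
        scores[label] += weights.get(name, 1.0)
    return max(scores, key=scores.get)

def simple_vote(predictions):
    """Unweighted majority vote: every classifier counts equally."""
    return weighted_vote(predictions, {name: 1.0 for name in predictions})

preds = {"text_clf": "CT", "visual_clf_1": "MRI", "visual_clf_2": "MRI"}
```

With these toy inputs, the simple vote picks MRI (2 votes to 1), while a weighted vote that trusts the strong text classifier (weight 0.9 vs 0.3 each) overturns the majority and picks CT; that ability to let reliable classifiers outvote weak ones is why the weighted ensemble scored higher.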
Activities-specific balance confidence scale for predicting future falls in Indian older adults.
Moiz, Jamal Ali; Bansal, Vishal; Noohu, Majumi M; Gaur, Shailendra Nath; Hussain, Mohammad Ejaz; Anwer, Shahnawaz; Alghadir, Ahmad
2017-01-01
Activities-specific balance confidence (ABC) scale is a subjective measure of confidence in performing various ambulatory activities without falling or experiencing a sense of unsteadiness. This study aimed to examine the ability of the Hindi version of the ABC scale (ABC-H scale) to discriminate between fallers and non-fallers and to examine its predictive validity for prospective falls. This was a prospective cohort study. A total of 125 community-dwelling older adults (88 were men) completed the ABC-H scale. The occurrence of falls over the follow-up period of 12 months was recorded. Discriminative validity was analyzed by comparing the total ABC-H scale scores between the faller and non-faller groups. A receiver operating characteristic curve analysis and a logistic regression analysis were used to examine the predictive accuracy of the ABC-H scale. The mean ABC-H scale score of the faller group was significantly lower than that of the non-faller group (52.6±8.1 vs 73.1±12.2; P < 0.001). The optimal cutoff value for distinguishing faller and non-faller adults was ≤58.13. The sensitivity, specificity, area under the curve, and positive and negative likelihood ratios of the cutoff score were 86.3%, 87.3%, 0.91 (P < 0.001), 6.84, and 0.16, respectively. The percentage test accuracy and false-positive and false-negative rates were 86.87%, 12.2%, and 13.6%, respectively. A dichotomized total ABC-H scale score of ≤58.13% (adjusted odds ratio = 0.032, 95% confidence interval = 0.004-0.25, P = 0.001) was significantly associated with future falls. The ABC-H scores were significantly and independently associated with future falls in the community-dwelling Indian older adults. The ability of the ABC-H scale to predict future falls was adequate, with high sensitivity and specificity values.
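A cutoff like ≤58.13 is typically chosen from the ROC analysis by maximizing the Youden index J = sensitivity + specificity - 1. A sketch for a score where low values indicate the positive class (here, fallers have lower balance confidence); data and names are illustrative:

```python
def best_cutoff(scores_pos, scores_neg):
    """Youden-optimal cutoff for a score where LOW values indicate the
    positive class: a case is called positive when score <= cutoff.
    Returns (cutoff, J)."""
    candidates = sorted(set(scores_pos) | set(scores_neg))
    best, best_j = None, -1.0
    for c in candidates:
        sens = sum(s <= c for s in scores_pos) / len(scores_pos)
        spec = sum(s > c for s in scores_neg) / len(scores_neg)
        j = sens + spec - 1  # Youden index
        if j > best_j:
            best, best_j = c, j
    return best, best_j

# Toy ABC-like scores: fallers (positive class) score low.
cut, j = best_cutoff([50.0, 55.0, 58.0], [60.0, 70.0, 80.0])
```

In this perfectly separated toy example the procedure returns the highest faller score as the cutoff with J = 1; on real overlapping distributions J trades sensitivity against specificity, as in the reported 86.3%/87.3% pair.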
Adding Confidence to Knowledge
ERIC Educational Resources Information Center
Goodson, Ludwika Aniela; Slater, Don; Zubovic, Yvonne
2015-01-01
A "knowledge survey" and a formative evaluation process led to major changes in an instructor's course and teaching methods over a 5-year period. Design of the survey incorporated several innovations, including: a) using "confidence survey" rather than "knowledge survey" as the title; b) completing an instructional…
Decision Making and Confidence Given Uncertain Advice
ERIC Educational Resources Information Center
Lee, Michael D.; Dry, Matthew J.
2006-01-01
We study human decision making in a simple forced-choice task that manipulates the frequency and accuracy of available information. Empirically, we find that people make decisions consistent with the advice provided, but that their subjective confidence in their decisions shows 2 interesting properties. First, people's confidence does not depend…
Can Ultrasound Accurately Assess Ischiofemoral Space Dimensions? A Validation Study.
Finnoff, Jonathan T; Johnson, Adam C; Hollman, John H
2017-04-01
Ischiofemoral impingement is a potential cause of hip and buttock pain. It is evaluated commonly with magnetic resonance imaging (MRI). To our knowledge, no study previously has evaluated the ability of ultrasound to measure the ischiofemoral space (IFS) dimensions reliably. To determine whether ultrasound could accurately measure the IFS dimensions when compared with the gold standard imaging modality of MRI. A methods comparison study. Sports medicine center within a tertiary-care institution. A total of 5 male and 5 female asymptomatic adult subjects (age mean = 29.2 years, range = 23-35 years; body mass index mean = 23.5, range = 19.5-26.6) were recruited to participate in the study. Subjects were secured in a prone position on a MRI table with their hips in a neutral position. Their IFS dimensions were then acquired in a randomized order using diagnostic ultrasound and MRI. The main outcome measurements were the IFS dimensions acquired with ultrasound and MRI. The mean IFS dimension measured with ultrasound was 29.5 mm (standard deviation [SD] 4.99 mm, standard error mean 1.12 mm), whereas that obtained with MRI was 28.25 mm (SD 5.91 mm, standard error mean 1.32 mm). The mean difference between the ultrasound and MRI measurements was 1.25 mm, which was not statistically significant (SD 3.71 mm, standard error mean 3.71 mm, 95% confidence interval -0.49 mm to 2.98 mm, t(19) = 1.506, P = .15). The Bland-Altman analysis indicated that the 95% limits of agreement between the 2 measurements were -6.0 to 8.5 mm, indicating that there was no systematic bias between the ultrasound and MRI measurements. Our findings suggest that the IFS measurements obtained with ultrasound are very similar to those obtained with MRI. Therefore, when evaluating individuals with suspected ischiofemoral impingement, one could consider using ultrasound to measure their IFS dimensions. III. Copyright © 2017 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier
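The Bland-Altman analysis cited above reduces to the mean paired difference (the bias) plus and minus 1.96 standard deviations of the differences. A minimal sketch with made-up measurements, not the study's data:

```python
import statistics

def bland_altman_limits(method_a, method_b):
    """Bias and 95% limits of agreement (bias +/- 1.96 SD of the paired
    differences) between two measurement methods."""
    diffs = [x - y for x, y in zip(method_a, method_b)]
    bias = statistics.fmean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired readings (e.g. ultrasound vs MRI, in mm).
bias, lo, hi = bland_altman_limits([30.0, 29.0, 31.0, 28.0],
                                   [29.0, 28.0, 30.0, 30.0])
```

A bias near zero with limits of agreement that are clinically narrow is what justifies the "no systematic bias" conclusion; limits as wide as -6.0 to 8.5 mm still need a clinical judgment about whether that spread is acceptable.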
Predicting resident confidence to lead family meetings.
Butler, D J; Holloway, R L; Gottlieb, M
1998-05-01
Family physicians frequently encounter patients' family members in family meetings regarding health care. Although residents are expected to learn how to interview families, no quantitative studies have examined variables associated with building residents' confidence in their ability to lead family meetings. The current study sought to clarify the relationship between a number of training, participant, and situational components and resident confidence. All family practice residents (n = 90) in a five-residency program system were sent a survey that examined their experience in and perceived competence to conduct family meetings. Responses were analyzed with a hierarchical regression analysis and an ex post facto univariate analysis. Residents with higher perceived confidence in their ability to run a family meeting were male, had specific training for leading family meetings, had participated in and initiated more family meetings, perceived stronger family physician faculty support, and had more family systems training than lower-confidence residents. The results highlight the experiential, curricular, and environmental variables that are associated with building resident confidence to lead family meetings. Residents may benefit from early exposure to the skills needed for family meetings and from reinforcement of these skills through observations of skilled practitioners, the expectation that they will initiate meetings, and the opportunity to debrief meetings with supportive faculty. Family meeting curricula should include conflict management skills and incorporate input from other specialists and hospital personnel who meet with families.
Neural basis for recognition confidence in younger and older adults.
Chua, Elizabeth F; Schacter, Daniel L; Sperling, Reisa A
2009-03-01
Although several studies have examined the neural basis for age-related changes in objective memory performance, less is known about how the process of memory monitoring changes with aging. The authors used functional magnetic resonance imaging to examine retrospective confidence in memory performance in aging. During low confidence, both younger and older adults showed behavioral evidence that they were guessing during recognition and that they were aware they were guessing when making confidence judgments. Similarly, both younger and older adults showed increased neural activity during low- compared to high-confidence responses in the lateral prefrontal cortex, anterior cingulate cortex, and left intraparietal sulcus. In contrast, older adults showed more high-confidence errors than younger adults. Younger adults showed greater activity for high compared to low confidence in medial temporal lobe structures, but older adults did not show this pattern. Taken together, these findings may suggest that impairments in the confidence-accuracy relationship for memory in older adults, which are often driven by high-confidence errors, may be primarily related to altered neural signals associated with greater activity for high-confidence responses.
Raising Confident, Competent Daughters: Strategies for Parents.
ERIC Educational Resources Information Center
Ransome, Whitney, Ed.; And Others
This booklet contains five essays designed to help parents raise confident, competent daughters. They focus on ways that parents can help their preadolescent and adolescent daughters: (1) speak up in class, articulate their thoughts, and speak with self-confidence in various academic and social situations; (2) develop an interest and aptitude for…
Confidence Wagering during Mathematics and Science Testing
ERIC Educational Resources Information Center
Jack, Brady Michael; Liu, Chia-Ju; Chiu, Hoan-Lin; Shymansky, James A.
2009-01-01
This proposal presents the results of a case study involving five 8th grade Taiwanese classes, two mathematics and three science classes. These classes used a new method of testing called confidence wagering. This paper advocates the position that confidence wagering can predict the accuracy of a student's test answer selection during…
Self-confidence of anglers in identification of freshwater sport fish
Chizinski, C.J.; Martin, D. R.; Pope, Kevin L.
2014-01-01
Although several studies have focused on how well anglers identify species using replicas and pictures, there has been no study assessing the confidence that can be placed in anglers' ability to identify recreationally important fish. Understanding factors associated with low self-confidence will be useful in tailoring education programmes to improve self-confidence in identifying common species. The purposes of this assessment were to quantify the confidence of recreational anglers to identify 13 commonly encountered warm water fish species and to relate self-confidence to species availability and angler experience. Significant variation was observed in anglers' self-confidence among species and levels of self-declared skill, with greater confidence associated with greater skill and with greater exposure. This study of angler self-confidence strongly highlights the need for educational programmes that target lower skilled anglers and the importance of teaching all anglers about less common species, regardless of skill level.
Extended-Interval Gentamicin Dosing in Achieving Therapeutic Concentrations in Malaysian Neonates
Tan, Sin Li; Wan, Angeline SL
2015-01-01
OBJECTIVE: To evaluate the usefulness of extended-interval gentamicin dosing practiced in neonatal intensive care unit (NICU) and special care nursery (SCN) of a Malaysian hospital. METHODS: Cross-sectional observational study with pharmacokinetic analysis of all patients aged ≤28 days who received gentamicin treatment in NICU/SCN. Subjects received dosing according to a regimen modified from an Australian-based pediatric guideline. During a study period of 3 months, subjects were evaluated for gestational age, body weight, serum creatinine concentration, gentamicin dose/interval, serum peak and trough concentrations, and pharmacokinetic parameters. Descriptive percentages were used to determine the overall dosing accuracy, while analysis of variance (ANOVA) was conducted to compare the accuracy rates among different gestational ages. Pharmacokinetic profile among different gestational age and body weight groups were compared by using ANOVA. RESULTS: Of the 113 subjects included, 82.3% (n = 93) achieved therapeutic concentrations at the first drug-monitoring assessment. There was no significant difference found between the percentage of term neonates who achieved therapeutic concentrations and the premature group (87.1% vs. 74.4%), p = 0.085. A total of 112 subjects (99.1%) achieved desired therapeutic trough concentration of <2 mg/L. Mean gentamicin peak concentration was 8.52 mg/L (95% confidence interval [CI], 8.13–8.90 mg/L) and trough concentration was 0.54 mg/L (95% CI, 0.48–0.60 mg/L). Mean volume of distribution, half-life, and elimination rate were 0.65 L/kg (95% CI, 0.62–0.68 L/kg), 6.96 hours (95% CI, 6.52–7.40 hours), and 0.11 hour−1 (95% CI, 0.10–0.11 hour−1), respectively. CONCLUSION: The large percentage of subjects attaining the therapeutic range with extended-interval gentamicin dosing suggests that this regimen is appropriate and can be safely used among Malaysian neonates. PMID:25964729
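The reported half-life, elimination rate, and volume of distribution follow from standard one-compartment, first-order kinetics: k = ln(C1/C2)/Δt, t1/2 = ln 2 / k, and, as an IV-bolus approximation, Vd = dose/C0. The sketch below uses illustrative numbers, not the study's data, and is not necessarily the authors' exact computation:

```python
import math

def elimination_rate(c1, c2, dt_hours):
    """First-order elimination rate constant (per hour) from two
    post-distribution concentrations measured dt_hours apart."""
    return math.log(c1 / c2) / dt_hours

def half_life(k):
    """Elimination half-life from the rate constant: t1/2 = ln 2 / k."""
    return math.log(2) / k

def volume_of_distribution(dose_per_kg, c0):
    """Apparent Vd (L/kg), IV-bolus approximation: Vd = dose / C0."""
    return dose_per_kg / c0

# Illustrative: a drug falling from 10 to 5 mg/L over 7 h has, by
# definition, a 7-hour half-life.
k = elimination_rate(10.0, 5.0, 7.0)
```

Plugging in the study's mean elimination rate (0.11 per hour) gives a half-life of about 6.3 h, in the same range as the reported 6.96 h mean (the two were averaged separately across subjects, so they need not coincide exactly).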
Self-Validating Thermocouples for Assured Measurement Confidence and Extended Useful Life
NASA Astrophysics Data System (ADS)
Elliott, C. J.; Pearce, J. V.; Machin, G.; Schwarz, C.; Lindner, R.
2012-07-01
Accurate measurement of temperatures above 1500 °C poses unique and challenging requirements in space. Tungsten-rhenium (W-Re) thermocouples, which are commonly used, quickly exhibit significant thermoelectric inhomogeneity and drift. To address this issue, the National Physical Laboratory, in cooperation with ESA/ESTEC, is developing an innovative method of validating the performance of high-temperature thermocouples in situ. The results of measurements using eutectic metal-carbon fixed-point cells containing Co-C (~1324 °C), Pt-C (~1738 °C), Ru-C (~1953 °C), and Ir-C (~2292 °C) ingots incorporated onto the thermocouple in use are presented. By monitoring the thermoelectric signal each time the thermal environment passes through the melting temperature of an ingot, the user observes the degree of drift. This assures measurement confidence and extends the useful life of the thermocouple, as the drift may be corrected for if necessary. This approach opens the possibility of improved temperature measurement for ESA/ESTEC research applications and industrial use.
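Drift monitoring of this kind hinges on spotting the melt plateau, where the thermoelectric signal is nearly constant. A toy sketch of plateau detection (the function, thresholds, and synthetic trace are our assumptions, not taken from the paper):

```python
import numpy as np

def find_melt_plateau(t, emf, slope_tol=1e-3, min_len=20):
    """Locate the melting plateau in an EMF trace (toy sketch).

    During a fixed-point melt the EMF is nearly constant, so we take the
    longest run of samples whose local slope stays below slope_tol and
    return the mean EMF over that run (None if no run is long enough).
    """
    slope = np.abs(np.gradient(emf, t))
    flat = slope < slope_tol
    best_start, best_len, start = 0, 0, None
    for i, f in enumerate(flat):
        if f and start is None:
            start = i
        elif not f and start is not None:
            if i - start > best_len:
                best_start, best_len = start, i - start
            start = None
    if start is not None and flat.size - start > best_len:
        best_start, best_len = start, flat.size - start
    if best_len < min_len:
        return None
    return float(np.mean(emf[best_start:best_start + best_len]))

# Synthetic trace: ramp up, hold at the melt value 1.0, ramp again
t = np.arange(100.0)
emf = np.concatenate([np.linspace(0.0, 1.0, 40),
                      np.full(30, 1.0),
                      np.linspace(1.0, 2.0, 30)])
plateau = find_melt_plateau(t, emf)
```

Comparing the plateau value observed on successive melts against the known fixed-point temperature is what quantifies the drift.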
Semmler, Carolyn; Brewer, Neil; Wells, Gary L
2004-04-01
Two experiments investigated new dimensions of the effect of confirming feedback on eyewitness identification confidence using target-absent and target-present lineups and (previously unused) unbiased witness instructions (i.e., "offender not present" option highlighted). In Experiment 1, participants viewed a crime video and were later asked to try to identify the thief from an 8-person target-absent photo array. Feedback inflated witness confidence for both mistaken identifications and correct lineup rejections. With target-present lineups in Experiment 2, feedback inflated confidence for correct and mistaken identifications and lineup rejections. Although feedback had no influence on the confidence-accuracy correlation, it produced clear overconfidence. Confidence inflation varied with the confidence measure reference point (i.e., retrospective vs. current confidence) and identification response latency.
Fransen, Katrien; Decroos, Steven; Vanbeselaere, Norbert; Vande Broek, Gert; De Cuyper, Bert; Vanroy, Jari; Boen, Filip
2015-01-01
The present manuscript extends previous research on the reciprocal relation between team confidence and perceived team performance in two ways. First, we distinguished between two types of team confidence: process-oriented collective efficacy and outcome-oriented team outcome confidence. Second, we assessed both types not only before and after the game but, for the first time, also during half-time, thereby providing deeper insight into their dynamic relation with perceived team performance. Two field studies were conducted, each with 10 male soccer teams (N = 134 in Study 1; N = 125 in Study 2). Our findings provide partial support for the reciprocal relation between players' team confidence (both collective efficacy and team outcome confidence) and players' perceptions of the team's performance. Although both types of team confidence before the game were not significantly related to perceived team performance in the first half, players' team confidence during half-time was positively related to perceived team performance in the second half. Additionally, our findings consistently demonstrated a relation between perceived team performance and players' subsequent team confidence. Considering that team confidence is a dynamic process that can be affected by coaches and players, our findings open new avenues to optimise team performance.
Efficiency of time-lapse intervals and simple baits for camera surveys of wild pigs
Williams, B.L.; Holtfreter, R.W.; Ditchkoff, S.S.; Grand, J.B.
2011-01-01
Growing concerns surrounding established and expanding populations of wild pigs (Sus scrofa) have created the need for rapid and accurate surveys of these populations. We conducted surveys of a portion of the wild pig population on Fort Benning, Georgia, to determine if a longer time-lapse interval than had been previously used in surveys of wild pigs would generate similar detection results. We concurrently examined whether use of soured corn at camera sites affected the time necessary for pigs to locate a new camera site or the time pigs remained at a site. Our results suggest that a 9-min time-lapse interval generated dependable detection results for pigs and that soured corn neither attracted pigs to a site any quicker than plain, dry, whole-kernel corn, nor held them at a site longer. Maximization of time-lapse interval should decrease data and processing loads, and use of a simple, available bait should decrease cost and effort associated with more complicated baits; combination of these concepts should increase efficiency of wild pig surveys. © 2011 The Wildlife Society.
Registered nurse leadership style and confidence in delegation.
Saccomano, Scott J; Pinto-Zipp, Genevieve
2011-05-01
Leadership and confidence in delegation are two important explanatory constructs of nursing practice. The relationship between these constructs, however, is not clearly understood. To be successful in their roles as leaders, regardless of their experience, registered nurses (RNs) need to understand how best to delegate. The present study explored and described the relationship between RN leadership styles, demographic variables, and confidence in delegation in a community teaching hospital. Utilizing a cross-sectional survey design, RNs employed in one acute care hospital completed questionnaires that measured leadership style [Path-Goal Leadership Questionnaire (PGLQ)] and confidence in delegating patient care tasks [Confidence and Intent to Delegate Scale (CIDS)]. Contrary to expectations, the data did not confirm a relationship between confidence in delegating tasks to unlicensed assistive personnel (UAPs) and leadership style. Nurses who were diploma or associate degree prepared were initially less confident in delegating tasks to UAPs than RNs holding a bachelor's degree or higher. Further, after 5 years of clinical nursing experience, nurses with less educational preparation reported more confidence in delegating tasks than RNs with more educational preparation. The lack of a relationship between leadership style and confidence in delegating patient care tasks was discussed in terms of the PGLQ classification criteria and hospital unit differences. As suggested by the significant two-way interaction between educational preparation and clinical nursing experience, the change in a nurse's confidence in delegating patient care tasks to UAPs was a dynamic variable resulting from the interplay between amount of educational preparation and years of clinical nursing experience in this population of nurses. Clearly, generalizability of these findings to nurses outside the US is questionable, thus nurse managers must be familiar
Improved Margin of Error Estimates for Proportions in Business: An Educational Example
ERIC Educational Resources Information Center
Arzumanyan, George; Halcoussis, Dennis; Phillips, G. Michael
2015-01-01
This paper presents the Agresti & Coull "Adjusted Wald" method for computing confidence intervals and margins of error for common proportion estimates. The presented method is easily implementable by business students and practitioners and provides more accurate estimates of proportions particularly in extreme samples and small…
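The Agresti & Coull interval itself is short enough to state in a few lines. A sketch in Python (the function name is ours; the 8-of-10 sample is an arbitrary illustration, not from the paper):

```python
import math

def adjusted_wald_ci(successes, n, z=1.96):
    """Agresti-Coull 'adjusted Wald' confidence interval for a proportion.

    Add z^2/2 pseudo-successes and z^2 pseudo-trials, then apply the
    ordinary Wald formula to the adjusted estimate.
    """
    n_adj = n + z**2
    p_adj = (successes + z**2 / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# An extreme small sample where the plain Wald interval misbehaves
lo, hi = adjusted_wald_ci(8, 10)
```

For comparison, the plain Wald interval at 8/10 is 0.80 ± 1.96·sqrt(0.8·0.2/10) ≈ (0.55, 1.05), which overshoots 1; the adjusted version stays inside [0, 1].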
Detecting Disease in Radiographs with Intuitive Confidence
2015-01-01
This paper argues in favor of a specific type of confidence for use in computer-aided diagnosis and disease classification, namely, sine/cosine values of angles represented by points on the unit circle. The paper shows how this confidence is motivated by Chinese medicine and how sine/cosine values are directly related with the two forces Yin and Yang. The angle for which sine and cosine are equal (45°) represents the state of equilibrium between Yin and Yang, which is a state of nonduality that indicates neither normality nor abnormality in terms of disease classification. The paper claims that the proposed confidence is intuitive and can be readily understood by physicians. The paper underpins this thesis with theoretical results in neural signal processing, stating that a sine/cosine relationship between the actual input signal and the perceived (learned) input is key to neural learning processes. As a practical example, the paper shows how to use the proposed confidence values to highlight manifestations of tuberculosis in frontal chest X-rays. PMID:26495433
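The proposed confidence can be written directly as the sine/cosine pair of a point on the unit circle. A minimal sketch (the function name and the normal/abnormal labeling reflect our reading of the abstract):

```python
import math

def yin_yang_confidence(angle_deg):
    """Confidence pair from a point on the unit circle.

    Cosine plays the role of one force (here labeled 'normal'), sine the
    other ('abnormal'); at 45 degrees both are equal, the equilibrium
    state that indicates neither class.
    """
    theta = math.radians(angle_deg)
    return math.cos(theta), math.sin(theta)

# At the equilibrium angle both components equal sqrt(2)/2
c_normal, c_abnormal = yin_yang_confidence(45.0)
```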
Permutation-based inference for the AUC: A unified approach for continuous and discontinuous data.
Pauly, Markus; Asendorf, Thomas; Konietschke, Frank
2016-11-01
We investigate rank-based studentized permutation methods for the nonparametric Behrens-Fisher problem, that is, inference methods for the area under the ROC curve. We prove that the studentized permutation distribution of the Brunner-Munzel rank statistic is asymptotically standard normal, even under the alternative, thus incidentally providing the hitherto missing theoretical foundation for the Neubert and Brunner studentized permutation test. In particular, we not only show its consistency but also show that confidence intervals for the underlying treatment effects can be computed by inverting this permutation test. In addition, we derive permutation-based range-preserving confidence intervals. Extensive simulation studies show that the permutation-based confidence intervals maintain the preassigned coverage probability quite accurately, even for rather small sample sizes. For convenient application of the proposed methods, a freely available software package for the statistical software R has been developed. A real data example illustrates the application. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
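The effect being tested is the AUC, p = P(X < Y) + 0.5·P(X = Y). A simplified permutation-test sketch (we permute the raw effect estimate for brevity; the paper's contribution is the studentized version, which remains valid even when the two distributions differ under the null):

```python
import numpy as np

def auc_effect(x, y):
    """Relative effect p = P(X < Y) + 0.5 P(X = Y), i.e., the AUC."""
    less = np.sum(x[:, None] < y[None, :])
    ties = np.sum(x[:, None] == y[None, :])
    return (less + 0.5 * ties) / (x.size * y.size)

def permutation_pvalue(x, y, n_perm=2000, seed=0):
    """Two-sided permutation p-value for H0: p = 1/2 (plain, not studentized).

    Pools the samples, reshuffles group labels, and counts how often the
    permuted effect deviates from 1/2 at least as much as the observed one.
    """
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    t_obs = abs(auc_effect(x, y) - 0.5)
    hits = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        hits += abs(auc_effect(perm[:x.size], perm[x.size:]) - 0.5) >= t_obs
    return (hits + 1) / (n_perm + 1)
```

For the studentized statistic itself, scipy.stats.brunnermunzel provides the (non-permutation) Brunner-Munzel test.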
High confidence in falsely recognizing prototypical faces.
Sampaio, Cristina; Reinke, Victoria; Mathews, Jeffrey; Swart, Alexandra; Wallinger, Stephen
2018-06-01
We applied a metacognitive approach to investigate confidence in the recognition of prototypical faces. Participants were presented with sets of faces constructed digitally as deviations from prototype/base faces. Participants were then tested with a simple recognition task (Experiment 1) or a multiple-choice task (Experiment 2) for old and new items plus new prototypes, and they showed a high rate of confident false alarms to the prototypes. The confidence-accuracy relationship in this face recognition paradigm was positive for standard items but negative for the prototypes; thus, it was contingent on the nature of the items used. The data have implications for lineups that employ match-to-suspect strategies.
Confidence mediates the sex difference in mental rotation performance.
Estes, Zachary; Felker, Sydney
2012-06-01
On tasks that require the mental rotation of 3-dimensional figures, males typically exhibit higher accuracy than females. Using the most common measure of mental rotation (i.e., the Mental Rotations Test), we investigated whether individual variability in confidence mediates this sex difference in mental rotation performance. In each of four experiments, the sex difference was reliably elicited and eliminated by controlling or manipulating participants' confidence. Specifically, confidence predicted performance within and between sexes (Experiment 1), rendering confidence irrelevant to the task reliably eliminated the sex difference in performance (Experiments 2 and 3), and manipulating confidence significantly affected performance (Experiment 4). Thus, confidence mediates the sex difference in mental rotation performance and hence the sex difference appears to be a difference of performance rather than ability. Results are discussed in relation to other potential mediators and mechanisms, such as gender roles, sex stereotypes, spatial experience, rotation strategies, working memory, and spatial attention.
Zhang, Junqian; Rosen, Alex; Orenstein, Lauren; Van Voorhees, Abby; Miller, Christopher J; Sobanko, Joseph F; Shin, Thuzar M; Etzkorn, Jeremy R
2016-06-01
Biopsy site identification is critical to avoid wrong-site surgery and may impact patient-centered outcomes. We sought to evaluate risk factors for biopsy site misidentification, postponement of surgery, and patient confidence in surgical site selection and to assess the near-miss rate for wrong-site surgeries. This was a prospective observational cohort study. Near-miss wrong-site surgeries were detected and averted in 1.3% (3 of 239) of patients with biopsy site photographs. Risk factors for biopsy site misidentification by patients were 6 weeks or longer between biopsy and surgery (odds ratio [OR] 2.19, 95% confidence interval [CI] 1.12-4.27; P = .028) and patient inability to see biopsy site (OR 3.95, 95% CI 1.50-10.37; P = .002). Risk factors for physician misidentification were 6 or more weeks between biopsy and surgery (OR 3.68, 95% CI 1.40-9.66; P = .007) and biopsy specimens from multiple sites (OR 4.39, 95% CI 1.67-11.54; P = .003). Postponement of surgery was associated with absence of a biopsy site photograph (OR 12.5, 95% CI 2.79-62.21; P < .001). Patient confidence in surgical site identification was associated with the presence of a biopsy site photograph (OR 5.48, 95% CI 1.96-15.30; P = .001). This was a single-site observational study. Biopsy site photography is associated with reduced rates of postponed surgeries and improved rates of patient confidence in surgical site selection. Risk factors for biopsy site misidentification should be considered before definitive treatment. Copyright © 2015 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.
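The risk factors above are reported as odds ratios with Wald confidence intervals. The standard computation from a 2x2 table can be sketched as follows (the counts are invented for illustration and are not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Wald 95% CI from a 2x2 table.

    a: exposed with outcome      b: exposed without outcome
    c: unexposed with outcome    d: unexposed without outcome
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) from the cell counts
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Invented counts for illustration (not the study's data)
or_, lo, hi = odds_ratio_ci(10, 90, 5, 95)
```

A CI that excludes 1 (as for the photograph-related factors in the abstract) indicates a statistically significant association at the chosen level.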
Aging and Confidence Judgments in Item Recognition
ERIC Educational Resources Information Center
Voskuilen, Chelsea; Ratcliff, Roger; McKoon, Gail
2018-01-01
We examined the effects of aging on performance in an item-recognition experiment with confidence judgments. A model for confidence judgments and response time (RTs; Ratcliff & Starns, 2013) was used to fit a large amount of data from a new sample of older adults and a previously reported sample of younger adults. This model of confidence…
Thoresen, Stein I; Arnemo, Jon M; Liberg, Olof
2009-06-01
Scandinavian free-ranging wolves (Canis lupus) are endangered, so laboratory data for assessing their health status are increasingly important. Although wolves have been studied for decades, most biological information comes from captive animals. The objective of the present study was to establish reference intervals for 30 clinical chemical and 8 hematologic analytes in Scandinavian free-ranging wolves. All wolves were tracked and chemically immobilized from a helicopter before examination and blood sampling during the winters of 7 consecutive years (1998-2004). Seventy-nine blood samples were collected from 57 gray wolves, including 24 juveniles (24 samples), 17 adult females (25 samples), and 16 adult males (30 samples). Whole blood and serum samples were stored at refrigeration temperature for 1-3 days before hematologic analyses and for 1-5 days before serum biochemical analyses. Reference intervals were calculated as 95% confidence intervals, except for juveniles, where the minimum and maximum values were used. Significant differences were observed between adult and juvenile wolves for RBC parameters, alkaline phosphatase and amylase activities, and total protein, albumin, gamma-globulins, cholesterol, creatinine, calcium, chloride, magnesium, phosphate, and sodium concentrations. Compared with published reference values for captive wolves, the reference intervals for free-ranging wolves reflected exercise activity associated with capture (higher creatine kinase activity, higher glucose concentration) and differences in nutritional status (higher urea concentration).
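A common nonparametric convention for such reference intervals is the central 95% range of the observed values. A sketch of that computation (the percentile approach is one standard method; the study's exact calculation may differ):

```python
import numpy as np

def reference_interval(values, coverage=0.95):
    """Nonparametric central 95% reference interval (2.5th-97.5th percentile).

    For small groups (like the 24 juvenile samples in the study) the
    observed minimum and maximum may be reported instead, because the
    tail percentiles cannot be estimated reliably.
    """
    values = np.asarray(values, dtype=float)
    tail = (1.0 - coverage) / 2.0 * 100.0
    lo, hi = np.percentile(values, [tail, 100.0 - tail])
    return lo, hi

# Illustrative: 100 evenly spaced analyte values
lo, hi = reference_interval(np.arange(1, 101))
```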
Assessing Undergraduate Students' Conceptual Understanding and Confidence of Electromagnetics
ERIC Educational Resources Information Center
Leppavirta, Johanna
2012-01-01
The study examines how students' conceptual understanding changes from high confidence with incorrect conceptions to high confidence with correct conceptions when reasoning about electromagnetics. The Conceptual Survey of Electricity and Magnetism test is weighted with students' self-rated confidence on each item in order to infer how strongly…
Accurate registration of temporal CT images for pulmonary nodules detection
NASA Astrophysics Data System (ADS)
Yan, Jichao; Jiang, Luan; Li, Qiang
2017-02-01
Interpretation of temporal CT images can help radiologists detect subtle interval changes across sequential examinations. The purpose of this study was to develop a fully automated scheme for accurate registration of temporal CT images for pulmonary nodule detection. Our method consisted of three major registration steps. First, an affine transformation was applied to the segmented lung region to obtain globally coarse-registered images. Second, B-spline-based free-form deformation (FFD) was used to refine the coarse registration. Third, the Demons algorithm was performed to align feature points extracted from the images registered in the second step with the reference images. Our database consisted of 91 temporal CT cases obtained from Beijing 301 Hospital and Shanghai Changzheng Hospital. Preliminary results showed that approximately 96.7% of cases achieved accurate registration based on subjective observation. Subtraction of the rigidly and non-rigidly registered images from the reference images effectively removed normal structures (e.g., blood vessels) while retaining abnormalities (e.g., pulmonary nodules). This will be useful for lung cancer screening in our future study.
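The third step relies on the Demons algorithm, whose core update can be sketched in 2-D (a toy, heavily simplified version of Thirion's demons; the array shapes, smoothing width, and single-step form are our assumptions, not the authors' implementation):

```python
import numpy as np
from scipy import ndimage

def demons_step(fixed, moving, disp, sigma=1.0):
    """One iteration of a simplified 2-D Thirion's demons update.

    fixed, moving : 2-D float images
    disp          : displacement field, shape (2, H, W), applied to `moving`
    """
    grid = np.indices(fixed.shape, dtype=float)
    # Warp the moving image by the current displacement field
    warped = ndimage.map_coordinates(moving, grid + disp, order=1, mode="nearest")
    diff = fixed - warped
    gy, gx = np.gradient(fixed)
    denom = gy**2 + gx**2 + diff**2
    denom[denom == 0] = 1.0              # avoid division by zero in flat areas
    update = diff * np.stack([gy, gx]) / denom
    disp = disp + update
    # Gaussian smoothing regularizes the field, as in the classic scheme
    return np.array([ndimage.gaussian_filter(d, sigma) for d in disp])

# With identical images the driving force is zero and the field stays zero
img = np.zeros((32, 32))
img[10:20, 10:20] = 1.0
out = demons_step(img, img, np.zeros((2, 32, 32)))
```

In practice the update is iterated until convergence, typically within a multi-resolution pyramid, on 3-D volumes rather than this 2-D toy.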