Science.gov

Sample records for 95-percent confidence interval

  1. Explorations in Statistics: Confidence Intervals

    ERIC Educational Resources Information Center

    Curran-Everett, Douglas

    2009-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This third installment of "Explorations in Statistics" investigates confidence intervals. A confidence interval is a range that we expect, with some level of confidence, to include the true value of a population parameter…
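The definition above can be made concrete with a short sketch. This is a minimal, illustrative Python example (hypothetical data, normal-approximation interval) of a 95% confidence interval for a population mean:

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def mean_ci(sample, confidence=0.95):
    """Normal-approximation CI for the population mean."""
    n = len(sample)
    m, s = mean(sample), stdev(sample)
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # ~1.96 for 95%
    half = z * s / sqrt(n)
    return m - half, m + half

lo, hi = mean_ci([4.8, 5.1, 5.0, 4.9, 5.3, 5.2, 4.7, 5.0, 5.1, 4.9])
# (lo, hi) is the range we expect, with 95% confidence, to include the true mean
```

For a sample this small, a t critical value would ordinarily replace z; the normal quantile is used here only to keep the sketch dependency-free.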

  2. Effect Sizes, Confidence Intervals, and Confidence Intervals for Effect Sizes

    ERIC Educational Resources Information Center

    Thompson, Bruce

    2007-01-01

    The present article provides a primer on (a) effect sizes, (b) confidence intervals, and (c) confidence intervals for effect sizes. Additionally, various admonitions for reformed statistical practice are presented. For example, a very important implication of the realization that there are dozens of effect size statistics is that "authors must…

  3. Confidence Trick: The Interpretation of Confidence Intervals

    ERIC Educational Resources Information Center

    Foster, Colin

    2014-01-01

    The frequent misinterpretation of the nature of confidence intervals by students has been well documented. This article examines the problem as an aspect of the learning of mathematical definitions and considers the tension between parroting mathematically rigorous, but essentially uninternalized, statements on the one hand and expressing…

  4. Teaching Confidence Intervals Using Simulation

    ERIC Educational Resources Information Center

    Hagtvedt, Reidar; Jones, Gregory Todd; Jones, Kari

    2008-01-01

    Confidence intervals are difficult to teach, in part because most students appear to believe they understand how to interpret them intuitively. They rarely do. To help them abandon their misconception and achieve understanding, we have developed a simulation tool that encourages experimentation with multiple confidence intervals derived from the…
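The kind of experiment such a simulation tool encourages can be sketched directly (parameters are illustrative): draw many samples, compute a 95% interval from each, and count how often the intervals actually cover the true mean.

```python
import random
from math import sqrt
from statistics import NormalDist, mean, stdev

random.seed(0)
z = NormalDist().inv_cdf(0.975)
true_mu, sigma, n, trials = 10.0, 2.0, 50, 2000

covered = 0
for _ in range(trials):
    sample = [random.gauss(true_mu, sigma) for _ in range(n)]
    m, s = mean(sample), stdev(sample)
    half = z * s / sqrt(n)
    covered += (m - half <= true_mu <= m + half)

coverage = covered / trials  # close to 0.95: about 95% of intervals cover mu
```

The point the simulation makes is that "95%" is a property of the procedure over repeated sampling, not of any single computed interval.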

  5. Minimax confidence intervals in geomagnetism

    NASA Technical Reports Server (NTRS)

    Stark, Philip B.

    1992-01-01

    The present paper uses theory of Donoho (1989) to find lower bounds on the lengths of optimally short fixed-length confidence intervals (minimax confidence intervals) for Gauss coefficients of the field of degree 1-12 using the heat flow constraint. The bounds on optimal minimax intervals are about 40 percent shorter than Backus' intervals: no procedure for producing fixed-length confidence intervals, linear or nonlinear, can give intervals shorter than about 60 percent the length of Backus' in this problem. While both methods rigorously account for the fact that core field models are infinite-dimensional, the application of the techniques to the geomagnetic problem involves approximations and counterfactual assumptions about the data errors, and so these results are likely to be extremely optimistic estimates of the actual uncertainty in Gauss coefficients.

  6. Constructing Confidence Intervals for Qtl Location

    PubMed Central

    Mangin, B.; Goffinet, B.; Rebai, A.

    1994-01-01

    We describe a method for constructing the confidence interval of the QTL location parameter. This method is developed in the local asymptotic framework, leading to a linear model at each position of the putative QTL. The idea is to construct a likelihood ratio test, using statistics whose asymptotic distribution does not depend on the nuisance parameters and in particular on the effect of the QTL. We show theoretical properties of the confidence interval built with this test, and compare it with the classical confidence interval using simulations. We show in particular, that our confidence interval has the correct probability of containing the true map location of the QTL, for almost all QTLs, whereas the classical confidence interval can be very biased for QTLs having small effect. PMID:7896108

  7. Sampling Theory and Confidence Intervals for Effect Sizes: Using ESCI To Illustrate "Bouncing" Confidence Intervals.

    ERIC Educational Resources Information Center

    Du, Yunfei

    This paper discusses the impact of sampling error on the construction of confidence intervals around effect sizes. Sampling error affects the location and precision of confidence intervals. Meta-analytic resampling demonstrates that confidence intervals can haphazardly bounce around the true population parameter. Special software with graphical…

  8. Efficient Computation Of Confidence Intervals Of Parameters

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.

    1992-01-01

    Study focuses on obtaining efficient algorithm for estimation of confidence intervals of ML estimates. Four algorithms selected to solve associated constrained optimization problem. Hybrid algorithms, following search and gradient approaches, prove best.

  9. Computation of confidence intervals for Poisson processes

    NASA Astrophysics Data System (ADS)

    Aguilar-Saavedra, J. A.

    2000-07-01

    We present an algorithm which allows a fast numerical computation of Feldman-Cousins confidence intervals for Poisson processes, even when the number of background events is relatively large. This algorithm incorporates an appropriate treatment of the singularities that arise as a consequence of the discreteness of the variable.
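The Feldman-Cousins construction itself is involved; as a simpler point of reference, the classical exact (Garwood) central interval for a Poisson mean can be computed with nothing but bisection on the Poisson CDF. This is a sketch of that classical baseline, not of the Feldman-Cousins ordering, and it ignores background:

```python
from math import exp

def poisson_cdf(k, mu):
    """P(X <= k) for X ~ Poisson(mu), summed term by term."""
    term = total = exp(-mu)
    for i in range(1, k + 1):
        term *= mu / i
        total += term
    return total

def _solve(pred, lo, hi, iters=100):
    """Bisect for the point where a monotone predicate flips True -> False."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if pred(mid) else (lo, mid)
    return (lo + hi) / 2

def poisson_exact_ci(k, confidence=0.95):
    """Garwood central interval: equal tail probabilities of (1-confidence)/2."""
    a = 1 - confidence
    top = 10 * k + 20  # generous upper bracket for the search
    upper = _solve(lambda mu: poisson_cdf(k, mu) > a / 2, 0.0, top)
    lower = 0.0 if k == 0 else _solve(
        lambda mu: poisson_cdf(k - 1, mu) >= 1 - a / 2, 0.0, top)
    return lower, upper
```

For k = 2 observed events this gives roughly (0.24, 7.22) at 95%, matching the familiar chi-square form of the exact limits.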

  10. Coefficient Alpha Bootstrap Confidence Interval under Nonnormality

    ERIC Educational Resources Information Center

    Padilla, Miguel A.; Divers, Jasmin; Newton, Matthew

    2012-01-01

    Three different bootstrap methods for estimating confidence intervals (CIs) for coefficient alpha were investigated. In addition, the bootstrap methods were compared with the most promising coefficient alpha CI estimation methods reported in the literature. The CI methods were assessed through a Monte Carlo simulation utilizing conditions…
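One of the methods typically investigated in such studies, the percentile bootstrap, is easy to sketch for coefficient alpha. The data below are synthetic and the helper names are hypothetical:

```python
import random
from statistics import variance

def cronbach_alpha(rows):
    """rows: one list of k item scores per respondent."""
    k = len(rows[0])
    item_vars = sum(variance(col) for col in zip(*rows))
    total_var = variance([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_vars / total_var)

def percentile_bootstrap_ci(rows, stat, b=2000, confidence=0.95, seed=1):
    """Resample respondents with replacement; take empirical percentiles."""
    rng = random.Random(seed)
    n = len(rows)
    reps = sorted(stat([rng.choice(rows) for _ in range(n)]) for _ in range(b))
    a = 1 - confidence
    return reps[int(b * a / 2)], reps[int(b * (1 - a / 2)) - 1]

random.seed(0)  # synthetic 4-item scale: shared factor plus item noise
rows = []
for _ in range(60):
    base = random.gauss(0, 1)
    rows.append([base + random.gauss(0, 0.7) for _ in range(4)])

a_hat = cronbach_alpha(rows)
lo, hi = percentile_bootstrap_ci(rows, cronbach_alpha)
```

Resampling whole respondents (rows) rather than individual scores preserves the between-item correlation structure that alpha measures.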

  11. Coefficient Omega Bootstrap Confidence Intervals: Nonnormal Distributions

    ERIC Educational Resources Information Center

    Padilla, Miguel A.; Divers, Jasmin

    2013-01-01

    The performance of the normal theory bootstrap (NTB), the percentile bootstrap (PB), and the bias-corrected and accelerated (BCa) bootstrap confidence intervals (CIs) for coefficient omega was assessed through a Monte Carlo simulation under conditions not previously investigated. Of particular interests were nonnormal Likert-type and binary items.…

  12. Toward Using Confidence Intervals to Compare Correlations

    ERIC Educational Resources Information Center

    Zou, Guang Yong

    2007-01-01

    Confidence intervals are widely accepted as a preferred way to present study results. They encompass significance tests and provide an estimate of the magnitude of the effect. However, comparisons of correlations still rely heavily on significance testing. The persistence of this practice is caused primarily by the lack of simple yet accurate…

  13. Efficient computation of parameter confidence intervals

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.

    1987-01-01

    An important step in system identification of aircraft is the estimation of stability and control derivatives from flight data along with an assessment of parameter accuracy. When the maximum likelihood estimation technique is used, parameter accuracy is commonly assessed by the Cramer-Rao lower bound. It is known, however, that in some cases the lower bound can be substantially different from the parameter variance. Under these circumstances the Cramer-Rao bounds may be misleading as an accuracy measure. This paper discusses the confidence interval estimation problem based on likelihood ratios, which offers a more general estimate of the error bounds. Four approaches are considered for computing confidence intervals of maximum likelihood parameter estimates. Each approach is applied to real flight data and compared.

  14. A primer on confidence intervals in psychopharmacology.

    PubMed

    Andrade, Chittaranjan

    2015-02-01

    Research papers and research summaries frequently present results in the form of data accompanied by 95% confidence intervals (CIs). Not all students and clinicians know how to interpret CIs. This article provides a nontechnical, nonmathematical discussion on how to understand and glean information from CIs; all explanations are accompanied by simple examples. A statistically accurate explanation about CIs is also provided. CIs are differentiated from standard deviations, standard errors, and confidence levels. The interpretation of narrow and wide CIs is discussed. Factors that influence the width of a CI are listed. Explanations are provided for how CIs can be used to assess statistical significance. The significance of overlapping and nonoverlapping CIs is considered. It is concluded that CIs are far more informative than, say, mere P values when drawing conclusions about a result.

  15. Generalized Confidence Intervals and Fiducial Intervals for Some Epidemiological Measures

    PubMed Central

    Bebu, Ionut; Luta, George; Mathew, Thomas; Agan, Brian K.

    2016-01-01

    For binary outcome data from epidemiological studies, this article investigates the interval estimation of several measures of interest in the absence or presence of categorical covariates. When covariates are present, the logistic regression model as well as the log-binomial model are investigated. The measures considered include the common odds ratio (OR) from several studies, the number needed to treat (NNT), and the prevalence ratio. For each parameter, confidence intervals are constructed using the concepts of generalized pivotal quantities and fiducial quantities. Numerical results show that the confidence intervals so obtained exhibit satisfactory performance in terms of maintaining the coverage probabilities even when the sample sizes are not large. An appealing feature of the proposed solutions is that they are not based on maximization of the likelihood, and hence are free from convergence issues associated with the numerical calculation of the maximum likelihood estimators, especially in the context of the log-binomial model. The results are illustrated with a number of examples. The overall conclusion is that the proposed methodologies based on generalized pivotal quantities and fiducial quantities provide an accurate and unified approach for the interval estimation of the various epidemiological measures in the context of binary outcome data with or without covariates. PMID:27322305

  16. Confidence intervals for ATR performance metrics

    NASA Astrophysics Data System (ADS)

    Ross, Timothy D.

    2001-08-01

    This paper describes confidence interval (CI) estimators (CIEs) for the metrics used to assess sensor exploitation algorithm (or ATR) performance. For the discrete distributions, small sample sizes and extreme outcomes encountered within ATR testing, the commonly used CIEs have limited accuracy. This paper makes available CIEs that are accurate over all conditions of interest to the ATR community. The approach is to search for CIs using an integration of the Bayesian posterior (IBP) to measure alpha (chance of the CI not containing the true value). The CIEs provided include proportion estimates based on Binomial distributions and rate estimates based on Poisson distributions. One- or two-sided CIs may be selected. For two-sided CIEs, either minimal length, balanced tail probabilities, or balanced width may be selected. The CIEs' accuracies are reported based on a Monte Carlo validated integration of the posterior probability distribution and compared to the Normal approximation and "exact" (Clopper-Pearson) methods. While the IBP methods are accurate throughout, the conventional methods may realize alphas with substantial error (up to 50%). This translates to 10 to 15% error in the CI widths or to requiring 10 to 15% more samples for a given confidence level.
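The two conventional baselines named in the abstract, the Normal approximation and the "exact" Clopper-Pearson method, can be sketched for a binomial proportion using stdlib bisection (helper names are hypothetical):

```python
from math import comb, sqrt
from statistics import NormalDist

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(k, n, confidence=0.95):
    """'Exact' central CI for a proportion, via bisection on the CDF."""
    a = 1 - confidence
    def solve(pred):
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if pred(mid) else (lo, mid)
        return (lo + hi) / 2
    lower = 0.0 if k == 0 else solve(lambda p: binom_cdf(k - 1, n, p) >= 1 - a / 2)
    upper = 1.0 if k == n else solve(lambda p: binom_cdf(k, n, p) > a / 2)
    return lower, upper

def wald(k, n, confidence=0.95):
    """Normal-approximation CI; unreliable for small n or extreme outcomes."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    p = k / n
    half = z * sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)
```

For 2 successes in 20 trials, Clopper-Pearson gives about (0.012, 0.317) while the Normal approximation gives the narrower (0.0, 0.231): exactly the small-sample, extreme-outcome regime where the abstract reports the common estimators losing accuracy.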

  17. Computing confidence intervals for standardized regression coefficients.

    PubMed

    Jones, Jeff A; Waller, Niels G

    2013-12-01

    With fixed predictors, the standard method (Cohen, Cohen, West, & Aiken, 2003, p. 86; Harris, 2001, p. 80; Hays, 1994, p. 709) for computing confidence intervals (CIs) for standardized regression coefficients fails to account for the sampling variability of the criterion standard deviation. With random predictors, this method also fails to account for the sampling variability of the predictor standard deviations. Nevertheless, under some conditions the standard method will produce CIs with accurate coverage rates. To delineate these conditions, we used a Monte Carlo simulation to compute empirical CI coverage rates in samples drawn from 36 populations with a wide range of data characteristics. We also computed the empirical CI coverage rates for 4 alternative methods that have been discussed in the literature: noncentrality interval estimation, the delta method, the percentile bootstrap, and the bias-corrected and accelerated bootstrap. Our results showed that for many data-parameter configurations--for example, sample size, predictor correlations, coefficient of determination (R²), orientation of β with respect to the eigenvectors of the predictor correlation matrix, RX--the standard method produced coverage rates that were close to their expected values. However, when population R² was large and when β approached the last eigenvector of RX, then the standard method coverage rates were frequently below the nominal rate (sometimes by a considerable amount). In these conditions, the delta method and the 2 bootstrap procedures were consistently accurate. Results using noncentrality interval estimation were inconsistent. In light of these findings, we recommend that researchers use the delta method to evaluate the sampling variability of standardized regression coefficients.

  18. Confidence Intervals Make a Difference: Effects of Showing Confidence Intervals on Inferential Reasoning

    ERIC Educational Resources Information Center

    Hoekstra, Rink; Johnson, Addie; Kiers, Henk A. L.

    2012-01-01

    The use of confidence intervals (CIs) as an addition or as an alternative to null hypothesis significance testing (NHST) has been promoted as a means to make researchers more aware of the uncertainty that is inherent in statistical inference. Little is known, however, about whether presenting results via CIs affects how readers judge the…

  19. Calculating Confidence Intervals for Effect Sizes Using Noncentral Distributions.

    ERIC Educational Resources Information Center

    Norris, Deborah

    This paper provides a brief review of the concepts of confidence intervals, effect sizes, and central and noncentral distributions. The use of confidence intervals around effect sizes is discussed. A demonstration of the Exploratory Software for Confidence Intervals (G. Cuming and S. Finch, 2001; ESCI) is given to illustrate effect size confidence…

  20. IET. Aerial view of project, 95 percent complete. Camera facing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    IET. Aerial view of project, 95 percent complete. Camera facing east. Left to right: stack, duct, mobile test cell building (TAN-624), four-rail track, dolly. Retaining wall between mobile test building and shielded control building (TAN-620) just beyond. North of control building are tank building (TAN-627) and fuel-transfer pump building (TAN-625). Guard house at upper right along exclusion fence. Construction vehicles and temporary warehouse in view near guard house. Date: June 6, 1955. INEEL negative no. 55-1462 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  1. Contrasting diversity values: statistical inferences based on overlapping confidence intervals.

    PubMed

    MacGregor-Fors, Ian; Payton, Mark E

    2013-01-01

    Ecologists often contrast diversity (species richness and abundances) using tests for comparing means or indices. However, many popular software applications do not support performing standard inferential statistics for estimates of species richness and/or density. In this study we simulated the behavior of asymmetric log-normal confidence intervals and determined an interval level that mimics statistical tests with P(α) = 0.05 when confidence intervals from two distributions do not overlap. Our results show that 84% confidence intervals robustly mimic 0.05 statistical tests for asymmetric confidence intervals, as has been demonstrated for symmetric ones in the past. Finally, we provide detailed user-guides for calculating 84% confidence intervals in two of the most robust and highly-used freeware related to diversity measurements for wildlife (i.e., EstimateS, Distance).
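The 84% rule reported above is easy to check numerically for normal (symmetric) intervals: two estimates whose two-sided z-test is just significant at 0.05 have non-overlapping 84% intervals but overlapping 95% intervals. The numbers below are illustrative:

```python
from math import sqrt
from statistics import NormalDist

def ci(mean, se, level):
    z = NormalDist().inv_cdf(0.5 + level / 2)
    return mean - z * se, mean + z * se

def overlap(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

def two_sample_p(m1, m2, se1, se2):
    z = abs(m1 - m2) / sqrt(se1**2 + se2**2)
    return 2 * (1 - NormalDist().cdf(z))

m1, m2, se = 10.0, 13.0, 1.0
p = two_sample_p(m1, m2, se, se)                          # ~0.034: significant
sep_84 = not overlap(ci(m1, se, 0.84), ci(m2, se, 0.84))  # True: agrees with test
lap_95 = overlap(ci(m1, se, 0.95), ci(m2, se, 0.95))      # True: 95% CIs mislead
```

This is why eyeballing non-overlap of 95% intervals is a conservative test: it corresponds to a significance level well below 0.05.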

  2. Contrasting Diversity Values: Statistical Inferences Based on Overlapping Confidence Intervals

    PubMed Central

    MacGregor-Fors, Ian; Payton, Mark E.

    2013-01-01

    Ecologists often contrast diversity (species richness and abundances) using tests for comparing means or indices. However, many popular software applications do not support performing standard inferential statistics for estimates of species richness and/or density. In this study we simulated the behavior of asymmetric log-normal confidence intervals and determined an interval level that mimics statistical tests with P(α) = 0.05 when confidence intervals from two distributions do not overlap. Our results show that 84% confidence intervals robustly mimic 0.05 statistical tests for asymmetric confidence intervals, as has been demonstrated for symmetric ones in the past. Finally, we provide detailed user-guides for calculating 84% confidence intervals in two of the most robust and highly-used freeware related to diversity measurements for wildlife (i.e., EstimateS, Distance). PMID:23437239

  3. Reporting Confidence Intervals and Effect Sizes: Collecting the Evidence

    ERIC Educational Resources Information Center

    Zientek, Linda Reichwein; Ozel, Z. Ebrar Yetkiner; Ozel, Serkan; Allen, Jeff

    2012-01-01

    Confidence intervals (CIs) and effect sizes are essential to encourage meta-analytic thinking and to accumulate research findings. CIs provide a range of plausible values for population parameters with a degree of confidence that the parameter is in that particular interval. CIs also give information about how precise the estimates are. Comparison…

  4. A Note on Confidence Interval Estimation and Margin of Error

    ERIC Educational Resources Information Center

    Gilliland, Dennis; Melfi, Vince

    2010-01-01

    Confidence interval estimation is a fundamental technique in statistical inference. Margin of error is used to delimit the error in estimation. Dispelling misinterpretations that teachers and students give to these terms is important. In this note, we give examples of the confusion that can arise in regard to confidence interval estimation and…

  5. Confidence Intervals for Effect Sizes: Applying Bootstrap Resampling

    ERIC Educational Resources Information Center

    Banjanovic, Erin S.; Osborne, Jason W.

    2016-01-01

    Confidence intervals for effect sizes (CIES) provide readers with an estimate of the strength of a reported statistic as well as the relative precision of the point estimate. These statistics offer more information and context than null hypothesis statistic testing. Although confidence intervals have been recommended by scholars for many years,…

  6. Sample Size for the "Z" Test and Its Confidence Interval

    ERIC Educational Resources Information Center

    Liu, Xiaofeng Steven

    2012-01-01

    The statistical power of a significance test is closely related to the length of the confidence interval (i.e. estimate precision). In the case of a "Z" test, the length of the confidence interval can be expressed as a function of the statistical power. (Contains 1 figure and 1 table.)
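The relationship can be written down directly. A sketch under the usual two-sided Z-test assumptions (known sigma; function names are illustrative):

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size(delta, sigma, alpha=0.05, power=0.80):
    """n needed for a two-sided Z test to detect effect delta."""
    nd = NormalDist()
    z_a, z_b = nd.inv_cdf(1 - alpha / 2), nd.inv_cdf(power)
    return ceil(((z_a + z_b) * sigma / delta) ** 2)

def ci_half_width(sigma, n, alpha=0.05):
    """Half-length of the (1 - alpha) CI at that sample size."""
    return NormalDist().inv_cdf(1 - alpha / 2) * sigma / sqrt(n)

n = sample_size(delta=0.5, sigma=1.0)   # sample size for 80% power
half = ci_half_width(1.0, n)            # shorter interval <=> higher power
```

Substituting the sample-size formula into the half-width shows the length depends on power only through delta and the two normal quantiles, which is the link the abstract describes.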

  7. Exact Confidence Intervals in the Presence of Interference

    PubMed Central

    Rigdon, Joseph; Hudgens, Michael G.

    2015-01-01

    For two-stage randomized experiments assuming partial interference, exact confidence intervals are proposed for treatment effects on a binary outcome. Empirical studies demonstrate the new intervals have narrower width than previously proposed exact intervals based on the Hoeffding inequality. PMID:26190877

  8. Improved central confidence intervals for the ratio of Poisson means

    NASA Astrophysics Data System (ADS)

    Cousins, R. D.

    The problem of confidence intervals for the ratio of two unknown Poisson means was "solved" decades ago, but a closer examination reveals that the standard solution is far from optimal from the frequentist point of view. We construct a more powerful set of central confidence intervals, each of which is a (typically proper) subinterval of the corresponding standard interval. They also provide upper and lower confidence limits which are more restrictive than the standard limits. The construction follows Neyman's original prescription, though discreteness of the Poisson distribution and the presence of a nuisance parameter (one of the unknown means) lead to slightly conservative intervals. Philosophically, the issue of the appropriateness of the construction method is similar to the issue of conditioning on the margins in 2×2 contingency tables. From a frequentist point of view, the new set maintains (over) coverage of the unknown true value of the ratio of means at each stated confidence level, even though the new intervals are shorter than the old intervals by any measure (except for two cases where they are identical). As an example, when the number 2 is drawn from each Poisson population, the 90% CL central confidence interval on the ratio of means is (0.169, 5.196), rather than (0.108, 9.245). In the cited literature, such confidence intervals have applications in numerous branches of pure and applied science, including agriculture, wildlife studies, manufacturing, medicine, reliability theory, and elementary particle physics.
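The "standard solution" referenced above is the conditional construction: given counts k1 and k2, condition on their total, so that k1 is binomial with p = lambda1/(lambda1 + lambda2), build a Clopper-Pearson interval for p, and map it through p/(1 - p). The stdlib-only sketch below (helper names hypothetical) reproduces the standard 90% interval (0.108, 9.245) quoted in the abstract for two counts of 2:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(k, n, confidence):
    """Exact central CI for a binomial proportion, by bisection."""
    a = 1 - confidence
    def solve(pred):
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if pred(mid) else (lo, mid)
        return (lo + hi) / 2
    lower = 0.0 if k == 0 else solve(lambda p: binom_cdf(k - 1, n, p) >= 1 - a / 2)
    upper = 1.0 if k == n else solve(lambda p: binom_cdf(k, n, p) > a / 2)
    return lower, upper

def poisson_ratio_ci(k1, k2, confidence=0.90):
    """Standard conditional CI for lambda1/lambda2: condition on k1 + k2,
    interval for the binomial p, then map p -> p / (1 - p)."""
    p_lo, p_hi = clopper_pearson(k1, k1 + k2, confidence)
    return p_lo / (1 - p_lo), p_hi / (1 - p_hi)

r_lo, r_hi = poisson_ratio_ci(2, 2)   # the standard 90% interval for 2 vs 2
```

This is the interval the paper shows to be conservative; its improved construction yields the shorter (0.169, 5.196) for the same counts.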

  9. Estimation of confidence intervals for federal waterfowl harvest surveys

    USGS Publications Warehouse

    Geissler, P.H.

    1990-01-01

    I developed methods of estimating confidence intervals for the federal waterfowl harvest surveys conducted by the U.S. Fish and Wildlife Service (USFWS). I estimated flyway harvest confidence intervals for mallards (Anas platyrhynchos) (95% CI ± 8% of the estimate), Canada geese (Branta canadensis) (± 11%), black ducks (Anas rubripes) (± 16%), canvasbacks (Aythya valisineria) (± 32%), snow geese (Chen caerulescens) (± 43%), and brant (Branta bernicla) (± 46%). Differences between annual estimates of 10, 13, 22, 42, 43, and 58% could be detected for mallards, Canada geese, black ducks, canvasbacks, snow geese, and brant, respectively. Estimated confidence intervals for state harvests tended to be much larger than those for the flyway estimates.

  10. Inference by Eye: Pictures of Confidence Intervals and Thinking about Levels of Confidence

    ERIC Educational Resources Information Center

    Cumming, Geoff

    2007-01-01

    A picture of a 95% confidence interval (CI) implicitly contains pictures of CIs of all other levels of confidence, and information about the "p"-value for testing a null hypothesis. This article discusses pictures, taken from interactive software, that suggest several ways to think about the level of confidence of a CI, "p"-values, and what…

  11. Confidence Intervals for Error Rates Observed in Coded Communications Systems

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2015-05-01

    We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful, if any, codeword errors are able to be simulated. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the lesser studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system.

  12. Confidence intervals for similarity values determined for cloned SSU rRNA genes from environmental samples

    SciTech Connect

    Fields, M.W.; Schryver, J.C.; Brandt, C.C.; Yan, T.; Zhou, J.Z.; Palumbo, A.V.

    2007-04-02

    The goal of this research was to investigate the influence of the error rate of sequence determination on the differentiation of cloned SSU rRNA gene sequences for assessment of community structure. SSU rRNA cloned sequences from groundwater samples that represent different bacterial divisions were sequenced multiple times with the same sequencing primer. From comparison of sequence alignments with unedited data, confidence intervals were obtained both from a double binomial model of sequence comparison and by non-parametric methods. The results indicated that similarity values below 0.9946 are likely derived from dissimilar sequences at a confidence level of 0.95, and not from sequencing errors. The results confirmed that screening by direct sequence determination could be reliably used to differentiate at the species level. However, given sequencing errors comparable to those seen in this study, sequences with similarities above 0.9946 should be treated as the same sequence if a 95 percent confidence is desired.

  13. Fast and Accurate Construction of Confidence Intervals for Heritability.

    PubMed

    Schweiger, Regev; Kaufman, Shachar; Laaksonen, Reijo; Kleber, Marcus E; März, Winfried; Eskin, Eleazar; Rosset, Saharon; Halperin, Eran

    2016-06-01

    Estimation of heritability is fundamental in genetic studies. Recently, heritability estimation using linear mixed models (LMMs) has gained popularity because these estimates can be obtained from unrelated individuals collected in genome-wide association studies. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. Existing methods for the construction of confidence intervals and estimators of SEs for REML rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals. Here, we show that the estimation of confidence intervals by state-of-the-art methods is inaccurate, especially when the true heritability is relatively low or relatively high. We further show that these inaccuracies occur in datasets including thousands of individuals. Such biases are present, for example, in estimates of heritability of gene expression in the Genotype-Tissue Expression project and of lipid profiles in the Ludwigshafen Risk and Cardiovascular Health study. We also show that often the probability that the genetic component is estimated as 0 is high even when the true heritability is bounded away from 0, emphasizing the need for accurate confidence intervals. We propose a computationally efficient method, ALBI (accurate LMM-based heritability bootstrap confidence intervals), for estimating the distribution of the heritability estimator and for constructing accurate confidence intervals. Our method can be used as an add-on to existing methods for estimating heritability and variance components, such as GCTA, FaST-LMM, GEMMA, or EMMAX. PMID:27259052

  14. Confidence Intervals for Gamma-Family Measures of Ordinal Association

    ERIC Educational Resources Information Center

    Woods, Carol M.

    2007-01-01

    This research focused on confidence intervals (CIs) for 10 measures of monotonic association between ordinal variables. Standard errors (SEs) were also reviewed because more than 1 formula was available per index. For 5 indices, an element of the formula used to compute an SE is given that is apparently new. CIs computed with different SEs were…

  15. Researchers Misunderstand Confidence Intervals and Standard Error Bars

    ERIC Educational Resources Information Center

    Belia, Sarah; Fidler, Fiona; Williams, Jennifer; Cumming, Geoff

    2005-01-01

    Little is known about researchers' understanding of confidence intervals (CIs) and standard error (SE) bars. Authors of journal articles in psychology, behavioral neuroscience, and medicine were invited to visit a Web site where they adjusted a figure until they judged 2 means, with error bars, to be just statistically significantly different (p…

  16. Constructing Approximate Confidence Intervals for Parameters with Structural Equation Models

    ERIC Educational Resources Information Center

    Cheung, Mike W. -L.

    2009-01-01

    Confidence intervals (CIs) for parameters are usually constructed based on the estimated standard errors. These are known as Wald CIs. This article argues that likelihood-based CIs (CIs based on likelihood ratio statistics) are often preferred to Wald CIs. It shows how the likelihood-based CIs and the Wald CIs for many statistics and psychometric…

  17. Robust Confidence Interval for a Ratio of Standard Deviations

    ERIC Educational Resources Information Center

    Bonett, Douglas G.

    2006-01-01

    Comparing variability of test scores across alternate forms, test conditions, or subpopulations is a fundamental problem in psychometrics. A confidence interval for a ratio of standard deviations is proposed that performs as well as the classic method with normal distributions and performs dramatically better with nonnormal distributions. A simple…

  18. Confidence Interval Coverage for Cohen's Effect Size Statistic

    ERIC Educational Resources Information Center

    Algina, James; Keselman, H. J.; Penfield, Randall D.

    2006-01-01

    Kelley compared three methods for setting a confidence interval (CI) around Cohen's standardized mean difference statistic: the noncentral-"t"-based, percentile (PERC) bootstrap, and biased-corrected and accelerated (BCA) bootstrap methods under three conditions of nonnormality, eight cases of sample size, and six cases of population effect size…

  19. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Wagler, Amy E.

    2014-01-01

    Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

  20. Likelihood-Based Confidence Intervals in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Oort, Frans J.

    2011-01-01

    In exploratory or unrestricted factor analysis, all factor loadings are free to be estimated. In oblique solutions, the correlations between common factors are free to be estimated as well. The purpose of this article is to show how likelihood-based confidence intervals can be obtained for rotated factor loadings and factor correlations, by…

  1. Confidence intervals in Flow Forecasting by using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Panagoulia, Dionysia; Tsekouras, George

    2014-05-01

One of the major inadequacies in the implementation of Artificial Neural Networks (ANNs) for flow forecasting is the development of confidence intervals, because, in contrast to classical forecasting methods, the relevant estimation cannot be implemented directly. The variation in the ANN output is a measure of uncertainty in the model predictions based on the training data set. Different methods for uncertainty analysis, such as bootstrap, Bayesian, and Monte Carlo, have already been proposed for hydrologic and geophysical models, while methods for confidence intervals, such as error output, re-sampling, and multi-linear regression adapted to ANNs, have been used for power load forecasting [1-2]. The aim of this paper is to present the re-sampling method for ANN prediction models and to develop it for next-day flow forecasting. The re-sampling method is based on the ascending sorting of the errors between real and predicted values for all input vectors. The cumulative sample distribution function of the prediction errors is calculated and the confidence intervals are estimated by keeping the intermediate values, rejecting the extreme values according to the desired confidence levels, and holding the intervals symmetrical in probability. To apply the confidence interval approach, input vectors are used from the Mesochora catchment in western-central Greece. The ANN's training algorithm is the stochastic back-propagation process with decreasing functions of learning rate and momentum term, for which an optimization process is conducted over the crucial parameter values, such as the number of neurons, the kind of activation functions, the initial values and time parameters of learning rate and momentum term, etc. Input variables are historical data of previous days, such as flows, nonlinearly weather-related temperatures and nonlinearly weather-related rainfalls, based on correlation analysis between the flow under prediction and each implicit input
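The re-sampling scheme described (sort held-out prediction errors, then read off symmetric-in-probability quantiles of their empirical distribution) can be sketched as follows; the function names are illustrative, not from the paper:

```python
import numpy as np

def error_quantile_bounds(y_true, y_pred, level=0.95):
    """Symmetric-in-probability quantiles of the empirical error distribution."""
    errors = np.sort(np.asarray(y_true, float) - np.asarray(y_pred, float))
    tail = (1.0 - level) / 2.0
    return np.quantile(errors, tail), np.quantile(errors, 1.0 - tail)

def forecast_interval(point_forecast, lo, hi):
    """Attach the error bounds to a new point forecast."""
    return point_forecast + lo, point_forecast + hi
```

The bounds are estimated once from the training or validation errors and then shifted onto each new point forecast.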

  2. An Empirical Method for Establishing Positional Confidence Intervals Tailored for Composite Interval Mapping of QTL

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Improved genetic resolution and availability of sequenced genomes have made positional cloning of moderate-effect QTL (quantitative trait loci) realistic in several systems, emphasizing the need for precise and accurate derivation of positional confidence intervals (CIs). Support interval (SI) meth...

  3. Flood frequency analysis: Confidence interval estimation by test inversion bootstrapping

    NASA Astrophysics Data System (ADS)

    Schendel, Thomas; Thongwichian, Rossukon

    2015-09-01

A common approach to estimating extreme flood events is the annual block maxima approach, where for each year the peak streamflow is determined and a distribution (usually the generalized extreme value distribution (GEV)) is fitted to this series of maxima. Eventually this distribution is used to estimate the return level for a defined return period. However, due to the finite sample size, the estimated return levels are associated with a range of uncertainty, usually expressed via confidence intervals. Previous publications have shown that existing bootstrapping methods for estimating the confidence intervals of the GEV yield too narrow estimates of these uncertainty ranges. Therefore, we present in this article a novel approach based on the lesser-known test inversion bootstrapping, which we adapted especially for complex quantities like the return level. The reliability of this approach is studied and its performance is compared to other bootstrapping methods as well as the Profile Likelihood technique. It is shown that the new approach significantly improves the coverage of confidence intervals compared to other bootstrapping methods and for small sample sizes should even be favoured over the Profile Likelihood.
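For contrast with the article's test-inversion approach (which is more involved), a sketch of the ordinary percentile bootstrap for a GEV return level, i.e. the kind of baseline the article finds too narrow. SciPy's `genextreme` parameterization is assumed; names are illustrative:

```python
import numpy as np
from scipy.stats import genextreme

def return_level(maxima, T):
    """Fit a GEV to annual maxima and return the T-year return level."""
    c, loc, scale = genextreme.fit(maxima)
    return genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)

def percentile_bootstrap_ci(maxima, T, B=1000, alpha=0.05, seed=None):
    """Naive percentile bootstrap CI for the T-year return level."""
    rng = np.random.default_rng(seed)
    maxima = np.asarray(maxima, dtype=float)
    levels = [return_level(rng.choice(maxima, size=maxima.size), T)
              for _ in range(B)]
    return np.quantile(levels, [alpha / 2.0, 1.0 - alpha / 2.0])
```

Each bootstrap replicate refits the GEV by maximum likelihood, so B controls both the cost and the resolution of the interval.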

  4. Confidence intervals for expected moments algorithm flood quantile estimates

    USGS Publications Warehouse

    Cohn, T.A.; Lane, W.L.; Stedinger, J.R.

    2001-01-01

    Historical and paleoflood information can substantially improve flood frequency estimates if appropriate statistical procedures are properly applied. However, the Federal guidelines for flood frequency analysis, set forth in Bulletin 17B, rely on an inefficient "weighting" procedure that fails to take advantage of historical and paleoflood information. This has led researchers to propose several more efficient alternatives including the Expected Moments Algorithm (EMA), which is attractive because it retains Bulletin 17B's statistical structure (method of moments with the Log Pearson Type 3 distribution) and thus can be easily integrated into flood analyses employing the rest of the Bulletin 17B approach. The practical utility of EMA, however, has been limited because no closed-form method has been available for quantifying the uncertainty of EMA-based flood quantile estimates. This paper addresses that concern by providing analytical expressions for the asymptotic variance of EMA flood-quantile estimators and confidence intervals for flood quantile estimates. Monte Carlo simulations demonstrate the properties of such confidence intervals for sites where a 25- to 100-year streamgage record is augmented by 50 to 150 years of historical information. The experiments show that the confidence intervals, though not exact, should be acceptable for most purposes.

  5. On Some Confidence Intervals for Estimating the Mean of a Skewed Population

    ERIC Educational Resources Information Center

    Shi, W.; Kibria, B. M. Golam

    2007-01-01

    A number of methods are available in the literature to measure confidence intervals. Here, confidence intervals for estimating the population mean of a skewed distribution are considered. This note proposes two alternative confidence intervals, namely, Median t and Mad t, which are simple adjustments to the Student's t confidence interval. In…

  6. On Efficient Confidence Intervals for the Log-Normal Mean

    NASA Astrophysics Data System (ADS)

    Chami, Peter; Antoine, Robin; Sahai, Ashok

Data obtained in biomedical research are often skewed. Examples include the incubation period of diseases like HIV/AIDS and the survival times of cancer patients. Such data, especially when they are positive and skewed, are often modeled by the log-normal distribution. If this model holds, then the log transformation produces a normal distribution. We consider the problem of constructing confidence intervals for the mean of the log-normal distribution. Several methods for doing this are known, including at least one estimator that performed better than Cox's method for small sample sizes. We also construct a modified version of Cox's method. Using simulation, we show that, when the sample size exceeds 30, it leads to confidence intervals that have good overall properties and are better than Cox's method. More precisely, the actual coverage probability of our method is closer to the nominal coverage probability than is the case with Cox's method. In addition, the new method is computationally much simpler than other well-known methods.
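The classic Cox interval mentioned above exponentiates a normal-theory interval for the log of the log-normal mean, ln E[X] = μ + σ²/2. A sketch of that classic version (the article's modified method is not reproduced):

```python
import numpy as np
from scipy import stats

def cox_lognormal_mean_ci(x, alpha=0.05):
    """Classic Cox interval for E[X] when log(X) is normal."""
    y = np.log(np.asarray(x, dtype=float))
    n = y.size
    ybar, s2 = y.mean(), y.var(ddof=1)
    center = ybar + s2 / 2.0  # log of the log-normal mean
    half = stats.norm.ppf(1 - alpha / 2) * np.sqrt(s2 / n + s2**2 / (2 * (n - 1)))
    return np.exp(center - half), np.exp(center + half)
```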

  7. Comparing Simultaneous and Pointwise Confidence Intervals for Hydrological Processes.

    PubMed

    Francisco-Fernández, Mario; Quintela-del-Río, Alejandro

    2016-01-01

    Distribution function estimation of the random variable of river flow is an important problem in hydrology. This issue is directly related to quantile estimation, and consequently to return level prediction. The estimation process can be complemented with the construction of confidence intervals (CIs) to perform a probabilistic assessment of the different variables and/or estimated functions. In this work, several methods for constructing CIs using bootstrap techniques, and parametric and nonparametric procedures in the estimation process are studied and compared. In the case that the target is the joint estimation of a vector of values, some new corrections to obtain joint coverage probabilities closer to the corresponding nominal values are also presented. A comprehensive simulation study compares the different approaches, and the application of the different procedures to real data sets from four rivers in the United States and one in Spain complete the paper.

  9. Concept of a (1-α) performance confidence interval

    SciTech Connect

    Leong, H.H.; Johnson, G.R.; Bechtel, T.N.

    1980-01-01

A multi-input, single-output system is assumed to be represented by some model. The distribution functions of the input and the output variables are considered to be at least obtainable through experimental data. Associated with the computer response of the model corresponding to given inputs, a conditional pseudoresponse set is generated. This response can be constructed by means of the model by using the simulated pseudorandom input variates from a neighborhood defined by a preassigned probability allowance. A pair of such pseudoresponse values can then be computed by a procedure corresponding to a (1-α) probability for the conditional pseudoresponse set. The range defined by such a pair is called a (1-α) performance confidence interval with respect to the model. The application of this concept can allow comparison of the merit of two models describing the same system, or it can detect a system change when the current response is out of the performance interval with respect to the previously identified model. 6 figures.

  10. A comparison of several methods for the confidence intervals of negative binomial proportions

    NASA Astrophysics Data System (ADS)

    Thong, Alfred Lim Sheng; Shan, Fam Pei

    2015-12-01

This study focuses on comparing the performances of several approaches to constructing confidence intervals for negative binomial proportions (a single negative binomial proportion and the difference between two negative binomial proportions). The strengths and weaknesses of the approaches are then identified. Performances of the approaches are assessed by comparing their coverage probabilities and average lengths of confidence intervals. For a single negative binomial proportion, the Wald confidence interval (WCI-I), Agresti confidence interval (ACI-I), Wilson's Score confidence interval (WSCI-I) and Jeffrey confidence interval (JCI-I) are used. WSCI-I is the better approach for a single negative binomial proportion in terms of the average length of confidence intervals and average coverage probability. For the difference between two negative binomial proportions, the Wald confidence interval (WCI-II), Agresti confidence interval (ACI-II), Newcombe's Score confidence interval (NSCI-II), Jeffrey confidence interval (JCI-II) and Yule confidence interval (YCI-II) are used. Under different situations, different approaches perform better in terms of coverage probability, and a better approach is discussed and recommended for each.
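This record evaluates score-type intervals for negative binomial proportions; the construction is easiest to see in the familiar binomial-proportion form of Wilson's score interval, sketched here for illustration (not the paper's exact WSCI-I formula):

```python
import math
from statistics import NormalDist

def wilson_interval(successes, n, alpha=0.05):
    """Wilson score interval for a binomial proportion."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half
```

Unlike the Wald interval, the Wilson interval never extends outside [0, 1] and behaves sensibly when `successes` is 0 or n.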

  11. Inference by eye: reading the overlap of independent confidence intervals.

    PubMed

    Cumming, Geoff

    2009-01-30

When 95 per cent confidence intervals (CIs) on independent means do not overlap, the two-tailed p-value is less than 0.05 and there is a statistically significant difference between the means. However, p for non-overlapping 95 per cent CIs is actually considerably smaller than 0.05: If the two CIs just touch, p is about 0.01, and the intervals can overlap by as much as about half the length of one CI arm before p becomes as large as 0.05. Keeping in mind this rule (overlap of half the length of one arm corresponds approximately to statistical significance at p = 0.05) can be helpful for a quick appreciation of figures that display CIs, especially if precise p-values are not reported. The author investigated the robustness of this and similar rules, and found them sufficiently accurate when sample sizes are at least 10, and the two intervals do not differ in width by more than a factor of 2. The author reviewed previous discussions of CI overlap and extended the investigation to p-values other than 0.05 and 0.01. He also studied 95 per cent CIs on two proportions, and on two Pearson correlations, and found similar rules apply to overlap of these asymmetric CIs, for a very broad range of cases. Wider use of figures with 95 per cent CIs is desirable, and these rules may assist easy and appropriate understanding of such figures.
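Cumming's rules of thumb are easy to check numerically in the large-sample (z-based) case; this sketch assumes two independent means with a common standard error:

```python
from statistics import NormalDist

def two_sample_p(m1, se1, m2, se2):
    """Two-sided z-test p-value for a difference of independent means."""
    z = abs(m1 - m2) / (se1 ** 2 + se2 ** 2) ** 0.5
    return 2.0 * (1.0 - NormalDist().cdf(z))

se = 1.0
arm = NormalDist().inv_cdf(0.975) * se   # half-width of each 95% CI

# CIs just touch: means differ by two full arms -> p far below 0.05
p_touch = two_sample_p(0.0, se, 2.0 * arm, se)
# CIs overlap by half an arm -> p close to 0.05
p_half = two_sample_p(0.0, se, 1.5 * arm, se)
```

Running this gives p_touch around 0.006 and p_half around 0.04, consistent with the "about 0.01" and "about 0.05" approximations in the abstract.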

  12. Introduction to Sample Size Choice for Confidence Intervals Based on "t" Statistics

    ERIC Educational Resources Information Center

    Liu, Xiaofeng Steven; Loudermilk, Brandon; Simpson, Thomas

    2014-01-01

    Sample size can be chosen to achieve a specified width in a confidence interval. The probability of obtaining a narrow width given that the confidence interval includes the population parameter is defined as the power of the confidence interval, a concept unfamiliar to many practitioners. This article shows how to utilize the Statistical Analysis…

  13. Behavior Detection using Confidence Intervals of Hidden Markov Models

    SciTech Connect

    Griffin, Christopher H

    2009-01-01

Markov models are commonly used to analyze real-world problems. Their combination of discrete states and stochastic transitions is suited to applications with deterministic and stochastic components. Hidden Markov Models (HMMs) are a class of Markov models commonly used in pattern recognition. Currently, HMMs recognize patterns using a maximum likelihood approach. One major drawback with this approach is that data observations are mapped to HMMs without considering the number of data samples available. Another problem is that this approach is only useful for choosing between HMMs. It does not provide a criterion for determining whether or not a given HMM adequately matches the data stream. In this work, we recognize complex behaviors using HMMs and confidence intervals. The certainty of a data match increases with the number of data samples considered. Receiver Operating Characteristic curves are used to find the optimal threshold for either accepting or rejecting an HMM description. We present one example using a family of HMMs to show the utility of the proposed approach. A second example using models extracted from a database of consumer purchases provides additional evidence that this approach can perform better than existing techniques.

  14. Exact and Best Confidence Intervals for the Ability Parameter of the Rasch Model.

    ERIC Educational Resources Information Center

    Klauer, Karl Christoph

    1991-01-01

    Smallest exact confidence intervals for the ability parameter of the Rasch model are derived and compared to the traditional asymptotically valid intervals based on Fisher information. Tables of exact confidence intervals, termed Clopper-Pearson intervals, can be drawn up with a computer program developed by K. Klauer. (SLD)
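The Clopper-Pearson construction is easiest to state for a binomial proportion (the article applies the same idea to the Rasch ability parameter). A sketch using the standard beta-quantile form:

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) CI for a binomial proportion k/n."""
    lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lo, hi
```

"Exact" here means the interval's coverage is guaranteed to be at least the nominal level for every true proportion, at the cost of being conservative.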

  15. An Introduction to Confidence Intervals for Both Statistical Estimates and Effect Sizes.

    ERIC Educational Resources Information Center

    Capraro, Mary Margaret

    This paper summarizes methods of estimating confidence intervals, including classical intervals and intervals for effect sizes. The recent American Psychological Association (APA) Task Force on Statistical Inference report suggested that confidence intervals should always be reported, and the fifth edition of the APA "Publication Manual" (2001)…

  16. Using Confidence Intervals and Recurrence Intervals to Determine Precipitation Delivery Mechanisms Responsible for Mass Wasting Events.

    NASA Astrophysics Data System (ADS)

    Ulizio, T. P.; Bilbrey, C.; Stoyanoff, N.; Dixon, J. L.

    2015-12-01

Mass wasting events are geologic hazards that impact human life and property across a variety of landscapes. These movements can be triggered by tectonic activity, anomalous precipitation events, or both, acting to decrease the factor of safety ratio on a hillslope to the point of failure. There exists an active hazard landscape in the West Boulder River drainage of Park Co., MT in which the mechanisms of slope failure are unknown. It is known that the region has not seen significant tectonic activity within the last decade, leaving anomalous precipitation events as the likely trigger for slope failures in the landscape. Precipitation can be delivered to a landscape via rainfall or snow; it was the aim of this study to determine the precipitation delivery mechanism most likely responsible for movements in the West Boulder drainage following the Jungle Wildfire of 2006. Data were compiled from four SNOTEL sites in the surrounding area, spanning 33 years, focusing on, but not limited to, maximum snow water equivalent (SWE) values in a water year, median SWE values on the date on which maximum SWE was recorded in a water year, the total precipitation accumulated in a water year, etc. Means were computed and 99% confidence intervals were constructed around these means. Recurrence intervals and exceedance probabilities were computed for maximum SWE values and total precipitation accumulated in a water year to determine water years with anomalous precipitation. It was determined that the water year 2010-2011 received an anomalously high amount of SWE, and snow melt in the spring of this water year likely triggered recent mass wasting movements. This finding is further supported by Google Earth imagery, showing movements between 2009 and 2011. The return interval for the maximum SWE value in 2010-11 at the Placer Basin SNOTEL site was 34 years, while return intervals for the Box Canyon and Monument Peak SNOTEL sites were 17.5 and 17 years respectively. Max SWE values lie outside the
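Return intervals like the 34- and 17-year values above are typically computed from ranked annual maxima with a plotting-position formula. A sketch using the common Weibull formula RI = (n+1)/m, where m is the descending rank (the abstract does not state which formula was used, so this is an assumption):

```python
import numpy as np

def recurrence_intervals(annual_maxima):
    """Weibull plotting-position return periods and exceedance probabilities."""
    x = np.asarray(annual_maxima, dtype=float)
    n = x.size
    # descending rank: the largest value gets rank 1 (ties get distinct ranks)
    m = n - np.argsort(np.argsort(x))
    exceedance_prob = m / (n + 1.0)
    return (n + 1.0) / m, exceedance_prob
```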

  17. Simultaneous confidence intervals for a steady-state leaky aquifer groundwater flow model

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    1996-01-01

Using the optimization method of Vecchia & Cooley (1987), nonlinear Scheffé-type confidence intervals were calculated for the parameters and the simulated heads of a steady-state groundwater flow model covering 450 km² of a leaky aquifer. The nonlinear confidence intervals are compared to corresponding linear intervals. As suggested by the significant nonlinearity of the regression model, linear confidence intervals are often not accurate. The commonly made assumption that widths of linear confidence intervals always underestimate the actual (nonlinear) widths was not correct for the head intervals. Results show that nonlinear effects can cause the nonlinear intervals to be offset from, and either larger or smaller than, the linear approximations. Prior information on some transmissivities helps reduce and stabilize the confidence intervals, with the most notable effects occurring for the parameters on which there is prior information and for head values in parameter zones for which there is prior information on the parameters.

  18. Bootstrap Confidence Intervals for Ordinary Least Squares Factor Loadings and Correlations in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.; Luo, Shanhong

    2010-01-01

    This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis. Coverage performances of "SE"-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile intervals, and…

  19. Evaluating Independent Proportions for Statistical Difference, Equivalence, Indeterminacy, and Trivial Difference Using Inferential Confidence Intervals

    ERIC Educational Resources Information Center

    Tryon, Warren W.; Lewis, Charles

    2009-01-01

    Tryon presented a graphic inferential confidence interval (ICI) approach to analyzing two independent and dependent means for statistical difference, equivalence, replication, indeterminacy, and trivial difference. Tryon and Lewis corrected the reduction factor used to adjust descriptive confidence intervals (DCIs) to create ICIs and introduced…

  20. What Confidence Intervals "Really" Do and Why They Are So Important for Middle Grades Educational Research

    ERIC Educational Resources Information Center

    Skidmore, Susan Troncoso

    2009-01-01

    Recommendations made by major educational and psychological organizations (American Educational Research Association, 2006; American Psychological Association, 2001) call for researchers to regularly report confidence intervals. The purpose of the present paper is to provide support for the use of confidence intervals. To contextualize this…

  1. Computation of Confidence Intervals for Growth Performance in Determination of Safe Harbor Eligibility

    ERIC Educational Resources Information Center

    Mulvenon, Sean W.; Stegman, Charles E.

    2006-01-01

    As part of No Child Left Behind (NCLB) legislation, many states are using confidence intervals to determine a range of scores for evaluating a school system. More specifically, the states are employing confidence intervals to help minimize measurement error in determining a school system's performance. The methodology and techniques employed in…

  2. A Comparison of Methods for Estimating Confidence Intervals for Omega-Squared Effect Size

    ERIC Educational Resources Information Center

    Finch, W. Holmes; French, Brian F.

    2012-01-01

    Effect size use has been increasing in the past decade in many research areas. Confidence intervals associated with effect sizes are encouraged to be reported. Prior work has investigated the performance of confidence interval estimation with Cohen's d. This study extends this line of work to the analysis of variance case with more than two…

  3. "Confidence Intervals for Gamma-family Measures of Ordinal Association": Correction

    ERIC Educational Resources Information Center

    Psychological Methods, 2008

    2008-01-01

    Reports an error in "Confidence intervals for gamma-family measures of ordinal association" by Carol M. Woods (Psychological Methods, 2007[Jun], Vol 12[2], 185-204). The note corrects simulation results presented in the article concerning the performance of confidence intervals (CIs) for Spearman's r-sub(s). An error in the author's C++ code…

  4. Using Screencast Videos to Enhance Undergraduate Students' Statistical Reasoning about Confidence Intervals

    ERIC Educational Resources Information Center

    Strazzeri, Kenneth Charles

    2013-01-01

    The purposes of this study were to investigate (a) undergraduate students' reasoning about the concepts of confidence intervals (b) undergraduate students' interactions with "well-designed" screencast videos on sampling distributions and confidence intervals, and (c) how screencast videos improve undergraduate students'…

  5. Confidence Intervals for the Mean: To Bootstrap or Not to Bootstrap

    ERIC Educational Resources Information Center

    Calzada, Maria E.; Gardner, Holly

    2011-01-01

The results of a simulation conducted by a research team involving undergraduate and high school students indicate that when data is symmetric the Student's "t" confidence interval for a mean is superior to the studied non-parametric bootstrap confidence intervals. When data is skewed and for sample sizes n greater than or equal to 10, the results…
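The two intervals being compared can be sketched side by side (the percentile bootstrap is shown; the study may also have examined bias-corrected variants):

```python
import numpy as np
from scipy import stats

def t_interval(x, alpha=0.05):
    """Student's t confidence interval for the mean."""
    x = np.asarray(x, dtype=float)
    se = x.std(ddof=1) / np.sqrt(x.size)
    t = stats.t.ppf(1 - alpha / 2, x.size - 1)
    return x.mean() - t * se, x.mean() + t * se

def bootstrap_percentile_interval(x, alpha=0.05, B=5000, seed=None):
    """Non-parametric percentile bootstrap interval for the mean."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    means = rng.choice(x, size=(B, x.size), replace=True).mean(axis=1)
    return np.quantile(means, [alpha / 2, 1 - alpha / 2])
```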

  6. Publication Bias in Meta-Analysis: Confidence Intervals for Rosenthal's Fail-Safe Number

    PubMed Central

    Fragkos, Konstantinos C.; Tsagris, Michail; Frangos, Christos C.

    2014-01-01

The purpose of the present paper is to assess the efficacy of confidence intervals for Rosenthal's fail-safe number. Although Rosenthal's estimator is widely used by researchers, its statistical properties are largely unexplored. First of all, we developed statistical theory which allowed us to produce confidence intervals for Rosenthal's fail-safe number. This was done by discerning whether the number of studies analysed in a meta-analysis is fixed or random. Each case produces different variance estimators. For a given number of studies and a given distribution, we provided five variance estimators. Confidence intervals are examined with a normal approximation and a nonparametric bootstrap. The accuracy of the different confidence interval estimates was then tested by methods of simulation under different distributional assumptions. The half normal distribution variance estimator has the best probability coverage. Finally, we provide a table of lower confidence intervals for Rosenthal's estimator. PMID:27437470
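Rosenthal's point estimator itself is simple: with k studies whose one-tailed tests give standard normal scores z_i, the fail-safe number is N = (Σ z_i)² / z_α² − k. A sketch of that point estimate only (the article's variance estimators and confidence intervals are not reproduced):

```python
from statistics import NormalDist

def fail_safe_n(z_scores, alpha=0.05):
    """Rosenthal's fail-safe number from the studies' one-tailed z scores."""
    k = len(z_scores)
    z_alpha = NormalDist().inv_cdf(1 - alpha)  # about 1.645 for alpha = 0.05
    return sum(z_scores) ** 2 / z_alpha ** 2 - k
```

It answers: how many unpublished null-result studies would be needed to drag the combined one-tailed p-value above alpha?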

  7. Multiplicative scale uncertainties in the unified approach for constructing confidence intervals

    SciTech Connect

    Smith, Elton

    2009-01-01

We have investigated how uncertainties in the estimation of the detection efficiency affect the 90% confidence intervals in the unified approach for constructing confidence intervals. The study has been conducted for experiments where the number of detected events is large and can be described by a Gaussian probability density function. We also assume the detection efficiency has a Gaussian probability density and study the range of the relative uncertainties σ_ε between 0 and 30%. We find that the confidence intervals provide proper coverage and increase smoothly and continuously from the intervals that ignore scale uncertainties, with a quadratic dependence on σ_ε.

  8. Neutron multiplicity counting: Confidence intervals for reconstruction parameters

    DOE PAGES

    Verbeke, Jerome M.

    2016-03-09

From nuclear materials accountability to homeland security, the need for improved nuclear material detection, assay, and authentication has grown over the past decades. Starting in the 1940s, neutron multiplicity counting techniques have enabled quantitative evaluation of masses and multiplications of fissile materials. In this paper, we propose a new method to compute uncertainties on these parameters using a model-based sequential Bayesian processor, resulting in credible regions in the fissile material mass and multiplication space. These uncertainties will enable us to evaluate quantitatively proposed improvements to the theoretical fission chain model. Additionally, because the processor can calculate uncertainties in real time, it is a useful tool in applications such as portal monitoring: monitoring can stop as soon as a preset confidence of non-threat is reached.

  9. Estimation and confidence intervals for empirical mixing distributions

    USGS Publications Warehouse

    Link, W.A.; Sauer, J.R.

    1995-01-01

    Questions regarding collections of parameter estimates can frequently be expressed in terms of an empirical mixing distribution (EMD). This report discusses empirical Bayes estimation of an EMD, with emphasis on the construction of interval estimates. Estimation of the EMD is accomplished by substitution of estimates of prior parameters in the posterior mean of the EMD. This procedure is examined in a parametric model (the normal-normal mixture) and in a semi-parametric model. In both cases, the empirical Bayes bootstrap of Laird and Louis (1987, Journal of the American Statistical Association 82, 739-757) is used to assess the variability of the estimated EMD arising from the estimation of prior parameters. The proposed methods are applied to a meta-analysis of population trend estimates for groups of birds.

  10. Improved confidence intervals for the linkage disequilibrium method for estimating effective population size.

    PubMed

    Jones, A T; Ovenden, J R; Wang, Y-G

    2016-10-01

    The linkage disequilibrium method is currently the most widely used single sample estimator of genetic effective population size. The commonly used software packages come with two options, referred to as the parametric and jackknife methods, for computing the associated confidence intervals. However, little is known on the coverage performance of these methods, and the published data suggest there may be some room for improvement. Here, we propose two new methods for generating confidence intervals and compare them with the two in current use through a simulation study. The new confidence interval methods tend to be conservative but outperform the existing methods for generating confidence intervals under certain circumstances, such as those that may be encountered when making estimates using large numbers of single-nucleotide polymorphisms.

  12. Confidence Intervals for True Scores under an Answer-until-Correct Scoring Procedure.

    ERIC Educational Resources Information Center

    Wilcox, Rand R.

    1987-01-01

    Four procedures are discussed for obtaining a confidence interval when answer-until-correct scoring is used in multiple choice tests. Simulated data show that the choice of procedure depends upon sample size. (GDC)

  13. Approximate Confidence Interval for Difference of Fit in Structural Equation Models.

    ERIC Educational Resources Information Center

    Raykov, Tenko

    2001-01-01

    Discusses a method, based on bootstrap methodology, for obtaining an approximate confidence interval for the difference in root mean square error of approximation of two structural equation models. Illustrates the method using a numerical example. (SLD)

  14. Bayesian methods of confidence interval construction for the population attributable risk from cross-sectional studies.

    PubMed

    Pirikahu, Sarah; Jones, Geoffrey; Hazelton, Martin L; Heuer, Cord

    2016-08-15

    Population attributable risk measures the public health impact of the removal of a risk factor. To apply this concept to epidemiological data, the calculation of a confidence interval to quantify the uncertainty in the estimate is desirable. However, perhaps because of the confusion surrounding attributable risk measures, there is no standard confidence interval or variance formula given in the literature. In this paper, we implement a fully Bayesian approach to confidence interval construction of the population attributable risk for cross-sectional studies. We show that, in comparison with a number of standard frequentist methods for constructing confidence intervals (i.e. delta, jackknife and bootstrap methods), the Bayesian approach is superior in terms of percent coverage in all except a few cases. This paper also explores the effect of the chosen prior on the coverage and provides alternatives for particular situations. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26799685
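    As a rough sketch of the general idea (not the authors' implementation), the following assumes a Dirichlet posterior over the four cells of a hypothetical 2x2 cross-sectional table and reads off a percentile interval for the population attributable risk; all counts and the flat prior are made up for illustration:

```python
import random

random.seed(1)

# Hypothetical 2x2 cross-sectional counts: exposed (E) / unexposed (U)
# by diseased (D) / healthy (H). The numbers are invented for illustration.
counts = {("E", "D"): 60, ("E", "H"): 140, ("U", "D"): 30, ("U", "H"): 270}

def draw_dirichlet(alphas):
    """One draw from a Dirichlet distribution via normalized Gamma variates."""
    gs = [random.gammavariate(a, 1.0) for a in alphas]
    total = sum(gs)
    return [g / total for g in gs]

def par(p_ed, p_eh, p_ud, p_uh):
    """Population attributable risk: (P(D) - P(D | unexposed)) / P(D)."""
    p_d = p_ed + p_ud
    p_d_given_u = p_ud / (p_ud + p_uh)
    return (p_d - p_d_given_u) / p_d

cells = [("E", "D"), ("E", "H"), ("U", "D"), ("U", "H")]
alphas = [counts[c] + 1.0 for c in cells]  # flat Dirichlet(1,1,1,1) prior

draws = sorted(par(*draw_dirichlet(alphas)) for _ in range(4000))
lo, hi = draws[int(0.025 * 4000)], draws[int(0.975 * 4000)]
print(f"posterior 95% interval for PAR: ({lo:.3f}, {hi:.3f})")
```

    The percentile interval of the posterior draws plays the role of the confidence interval; the paper's point is that coverage of such Bayesian intervals compares favorably with delta, jackknife, and bootstrap alternatives.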

  15. Confidence Intervals for True Scores Using the Skew-Normal Distribution

    ERIC Educational Resources Information Center

    Garcia-Perez, Miguel A.

    2010-01-01

    A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…

  16. Bias-corrected confidence intervals for the concentration parameter in a dilution assay.

    PubMed

    Wang, J; Basu, S

    1999-03-01

    Interval estimates of the concentration of target entities from a serial dilution assay are usually based on the maximum likelihood estimator. The distribution of the maximum likelihood estimator is skewed to the right and is positively biased. This bias results in interval estimates that either provide inadequate coverage relative to the nominal level or yield excessively long intervals. Confidence intervals based on both log transformation and bias reduction are proposed and are shown through simulations to provide appropriate coverage with shorter widths than the commonly used intervals in a variety of designs. An application to feline AIDS research, which motivated this work, is also presented.

  17. Confidence intervals for a random-effects meta-analysis based on Bartlett-type corrections.

    PubMed

    Noma, Hisashi

    2011-12-10

    In medical meta-analysis, the DerSimonian-Laird confidence interval for the average treatment effect has been widely adopted in practice. However, it is well known that its coverage probability (the probability that the interval actually includes the true value) can be substantially below the target level. One particular reason is that the validity of the confidence interval depends on the assumption that the number of synthesized studies is sufficiently large. In typical medical meta-analyses, the number of studies is fewer than 20. In this article, we developed three confidence intervals for improving coverage properties, based on (i) the Bartlett corrected likelihood ratio statistic, (ii) the efficient score statistic, and (iii) the Bartlett-type adjusted efficient score statistic. The Bartlett and Bartlett-type corrections improve the large sample approximations for the likelihood ratio and efficient score statistics. Through numerical evaluations by simulations, these confidence intervals demonstrated better coverage properties than the existing methods. In particular, with a moderate number of synthesized studies, the Bartlett and Bartlett-type corrected confidence intervals performed well. An application to a meta-analysis of the treatment for myocardial infarction with intravenous magnesium is presented.
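    The baseline DerSimonian-Laird interval whose undercoverage motivates the article can be sketched as follows (the Bartlett-type corrections themselves are not shown); the study effects and within-study variances are invented for illustration:

```python
import math

# Hypothetical study-level effect estimates and within-study variances.
effects = [0.10, 0.35, -0.05, 0.20, 0.15, 0.40]
variances = [0.04, 0.09, 0.05, 0.06, 0.03, 0.08]

# DerSimonian-Laird moment estimate of the between-study variance tau^2.
w = [1.0 / v for v in variances]
mu_fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
q = sum(wi * (yi - mu_fixed) ** 2 for wi, yi in zip(w, effects))
k = len(effects)
tau2 = max(0.0, (q - (k - 1)) / (sum(w) - sum(wi**2 for wi in w) / sum(w)))

# Random-effects weights and the standard DL confidence interval.
w_star = [1.0 / (v + tau2) for v in variances]
mu = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
se = 1.0 / math.sqrt(sum(w_star))
lo, hi = mu - 1.96 * se, mu + 1.96 * se
print(f"DL random-effects mean = {mu:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```

    With only a handful of studies, as here, the normal critical value 1.96 is exactly the approximation the Bartlett and Bartlett-type corrections aim to improve.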

  18. Confidence Intervals For Maximized Alpha Coefficients: An Evaluation of Joe and Woodward's Procedures and an Alternative Method.

    ERIC Educational Resources Information Center

    Hakstian, A. Ralph; And Others

    1980-01-01

    The procedures yielding confidence intervals for maximized alpha coefficients of Joe and Woodward are reviewed. Confidence interval procedures of Whalen and Masson are next reviewed. Results are then presented of a Monte Carlo investigation of the procedures. (Author/CTM)

  19. Quantifying uncertainty in modelled estimates of annual maximum precipitation: confidence intervals

    NASA Astrophysics Data System (ADS)

    Panagoulia, Dionysia; Economou, Polychronis; Caroni, Chrys

    2016-04-01

    The possible nonstationarity of the GEV distribution fitted to annual maximum precipitation under climate change is a topic of active investigation. Of particular significance is how best to construct confidence intervals for items of interest arising from stationary/nonstationary GEV models. We are usually not only interested in parameter estimates but also in quantiles of the GEV distribution, and it might be expected that estimates of extreme upper quantiles are far from being normally distributed even for moderate sample sizes. Therefore, we consider constructing confidence intervals for all quantities of interest by bootstrap methods based on resampling techniques. To this end, we examined three bootstrapping approaches to constructing confidence intervals for parameters and quantiles: random-t resampling, fixed-t resampling and the parametric bootstrap. Each approach was used in combination with the normal approximation method, percentile method, basic bootstrap method and bias-corrected method for constructing confidence intervals. We found that all the confidence intervals for the stationary model parameters have similar coverage and mean length. Confidence intervals for the more extreme quantiles tend to become very wide for all bootstrap methods. For nonstationary GEV models with linear time dependence of location or log-linear time dependence of scale, confidence interval coverage probabilities are reasonably accurate for the parameters. For the extreme percentiles, the bias-corrected and accelerated method is best overall, and the fixed-t method also has good average coverage probabilities. Reference: Panagoulia D., Economou P. and Caroni C., Stationary and non-stationary GEV modeling of extreme precipitation over a mountainous area under climate change, Environmetrics, 25 (1), 29-43, 2014.
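    One of the combinations examined above, a parametric bootstrap paired with the percentile method for a GEV quantile, might be sketched as follows; the synthetic data and the use of SciPy's genextreme are assumptions for illustration, not the authors' code:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)

# Synthetic "annual maximum precipitation" sample; parameter values invented.
true_c, true_loc, true_scale = -0.1, 60.0, 15.0
data = genextreme.rvs(true_c, loc=true_loc, scale=true_scale,
                      size=60, random_state=rng)

# Fit a stationary GEV and estimate the 100-year quantile (non-exceedance 0.99).
c, loc, scale = genextreme.fit(data)
q_hat = genextreme.ppf(0.99, c, loc=loc, scale=scale)

# Parametric bootstrap: resample from the fitted model, refit, recompute the
# quantile, and take the percentile interval of the bootstrap replicates.
boot_q = []
for _ in range(100):
    sample = genextreme.rvs(c, loc=loc, scale=scale,
                            size=len(data), random_state=rng)
    cb, locb, scaleb = genextreme.fit(sample)
    boot_q.append(genextreme.ppf(0.99, cb, loc=locb, scale=scaleb))

lo, hi = np.percentile(boot_q, [2.5, 97.5])
print(f"100-year quantile: {q_hat:.1f}, 95% percentile CI: ({lo:.1f}, {hi:.1f})")
```

    As the abstract notes, intervals for such extreme quantiles tend to come out very wide regardless of the bootstrap variant used.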

  20. The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution

    NASA Astrophysics Data System (ADS)

    Shin, H.; Heo, J.; Kim, T.; Jung, Y.

    2007-12-01

    The generalized logistic (GL) distribution has been widely used for frequency analysis. However, few studies have addressed the confidence intervals that indicate the prediction accuracy of the GL distribution. In this paper, the estimation of confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM), and the asymptotic variances of each quantile estimator are derived as functions of the sample size, return period, and parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived confidence intervals of quantiles. The results show that the relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase as the return period increases and decrease as the sample size increases. PWM performs better than the other methods in terms of RRMSE when the data are nearly symmetric, while ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was applied to fit the distribution of annual maximum rainfall data. The results show little difference in the estimated quantiles between ML and PWM, but distinct differences for MOM.

  1. Confidence intervals for the selected population in randomized trials that adapt the population enrolled

    PubMed Central

    Rosenblum, Michael

    2014-01-01

    It is a challenge to design randomized trials when it is suspected that a treatment may benefit only certain subsets of the target population. In such situations, trial designs have been proposed that modify the population enrolled based on an interim analysis, in a preplanned manner. For example, if there is early evidence during the trial that the treatment only benefits a certain subset of the population, enrollment may then be restricted to this subset. At the end of such a trial, it is desirable to draw inferences about the selected population. We focus on constructing confidence intervals for the average treatment effect in the selected population. Confidence interval methods that fail to account for the adaptive nature of the design may fail to have the desired coverage probability. We provide a new procedure for constructing confidence intervals having at least 95% coverage probability, uniformly over a large class Q of possible data generating distributions. Our method involves computing the minimum factor c by which a standard confidence interval must be expanded in order to have, asymptotically, at least 95% coverage probability, uniformly over Q. Computing the expansion factor c is not trivial, since it is not a priori clear, for a given decision rule, which data generating distribution leads to the worst-case coverage probability. We give an algorithm that computes c, and prove an optimality property for the resulting confidence interval procedure. PMID:23553577

  2. CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.

    USGS Publications Warehouse

    Cooley, Richard L.; Vecchia, Aldo V.

    1987-01-01

    A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.

  3. An Investigation of Quantile Function Estimators Relative to Quantile Confidence Interval Coverage

    PubMed Central

    Wei, Lai; Wang, Dongliang; Hutson, Alan D.

    2016-01-01

    In this article, we investigate the limitations of traditional quantile function estimators and introduce a new class of quantile function estimators, namely, the semi-parametric tail-extrapolated quantile estimators, which has excellent performance for estimating the extreme tails with finite sample sizes. The smoothed bootstrap and direct density estimation via the characteristic function methods are developed for the estimation of confidence intervals. Through a comprehensive simulation study to compare the confidence interval estimations of various quantile estimators, we discuss the preferred quantile estimator in conjunction with the confidence interval estimation method to use under different circumstances. Data examples are given to illustrate the superiority of the semi-parametric tail-extrapolated quantile estimators. The new class of quantile estimators is obtained by slight modification of traditional quantile estimators, and therefore, should be specifically appealing to researchers in estimating the extreme tails. PMID:26924881

  4. Effective confidence interval estimation of fault-detection process of software reliability growth models

    NASA Astrophysics Data System (ADS)

    Fang, Chih-Chiang; Yeh, Chun-Wu

    2016-09-01

    The quantitative evaluation of a software reliability growth model is frequently accompanied by a confidence interval for fault detection. It provides helpful information to software developers and testers undertaking software development and software quality control. However, the variance estimation of software fault detection is not made transparent in previous studies, and this affects the derivation of the confidence interval for the mean value function, which the current study addresses. Software engineers in such a case cannot evaluate the potential hazard based on the stochasticity of the mean value function, and this might reduce the practicability of the estimation. Hence, stochastic differential equations are utilised for confidence interval estimation of the software fault-detection process. The proposed model is estimated and validated using real data sets to show its flexibility.

  5. Bootstrap standard error and confidence intervals for the correlations corrected for indirect range restriction.

    PubMed

    Li, Johnson Ching-Hong; Chan, Wai; Cui, Ying

    2011-11-01

    The standard Pearson correlation coefficient, r, is a biased estimator of the population correlation coefficient, ρ(XY) , when predictor X and criterion Y are indirectly range-restricted by a third variable Z (or S). Two correction algorithms, Thorndike's (1949) Case III, and Schmidt, Oh, and Le's (2006) Case IV, have been proposed to correct for the bias. However, to our knowledge, the two algorithms did not provide a procedure to estimate the associated standard error and confidence intervals. This paper suggests using the bootstrap procedure as an alternative. Two Monte Carlo simulations were conducted to systematically evaluate the empirical performance of the proposed bootstrap procedure. The results indicated that the bootstrap standard error and confidence intervals were generally accurate across simulation conditions (e.g., selection ratio, sample size). The proposed bootstrap procedure can provide a useful alternative for the estimation of the standard error and confidence intervals for the correlation corrected for indirect range restriction.
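    The bootstrap machinery described above can be sketched as follows; the correction step is left as a placeholder, since Thorndike's Case III formula needs statistics on the third variable Z, and the data are simulated for illustration:

```python
import random
import statistics

random.seed(7)

# Toy bivariate sample with positive correlation (values invented).
n = 80
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.6 * xi + random.gauss(0, 0.8) for xi in x]

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy / (sxx * syy) ** 0.5

def corrected_r(xs, ys):
    # Placeholder for a range-restriction correction (e.g. Thorndike's
    # Case III); here it returns plain r so the sketch stays self-contained.
    return pearson_r(xs, ys)

# Nonparametric bootstrap: resample pairs, recompute the (corrected) r,
# and report the empirical SE and percentile confidence interval.
boot = []
for _ in range(2000):
    idx = [random.randrange(n) for _ in range(n)]
    boot.append(corrected_r([x[i] for i in idx], [y[i] for i in idx]))
boot.sort()

se = statistics.stdev(boot)
lo, hi = boot[int(0.025 * 2000)], boot[int(0.975 * 2000)]
print(f"bootstrap SE = {se:.3f}, 95% percentile CI = ({lo:.3f}, {hi:.3f})")
```

    The key point from the paper is that the same resampling loop works whether `corrected_r` applies the Case III or Case IV correction, which is what makes the bootstrap an attractive route to standard errors the algorithms themselves do not provide.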

  6. Estimation and interpretation of k_eff confidence intervals in MCNP

    SciTech Connect

    Urbatsch, T.J.; Forster, R.A.; Prael, R.E.; Beckman, R.J.

    1995-11-01

    MCNP's criticality methodology and some basic statistics are reviewed. Confidence intervals are discussed, as well as how to build them and their importance in the presentation of a Monte Carlo result. The combination of MCNP's three k_eff estimators is shown, theoretically and empirically, by statistical studies and examples, to be the best k_eff estimator. The method of combining estimators rests on a solid theoretical foundation, namely, the Gauss-Markov theorem for the least-squares method. The confidence intervals of the combined estimator are also shown to have correct coverage rates for the examples considered.
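    Setting aside the correlations among estimators that MCNP's actual least-squares combination accounts for, the simplest uncorrelated inverse-variance combination can be illustrated with made-up estimates:

```python
# Three hypothetical k_eff estimates with their standard deviations.
# (MCNP's collision, absorption, and track-length estimators are correlated,
# and its Gauss-Markov combination uses the full covariance matrix; this
# sketch shows only the simpler uncorrelated inverse-variance weighting.)
estimates = [1.0012, 0.9987, 1.0005]
sigmas = [0.0015, 0.0020, 0.0012]

weights = [1.0 / s**2 for s in sigmas]
k_combined = sum(w * k for w, k in zip(weights, estimates)) / sum(weights)
sigma_combined = (1.0 / sum(weights)) ** 0.5

print(f"combined k_eff = {k_combined:.5f} +/- {sigma_combined:.5f}")
```

    The combined standard deviation is smaller than that of any single estimator, which is why the combined estimator yields tighter confidence intervals.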

  7. Characterizing the Mathematics Anxiety Literature Using Confidence Intervals as a Literature Review Mechanism

    ERIC Educational Resources Information Center

    Zientek, Linda Reichwein; Yetkiner, Z. Ebrar; Thompson, Bruce

    2010-01-01

    The authors report the contextualization of effect sizes within mathematics anxiety research, and more specifically within research using the Mathematics Anxiety Rating Scale (MARS) and the MARS for Adolescents (MARS-A). The effect sizes from 45 studies were characterized by graphing confidence intervals (CIs) across studies involving (a) adults…

  8. Making Subjective Judgments in Quantitative Studies: The Importance of Using Effect Sizes and Confidence Intervals

    ERIC Educational Resources Information Center

    Callahan, Jamie L.; Reio, Thomas G., Jr.

    2006-01-01

    At least twenty-three journals in the social sciences purportedly require authors to report effect sizes and, to a much lesser extent, confidence intervals; yet these requirements are rarely clear in the information for contributors. This article reviews some of the literature criticizing the exclusive use of null hypothesis significance testing…

  9. Spacecraft utility and the development of confidence intervals for criticality of anomalies

    NASA Technical Reports Server (NTRS)

    Williams, R. E.

    1980-01-01

    The concept of spacecraft utility, a measure of its performance in orbit, is discussed and its formulation is described. Performance is defined in terms of the malfunctions that occur and the criticality to the mission of these malfunctions. Different approaches to establishing average or expected values of criticality are discussed and confidence intervals are developed for parameters used in the computation of utility.

  10. Sample Size for Confidence Interval of Covariate-Adjusted Mean Difference

    ERIC Educational Resources Information Center

    Liu, Xiaofeng Steven

    2010-01-01

    This article provides a way to determine adequate sample size for the confidence interval of covariate-adjusted mean difference in randomized experiments. The standard error of adjusted mean difference depends on covariate variance and balance, which are two unknown quantities at the stage of planning sample size. If covariate observations are…

  11. Sample Size Planning for the Standardized Mean Difference: Accuracy in Parameter Estimation via Narrow Confidence Intervals

    ERIC Educational Resources Information Center

    Kelley, Ken; Rausch, Joseph R.

    2006-01-01

    Methods for planning sample size (SS) for the standardized mean difference so that a narrow confidence interval (CI) can be obtained via the accuracy in parameter estimation (AIPE) approach are developed. One method plans SS so that the expected width of the CI is sufficiently narrow. A modification adjusts the SS so that the obtained CI is no…

  12. The Naive Intuitive Statistician: A Naive Sampling Model of Intuitive Confidence Intervals

    ERIC Educational Resources Information Center

    Juslin, Peter; Winman, Anders; Hansson, Patrik

    2007-01-01

    The perspective of the naive intuitive statistician is outlined and applied to explain overconfidence when people produce intuitive confidence intervals and why this format leads to more overconfidence than other formally equivalent formats. The naive sampling model implies that people accurately describe the sample information they have but are…

  13. Applying a Score Confidence Interval to Aiken's Item Content-Relevance Index

    ERIC Educational Resources Information Center

    Penfield, Randall D.; Giacobbi, Peter R., Jr

    2004-01-01

    Item content-relevance is an important consideration for researchers when developing scales used to measure psychological constructs. Aiken (1980) proposed a statistic, "V," that can be used to summarize item content-relevance ratings obtained from a panel of expert judges. This article proposes the application of the Score confidence interval to…

  14. A Monte Carlo Study of Eight Confidence Interval Methods for Coefficient Alpha

    ERIC Educational Resources Information Center

    Romano, Jeanine L.; Kromrey, Jeffrey D.; Hibbard, Susan T.

    2010-01-01

    The purpose of this research is to examine eight of the different methods for computing confidence intervals around alpha that have been proposed to determine which of these, if any, is the most accurate and precise. Monte Carlo methods were used to simulate samples under known and controlled population conditions. In general, the differences in…

  15. Multivariate Effect Size Estimation: Confidence Interval Construction via Latent Variable Modeling

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2010-01-01

    A latent variable modeling method is outlined for constructing a confidence interval (CI) of a popular multivariate effect size measure. The procedure uses the conventional multivariate analysis of variance (MANOVA) setup and is applicable with large samples. The approach provides a population range of plausible values for the proportion of…

  16. Assessing Conformance with Benford's Law: Goodness-Of-Fit Tests and Simultaneous Confidence Intervals.

    PubMed

    Lesperance, M; Reed, W J; Stephens, M A; Tsao, C; Wilton, B

    2016-01-01

    Benford's Law is a probability distribution for the first significant digits of numbers, for example, the first significant digits of the numbers 871 and 0.22 are 8 and 2 respectively. The law is particularly remarkable because many types of data are considered to be consistent with Benford's Law and scientists and investigators have applied it in diverse areas, for example, diagnostic tests for mathematical models in Biology, Genomics, Neuroscience, image analysis and fraud detection. In this article we present and compare statistically sound methods for assessing conformance of data with Benford's Law, including discrete versions of Cramér-von Mises (CvM) statistical tests and simultaneous confidence intervals. We demonstrate that the common use of many binomial confidence intervals leads to rejection of Benford too often for truly Benford data. Based on our investigation, we recommend that the CvM statistic U_d^2, Pearson's chi-square statistic and 100(1 - α)% Goodman's simultaneous confidence intervals be computed when assessing conformance with Benford's Law. Visual inspection of the data with simultaneous confidence intervals is useful for understanding departures from Benford and the influence of sample size.
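    A minimal sketch of one of the recommended checks, Pearson's chi-square statistic against the Benford null, might look like this (the Goodman simultaneous intervals and CvM statistics are not shown, and the digits are simulated rather than real data):

```python
import math
import random

random.seed(3)

# Benford probabilities for first digits 1..9: P(d) = log10(1 + 1/d).
benford = [math.log10(1 + 1 / d) for d in range(1, 10)]

# Simulate first digits from data that truly follow Benford's Law.
N = 5000
counts = [0] * 9
for _ in range(N):
    u = random.random()
    cum = 0.0
    for i, p in enumerate(benford):
        cum += p
        if u < cum:
            counts[i] += 1
            break
    else:
        counts[8] += 1  # guard against float rounding in the cumulative sum

# Pearson chi-square statistic against the Benford null (8 degrees of freedom).
chi2 = sum((c - N * p) ** 2 / (N * p) for c, p in zip(counts, benford))
print(f"chi-square = {chi2:.2f} (95% critical value for 8 df is about 15.51)")
```

    Testing all nine digits jointly in this way, rather than with nine separate binomial intervals, avoids the over-rejection of truly Benford data that the article warns about.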

  17. SIMREL: Software for Coefficient Alpha and Its Confidence Intervals with Monte Carlo Studies

    ERIC Educational Resources Information Center

    Yurdugul, Halil

    2009-01-01

    This article describes SIMREL, a software program designed for the simulation of alpha coefficients and the estimation of its confidence intervals. SIMREL runs on two alternatives. In the first one, if SIMREL is run for a single data file, it performs descriptive statistics, principal components analysis, and variance analysis of the item scores…

  18. Confidence Intervals: Evaluating and Facilitating Their Use in Health Education Research

    ERIC Educational Resources Information Center

    Zhang, Jing; Hanik, Bruce W.; Chaney, Beth H.

    2008-01-01

    Health education researchers have called for research articles in health education to adhere to the recommendations of American Psychological Association and the American Medical Association regarding the reporting and use of effect sizes and confidence intervals (CIs). This article expands on the recommendations by (a) providing an overview of…

  19. Point Estimates and Confidence Intervals for Variable Importance in Multiple Linear Regression

    ERIC Educational Resources Information Center

    Thomas, D. Roland; Zhu, PengCheng; Decady, Yves J.

    2007-01-01

    The topic of variable importance in linear regression is reviewed, and a measure first justified theoretically by Pratt (1987) is examined in detail. Asymptotic variance estimates are used to construct individual and simultaneous confidence intervals for these importance measures. A simulation study of their coverage properties is reported, and an…

  20. Confidence intervals for confirmatory adaptive two-stage designs with treatment selection.

    PubMed

    Bebu, Ionut; Dragalin, Vladimir; Luta, George

    2013-05-01

    The construction of adequate confidence intervals for adaptive two-stage designs remains an area of ongoing research. We propose a conditional likelihood-based approach to construct a Wald confidence interval and two confidence intervals based on inverting the likelihood ratio test, one of them using first-order inference methods and the second one using higher order inference methods. The coverage probabilities of these confidence intervals, and also the average bias and mean square error of the corresponding point estimates, compare favorably with other available techniques. A small simulation study is used to evaluate the performance of the new methods. We investigate other extensions of practical interest for normal endpoints and illustrate them using real data, including the selection of more than one treatment for the second stage, selection rules based on both efficacy and safety endpoints, and the inclusion of a control/placebo arm. The new method also allows adjustment for covariates, and has been extended to deal with binomial data and other distributions from the exponential family. Although conceptually simple, the new methods have a much wider scope than the methods currently available.

  1. Conceptual and Practical Implications for Rehabilitation Research: Effect Size Estimates, Confidence Intervals, and Power

    ERIC Educational Resources Information Center

    Ferrin, James M.; Bishop, Malachy; Tansey, Timothy N.; Frain, Michael; Swett, Elizabeth A.; Lane, Frank J.

    2007-01-01

    For a number of conceptually and practically important reasons, reporting of effect size estimates, confidence intervals, and power in parameter estimation is increasingly being recognized as the preferred approach in social science research. Unfortunately, this practice has not yet been widely adopted in the rehabilitation or general counseling…

  2. Comparison of Approaches to Constructing Confidence Intervals for Mediating Effects Using Structural Equation Models

    ERIC Educational Resources Information Center

    Cheung, Mike W. L.

    2007-01-01

    Mediators are variables that explain the association between an independent variable and a dependent variable. Structural equation modeling (SEM) is widely used to test models with mediating effects. This article illustrates how to construct confidence intervals (CIs) of the mediating effects for a variety of models in SEM. Specifically, mediating…

  3. Confidence Intervals for an Effect Size Measure in Multiple Linear Regression

    ERIC Educational Resources Information Center

    Algina, James; Keselman, H. J.; Penfield, Randall D.

    2007-01-01

    The increase in the squared multiple correlation coefficient ([Delta]R[squared]) associated with a variable in a regression equation is a commonly used measure of importance in regression analysis. The coverage probability that an asymptotic and percentile bootstrap confidence interval includes [Delta][rho][squared] was investigated. As expected,…

  4. Optimal and Most Exact Confidence Intervals for Person Parameters in Item Response Theory Models

    ERIC Educational Resources Information Center

    Doebler, Anna; Doebler, Philipp; Holling, Heinz

    2013-01-01

    The common way to calculate confidence intervals for item response theory models is to assume that the standardized maximum likelihood estimator for the person parameter [theta] is normally distributed. However, this approximation is often inadequate for short and medium test lengths. As a result, the coverage probabilities fall below the given…

  5. A Note on Confidence Intervals for Two-Group Latent Mean Effect Size Measures

    ERIC Educational Resources Information Center

    Choi, Jaehwa; Fan, Weihua; Hancock, Gregory R.

    2009-01-01

    This note suggests delta method implementations for deriving confidence intervals for a latent mean effect size measure for the case of 2 independent populations. A hypothetical kindergarten reading example using these implementations is provided, as is supporting LISREL syntax. (Contains 1 table.)

  6. Note on a Confidence Interval for the Squared Semipartial Correlation Coefficient

    ERIC Educational Resources Information Center

    Algina, James; Keselman, Harvey J.; Penfield, Randall J.

    2008-01-01

    A squared semipartial correlation coefficient ([Delta]R[superscript 2]) is the increase in the squared multiple correlation coefficient that occurs when a predictor is added to a multiple regression model. Prior research has shown that coverage probability for a confidence interval constructed by using a modified percentile bootstrap method with…

  7. Approximate Confidence Intervals for Estimates of Redundancy between Sets of Variables.

    ERIC Educational Resources Information Center

    Lambert, Zarrel V.; And Others

    1989-01-01

    Bootstrap methodology is presented that yields approximations of the sampling variation of redundancy estimates while assuming little a priori knowledge about the distributions of these statistics. Results of numerical demonstrations suggest that bootstrap confidence intervals may offer substantial assistance in interpreting the results of…

  8. Assessing Conformance with Benford’s Law: Goodness-Of-Fit Tests and Simultaneous Confidence Intervals

    PubMed Central

    Lesperance, M.; Reed, W. J.; Stephens, M. A.; Tsao, C.; Wilton, B.

    2016-01-01

    Benford’s Law is a probability distribution for the first significant digits of numbers, for example, the first significant digits of the numbers 871 and 0.22 are 8 and 2 respectively. The law is particularly remarkable because many types of data are considered to be consistent with Benford’s Law and scientists and investigators have applied it in diverse areas, for example, diagnostic tests for mathematical models in Biology, Genomics, Neuroscience, image analysis and fraud detection. In this article we present and compare statistically sound methods for assessing conformance of data with Benford’s Law, including discrete versions of Cramér-von Mises (CvM) statistical tests and simultaneous confidence intervals. We demonstrate that the common use of many binomial confidence intervals leads to rejection of Benford too often for truly Benford data. Based on our investigation, we recommend that the CvM statistic Ud2, Pearson’s chi-square statistic and 100(1 − α)% Goodman’s simultaneous confidence intervals be computed when assessing conformance with Benford’s Law. Visual inspection of the data with simultaneous confidence intervals is useful for understanding departures from Benford and the influence of sample size. PMID:27018999

  9. Finite sample pointwise confidence intervals for a survival distribution with right-censored data.

    PubMed

    Fay, Michael P; Brittain, Erica H

    2016-07-20

    We review and develop pointwise confidence intervals for a survival distribution with right-censored data for small samples, assuming only independence of censoring and survival. When there is no censoring, at each fixed time point, the problem reduces to making inferences about a binomial parameter. In this case, the recently developed beta product confidence procedure (BPCP) gives the standard exact central binomial confidence intervals of Clopper and Pearson. Additionally, the BPCP has been shown to be exact (gives guaranteed coverage at the nominal level) for progressive type II censoring and has been shown by simulation to be exact for general independent right censoring. In this paper, we modify the BPCP to create a 'mid-p' version, which reduces to the mid-p confidence interval for a binomial parameter when there is no censoring. We perform extensive simulations on both the standard and mid-p BPCP using a method of moments implementation that enforces monotonicity over time. All simulated scenarios suggest that the standard BPCP is exact. The mid-p BPCP, like other mid-p confidence intervals, has simulated coverage closer to the nominal level but may not be exact for all survival times, especially in very low censoring scenarios. In contrast, the two asymptotically-based approximations have lower than nominal coverage in many scenarios. This poor coverage is due to the extreme inflation of the lower error rates, although the upper limits are very conservative. Both the standard and the mid-p BPCP methods are available in our bpcp R package. Published 2016. This article is US Government work and is in the public domain in the USA. PMID:26891706
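    In the no-censoring case described above, the BPCP reduces to the Clopper-Pearson interval, which can be computed directly from Beta quantiles; a sketch with hypothetical counts, assuming SciPy is available:

```python
from scipy.stats import beta

def clopper_pearson(x, n, alpha=0.05):
    """Exact central (Clopper-Pearson) CI for a binomial proportion x/n."""
    lo = beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
    return lo, hi

# Hypothetical example: 35 of 50 subjects surviving beyond a fixed time point.
lo, hi = clopper_pearson(35, 50)
print(f"95% Clopper-Pearson CI for S(t): ({lo:.3f}, {hi:.3f})")
```

    The mid-p variant discussed in the paper narrows this interval by counting only half the probability of the observed outcome, trading guaranteed coverage for coverage closer to the nominal level.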

  10. MorePower 6.0 for ANOVA with relational confidence intervals and Bayesian analysis.

    PubMed

    Campbell, Jamie I D; Thompson, Valerie A

    2012-12-01

    MorePower 6.0 is a flexible freeware statistical calculator that computes sample size, effect size, and power statistics for factorial ANOVA designs. It also calculates relational confidence intervals for ANOVA effects based on formulas from Jarmasz and Hollands (Canadian Journal of Experimental Psychology 63:124-138, 2009), as well as Bayesian posterior probabilities for the null and alternative hypotheses based on formulas in Masson (Behavior Research Methods 43:679-690, 2011). The program is unique in affording direct comparison of these three approaches to the interpretation of ANOVA tests. Its high numerical precision and ability to work with complex ANOVA designs could facilitate researchers' attention to issues of statistical power, Bayesian analysis, and the use of confidence intervals for data interpretation. MorePower 6.0 is available at https://wiki.usask.ca/pages/viewpageattachments.action?pageId=420413544 .

  11. The use of Latin hypercube sampling for the efficient estimation of confidence intervals

    SciTech Connect

    Grabaskas, D.; Denning, R.; Aldemir, T.; Nakayama, M. K.

    2012-07-01

    Latin hypercube sampling (LHS) has long been used as a way of assuring adequate sampling of the tails of distributions in a Monte Carlo analysis and provided the framework for the uncertainty analysis performed in the NUREG-1150 risk assessment. However, this technique has not often been used in the performance of regulatory analyses due to the inability to establish confidence levels on the quantiles of the output distribution. Recent work has demonstrated a method that makes this possible. This method is compared to the procedure of crude Monte Carlo using order statistics, which is currently used to establish confidence levels. The results of several statistical examples demonstrate that the LHS confidence interval method can provide a more accurate and precise solution, but issues remain when applying the technique generally. (authors)
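The stratification that lets LHS cover the tails of a distribution, unlike crude Monte Carlo, can be sketched with the standard library alone. This is our own minimal illustration, unrelated to the NUREG-1150 codes:

```python
import random

def latin_hypercube(n, d, rng=random):
    """n sample points in [0, 1]^d with exactly one point in each of the n
    equal-width strata of every dimension (crude Monte Carlo lacks this)."""
    columns = []
    for _ in range(d):
        strata = list(range(n))
        rng.shuffle(strata)
        columns.append([(k + rng.random()) / n for k in strata])
    return list(zip(*columns))
```

Because every coordinate hits each of the n bins exactly once, the extreme strata, and hence the distribution tails, are always sampled.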

  12. On Statistical Methods for Common Mean and Reference Confidence Intervals in Interlaboratory Comparisons for Temperature

    NASA Astrophysics Data System (ADS)

    Witkovský, Viktor; Wimmer, Gejza; Ďuriš, Stanislav

    2015-08-01

    We consider a problem of constructing exact and/or approximate coverage intervals for the common mean of several independent distributions. In a metrological context, this problem is closely related to the evaluation of interlaboratory comparison experiments, and in particular, to determination of the reference value (estimate) of a measurand and its uncertainty, or alternatively, to determination of the coverage interval for a measurand at a given level of confidence, based on such comparison data. We present a brief overview of some specific statistical models, methods, and algorithms useful for determination of the common mean and its uncertainty, or alternatively, the proper interval estimator. We illustrate their applicability by a simple simulation study and also by an example of interlaboratory comparisons for temperature. In particular, we consider methods based on (i) the heteroscedastic common mean fixed effect model, assuming negligible laboratory biases, (ii) the heteroscedastic common mean random effects model with a common (unknown) distribution of the laboratory biases, and (iii) the heteroscedastic common mean random effects model with possibly different (known) distributions of the laboratory biases. Finally, we consider a method, recently suggested by Singh et al., for determination of the interval estimator for a common mean based on combining information from independent sources through confidence distributions.
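Under model (i), the fixed-effects model with negligible laboratory biases, the common mean estimate is the inverse-variance weighted mean of the laboratory results. A sketch, with helper names of our choosing and an approximate normal-theory interval rather than any of the exact procedures the paper surveys:

```python
import math

def common_mean(values, uncertainties):
    """Inverse-variance weighted estimate of the common mean and its standard
    uncertainty, using weights w_i = 1 / u_i**2."""
    weights = [1.0 / u ** 2 for u in uncertainties]
    total = sum(weights)
    mean = sum(w * x for w, x in zip(weights, values)) / total
    return mean, math.sqrt(1.0 / total)

def coverage_interval(mean, uncertainty, k=1.96):
    """Approximate 95% coverage interval, assuming normality of the estimate."""
    return mean - k * uncertainty, mean + k * uncertainty
```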

  13. Estimation of confidence intervals of global horizontal irradiance obtained from a weather prediction model

    NASA Astrophysics Data System (ADS)

    Ohtake, Hideaki; Gari da Silva Fonseca, Joao, Jr.; Takashima, Takumi; Oozeki, Takashi; Yamada, Yoshinori

    2014-05-01

    Many photovoltaic (PV) systems have been installed in Japan since the introduction of the Feed-in-Tariff. For the energy management of electric power systems that include many PV systems, forecasting PV power production is a useful technology. Recently, numerical weather predictions have been applied to forecast PV power production, but the forecasted values invariably contain forecast errors specific to each modeling system, so the forecast data must be used with their errors taken into account. In this study, we attempted to estimate confidence intervals for hourly forecasts of global horizontal irradiance (GHI) obtained from a mesoscale model (MSM) developed by the Japan Meteorological Agency. In a recent study, we found that the GHI forecasts of the MSM have two systematic errors: first, the forecast errors depend on the clearness index, defined as the GHI divided by the extraterrestrial solar irradiance; second, the forecast errors have seasonal variations, with the GHI forecasts overestimated in winter and underestimated in summer. Information on the errors of the hourly GHI forecasts, that is, confidence intervals of the forecasts, is of great significance for an electric company planning the energy management of a system that includes many PV systems. Confidence intervals of the GHI forecasts are required both for a pinpoint area and for a relatively large area controlling the power system. For the relatively large area, a spatial-smoothing method was applied to both the observed and forecasted GHI values. The spatial-smoothing method narrowed the confidence intervals of the hourly GHI forecasts for an extreme event of the GHI forecast (a case of large forecast error) over the relatively large area of the Tokyo electric company (by approximately 68% compared with a pinpoint forecast). 
For more credible estimation of the confidence

  14. Approximate Confidence Intervals for Standardized Effect Sizes in the Two-Independent and Two-Dependent Samples Design

    ERIC Educational Resources Information Center

    Viechtbauer, Wolfgang

    2007-01-01

    Standardized effect sizes and confidence intervals thereof are extremely useful devices for comparing results across different studies using scales with incommensurable units. However, exact confidence intervals for standardized effect sizes can usually be obtained only via iterative estimation procedures. The present article summarizes several…

  15. ScoreRel CI: An Excel Program for Computing Confidence Intervals for Commonly Used Score Reliability Coefficients

    ERIC Educational Resources Information Center

    Barnette, J. Jackson

    2005-01-01

    An Excel program developed to assist researchers in the determination and presentation of confidence intervals around commonly used score reliability coefficients is described. The software includes programs to determine confidence intervals for Cronbach's alpha, Pearson r-based coefficients such as those used in test-retest and alternate forms…

  16. Accuracy in Parameter Estimation for Targeted Effects in Structural Equation Modeling: Sample Size Planning for Narrow Confidence Intervals

    ERIC Educational Resources Information Center

    Lai, Keke; Kelley, Ken

    2011-01-01

    In addition to evaluating a structural equation model (SEM) as a whole, often the model parameters are of interest and confidence intervals for those parameters are formed. Given a model with a good overall fit, it is entirely possible for the targeted effects of interest to have very wide confidence intervals, thus giving little information about…

  17. Students' Conceptual Metaphors Influence Their Statistical Reasoning about Confidence Intervals. WCER Working Paper No. 2008-5

    ERIC Educational Resources Information Center

    Grant, Timothy S.; Nathan, Mitchell J.

    2008-01-01

    Confidence intervals are beginning to play an increasing role in the reporting of research findings within the social and behavioral sciences and, consequently, are becoming more prevalent in beginning classes in statistics and research methods. Confidence intervals are an attractive means of conveying experimental results, as they contain a…

  18. Accuracy in Parameter Estimation for the Root Mean Square Error of Approximation: Sample Size Planning for Narrow Confidence Intervals

    ERIC Educational Resources Information Center

    Kelley, Ken; Lai, Keke

    2011-01-01

    The root mean square error of approximation (RMSEA) is one of the most widely reported measures of misfit/fit in applications of structural equation modeling. When the RMSEA is of interest, so too should be the accompanying confidence interval. A narrow confidence interval reveals that the plausible parameter values are confined to a relatively…

  19. A Comparison of Various Stress Rupture Life Models for Orbiter Composite Pressure Vessels and Confidence Intervals

    NASA Technical Reports Server (NTRS)

    Grimes-Ledesma, Lorie; Murthy, Pappu L. N.; Phoenix, S. Leigh; Glaser, Ronald

    2006-01-01

    In conjunction with a recent NASA Engineering and Safety Center (NESC) investigation of flight worthiness of Kevlar Overwrapped Composite Pressure Vessels (COPVs) on board the Orbiter, two stress rupture life prediction models were proposed independently by Phoenix and by Glaser. In this paper, the use of these models to determine the system reliability of 24 COPVs currently in service on board the Orbiter is discussed. The models are briefly described, compared to each other, and model parameters and parameter errors are also reviewed to understand confidence in reliability estimation as well as the sensitivities of these parameters in influencing overall predicted reliability levels. Differences and similarities in the various models will be compared via stress rupture reliability curves (stress ratio vs. lifetime plots). Also outlined will be the differences in the underlying model premises, and predictive outcomes. Sources of error and sensitivities in the models will be examined and discussed based on sensitivity analysis and confidence interval determination. Confidence interval results and their implications will be discussed for the models by Phoenix and Glaser.

  20. A Comparison of Various Stress Rupture Life Models for Orbiter Composite Pressure Vessels and Confidence Intervals

    NASA Technical Reports Server (NTRS)

    Grimes-Ledesma, Lorie; Murthy, Pappu L. N.; Phoenix, S. Leigh; Glaser, Ronald

    2007-01-01

    In conjunction with a recent NASA Engineering and Safety Center (NESC) investigation of flight worthiness of Kevlar Overwrapped Composite Pressure Vessels (COPVs) on board the Orbiter, two stress rupture life prediction models were proposed independently by Phoenix and by Glaser. In this paper, the use of these models to determine the system reliability of 24 COPVs currently in service on board the Orbiter is discussed. The models are briefly described, compared to each other, and model parameters and parameter uncertainties are also reviewed to understand confidence in reliability estimation as well as the sensitivities of these parameters in influencing overall predicted reliability levels. Differences and similarities in the various models will be compared via stress rupture reliability curves (stress ratio vs. lifetime plots). Also outlined will be the differences in the underlying model premises, and predictive outcomes. Sources of error and sensitivities in the models will be examined and discussed based on sensitivity analysis and confidence interval determination. Confidence interval results and their implications will be discussed for the models by Phoenix and Glaser.

  1. Receiver operating characteristic analysis for intelligent medical systems--a new approach for finding confidence intervals.

    PubMed

    Tilbury, J B; Van Eetvelt, P W; Garibaldi, J M; Curnow, J S; Ifeachor, E C

    2000-07-01

    Intelligent systems are increasingly being deployed in medicine and healthcare, but there is a need for a robust and objective methodology for evaluating such systems. Potentially, receiver operating characteristic (ROC) analysis could form a basis for the objective evaluation of intelligent medical systems. However, it has several weaknesses when applied to the types of data used to evaluate intelligent medical systems. First, small data sets are often used, which are unsatisfactory with existing methods. Second, many existing ROC methods use parametric assumptions which may not always be valid for the test cases selected. Third, system evaluations are often more concerned with particular, clinically meaningful, points on the curve, rather than on global indexes such as the more commonly used area under the curve. A novel, robust and accurate method is proposed, derived from first principles, which calculates the probability density function (pdf) for each point on a ROC curve for any given sample size. Confidence intervals are produced as contours on the pdf. The theoretical work has been validated by Monte Carlo simulations. It has also been applied to two real-world examples of ROC analysis, taken from the literature (classification of mammograms and differential diagnosis of pancreatic diseases), to investigate the confidence surfaces produced for real cases, and to illustrate how analysis of system performance can be enhanced. We illustrate the impact of sample size on system performance from analysis of ROC pdf's and 95% confidence boundaries. This work establishes an important new method for generating pdf's, and provides an accurate and robust method of producing confidence intervals for ROC curves for the small sample sizes typical of intelligent medical systems. It is conjectured that, potentially, the method could be extended to determine risks associated with the deployment of intelligent medical systems in clinical practice.

  2. Amplitude estimation of a sine function based on confidence intervals and Bayes' theorem

    NASA Astrophysics Data System (ADS)

    Eversmann, D.; Pretz, J.; Rosenthal, M.

    2016-05-01

    This paper discusses the amplitude estimation using data originating from a sine-like function as probability density function. If a simple least squares fit is used, a significant bias is observed if the amplitude is small compared to its error. It is shown that a proper treatment using the Feldman-Cousins algorithm of likelihood ratios allows one to construct improved confidence intervals. Using Bayes' theorem a probability density function is derived for the amplitude. It is used in an application to show that it leads to better estimates compared to a simple least squares fit.

  3. Assessment of individual agreements with repeated measurements based on generalized confidence intervals.

    PubMed

    Quiroz, Jorge; Burdick, Richard K

    2009-01-01

    Individual agreement between two measurement systems is determined using the total deviation index (TDI) or the coverage probability (CP) criteria as proposed by Lin (2000) and Lin et al. (2002). We used a variance component model as proposed by Choudhary (2007). Using the bootstrap approach, Choudhary (2007), and generalized confidence intervals, we construct bounds on TDI and CP. A simulation study was conducted to assess whether the bounds maintain the stated type I error probability of the test. We also present a computational example to demonstrate the statistical methods described in the paper.

  4. Neural network based load and price forecasting and confidence interval estimation in deregulated power markets

    NASA Astrophysics Data System (ADS)

    Zhang, Li

    With the deregulation of the electric power market in New England, an independent system operator (ISO) has been separated from the New England Power Pool (NEPOOL). The ISO provides a regional spot market, with bids on various electricity-related products and services submitted by utilities and independent power producers. A utility can bid on the spot market and buy or sell electricity via bilateral transactions. Good estimation of market clearing prices (MCP) will help utilities and independent power producers determine bidding and transaction strategies with low risks, and this is crucial for utilities to compete in the deregulated environment. MCP prediction, however, is difficult since bidding strategies used by participants are complicated and MCP is a non-stationary process. The main objective of this research is to provide efficient short-term load and MCP forecasting and corresponding confidence interval estimation methodologies. In this research, the complexity of load and MCP with other factors is investigated, and neural networks are used to model the complex relationship between input and output. With improved learning algorithm and on-line update features for load forecasting, a neural network based load forecaster was developed, and has been in daily industry use since summer 1998 with good performance. MCP is volatile because of the complexity of market behaviors. In practice, neural network based MCP predictors usually have a cascaded structure, as several key input factors need to be estimated first. In this research, the uncertainties involved in a cascaded neural network structure for MCP prediction are analyzed, and prediction distribution under the Bayesian framework is developed. A fast algorithm to evaluate the confidence intervals by using the memoryless Quasi-Newton method is also developed. The traditional back-propagation algorithm for neural network learning needs to be improved since MCP is a non-stationary process. 
The extended Kalman

  5. Fast time-series prediction using high-dimensional data: Evaluating confidence interval credibility

    NASA Astrophysics Data System (ADS)

    Hirata, Yoshito

    2014-05-01

    I propose an index for evaluating the credibility of confidence intervals for future observables predicted from high-dimensional time-series data. The index evaluates the distance from the current state to the data manifold. I demonstrate the index with artificial datasets generated from the Lorenz'96 II model [Lorenz, in Proceedings of the Seminar on Predictability, Vol. 1 (ECMWF, Reading, UK, 1996), p. 1], the Lorenz'96 I model [Hansen and Smith, J. Atmos. Sci. 57, 2859 (2000), 10.1175/1520-0469(2000)057<2859:TROOCI>2.0.CO;2], and the coupled map lattice, and a real dataset for the solar irradiation around Japan.
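The index described above, the distance from the current state to the data manifold, can be approximated by the distance to the nearest recorded state vector. This nearest-neighbor approximation is our simplification for illustration, not necessarily the paper's exact construction:

```python
import math

def credibility_index(current_state, data_states):
    """Distance from the current state to the recorded data, used here as a
    nearest-neighbor stand-in for the distance to the data manifold; larger
    values flag less credible prediction confidence intervals."""
    return min(math.dist(current_state, x) for x in data_states)
```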

  6. Generalized additive models and Lucilia sericata growth: assessing confidence intervals and error rates in forensic entomology.

    PubMed

    Tarone, Aaron M; Foran, David R

    2008-07-01

    Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.

  7. An Algorithm for Efficient Maximum Likelihood Estimation and Confidence Interval Determination in Nonlinear Estimation Problems

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick Charles

    1985-01-01

    An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The algorithm was developed for airplane parameter estimation problems but is well suited for most nonlinear, multivariable, dynamic systems. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort. MNRES determines the sensitivities with less computational effort than using either a finite-difference method or integrating the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, thus eliminating algorithm reformulation with each new model and providing flexibility to use model equations in any format that is convenient. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. It is observed that the degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. The CR bounds were found to be close to the bounds determined by the search when the degree of nonlinearity was small. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels for the parameter confidence limits. The primary utility of the measure, however, was found to be in predicting the degree of agreement between Cramer-Rao bounds and search estimates.

  8. Empirical likelihood-based confidence intervals for length-biased data

    PubMed Central

    Ning, J.; Qin, J.; Asgharian, M.; Shen, Y.

    2013-01-01

    Logistic or other constraints often preclude the possibility of conducting incident cohort studies. A feasible alternative in such cases is to conduct a cross-sectional prevalent cohort study for which we recruit prevalent cases, i.e. subjects who have already experienced the initiating event, say the onset of a disease. When the interest lies in estimating the lifespan between the initiating event and a terminating event, say death for instance, such subjects may be followed prospectively until the terminating event or loss to follow-up, whichever happens first. It is well known that prevalent cases have, on average, longer lifespans. As such they do not constitute a representative random sample from the target population; they comprise a biased sample. If the initiating events are generated from a stationary Poisson process, the so-called stationarity assumption, this bias is called length bias. The current literature on length-biased sampling lacks a simple method for estimating the margin of errors of commonly used summary statistics. We fill this gap using the empirical likelihood-based confidence intervals by adapting this method to right-censored length-biased survival data. Both large and small sample behaviors of these confidence intervals are studied. We illustrate our method using a set of data on survival with dementia, collected as part of the Canadian Study of Health and Aging. PMID:23027662

  9. Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations.

    PubMed

    Greenland, Sander; Senn, Stephen J; Rothman, Kenneth J; Carlin, John B; Poole, Charles; Goodman, Steven N; Altman, Douglas G

    2016-04-01

    Misinterpretation and abuse of statistical tests, confidence intervals, and statistical power have been decried for decades, yet remain rampant. A key problem is that there are no interpretations of these concepts that are at once simple, intuitive, correct, and foolproof. Instead, correct use and interpretation of these statistics requires an attention to detail which seems to tax the patience of working scientists. This high cognitive demand has led to an epidemic of shortcut definitions and interpretations that are simply wrong, sometimes disastrously so-and yet these misinterpretations dominate much of the scientific literature. In light of this problem, we provide definitions and a discussion of basic statistics that are more general and critical than typically found in traditional introductory expositions. Our goal is to provide a resource for instructors, researchers, and consumers of statistics whose knowledge of statistical theory and technique may be limited but who wish to avoid and spot misinterpretations. We emphasize how violation of often unstated analysis protocols (such as selecting analyses for presentation based on the P values they produce) can lead to small P values even if the declared test hypothesis is correct, and can lead to large P values even if that hypothesis is incorrect. We then provide an explanatory list of 25 misinterpretations of P values, confidence intervals, and power. We conclude with guidelines for improving statistical interpretation and reporting. PMID:27209009

  10. Accurate estimation of normal incidence absorption coefficients with confidence intervals using a scanning laser Doppler vibrometer

    NASA Astrophysics Data System (ADS)

    Vuye, Cedric; Vanlanduit, Steve; Guillaume, Patrick

    2009-06-01

    When using optical measurements of the sound fields inside a glass tube, near the material under test, to estimate the reflection and absorption coefficients, not only these acoustical parameters but also confidence intervals can be determined. The sound fields are visualized using a scanning laser Doppler vibrometer (SLDV). In this paper the influence of different test signals on the quality of the results, obtained with this technique, is examined. The amount of data gathered during one measurement scan makes a thorough statistical analysis possible leading to the knowledge of confidence intervals. The use of a multi-sine, constructed on the resonance frequencies of the test tube, shows to be a very good alternative for the traditional periodic chirp. This signal offers the ability to obtain data for multiple frequencies in one measurement, without the danger of a low signal-to-noise ratio. The variability analysis in this paper clearly shows the advantages of the proposed multi-sine compared to the periodic chirp. The measurement procedure and the statistical analysis are validated by measuring the reflection ratio at a closed end and comparing the results with the theoretical value. Results of the testing of two building materials (an acoustic ceiling tile and linoleum) are presented and compared to supplier data.

  11. A comparison of confidence interval methods for the intraclass correlation coefficient in cluster randomized trials.

    PubMed

    Ukoumunne, Obioha C

    2002-12-30

    This study compared different methods for assigning confidence intervals to the analysis of variance estimator of the intraclass correlation coefficient (rho). The context of the comparison was the use of rho to estimate the variance inflation factor when planning cluster randomized trials. The methods were compared using Monte Carlo simulations of unbalanced clustered data and data from a cluster randomized trial of an intervention to improve the management of asthma in a general practice setting. The coverage and precision of the intervals were compared for data with different numbers of clusters, mean numbers of subjects per cluster and underlying values of rho. The performance of the methods was also compared for data with Normal and non-Normally distributed cluster specific effects. Results of the simulations showed that methods based upon the variance ratio statistic provided greater coverage levels than those based upon large sample approximations to the standard error of rho. Searle's method provided close to nominal coverage for data with Normally distributed random effects. Adjusted versions of Searle's method to allow for lack of balance in the data generally did not improve upon it either in terms of coverage or precision. Analyses of the trial data, however, showed that limits provided by Thomas and Hultquist's method may differ from those of the other variance ratio statistic methods when the arithmetic mean differs markedly from the harmonic mean cluster size. The simulation results demonstrated that marked non-Normality in the cluster level random effects compromised the performance of all methods. Confidence intervals for the methods were generally wide relative to the underlying size of rho suggesting that there may be great uncertainty associated with sample size calculations for cluster trials where large clusters are randomized. Data from cluster based studies with sample sizes much larger than those typical of cluster randomized trials are

  12. Optimal Averaging of Seasonal Sea Surface Temperatures and Associated Confidence Intervals (1860-1989).

    NASA Astrophysics Data System (ADS)

    Smith, Thomas M.; Reynolds, Richard W.; Ropelewski, Chester F.

    1994-06-01

    Optimal averaging (OA) is used to compute the area-average seasonal sea surface temperature (SST) for a variety of areas from 1860 to 1989. The OA gives statistically improved averages and the objective assignment of confidence intervals to these averages. The ability to assign confidence intervals is the main advantage of this method. Confidence intervals reflect how densely and uniformly an area is sampled during the averaging season. For the global average, the early part of the record (1860-1890) and the times of the two world wars have the largest uncertainties. Analysis of OA-based uncertainty estimates shows that before 1930 sampling in the Southern Hemisphere was as good as it was in the Northern Hemisphere. From about 1930 to 1950, uncertainties decreased in both hemispheres, but the magnitude of the Northern Hemisphere uncertainties decreased more and remained smaller. After the early 1950s uncertainties were relatively constant in both hemispheres, indicating that sampling was relatively consistent over the period. During the two world wars, increased uncertainties reflected the sampling decreases over all the oceans, with the biggest decreases south of 40°S. The OA global SST anomalies are virtually identical to estimates of global SST anomalies computed using simpler methods, when the same data corrections are applied. When data are plentiful over an area there is no clear advantage of the OA over simpler methods. The major advantage of the OA over the simpler methods is the accompanying error estimates. The OA analysis suggests that SST anomalies were not significantly different from 0 from 1860 to 1900. This result is heavily influenced by the choice of the data corrections applied before the 1950s. Global anomalies are also near zero from 1940 until the mid-1970s. 
The OA analysis suggests that negative anomalies dominated the period from the early 1900s through the 1930s although the uncertainties are quite large during and immediately following World War

  13. Safety evaluation and confidence intervals when the number of observed events is small or zero.

    PubMed

    Jovanovic, B D; Zalenski, R J

    1997-09-01

    A common objective in many clinical studies is to determine the safety of a diagnostic test or therapeutic intervention. In these evaluations, serious adverse effects are either rare or not encountered. In this setting, the estimation of the confidence interval (CI) for the unknown proportion of adverse events has special importance. When no adverse events are encountered, commonly used approximate methods for calculating CIs cannot be applied, and such information is not commonly reported. Furthermore, when only a few adverse events are encountered, the approximate methods for calculation of CIs can be applied, but are neither appropriate nor accurate. In both situations, CIs should be computed with the use of the exact binomial distribution. We discuss the need for such estimation and provide correct methods and rules of thumb for quick computations of accurate approximations of the 95% and 99.9% CIs when the observed number of adverse events is zero. PMID:9287891
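The rules of thumb the authors refer to follow from the exact binomial bound when zero events are observed: at 95% the upper limit is approximately 3/n (the familiar "rule of three"), and at 99.9% approximately 7/n. A sketch with our own function names:

```python
import math

def exact_upper_bound(n: int, confidence: float = 0.95) -> float:
    """Exact one-sided upper confidence bound for the event probability when
    0 events are observed in n trials: solve (1 - p)**n = 1 - confidence."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n)

def approx_upper_bound(n: int, confidence: float = 0.95) -> float:
    """Quick approximation -ln(1 - confidence) / n; at 95% this is the
    'rule of three' (about 3/n), and at 99.9% about 7/n."""
    return -math.log(1.0 - confidence) / n
```

For n = 100 trials with no adverse events, both give an upper 95% bound of roughly 0.03, i.e. the true event rate could still be as high as 3%.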

  14. Analytic Monte Carlo score distributions for future statistical confidence interval studies

    SciTech Connect

    Booth, T.E.

    1992-10-01

    The interpretation of the statistical error estimates produced by Monte Carlo transport codes is still somewhat of an art. Empirically, there are variance reduction techniques whose error estimates are almost always reliable, and there are variance reduction techniques whose error estimates are often unreliable. Unreliable error estimates usually result from inadequate sampling of large scores from the tail of the score distribution. Statisticians believe that more accurate confidence interval statements are possible if the general nature of the score distribution can be characterized. The analytic score distribution for geometry splitting/Russian roulette applied to a simple Monte Carlo problem and the analytic score distribution for the exponential transform applied to the same Monte Carlo problem are provided in this paper.

  15. Confidence intervals for the symmetry point: an optimal cutpoint in continuous diagnostic tests.

    PubMed

    López-Ratón, Mónica; Cadarso-Suárez, Carmen; Molanes-López, Elisa M; Letón, Emilio

    2016-01-01

    Continuous diagnostic tests are often used for discriminating between healthy and diseased populations. For this reason, it is useful to select an appropriate discrimination threshold. There are several optimality criteria: the North-West corner, the Youden index, the concordance probability and the symmetry point, among others. In this paper, we focus on the symmetry point that maximizes simultaneously the two types of correct classifications. We construct confidence intervals for this optimal cutpoint and its associated specificity and sensitivity indexes using two approaches: one based on the generalized pivotal quantity and the other on empirical likelihood. We perform a simulation study to check the practical behaviour of both methods and illustrate their use by means of three real biomedical datasets on melanoma, prostate cancer and coronary artery disease. PMID:26756550
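The symmetry point can be illustrated with a simple empirical estimate: scan candidate cutpoints and keep the one where sensitivity and specificity are closest. This is only a sketch of the point estimate (the article's contribution is the confidence intervals via generalized pivotal quantities and empirical likelihood, which are not reproduced here); the function name and toy data are illustrative.

```python
def symmetry_point(healthy, diseased):
    """Empirical cutpoint c at which sensitivity is closest to specificity.
    Assumes diseased subjects tend to have HIGHER test values."""
    candidates = sorted(set(healthy) | set(diseased))
    best_c, best_gap = None, float("inf")
    for c in candidates:
        se = sum(x > c for x in diseased) / len(diseased)   # sensitivity
        sp = sum(x <= c for x in healthy) / len(healthy)    # specificity
        if abs(se - sp) < best_gap:
            best_c, best_gap = c, abs(se - sp)
    return best_c

healthy = [1, 2, 2, 3, 3, 4, 5]
diseased = [4, 5, 5, 6, 7, 7, 8]
c = symmetry_point(healthy, diseased)  # cutpoint where Se == Sp == 6/7
```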

  16. Statistical variability and confidence intervals for planar dose QA pass rates

    SciTech Connect

    Bailey, Daniel W.; Nelms, Benjamin E.; Attwood, Kristopher; Kumaraswamy, Lalith; Podgorsak, Matthew B.

    2011-11-15

    Purpose: The most common metric for comparing measured to calculated dose, such as for pretreatment quality assurance of intensity-modulated photon fields, is a pass rate (%) generated using percent difference (%Diff), distance-to-agreement (DTA), or some combination of the two (e.g., gamma evaluation). For many dosimeters, the grid of analyzed points corresponds to an array with a low areal density of point detectors. In these cases, the pass rates for any given comparison criteria are not absolute but exhibit statistical variability that is a function, in part, on the detector sampling geometry. In this work, the authors analyze the statistics of various methods commonly used to calculate pass rates and propose methods for establishing confidence intervals for pass rates obtained with low-density arrays. Methods: Dose planes were acquired for 25 prostate and 79 head and neck intensity-modulated fields via diode array and electronic portal imaging device (EPID), and matching calculated dose planes were created via a commercial treatment planning system. Pass rates for each dose plane pair (both centered to the beam central axis) were calculated with several common comparison methods: %Diff/DTA composite analysis and gamma evaluation, using absolute dose comparison with both local and global normalization. Specialized software was designed to selectively sample the measured EPID response (very high data density) down to discrete points to simulate low-density measurements. The software was used to realign the simulated detector grid at many simulated positions with respect to the beam central axis, thereby altering the low-density sampled grid. Simulations were repeated with 100 positional iterations using a 1 detector/cm{sup 2} uniform grid, a 2 detector/cm{sup 2} uniform grid, and similar random detector grids. For each simulation, %/DTA composite pass rates were calculated with various %Diff/DTA criteria and for both local and global %Diff normalization

  17. Confidence intervals for two sample means: Calculation, interpretation, and a few simple rules

    PubMed Central

    Pfister, Roland; Janczyk, Markus

    2013-01-01

    Valued by statisticians, enforced by editors, and confused by many authors, standard errors (SEs) and confidence intervals (CIs) remain a controversial issue in the psychological literature. This is especially true for the proper use of CIs for within-subjects designs, even though several recent publications elaborated on possible solutions for this case. The present paper presents a short and straightforward introduction to the basic principles of CI construction, in an attempt to encourage students and researchers in cognitive psychology to use CIs in their reports and presentations. Focusing on a simple but prevalent case of statistical inference, the comparison of two sample means, we describe possible CIs for between- and within-subjects designs. In addition, we give hands-on examples of how to compute these CIs and discuss their relation to classical t-tests. PMID:23826038
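For the between-subjects case the paper discusses, a CI for the difference of two independent sample means can be sketched with the standard-error formula. This is a normal-approximation version using only the Python standard library (a t-based interval, as recommended for small samples, would be slightly wider); the data are invented for illustration.

```python
from statistics import NormalDist, mean, stdev

def diff_means_ci(a, b, conf=0.95):
    """Normal-approximation CI for mean(a) - mean(b), independent samples.
    SE is the root of the summed per-group variances of the means."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    d = mean(a) - mean(b)
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    return d - z * se, d + z * se

a = [5.1, 4.9, 5.6, 5.2, 4.8, 5.4]
b = [4.5, 4.7, 4.2, 4.9, 4.4, 4.6]
lo, hi = diff_means_ci(a, b)  # interval excludes 0 for these toy data
```

If the interval excludes zero, the corresponding two-sided test at the same level rejects equality of means, which is the CI/t-test correspondence the paper elaborates.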

  18. BootES: an R package for bootstrap confidence intervals on effect sizes.

    PubMed

    Kirby, Kris N; Gerlanc, Daniel

    2013-12-01

    Bootstrap Effect Sizes (bootES; Gerlanc & Kirby, 2012) is a free, open-source software package for R (R Development Core Team, 2012), which is a language and environment for statistical computing. BootES computes both unstandardized and standardized effect sizes (such as Cohen's d, Hedges's g, and Pearson's r) and makes easily available for the first time the computation of their bootstrap confidence intervals (CIs). In this article, we illustrate how to use bootES to find effect sizes for contrasts in between-subjects, within-subjects, and mixed factorial designs and to find bootstrap CIs for correlations and differences between correlations. An appendix gives a brief introduction to R that will allow readers to use bootES without having prior knowledge of R.

  19. Comparing the toxicity of two drugs in the framework of spontaneous reporting: a confidence interval approach.

    PubMed

    Tubert-Bitter, P; Begaud, B; Moride, Y; Chaslerie, A; Haramburu, F

    1996-01-01

    Spontaneous reporting remains the most frequently used technique in post-marketing surveillance. Decision-making usually depends on comparisons between the number of adverse drug reactions (ADRs) reported for two drugs on the basis of an equivalent number of prescriptions. The validity of such comparisons is expected to be jeopardized by probable underreporting of ADR cases. This problem is accentuated when it cannot be assumed that the magnitude of underreporting is the same for both drugs. Differences in reporting ratios can overemphasize, cancel, or reverse the conclusions of a statistical comparison based on the number of reports. We propose a single method for (1) calculating confidence intervals for relative risks estimated in the context of spontaneous reporting and (2) deriving the range of reporting ratios for which the conclusion of the statistical comparison remains statistically valid. PMID:8598505

  20. Test Statistics and Confidence Intervals to Establish Noninferiority between Treatments with Ordinal Categorical Data.

    PubMed

    Zhang, Fanghong; Miyaoka, Etsuo; Huang, Fuping; Tanaka, Yutaka

    2015-01-01

    The problem of establishing noninferiority of a new treatment relative to a standard (control) treatment with ordinal categorical data is discussed. A measure of treatment effect is used and a method of specifying the noninferiority margin for the measure is provided. Two Z-type test statistics are proposed where the estimation of variance is constructed under the shifted null hypothesis using U-statistics. Furthermore, the confidence interval and the sample size formula are given based on the proposed test statistics. The proposed procedure is applied to a dataset from a clinical trial. A simulation study is conducted to compare the performance of the proposed test statistics with that of the existing ones, and the results show that the proposed test statistics are better in terms of the deviation from nominal level and the power.

  1. Determination of confidence intervals in non-normal data: application of the bootstrap to cocaine concentration in femoral blood.

    PubMed

    Desharnais, Brigitte; Camirand-Lemyre, Félix; Mireault, Pascal; Skinner, Cameron D

    2015-03-01

    Calculating the confidence interval is a common procedure in data analysis and is readily obtained from normally distributed populations with the familiar [Formula: see text] formula. However, when working with non-normally distributed data, determining the confidence interval is not as obvious. For this type of data, there are fewer references in the literature, and they are much less accessible. We describe, in simple language, the percentile and bias-corrected and accelerated variations of the bootstrap method to calculate confidence intervals. This method can be applied to a wide variety of parameters (mean, median, slope of a calibration curve, etc.) and is appropriate for normal and non-normal data sets. As a worked example, the confidence interval around the median concentration of cocaine in femoral blood is calculated using bootstrap techniques. The median of the non-toxic concentrations was 46.7 ng/mL with a 95% confidence interval of 23.9-85.8 ng/mL in the non-normally distributed set of 45 postmortem cases. This method should be used to lead to more statistically sound and accurate confidence intervals for non-normally distributed populations, such as reference values of therapeutic and toxic drug concentration, as well as situations of truncated concentration values near the limit of quantification or cutoff of a method.
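The percentile bootstrap the authors describe is straightforward to sketch: resample the data with replacement many times, compute the statistic on each resample, and take the α/2 and 1 − α/2 quantiles of the resulting distribution. A minimal standard-library version (the toy concentrations are invented, not the article's femoral-blood data; the BCa variant adds bias and acceleration corrections not shown here):

```python
import random
from statistics import median

def bootstrap_percentile_ci(data, stat=median, n_boot=5000, conf=0.95, seed=1):
    """Percentile bootstrap CI: resample with replacement, compute the
    statistic on each resample, take the (1-conf)/2 and (1+conf)/2
    quantiles of the sorted bootstrap statistics."""
    rng = random.Random(seed)
    boots = sorted(stat(rng.choices(data, k=len(data))) for _ in range(n_boot))
    lo_i = int((1 - conf) / 2 * n_boot)
    hi_i = int((1 + conf) / 2 * n_boot) - 1
    return boots[lo_i], boots[hi_i]

# Skewed toy data standing in for non-normal concentrations (ng/mL)
data = [12, 15, 19, 23, 30, 35, 41, 47, 52, 60, 75, 90, 120, 150, 210]
lo, hi = bootstrap_percentile_ci(data)
```

Because no distributional form is assumed, the same function works for medians, slopes, or any other statistic passed as `stat`.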

  2. Maximum likelihood algorithm using an efficient scheme for computing sensitivities and parameter confidence intervals

    NASA Technical Reports Server (NTRS)

    Murphy, P. C.; Klein, V.

    1984-01-01

    Improved techniques for estimating airplane stability and control derivatives and their standard errors are presented. A maximum likelihood estimation algorithm is developed which relies on an optimization scheme referred to as a modified Newton-Raphson scheme with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort compared to integrating the analytically-determined sensitivity equations or using a finite difference scheme. An aircraft estimation problem is solved using real flight data to compare MNRES with the commonly used modified Newton-Raphson technique; MNRES is found to be faster and more generally applicable. Parameter standard errors are determined using a random search technique. The confidence intervals obtained are compared with Cramer-Rao lower bounds at the same confidence level. It is observed that the nonlinearity of the cost function is an important factor in the relationship between Cramer-Rao bounds and the error bounds determined by the search technique.

  3. Accuracy in Parameter Estimation for the Root Mean Square Error of Approximation: Sample Size Planning for Narrow Confidence Intervals.

    PubMed

    Kelley, Ken; Lai, Keke

    2011-02-01

    The root mean square error of approximation (RMSEA) is one of the most widely reported measures of misfit/fit in applications of structural equation modeling. When the RMSEA is of interest, so too should be the accompanying confidence interval. A narrow confidence interval reveals that the plausible parameter values are confined to a relatively small range at the specified level of confidence. The accuracy in parameter estimation approach to sample size planning is developed for the RMSEA so that the confidence interval for the population RMSEA will have a width whose expectation is sufficiently narrow. Analytic developments are shown to work well with a Monte Carlo simulation study. Freely available computer software is developed so that the methods discussed can be implemented. The methods are demonstrated for a repeated measures design where the way in which social relationships and initial depression influence coping strategies and later depression is examined.

  4. Analysis of accuracy of approximate, simultaneous, nonlinear confidence intervals on hydraulic heads in analytical and numerical test cases

    USGS Publications Warehouse

    Hill, M.C.

    1989-01-01

    Inaccuracies in parameter values, parameterization, stresses, and boundary conditions of analytical solutions and numerical models of groundwater flow produce errors in simulated hydraulic heads. These errors can be quantified in terms of approximate, simultaneous, nonlinear confidence intervals presented in the literature. Approximate confidence intervals can be applied in both error and sensitivity analysis and can be used prior to calibration or when calibration was accomplished by trial and error. The method is expanded for use in numerical problems, and the accuracy of the approximate intervals is evaluated using Monte Carlo runs. Four test cases are reported. -from Author

  5. Confidence intervals after multiple imputation: combining profile likelihood information from logistic regressions.

    PubMed

    Heinze, Georg; Ploner, Meinhard; Beyea, Jan

    2013-12-20

    In the logistic regression analysis of a small-sized, case-control study on Alzheimer's disease, some of the risk factors exhibited missing values, motivating the use of multiple imputation. Usually, Rubin's rules (RR) for combining point estimates and variances would then be used to estimate (symmetric) confidence intervals (CIs), on the assumption that the regression coefficients were distributed normally. Yet, rarely is this assumption tested, with or without transformation. In analyses of small, sparse, or nearly separated data sets, such symmetric CI may not be reliable. Thus, RR alternatives have been considered, for example, Bayesian sampling methods, but not yet those that combine profile likelihoods, particularly penalized profile likelihoods, which can remove first order biases and guarantee convergence of parameter estimation. To fill the gap, we consider the combination of penalized likelihood profiles (CLIP) by expressing them as posterior cumulative distribution functions (CDFs) obtained via a chi-squared approximation to the penalized likelihood ratio statistic. CDFs from multiple imputations can then easily be averaged into a combined CDFc, allowing confidence limits for a parameter β at level 1 − α to be identified as those β* and β** that satisfy CDFc(β*) = α/2 and CDFc(β**) = 1 − α/2. We demonstrate that the CLIP method outperforms RR in analyzing both simulated data and data from our motivating example. CLIP can also be useful as a confirmatory tool, should it show that the simpler RR are adequate for extended analysis. We also compare the performance of CLIP to Bayesian sampling methods using Markov chain Monte Carlo. CLIP is available in the R package logistf. PMID:23873477

  6. Applications of asymptotic confidence intervals with continuity corrections for asymmetric comparisons in noninferiority trials.

    PubMed

    Soulakova, Julia N; Bright, Brianna C

    2013-01-01

    A large-sample problem of illustrating noninferiority of an experimental treatment over a referent treatment for binary outcomes is considered. The methods of illustrating noninferiority involve constructing the lower two-sided confidence bound for the difference between binomial proportions corresponding to the experimental and referent treatments and comparing it with the negative value of the noninferiority margin. The three considered methods, Anbar, Falk-Koch, and Reduced Falk-Koch, handle the comparison in an asymmetric way, that is, only the referent proportion out of the two, experimental and referent, is directly involved in the expression for the variance of the difference between two sample proportions. Five continuity corrections (including zero) are considered with respect to each approach. The key properties of the corresponding methods are evaluated via simulations. First, the uncorrected two-sided confidence intervals can, potentially, have smaller coverage probability than the nominal level even for moderately large sample sizes, for example, 150 per group. Next, the 15 testing methods are discussed in terms of their Type I error rate and power. In the settings with a relatively small referent proportion (about 0.4 or smaller), the Anbar approach with Yates' continuity correction is recommended for balanced designs and the Falk-Koch method with Yates' correction is recommended for unbalanced designs. For relatively moderate (about 0.6) and large (about 0.8 or greater) referent proportion, the uncorrected Reduced Falk-Koch method is recommended, although in this case, all methods tend to be over-conservative. These results are expected to be used in the design stage of a noninferiority study when asymmetric comparisons are envisioned.
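The decision rule common to all three methods can be sketched in a few lines: compute the lower limit of the two-sided CI for the difference in proportions and compare it with the negative noninferiority margin. The sketch below uses the plain Wald variance for simplicity; the Anbar and Falk-Koch variants modify exactly that variance term (and the abstract's continuity corrections are omitted). Counts and margin are invented for illustration.

```python
from statistics import NormalDist

def noninferior(x_exp, n_exp, x_ref, n_ref, margin, conf=0.95):
    """Declare noninferiority if the lower limit of the two-sided CI for
    p_exp - p_ref lies above -margin.  Plain Wald variance shown here."""
    pe, pr = x_exp / n_exp, x_ref / n_ref
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    se = (pe * (1 - pe) / n_exp + pr * (1 - pr) / n_ref) ** 0.5
    lower = (pe - pr) - z * se
    return lower > -margin, lower

# 85/100 responders on experimental vs 80/100 on referent, margin 0.10
ok, lower = noninferior(85, 100, 80, 100, margin=0.10)
```

Here the lower bound (≈ −0.055) stays above −0.10, so noninferiority would be declared under this simple rule.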

  7. A comparison of methods for the construction of confidence interval for relative risk in stratified matched-pair designs.

    PubMed

    Tang, Nian-Sheng; Li, Hui-Qiong; Tang, Man-Lai

    2010-01-15

    A stratified matched-pair study is often designed for adjusting a confounding effect or effect of different trials/centers/groups in modern medical studies. The relative risk is one of the most frequently used indices in comparing efficiency of two treatments in clinical trials. In this paper, we propose seven confidence interval estimators for the common relative risk and three simultaneous confidence interval estimators for the relative risks in stratified matched-pair designs. The performance of the proposed methods is evaluated with respect to their type I error rates, powers, coverage probabilities, and expected widths. Our empirical results show that the percentile bootstrap confidence interval and bootstrap-resampling-based Bonferroni simultaneous confidence interval behave satisfactorily for small to large sample sizes in the sense that (i) their empirical coverage probabilities can be well controlled around the pre-specified nominal confidence level with reasonably shorter confidence widths; and (ii) the empirical type I error rates of their associated test statistics are generally closer to the pre-specified nominal level with larger powers. They are hence recommended. Two real examples from clinical laboratory studies are used to illustrate the proposed methodologies.

  8. Confidence Intervals Permit, but Do Not Guarantee, Better Inference than Statistical Significance Testing

    PubMed Central

    Coulson, Melissa; Healey, Michelle; Fidler, Fiona; Cumming, Geoff

    2010-01-01

    A statistically significant result and a non-significant result may differ little, although significance status may tempt an interpretation of difference. Two studies are reported that compared interpretation of such results presented using null hypothesis significance testing (NHST), or confidence intervals (CIs). Authors of articles published in psychology, behavioral neuroscience, and medical journals were asked, via email, to interpret two fictitious studies that found similar results, one statistically significant, and the other non-significant. Responses from 330 authors varied greatly, but interpretation was generally poor, whether results were presented as CIs or using NHST. However, when interpreting CIs respondents who mentioned NHST were 60% likely to conclude, unjustifiably, that the two results conflicted, whereas those who interpreted CIs without reference to NHST were 95% likely to conclude, justifiably, that the two results were consistent. Findings were generally similar for all three disciplines. An email survey of academic psychologists confirmed that CIs elicit better interpretations if NHST is not invoked. Improved statistical inference can result from encouragement of meta-analytic thinking and use of CIs but, for full benefit, such highly desirable statistical reform requires also that researchers interpret CIs without recourse to NHST. PMID:21607077

  9. Confidence interval procedures for system reliability and applications to competing risks models.

    PubMed

    Hong, Yili; Meeker, William Q

    2014-04-01

    System reliability depends on the reliability of the system's components and the structure of the system. For example, in a competing risks model, the system fails when the weakest component fails. The reliability function and the quantile function of a complicated system are two important metrics for characterizing the system's reliability. When there are data available at the component level, the system reliability can be estimated by using the component level information. Confidence intervals (CIs) are needed to quantify the statistical uncertainty in the estimation. Obtaining system reliability CI procedures with good properties is not straightforward, especially when the system structure is complicated. In this paper, we develop a general procedure for constructing a CI for the system failure-time quantile function by using the implicit delta method. We also develop general procedures for constructing a CI for the cumulative distribution function (cdf) of the system. We show that the recommended procedures are asymptotically valid and have good statistical properties. We conduct simulations to study the finite-sample coverage properties of the proposed procedures and compare them with existing procedures. We apply the proposed procedures to three applications; two applications in competing risks models and an application with a k-out-of-s system. The paper concludes with some discussion and an outline of areas for future research.

  10. Reliability and Confidence Interval Analysis of a CMC Turbine Stator Vane

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.; Gyekenyesi, John P.; Mital, Subodh K.

    2008-01-01

    an economical manner. The methods to accurately determine the service life of an engine component with associated variability have become increasingly difficult. This results, in part, from the complex missions which are now routinely considered during the design process. These missions include large variations of multi-axial stresses and temperatures experienced by critical engine parts. There is a need for a convenient design tool that can accommodate various loading conditions induced by engine operating environments, and material data with their associated uncertainties to estimate the minimum predicted life of a structural component. A probabilistic composite micromechanics technique in combination with woven composite micromechanics, structural analysis and Fast Probability Integration (FPI) techniques has been used to evaluate the maximum stress and its probabilistic distribution in a CMC turbine stator vane. Furthermore, input variables causing scatter are identified and ranked based upon their sensitivity magnitude. Since the measured data for the ceramic matrix composite properties is very limited, obtaining a probabilistic distribution with their corresponding parameters is difficult. In the case of limited data, confidence bounds are essential to quantify the uncertainty associated with the distribution. Usually, 90 and 95% confidence intervals are computed for material properties. Failure properties are then computed with the confidence bounds. Best estimates and the confidence bounds on the best estimate of the cumulative probability function for R-S (strength - stress) are plotted. The methodologies and the results from these analyses will be discussed in the presentation.

  11. Adjusted Wald Confidence Interval for a Difference of Binomial Proportions Based on Paired Data

    ERIC Educational Resources Information Center

    Bonett, Douglas G.; Price, Robert M.

    2012-01-01

    Adjusted Wald intervals for binomial proportions in one-sample and two-sample designs have been shown to perform about as well as the best available methods. The adjusted Wald intervals are easy to compute and have been incorporated into introductory statistics courses. An adjusted Wald interval for paired binomial proportions is proposed here and…
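The adjustment idea is easiest to see in the one-sample case the abstract mentions: add z²/2 pseudo-successes and z²/2 pseudo-failures (roughly "add 2 successes and 2 failures" at the 95% level), then apply the ordinary Wald formula. The paired-proportions interval proposed in the article adjusts the 2×2 table counts in an analogous way, which is not reproduced here. A sketch of the one-sample (Agresti-Coull style) version:

```python
from statistics import NormalDist

def adjusted_wald_ci(x, n, conf=0.95):
    """One-sample adjusted Wald interval: add z^2/2 successes and z^2/2
    failures, then apply the ordinary Wald formula to the adjusted counts.
    Endpoints are clipped to [0, 1]."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    n_adj = n + z * z
    p_adj = (x + z * z / 2) / n_adj
    half = z * (p_adj * (1 - p_adj) / n_adj) ** 0.5
    return max(0.0, p_adj - half), min(1.0, p_adj + half)

# 0 successes in 20 trials: plain Wald collapses to [0, 0];
# the adjusted interval stays informative
lo, hi = adjusted_wald_ci(0, 20)
```

The adjustment is what keeps the interval from degenerating at the boundary, the main weakness of the unadjusted Wald interval.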

  12. Confidence Intervals for Effect Sizes: Compliance and Clinical Significance in the "Journal of Consulting and Clinical Psychology"

    ERIC Educational Resources Information Center

    Odgaard, Eric C.; Fowler, Robert L.

    2010-01-01

    Objective: In 2005, the "Journal of Consulting and Clinical Psychology" ("JCCP") became the first American Psychological Association (APA) journal to require statistical measures of clinical significance, plus effect sizes (ESs) and associated confidence intervals (CIs), for primary outcomes (La Greca, 2005). As this represents the single largest…

  13. Population Validity and Cross-Validity: Applications of Distribution Theory for Testing Hypotheses, Setting Confidence Intervals, and Determining Sample Size

    ERIC Educational Resources Information Center

    Algina, James; Keselman, H. J.

    2008-01-01

    Applications of distribution theory for the squared multiple correlation coefficient and the squared cross-validation coefficient are reviewed, and computer programs for these applications are made available. The applications include confidence intervals, hypothesis testing, and sample size selection. (Contains 2 tables.)

  14. Sample Size Planning for the Squared Multiple Correlation Coefficient: Accuracy in Parameter Estimation via Narrow Confidence Intervals

    ERIC Educational Resources Information Center

    Kelley, Ken

    2008-01-01

    Methods of sample size planning are developed from the accuracy in parameter approach in the multiple regression context in order to obtain a sufficiently narrow confidence interval for the population squared multiple correlation coefficient when regressors are random. Approximate and exact methods are developed that provide necessary sample size…

  15. On the appropriateness of applying chi-square distribution based confidence intervals to spectral estimates of helicopter flyover data

    NASA Technical Reports Server (NTRS)

    Rutledge, Charles K.

    1988-01-01

    The validity of applying chi-square based confidence intervals to far-field acoustic flyover spectral estimates was investigated. Simulated data, using a Kendall series and experimental acoustic data from the NASA/McDonnell Douglas 500E acoustics test, were analyzed. Statistical significance tests to determine the equality of distributions of the simulated and experimental data relative to theoretical chi-square distributions were performed. Bias and uncertainty errors associated with the spectral estimates were easily identified from the data sets. A model relating the uncertainty and bias errors to the estimates resulted, which aided in determining the appropriateness of the chi-square distribution based confidence intervals. Such confidence intervals were appropriate for nontonally associated frequencies of the experimental data but were inappropriate for tonally associated estimate distributions. The inappropriateness at the tonally associated frequencies was indicated by the presence of bias error and nonconformity of the distributions to the theoretical chi-square distribution. A technique for determining appropriate confidence intervals at the tonally associated frequencies was suggested.

  16. Confidence Intervals for the Probability of Superiority Effect Size Measure and the Area under a Receiver Operating Characteristic Curve

    ERIC Educational Resources Information Center

    Ruscio, John; Mullen, Tara

    2012-01-01

    It is good scientific practice to report an appropriate estimate of effect size and a confidence interval (CI) to indicate the precision with which a population effect was estimated. For comparisons of 2 independent groups, a probability-based effect size estimator (A) that is equal to the area under a receiver operating characteristic curve…

  17. UNDERSTANDING SYSTEMATIC MEASUREMENT ERROR IN THERMAL-OPTICAL ANALYSIS FOR PM BLACK CARBON USING RESPONSE SURFACES AND SURFACE CONFIDENCE INTERVALS

    EPA Science Inventory

    Results from a NIST-EPA Interagency Agreement on Understanding Systematic Measurement Error in Thermal-Optical Analysis for PM Black Carbon Using Response Surfaces and Surface Confidence Intervals will be presented at the American Association for Aerosol Research (AAAR) 24th Annu...

  18. Confidence intervals for population allele frequencies: the general case of sampling from a finite diploid population of any size.

    PubMed

    Fung, Tak; Keenan, Kevin

    2014-01-01

    The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥ 95%), a sample size of > 30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥ 98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥ 95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥ 95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management. PMID:24465792
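As a simpler large-population baseline for the finite-population intervals the article derives, a Wilson score interval for a sample allele frequency can be sketched as follows (the counts are invented; the article's method additionally accounts for the finite diploid source population, which this sketch does not):

```python
from statistics import NormalDist

def wilson_ci(x, n, conf=0.95):
    """Wilson score interval for a proportion -- here a sample allele
    frequency: x copies of the allele among n sampled gene copies.
    Assumes an effectively infinite source population."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    centre = (x + z * z / 2) / (n + z * z)
    half = z * ((x * (n - x) / n + z * z / 4) ** 0.5) / (n + z * z)
    return centre - half, centre + half

# 2N = 60 gene copies from 30 diploid individuals, 24 carrying the allele
lo, hi = wilson_ci(24, 60)  # interval around the sample frequency 0.4
```

Even at this sample size the interval spans roughly ±0.12 around the point estimate, consistent with the article's message that > 30 individuals are often needed for ±0.05 accuracy.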

  19. Proposal and validation of a method to construct confidence intervals for clinical outcomes around FROC curves for mammography CAD systems

    NASA Astrophysics Data System (ADS)

    Bornefalk, Hans

    2005-04-01

    This paper introduces a method for constructing confidence intervals for possible clinical outcomes around the FROC curve of a mammography CAD system. Given the architecture of a CAD classifying machine, there is one and only one system threshold that will yield a desired sensitivity on a certain population. The limited training sample size leads to a sampling error and an uncertainty in determining the optimal system threshold. This leads to an uncertainty in the operating point in the direction along the FROC curve which can be captured by a Bayesian approach where the distribution of possible thresholds is estimated. This uncertainty contributes to a large and spread-out confidence interval which is important to consider when one is intending to make comparisons between CAD algorithms trained on different data sets. The method is validated using a Monte Carlo method designed to capture the effect of correctly determining the system threshold.

  20. Approximate confidence intervals for moment-based estimators of the between-study variance in random effects meta-analysis.

    PubMed

    Jackson, Dan; Bowden, Jack; Baker, Rose

    2015-12-01

    Moment-based estimators of the between-study variance are very popular when performing random effects meta-analyses. This type of estimation has many advantages including computational and conceptual simplicity. Furthermore, by using these estimators in large samples, valid meta-analyses can be performed without the assumption that the treatment effects follow a normal distribution. Recently proposed moment-based confidence intervals for the between-study variance are exact under the random effects model but are quite elaborate. Here, we present a much simpler method for calculating approximate confidence intervals of this type. This method uses variance-stabilising transformations as its basis and can be used for a very wide variety of moment-based estimators in both the random effects meta-analysis and meta-regression models.
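The most widely used estimator of this kind is the DerSimonian-Laird moment estimator of the between-study variance; a minimal sketch of the point estimator the paper's intervals would be built around (not the variance-stabilised interval itself):

```python
import numpy as np

def dersimonian_laird_tau2(y, v):
    """Moment-based estimate of the between-study variance tau^2.
    y: study effect estimates; v: their within-study variances."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v
    mu_fixed = np.sum(w * y) / np.sum(w)      # fixed-effect pooled mean
    Q = np.sum(w * (y - mu_fixed) ** 2)       # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (Q - (len(y) - 1)) / c)   # truncate at zero
```

Note the estimator requires no distributional assumption on the true effects, which is the large-sample robustness property the abstract highlights.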

  1. A numerical approach to 14C wiggle-match dating of organic deposits: best fits and confidence intervals

    NASA Astrophysics Data System (ADS)

    Blaauw, Maarten; Heuvelink, Gerard B. M.; Mauquoy, Dmitri; van der Plicht, Johannes; van Geel, Bas

    2003-06-01

    14C wiggle-match dating (WMD) of peat deposits uses the non-linear relationship between 14C age and calendar age to match the shape of a sequence of closely spaced peat 14C dates with the 14C calibration curve. A numerical approach to WMD enables the quantitative assessment of various possible wiggle-match solutions and of calendar year confidence intervals for sequences of 14C dates. We assess the assumptions, advantages, and limitations of the method. Several case-studies show that WMD results in more precise chronologies than when individual 14C dates are calibrated. WMD is most successful during periods with major excursions in the 14C calibration curve (e.g., in one case WMD could narrow down confidence intervals from 230 to 36 yr).
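Under the usual WMD assumption of a constant accumulation rate, the matching step amounts to sliding the dated sequence along the calibration curve and minimising a weighted sum of squares. A toy sketch with a synthetic curve (the real method uses the IntCal calibration data and also fits the accumulation rate):

```python
import numpy as np

def wiggle_match(c14_ages, c14_errors, spacing_yr, cal_years, cal_c14):
    """Find the calendar start year that best matches a sequence of equally
    spaced 14C dates to a calibration curve (chi-square criterion)."""
    offsets = spacing_yr * np.arange(len(c14_ages))
    best_year, best_chi2 = None, np.inf
    for start in cal_years:
        curve = np.interp(start + offsets, cal_years, cal_c14)
        chi2 = np.sum(((c14_ages - curve) / c14_errors) ** 2)
        if chi2 < best_chi2:
            best_year, best_chi2 = start, chi2
    return best_year, best_chi2
```

Calendar-year confidence intervals then follow from the chi-square profile around the minimum, which is where the quantitative assessment of competing match solutions mentioned in the abstract comes in.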

  2. Curriculum-based measurement of oral reading: A preliminary investigation of confidence interval overlap to detect reliable growth.

    PubMed

    Van Norman, Ethan R

    2016-09-01

    Curriculum-based measurement of oral reading (CBM-R) progress monitoring data are used to measure student response to instruction. Federal legislation permits educators to use CBM-R progress monitoring data as a basis for determining the presence of specific learning disabilities. However, decision making frameworks originally developed for CBM-R progress monitoring data were not intended for such high stakes assessments. Numerous documented issues with trend line estimation undermine the validity of using slope estimates to infer progress. One proposed recommendation is to use confidence interval overlap as a means of judging reliable growth. This project explored the degree to which confidence interval overlap was related to true growth magnitude using simulation methodology. True and observed CBM-R scores were generated across 7 durations of data collection (range 6-18 weeks), 3 levels of dataset quality or residual variance (5, 10, and 15 words read correct per minute) and 2 types of data collection schedules. Descriptive and inferential analyses were conducted to explore interactions between overlap status, progress monitoring scenarios, and true growth magnitude. A small but statistically significant interaction was observed between overlap status, duration, and dataset quality, b = -0.004, t(20992) = -7.96, p < .001. In general, confidence interval overlap does not appear to meaningfully account for variance in true growth across many progress monitoring conditions. Implications for research and practice are discussed. Limitations and directions for future research are addressed.

  3. Monte Carlo simulation of parameter confidence intervals for non-linear regression analysis of biological data using Microsoft Excel.

    PubMed

    Lambert, Ronald J W; Mytilinaios, Ioannis; Maitland, Luke; Brown, Angus M

    2012-08-01

    This study describes a method to obtain parameter confidence intervals from the fitting of non-linear functions to experimental data, using the SOLVER and Analysis ToolPak add-ins of the Microsoft Excel spreadsheet. Previously we have shown that Excel can fit complex multiple functions to biological data, obtaining values equivalent to those returned by more specialized statistical or mathematical software. However, a disadvantage of using the Excel method was the inability to return confidence intervals for the computed parameters or the correlations between them. Using a simple Monte-Carlo procedure within the Excel spreadsheet (without recourse to programming), SOLVER can provide parameter estimates (up to 200 at a time) for multiple 'virtual' data sets, from which the required confidence intervals and correlation coefficients can be obtained. The general utility of the method is exemplified by applying it to the analysis of the growth of Listeria monocytogenes, the growth inhibition of Pseudomonas aeruginosa by chlorhexidine and the further analysis of the electrophysiological data from the compound action potential of the rodent optic nerve.
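The same Monte Carlo procedure translates directly out of the spreadsheet; a sketch using SciPy's curve_fit in place of SOLVER (parametric resampling of the residual noise; the 200 replicates mirror the batch size quoted in the abstract):

```python
import numpy as np
from scipy.optimize import curve_fit

def monte_carlo_ci(model, x, y, p0, n_sim=200, conf=0.95, seed=0):
    """Parameter CIs for a non-linear fit via parametric Monte Carlo:
    refit the model to simulated data sets built from the fitted curve
    plus resampled Gaussian noise."""
    rng = np.random.default_rng(seed)
    popt, _ = curve_fit(model, x, y, p0=p0)
    resid = y - model(x, *popt)
    sigma = resid.std(ddof=len(popt))          # residual standard error
    sims = np.empty((n_sim, len(popt)))
    for i in range(n_sim):
        y_sim = model(x, *popt) + rng.normal(0.0, sigma, size=len(y))
        sims[i], _ = curve_fit(model, x, y_sim, p0=popt)
    q = 100 * (1 - conf) / 2
    return popt, np.percentile(sims, q, axis=0), np.percentile(sims, 100 - q, axis=0)
```

Correlations between parameters can be read off the same `sims` array with `np.corrcoef(sims.T)`, mirroring the correlation coefficients the Excel procedure recovers.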

  4. Confidence Intervals for Squared Semipartial Correlation Coefficients: The Effect of Nonnormality

    ERIC Educational Resources Information Center

    Algina, James; Keselman, H. J.; Penfield, Randall D.

    2010-01-01

    The increase in the squared multiple correlation coefficient (ΔR²) associated with a variable in a regression equation is a commonly used measure of importance in regression analysis. Algina, Keselman, and Penfield found that intervals based on asymptotic principles were typically very inaccurate, even though the sample size…

  5. A program for confidence interval calculations for a Poisson process with background including systematic uncertainties: POLE 1.0

    NASA Astrophysics Data System (ADS)

    Conrad, Jan

    2004-04-01

    A Fortran 77 routine has been developed to calculate confidence intervals with and without systematic uncertainties, using a frequentist confidence interval construction with a Bayesian treatment of the systematic uncertainties. The routine can account for systematic uncertainties in the background prediction and in the signal/background efficiencies. The uncertainties may be separately parametrized by a Gaussian, log-normal or flat probability density function (PDF); since a Monte Carlo approach is chosen to perform the necessary integrals, a generalization to other parametrizations is particularly simple. Full correlation between signal and background efficiency is optional. The ordering schemes currently supported for the frequentist construction are likelihood ratio ordering (also known as Feldman-Cousins) and Neyman ordering. Optionally, both schemes can be used with conditioning, meaning that the probability density function is conditioned on the fact that the actual outcome of the background process cannot have been larger than the number of observed events. Program summary Title of program: POLE version 1.0 Catalogue identifier: ADTA Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADTA Program available from: CPC Program Library, Queen's University of Belfast, N. Ireland Licensing provisions: None Computer for which the program is designed: DELL PC 1 GB 2.0 GHz Pentium IV Operating system under which the program has been tested: RH Linux 7.2 Kernel 2.4.7-10 Programming language used: Fortran 77 Memory required to execute with typical data: ~1.6 Mbytes No. of bytes in distributed program, including test data, etc.: 373745 No. of lines in distributed program, including test data, etc.: 2700 Distribution format: tar gzip file Keywords: Confidence interval calculation, Systematic uncertainties Nature of the physical problem: The problem is to calculate a frequentist confidence interval on the parameter of a Poisson process with known background in the presence of systematic uncertainties.
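Leaving aside the systematic-uncertainty machinery, the likelihood-ratio (Feldman-Cousins) construction the routine implements can be sketched compactly; this is a simplified grid-based version, not the POLE code:

```python
import numpy as np
from scipy.stats import poisson

def feldman_cousins(n_obs, b, cl=0.90, mu_max=15.0, step=0.01):
    """Feldman-Cousins confidence interval for the Poisson signal mean mu,
    given n_obs observed events and a known background b (no systematics)."""
    ns = np.arange(0, int(mu_max + b) + 30)
    accepted = []
    for mu in np.arange(0.0, mu_max + step, step):
        p = poisson.pmf(ns, mu + b)
        mu_best = np.maximum(ns - b, 0.0)        # physically allowed best fit
        r = p / poisson.pmf(ns, mu_best + b)     # likelihood-ratio ordering
        cum, accept = 0.0, set()
        for i in np.argsort(r)[::-1]:            # add n values by decreasing rank
            accept.add(int(ns[i]))
            cum += p[i]
            if cum >= cl:
                break
        if n_obs in accept:
            accepted.append(mu)
    return min(accepted), max(accepted)
```

For zero observed events over a background of 3, the 90% interval reproduces the well-known behaviour of the method: a lower limit of zero and an upper limit near 1.1.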

  6. Replication, "p_rep," and Confidence Intervals: Comment Prompted by Iverson, Wagenmakers, and Lee (2010); Lecoutre, Lecoutre, and Poitevineau (2010); and Maraun and Gabriel (2010)

    ERIC Educational Resources Information Center

    Cumming, Geoff

    2010-01-01

    This comment offers three descriptions of "p_rep" that start with a frequentist account of confidence intervals, draw on R. A. Fisher's fiducial argument, and do not make Bayesian assumptions. Links are described among "p_rep," "p" values, and the probability a confidence interval will capture the mean of a replication…

  7. Statistical damage detection method for frame structures using a confidence interval

    NASA Astrophysics Data System (ADS)

    Li, Weiming; Zhu, Hongping; Luo, Hanbin; Xia, Yong

    2010-03-01

    A novel damage detection method is applied to a 3-story frame structure to obtain a statistical quantification control criterion for the existence, location, and identification of damage. The mean, standard deviation, and exponentially weighted moving average (EWMA) are applied to detect damage information according to statistical process control (SPC) theory. It is concluded that detection with the mean and EWMA is insignificant because the structural responses are neither independent nor normally distributed. On the other hand, the damage information is detected well with the standard deviation, because the influence of the data distribution is not pronounced for this parameter. A suitable moderate confidence level is explored for more significant damage location and quantification, and the impact of noise is investigated to illustrate the robustness of the method.
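The SPC charts the study compares are easy to compute directly; a sketch of the EWMA chart with its standard control limits (the smoothing constant λ, limit width L, and baseline values are illustrative defaults, not values from the paper):

```python
import numpy as np

def ewma_chart(x, mu0, sigma, lam=0.2, L=3.0):
    """EWMA control chart against a known in-control baseline (mu0, sigma).
    Returns the EWMA series and its upper/lower control limits; points
    outside the limits flag a change (e.g. damage)."""
    x = np.asarray(x, float)
    z = np.empty_like(x)
    prev = mu0
    for i, xi in enumerate(x):
        prev = lam * xi + (1 - lam) * prev
        z[i] = prev
    t = np.arange(1, len(x) + 1)
    half = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
    return z, mu0 + half, mu0 - half
```

As the abstract notes, such charts presume independent, near-normal observations, which is exactly why the mean and EWMA statistics underperform on correlated structural response data.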

  8. Experimental optimization of the number of blocks by means of algorithms parameterized by confidence interval in popcorn breeding.

    PubMed

    Paula, T O M; Marinho, C D; Amaral Júnior, A T; Peternelli, L A; Gonçalves, L S A

    2013-06-27

    The objective of this study was to determine the optimal number of repetitions to be used in competition trials of popcorn traits related to production and quality, including grain yield and expansion capacity. The experiments were conducted in 3 environments representative of the north and northwest regions of the State of Rio de Janeiro with 10 Brazilian genotypes of popcorn, consisting of 4 commercial hybrids (IAC 112, IAC 125, Zélia, and Jade), 4 improved varieties (BRS Ângela, UFVM-2 Barão de Viçosa, Beija-flor, and Viçosa), and 2 experimental populations (UNB2U-C3 and UNB2U-C4). The experimental design was a randomized complete block design with 7 repetitions. The bootstrap method was employed to obtain samples of all of the possible combinations within the 7 blocks, and the confidence intervals of the parameters of interest were then calculated for all simulated data sets. The optimal number of repetitions for each trait was taken to be reached when all of the estimates of the parameters in question fell within the confidence interval. The estimated number of repetitions varied with the parameter estimated, the variable evaluated, and the environment, ranging from 2 to 7. Only the expansion capacity trait in the Colégio Agrícola environment (for residual variance and coefficient of variation) and the number of ears per plot in the Itaocara environment (for coefficient of variation) required 7 repetitions to fall within the confidence interval. Thus, for the 3 studies conducted, we conclude that 6 repetitions are optimal for obtaining high experimental precision.

  9. Confidence intervals for the difference between independent binomial proportions: comparison using a graphical approach and moving averages.

    PubMed

    Laud, Peter J; Dane, Aaron

    2014-01-01

    This paper uses graphical methods to illustrate and compare the coverage properties of a number of methods for calculating confidence intervals for the difference between two independent binomial proportions. We investigate both small-sample and large-sample properties of both two-sided and one-sided coverage, with an emphasis on asymptotic methods. In terms of aligning the smoothed coverage probability surface with the nominal confidence level, we find that the score-based methods on the whole have the best two-sided coverage, although they have slight deficiencies for confidence levels of 90% or lower. For an easily taught, hand-calculated method, the Brown-Li 'Jeffreys' method appears to perform reasonably well, and in most situations, it has better one-sided coverage than the widely recommended alternatives. In general, we find that the one-sided properties of many of the available methods are surprisingly poor. In fact, almost none of the existing asymptotic methods achieve equal coverage on both sides of the interval, even with large sample sizes, and consequently if used as a non-inferiority test, the type I error rate (which is equal to the one-sided non-coverage probability) can be inflated. The only exception is the Gart-Nam 'skewness-corrected' method, which we express using modified notation in order to include a bias correction for improved small-sample performance, and an optional continuity correction for those seeking more conservative coverage. Using a weighted average of two complementary methods, we also define a new hybrid method that almost matches the performance of the Gart-Nam interval.
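The single-proportion building block behind the score-based intervals compared here is the Wilson interval; Newcombe's square-and-add combination (a score-based hybrid of the same family, though not the Gart-Nam interval the paper ultimately recommends) illustrates the construction for a difference of proportions:

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a single binomial proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

def newcombe_diff_ci(k1, n1, k2, n2, z=1.96):
    """Newcombe's square-and-add CI for p1 - p2, built from Wilson limits."""
    l1, u1 = wilson_ci(k1, n1, z)
    l2, u2 = wilson_ci(k2, n2, z)
    d = k1 / n1 - k2 / n2
    lo = d - math.sqrt((k1 / n1 - l1) ** 2 + (u2 - k2 / n2) ** 2)
    hi = d + math.sqrt((u1 - k1 / n1) ** 2 + (k2 / n2 - l2) ** 2)
    return lo, hi
```

Unlike the Wald interval, these limits are asymmetric about the point estimate, which is the property that gives score-based methods their better small-sample coverage.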

  11. The naïve intuitive statistician: a naïve sampling model of intuitive confidence intervals.

    PubMed

    Juslin, Peter; Winman, Anders; Hansson, Patrik

    2007-07-01

    The perspective of the naïve intuitive statistician is outlined and applied to explain overconfidence when people produce intuitive confidence intervals and why this format leads to more overconfidence than other formally equivalent formats. The naïve sampling model implies that people accurately describe the sample information they have but are naïve in the sense that they uncritically take sample properties as estimates of population properties. A review demonstrates that the naïve sampling model accounts for the robust and important findings in previous research as well as provides novel predictions that are confirmed, including a way to minimize the overconfidence with interval production. The authors discuss the naïve sampling model as a representative of models inspired by the naïve intuitive statistician. PMID:17638502

  12. Tablet potency of Tianeptine in coated tablets by near infrared spectroscopy: model optimisation, calibration transfer and confidence intervals.

    PubMed

    Boiret, Mathieu; Meunier, Loïc; Ginot, Yves-Michel

    2011-02-20

    A near infrared (NIR) method was developed for determination of the tablet potency of the active pharmaceutical ingredient (API) in a complex coated-tablet matrix. The calibration set contained samples from laboratory- and production-scale batches. The reference values were obtained by high performance liquid chromatography (HPLC), and partial least squares (PLS) regression was used to establish a model. The model was challenged by calculating the tablet potency of two external test sets; the root mean square errors of prediction were 2.0% and 2.7%, respectively. To use this model with a second spectrometer in the production facility, a calibration transfer method called piecewise direct standardisation (PDS) was used. After the transfer, the root mean square error of prediction for the first test set was 2.4%, compared to 4.0% without transferring the spectra. A statistical technique using bootstrap resampling of the PLS residuals was used to estimate confidence intervals for the tablet potency calculations. This method requires an optimised PLS model, selection of the number of bootstrap replicates, and determination of the risk level. In the case of a chemical analysis, the tablet potency value will be included within the confidence interval calculated by the bootstrap method. An easy-to-use graphical interface was developed to determine readily whether the predictions, bracketed by their minimum and maximum values, are within the specifications defined by the regulatory authority.
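The PDS transfer step itself is short enough to sketch: each master-instrument wavelength is regressed on a small window of slave-instrument channels, and the per-channel coefficients are assembled into a banded transfer matrix. A generic sketch (the window width and data shapes are illustrative, not the paper's settings):

```python
import numpy as np

def pds_transform(master, slave, window=5):
    """Piecewise direct standardisation: learn a banded matrix F such that
    slave_spectra @ F approximates master-domain spectra.
    master, slave: (n_samples, n_wavelengths) paired transfer spectra."""
    n_wl = master.shape[1]
    F = np.zeros((slave.shape[1], n_wl))
    half = window // 2
    for j in range(n_wl):
        lo, hi = max(0, j - half), min(slave.shape[1], j + half + 1)
        # local least-squares regression of master channel j on a slave window
        bj, *_ = np.linalg.lstsq(slave[:, lo:hi], master[:, j], rcond=None)
        F[lo:hi, j] = bj
    return F
```

New slave spectra are then mapped through `F` before being fed to the master-instrument PLS model, which is what allows the original calibration to be reused on the second spectrometer.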

  13. Estimating incremental cost-effectiveness ratios and their confidence intervals with different terminating events for survival time and costs.

    PubMed

    Chen, Shuai; Zhao, Hongwei

    2013-07-01

    Cost-effectiveness analysis (CEA) is an important component of the economic evaluation of new treatment options. In many clinical and observational studies of costs, censored data pose challenges to the CEA. We consider a special situation where the terminating events for the survival time and costs are different. Traditional methods for statistical inference offer no means for dealing with censored data in these circumstances. To address this gap, we propose a new method for deriving the confidence interval for the incremental cost-effectiveness ratio. The simulation studies and real data example show that our method performs very well for some practical settings, revealing a great potential for application to actual settings in which terminating events for the survival time and costs differ.

  14. Bootstrap confidence intervals for the mean correlation corrected for Case IV range restriction: a more adequate procedure for meta-analysis.

    PubMed

    Li, Johnson Ching-Hong; Cui, Ying; Chan, Wai

    2013-01-01

    In this study, we proposed to use the nonparametric bootstrap procedure to construct the confidence interval for the mean correlation corrected for Case IV range restriction in meta-analysis (i.e., r(c4); Hunter, Schmidt, & Le, 2006). A comprehensive Monte Carlo study was conducted to evaluate the accuracy of the parametric confidence interval and 3 nonparametric bootstrap confidence intervals for r(c4). Of the 4 intervals, our results showed that the bootstrap bias-corrected and accelerated percentile interval (BCaI) yielded the most accurate results across different data situations. In addition, the mean corrected correlation r(c4) was found to be more accurate than the uncorrected estimate. Implications of the mean corrected correlation r(c4) and the BCaI for organizational studies are also discussed.
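The BCaI recommended here is available off the shelf; a sketch using SciPy's bootstrap routine (the corrected correlations are simulated stand-ins, not the paper's data, and the mean is used as the statistic):

```python
import numpy as np
from scipy.stats import bootstrap

rng = np.random.default_rng(7)
# hypothetical range-restriction-corrected correlations from 25 studies
r_c4 = rng.normal(0.45, 0.10, size=25).clip(-0.99, 0.99)

# bias-corrected and accelerated (BCa) percentile bootstrap interval
res = bootstrap((r_c4,), np.mean, confidence_level=0.95,
                method='BCa', random_state=rng)
lo, hi = res.confidence_interval.low, res.confidence_interval.high
```

The BCa adjustment corrects the plain percentile interval for bias and skewness in the bootstrap distribution, which is why it tends to win in comparisons like this one.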

  15. Solar PV power generation forecasting using hybrid intelligent algorithms and uncertainty quantification based on bootstrap confidence intervals

    NASA Astrophysics Data System (ADS)

    AlHakeem, Donna Ibrahim

    This thesis focuses on short-term photovoltaic forecasting (STPVF) of the power generation of a solar PV system, using both probabilistic and deterministic forecasts. Uncertainty estimation, in the form of a probabilistic forecast, is emphasized in order to quantify the uncertainty of the deterministic forecasts. Two hybrid intelligent models are proposed, in two separate chapters, to perform the STPVF. In Chapter 4, the framework of the proposed deterministic hybrid intelligent model is presented: a combination of the wavelet transform (WT), a data-filtering technique, with a soft computing model (SCM), the generalized regression neural network (GRNN). The combined WT+GRNN model is used to produce 1-hour-ahead forecasts of power generation for two random days in each season. The forecasts are analyzed using accuracy measures to determine model performance and are compared with another SCM. In Chapter 5, the framework of the second proposed model is presented: a combination of the WT, an SCM based on the radial basis function neural network (RBFNN), and population-based stochastic particle swarm optimization (PSO). The deterministic approach, represented as WT+RBFNN+PSO, is followed by a probabilistic forecast that uses bootstrap confidence intervals to quantify the uncertainty of the WT+RBFNN+PSO output. Chapter 5 extends the tests of Chapter 4, forecasting the power generation of two random days in each season for 1-hour-ahead, 3-hour-ahead, and 6-hour-ahead horizons. Different types of days are also forecasted in each season, such as a sunny day (SD), a cloudy day (CD), and a rainy day (RD). These forecasts are further analyzed using accuracy measures, variance, and uncertainty estimation.

  16. The Interpretation of Scholars' Interpretations of Confidence Intervals: Criticism, Replication, and Extension of Hoekstra et al. (2014).

    PubMed

    García-Pérez, Miguel A; Alcalá-Quintana, Rocío

    2016-01-01

    Hoekstra et al. (Psychonomic Bulletin & Review, 2014, 21:1157-1164) surveyed the interpretation of confidence intervals (CIs) by first-year students, master students, and researchers with six items expressing misinterpretations of CIs. They asked respondents to answer all items, computed the number of items endorsed, and concluded that misinterpretation of CIs is robust across groups. Their design may have produced this outcome artifactually for reasons that we describe. This paper discusses first the two interpretations of CIs and, hence, why misinterpretation cannot be inferred from endorsement of some of the items. Next, a re-analysis of Hoekstra et al.'s data reveals some puzzling differences between first-year and master students that demand further investigation. For that purpose, we designed a replication study with an extended questionnaire including two additional items that express correct interpretations of CIs (to compare endorsement of correct vs. nominally incorrect interpretations) and we asked master students to indicate which items they would have omitted had they had the option (to distinguish deliberate from uninformed endorsement caused by the forced-response format). Results showed that incognizant first-year students endorsed correct and nominally incorrect items identically, revealing that the two item types are not differentially attractive superficially; in contrast, master students were distinctively more prone to endorsing correct items when their uninformed responses were removed, although they admitted to nescience more often than might have been expected. Implications for teaching practices are discussed. PMID:27458424

  17. Limitation of individual internal exposure by consideration of the confidence interval in routine personal dosimetry at the Chernobyl Sarcophagus.

    PubMed

    Bondarenko, O O; Melnychuk, D V; Medvedev, S Yu

    2003-01-01

    In view of the probabilistic nature and very wide uncertainty of internal exposure assessment, a deterministic ('precise') assessment does not guarantee that established reference levels, or even the dose limits, are not exceeded for a particular individual. Minimising such potential risks can be achieved by setting up a sufficiently wide confidence interval for an expected dose distribution instead of its average ('best' estimate) value, and by setting the limit at the 99% fractile level. The ratio of the 99% level to the mean ('best' estimate) is referred to as the safety coefficient. It is shown for the typical radiological conditions inside the Chernobyl Sarcophagus that the safety coefficient corresponding to the 99% fractile of the expected internal dose distribution varies within the range from 5 to 10. The maintenance of minimum uncertainty and sufficient sensitivity of the indirect dosimetry method requires measurement of individual daily urinary excretion of 239Pu at a level of at least 4 × 10⁻⁵ Bq. For the purpose of reducing the uncertainty of individual internal dose assessment and making dosimetric methods workable, it is suggested that the results of workplace monitoring are combined with the results of periodic urinary and faecal bioassay measurements.
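A safety coefficient in the reported 5-10 range falls out of a lognormal model of the dose distribution; the lognormal assumption here is mine, used only to show the arithmetic behind the 99%-fractile-to-mean ratio:

```python
import math

def safety_coefficient(gsd):
    """Ratio of the 99% fractile to the arithmetic mean of a lognormal
    distribution with geometric standard deviation gsd (z_0.99 = 2.3263)."""
    s = math.log(gsd)
    p99 = math.exp(2.3263 * s)     # 99th percentile / geometric mean
    mean = math.exp(0.5 * s * s)   # arithmetic mean / geometric mean
    return p99 / mean
```

For a geometric standard deviation near e (≈ 2.7), common in internal dosimetry, the coefficient is about 6, i.e. inside the 5-10 range quoted for the Sarcophagus conditions.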

  19. Temperature dependence of the rate and activation parameters for tert-butyl chloride solvolysis: Monte Carlo simulation of confidence intervals

    NASA Astrophysics Data System (ADS)

    Sung, Dae Dong; Kim, Jong-Youl; Lee, Ikchoon; Chung, Sung Sik; Park, Kwon Ha

    2004-07-01

    The solvolysis rate constants (k_obs) of tert-butyl chloride are measured in a 20% (v/v) 2-PrOH-H₂O mixture at 15 temperatures ranging from 0 to 39 °C. Examination of the temperature dependence of the rate constants, by weighted least-squares fitting to equations of two to four terms, has led to the three-term form, ln k_obs = a₁ + a₂T⁻¹ + a₃ ln T, as the best expression. The activation parameters, ΔH‡ and ΔS‡, calculated using the three constants a₁, a₂ and a₃, revealed steady decreases of ≈1 kJ mol⁻¹ per degree and 3.5 J K⁻¹ mol⁻¹ per degree, respectively, as the temperature rises. The sign change of ΔS‡ at ≈20.0 °C and the large negative heat capacity of activation derived, ΔC_p‡ = -1020 J K⁻¹ mol⁻¹, are interpreted to indicate an SN1 mechanism and a net change from water-structure breaking to electrostrictive solvation due to the partially ionic transition state. Confidence intervals estimated by the Monte Carlo method are far more precise than those obtained by the conventional method.
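The three-term temperature equation is linear in its coefficients, so the fit reduces to (weighted) least squares; a sketch recovering a₁, a₂, a₃ from synthetic rate data, with the standard conversion ΔH‡ = E_a - RT = R(-a₂ + (a₃ - 1)T) included for illustration (the example coefficients are invented, not the paper's values):

```python
import numpy as np

R = 8.314  # gas constant, J K^-1 mol^-1

def fit_three_term(T, ln_k):
    """Fit ln k = a1 + a2/T + a3*ln(T) by ordinary least squares."""
    X = np.column_stack([np.ones_like(T), 1.0 / T, np.log(T)])
    a, *_ = np.linalg.lstsq(X, ln_k, rcond=None)
    return a

def delta_H_act(a, T):
    """ΔH‡(T) from the fitted constants: E_a = R*T^2 d(ln k)/dT = R*(-a2 + a3*T),
    hence ΔH‡ = E_a - R*T = R*(-a2 + (a3 - 1)*T)."""
    return R * (-a[1] + (a[2] - 1.0) * T)
```

Note that the three basis functions are nearly collinear over a narrow temperature window, which is precisely why Monte Carlo confidence intervals on the derived activation parameters are so much more informative than naive ones.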

  1. Bootstrap Signal-to-Noise Confidence Intervals: An Objective Method for Subject Exclusion and Quality Control in ERP Studies.

    PubMed

    Parks, Nathan A; Gannon, Matthew A; Long, Stephanie M; Young, Madeleine E

    2016-01-01

    Analysis of event-related potential (ERP) data includes several steps to ensure that ERPs meet an appropriate level of signal quality. One such step, subject exclusion, rejects subject data if ERP waveforms fail to meet an appropriate level of signal quality. Subject exclusion is an important quality control step in the ERP analysis pipeline as it ensures that statistical inference is based only upon those subjects exhibiting clear evoked brain responses. This critical quality control step is most often performed simply through visual inspection of subject-level ERPs by investigators. Such an approach is qualitative, subjective, and susceptible to investigator bias, as there are no standards as to what constitutes an ERP of sufficient signal quality. Here, we describe a standardized and objective method for quantifying waveform quality in individual subjects and establishing criteria for subject exclusion. The approach uses bootstrap resampling of ERP waveforms (from a pool of all available trials) to compute a signal-to-noise ratio confidence interval (SNR-CI) for individual subject waveforms. The lower bound of this SNR-CI (SNRLB) yields an effective and objective measure of signal quality as it ensures that ERP waveforms statistically exceed a desired signal-to-noise criterion. SNRLB provides a quantifiable metric of individual subject ERP quality and eliminates the need for subjective evaluation of waveform quality by the investigator. We detail the SNR-CI methodology, establish the efficacy of employing this approach with Monte Carlo simulations, and demonstrate its utility in practice when applied to ERP datasets.
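The resampling scheme can be sketched as follows; the SNR definition used here (RMS of the trial-averaged waveform over the standard error of that average) is one plausible choice, and the paper's exact metric may differ:

```python
import numpy as np

def snr_ci_lower(trials, n_boot=500, conf=0.95, seed=0):
    """Lower bound of a bootstrap SNR confidence interval for one subject.
    trials: array (n_trials, n_timepoints) of single-trial ERP epochs."""
    rng = np.random.default_rng(seed)
    n = trials.shape[0]
    snrs = np.empty(n_boot)
    for b in range(n_boot):
        resampled = trials[rng.integers(0, n, n)]          # resample trials
        erp = resampled.mean(axis=0)                       # averaged waveform
        sem = resampled.std(axis=0, ddof=1).mean() / np.sqrt(n)
        snrs[b] = np.sqrt(np.mean(erp ** 2)) / sem         # RMS signal / noise
    return np.percentile(snrs, 100 * (1 - conf))           # one-sided lower bound
```

A subject is retained when this lower bound exceeds a chosen SNR criterion, turning the usual eyeballing of waveforms into a reproducible threshold test.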

  2. Bootstrap Signal-to-Noise Confidence Intervals: An Objective Method for Subject Exclusion and Quality Control in ERP Studies

    PubMed Central

    Parks, Nathan A.; Gannon, Matthew A.; Long, Stephanie M.; Young, Madeleine E.

    2016-01-01

    Analysis of event-related potential (ERP) data includes several steps to ensure that ERPs meet an appropriate level of signal quality. One such step, subject exclusion, rejects subject data if ERP waveforms fail to meet an appropriate level of signal quality. Subject exclusion is an important quality control step in the ERP analysis pipeline as it ensures that statistical inference is based only upon those subjects exhibiting clear evoked brain responses. This critical quality control step is most often performed simply through visual inspection of subject-level ERPs by investigators. Such an approach is qualitative, subjective, and susceptible to investigator bias, as there are no standards as to what constitutes an ERP of sufficient signal quality. Here, we describe a standardized and objective method for quantifying waveform quality in individual subjects and establishing criteria for subject exclusion. The approach uses bootstrap resampling of ERP waveforms (from a pool of all available trials) to compute a signal-to-noise ratio confidence interval (SNR-CI) for individual subject waveforms. The lower bound of this SNR-CI (SNRLB) yields an effective and objective measure of signal quality as it ensures that ERP waveforms statistically exceed a desired signal-to-noise criterion. SNRLB provides a quantifiable metric of individual subject ERP quality and eliminates the need for subjective evaluation of waveform quality by the investigator. We detail the SNR-CI methodology, establish the efficacy of employing this approach with Monte Carlo simulations, and demonstrate its utility in practice when applied to ERP datasets. PMID:26903849

  3. Five-band microwave radiometer system for noninvasive brain temperature measurement in newborn babies: Phantom experiment and confidence interval

    NASA Astrophysics Data System (ADS)

    Sugiura, T.; Hirata, H.; Hand, J. W.; van Leeuwen, J. M. J.; Mizushina, S.

    2011-10-01

    Clinical trials of hypothermic brain treatment for newborn babies are currently hindered by the difficulty in measuring deep brain temperatures. One possible method for noninvasive and continuous temperature monitoring that is completely passive and inherently safe is microwave radiometry (MWR). We have developed a five-band microwave radiometer system with a single dual-polarized, rectangular waveguide antenna operating within the 1-4 GHz range and a method for retrieving the temperature profile from five radiometric brightness temperatures. This paper addresses (1) the temperature calibration for five microwave receivers, (2) the measurement experiment using a phantom model that mimics the temperature profile in a newborn baby, and (3) the feasibility of noninvasive monitoring of deep brain temperatures. Temperature resolutions were 0.103, 0.129, 0.138, 0.105 and 0.111 K for 1.2, 1.65, 2.3, 3.0 and 3.6 GHz receivers, respectively. The precision of temperature estimation (2σ confidence interval) was about 0.7°C at a 5-cm depth from the phantom surface. Accuracy, which is the difference between the estimated temperature using this system and the measured temperature by a thermocouple at a depth of 5 cm, was about 2°C. The current result is not satisfactory for clinical application, which requires both precision and accuracy better than 1°C at a depth of 5 cm. Since a couple of possible causes for this inaccuracy have been identified, we believe that the system can take a step closer to the clinical application of MWR for hypothermic rescue treatment.

  4. Confidence Interval Methods for Coefficient Alpha on the Basis of Discrete, Ordinal Response Items: Which One, If Any, Is the Best?

    ERIC Educational Resources Information Center

    Romano, Jeanine L.; Kromrey, Jeffrey D.; Owens, Corina M.; Scott, Heather M.

    2011-01-01

    In this study, the authors aimed to examine 8 of the different methods for computing confidence intervals around alpha that have been proposed to determine which of these, if any, is the most accurate and precise. Monte Carlo methods were used to simulate samples under known and controlled population conditions wherein the underlying item…

  5. Effect of Minimum Cell Sizes and Confidence Interval Sizes for Special Education Subgroups on School-Level AYP Determinations. Synthesis Report 61

    ERIC Educational Resources Information Center

    Simpson, Mary Ann; Gong, Brian; Marion, Scott

    2006-01-01

    This study addresses three questions: First, considering the full group of students and the special education subgroup, what is the likely effect of minimum cell size and confidence interval size on school-level Adequate Yearly Progress (AYP) determinations? Second, what effects do the changing minimum cell sizes have on inclusion of special…

  6. Confidence Intervals, Power Calculation, and Sample Size Estimation for the Squared Multiple Correlation Coefficient under the Fixed and Random Regression Models: A Computer Program and Useful Standard Tables.

    ERIC Educational Resources Information Center

    Mendoza, Jorge L.; Stafford, Karen L.

    2001-01-01

    Introduces a computer package written for Mathematica, the purpose of which is to perform a number of difficult iterative functions with respect to the squared multiple correlation coefficient under the fixed and random models. These functions include computation of the confidence interval upper and lower bounds, power calculation, calculation of…

  7. Confidence interval estimation for an empirical model quantifying the effect of soil moisture and plant development on soybean (Glycine max (L.) Merr.) leaf conductance

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In this work, we address uncertainty analysis for a model, presented in a companion paper, quantifying the effect of soil moisture and plant development on soybean (Glycine max (L.) Merr.) leaf conductance. To achieve this we present several methods for confidence interval estimation. Estimation ...

  8. Investigating the effect of modeling single-vehicle and multi-vehicle crashes separately on confidence intervals of Poisson-gamma models.

    PubMed

    Geedipally, Srinivas Reddy; Lord, Dominique

    2010-07-01

    Crash prediction models still constitute one of the primary tools for estimating traffic safety. These statistical models play a vital role in various types of safety studies. With a few exceptions, they have often been employed to estimate the number of crashes per unit of time for an entire highway segment or intersection, without distinguishing the influence different sub-groups have on crash risk. The two most important sub-groups that have been identified in the literature are single- and multi-vehicle crashes. Recently, some researchers have noted that developing two distinct models for these two categories of crashes provides better predicting performance than developing models combining both crash categories together. Thus, there is a need to determine whether a significant difference exists for the computation of confidence intervals when a single model is applied rather than two distinct models for single- and multi-vehicle crashes. Building confidence intervals has many important applications in highway safety. This paper investigates the effect of modeling single- and multi-vehicle (head-on and rear-end only) crashes separately versus modeling them together on the prediction of confidence intervals of Poisson-gamma models. Confidence intervals were calculated for total (all severities) crash models and fatal and severe injury crash models. The data used for the comparison analysis were collected on Texas multilane undivided highways for the years 1997-2001. This study shows that modeling single- and multi-vehicle crashes separately predicts larger confidence intervals than modeling them together as a single model. This difference is much larger for fatal and injury crash models than for models for all severity levels. Furthermore, it is found that the single- and multi-vehicle crashes are not independent. Thus, a joint (bivariate) model which accounts for correlation between single- and multi-vehicle crashes is developed and it predicts wider confidence intervals.

  9. Confidence intervals for time averages in the presence of long-range correlations, a case study on Earth surface temperature anomalies

    NASA Astrophysics Data System (ADS)

    Massah, M.; Kantz, H.

    2016-09-01

    Time averages, a standard tool in the analysis of environmental data, suffer severely from long-range correlations. The sample size needed to obtain a desired small confidence interval can be dramatically larger than for uncorrelated data. We present quantitative results for short- and long-range correlated Gaussian stochastic processes. Using these, we calculate confidence intervals for time averages of surface temperature measurements. Temperature time series are well known to be long-range correlated with Hurst exponents larger than 1/2. Multidecadal time averages are routinely used in the study of climate change. Our analysis shows that uncertainties of such averages are as large as for a single year of uncorrelated data.
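The core effect described here can be made concrete with the standard asymptotic scaling Var(mean) ≈ σ²·n^(2H−2) for a series with Hurst exponent H, where H = 0.5 recovers the familiar uncorrelated σ²/n. The numbers below are illustrative only, and the process-dependent constant prefactor is omitted (an assumption of this sketch).

```python
# Approximate 95% CI half-width for a time average of n long-range
# correlated samples, using Var(mean) ~ sigma**2 * n**(2H - 2).
# The process-dependent constant prefactor is omitted (an assumption).

def ci_halfwidth(sigma, n, hurst):
    # hurst = 0.5 recovers the uncorrelated 1.96 * sigma / sqrt(n)
    return 1.96 * sigma * n ** (hurst - 1.0)

sigma = 1.0
n = 30 * 365                              # a 30-year daily series
w_uncorr = ci_halfwidth(sigma, n, 0.5)    # white-noise assumption
w_lrc = ci_halfwidth(sigma, n, 0.9)       # Hurst exponent typical of temperature

# Effective number of independent samples giving the same CI width
n_eff = (w_lrc / (1.96 * sigma)) ** (-2)
```

For H = 0.9 the 30-year daily average carries the uncertainty of only a handful of independent samples, which is the point the abstract makes about multidecadal climate averages.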

  10. The Confidence-Accuracy Relationship for Eyewitness Identification Decisions: Effects of Exposure Duration, Retention Interval, and Divided Attention

    ERIC Educational Resources Information Center

    Palmer, Matthew A.; Brewer, Neil; Weber, Nathan; Nagesh, Ambika

    2013-01-01

    Prior research points to a meaningful confidence-accuracy (CA) relationship for positive identification decisions. However, there are theoretical grounds for expecting that different aspects of the CA relationship (calibration, resolution, and over/underconfidence) might be undermined in some circumstances. This research investigated whether the…

  11. Evaluating the Impact of Guessing and Its Interactions with Other Test Characteristics on Confidence Interval Procedures for Coefficient Alpha

    ERIC Educational Resources Information Center

    Paek, Insu

    2016-01-01

    The effect of guessing on the point estimate of coefficient alpha has been studied in the literature, but the impact of guessing and its interactions with other test characteristics on the interval estimators for coefficient alpha has not been fully investigated. This study examined the impact of guessing and its interactions with other test…

  12. Using a Nonparametric Bootstrap to Obtain a Confidence Interval for Pearson's "r" with Cluster Randomized Data: A Case Study

    ERIC Educational Resources Information Center

    Wagstaff, David A.; Elek, Elvira; Kulis, Stephen; Marsiglia, Flavio

    2009-01-01

    A nonparametric bootstrap was used to obtain an interval estimate of Pearson's "r," and test the null hypothesis that there was no association between 5th grade students' positive substance use expectancies and their intentions to not use substances. The students were participating in a substance use prevention program in which the unit of…
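A cluster bootstrap of the kind this case study uses can be sketched as follows. The data are simulated stand-ins (the study's actual variables were substance-use expectancies and intentions); the essential move is that whole clusters, not individual students, are resampled, so each replicate preserves the within-cluster dependence.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical clustered data: 20 clusters (e.g. classrooms) of 15 students,
# with a shared cluster effect inducing within-cluster correlation.
n_clusters, per_cluster = 20, 15
cluster = np.repeat(np.arange(n_clusters), per_cluster)
u = rng.normal(size=n_clusters)
x = u[cluster] + rng.normal(size=cluster.size)
y = 0.5 * x + rng.normal(size=cluster.size)

def pearson_r(a, b):
    return np.corrcoef(a, b)[0, 1]

# Cluster bootstrap: draw whole clusters with replacement, so every
# replicate preserves the dependence structure within clusters.
boot = []
for _ in range(1000):
    picked = rng.integers(0, n_clusters, n_clusters)
    idx = np.concatenate([np.flatnonzero(cluster == c) for c in picked])
    boot.append(pearson_r(x[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])   # percentile 95% CI for r
```

The null hypothesis of no association is rejected at the 5% level when the interval (lo, hi) excludes zero.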

  13. On the Proper Estimation of the Confidence Interval for the Design Formula of Blast-Induced Vibrations with Site Records

    NASA Astrophysics Data System (ADS)

    Yan, W. M.; Yuen, Ka-Veng

    2015-01-01

    Blast-induced ground vibration has received much engineering and public attention. The vibration is often represented by the peak particle velocity (PPV) and the empirical approach is employed to describe the relationship between the PPV and the scaled distance. Different statistical methods are often used to obtain the confidence level of the prediction. With a known scaled distance, the amount of explosives in a planned blast can then be determined by a blast engineer when the PPV limit and the confidence level of the vibration magnitude are specified. This paper shows that these current approaches do not incorporate the posterior uncertainty of the fitting coefficients. In order to resolve this problem, a Bayesian method is proposed to derive the site-specific fitting coefficients based on a small amount of data collected at an early stage of a blasting project. More importantly, uncertainty of both the fitting coefficients and the design formula can be quantified. Data collected from a site formation project in Hong Kong is used to illustrate the performance of the proposed method. It is shown that the proposed method resolves the underestimation problem in one of the conventional approaches. The proposed approach can be easily conducted using spreadsheet calculation without the need for any additional tools, so it will be particularly welcomed by practicing engineers.
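As a point of comparison with the Bayesian method described here, a frequentist prediction interval for the usual log-log PPV design formula also carries coefficient uncertainty into the bound. The sketch below uses invented site records and a hard-coded t quantile; it is not the paper's method.

```python
import numpy as np

# Invented site records: scaled distance (m/kg^0.5) and PPV (mm/s).
sd  = np.array([5., 8., 12., 18., 25., 40., 60., 90.])
ppv = np.array([120., 70., 45., 28., 18., 9., 5., 3.])

# Fit the usual design formula log(PPV) = a + b * log(scaled distance).
X = np.column_stack([np.ones_like(sd), np.log(sd)])
y = np.log(ppv)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
n, p = X.shape
s2 = resid @ resid / (n - p)          # residual variance
XtX_inv = np.linalg.inv(X.T @ X)

def ppv_upper(sd_new, t_crit=1.943):
    """Upper prediction bound on PPV at a new scaled distance.
    t_crit is the one-sided 95% Student-t quantile for n - 2 = 6 df
    (hard-coded to avoid a scipy dependency). The leverage term
    x' (X'X)^-1 x is what carries the coefficient uncertainty."""
    x = np.array([1.0, np.log(sd_new)])
    se_pred = np.sqrt(s2 * (1.0 + x @ XtX_inv @ x))
    return float(np.exp(x @ beta + t_crit * se_pred))
```

Dropping the leverage term (keeping only the residual variance) reproduces the underestimation the paper criticizes: the bound then ignores how uncertain the fitted coefficients themselves are.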

  14. A methodology for airplane parameter estimation and confidence interval determination in nonlinear estimation problems. Ph.D. Thesis - George Washington Univ., Apr. 1985

    NASA Technical Reports Server (NTRS)

    Murphy, P. C.

    1986-01-01

    An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. With the fitted surface, sensitivity information can be updated at each iteration with less computational effort than that required by either a finite-difference method or integration of the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, and thus provides flexibility to use model equations in any convenient format. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. The degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels and to predict the degree of agreement between CR bounds and search estimates.

  15. Detection of anomalous diffusion using confidence intervals of the scaling exponent with application to preterm neonatal heart rate variability

    NASA Astrophysics Data System (ADS)

    Bickel, David R.; Verklan, M. Terese; Moon, Jon

    1998-11-01

    The scaling exponent of the root mean square (rms) displacement quantifies the roughness of fractal or multifractal time series; it is equivalent to other second-order measures of scaling, such as the power-law exponents of the spectral density and autocorrelation function. For self-similar time series, the rms scaling exponent equals the Hurst parameter, which is related to the fractal dimension. A scaling exponent of 0.5 implies that the process is normal diffusion, which is equivalent to an uncorrelated random walk; otherwise, the process can be modeled as anomalous diffusion. Higher exponents indicate that the increments of the signal have positive correlations, while exponents below 0.5 imply that they have negative correlations. Scaling exponent estimates of successive segments of the increments of a signal are used to test the null hypothesis that the signal is normal diffusion, with the alternate hypothesis that the diffusion is anomalous. Dispersional analysis, a simple technique which does not require long signals, is used to estimate the scaling exponent from the slope of the linear regression of the logarithm of the standard deviation of binned data points on the logarithm of the number of points per bin. Computing the standard error of the scaling exponent using successive segments of the signal is superior to previous methods of obtaining the standard error, such as that based on the sum of squared errors used in the regression; the regression error is more of a measure of the deviation from power-law scaling than of the uncertainty of the scaling exponent estimate. Applying this test to preterm neonate heart rate data, it is found that time intervals between heart beats can be modeled as anomalous diffusion with negatively correlated increments. This corresponds to power spectra between 1/f2 and 1/f, whereas healthy adults are usually reported to have 1/f spectra, suggesting that the immaturity of the neonatal nervous system affects the scaling behavior of heart rate.
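Dispersional analysis as described here is short to implement: bin the increments, regress log SD of the bin means on log bin size, and add 1 to the slope. The sketch below uses synthetic uncorrelated increments (so the estimate should fall near 0.5, i.e. normal diffusion) plus segment-wise estimates for a standard error, mirroring the test in the abstract; segment count and lengths are illustrative choices, not the paper's.

```python
import numpy as np

def dispersional_hurst(increments, bin_sizes=(1, 2, 4, 8, 16, 32)):
    """Dispersional analysis: the SD of bin means scales as m**(H - 1),
    so H is 1 plus the slope of log SD versus log bin size."""
    log_m, log_sd = [], []
    for m in bin_sizes:
        k = len(increments) // m
        means = increments[:k * m].reshape(k, m).mean(axis=1)
        log_m.append(np.log(m))
        log_sd.append(np.log(means.std(ddof=1)))
    return 1.0 + np.polyfit(log_m, log_sd, 1)[0]

rng = np.random.default_rng(1)

# Uncorrelated increments (normal diffusion): the estimate should be near 0.5.
h = dispersional_hurst(rng.normal(size=2 ** 14))

# Segment-wise estimates give a standard error for a t-type test of H = 0.5,
# the approach the abstract argues is superior to regression-based errors.
segs = rng.normal(size=(16, 2 ** 12))
hs = np.array([dispersional_hurst(s) for s in segs])
se = hs.std(ddof=1) / np.sqrt(len(hs))
```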

  16. User guide to the UNC process and three utility programs for computation of nonlinear confidence and prediction intervals using MODFLOW-2000

    USGS Publications Warehouse

    Christensen, Steen; Cooley, Richard L.

    2006-01-01

    This report introduces and documents the Uncertainty (UNC) Process, a new Process in MODFLOW-2000 that calculates uncertainty measures for model parameters and for predictions produced by the model. Uncertainty measures can be computed by various methods, but when regression is applied to calibrate a model (for example when using the Parameter-Estimation Process of MODFLOW-2000) it is advantageous to also use regression-based methods to quantify uncertainty. For this reason the UNC Process computes (1) confidence intervals for parameters of the Parameter-Estimation Process and (2) confidence and prediction intervals for most types of functions that can be computed by a MODFLOW-2000 model calibrated by the Parameter-Estimation Process. The types of functions for which the Process works include hydraulic heads, hydraulic head differences, head-dependent flows computed by the head-dependent flow packages for drains (DRN6), rivers (RIV6), general-head boundaries (GHB6), streams (STR6), drain-return cells (DRT1), and constant-head boundaries (CHD), and for differences between flows computed by any of the mentioned flow packages. The UNC Process does not allow computation of intervals for the difference between flows computed by two different flow packages. The report also documents three programs, RESAN2-2k, BEALE2-2k, and CORFAC-2k, which are valuable for the evaluation of results from the Parameter-Estimation Process and for the preparation of input values for the UNC Process. RESAN2-2k and BEALE2-2k are significant updates of the residual analysis and modified Beale's measure programs first published by Cooley and Naff (1990) and later modified for use with MODFLOWP (Hill, 1994) and MODFLOW-2000 (Hill and others, 2000). CORFAC-2k is a new program that computes correction factors to be used by UNC.

  17. Application of non-parametric bootstrap methods to estimate confidence intervals for QTL location in a beef cattle QTL experimental population.

    PubMed

    Jongjoo, Kim; Davis, Scott K; Taylor, Jeremy F

    2002-06-01

    Empirical confidence intervals (CIs) for the estimated quantitative trait locus (QTL) location from selective and non-selective non-parametric bootstrap resampling methods were compared for a genome scan involving an Angus x Brahman reciprocal fullsib backcross population. Genetic maps, based on 357 microsatellite markers, were constructed for 29 chromosomes using CRI-MAP V2.4. Twelve growth, carcass composition and beef quality traits (n = 527-602) were analysed to detect QTLs utilizing (composite) interval mapping approaches. CIs were investigated for 28 likelihood ratio test statistic (LRT) profiles for the one QTL per chromosome model. The CIs from the non-selective bootstrap method were largest (87.7 cM average, or 79.2% coverage of test chromosomes). The Selective II procedure produced the smallest CI size (42.3 cM average). However, CI sizes from the Selective II procedure were more variable than those produced by the two LOD drop method. CI ranges from the Selective II procedure were also asymmetrical (relative to the most likely QTL position) due to the bias caused by the tendency for the estimated QTL position to be at a marker position in the bootstrap samples and due to monotonicity and asymmetry of the LRT curve in the original sample. PMID:12220133

  19. Effect of initial seed and number of samples on simple-random and Latin-Hypercube Monte Carlo probabilities (confidence interval considerations)

    SciTech Connect

    ROMERO,VICENTE J.

    2000-05-04

    In order to devise an algorithm for autonomously terminating Monte Carlo sampling when sufficiently small and reliable confidence intervals (CI) are achieved on calculated probabilities, the behavior of CI estimators must be characterized. This knowledge is also required in comparing the accuracy of other probability estimation techniques to Monte Carlo results. Based on 100 trials in a hypothesis test, estimated 95% CI from classical approximate CI theory are empirically examined to determine if they behave as true 95% CI over spectrums of probabilities (population proportions) ranging from 0.001 to 0.99 in a test problem. Tests are conducted for population sizes of 500 and 10,000 samples where applicable. Significant differences between true and estimated 95% CI are found to occur at probabilities between 0.1 and 0.9, such that estimated 95% CI can be rejected as not being true 95% CI at less than a 40% chance of incorrect rejection. With regard to Latin Hypercube sampling (LHS), though no general theory has been verified for accurately estimating LHS CI, recent numerical experiments on the test problem have found LHS to be conservatively over an order of magnitude more efficient than simple random sampling (SRS) for similar sized CI on probabilities ranging between 0.25 and 0.75. The efficiency advantage of LHS vanishes, however, as the probability extremes of 0 and 1 are approached.
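The "classical approximate CI" examined here is the normal-approximation (Wald) interval for a population proportion, and its empirical coverage can be checked exactly as the abstract describes: repeat the sampling experiment many times and count how often the interval brackets the true probability. The parameters below are illustrative, not those of the study's test problem.

```python
import numpy as np

def wald_ci(k, n, z=1.96):
    """Classical approximate 95% CI for a probability estimated
    from k successes in n Monte Carlo samples."""
    p_hat = k / n
    half = z * np.sqrt(p_hat * (1.0 - p_hat) / n)
    return p_hat - half, p_hat + half

# Empirical coverage check: does the estimated 95% CI behave like a true
# 95% CI? Repeat the experiment and count how often it covers p_true.
rng = np.random.default_rng(7)
p_true, n, n_trials = 0.5, 500, 2000
hits = 0
for _ in range(n_trials):
    lo, hi = wald_ci(rng.binomial(n, p_true), n)
    hits += int(lo <= p_true <= hi)
coverage = hits / n_trials
```

Near p = 0.5 the Wald interval's coverage sits close to the nominal 95%; at extreme probabilities such as 0.001 it is known to degrade badly, which is why the estimator's behavior must be characterized before CI width can serve as a stopping rule.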

  20. Generating confidence intervals on biological networks

    PubMed Central

    Thorne, Thomas; Stumpf, Michael PH

    2007-01-01

    Background In the analysis of networks we frequently require the statistical significance of some network statistic, such as measures of similarity for the properties of interacting nodes. The structure of the network may introduce dependencies among the nodes and it will in general be necessary to account for these dependencies in the statistical analysis. To this end we require some form of Null model of the network: generally rewired replicates of the network are generated which preserve only the degree (number of interactions) of each node. We show that this can fail to capture important features of network structure, and may result in unrealistic significance levels, when potentially confounding additional information is available. Methods We present a new network resampling Null model which takes into account the degree sequence as well as available biological annotations. Using gene ontology information as an illustration we show how this information can be accounted for in the resampling approach, and the impact such information has on the assessment of statistical significance of correlations and motif-abundances in the Saccharomyces cerevisiae protein interaction network. An algorithm, GOcardShuffle, is introduced to allow for the efficient construction of an improved Null model for network data. Results We use the protein interaction network of S. cerevisiae; correlations between the evolutionary rates and expression levels of interacting proteins and their statistical significance were assessed for Null models which condition on different aspects of the available data. The novel GOcardShuffle approach results in a Null model for annotated network data which appears better to describe the properties of real biological networks. 
Conclusion An improved approach for the statistical analysis of biological network data, which conditions on the available biological information, leads to qualitatively different results compared to approaches which ignore such annotations. In particular, we demonstrate that the effects of the biological organization of the network can be sufficient to explain the observed similarity of interacting proteins. PMID:18053130

  1. Confidence building

    NASA Astrophysics Data System (ADS)

    Roederer, Juan G.

    Many conferences are being held on confidence building in many countries. Usually they are organized and attended by political scientists and science policy specialists. A remarkable exception, in which the main brainstorming was done by “grass roots” geophysicists, nuclear physicists, engineers and ecologists, was a meeting in July at St. John's College in Santa Fe, N. Mex. The aim of the conference Technology-Based Confidence Building: Energy and Environment was to survey programs of international cooperation in pertinent areas of mutual concern to all nations and to identify new initiatives that could contribute to enhanced international stability, with emphasis on cooperation between the U.S. and U.S.S.R.

  2. Confidence bounds on structural reliability

    NASA Technical Reports Server (NTRS)

    Mehta, S. R.; Cruse, T. A.; Mahadevan, S.

    1993-01-01

    Different approaches for quantifying physical, statistical, and model uncertainties associated with the distribution parameters which are aimed at determining structural reliability are described. Confidence intervals on the distribution parameters of the input random variables are estimated using four algorithms to evaluate uncertainty of the response. Design intervals are evaluated using either Monte Carlo simulation or an iterative approach. A first order approach can be used to compute a first approximation of the design interval, but its accuracy is not satisfactory. The regression approach which combines the iterative approach with Monte Carlo simulation is capable of providing good results if the performance function can be accurately represented using regression analysis. It is concluded that the design interval-based approach seems to be quite general and takes into account distribution and model uncertainties.

  3. Confidence bounds on structural reliability

    NASA Astrophysics Data System (ADS)

    Mehta, S. R.; Cruse, T. A.; Mahadevan, S.

    1993-04-01

    Different approaches for quantifying physical, statistical, and model uncertainties associated with the distribution parameters which are aimed at determining structural reliability are described. Confidence intervals on the distribution parameters of the input random variables are estimated using four algorithms to evaluate uncertainty of the response. Design intervals are evaluated using either Monte Carlo simulation or an iterative approach. A first order approach can be used to compute a first approximation of the design interval, but its accuracy is not satisfactory. The regression approach which combines the iterative approach with Monte Carlo simulation is capable of providing good results if the performance function can be accurately represented using regression analysis. It is concluded that the design interval-based approach seems to be quite general and takes into account distribution and model uncertainties.

  4. Confidant Relations in Italy

    PubMed Central

    Isaacs, Jenny; Soglian, Francesca; Hoffman, Edward

    2015-01-01

    Confidants are often described as the individuals to whom we choose to disclose personal, intimate matters. The presence of a confidant is associated with both mental and physical health benefits. In this study, 135 Italian adults responded to a structured questionnaire that asked if they had a confidant, and if so, to describe various features of the relationship. The vast majority of participants (91%) reported the presence of a confidant and regarded this relationship as personally important, high in mutuality and trust, and involving minimal lying. Confidants were significantly more likely to be of the opposite sex. Participants overall were significantly more likely to choose a spouse or other family member as their confidant, rather than someone outside of the family network. Familial confidants were generally seen as closer, and of greater value, than non-familial confidants. These findings are discussed within the context of Italian culture. PMID:27247641

  5. Confidant Relations in Italy.

    PubMed

    Isaacs, Jenny; Soglian, Francesca; Hoffman, Edward

    2015-02-01

    Confidants are often described as the individuals to whom we choose to disclose personal, intimate matters. The presence of a confidant is associated with both mental and physical health benefits. In this study, 135 Italian adults responded to a structured questionnaire that asked if they had a confidant, and if so, to describe various features of the relationship. The vast majority of participants (91%) reported the presence of a confidant and regarded this relationship as personally important, high in mutuality and trust, and involving minimal lying. Confidants were significantly more likely to be of the opposite sex. Participants overall were significantly more likely to choose a spouse or other family member as their confidant, rather than someone outside of the family network. Familial confidants were generally seen as closer, and of greater value, than non-familial confidants. These findings are discussed within the context of Italian culture. PMID:27247641

  6. Application of Sequential Interval Estimation to Adaptive Mastery Testing

    ERIC Educational Resources Information Center

    Chang, Yuan-chin Ivan

    2005-01-01

    In this paper, we apply sequential one-sided confidence interval estimation procedures with beta-protection to adaptive mastery testing. The procedures of fixed-width and fixed proportional accuracy confidence interval estimation can be viewed as extensions of one-sided confidence interval procedures. It can be shown that the adaptive mastery…

  7. Understanding Academic Confidence

    ERIC Educational Resources Information Center

    Sander, Paul; Sanders, Lalage

    2006-01-01

    This paper draws on the psychological theories of self-efficacy and the self-concept to understand students' self-confidence in academic study in higher education as measured by the Academic Behavioural Confidence scale (ABC). In doing this, expectancy-value theory and self-efficacy theory are considered and contrasted with self-concept and…

  8. Confidence Intervals for Standardized Linear Contrasts of Means

    ERIC Educational Resources Information Center

    Bonnett, Douglas G.

    2008-01-01

    Most psychology journals now require authors to report a sample value of effect size along with hypothesis testing results. The sample effect size value can be misleading because it contains sampling error. Authors often incorrectly interpret the sample effect size as if it were the population effect size. A simple solution to this problem is to…

  9. Estimation of Confidence Intervals for Multiplication and Efficiency

    SciTech Connect

    Verbeke, J

    2009-07-17

    Helium-3 tubes are used to detect thermal neutrons by charge collection using the ³He(n,p) reaction. By analyzing the time sequence of neutrons detected by these tubes, one can determine important features about the constitution of a measured object: Some materials such as Cf-252 emit several neutrons simultaneously, while others such as uranium and plutonium isotopes multiply the number of neutrons to form bursts. This translates into unmistakable signatures. To determine the type of materials measured, one compares the measured count distribution with the one generated by a theoretical fission chain model. When the neutron background is negligible, the theoretical count distributions can be completely characterized by a pair of parameters, the multiplication M and the detection efficiency ε. While the optimal pair of M and ε can be determined by existing codes such as BigFit, the uncertainty on these parameters has not yet been fully studied. The purpose of this work is to precisely compute the uncertainties on the parameters M and ε, given the uncertainties in the count distribution. By considering different lengths of time tagged data, we will determine how the uncertainties on M and ε vary with the different count distributions.

  10. Technological Pedagogical Content Knowledge (TPACK) Literature Using Confidence Intervals

    ERIC Educational Resources Information Center

    Young, Jamaal R.; Young, Jemimah L.; Shaker, Ziad

    2012-01-01

    The validity and reliability of Technological Pedagogical Content Knowledge (TPACK) as a framework to measure the extent to which teachers can teach with technology hinges on the ability to aggregate results across empirical studies. The results of data collected using the survey of pre-service teacher knowledge of teaching with technology (TKTT)…

  11. Interval Training.

    ERIC Educational Resources Information Center

    President's Council on Physical Fitness and Sports, Washington, DC.

    Regardless of the type of physical activity used, interval training is simply repeated periods of physical stress interspersed with recovery periods during which activity of a reduced intensity is performed. During the recovery periods, the individual usually keeps moving and does not completely recover before the next exercise interval (e.g.,…

  12. Responsibility and confidence

    PubMed Central

    Austin, Zubin

    2013-01-01

    Background: Despite the changing role of the pharmacist in patient-centred practice, pharmacists anecdotally report little confidence in their clinical decision-making skills and do not feel responsible for their patients. Observational findings have suggested these trends within the profession, but there is a paucity of evidence to explain why. We conducted an exploratory study with the objective of identifying reasons for the lack of responsibility and/or confidence in various pharmacy practice settings. Methods: Pharmacist interviews were conducted via written response, face-to-face or telephone. Seven questions were asked on the topic of responsibility and confidence as they apply to pharmacy practice and how pharmacists think these themes differ in medicine. Interview transcripts were analyzed and grouped by common theme. Quotations to support these themes are presented. Results: Twenty-nine pharmacists were asked to participate, and 18 responded (62% response rate). From these interviews, 6 themes were identified as barriers to confidence and responsibility: hierarchy of the medical system, role definitions, evolution of responsibility, ownership of decisions for confidence building, quality and consequences of mentorship and personality traits upon admission. Discussion: We identified 6 potential barriers to the development of pharmacists’ self-confidence and responsibility. These findings have practical applicability for educational research, future curriculum changes, experiential learning structure and pharmacy practice. Due to the bias and limitations of this form of exploratory research and the small sample size, the evidence should be interpreted cautiously. Conclusion: Pharmacists feel neither responsible for nor confident in their clinical decisions, for social, educational, experiential and personal reasons. Can Pharm J 2013;146:155-161. PMID:23795200

  13. Confidence Calculation with AMV+

    SciTech Connect

    Fossum, A.F.

    1999-02-19

    The iterative advanced mean value algorithm (AMV+), introduced nearly ten years ago, is now widely used as a cost-effective probabilistic structural analysis tool when the use of sampling methods is cost prohibitive (Wu et al., 1990). The need to establish confidence bounds on calculated probabilities arises because of the presence of uncertainties in measured means and variances of input random variables. In this paper an algorithm is proposed that makes use of the AMV+ procedure and analytically derived probability sensitivities to determine confidence bounds on calculated probabilities.

  14. Adding Confidence to Knowledge

    ERIC Educational Resources Information Center

    Goodson, Ludwika Aniela; Slater, Don; Zubovic, Yvonne

    2015-01-01

    A "knowledge survey" and a formative evaluation process led to major changes in an instructor's course and teaching methods over a 5-year period. Design of the survey incorporated several innovations, including: a) using "confidence survey" rather than "knowledge survey" as the title; b) completing an…

  15. Predicting Systemic Confidence

    ERIC Educational Resources Information Center

    Falke, Stephanie Inez

    2009-01-01

    Using a mixed method approach, this study explored which educational factors predicted systemic confidence in master's level marital and family therapy (MFT) students, and whether or not the impact of these factors was influenced by student beliefs and their perception of their supervisor's beliefs about the value of systemic practice. One hundred…

  16. SystemConfidence

    SciTech Connect

    Josh Lothian, Jeff Kuehn

    2012-09-25

    SystemConfidence is a benchmark developed at ORNL that measures statistical variation, which the user can then plot. The portions of the code that manage the collection of the histograms and compute statistics on them were designed so that these functions could be reused in other codes.

  17. Reclaim your creative confidence.

    PubMed

    Kelley, Tom; Kelley, David

    2012-12-01

    Most people are born creative. But over time, a lot of us learn to stifle those impulses. We become warier of judgment, more cautious, more analytical. The world seems to divide into "creatives" and "noncreatives," and too many people resign themselves to the latter category. And yet we know that creativity is essential to success in any discipline or industry. The good news, according to authors Tom Kelley and David Kelley of IDEO, is that we all can rediscover our creative confidence. The trick is to overcome the four big fears that hold most of us back: fear of the messy unknown, fear of judgment, fear of the first step, and fear of losing control. The authors use an approach based on the work of psychologist Albert Bandura in helping patients get over their snake phobias: You break challenges down into small steps and then build confidence by succeeding on one after another. Creativity is something you practice, say the authors, not just a talent you are born with. PMID:23227579

  19. Improved investor confidence

    SciTech Connect

    Anderson, J.

    1995-10-01

    Results of a financial ranking survey of power projects show reasonably strong activity when compared to previous surveys. Perhaps the most notable trend is the continued increase in the number of international deals being reported. Nearly 62 percent of the transactions reported were for non-US projects. This increase will likely expand with time as developers and lenders gain confidence in certain regions. For the remainder of 1995 and into 1996 it is likely that financial activity will continue at a steady pace. A number of projects in various markets are poised to reach financial close relatively soon. Developers, investment bankers, and governments are all gaining experience and becoming more comfortable with the process.

  20. Confidence in Numerical Simulations

    SciTech Connect

    Hemez, Francois M.

    2015-02-23

    This PowerPoint presentation offers a high-level discussion of uncertainty, confidence and credibility in scientific Modeling and Simulation (M&S). It begins by briefly evoking M&S trends in computational physics and engineering. The first thrust of the discussion is to emphasize that the role of M&S in decision-making is either to support reasoning by similarity or to “forecast,” that is, make predictions about the future or extrapolate to settings or environments that cannot be tested experimentally. The second thrust is to explain that M&S-aided decision-making is an exercise in uncertainty management. The three broad classes of uncertainty in computational physics and engineering are variability and randomness, numerical uncertainty and model-form uncertainty. The last part of the discussion addresses how scientists “think.” This thought process parallels the scientific method, whereby a hypothesis is formulated, often accompanied by simplifying assumptions, and then physical experiments and numerical simulations are performed to confirm or reject the hypothesis. “Confidence” derives, not just from the levels of training and experience of analysts, but also from the rigor with which these assessments are performed, documented and peer-reviewed.

  1. Confidence and Cognitive Test Performance

    ERIC Educational Resources Information Center

    Stankov, Lazar; Lee, Jihyun

    2008-01-01

    This article examines the nature of confidence in relation to abilities, personality, and metacognition. Confidence scores were collected during the administration of Reading and Listening sections of the Test of English as a Foreign Language Internet-Based Test (TOEFL iBT) to 824 native speakers of English. Those confidence scores were correlated…

  2. Monitoring tigers with confidence.

    PubMed

    Linkie, Matthew; Guillera-Arroita, Gurutzeta; Smith, Joseph; Rayan, D Mark

    2010-12-01

    With only 5% of the world's wild tigers (Panthera tigris Linnaeus, 1758) remaining since the last century, conservationists urgently need to know whether or not the management strategies currently being employed are effectively protecting these tigers. This knowledge is contingent on the ability to reliably monitor tiger populations, or subsets, over space and time. In this paper, we focus on the 2 seminal methodologies (camera trap and occupancy surveys) that have enabled the monitoring of tiger populations with greater confidence. Specifically, we: (i) describe their statistical theory and application in the field; (ii) discuss issues associated with their survey designs and state variable modeling; and (iii) discuss their future directions. These methods have had an unprecedented influence on increasing statistical rigor within tiger surveys and also surveys of other carnivore species. Nevertheless, only 2 published camera trap studies have gone beyond single baseline assessments and actually monitored population trends. For low density tiger populations (e.g. <1 adult tiger/100 km(2)) obtaining sufficient precision for state variable estimates from camera trapping remains a challenge because of insufficient detection probabilities and/or sample sizes. Occupancy surveys have overcome this problem by redefining the sampling unit (e.g. grid cells and not individual tigers). Current research is focusing on developing spatially explicit capture-mark-recapture models and estimating abundance indices from landscape-scale occupancy surveys, as well as the use of genetic information for identifying and monitoring tigers. The widespread application of these monitoring methods in the field now enables complementary studies on the impact of the different threats to tiger populations and their response to varying management intervention. PMID:21392352

  4. A comparison of approximate interval estimators for the Bernoulli parameter

    NASA Technical Reports Server (NTRS)

    Leemis, Lawrence; Trivedi, Kishor S.

    1993-01-01

    The goal of this paper is to compare the accuracy of two approximate confidence interval estimators for the Bernoulli parameter p. The approximate confidence intervals are based on the normal and Poisson approximations to the binomial distribution. Charts are given to indicate which approximation is appropriate for certain sample sizes and point estimators.
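The two approximations the abstract compares can be illustrated with a short standard-library sketch (an illustration under stated assumptions, not the paper's code or charts): a Wald interval from the normal approximation, and a Poisson-based interval that inverts the Poisson CDF by bisection (Garwood-style exact bounds for the Poisson mean, divided by n).

```python
from math import exp, lgamma, log
from statistics import NormalDist

def wald_interval(x, n, conf=0.95):
    """Normal-approximation (Wald) interval for the Bernoulli parameter p."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    p = x / n
    half = z * (p * (1 - p) / n) ** 0.5
    return max(0.0, p - half), min(1.0, p + half)

def _poisson_cdf(k, mu):
    # P(X <= k) for X ~ Poisson(mu), each term computed on the log scale
    return sum(exp(i * log(mu) - mu - lgamma(i + 1)) for i in range(k + 1))

def _solve_mu(k, target):
    # Bisect for mu such that P(X <= k; mu) = target; the CDF falls as mu grows.
    lo, hi = 1e-9, k + 10 * k ** 0.5 + 10
    for _ in range(100):
        mid = (lo + hi) / 2
        if _poisson_cdf(k, mid) > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def poisson_interval(x, n, conf=0.95):
    """Interval for p based on the Poisson approximation X ~ Poisson(n*p)."""
    a = (1 - conf) / 2
    lo = 0.0 if x == 0 else _solve_mu(x - 1, 1 - a)
    hi = _solve_mu(x, a)
    return lo / n, hi / n
```

For x = 3 events in n = 1000 trials, the Wald interval clips at zero, about (0, 0.0064), while the Poisson-based interval stays strictly positive, roughly (0.0006, 0.0088) — one reason the choice of approximation matters for small p.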

  5. Why Aren't They Called Probability Intervals?

    ERIC Educational Resources Information Center

    Devlin, Thomas F.

    2008-01-01

    This article offers suggestions for teaching confidence intervals, a fundamental statistical tool often misinterpreted by beginning students. A historical perspective presenting the interpretation given by their inventor is supported with examples and the use of technology. A method for determining confidence intervals for the seldom-discussed…

  6. Measuring Vaccine Confidence: Introducing a Global Vaccine Confidence Index

    PubMed Central

    Larson, Heidi J; Schulz, William S; Tucker, Joseph D; Smith, David M D

    2015-01-01

    Background. Public confidence in vaccination is vital to the success of immunisation programmes worldwide. Understanding the dynamics of vaccine confidence is therefore of great importance for global public health. Few published studies permit global comparisons of vaccination sentiments and behaviours against a common metric. This article presents the findings of a multi-country survey of confidence in vaccines and immunisation programmes in Georgia, India, Nigeria, Pakistan, and the United Kingdom (UK) – these being the first results of a larger project to map vaccine confidence globally. Methods. Data were collected from a sample of the general population and from those with children under 5 years old against a core set of confidence questions. All surveys were conducted in the relevant local language in Georgia, India, Nigeria, Pakistan, and the UK. We examine confidence in immunisation programmes as compared to confidence in other government health services, the relationships between confidence in the system and levels of vaccine hesitancy, reasons for vaccine hesitancy, ultimate vaccination decisions, and their variation based on country contexts and demographic factors. Results. The numbers of respondents by country were: Georgia (n=1000); India (n=1259); Pakistan (n=2609); UK (n=2055); Nigerian households (n=12554); and Nigerian health providers (n=1272). The UK respondents with children under five years of age were more likely to hesitate to vaccinate, compared to other countries. Confidence in immunisation programmes was more closely associated with confidence in the broader health system in the UK (Spearman’s ρ=0.5990), compared to Nigeria (ρ=0.5477), Pakistan (ρ=0.4491), and India (ρ=0.4240), all of which ranked confidence in immunisation programmes higher than confidence in the broader health system. Georgia had the highest rate of vaccine refusals (6%) among those who reported initial hesitation. In all other countries surveyed most

  7. Confidant Relations of the Aged.

    ERIC Educational Resources Information Center

    Tigges, Leann M.; And Others

    The confidant relationship is a qualitatively distinct dimension of the emotional support system of the aged, yet the composition of the confidant network has been largely neglected in research on aging. Persons (N=940) 60 years of age and older were interviewed about their socio-environmental setting. From the enumeration of their relatives,…

  8. Confidence rating for eutrophication assessments.

    PubMed

    Brockmann, Uwe H; Topcu, Dilek H

    2014-05-15

    Confidence of monitoring data is dependent on their variability and representativeness of sampling in space and time. Whereas variability can be assessed as statistical confidence limits, representative sampling is related to equidistant sampling, considering gradients or changing rates at sampling gaps. The proposed method combines both aspects, yielding balanced results for examples of total nitrogen concentrations in the German Bight/North Sea. For assessing sampling representativeness, surface areas, vertical profiles and time periods are divided into regular sections for which the representativeness is calculated individually. The sums correspond to the overall representativeness of sampling in the defined area/time period. Effects of sections that were not sampled are estimated along parallel rows by reducing their confidence, considering their distances to the next sampled sections and the interrupted gradients/changing rates. Confidence rating of time sections is based on maximum differences of sampling rates at regular time steps and related means of concentrations.

  9. Testing 40 Predictions from the Transtheoretical Model Again, with Confidence

    ERIC Educational Resources Information Center

    Velicer, Wayne F.; Brick, Leslie Ann D.; Fava, Joseph L.; Prochaska, James O.

    2013-01-01

    Testing Theory-based Quantitative Predictions (TTQP) represents an alternative to traditional Null Hypothesis Significance Testing (NHST) procedures and is more appropriate for theory testing. The theory generates explicit effect size predictions and these effect size estimates, with related confidence intervals, are used to test the predictions.…

  10. Predictive intervals for age-specific fertility.

    PubMed

    Keilman, N; Pham, D Q

    2000-03-01

    A multivariate ARIMA model is combined with a Gamma curve to predict confidence intervals for age-specific birth rates by 1-year age groups. The method is applied to observed age-specific births in Norway between 1900 and 1995, and predictive intervals are computed for each year up to 2050. The predicted two-thirds confidence intervals for Total Fertility (TF) around 2010 agree well with TF errors in old population forecasts made by Statistics Norway. The method gives useful predictions for age-specific fertility up to the years 2020-30. For later years, the intervals become too wide. Methods that do not take into account estimation errors in the ARIMA model coefficients underestimate the uncertainty for future TF values. The findings suggest that the margins between high and low fertility variants in official population forecasts for many Western countries are too narrow. PMID:12158991

  11. Doubly Bayesian Analysis of Confidence in Perceptual Decision-Making

    PubMed Central

    Aitchison, Laurence; Bang, Dan; Bahrami, Bahador; Latham, Peter E.

    2015-01-01

    Humans stand out from other animals in that they are able to explicitly report on the reliability of their internal operations. This ability, which is known as metacognition, is typically studied by asking people to report their confidence in the correctness of some decision. However, the computations underlying confidence reports remain unclear. In this paper, we present a fully Bayesian method for directly comparing models of confidence. Using a visual two-interval forced-choice task, we tested whether confidence reports reflect heuristic computations (e.g. the magnitude of sensory data) or Bayes optimal ones (i.e. how likely a decision is to be correct given the sensory data). In a standard design in which subjects were first asked to make a decision, and only then gave their confidence, subjects were mostly Bayes optimal. In contrast, in a less-commonly used design in which subjects indicated their confidence and decision simultaneously, they were roughly equally likely to use the Bayes optimal strategy or to use a heuristic but suboptimal strategy. Our results suggest that, while people’s confidence reports can reflect Bayes optimal computations, even a small unusual twist or additional element of complexity can prevent optimality. PMID:26517475

  13. Assessing uncertainty in reference intervals via tolerance intervals: application to a mixed model describing HIV infection.

    PubMed

    Katki, Hormuzd A; Engels, Eric A; Rosenberg, Philip S

    2005-10-30

    We define the reference interval as the range between the 2.5th and 97.5th percentiles of a random variable. We use reference intervals to compare characteristics of a marker of disease progression between affected populations. We use a tolerance interval to assess uncertainty in the reference interval. Unlike the tolerance interval, the estimated reference interval does not contain the true reference interval with specified confidence (or credibility). The tolerance interval is easy to understand, communicate and visualize. We derive estimates of the reference interval and its tolerance interval for markers defined by features of a linear mixed model. Examples considered are reference intervals for time trends in HIV viral load and CD4 per cent in HIV-infected haemophiliac children and homosexual men. We estimate the intervals with likelihood methods and also develop a Bayesian model in which the parameters are estimated via Markov-chain Monte Carlo. The Bayesian formulation naturally overcomes some important limitations of the likelihood model. PMID:16189804
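As a rough illustration of the distinction the abstract draws, the sketch below estimates a reference interval as the empirical 2.5th and 97.5th percentiles and quantifies its uncertainty with a percentile bootstrap. The bootstrap band is a crude stand-in for the paper's tolerance intervals; none of the mixed-model or Bayesian machinery is reproduced.

```python
import random
import statistics

def reference_interval(xs):
    """Reference interval: the 2.5th and 97.5th percentiles of the sample."""
    qs = statistics.quantiles(xs, n=1000, method="inclusive")
    return qs[24], qs[974]  # the 2.5% and 97.5% cut points

def bootstrap_band(xs, b=2000, level=0.95, seed=0):
    """Percentile-bootstrap uncertainty band for each reference limit --
    a simple stand-in for the tolerance intervals in the abstract."""
    rng = random.Random(seed)
    lows, highs = [], []
    for _ in range(b):
        resample = [rng.choice(xs) for _ in xs]
        lo, hi = reference_interval(resample)
        lows.append(lo)
        highs.append(hi)
    a = (1 - level) / 2
    def pct(v, q):
        v = sorted(v)
        return v[min(len(v) - 1, int(q * len(v)))]
    return (pct(lows, a), pct(lows, 1 - a)), (pct(highs, a), pct(highs, 1 - a))
```

On a standard-normal sample, the point estimates land near (-1.96, 1.96), and the bootstrap bands show how much each limit could plausibly move.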

  14. Addressing the vaccine confidence gap.

    PubMed

    Larson, Heidi J; Cooper, Louis Z; Eskola, Juhani; Katz, Samuel L; Ratzan, Scott

    2011-08-01

    Vaccines--often lauded as one of the greatest public health interventions--are losing public confidence. Some vaccine experts have referred to this decline in confidence as a crisis. We discuss some of the characteristics of the changing global environment that are contributing to increased public questioning of vaccines, and outline some of the specific determinants of public trust. Public decision making related to vaccine acceptance is neither driven by scientific nor economic evidence alone, but is also driven by a mix of psychological, sociocultural, and political factors, all of which need to be understood and taken into account by policy and other decision makers. Public trust in vaccines is highly variable and building trust depends on understanding perceptions of vaccines and vaccine risks, historical experiences, religious or political affiliations, and socioeconomic status. Although provision of accurate, scientifically based evidence on the risk-benefit ratios of vaccines is crucial, it is not enough to redress the gap between current levels of public confidence in vaccines and levels of trust needed to ensure adequate and sustained vaccine coverage. We call for more research not just on individual determinants of public trust, but on what mix of factors are most likely to sustain public trust. The vaccine community demands rigorous evidence on vaccine efficacy and safety and technical and operational feasibility when introducing a new vaccine, but has been negligent in demanding equally rigorous research to understand the psychological, social, and political factors that affect public trust in vaccines. PMID:21664679

  16. Confidence-Based Feature Acquisition

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri L.; desJardins, Marie; MacGlashan, James

    2010-01-01

    Confidence-based Feature Acquisition (CFA) is a novel, supervised learning method for acquiring missing feature values when there is missing data at both training (learning) and test (deployment) time. To train a machine learning classifier, data is encoded with a series of input features describing each item. In some applications, the training data may have missing values for some of the features, which can be acquired at a given cost. A relevant JPL example is that of the Mars rover exploration in which the features are obtained from a variety of different instruments, with different power consumption and integration time costs. The challenge is to decide which features will lead to increased classification performance and are therefore worth acquiring (paying the cost). To solve this problem, CFA, which is made up of two algorithms (CFA-train and CFA-predict), has been designed to greedily minimize total acquisition cost (during training and testing) while aiming for a specific accuracy level (specified as a confidence threshold). With this method, it is assumed that there is a nonempty subset of features that are free; that is, every instance in the data set includes these features initially for zero cost. It is also assumed that the feature acquisition (FA) cost associated with each feature is known in advance, and that the FA cost for a given feature is the same for all instances. Finally, CFA requires that the base-level classifiers produce not only a classification, but also a confidence (or posterior probability).
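The acquisition loop just described can be sketched as a deliberately simplified toy. Note the hedges: CFA proper ranks candidate features by expected classification benefit against cost, whereas this sketch simply buys the cheapest missing feature until the classifier's confidence clears the threshold; `predict` and the feature names are hypothetical stand-ins, not JPL's implementation.

```python
def acquire_until_confident(instance, free, costly, predict, threshold):
    """Toy acquisition loop: starting from the free features, repeatedly buy
    the cheapest missing feature until the classifier's confidence reaches
    the threshold or no features remain. `predict(features)` is a
    hypothetical stand-in returning (label, confidence)."""
    features = dict(free)          # features available at zero cost
    remaining = dict(costly)       # feature name -> acquisition cost
    spent = 0.0
    label, conf = predict(features)
    while conf < threshold and remaining:
        name = min(remaining, key=remaining.get)  # cheapest first
        spent += remaining.pop(name)
        features[name] = instance[name]
        label, conf = predict(features)
    return label, conf, spent
```

With a classifier whose confidence grows with the number of acquired features, the loop stops as soon as the threshold is met, leaving the more expensive features unbought.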

  17. Normal probability plots with confidence.

    PubMed

    Chantarangsi, Wanpen; Liu, Wei; Bretz, Frank; Kiatsupaibul, Seksan; Hayter, Anthony J; Wan, Fang

    2015-01-01

    Normal probability plots are widely used as a statistical tool for assessing whether an observed simple random sample is drawn from a normally distributed population. The users, however, have to judge subjectively, if no objective rule is provided, whether the plotted points fall close to a straight line. In this paper, we focus on how a normal probability plot can be augmented by intervals for all the points so that, if the population distribution is normal, then all the points should fall into the corresponding intervals simultaneously with probability 1-α. These simultaneous 1-α probability intervals therefore provide an objective means of judging whether the plotted points fall close to the straight line: the plotted points fall close to the straight line if and only if all the points fall into the corresponding intervals. The powers of several normal-probability-plot-based (graphical) tests and of the most popular nongraphical Anderson-Darling and Shapiro-Wilk tests are compared by simulation. Based on this comparison, recommendations are given in Section 3 on which graphical tests should be used in what circumstances. An example is provided to illustrate the methods.
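
    The idea of intervals that hold for all plotted points simultaneously can be approximated by simulation. The sketch below builds a Bonferroni-adjusted Monte Carlo band for the order statistics of a standard-normal sample; it conveys the flavor of such graphical tests but is not the authors' construction, which calibrates the intervals to be exactly simultaneous rather than Bonferroni-conservative.

```python
import numpy as np

def simultaneous_band(n, alpha=0.05, B=20000, seed=0):
    """Monte Carlo band for the n order statistics of an N(0,1) sample.
    Bonferroni adjustment (alpha/n per position) makes the joint coverage
    at least 1 - alpha; a sketch of the idea, not the paper's method."""
    rng = np.random.default_rng(seed)
    sims = np.sort(rng.standard_normal((B, n)), axis=1)  # B sorted samples
    lo = np.quantile(sims, alpha / (2 * n), axis=0)      # per-position lower
    hi = np.quantile(sims, 1 - alpha / (2 * n), axis=0)  # per-position upper
    return lo, hi

lo, hi = simultaneous_band(20)
# if the data are normal, all 20 sorted, standardized observations fall
# inside [lo, hi] with probability >= 0.95
```

Plotting the sorted sample against these bounds gives the augmented probability plot the abstract describes: the normality hypothesis is rejected as soon as any point leaves its interval.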

  18. A comparison of rural and urban anticoagulation management of atrial fibrillation in a southwest Missouri health system.

    PubMed

    Hover, Alexander R; Rogers, James T; Hunt, Carla

    2003-01-01

    The purpose of this study is to determine whether an opportunity exists to improve anticoagulation therapy for non-hospitalized, chronic atrial fibrillation patients cared for by St. John's Health System physicians. A clinical chart review covered 200 patients in each of the urban and rural practice groups. Urban practices were found to have 95 percent of the cases receiving warfarin, 95 percent confidence interval (90-100). Rural practices were found to have 97 percent of the cases anticoagulated, 95 percent confidence interval (88-99).
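
    Confidence intervals of this kind for a proportion can be reproduced with the normal approximation. A minimal sketch: the study does not say which interval method it used, so the Wald formula below is just one common choice, and the counts are back-calculated from the reported 95 percent of 200 urban cases.

```python
from math import sqrt

def wald_ci_95(successes, n):
    """Normal-approximation (Wald) 95% confidence interval for a proportion,
    clipped to [0, 1]; an illustrative sketch, not the study's method."""
    p = successes / n
    half = 1.96 * sqrt(p * (1 - p) / n)   # z_{0.975} * standard error
    return max(0.0, p - half), min(1.0, p + half)

lo, hi = wald_ci_95(190, 200)   # 95% of 200 cases on warfarin
print(round(lo, 3), round(hi, 3))
```

For proportions this close to 1, a Wilson or exact (Clopper-Pearson) interval would usually be preferred, since the Wald interval can behave poorly near the boundary.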

  19. Overconfidence in Interval Estimates: What Does Expertise Buy You?

    ERIC Educational Resources Information Center

    McKenzie, Craig R. M.; Liersch, Michael J.; Yaniv, Ilan

    2008-01-01

    People's 90% subjective confidence intervals typically contain the true value about 50% of the time, indicating extreme overconfidence. Previous results have been mixed regarding whether experts are as overconfident as novices. Experiment 1 examined interval estimates from information technology (IT) professionals and UC San Diego (UCSD) students…

  20. Confidence in ASCI scientific simulations

    SciTech Connect

    Ang, J.A.; Trucano, T.G.; Luginbuhl, D.R.

    1998-06-01

    The US Department of Energy's (DOE) Accelerated Strategic Computing Initiative (ASCI) program calls for the development of high end computing and advanced application simulations as one component of a program to eliminate reliance upon nuclear testing in the US nuclear weapons program. This paper presents results from the ASCI program's examination of needs for focused validation and verification (V and V). These V and V activities will ensure that 100 TeraOP-scale ASCI simulation code development projects apply the appropriate means to achieve high confidence in the use of simulations for stockpile assessment and certification. The authors begin with an examination of the roles for model development and validation in the traditional scientific method. The traditional view is that the scientific method has two foundations, experimental and theoretical. While the traditional scientific method does not acknowledge the role for computing and simulation, this examination establishes a foundation for the extension of the traditional processes to include verification and scientific software development that results in the notional framework known as Sargent's Framework. This framework elucidates the relationships between the processes of scientific model development, computational model verification and simulation validation. This paper presents a discussion of the methodologies and practices that the ASCI program will use to establish confidence in large-scale scientific simulations. While the effort for a focused program in V and V is just getting started, the ASCI program has been underway for a couple of years. The authors discuss some V and V activities and preliminary results from the ALEGRA simulation code that is under development for ASCI. The breadth of physical phenomena and the advanced computational algorithms that are employed by ALEGRA make it a subject for V and V that should typify what is required for many ASCI simulations.

  1. A confidence paradigm for classification systems

    NASA Astrophysics Data System (ADS)

    Leap, Nathan J.; Bauer, Kenneth W., Jr.

    2008-04-01

    There is no universally accepted methodology to determine how much confidence one should have in a classifier output. This research proposes a framework to determine the level of confidence in an indication from a classifier system where the output is a measurement value. There are two types of confidence developed in this paper. The first is confidence in a classification system or classifier and is denoted classifier confidence. The second is the confidence in the output of a classification system or classifier. In this paradigm, we posit that the confidence in the output of a classifier should be, on average, equal to the confidence in the classifier as a whole (i.e., classifier confidence). The amount of confidence in a given classifier is estimated using multiattribute preference theory and forms the foundation for a quadratic confidence function that is applied to posterior probability estimates. Classifier confidence is currently determined based upon individual measurable value functions for classification accuracy, average entropy, and sample size, and the form of the overall measurable value function is multilinear based upon the assumption of weak difference independence. Using classifier confidence, a quadratic function is trained to be the confidence function which inputs a posterior probability and outputs the confidence in a given indication. In this paradigm, confidence is not equal to the posterior probability estimate but is related to it. This confidence measure is a direct link between traditional decision analysis techniques and traditional pattern recognition techniques. This methodology is applied to two real world data sets, and results show the sort of behavior that would be expected from a rational confidence measure.

  2. Meta-Analytic Interval Estimation for Standardized and Unstandardized Mean Differences

    ERIC Educational Resources Information Center

    Bonett, Douglas G.

    2009-01-01

    The fixed-effects (FE) meta-analytic confidence intervals for unstandardized and standardized mean differences are based on an unrealistic assumption of effect-size homogeneity and perform poorly when this assumption is violated. The random-effects (RE) meta-analytic confidence intervals are based on an unrealistic assumption that the selected…

  3. A Mathematical Framework for Statistical Decision Confidence.

    PubMed

    Hangya, Balázs; Sanders, Joshua I; Kepecs, Adam

    2016-09-01

    Decision confidence is a forecast about the probability that a decision will be correct. From a statistical perspective, decision confidence can be defined as the Bayesian posterior probability that the chosen option is correct based on the evidence contributing to it. Here, we used this formal definition as a starting point to develop a normative statistical framework for decision confidence. Our goal was to make general predictions that do not depend on the structure of the noise or a specific algorithm for estimating confidence. We analytically proved several interrelations between statistical decision confidence and observable decision measures, such as evidence discriminability, choice, and accuracy. These interrelationships specify necessary signatures of decision confidence in terms of externally quantifiable variables that can be empirically tested. Our results lay the foundations for a mathematically rigorous treatment of decision confidence that can lead to a common framework for understanding confidence across different research domains, from human and animal behavior to neural representations. PMID:27391683
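
    The paper's definition of confidence, the Bayesian posterior probability that the chosen option is correct, can be made concrete in the simplest setting. Below is a textbook two-choice Gaussian example with equal priors (true means at +μ and -μ, noise σ); the closed form is standard, but it is an illustration of the definition, not the paper's general framework, which deliberately avoids committing to a noise structure.

```python
from math import exp

def confidence(evidence, mu=1.0, sigma=1.0):
    """Posterior probability that the chosen option is correct when the
    observer picks option A if evidence > 0 and B otherwise, and evidence
    is N(+/-mu, sigma^2) under the two options with equal priors.
    The likelihood ratio is exp(2*mu*e/sigma^2), so for the chosen side
    p(correct | e) = 1 / (1 + exp(-2*mu*|e|/sigma^2))."""
    return 1.0 / (1.0 + exp(-2.0 * mu * abs(evidence) / sigma**2))

print(confidence(0.0))   # ambiguous evidence: confidence at chance
print(confidence(2.0))   # strong evidence: confidence near ceiling
```

This tiny model already reproduces one of the signatures the paper derives: confidence is 0.5 at the decision boundary and increases monotonically with evidence discriminability.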

  4. Action-Specific Disruption of Perceptual Confidence

    PubMed Central

    Maniscalco, Brian; Ko, Yoshiaki; Amendi, Namema; Ro, Tony; Lau, Hakwan

    2015-01-01

    Theoretical models of perception assume that confidence is related to the quality or strength of sensory processing. Counter to this intuitive view, we showed in the present research that the motor system also contributes to judgments of perceptual confidence. In two experiments, we used transcranial magnetic stimulation (TMS) to manipulate response-specific representations in the premotor cortex, selectively disrupting postresponse confidence in visual discrimination judgments. Specifically, stimulation of the motor representation associated with the unchosen response reduced confidence in correct responses, thereby reducing metacognitive capacity without changing visual discrimination performance. Effects of TMS on confidence were observed when stimulation was applied both before and after the response occurred, which suggests that confidence depends on late-stage metacognitive processes. These results place constraints on models of perceptual confidence and metacognition by revealing that action-specific information in the premotor cortex contributes to perceptual confidence. PMID:25425059

  5. Adult age differences in the realism of confidence judgments: overconfidence, format dependence, and cognitive predictors.

    PubMed

    Hansson, Patrik; Rönnlund, Michael; Juslin, Peter; Nilsson, Lars-Göran

    2008-09-01

    Realistic confidence judgments are essential to everyday functioning, but few studies have addressed the issue of age differences in overconfidence. Therefore, the authors examined this issue with probability judgment and intuitive confidence intervals in a sample of 122 healthy adults (ages: 35-40, 55-60, 70-75 years). In line with predictions based on the naïve sampling model (P. Juslin, A. Winman, & P. Hansson, 2007), substantial format dependence was observed, with extreme overconfidence when confidence was expressed as an intuitive confidence interval but not when confidence was expressed as a probability judgment. Moreover, an age-related increase in overconfidence was selectively observed when confidence was expressed as intuitive confidence intervals. Structural equation modeling indicated that the age-related increases in overconfidence were mediated by a general cognitive ability factor that may reflect executive processes. Finally, the results indicated that part of the negative influence of increased age on general ability may be compensated for by an age-related increase in domain-relevant knowledge. PMID:18808243

  6. Young, Black, and Anxious: Describing the Black Student Mathematics Anxiety Research Using Confidence Intervals

    ERIC Educational Resources Information Center

    Young, Jamaal Rashad; Young, Jemimah Lea

    2016-01-01

    In this article, the authors provide a single group summary using the Mathematics Anxiety Rating Scale (MARS) to characterize and delineate the measurement of mathematics anxiety (MA) reported among Black students. Two research questions are explored: (a) What are the characteristics of studies administering the MARS and its derivatives to…

  7. An alternative approach to confidence interval estimation for the win ratio statistic.

    PubMed

    Luo, Xiaodong; Tian, Hong; Mohanty, Surya; Tsai, Wei Yann

    2015-03-01

    Pocock et al. (2012, European Heart Journal 33, 176-182) proposed a win ratio approach to analyzing composite endpoints comprised of outcomes with different clinical priorities. In this article, we establish a statistical framework for this approach. We derive the null hypothesis and propose a closed-form variance estimator for the win ratio statistic in the all-pairwise matching situation. Our simulation study shows that the proposed variance estimator performs well regardless of the magnitude of the treatment effect size and the type of the joint distribution of the outcomes.

  8. Applying Tests of Equivalence for Multiple Group Comparisons: Demonstration of the Confidence Interval Approach

    ERIC Educational Resources Information Center

    Rusticus, Shayna A.; Lovato, Chris Y.

    2011-01-01

    Assessing the comparability of different groups is an issue facing many researchers and evaluators in a variety of settings. Commonly, null hypothesis significance testing (NHST) is incorrectly used to demonstrate comparability when a non-significant result is found. This is problematic because a failure to find a difference between groups is not…

  9. An Inferential Confidence Interval Method of Establishing Statistical Equivalence that Corrects Tryon's (2001) Reduction Factor

    ERIC Educational Resources Information Center

    Tryon, Warren W.; Lewis, Charles

    2008-01-01

    Evidence of group matching frequently takes the form of a nonsignificant test of statistical difference. Theoretical hypotheses of no difference are also tested in this way. These practices are flawed in that null hypothesis statistical testing provides evidence against the null hypothesis and failing to reject H[subscript 0] is not evidence…

  10. A Generally Robust Approach for Testing Hypotheses and Setting Confidence Intervals for Effect Sizes

    ERIC Educational Resources Information Center

    Keselman, H. J.; Algina, James; Lix, Lisa M.; Wilcox, Rand R.; Deering, Kathleen N.

    2008-01-01

    Standard least squares analysis of variance methods suffer from poor power under arbitrarily small departures from normality and fail to control the probability of a Type I error when standard assumptions are violated. This article describes a framework for robust estimation and testing that uses trimmed means with an approximate degrees of…

  11. Improving Content Validation Studies Using an Asymmetric Confidence Interval for the Mean of Expert Ratings

    ERIC Educational Resources Information Center

    Penfield, Randall D.; Miller, Jeffrey M.

    2004-01-01

    As automated scoring of complex constructed-response examinations reaches operational status, the process of evaluating the quality of resultant scores, particularly in contrast to scores of expert human graders, becomes as complex as the data itself. Using a vignette from the Architectural Registration Examination (ARE), this article explores the…

  12. Computing confidence intervals on solution costs for stochastic grid generation expansion problems.

    SciTech Connect

Woodruff, David L.; Watson, Jean-Paul

    2010-12-01

    A range of core operations and planning problems for the national electrical grid are naturally formulated and solved as stochastic programming problems, which minimize expected costs subject to a range of uncertain outcomes relating to, for example, uncertain demands or generator output. A critical decision issue relating to such stochastic programs is: How many scenarios are required to ensure a specific error bound on the solution cost? Scenarios are the key mechanism used to sample from the uncertainty space, and the number of scenarios drives computational difficulty. We explore this question in the context of a long-term grid generation expansion problem, using a bounding procedure introduced by Mak, Morton, and Wood. We discuss experimental results using problem formulations independently minimizing expected cost and down-side risk. Our results indicate that we can use a surprisingly small number of scenarios to yield tight error bounds in the case of expected cost minimization, which has key practical implications. In contrast, error bounds in the case of risk minimization are significantly larger, suggesting more research is required in this area in order to achieve rigorous solutions for decision makers.

  13. Confidence Intervals for Random Forests: The Jackknife and the Infinitesimal Jackknife

    PubMed Central

    Wager, Stefan; Hastie, Trevor; Efron, Bradley

    2014-01-01

    We study the variability of predictions made by bagged learners and random forests, and show how to estimate standard errors for these methods. Our work builds on variance estimates for bagging proposed by Efron (1992, 2013) that are based on the jackknife and the infinitesimal jackknife (IJ). In practice, bagged predictors are computed using a finite number B of bootstrap replicates, and working with a large B can be computationally expensive. Direct applications of jackknife and IJ estimators to bagging require B = Θ(n^1.5) bootstrap replicates to converge, where n is the size of the training set. We propose improved versions that only require B = Θ(n) replicates. Moreover, we show that the IJ estimator requires 1.7 times fewer bootstrap replicates than the jackknife to achieve a given accuracy. Finally, we study the sampling distributions of the jackknife and IJ variance estimates themselves. We illustrate our findings with multiple experiments and simulation studies. PMID:25580094
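
    The IJ estimator for bagging has a compact form: the variance estimate is the sum, over training points, of the squared covariance (taken across bootstrap replicates) between the point's bootstrap count and the replicate's prediction. A minimal sketch with toy numbers, omitting the finite-B bias correction that is the paper's contribution:

```python
import numpy as np

def infinitesimal_jackknife(N, preds):
    """IJ variance estimate for a bagged prediction (Efron-style):
    V_IJ = sum_i cov_b(N[b, i], preds[b])^2, where N[b, i] counts how many
    times training point i entered bootstrap replicate b and preds[b] is
    replicate b's prediction. Sketch only; no finite-B correction."""
    B = preds.shape[0]
    N_c = N - N.mean(axis=0)        # center bootstrap counts per point
    p_c = preds - preds.mean()      # center per-replicate predictions
    cov = N_c.T @ p_c / B           # covariance over replicates, per point
    return float(np.sum(cov ** 2))

# toy data: 3 bootstrap replicates over 2 training points
N = np.array([[2.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
preds = np.array([1.0, 3.0, 2.0])
v = infinitesimal_jackknife(N, preds)
```

Because the estimate reuses the bootstrap counts already drawn while training the ensemble, it adds essentially no cost on top of bagging itself, which is what makes the finite-B behavior studied in the paper the main practical concern.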

  14. Statistical Significance, Effect Size Reporting, and Confidence Intervals: Best Reporting Strategies

    ERIC Educational Resources Information Center

    Capraro, Robert M.

    2004-01-01

    With great interest the author read the May 2002 editorial in the "Journal for Research in Mathematics Education (JRME)" (King, 2002) regarding changes to the 5th edition of the "Publication Manual of the American Psychological Association" (APA, 2001). Of special note to him, and of great import to the field of mathematics education research, are…

  15. Bootstrap Standard Error and Confidence Intervals for the Difference between Two Squared Multiple Correlation Coefficients

    ERIC Educational Resources Information Center

    Chan, Wai

    2009-01-01

    A typical question in multiple regression analysis is to determine if a set of predictors gives the same degree of predictive power in two different populations. Olkin and Finn (1995) proposed two asymptotic-based methods for testing the equality of two population squared multiple correlations, ρ₁² and…

  16. A Comparison of Composite Reliability Estimators: Coefficient Omega Confidence Intervals in the Current Literature

    ERIC Educational Resources Information Center

    Padilla, Miguel A.; Divers, Jasmin

    2016-01-01

    Coefficient omega and alpha are both measures of the composite reliability for a set of items. Unlike coefficient alpha, coefficient omega remains unbiased with congeneric items with uncorrelated errors. Despite this ability, coefficient omega is not as widely used and cited in the literature as coefficient alpha. Reasons for coefficient omega's…

  17. Bootstrap Standard Error and Confidence Intervals for the Correlation Corrected for Range Restriction: A Simulation Study

    ERIC Educational Resources Information Center

    Chan, Wai; Chan, Daniel W.-L.

    2004-01-01

    The standard Pearson correlation coefficient is a biased estimator of the true population correlation, ρ, when the predictor and the criterion are range restricted. To correct the bias, the correlation corrected for range restriction, r_c, has been recommended, and a standard formula based on asymptotic results for estimating its standard…

  18. Confidence Intervals for a Semiparametric Approach to Modeling Nonlinear Relations among Latent Variables

    ERIC Educational Resources Information Center

    Pek, Jolynn; Losardo, Diane; Bauer, Daniel J.

    2011-01-01

    Compared to parametric models, nonparametric and semiparametric approaches to modeling nonlinearity between latent variables have the advantage of recovering global relationships of unknown functional form. Bauer (2005) proposed an indirect application of finite mixtures of structural equation models where latent components are estimated in the…

  19. Considering Teaching History and Calculating Confidence Intervals in Student Evaluations of Teaching Quality

    ERIC Educational Resources Information Center

    Fraile, Rubén; Bosch-Morell, Francisco

    2015-01-01

    Lecturer promotion and tenure decisions are critical both for university management and for the affected lecturers. Therefore, they should be made cautiously and based on reliable information. Student evaluations of teaching quality are among the most used and analysed sources of such information. However, to date little attention has been paid in…

  20. A recipe for the construction of confidence limits

    SciTech Connect

    Iain A Bertram et al.

    2000-04-12

    In this note, the authors present the recipe recommended by the Search Limits Committee for the construction of confidence intervals for the use of the D0 collaboration. In another note, currently in preparation, they present the rationale for this recipe, a critique of the current literature on this topic, and several examples of the use of the method. This note is intended to fill the collaboration's need for an available reference until the more complete note is finished. Section 2 introduces the notation used in this note, and Section 3 contains the suggested recipe.

  1. Interval neural networks

    SciTech Connect

    Patil, R.B.

    1995-05-01

    Traditional neural networks such as multi-layered perceptrons (MLPs) use example patterns, i.e., pairs of real-valued observation vectors (x, y), to approximate a function f(x) = y. To determine the parameters of the approximation, a special version of the gradient descent method called back-propagation is widely used. In many situations, observations of the input and output variables are not precise; instead, we usually have intervals of possible values. The imprecision could be due to the limited accuracy of the measuring instrument or could reflect genuine uncertainty in the observed variables. In such situations, the input and output data consist of mixed data types: intervals and precise numbers. Function approximation in interval domains is considered in this paper. We discuss a modification of the classical back-propagation learning algorithm for interval domains. Results are presented with simple examples demonstrating a few properties of nonlinear interval mapping, such as noise resistance and finding sets of solutions to the function approximation problem.

  2. Confidence in Parenting: Is Parent Education Working?

    ERIC Educational Resources Information Center

    Stanberry, J. Phillip; Stanberry, Anne M.

    This study examined parents' feelings of confidence in their parenting ability among 56 individuals enrolled in 5 parent education programs in Mississippi, hypothesizing that there would be significant correlations between personal authority in the family system and a parent's confidence in performing the various roles of parenting. Based on…

  3. Preservice Educators' Confidence in Addressing Sexuality Education

    ERIC Educational Resources Information Center

    Wyatt, Tammy Jordan

    2009-01-01

    This study examined 328 preservice educators' level of confidence in addressing four sexuality education domains and 21 sexuality education topics. Significant differences in confidence levels across the four domains were found for gender, academic major, sexuality education philosophy, and sexuality education knowledge. Preservice educators…

  4. Examining Response Confidence in Multiple Text Tasks

    ERIC Educational Resources Information Center

    List, Alexandra; Alexander, Patricia A.

    2015-01-01

    Students' confidence in their responses to a multiple text-processing task and their justifications for those confidence ratings were investigated. Specifically, 215 undergraduates responded to two academic questions, differing by type (i.e., discrete and open-ended) and by domain (i.e., developmental psychology and astrophysics), using a digital…

  5. Hypercorrection of High Confidence Errors in Children

    ERIC Educational Resources Information Center

    Metcalfe, Janet; Finn, Bridgid

    2012-01-01

    Three experiments investigated whether the hypercorrection effect--the finding that errors committed with high confidence are easier, rather than more difficult, to correct than are errors committed with low confidence--occurs in grade school children as it does in young adults. All three experiments showed that Grade 3-6 children hypercorrected…

  6. Self-Confidence and Metacognitive Processes

    ERIC Educational Resources Information Center

    Kleitman, Sabina; Stankov, Lazar

    2007-01-01

    This paper examines the nature of the Self-confidence factor. In particular, we study the relationship between this factor and cognitive, metacognitive, and personality measures. Participants (N=296) were administered a battery of seven cognitive tests that assess three constructs: accuracy, speed, and confidence. Participants were also given the…

  7. Confidence and Competence with Mathematical Procedures

    ERIC Educational Resources Information Center

    Foster, Colin

    2016-01-01

    Confidence assessment (CA), in which students state alongside each of their answers a confidence level expressing how certain they are, has been employed successfully within higher education. However, it has not been widely explored with school pupils. This study examined how school mathematics pupils (N = 345) in five different secondary schools…

  8. Confidence Wagering during Mathematics and Science Testing

    ERIC Educational Resources Information Center

    Jack, Brady Michael; Liu, Chia-Ju; Chiu, Hoan-Lin; Shymansky, James A.

    2009-01-01

    This proposal presents the results of a case study involving five 8th grade Taiwanese classes, two mathematics and three science classes. These classes used a new method of testing called confidence wagering. This paper advocates the position that confidence wagering can predict the accuracy of a student's test answer selection during…

  9. Similarity and confidence in artificial grammar learning.

    PubMed

    Tunney, Richard J

    2010-01-01

    Three experiments examined the relationship between similarity ratings and confidence ratings in artificial grammar learning. In Experiment 1 participants rated the similarity of test items to study exemplars. Regression analyses revealed these to be related to some of the objective measures of similarity that have previously been implicated in categorization decisions. In Experiment 2 participants made grammaticality decisions and rated either their confidence in the accuracy of their decisions or the similarity of the test items to the study items. Regression analyses showed that the grammaticality decisions were predicted by the similarity ratings obtained in Experiment 1. Points on the receiver operating characteristics (ROC) curves for the similarity and confidence ratings were closely matched. These data suggest that meta-cognitive judgments of confidence are predicated on structural knowledge of similarity. Experiment 3 confirmed this by showing that confidence ratings to median similarity probe items changed according to the similarity of preceding items.

  10. An informative confidence metric for ATR.

    SciTech Connect

    Bow, Wallace Johnston Jr.; Richards, John Alfred; Bray, Brian Kenworthy

    2003-03-01

    Automatic or assisted target recognition (ATR) is an important application of synthetic aperture radar (SAR). Most ATR researchers have focused on the core problem of declaration-that is, detection and identification of targets of interest within a SAR image. For ATR declarations to be of maximum value to an image analyst, however, it is essential that each declaration be accompanied by a reliability estimate or confidence metric. Unfortunately, the need for a clear and informative confidence metric for ATR has generally been overlooked or ignored. We propose a framework and methodology for evaluating the confidence in an ATR system's declarations and competing target hypotheses. Our proposed confidence metric is intuitive, informative, and applicable to a broad class of ATRs. We demonstrate that seemingly similar ATRs may differ fundamentally in the ability-or inability-to identify targets with high confidence.

  11. Updating misconceptions: effects of age and confidence.

    PubMed

    Cyr, Andrée-Ann; Anderson, Nicole D

    2013-06-01

    Young adults are more likely to correct an initial higher confidence error than a lower confidence error (Butterfield & Metcalfe, 2001). This hypercorrection effect has never been investigated among older adults, although features of the standard paradigm (free recall, metacognitive judgments) and prior evidence of age-related error resolution deficits (see Clare & Jones, 2008) suggest that they may not show this effect. In Study 1, we used free recall and a 7-point confidence scale; in Study 2, we used multiple-choice questions, and participants indicated how many alternatives they had narrowed their options down to prior to answering. In both studies, younger and older adults showed a hypercorrection effect, and this effect was equivalent between groups in Study 2 when free recall and explicit confidence ratings were not required. These results are consistent with our previous work (Cyr & Anderson, 2012) showing that older adults can successfully resolve learning errors when the learning context provides sufficient support.

  12. Confidence regions of planar cardiac vectors

    NASA Technical Reports Server (NTRS)

    Dubin, S.; Herr, A.; Hunt, P.

    1980-01-01

    A method for plotting the confidence regions of vectorial data obtained in electrocardiology is presented. The 90%, 95% and 99% confidence regions of cardiac vectors represented in a plane are obtained in the form of an ellipse centered at coordinates corresponding to the means of a sample selected at random from a bivariate normal distribution. An example of such a plot for the frontal plane QRS mean electrical axis for 80 horses is also presented.
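
    A standard way to obtain such elliptical confidence regions combines the sample mean, the covariance of the mean, and a chi-square(2) quantile. The sketch below is a generic construction of this kind, not the authors' procedure, and it targets the mean vector (e.g., the mean electrical axis) rather than a tolerance region for individual vectors:

```python
import numpy as np

def mean_confidence_ellipse(points, level=0.95):
    """Center, semi-axes, and orientation (degrees) of the level-confidence
    ellipse for the mean of bivariate data, assuming approximate bivariate
    normality. For 2 degrees of freedom the chi-square quantile has the
    closed form -2*ln(1 - level). A generic sketch, not the paper's code."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    center = pts.mean(axis=0)
    cov = np.cov(pts, rowvar=False) / n          # covariance of the mean
    evals, evecs = np.linalg.eigh(cov)           # principal axes (ascending)
    c2 = -2.0 * np.log(1.0 - level)              # chi-square(2 df) quantile
    semi_axes = np.sqrt(c2 * evals)              # ellipse half-widths
    angle = np.degrees(np.arctan2(evecs[1, 1], evecs[0, 1]))
    return center, semi_axes, angle

# toy cardiac-vector endpoints in a frontal plane
pts = [[0, 0], [2, 0], [0, 2], [2, 2]]
center, semi_axes, angle = mean_confidence_ellipse(pts)
```

Evaluating the same construction at levels 0.90, 0.95, and 0.99 yields the three nested ellipses the abstract describes; with 2 degrees of freedom the quantiles are about 4.61, 5.99, and 9.21.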

  13. Developing Confidence Limits For Reliability Of Software

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.

    1991-01-01

    Technique developed for estimating reliability of software by use of Moranda geometric de-eutrophication model. Pivotal method enables straightforward construction of exact bounds with associated degree of statistical confidence about reliability of software. Confidence limits thus derived provide precise means of assessing quality of software. Limits take into account number of bugs found while testing and effects of sampling variation associated with random order of discovering bugs.

  14. Confidence in leadership among the newly qualified.

    PubMed

    Bayliss-Pratt, Lisa; Morley, Mary; Bagley, Liz; Alderson, Steven

    2013-10-23

    The Francis report highlighted the importance of strong leadership from health professionals but it is unclear how prepared those who are newly qualified feel to take on a leadership role. We aimed to assess the confidence of newly qualified health professionals working in the West Midlands in the different competencies of the NHS Leadership Framework. Most respondents felt confident in their abilities to demonstrate personal qualities and work with others, but less so at managing or improving services or setting direction.

  15. Worse than enemies. The CEO's destructive confidant.

    PubMed

    Sulkowicz, Kerry J

    2004-02-01

    The CEO is often the most isolated and protected employee in the organization. Few leaders, even veteran CEOs, can do the job without talking to someone about their experiences, which is why most develop a close relationship with a trusted colleague, a confidant to whom they can tell their thoughts and fears. In his work with leaders, the author has found that many CEO-confidant relationships function very well. The confidants keep their leaders' best interests at heart. They derive their gratification vicariously, through the help they provide rather than through any personal gain, and they are usually quite aware that a person in their position can potentially abuse access to the CEO's innermost secrets. Unfortunately, almost as many confidants will end up hurting, undermining, or otherwise exploiting CEOs when the executives are at their most vulnerable. These confidants rarely make the headlines, but behind the scenes they do enormous damage to the CEO and to the organization as a whole. What's more, the leader is often the last one to know when or how the confidant relationship became toxic. The author has identified three types of destructive confidants. The reflector mirrors the CEO, constantly reassuring him that he is the "fairest CEO of them all." The insulator buffers the CEO from the organization, preventing critical information from getting in or out. And the usurper cunningly ingratiates himself with the CEO in a desperate bid for power. This article explores how the CEO-confidant relationship plays out with each type of adviser and suggests ways CEOs can avoid these destructive relationships.

  16. Interval arithmetic operations for uncertainty analysis with correlated interval variables

    NASA Astrophysics Data System (ADS)

    Jiang, Chao; Fu, Chun-Ming; Ni, Bing-Yu; Han, Xu

    2016-08-01

    A new interval arithmetic method is proposed for solving interval functions with correlated intervals, through which the overestimation problem inherent in interval analysis can be significantly alleviated. The correlation between interval parameters is defined by the multidimensional parallelepiped model, which describes correlated and independent interval variables in a unified framework. The original correlated interval variables are transformed into a standard space without correlation, yielding the relationship between the original variables and the standard interval variables. Expressions for the four basic interval arithmetic operations, namely addition, subtraction, multiplication, and division, are then given in the standard space. Finally, several numerical examples and a two-step bar are used to demonstrate the effectiveness of the proposed method.
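
    The four basic operations have simple closed forms in classical (uncorrelated) interval arithmetic. The sketch below, with illustrative names, also demonstrates the dependency problem (x - x does not collapse to [0, 0]), which is exactly the overestimation that correlation-aware methods such as the one in this abstract aim to reduce:

```python
class Interval:
    """Classical interval arithmetic, treating all operands as independent.
    Illustrative sketch; correlation-aware models tighten these bounds."""

    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # extremes occur at endpoint products
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __truediv__(self, other):
        if other.lo <= 0.0 <= other.hi:
            raise ZeroDivisionError("divisor interval contains zero")
        return self * Interval(1.0 / other.hi, 1.0 / other.lo)

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"
```

    For example, with x = [1, 2] the expression x - x evaluates to [-1, 1] rather than [0, 0], because the arithmetic cannot see that both operands are the same (perfectly correlated) variable.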

  17. Multichannel interval timer (MINT)

    SciTech Connect

    Kimball, K.B.

    1982-06-01

    A prototype Multichannel INterval Timer (MINT) has been built for measuring signal Time of Arrival (TOA) from sensors placed in blast environments. The MINT is intended to reduce the space, equipment costs, and data reduction efforts associated with traditional analog TOA recording methods, making it more practical to field the large arrays of TOA sensors required to characterize blast environments. This document describes the MINT design features, provides the information required for installing and operating the system, and presents proposed improvements for the next generation system.

  18. Correlations Redux: Asymptotic Confidence Limits for Partial and Squared Multiple Correlations.

    ERIC Educational Resources Information Center

    Graf, Richard G.; Alf, Edward F., Jr.

    1999-01-01

    I. Olkin and J. Finn (1995) developed expressions for confidence intervals for functions of simple, partial, and multiple correlations. Describes procedures and computer programs for solving these problems and extending the methods for any number of predictors or for partialing out any number of variables. (Author/SLD)

  19. Interval-valued random functions and the kriging of intervals

    SciTech Connect

    Diamond, P.

    1988-04-01

    Estimation procedures using data that include some values known to lie within certain intervals are usually regarded as problems of constrained optimization. A different approach is used here. Intervals are treated as elements of a positive cone, obeying the arithmetic of interval analysis, and positive interval-valued random functions are discussed. A kriging formalism for interval-valued data is developed. It provides estimates that are themselves intervals. In this context, the condition that kriging weights be positive is seen to arise in a natural way. A numerical example is given, and the extension to universal kriging is sketched.

  20. Notes on interval estimation of the gamma correlation under stratified random sampling.

    PubMed

    Lui, Kung-Jong; Chang, Kuang-Chao

    2012-07-01

    We have developed four asymptotic interval estimators in closed forms for the gamma correlation under stratified random sampling, including the confidence interval based on the most commonly used weighted-least-squares (WLS) approach (CIWLS), the confidence interval calculated from the Mantel-Haenszel (MH) type estimator with the Fisher-type transformation (CIMHT), the confidence interval using the fundamental idea of Fieller's Theorem (CIFT) and the confidence interval derived from a monotonic function of the WLS estimator of Agresti's α with the logarithmic transformation (MWLSLR). To evaluate the finite-sample performance of these four interval estimators and note the possible loss of accuracy in application of both Wald's confidence interval and MWLSLR using pooled data without accounting for stratification, we employ Monte Carlo simulation. We use the data taken from a general social survey studying the association between the income level and job satisfaction with strata formed by genders in black Americans published elsewhere to illustrate the practical use of these interval estimators. PMID:22622622

  1. Confidence in biopreparedness authorities among Finnish conscripts.

    PubMed

    Vartti, Anne-Marie; Aro, Arja R; Jormanainen, Vesa; Henriksson, Markus; Nikkari, Simo

    2010-08-01

    A large sample of Finnish military conscripts of the armored brigade were questioned on the extent to which they trusted the information given by biopreparedness authorities (such as the police, military, health care, and public health institutions) and how confident they were in those authorities' ability to protect the public during a potential infectious disease outbreak, whether from natural or deliberate causes. Participants answered a written questionnaire during their initial health inspection in July 2007. Of a total of 1,000 conscripts, 953 male conscripts returned the questionnaire. The mean sum scores for confidence in the information given by biopreparedness authorities and the media on natural and bioterrorism-related outbreaks (range = 0-30) were 20.14 (SD = 7.79) and 20.12 (SD = 7.69), respectively. Mean sum scores for the respondents' confidence in the ability of the biopreparedness authorities to protect the public during natural and bioterrorism-related outbreaks (range = 0-25) were 16.04 (SD = 5.78) and 16.17 (SD = 5.89). Most respondents indicated that during a natural outbreak they would have confidence in information provided by a health care institution such as central hospitals and primary health care centers, whereas in the case of bioterrorism the respondents indicated that they would have confidence in the defense forces and central hospitals. PMID:20731266

  2. Chiropractic Interns' Perceptions of Stress and Confidence

    PubMed Central

    Spegman, Adele Mattinat; Herrin, Sean

    2007-01-01

    Objective: Psychological stress has been shown to influence learning and performance among medical and graduate students. Few studies have examined psychological stress in chiropractic students and interns. This preliminary study explored interns' perceptions around stress and confidence at the midpoint of professional training. Methods: This pilot study used a mixed-methods approach, combining rating scales and modified qualitative methods, to explore interns' lived experience. Eighty-eight interns provided ratings of stress and confidence and narrative responses to broad questions. Results: Participants reported multiple sources of stress; stress and confidence ratings were inversely related. Interns described stress as forced priorities, inadequate time, and perceptions of weak performance. Two themes, “convey respect” and “guide real-world learning,” describe faculty actions that minimized stress and promoted confidence. Conclusion: Chiropractic interns experience varying degrees of stress, which is managed with diverse strategies. The development of confidence appears to be influenced by the consistency and manner in which feedback is provided. Although faculty cannot control the amount or sources of stress, awareness of interns' perceptions can strengthen our effectiveness as educators. PMID:18483584

  3. Adaptive Confidence Bands for Nonparametric Regression Functions

    PubMed Central

    Cai, T. Tony; Low, Mark; Ma, Zongming

    2014-01-01

    A new formulation for the construction of adaptive confidence bands in non-parametric function estimation problems is proposed. Confidence bands are constructed which have size that adapts to the smoothness of the function while guaranteeing that both the relative excess mass of the function lying outside the band and the measure of the set of points where the function lies outside the band are small. It is shown that the bands adapt over a maximum range of Lipschitz classes. The adaptive confidence band can be easily implemented in standard statistical software with wavelet support. Numerical performance of the procedure is investigated using both simulated and real datasets. The numerical results agree well with the theoretical analysis. The procedure can be easily modified and used for other nonparametric function estimation models. PMID:26269661

  4. Distinguishing highly confident accurate and inaccurate memory: insights about relevant and irrelevant influences on memory confidence.

    PubMed

    Chua, Elizabeth F; Hannula, Deborah E; Ranganath, Charan

    2012-01-01

    It is generally believed that accuracy and confidence in one's memory are related, but there are many instances when they diverge. Accordingly, it is important to disentangle the factors that contribute to memory accuracy and confidence, especially those factors that contribute to confidence but not accuracy. We used eye movements to separately measure the effects of fluent cue processing, the target recognition experience, and relative evidence assessment on recognition confidence and accuracy. Eye movements were monitored during a face-scene associative recognition task, in which participants first saw a scene cue, followed by a forced-choice recognition test for the associated face, with confidence ratings. Eye movement indices of the target recognition experience were largely indicative of accuracy, and showed a relationship to confidence for accurate decisions. In contrast, eye movements during the scene cue raised the possibility that more fluent cue processing was related to higher confidence for both accurate and inaccurate recognition decisions. In a second experiment we manipulated cue familiarity, and therefore cue fluency. Participants showed higher confidence for cue-target associations when the cue was more familiar, especially for incorrect responses. These results suggest that over-reliance on cue familiarity and under-reliance on the target recognition experience may lead to erroneous confidence.

  5. Current Developments in Measuring Academic Behavioural Confidence

    ERIC Educational Resources Information Center

    Sander, Paul

    2009-01-01

    Using published findings and by further analyses of existing data, the structure, validity and utility of the Academic Behavioural Confidence scale (ABC) is critically considered. Validity is primarily assessed through the scale's relationship with other existing scales as well as by looking for predicted differences. The utility of the ABC scale…

  6. Observed Consultation: Confidence and Accuracy of Assessors

    ERIC Educational Resources Information Center

    Tweed, Mike; Ingham, Christopher

    2010-01-01

    Judgments made by the assessors observing consultations are widely used in the assessment of medical students. The aim of this research was to study judgment accuracy and confidence and the relationship between these. Assessors watched recordings of consultations, scoring the students on: a checklist of items; attributes of consultation; a…

  7. The Confidence Factor in Liberal Education

    ERIC Educational Resources Information Center

    Gordon, Daniel

    2012-01-01

    With the US unemployment rate at 9 percent, it's rational for college students to lose confidence in the liberal arts and to opt for a vocational major. Or is it? There is a compelling economic case for the liberal arts. Against those who call for more professional training, liberal educators should concede nothing. However, they do have a…

  8. Mixed Confidence Estimation for Iterative CT Reconstruction.

    PubMed

    Perlmutter, David S; Kim, Soo Mee; Kinahan, Paul E; Alessio, Adam M

    2016-09-01

    Dynamic (4D) CT imaging is used in a variety of applications, but the two major drawbacks of the technique are its increased radiation dose and longer reconstruction time. Here we present a statistical analysis of our previously proposed Mixed Confidence Estimation (MCE) method that addresses both these issues. This method, where framed iterative reconstruction is only performed on the dynamic regions of each frame while static regions are fixed across frames to a composite image, was proposed to reduce computation time. In this work, we generalize the previous method to describe any application where a portion of the image is known with higher confidence (static, composite, lower-frequency content, etc.) and a portion of the image is known with lower confidence (dynamic, targeted, etc.). We show that by splitting the image space into higher and lower confidence components, MCE can lower the estimator variance in both regions compared to conventional reconstruction. We present a theoretical argument for this reduction in estimator variance and verify this argument with proof-of-principle simulations. We also propose a fast approximation of the variance of images reconstructed with MCE and confirm that this approximation is accurate compared to analytic calculations and multi-realization estimates of image variance. This MCE method requires less computation time and provides reduced image variance for imaging scenarios where portions of the image are known with more certainty than others, allowing for potentially reduced radiation dose and/or improved dynamic imaging. PMID:27008663

  9. Sources of Confidence in School Community Councils

    ERIC Educational Resources Information Center

    Nygaard, Richard

    2010-01-01

    Three Utah middle level school community councils participated in a qualitative strengths-based process evaluation. Two of the school community councils were identified as exemplary, and the third was just beginning to function. One aspect of the evaluation was the source of school community council members' confidence. Each school had unique…

  10. Evaluating Measures of Optimism and Sport Confidence

    ERIC Educational Resources Information Center

    Fogarty, Gerard J.; Perera, Harsha N.; Furst, Andrea J.; Thomas, Patrick R.

    2016-01-01

    The psychometric properties of the Life Orientation Test-Revised (LOT-R), the Sport Confidence Inventory (SCI), and the Carolina SCI (CSCI) were examined in a study involving 260 athletes. The study aimed to test the dimensional structure, convergent and divergent validity, and invariance over competition level of scores generated by these…

  11. The Effect of Adaptive Confidence Strategies in Computer-Assisted Instruction on Learning and Learner Confidence

    ERIC Educational Resources Information Center

    Warren, Richard Daniel

    2012-01-01

    The purpose of this research was to investigate the effects of including adaptive confidence strategies in instructionally sound computer-assisted instruction (CAI) on learning and learner confidence. Seventy-one general educational development (GED) learners recruited from various GED learning centers at community colleges in the southeast United…

  12. Random selection as a confidence building tool

    SciTech Connect

    Macarthur, Duncan W; Hauck, Danielle; Langner, Diana; Thron, Jonathan; Smith, Morag; Williams, Richard

    2010-01-01

    Any verification measurement performed on potentially classified nuclear material must satisfy two seemingly contradictory constraints. First and foremost, no classified information can be released. At the same time, the monitoring party must have confidence in the veracity of the measurement. The first concern can be addressed by performing the measurements within the host facility using instruments under the host's control. Because the data output in this measurement scenario is also under host control, it is difficult for the monitoring party to have confidence in that data. One technique for addressing this difficulty is random selection. The concept of random selection can be thought of as four steps: (1) The host presents several 'identical' copies of a component or system to the monitor. (2) One (or more) of these copies is randomly chosen by the monitors for use in the measurement system. (3) Similarly, one or more is randomly chosen to be validated further at a later date in a monitor-controlled facility. (4) Because the two components or systems are identical, validation of the 'validation copy' is equivalent to validation of the measurement system. This procedure sounds straightforward, but effective application may be quite difficult. Although random selection is often viewed as a panacea for confidence building, the amount of confidence generated depends on the monitor's continuity of knowledge for both validation and measurement systems. In this presentation, we will discuss the random selection technique, as well as where and how this technique might be applied to generate maximum confidence. In addition, we will discuss the role of modular measurement-system design in facilitating random selection and describe a simple modular measurement system incorporating six small ³He neutron detectors and a single high-purity germanium gamma detector.

  13. A neural-fuzzy model with confidence measure for controlled stressed-lap surface shape presentation

    NASA Astrophysics Data System (ADS)

    Chen, Minyou; Wan, Yongjian; Wu, Fan; Xie, Kaigui; Wang, Mingyu; Fan, Bin

    2009-05-01

    In the computer-controlled polishing of large aspheric mirrors, it is crucially important to build an accurate stressed-lap surface model for shape control, and it is desirable to provide a practical measure of prediction confidence to assess the reliability of the resulting models. To build a reliable prediction model representing the surface shape of the stressed lap when polishing large-aperture, highly aspheric optical surfaces, this paper proposes a predictive model with its own confidence interval estimate based on a fuzzy neural network. The calculation of the confidence interval accounts for the training data distribution and the accuracy of the trained model for the given input-output data. Simulation results show that the proposed confidence interval estimation reflects the data distribution and extrapolation correctly, and works well on the high-dimensional sparse data set of detected stressed-lap surface shape changes. The original data from the micro-displacement sensor matrix were used to train the neural network model. The experimental results showed that the proposed model can represent the surface shape of the stressed lap accurately and facilitate the computer-controlled optical polishing process.

  14. Computation of the intervals of uncertainties about the parameters found for identification

    NASA Technical Reports Server (NTRS)

    Mereau, P.; Raymond, J.

    1982-01-01

    A modeling method for calculating the intervals of uncertainty of parameters found by identification is described. The region of confidence and the general approach to the calculation of these intervals are discussed. The general subprograms for the determination of dimensions are described, together with their organizational charts, the tests carried out, and the listings of the different subprograms.

  15. Experimental uncertainty estimation and statistics for data having interval uncertainty.

    SciTech Connect

    Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)

    2007-05-01

    This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
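
    For the simplest statistic the report discusses, the sample mean, the bounds over all point values consistent with the intervals are attained at the endpoints, because the mean is monotone in each datum. A toy sketch with illustrative names (harder statistics such as the variance require the specialized algorithms the report describes, and some are computationally hard in general):

```python
def interval_mean(intervals):
    """Tight bounds on the sample mean when each datum is known only as an
    interval (lo, hi): the mean is monotone in every datum, so its extreme
    values are attained at the interval endpoints."""
    n = len(intervals)
    return (sum(lo for lo, _ in intervals) / n,
            sum(hi for _, hi in intervals) / n)
```

    For example, the data {[1, 2], [2, 4], [0, 3]} yield a mean known only to lie in [1, 3].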

  16. An interval model updating strategy using interval response surface models

    NASA Astrophysics Data System (ADS)

    Fang, Sheng-En; Zhang, Qiu-Hu; Ren, Wei-Xin

    2015-08-01

    Stochastic model updating provides an effective way of handling uncertainties existing in real-world structures. In general, probabilistic theories, fuzzy mathematics or interval analyses are involved in the solution of inverse problems. However in practice, probability distributions or membership functions of structural parameters are often unavailable due to insufficient information of a structure. At this moment an interval model updating procedure shows its superiority in the aspect of problem simplification since only the upper and lower bounds of parameters and responses are sought. To this end, this study develops a new concept of interval response surface models for the purpose of efficiently implementing the interval model updating procedure. The frequent interval overestimation due to the use of interval arithmetic can be maximally avoided leading to accurate estimation of parameter intervals. Meanwhile, the establishment of an interval inverse problem is highly simplified, accompanied by a saving of computational costs. By this means a relatively simple and cost-efficient interval updating process can be achieved. Lastly, the feasibility and reliability of the developed method have been verified against a numerical mass-spring system and also against a set of experimentally tested steel plates.

  17. A variance based confidence criterion for ERA identified modal parameters. [Eigensystem Realization Algorithm

    NASA Technical Reports Server (NTRS)

    Longman, Richard W.; Juang, Jer-Nan

    1988-01-01

    The realization theory is developed in a systematic manner for the Eigensystem Realization Algorithm (ERA) used for system identification. First, perturbation results are obtained which describe the linearized changes in the identified parameters resulting from small changes in the data. Formulas are then derived that can be used to evaluate the variance of each of the identified parameters, assuming that the noise level is sufficiently low to allow the application of linearized results. These variances can be converted to give confidence intervals for each of the parameters for any chosen confidence level.
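
    The final step described here, converting a parameter's variance into a confidence interval at a chosen level, can be sketched under the usual normal approximation (names are illustrative, not from the paper):

```python
import math
from statistics import NormalDist

def variance_to_ci(estimate, variance, level=0.95):
    """Two-sided confidence interval for an identified parameter, given its
    estimate and linearized variance, assuming approximate normality."""
    z = NormalDist().inv_cdf(0.5 + level / 2.0)  # e.g. 1.96 for 95%
    half = z * math.sqrt(variance)
    return estimate - half, estimate + half
```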

  18. Comparing interval estimates for small sample ordinal CFA models.

    PubMed

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small-sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing the coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.
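
    The coverage analysis this abstract advocates is easy to reproduce in miniature: simulate many samples, build an interval from each, and count how often the true value is covered. The sketch below (illustrative, not the study's code) shows the undercoverage that results from ignoring small-sample variability, here by using a z-interval where a t-interval is needed:

```python
import math
import random
from statistics import NormalDist, mean, stdev

def z_interval_coverage(n, level=0.95, reps=10000, seed=1):
    """Monte Carlo coverage of the naive z-interval for a normal mean when
    the standard deviation is estimated from the sample.  At small n the
    interval undercovers, mirroring the inflated Type-I error rates that
    accompany underestimated standard errors."""
    rng = random.Random(seed)
    z = NormalDist().inv_cdf(0.5 + level / 2.0)
    hits = 0
    for _ in range(reps):
        sample = [rng.gauss(0.0, 1.0) for _ in range(n)]
        half = z * stdev(sample) / math.sqrt(n)
        hits += -half <= mean(sample) <= half  # true mean is 0
    return hits / reps
```

    With n = 5 the empirical coverage falls to roughly 0.88 instead of the nominal 0.95, while at n = 100 it is close to nominal.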

  19. Confidence as Bayesian Probability: From Neural Origins to Behavior.

    PubMed

    Meyniel, Florent; Sigman, Mariano; Mainen, Zachary F

    2015-10-01

    Research on confidence spreads across several sub-fields of psychology and neuroscience. Here, we explore how a definition of confidence as Bayesian probability can unify these viewpoints. This computational view entails that there are distinct forms in which confidence is represented and used in the brain, including distributional confidence, pertaining to neural representations of probability distributions, and summary confidence, pertaining to scalar summaries of those distributions. Summary confidence is, normatively, derived or "read out" from distributional confidence. Neural implementations of readout will trade off optimality versus flexibility of routing across brain systems, allowing confidence to serve diverse cognitive functions. PMID:26447574

  20. On the Confidence Limit of Hilbert Spectrum

    NASA Technical Reports Server (NTRS)

    Huang, Norden

    2003-01-01

    A confidence limit is a routine requirement for Fourier spectral analysis. But this confidence limit is established on the basis of ergodic theory: for a stationary process, the temporal average equals the ensemble average, so one can divide the data into n sections and treat each section as an independent realization. Most natural processes in general, and climate data in particular, are not stationary; therefore, there is a need for Hilbert spectral analysis for such processes. Here ergodic theory is no longer applicable. We propose to use the various adjustable parameters in the sifting process of the Empirical Mode Decomposition (EMD) method to obtain an ensemble of Intrinsic Mode Function (IMF) sets. Based on such an ensemble, we introduce a statistical measure in the form of confidence limits for the Intrinsic Mode Functions and, consequently, the Hilbert spectra. The criterion for selecting the various adjustable parameters is based on an orthogonality test of the resulting IMF sets. Length-of-day data from 1962 to 2001 are used to illustrate this new approach. Its implications for climate data analysis are also discussed.

  1. Confidence-Based Learning in Investment Analysis

    NASA Astrophysics Data System (ADS)

    Serradell-Lopez, Enric; Lara-Navarra, Pablo; Castillo-Merino, David; González-González, Inés

    The aim of this study is to determine the effectiveness of using multiple-choice tests in subjects related to business administration and management. To this end we used a multiple-choice test with specific questions to verify the extent of knowledge gained and the confidence and trust in the answers. The test was administered to a group of 200 students in the bachelor's degree programme in Business Administration and Management. The analysis was carried out in one subject within the scope of investment analysis and measured the level of knowledge gained and the degree of trust and security in the responses at two different points in the course. The measurements took into account the different levels of difficulty of the questions asked and the time spent by students completing the test. The results confirm that students are generally able to gain more knowledge along the way and show increases in the degree of trust and confidence in their answers. It is also confirmed that the difficulty levels of the questions, set a priori by those responsible for the subjects, are related to the levels of security and confidence in the answers. It is estimated that the improvement in the skills learned is viewed favourably by businesses and is especially important for the job placement of students.

  2. Determination of confidence limits for experiments with low numbers of counts. [Poisson-distributed photon counts from astrophysical sources

    NASA Technical Reports Server (NTRS)

    Kraft, Ralph P.; Burrows, David N.; Nousek, John A.

    1991-01-01

    Two different methods, classical and Bayesian, for determining confidence intervals involving Poisson-distributed data are compared. Particular consideration is given to cases where the number of counts observed is small and is comparable to the mean number of background counts. Reasons for preferring the Bayesian over the classical method are given. Tables of confidence limits calculated by the Bayesian method are provided for quick reference.
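
    For the zero-background case, the Bayesian upper limit with a flat prior on the Poisson mean has a simple characterization that can be computed by bisection, and for n = 0 observed counts at 95% confidence it reduces to -ln(0.05) ≈ 3.0. A sketch under those assumptions (function names are illustrative, not from the paper):

```python
import math

def poisson_cdf(k, mu):
    """P(N <= k) for N ~ Poisson(mu), summed term by term."""
    term = total = math.exp(-mu)
    for i in range(1, k + 1):
        term *= mu / i
        total += term
    return total

def bayesian_upper_limit(n_obs, cl=0.95, tol=1e-8):
    """Upper limit on the Poisson mean given n_obs counts and a flat prior,
    zero background: solve P(N <= n_obs | mu) = 1 - cl by bisection.  In
    this special case the flat-prior Bayesian and classical upper limits
    coincide."""
    lo, hi = 0.0, 10.0 * (n_obs + 1) + 10.0
    target = 1.0 - cl
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if poisson_cdf(n_obs, mid) > target:  # cdf decreases in mu
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    For example, 3 observed counts give a 95% upper limit of about 7.75 on the mean.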

  3. The Confidence Information Ontology: a step towards a standard for asserting confidence in annotations.

    PubMed

    Bastian, Frederic B; Chibucos, Marcus C; Gaudet, Pascale; Giglio, Michelle; Holliday, Gemma L; Huang, Hong; Lewis, Suzanna E; Niknejad, Anne; Orchard, Sandra; Poux, Sylvain; Skunca, Nives; Robinson-Rechavi, Marc

    2015-01-01

    Biocuration has become a cornerstone for analyses in biology, and to meet demand the number of annotations has grown considerably in recent years. However, the reliability of these annotations varies; it has thus become necessary to be able to assess the confidence in annotations. Although several resources already provide confidence information about the annotations that they produce, a standard way of providing such information has yet to be defined. This lack of standardization undermines the propagation of knowledge across resources, as well as the credibility of results from high-throughput analyses. Seeded at a workshop during the Biocuration 2012 conference, a working group has been created to address this problem. We present here the elements that were identified as essential for assessing confidence in annotations, as well as a draft ontology, the Confidence Information Ontology, to illustrate how the problems identified could be addressed. We hope that this effort will provide a home for discussing this major issue among the biocuration community. Tracker URL: https://github.com/BgeeDB/confidence-information-ontology Ontology URL: https://raw.githubusercontent.com/BgeeDB/confidence-information-ontology/master/src/ontology/cio-simple.obo

  4. Contraceptive confidence and timing of first birth in Moldova: an event history analysis of retrospective data

    PubMed Central

    Lyons-Amos, Mark; Padmadas, Sabu S; Durrant, Gabriele B

    2014-01-01

    Objectives To test the contraceptive confidence hypothesis in a modern context. The hypothesis is that women using effective or modern contraceptive methods have increased contraceptive confidence and hence a shorter interval between marriage and first birth than users of ineffective or traditional methods. We extend the hypothesis to incorporate the role of abortion, arguing that it acts as a substitute for contraception in the study context. Setting Moldova, a country in South-East Europe. Moldova exhibits high use of traditional contraceptive methods and abortion compared with other European countries. Participants Data are from a secondary analysis of the 2005 Moldovan Demographic and Health Survey, a nationally representative sample survey. 5377 unmarried women were selected. Primary and secondary outcome measures The outcome measure was the interval between marriage and first birth. This was modelled using a piecewise-constant hazard regression, with abortion and contraceptive method types as primary variables along with relevant sociodemographic controls. Results Women with high contraceptive confidence (modern method users) have a higher cumulative hazard of first birth 36 months following marriage (0.88 (0.87 to 0.89)) compared with women with low contraceptive confidence (traditional method users, cumulative hazard: 0.85 (0.84 to 0.85)). This is consistent with the contraceptive confidence hypothesis. There is a higher cumulative hazard of first birth among women with low (0.80 (0.79 to 0.80)) and moderate abortion propensities (0.76 (0.75 to 0.77)) than women with no abortion propensity (0.73 (0.72 to 0.74)) 24 months after marriage. Conclusions Effective contraceptive use tends to increase contraceptive confidence and is associated with a shorter interval between marriage and first birth. Increased use of abortion also tends to increase contraceptive confidence and shorten birth duration, although this effect is non-linear—women with a very high

  5. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
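The interval approach the abstract compares against standard error propagation can be imitated in a few lines. This is a toy sketch, not INTLAB (which is a MATLAB toolbox and additionally uses directed rounding to make the enclosures rigorous); the Ohm's-law numbers are illustrative:

```python
class Interval:
    """Closed interval [lo, hi]; each operation returns an enclosing interval."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        ps = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(ps), max(ps))
    def __truediv__(self, o):
        assert o.lo > 0 or o.hi < 0, "divisor interval must exclude zero"
        return self * Interval(1.0 / o.hi, 1.0 / o.lo)
    def width(self):
        return self.hi - self.lo

# Ohm's law with measured V = 10 +/- 0.1 V and I = 2 +/- 0.02 A:
R = Interval(9.9, 10.1) / Interval(1.98, 2.02)  # encloses every possible V / I
```

The resulting interval bounds the worst-case error directly, with no partial derivatives, which is the "much less effort" advantage the abstract describes for complicated formulas.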

  6. Engineering Student Self-Assessment through Confidence-Based Scoring

    ERIC Educational Resources Information Center

    Yuen-Reed, Gigi; Reed, Kyle B.

    2015-01-01

    A vital aspect of an answer is the confidence that goes along with it. Misstating the level of confidence one has in the answer can have devastating outcomes. However, confidence assessment is rarely emphasized during typical engineering education. The confidence-based scoring method described in this study encourages students to both think about…

  7. European security, nuclear weapons and public confidence

    SciTech Connect

    Gutteridge, W.

    1982-01-01

    This book presents papers on nuclear arms control in Europe. Topics considered include political aspects, the balance of power, nuclear disarmament in Europe, the implications of new conventional technologies, the neutron bomb, theater nuclear weapons, arms control in Northern Europe, naval confidence-building measures in the Baltic, the strategic balance in the Arctic Ocean, Arctic resources, threats to European stability, developments in South Africa, economic cooperation in Europe, European collaboration in science and technology after Helsinki, European cooperation in the area of electric power, and economic cooperation as a factor for the development of European security and cooperation.

  8. Confidence and conflicts of duty in surgery.

    PubMed

    Coggon, John; Wheeler, Robert

    2010-03-01

    This paper offers an exploration of the right to confidentiality, considering the moral importance of private information. It is shown that the legitimate value that individuals derive from confidentiality stems from the public interest. It is reassuring, therefore, that public interest arguments must be made to justify breaches of confidentiality. The General Medical Council's guidance gives very high importance to duties to maintain confidences, but also rightly acknowledges that, at times, there are more important duties that must be met. Nevertheless, this potential conflict of obligations may place the surgeon in difficult clinical situations, and examples of these are described, together with suggestions for resolution. PMID:20353640

  9. VARIABLE TIME-INTERVAL GENERATOR

    DOEpatents

    Gross, J.E.

    1959-10-31

    This patent relates to a pulse generator and more particularly to a time interval generator wherein the time interval between pulses is precisely determined. The variable time generator comprises two oscillators with one having a variable frequency output and the other a fixed frequency output. A frequency divider is connected to the variable oscillator for dividing its frequency by a selected factor and a counter is used for counting the periods of the fixed oscillator occurring during a cycle of the divided frequency of the variable oscillator. This defines the period of the variable oscillator in terms of that of the fixed oscillator. A circuit is provided for selecting as a time interval a predetermined number of periods of the variable oscillator. The output of the generator consists of a first pulse produced by a trigger circuit at the start of the time interval and a second pulse marking the end of the time interval produced by the same trigger circuit.

  10. Vaccination Confidence and Parental Refusal/Delay of Early Childhood Vaccines

    PubMed Central

    Gilkey, Melissa B.; McRee, Annie-Laurie; Magnus, Brooke E.; Reiter, Paul L.; Dempsey, Amanda F.; Brewer, Noel T.

    2016-01-01

    Objective To support efforts to address parental hesitancy towards early childhood vaccination, we sought to validate the Vaccination Confidence Scale using data from a large, population-based sample of U.S. parents. Methods We used weighted data from 9,354 parents who completed the 2011 National Immunization Survey. Parents reported on the immunization history of a 19- to 35-month-old child in their households. Healthcare providers then verified children’s vaccination status for vaccines including measles, mumps, and rubella (MMR), varicella, and seasonal flu. We used separate multivariable logistic regression models to assess associations between parents’ mean scores on the 8-item Vaccination Confidence Scale and vaccine refusal, vaccine delay, and vaccination status. Results A substantial minority of parents reported a history of vaccine refusal (15%) or delay (27%). Vaccination confidence was negatively associated with refusal of any vaccine (odds ratio [OR] = 0.58, 95% confidence interval [CI], 0.54–0.63) as well as refusal of MMR, varicella, and flu vaccines specifically. Negative associations between vaccination confidence and measures of vaccine delay were more moderate, including delay of any vaccine (OR = 0.81, 95% CI, 0.76–0.86). Vaccination confidence was positively associated with having received vaccines, including MMR (OR = 1.53, 95% CI, 1.40–1.68), varicella (OR = 1.54, 95% CI, 1.42–1.66), and flu vaccines (OR = 1.32, 95% CI, 1.23–1.42). Conclusions Vaccination confidence was consistently associated with early childhood vaccination behavior across multiple vaccine types. Our findings support expanding the application of the Vaccination Confidence Scale to measure vaccination beliefs among parents of young children. PMID:27391098
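The odds ratios and 95% confidence intervals quoted above come from multivariable logistic regression; the same confidence-interval mechanics can be shown on a simple 2x2 table. This is an illustrative Wald-interval sketch with made-up counts, not the paper's model:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Wald 95% CI for the odds ratio of the 2x2 table [[a, b], [c, d]]:
    OR = ad / bc, CI = exp(ln OR +/- z * sqrt(1/a + 1/b + 1/c + 1/d))."""
    or_hat = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return (or_hat,
            math.exp(math.log(or_hat) - z * se),
            math.exp(math.log(or_hat) + z * se))
```

An interval lying entirely below 1 (as with the refusal OR of 0.58, CI 0.54-0.63) indicates a negative association at the 95% level; one straddling 1 would not.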

  11. Multiple interval mapping for quantitative trait loci.

    PubMed Central

    Kao, C H; Zeng, Z B; Teasdale, R D

    1999-01-01

    A new statistical method for mapping quantitative trait loci (QTL), called multiple interval mapping (MIM), is presented. It uses multiple marker intervals simultaneously to fit multiple putative QTL directly in the model for mapping QTL. The MIM model is based on Cockerham's model for interpreting genetic parameters and the method of maximum likelihood for estimating genetic parameters. With the MIM approach, the precision and power of QTL mapping could be improved. Also, epistasis between QTL, genotypic values of individuals, and heritabilities of quantitative traits can be readily estimated and analyzed. Using the MIM model, a stepwise selection procedure with likelihood ratio test statistic as a criterion is proposed to identify QTL. This MIM method was applied to a mapping data set of radiata pine on three traits: brown cone number, tree diameter, and branch quality scores. Based on the MIM result, seven, six, and five QTL were detected for the three traits, respectively. The detected QTL individually contributed from approximately 1 to 27% of the total genetic variation. Significant epistasis between four pairs of QTL in two traits was detected, and the four pairs of QTL contributed approximately 10.38 and 14.14% of the total genetic variation. The asymptotic variances of QTL positions and effects were also provided to construct the confidence intervals. The estimated heritabilities were 0.5606, 0.5226, and 0. 3630 for the three traits, respectively. With the estimated QTL effects and positions, the best strategy of marker-assisted selection for trait improvement for a specific purpose and requirement can be explored. The MIM FORTRAN program is available on the worldwide web (http://www.stat.sinica.edu.tw/chkao/). PMID:10388834

  12. Diagnosing Anomalous Network Performance with Confidence

    SciTech Connect

    Settlemyer, Bradley W; Hodson, Stephen W; Kuehn, Jeffery A; Poole, Stephen W

    2011-04-01

    Variability in network performance is a major obstacle in effectively analyzing the throughput of modern high performance computer systems. High performance interconnection networks offer excellent best-case network latencies; however, highly parallel applications running on parallel machines typically require consistently high levels of performance to adequately leverage the massive amounts of available computing power. Performance analysts have usually quantified network performance using traditional summary statistics that assume the observational data is sampled from a normal distribution. In our examinations of network performance, we have found this method of analysis often provides too little data to understand anomalous network performance. Our tool, Confidence, instead uses an empirically derived probability distribution to characterize network performance. In this paper we describe several instances where the Confidence toolkit allowed us to understand and diagnose network performance anomalies that we could not adequately explore with the simple summary statistics provided by traditional measurement tools. In particular, we examine a multi-modal performance scenario encountered with an Infiniband interconnection network and we explore the performance repeatability on the custom Cray SeaStar2 interconnection network after a set of software and driver updates.

  13. Towards Measurement of Confidence in Safety Cases

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Pai, Ganesh J.; Habli, Ibrahim

    2011-01-01

    Arguments in safety cases are predominantly qualitative. This is partly attributed to the lack of sufficient design and operational data necessary to measure the achievement of high-dependability targets, particularly for safety-critical functions implemented in software. The subjective nature of many forms of evidence, such as expert judgment and process maturity, also contributes to the overwhelming dependence on qualitative arguments. However, where data for quantitative measurements is systematically collected, quantitative arguments provide far greater benefit than qualitative arguments in assessing confidence in the safety case. In this paper, we propose a basis for developing and evaluating integrated qualitative and quantitative safety arguments based on the Goal Structuring Notation (GSN) and Bayesian Networks (BN). The approach we propose identifies structures within GSN-based arguments where uncertainties can be quantified. BN are then used to provide a means to reason about confidence in a probabilistic way. We illustrate our approach using a fragment of a safety case for an unmanned aerial system and conclude with some preliminary observations.

  14. Subjective probability intervals: how to reduce overconfidence by interval evaluation.

    PubMed

    Winman, Anders; Hansson, Patrik; Juslin, Peter

    2004-11-01

    Format dependence implies that assessment of the same subjective probability distribution produces different conclusions about over- or underconfidence depending on the assessment format. In 2 experiments, the authors demonstrate that the overconfidence bias that occurs when participants produce intervals for an uncertain quantity is almost abolished when they evaluate the probability that the same intervals include the quantity. The authors successfully apply a method for adaptive adjustment of probability intervals as a debiasing tool and discuss a tentative explanation in terms of a naive sampling model. According to this view, people report their experiences accurately, but they are naive in that they treat both sample proportion and sample dispersion as unbiased estimators, yielding small bias in probability evaluation but strong bias in interval production. PMID:15521796
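The production/evaluation asymmetry rests on sample dispersion being a biased (too small) estimator in small samples. The following is not the authors' naive sampling model itself, just a quick Monte Carlo of the underlying mechanism with illustrative parameters: intervals built naively from tiny samples cover the true value far less often than their nominal level.

```python
import random
import statistics

def coverage_of_naive_intervals(n=4, trials=20000, z=1.96, seed=1):
    """Monte Carlo coverage of intervals mean +/- z * s / sqrt(n), where s is
    the raw (population-formula) sample standard deviation of n normal draws.
    Treating this dispersion as unbiased yields intervals that are too narrow,
    so coverage falls well below the nominal 95%."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
        m = statistics.fmean(xs)
        s = statistics.pstdev(xs)  # divides by n: dispersion biased low
        se = s / n ** 0.5
        hits += (m - z * se) <= 0.0 <= (m + z * se)
    return hits / trials
```

With n = 4 the empirical coverage lands near 80% rather than 95%, the flavor of overconfidence in interval production that the adaptive evaluation procedure is designed to correct.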

  15. Confidence and rejection in automatic speech recognition

    NASA Astrophysics Data System (ADS)

    Colton, Larry Don

    Automatic speech recognition (ASR) is performed imperfectly by computers. For some designated part (e.g., word or phrase) of the ASR output, rejection is deciding (yes or no) whether it is correct, and confidence is the probability (0.0 to 1.0) of it being correct. This thesis presents new methods of rejecting errors and estimating confidence for telephone speech. These are also called word or utterance verification and can be used in wordspotting or voice-response systems. Open-set or out-of-vocabulary situations are a primary focus. Language models are not considered. In vocabulary-dependent rejection all words in the target vocabulary are known in advance and a strategy can be developed for confirming each word. A word-specific artificial neural network (ANN) is shown to discriminate well, and scores from such ANNs are shown on a closed-set recognition task to reorder the N-best hypothesis list (N=3) for improved recognition performance. Segment-based duration and perceptual linear prediction (PLP) features are shown to perform well for such ANNs. The majority of the thesis concerns vocabulary- and task-independent confidence and rejection based on phonetic word models. These can be computed for words even when no training examples of those words have been seen. New techniques are developed using phoneme ranks instead of probabilities in each frame. These are shown to perform as well as the best other methods examined despite the data reduction involved. Certain new weighted averaging schemes are studied but found to give no performance benefit. Hierarchical averaging is shown to improve performance significantly: frame scores combine to make segment (phoneme state) scores, which combine to make phoneme scores, which combine to make word scores. Use of intermediate syllable scores is shown to not affect performance. Normalizing frame scores by an average of the top probabilities in each frame is shown to improve performance significantly. 
Perplexity of the wrong

  16. Anaesthetists' knowledge of the QT interval in a teaching hospital.

    PubMed

    Marshall, S D; Myles, P S

    2005-12-01

    Many drugs used in anaesthesia may prolong the QT interval of the electrocardiogram (ECG), and recent U.S. Food and Drug Administration guidelines mandate monitoring of the ECG before, during and after droperidol administration. We surveyed 41 trainee and consultant anaesthetists in our Department to determine current practice and knowledge of the QT interval to investigate if this is a practical proposition. A response rate of 98% (40/41) was obtained. The majority of respondents expressed moderate to high levels of confidence in interpreting the ECG, and this was related to years of training (rho 0.36, P=0.024). A total of 27 respondents (65%) were able to correctly identify the QT interval on a schematic representation of the ECG, trainees 70% vs consultants 60%, P=0.51. When asked to name drugs that altered the QT interval, droperidol was included by 11 of the 40 respondents (28%); trainees 10% vs consultants 45%, OR 7.4 (95% CI: 1.3-41), P=0.013. Torsades de Pointes was correctly identified as a possible consequence of a prolonged QT interval by 65% of trainees and 70% of consultants, P=0.83. The results suggest that QT interval measurement is not widely practised by anaesthetists, although its clinical significance is well known, and interpretation would be unreliable without further education.

  17. Modal confidence factor in vibration testing

    NASA Technical Reports Server (NTRS)

    Ibrahim, S. R.

    1978-01-01

    The modal confidence factor (MCF) is a number calculated for every identified mode for a structure under test. The MCF varies from 0.0 for a distorted, nonlinear, or noise mode to 100.0 for a pure structural mode. The theory of the MCF is based on the correlation that exists between the modal deflection at a certain station and the modal deflection at the same station delayed in time. The theory and application of the MCF are illustrated by two experiments. The first experiment deals with simulated responses from a two-degree-of-freedom system with 20%, 40%, and 100% noise added. The second experiment was run on a generalized payload model. The free decay response from the payload model contained 22% noise.

  18. Sample sizes for confidence limits for reliability.

    SciTech Connect

    Darby, John L.

    2010-02-01

    We recently performed an evaluation of the implications of a reduced stockpile of nuclear weapons for surveillance to support estimates of reliability. We found that one technique developed at Sandia National Laboratories (SNL) underestimates the required sample size for systems-level testing. For a large population the discrepancy is not important, but for a small population it is important. We found that another technique used by SNL provides the correct required sample size. For systems-level testing of nuclear weapons, samples are selected without replacement, and the hypergeometric probability distribution applies. Both of the SNL techniques focus on samples without defects from sampling without replacement. We generalized the second SNL technique to cases with defects in the sample. We created a computer program in Mathematica to automate the calculation of confidence for reliability. We also evaluated sampling with replacement where the binomial probability distribution applies.
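The zero-defect case of the hypergeometric calculation the report automates can be sketched directly. Given zero defects in a sample of n drawn without replacement from a population of N, the confidence that fewer than D defects exist is 1 - C(N-D, n)/C(N, n); the binomial (with-replacement) analogue is included for comparison (function names are illustrative):

```python
from math import comb

def confidence_from_zero_defects(N, n, D):
    """Confidence that a population of N contains fewer than D defects, given
    that a sample of n drawn without replacement showed zero defects
    (hypergeometric): C = 1 - C(N-D, n) / C(N, n)."""
    return 1.0 - comb(N - D, n) / comb(N, n)

def confidence_with_replacement(N, n, D):
    """Binomial analogue (sampling with replacement): C = 1 - (1 - D/N)**n."""
    return 1.0 - (1.0 - D / N) ** n
```

For small populations the hypergeometric confidence exceeds the binomial one at the same n, which is why applying the binomial model to a small stockpile overstates the required sample size.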

  19. TIME-INTERVAL MEASURING DEVICE

    DOEpatents

    Gross, J.E.

    1958-04-15

    An electronic device for measuring the time interval between two control pulses is presented. The device incorporates part of a previous approach for time measurement, in that pulses from a constant-frequency oscillator are counted during the interval between the control pulses. To reduce the possible error in counting caused by the operation of the counter gating circuit at various points in the pulse cycle, the described device provides means for successively delaying the pulses for a fraction of the pulse period so that a final delay of one period is obtained and means for counting the pulses before and after each stage of delay during the time interval whereby a plurality of totals is obtained which may be averaged and multiplied by the pulse period to obtain an accurate time-interval measurement.
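The successive-delay averaging scheme above can be modeled in a few lines. Hermite's identity, sum_{i=0}^{m-1} floor(x + i/m) = floor(m*x), is why averaging m gated counts taken at delays of i/m of a clock period resolves the interval to P/m rather than P. This is an idealized counter model with hypothetical parameters, not the patent's circuit:

```python
def measure_interval(T, P, m):
    """Average of m gated counts of a clock with period P over an interval T,
    the i-th count taken with the clock delayed by i*P/m. By Hermite's
    identity the averaged result equals floor(m*T/P) * P/m, i.e. T resolved
    to P/m instead of P."""
    counts = [int(T / P + i / m) for i in range(m)]  # int() floors positives
    return sum(counts) * P / m
```

With P = 1 and m = 10, an interval of 7.37 clock periods measures as 7.3 rather than the single-count result of 7.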

  20. Simple Interval Timers for Microcomputers.

    ERIC Educational Resources Information Center

    McInerney, M.; Burgess, G.

    1985-01-01

    Discusses simple interval timers for microcomputers, including (1) the Jiffy clock; (2) CPU count timers; (3) screen count timers; (4) light pen timers; and (5) chip timers. Also examines some of the general characteristics of all types of timers. (JN)

  1. The 2009 Retirement Confidence Survey: economy drives confidence to record lows; many looking to work longer.

    PubMed

    Helman, Ruth; Copeland, Craig; VanDerhei, Jack

    2009-04-01

    RECORD LOW CONFIDENCE LEVELS: Workers who say they are very confident about having enough money for a comfortable retirement this year hit the lowest level in 2009 (13 percent) since the Retirement Confidence Survey started asking the question in 1993, continuing a two-year decline. Retirees also posted a new low in confidence about having a financially secure retirement, with only 20 percent now saying they are very confident (down from 41 percent in 2007). THE ECONOMY, INFLATION, COST OF LIVING ARE THE BIG CONCERNS: Not surprisingly, workers overall who have lost confidence over the past year about affording a comfortable retirement most often cite the recent economic uncertainty, inflation, and the cost of living as primary factors. In addition, certain negative experiences, such as job loss or a pay cut, loss of retirement savings, or an increase in debt, almost always contribute to loss of confidence among those who experience them. RETIREMENT EXPECTATIONS DELAYED: Workers apparently expect to work longer because of the economic downturn: 28 percent of workers in the 2009 RCS say the age at which they expect to retire has changed in the past year. Of those, the vast majority (89 percent) say that they have postponed retirement with the intention of increasing their financial security. Nevertheless, the median (mid-point) worker expects to retire at age 65, with 21 percent planning to push on into their 70s. The median retiree actually retired at age 62, and 47 percent of retirees say they retired sooner than planned. WORKING IN RETIREMENT: More workers are also planning to supplement their income in retirement by working for pay. The percentage of workers planning to work after they retire has increased to 72 percent in 2009 (up from 66 percent in 2007). This compares with 34 percent of retirees who report they actually worked for pay at some time during their retirement. GREATER WORRY ABOUT BASIC AND HEALTH EXPENSES: Workers who say they are very confident in

  2. Intraclass Correlation Coefficients in Hierarchical Design Studies with Discrete Response Variables: A Note on a Direct Interval Estimation Procedure

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2015-01-01

    A latent variable modeling procedure that can be used to evaluate intraclass correlation coefficients in two-level settings with discrete response variables is discussed. The approach is readily applied when the purpose is to furnish confidence intervals at prespecified confidence levels for these coefficients in setups with binary or ordinal…

  3. Reasons to have confidence in condoms.

    PubMed

    Mcneill, E T

    1998-01-01

    When used regularly and correctly, latex condoms are highly reliable and effective in preventing pregnancy and sexually transmitted disease. However, condoms are not being used as much as they should be, mainly because of negative perceptions among both users and health care providers. The following reasons are presented and discussed as to why people should have more confidence in latex condoms: when used correctly, condoms are highly reliable and effective in preventing pregnancy and sexually transmitted disease; latex condoms provide an impermeable mechanical barrier to bacteria, viruses, and sperm; most users do not break condoms, and a proportion of breakage is preventable; modern condoms are manufactured with considerable precision; use of the proper lubricant improves condom use; condoms in intact foil packages last at least 5 years; and quality control and post-production quality assurance help to ensure the manufacture of a reliable product. While it remains to be determined how accurately the test standards predict results during human use, a combination of tests can provide data on the quality of condoms in the field.

  4. High-Confidence Quantum Gate Tomography

    NASA Astrophysics Data System (ADS)

    Johnson, Blake; da Silva, Marcus; Ryan, Colm; Kimmel, Shelby; Donovan, Brian; Ohki, Thomas

    2014-03-01

    Debugging and verification of high-fidelity quantum gates requires the development of new tools and protocols to unwrap the performance of the gate from the rest of the sequence. Randomized benchmarking tomography[2] allows one to extract full information of the unital portion of the gate with high confidence. We report experimental confirmation of the technique's applicability to quantum gate tomography. We show that the method is robust to common experimental imperfections such as imperfect single-shot readout and state preparation. We also demonstrate the ability to characterize non-Clifford gates. To assist in the experimental implementation we introduce two techniques. ``Atomic Cliffords'' use phase ramping and frame tracking to allow single-pulse implementation of the full group of single-qubit Clifford gates. Domain specific pulse sequencers allow rapid implementation of the many thousands of sequences needed. This research was funded by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), through the Army Research Office contract no. W911NF-10-1-0324.

  5. The 2012 Retirement Confidence Survey: job insecurity, debt weigh on retirement confidence, savings.

    PubMed

    Helman, Ruth; Copeland, Craig; VanDerhei, Jack

    2012-03-01

    Americans' confidence in their ability to retire comfortably is stagnant at historically low levels. Just 14 percent are very confident they will have enough money to live comfortably in retirement (statistically equivalent to the low of 13 percent measured in 2011 and 2009). Employment insecurity looms large: Forty-two percent identify job uncertainty as the most pressing financial issue facing most Americans today. Worker confidence about having enough money to pay for medical expenses and long-term care expenses in retirement remains well below their confidence levels for paying basic expenses. Many workers report they have virtually no savings and investments. In total, 60 percent of workers report that the total value of their household's savings and investments, excluding the value of their primary home and any defined benefit plans, is less than $25,000. Twenty-five percent of workers in the 2012 Retirement Confidence Survey say the age at which they expect to retire has changed in the past year. In 1991, 11 percent of workers said they expected to retire after age 65, and by 2012 that has grown to 37 percent. Regardless of those retirement age expectations, and consistent with prior RCS findings, half of current retirees surveyed say they left the work force unexpectedly due to health problems, disability, or changes at their employer, such as downsizing or closure. Those already in retirement tend to express higher levels of confidence than current workers about several key financial aspects of retirement. Retirees report they are significantly more reliant on Social Security as a major source of their retirement income than current workers expect to be. Although 56 percent of workers expect to receive benefits from a defined benefit plan in retirement, only 33 percent report that they and/or their spouse currently have such a benefit with a current or previous employer. 
More than half of workers (56 percent) report they and/or their spouse have not tried

  6. Assessing Undergraduate Students' Conceptual Understanding and Confidence of Electromagnetics

    ERIC Educational Resources Information Center

    Leppavirta, Johanna

    2012-01-01

    The study examines how students' conceptual understanding changes from high confidence with incorrect conceptions to high confidence with correct conceptions when reasoning about electromagnetics. The Conceptual Survey of Electricity and Magnetism test is weighted with students' self-rated confidence on each item in order to infer how strongly…

  7. 49 CFR 1103.23 - Confidences of a client.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 8 2010-10-01 2010-10-01 false Confidences of a client. 1103.23 Section 1103.23... Responsibilities Toward A Client § 1103.23 Confidences of a client. (a) The practitioner's duty to preserve his client's confidence outlasts the practitioner's employment by the client, and this duty extends to...

  8. 21 CFR 26.37 - Confidence building activities.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 1 2011-04-01 2011-04-01 false Confidence building activities. 26.37 Section 26... COMMUNITY Specific Sector Provisions for Medical Devices § 26.37 Confidence building activities. (a) At the beginning of the transitional period, the Joint Sectoral Group will establish a joint confidence...

  9. 49 CFR 1103.23 - Confidences of a client.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 8 2011-10-01 2011-10-01 false Confidences of a client. 1103.23 Section 1103.23... Responsibilities Toward A Client § 1103.23 Confidences of a client. (a) The practitioner's duty to preserve his client's confidence outlasts the practitioner's employment by the client, and this duty extends to...

  10. 7 CFR 97.18 - Applications handled in confidence.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Applications handled in confidence. 97.18 Section 97.18 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING... confidence. (a) Pending applications shall be handled in confidence. Except as provided below, no...

  11. 7 CFR 97.18 - Applications handled in confidence.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Applications handled in confidence. 97.18 Section 97.18 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING... confidence. (a) Pending applications shall be handled in confidence. Except as provided below, no...

  12. Contrasting Academic Behavioural Confidence in Mexican and European Psychology Students

    ERIC Educational Resources Information Center

    Ochoa, Alma Rosa Aguila; Sander, Paul

    2012-01-01

    Introduction: Research with the Academic Behavioural Confidence scale using European students has shown that students have high levels of confidence in their academic abilities. It is generally accepted that people in more collectivist cultures have more realistic confidence levels in contrast to the overconfidence seen in individualistic European…

  13. Does Consumer Confidence Measure Up to the Hype?

    ERIC Educational Resources Information Center

    Griffitts, Dawn

    2003-01-01

    This economic education publication features an article, "Does Consumer Confidence Measure Up to the Hype?," which defines consumer confidence and describes how it is measured. The article also explores why people might pay so much attention to consumer confidence indexes. The document also contains a question and answer section about deflation as…

  14. Subjective Probability Intervals: How to Reduce Overconfidence by Interval Evaluation

    ERIC Educational Resources Information Center

    Winman, Anders; Hansson, Patrik; Juslin, Peter

    2004-01-01

    Format dependence implies that assessment of the same subjective probability distribution produces different conclusions about over- or underconfidence depending on the assessment format. In 2 experiments, the authors demonstrate that the overconfidence bias that occurs when participants produce intervals for an uncertain quantity is almost…

  15. High resolution time interval meter

    DOEpatents

    Martin, A.D.

    1986-05-09

Method and apparatus are provided for measuring the time interval between two events to a higher resolution than is reliably available from conventional circuits and components. An internal clock pulse is provided at a frequency compatible with conventional component operating frequencies for reliable operation. Lumped constant delay circuits are provided for generating outputs at delay intervals corresponding to the desired high resolution. An initiation START pulse is input to generate first high resolution data. A termination STOP pulse is input to generate second high resolution data. Internal counters count at the low frequency internal clock pulse rate between the START and STOP pulses. The first and second high resolution data are logically combined to directly provide high resolution data to one counter and to correct the count in the low resolution counter, yielding a high resolution time interval measurement.

  16. Updating representations of temporal intervals.

    PubMed

    Danckert, James; Anderson, Britt

    2015-12-01

    Effectively engaging with the world depends on accurate representations of the regularities that make up that world-what we call mental models. The success of any mental model depends on the ability to adapt to changes-to 'update' the model. In prior work, we have shown that damage to the right hemisphere of the brain impairs the ability to update mental models across a range of tasks. Given the disparate nature of the tasks we have employed in this prior work (i.e. statistical learning, language acquisition, position priming, perceptual ambiguity, strategic game play), we propose that a cognitive module important for updating mental representations should be generic, in the sense that it is invoked across multiple cognitive and perceptual domains. To date, the majority of our tasks have been visual in nature. Given the ubiquity and import of temporal information in sensory experience, we examined the ability to build and update mental models of time. We had healthy individuals complete a temporal prediction task in which intervals were initially drawn from one temporal range before an unannounced switch to a different range of intervals. Separate groups had the second range of intervals switch to one that contained either longer or shorter intervals than the first range. Both groups showed significant positive correlations between perceptual and prediction accuracy. While each group updated mental models of temporal intervals, those exposed to shorter intervals did so more efficiently. Our results support the notion of generic capacity to update regularities in the environment-in this instance based on temporal information. The task developed here is well suited to investigations in neurological patients and in neuroimaging settings.

  18. On how the brain decodes vocal cues about speaker confidence.

    PubMed

    Jiang, Xiaoming; Pell, Marc D

    2015-05-01

In speech communication, listeners must accurately decode vocal cues that refer to the speaker's mental state, such as their confidence or 'feeling of knowing'. However, the time course and neural mechanisms associated with online inferences about speaker confidence are unclear. Here, we used event-related potentials (ERPs) to examine the temporal neural dynamics underlying a listener's ability to infer speaker confidence from vocal cues during speech processing. We recorded listeners' real-time brain responses while they evaluated statements wherein the speaker's tone of voice conveyed one of three levels of confidence (confident, close-to-confident, unconfident) or were spoken in a neutral manner. Neural responses time-locked to event onset show that the perceived level of speaker confidence could be differentiated at distinct time points during speech processing: unconfident expressions elicited a weaker P2 than all other expressions of confidence (or neutral-intending utterances), whereas close-to-confident expressions elicited a reduced negative response in the 330-500 msec and 550-740 msec time windows. Neutral-intending expressions, which were also perceived as relatively confident, elicited a more delayed, larger sustained positivity than all other expressions in the 980-1270 msec window for this task. These findings provide the first piece of evidence of how quickly the brain responds to vocal cues signifying the extent of a speaker's confidence during online speech comprehension; first, a rough dissociation between unconfident and confident voices occurs as early as 200 msec after speech onset. At a later stage, further differentiation of the exact level of speaker confidence (i.e., close-to-confident, very confident) is evaluated via an inferential system to determine the speaker's meaning under current task settings. These findings extend three-stage models of how vocal emotion cues are processed in speech comprehension (e.g., Schirmer & Kotz, 2006) by

  19. Oxygen uptake in maximal effort constant rate and interval running.

    PubMed

    Pratt, Daniel; O'Brien, Brendan J; Clark, Bradley

    2013-01-01

This study investigated differences in average VO2 between maximal effort interval running and maximal effort constant rate running at lactate threshold, matched for time. The average VO2 and distance covered of 10 recreational male runners (VO2max: 4158 ± 390 mL · min(-1)) were compared between a maximal effort constant-rate run at lactate threshold (CRLT), a maximal effort interval run (INT) consisting of 2 minutes at VO2max speed with 2 minutes at 50% of VO2 repeated 5 times, and a run at the average speed sustained during the interval run (CR sub-max). Data are presented as mean and 95% confidence intervals. The average VO2 for INT, 3451 (3269-3633) mL · min(-1), 83% VO2max, was not significantly different to CRLT, 3464 (3285-3643) mL · min(-1), 84% VO2max, but both were significantly higher than CR sub-max, 3464 (3285-3643) mL · min(-1), 76% VO2max. The distance covered was significantly greater in CRLT, 4431 (4202-3731) metres, compared to INT and CR sub-max, 4070 (3831-4309) metres. The novel finding was that a 20-minute maximal effort constant rate run uses similar amounts of oxygen as a 20-minute maximal effort interval run despite the greater distance covered in the maximal effort constant-rate run. PMID:24288501
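The record above reports group means with 95% confidence intervals. As a hedged illustration of that interval arithmetic (hypothetical numbers, not the study's data; its exact method is unstated, and a sample this small would properly use a t critical value rather than z), a normal-approximation CI for a sample mean can be sketched as:

```python
from statistics import NormalDist, mean, stdev

def ci95_mean(xs):
    """Normal-approximation 95% confidence interval for a sample mean.
    Sketch only: small samples would properly use a t critical value
    instead of z ~= 1.96."""
    m = mean(xs)
    se = stdev(xs) / len(xs) ** 0.5    # standard error of the mean
    z = NormalDist().inv_cdf(0.975)    # two-sided 95% -> z ~= 1.96
    return m - z * se, m + z * se

# Hypothetical VO2 readings in mL/min (not the study's data)
lo, hi = ci95_mean([3400, 3500, 3450, 3550, 3350])
```

The interval is symmetric about the sample mean, which is why the record can summarize each condition as "mean (lower-upper)".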

  20. Relating confidence to information uncertainty in qualitative reasoning

    SciTech Connect

    Chavez, Gregory M; Zerkle, David K; Key, Brian P; Shevitz, Daniel W

    2010-12-02

Qualitative reasoning makes use of qualitative assessments provided by subject matter experts to model factors such as security risk. Confidence in a result is important and useful when comparing competing security risk results. Quantifying the confidence in an evidential reasoning result must be consistent and based on the available information. A novel method is proposed to determine a qualitative measure of confidence in a qualitative reasoning result from the available information uncertainty in the result using membership values in the fuzzy sets of confidence. In this study information uncertainty is quantified through measures of non-specificity and conflict. Fuzzy values for confidence are established from information uncertainty values that lie between the measured minimum and maximum information uncertainty values. Measured values of information uncertainty in each result are used to obtain the confidence. The determined confidence values are used to compare competing scenarios and understand the influences on the desired result.

  1. Confidence through consensus: a neural mechanism for uncertainty monitoring.

    PubMed

    Paz, Luciano; Insabato, Andrea; Zylberberg, Ariel; Deco, Gustavo; Sigman, Mariano

    2016-02-24

Models that integrate sensory evidence to a threshold can explain task accuracy, response times and confidence, yet it is still unclear how confidence is encoded in the brain. Classic models assume that confidence is encoded in some form of balance between the evidence integrated in favor and against the selected option. However, recent experiments that measure the sensory evidence's influence on choice and confidence contradict these classic models. We propose that the decision is taken by many loosely coupled modules, each of which represents a stochastic sample of the sensory evidence integral. Confidence is then encoded in the dispersion between modules. We show that our proposal can account for the well established relations between confidence, stimulus discriminability, and reaction times, as well as the fluctuations' influence on choice and confidence.

  2. Meta-analysis to refine map position and reduce confidence intervals for delayed canopy wilting QTLs in soybean

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Slow canopy wilting in soybean has been identified as a potentially beneficial trait for ameliorating drought effects on yield. Previous research identified QTLs for slow wilting from two different bi-parental populations and this information was combined with data from three other populations to id...

  3. Standard Errors and Confidence Intervals from Bootstrapping for Ramsay-Curve Item Response Theory Model Item Parameters

    ERIC Educational Resources Information Center

    Gu, Fei; Skorupski, William P.; Hoyle, Larry; Kingston, Neal M.

    2011-01-01

    Ramsay-curve item response theory (RC-IRT) is a nonparametric procedure that estimates the latent trait using splines, and no distributional assumption about the latent trait is required. For item parameters of the two-parameter logistic (2-PL), three-parameter logistic (3-PL), and polytomous IRT models, RC-IRT can provide more accurate estimates…
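Record 3 concerns standard errors and confidence intervals obtained from bootstrapping. A minimal percentile-bootstrap sketch (a generic illustration, not the RC-IRT item-parameter procedure itself; function and parameter names are assumptions):

```python
import random
from statistics import mean

def bootstrap_ci(data, stat=mean, n_boot=5000, alpha=0.05, seed=0):
    """Percentile bootstrap: resample with replacement, recompute the
    statistic each time, and take empirical quantiles as the interval."""
    rng = random.Random(seed)  # fixed seed for a reproducible sketch
    boots = sorted(stat(rng.choices(data, k=len(data)))
                   for _ in range(n_boot))
    return (boots[int(n_boot * alpha / 2)],
            boots[int(n_boot * (1 - alpha / 2)) - 1])

# 95% percentile interval for the mean of a toy sample
lo, hi = bootstrap_ci(list(range(1, 21)))
```

The bootstrap standard error is simply the standard deviation of the resampled statistics; the percentile interval above avoids any normality assumption.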

  4. Confidence Intervals and "F" Tests for Intraclass Correlation Coefficients Based on Three-Way Mixed Effects Models

    ERIC Educational Resources Information Center

    Zhou, Hong; Muellerleile, Paige; Ingram, Debra; Wong, Seok P.

    2011-01-01

    Intraclass correlation coefficients (ICCs) are commonly used in behavioral measurement and psychometrics when a researcher is interested in the relationship among variables of a common class. The formulas for deriving ICCs, or generalizability coefficients, vary depending on which models are specified. This article gives the equations for…

  5. Factorial Based Response Surface Modeling with Confidence Intervals for Optimizing Thermal Optical Transmission Analysis of Atmospheric Black Carbon

    EPA Science Inventory

    We demonstrate how thermal-optical transmission analysis (TOT) for refractory light-absorbing carbon in atmospheric particulate matter was optimized with empirical response surface modeling. TOT employs pyrolysis to distinguish the mass of black carbon (BC) from organic carbon (...

  6. A Direct Method for Obtaining Approximate Standard Error and Confidence Interval of Maximal Reliability for Composites with Congeneric Measures

    ERIC Educational Resources Information Center

    Raykov, Tenko; Penev, Spiridon

    2006-01-01

    Unlike a substantial part of reliability literature in the past, this article is concerned with weighted combinations of a given set of congeneric measures with uncorrelated errors. The relationship between maximal coefficient alpha and maximal reliability for such composites is initially dealt with, and it is shown that the former is a lower…

  7. Approximate Confidence Intervals for Moment-Based Estimators of the Between-Study Variance in Random Effects Meta-Analysis

    ERIC Educational Resources Information Center

    Jackson, Dan; Bowden, Jack; Baker, Rose

    2015-01-01

    Moment-based estimators of the between-study variance are very popular when performing random effects meta-analyses. This type of estimation has many advantages including computational and conceptual simplicity. Furthermore, by using these estimators in large samples, valid meta-analyses can be performed without the assumption that the treatment…
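Record 7 concerns moment-based estimators of the between-study variance in random effects meta-analysis. One widely used moment estimator is DerSimonian-Laird; a sketch follows (the paper may analyze other moment estimators, and this toy data is hypothetical):

```python
def dersimonian_laird_tau2(effects, variances):
    """DerSimonian-Laird moment estimator of the between-study
    variance tau^2: tau^2 = max(0, (Q - (k-1)) / C), where Q is
    Cochran's heterogeneity statistic under fixed-effect weights."""
    w = [1 / v for v in variances]              # inverse-variance weights
    sw = sum(w)
    ybar = sum(wi * y for wi, y in zip(w, effects)) / sw
    q = sum(wi * (y - ybar) ** 2 for wi, y in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    return max(0.0, (q - (len(effects) - 1)) / c)

# Perfectly homogeneous studies give tau^2 = 0
tau2 = dersimonian_laird_tau2([0.5, 0.5, 0.5, 0.5], [0.1] * 4)
```

Truncation at zero is what makes the estimator simple but also what complicates exact confidence intervals for tau^2, motivating the approximate intervals the record describes.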

  8. Robust Coefficients Alpha and Omega and Confidence Intervals with Outlying Observations and Missing Data: Methods and Software

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Yuan, Ke-Hai

    2016-01-01

    Cronbach's coefficient alpha is a widely used reliability measure in social, behavioral, and education sciences. It is reported in nearly every study that involves measuring a construct through multiple items. With non-tau-equivalent items, McDonald's omega has been used as a popular alternative to alpha in the literature. Traditional estimation…

  9. Curriculum-Based Measurement of Oral Reading: A Preliminary Investigation of Confidence Interval Overlap to Detect Reliable Growth

    ERIC Educational Resources Information Center

    Van Norman, Ethan R.

    2016-01-01

    Curriculum-based measurement of oral reading (CBM-R) progress monitoring data is used to measure student response to instruction. Federal legislation permits educators to use CBM-R progress monitoring data as a basis for determining the presence of specific learning disabilities. However, decision making frameworks originally developed for CBM-R…

  10. Guide for Calculating and Interpreting Effect Sizes and Confidence Intervals in Intellectual and Developmental Disability Research Studies

    ERIC Educational Resources Information Center

    Dunst, Carl J.; Hamby, Deborah W.

    2012-01-01

    This paper includes a nontechnical description of methods for calculating effect sizes in intellectual and developmental disability studies. Different hypothetical studies are used to illustrate how null hypothesis significance testing (NHST) and effect size findings can result in quite different outcomes and therefore conflicting results. Whereas…

  11. 42 CFR 431.972 - Claims sampling procedures.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... (CONTINUED) MEDICAL ASSISTANCE PROGRAMS STATE ORGANIZATION AND GENERAL ADMINISTRATION Requirements for...-level error rate within a 3 percent precision level at 95 percent confidence interval for the claims... cycles following the base year: (i) CMS considers the error rate from the State's previous PERM cycle...
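The precision requirement in record 11 (a 3 percent precision level at a 95 percent confidence interval) corresponds to a standard sample-size calculation for estimating a proportion; a normal-approximation sketch (worst-case p = 0.5, an illustration rather than the PERM rule itself):

```python
import math
from statistics import NormalDist

def sample_size(precision, confidence=0.95, p=0.5):
    """Smallest n so a proportion estimate falls within +/- precision
    at the given confidence, by the normal approximation:
    n = z^2 * p * (1 - p) / d^2, rounded up. p = 0.5 is worst case."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return math.ceil(z ** 2 * p * (1 - p) / precision ** 2)

n = sample_size(0.03)  # +/- 3% at 95% confidence
```

Tightening precision from 5% to 3% roughly triples the required sample, since n scales with 1/d^2.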

  12. Physician-led, hospital-linked, birth care centers can decrease Cesarean section rates without increasing rates of adverse events

    PubMed Central

    O’Hara, Margaret H.; Frazier, Linda M.; Stembridge, Travis W.; McKay, Robert S.; Mohr, Sandra N.; Shalat, Stuart L.

    2015-01-01

BACKGROUND This study compares outcomes at a hospital-linked, physician-led, birthing center to a traditional hospital labor and delivery service. METHODS Using de-identified electronic medical records, a retrospective cohort design was employed to evaluate 32,174 singleton births during 1998–2005. RESULTS Compared to hospital service, birth care center delivery was associated with a lower rate of cesarean sections (adjusted Relative Risk=0.73, 95 percent confidence interval 0.59–0.91; p<0.001) without an increased rate of operative vaginal delivery (adjusted Relative Risk=1.04, 95 percent confidence interval 0.97–1.13; p=0.25) and a higher initiation of breast feeding (adjusted Relative Risk=1.28, 95 percent confidence interval 1.25–1.30; p<0.001). A maternal length of stay greater than 72 hours occurred less frequently in the birth care center (adjusted Relative Risk=0.60, 95 percent confidence interval 0.55–0.66; p<0.001). Comparing only women without major obstetrical risk factors, the differences in outcomes were reduced but not eliminated. Adverse maternal and infant outcomes were not increased at the birth care center. CONCLUSION A hospital-linked, physician-led, birth care center has the potential to lower rates of cesarean sections without increasing rates of operative vaginal delivery or other adverse maternal and infant outcomes. PMID:24635500
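The adjusted relative risks above come from cohort data. As a hedged sketch of the unadjusted calculation behind such figures (hypothetical counts, not the study's 32,174 births), a relative risk with its log-scale 95% CI:

```python
import math

def rr_ci(a, n1, c, n2, z=1.96):
    """Relative risk and 95% CI on the log scale.
    Exposed group: a events out of n1; unexposed: c events out of n2.
    SE(log RR) = sqrt(1/a - 1/n1 + 1/c - 1/n2)."""
    rr = (a / n1) / (c / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
    return (rr,
            math.exp(math.log(rr) - z * se),
            math.exp(math.log(rr) + z * se))

# Hypothetical: 30/300 cesareans at the center vs 60/400 in hospital
rr, lo, hi = rr_ci(30, 300, 60, 400)
```

The interval is computed on the log scale and exponentiated back, which is why published CIs around a relative risk are asymmetric about the point estimate.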

  13. 40 CFR 60.107a - Monitoring of emissions and operations for fuel gas combustion devices and flares.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... produced in the hydrogen plant, catalytic reforming unit, isomerization unit, and HF alkylation process... Association Standard 2377-86, Test for Hydrogen Sulfide and Carbon Dioxide in Natural Gas Using Length of... the 95-percent confidence interval for the distribution of daily ratios based on the 10...

  14. 40 CFR 60.107a - Monitoring of emissions and operations for fuel gas combustion devices and flares.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... produced in the hydrogen plant, catalytic reforming unit, isomerization unit, and HF alkylation process... Association Standard 2377-86, Test for Hydrogen Sulfide and Carbon Dioxide in Natural Gas Using Length of... the 95-percent confidence interval for the distribution of daily ratios based on the 10...

  15. Influences of the Tamarisk Leaf Beetle (Diorhabda carinulata) on the diet of insectivorous birds along the Dolores River in Southwestern Colorado

    USGS Publications Warehouse

    Puckett, Sarah L.; van Riper, Charles

    2014-01-01

We examined the effects of a biologic control agent, the tamarisk leaf beetle (Diorhabda carinulata), on native avifauna in southwestern Colorado, specifically, addressing whether and to what degree birds eat tamarisk leaf beetles. In 2010, we documented avian foraging behavior, characterized the arthropod community, sampled bird diets, and undertook an experiment to determine whether tamarisk leaf beetles are palatable to birds. We observed that tamarisk leaf beetles compose 24.0 percent (95-percent-confidence interval, 19.9-27.4 percent) and 35.4 percent (95-percent-confidence interval, 32.4-45.1 percent) of arthropod abundance and biomass in the study area, respectively. Birds ate few tamarisk leaf beetles, despite a superabundance of D. carinulata in the environment. The frequency of occurrence of tamarisk leaf beetles in bird diets was 2.1 percent (95-percent-confidence interval, 1.3-2.9 percent) by abundance and 3.4 percent (95-percent-confidence interval, 2.6-4.2 percent) by biomass. Thus, tamarisk leaf beetles probably do not contribute significantly to the diets of birds in areas where biologic control of tamarisk is being applied.
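The percent-composition figures above are proportions with 95-percent confidence intervals. One common way to interval a proportion is the Wilson score method; a sketch with hypothetical counts (the report's exact interval method is unstated):

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% CI for a proportion k/n. Better behaved than
    the simple normal interval for proportions near 0 or 1."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# Hypothetical: 24 beetle occurrences in 100 arthropod samples
lo, hi = wilson_ci(24, 100)
```

Like the log-scale risk interval, the Wilson interval is not symmetric about k/n, matching the asymmetric ranges quoted in the record.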

  16. Anomalous Evidence, Confidence Change, and Theory Change.

    PubMed

    Hemmerich, Joshua A; Van Voorhis, Kellie; Wiley, Jennifer

    2016-08-01

    A novel experimental paradigm that measured theory change and confidence in participants' theories was used in three experiments to test the effects of anomalous evidence. Experiment 1 varied the amount of anomalous evidence to see if "dose size" made incremental changes in confidence toward theory change. Experiment 2 varied whether anomalous evidence was convergent (of multiple types) or replicating (similar finding repeated). Experiment 3 varied whether participants were provided with an alternative theory that explained the anomalous evidence. All experiments showed that participants' confidence changes were commensurate with the amount of anomalous evidence presented, and that larger decreases in confidence predicted theory changes. Convergent evidence and the presentation of an alternative theory led to larger confidence change. Convergent evidence also caused more theory changes. Even when people do not change theories, factors pertinent to the evidence and alternative theories decrease their confidence in their current theory and move them incrementally closer to theory change.

  17. Reflex Project: Using Model-Data Fusion to Characterize Confidence in Analyses and Forecasts of Terrestrial C Dynamics

    NASA Astrophysics Data System (ADS)

    Fox, A. M.; Williams, M.; Richardson, A.; Cameron, D.; Gove, J. H.; Ricciuto, D. M.; Tomalleri, E.; Trudinger, C.; van Wijk, M.; Quaife, T.; Li, Z.

    2008-12-01

    The Regional Flux Estimation Experiment, REFLEX, is a model-data fusion inter-comparison project, aimed at comparing the strengths and weaknesses of various model-data fusion techniques for estimating carbon model parameters and predicting carbon fluxes and states. The key question addressed here is: what are the confidence intervals on (a) model parameters calibrated from eddy covariance (EC) and leaf area index (LAI) data and (b) on model analyses and predictions of net ecosystem C exchange (NEE) and carbon stocks? The experiment has an explicit focus on how different algorithms and protocols quantify the confidence intervals on parameter estimates and model forecasts, given the same model and data. Nine participants contributed results using Metropolis algorithms, Kalman filters and a genetic algorithm. Both observed daily NEE data from FluxNet sites and synthetic NEE data, generated by a model, were used to estimate the parameters and states of a simple C dynamics model. The results of the analyses supported the hypothesis that parameters linked to fast-response processes that mostly determine net ecosystem exchange of CO2 (NEE) were well constrained and well characterised. Parameters associated with turnover of wood and allocation to roots, only indirectly related to NEE, were poorly characterised. There was only weak agreement on estimations of uncertainty on NEE and its components, photosynthesis and ecosystem respiration, with some algorithms successfully locating the true values of these fluxes from synthetic experiments within relatively narrow 90% confidence intervals. This exercise has demonstrated that a range of techniques exist that can generate useful estimates of parameter probability density functions for C models from eddy covariance time series data. When these parameter PDFs are propagated to generate estimates of annual C fluxes there was a wide variation in size of the 90% confidence intervals. However, some algorithms were able to make

  18. Confidence to cook vegetables and the buying habits of Australian households.

    PubMed

    Winkler, Elisabeth; Turrell, Gavin

    2009-10-01

    Cooking skills are emphasized in nutrition promotion but their distribution among population subgroups and relationship to dietary behavior is researched by few population-based studies. This study examined the relationships between confidence to cook, sociodemographic characteristics, and household vegetable purchasing. This cross-sectional study of 426 randomly selected households in Brisbane, Australia, used a validated questionnaire to assess household vegetable purchasing habits and the confidence to cook of the person who most often prepares food for these households. The mutually adjusted odds ratios (ORs) of lacking confidence to cook were assessed across a range of demographic subgroups using multiple logistic regression models. Similarly, mutually adjusted mean vegetable purchasing scores were calculated using multiple linear regression for different population groups and for respondents with varying confidence levels. Lacking confidence to cook using a variety of techniques was more common among respondents with less education (OR 3.30; 95% confidence interval [CI] 1.01 to 10.75) and was less common among respondents who lived with minors (OR 0.22; 95% CI 0.09 to 0.53) and other adults (OR 0.43; 95% CI 0.24 to 0.78). Lack of confidence to prepare vegetables was associated with being male (OR 2.25; 95% CI 1.24 to 4.08), low education (OR 6.60; 95% CI 2.08 to 20.91), lower household income (OR 2.98; 95% CI 1.02 to 8.72) and living with other adults (OR 0.53; 95% CI 0.29 to 0.98). Households bought a greater variety of vegetables on a regular basis when the main chef was confident to prepare them (difference: 18.60; 95% CI 14.66 to 22.54), older (difference: 8.69; 95% CI 4.92 to 12.47), lived with at least one other adult (difference: 5.47; 95% CI 2.82 to 8.12) or at least one minor (difference: 2.86; 95% CI 0.17 to 5.55). Cooking skills may contribute to socioeconomic dietary differences, and may be a useful strategy for promoting fruit and vegetable
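The odds ratios and 95% CIs above follow from logistic-regression coefficients: OR = exp(beta), with interval exp(beta ± 1.96·SE(beta)). A sketch of that back-transformation (the standard error here is back-calculated for illustration, not reported by the paper):

```python
import math

def or_ci(beta, se, z=1.96):
    """Odds ratio and 95% CI from a logistic-regression coefficient
    beta and its standard error, exponentiated off the log-odds scale."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Illustrative: beta = ln(3.30) with an assumed SE of 0.603 roughly
# reproduces the paper's education OR of 3.30 (1.01 to 10.75)
odds_ratio, lo, hi = or_ci(math.log(3.30), 0.603)
```

A wide interval like 1.01 to 10.75 signals a large SE on the log-odds scale, typical of a small subgroup.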

  19. Confidence measurement in the light of signal detection theory.

    PubMed

    Massoni, Sébastien; Gajdos, Thibault; Vergnaud, Jean-Christophe

    2014-01-01

    We compare three alternative methods for eliciting retrospective confidence in the context of a simple perceptual task: the Simple Confidence Rating (a direct report on a numerical scale), the Quadratic Scoring Rule (a post-wagering procedure), and the Matching Probability (MP; a generalization of the no-loss gambling method). We systematically compare the results obtained with these three rules to the theoretical confidence levels that can be inferred from performance in the perceptual task using Signal Detection Theory (SDT). We find that the MP provides better results in that respect. We conclude that MP is particularly well suited for studies of confidence that use SDT as a theoretical framework.
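Record 19 infers theoretical confidence levels from Signal Detection Theory. SDT's core sensitivity measure, d', is the difference of the z-transformed hit and false-alarm rates; a minimal sketch (not the authors' full confidence analysis):

```python
from statistics import NormalDist

def dprime(hit_rate, fa_rate):
    """SDT sensitivity: d' = z(H) - z(FA), where z is the inverse
    standard normal CDF. Rates of exactly 0 or 1 would need a
    correction before use (not handled in this sketch)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# H = 0.69146 and FA = 0.30854 correspond to z = +0.5 and -0.5
d = dprime(0.69146, 0.30854)
```

Confidence predictions can then be derived from how far a trial's internal evidence falls from the decision criterion, which is the comparison the record's authors make against elicited confidence reports.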

  20. Cortical alpha activity predicts the confidence in an impending action

    PubMed Central

    Kubanek, Jan; Hill, N. Jeremy; Snyder, Lawrence H.; Schalk, Gerwin

    2015-01-01

    When we make a decision, we experience a degree of confidence that our choice may lead to a desirable outcome. Recent studies in animals have probed the subjective aspects of the choice confidence using confidence-reporting tasks. These studies showed that estimates of the choice confidence substantially modulate neural activity in multiple regions of the brain. Building on these findings, we investigated the neural representation of the confidence in a choice in humans who explicitly reported the confidence in their choice. Subjects performed a perceptual decision task in which they decided between choosing a button press or a saccade while we recorded EEG activity. Following each choice, subjects indicated whether they were sure or unsure about the choice. We found that alpha activity strongly encodes a subject's confidence level in a forthcoming button press choice. The neural effect of the subjects' confidence was independent of the reaction time and independent of the sensory input modeled as a decision variable. Furthermore, the effect is not due to a general cognitive state, such as reward expectation, because the effect was specifically observed during button press choices and not during saccade choices. The neural effect of the confidence in the ensuing button press choice was strong enough that we could predict, from independent single trial neural signals, whether a subject was going to be sure or unsure of an ensuing button press choice. In sum, alpha activity in human cortex provides a window into the commitment to make a hand movement. PMID:26283892

  2. Relating confidence to measured information uncertainty in qualitative reasoning

    SciTech Connect

    Chavez, Gregory M; Zerkle, David K; Key, Brian P; Shevitz, Daniel W

    2010-10-07

    Qualitative reasoning makes use of qualitative assessments provided by subject matter experts to model factors such as security risk. Confidence in a result is important and useful when comparing competing results. Quantifying the confidence in an evidential reasoning result must be consistent and based on the available information. A novel method is proposed to relate confidence to the available information uncertainty in the result using fuzzy sets. Information uncertainty can be quantified through measures of non-specificity and conflict. Fuzzy values for confidence are established from information uncertainty values that lie between the measured minimum and maximum information uncertainty values.

  3. The antecedents and belief-polarized effects of thought confidence.

    PubMed

    Chou, Hsuan-Yi; Lien, Nai-Hwa; Liang, Kuan-Yu

    2011-01-01

    This article investigates 2 possible antecedents of thought confidence and explores the effects of confidence induced before or during ad exposure. The results of the experiments indicate that both consumers' dispositional optimism and spokesperson attractiveness have significant effects on consumers' confidence in thoughts that are generated after viewing the advertisement. Higher levels of thought confidence will influence the quality of the thoughts that people generate, lead to either positively or negatively polarized message processing, and therefore induce better or worse advertising effectiveness, depending on the valence of thoughts. The authors posit the belief-polarization hypothesis to explain these findings. PMID:21902013

  4. Prolonged corrected QT interval is predictive of future stroke events even in subjects without ECG-diagnosed left ventricular hypertrophy.

    PubMed

    Ishikawa, Joji; Ishikawa, Shizukiyo; Kario, Kazuomi

    2015-03-01

    We attempted to evaluate whether subjects who exhibit prolonged corrected QT (QTc) interval (≥440 ms in men and ≥460 ms in women) on ECG, with and without ECG-diagnosed left ventricular hypertrophy (ECG-LVH; Cornell product, ≥244 mV×ms), are at increased risk of stroke. Among the 10 643 subjects, there were a total of 375 stroke events during the follow-up period (128.7±28.1 months; 114 142 person-years). The subjects with prolonged QTc interval (hazard ratio, 2.13; 95% confidence interval, 1.22-3.73) had an increased risk of stroke even after adjustment for ECG-LVH (hazard ratio, 1.71; 95% confidence interval, 1.22-2.40). When we stratified the subjects into those with neither a prolonged QTc interval nor ECG-LVH, those with a prolonged QTc interval but without ECG-LVH, and those with ECG-LVH, multivariate-adjusted Cox proportional hazards analysis demonstrated that the subjects with prolonged QTc intervals but not ECG-LVH (1.2% of all subjects; incidence, 10.7%; hazard ratio, 2.70, 95% confidence interval, 1.48-4.94) and those with ECG-LVH (incidence, 7.9%; hazard ratio, 1.83; 95% confidence interval, 1.31-2.57) had an increased risk of stroke events, compared with those with neither a prolonged QTc interval nor ECG-LVH. In conclusion, prolonged QTc interval was associated with stroke risk even among patients without ECG-LVH in the general population.
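The hazard ratios above are reported with 95% confidence intervals on the ratio scale. Because such intervals are symmetric on the log scale, the approximate standard error and z-statistic can be recovered from the published endpoints alone. A minimal sketch (the function name and the use of the 1.96 normal quantile are assumptions, not from the paper):

```python
import math

def z_from_hr_ci(hr, lower, upper, z_crit=1.96):
    """Recover the approximate z-statistic of a hazard ratio from its
    reported 95% CI.  Assumes the CI is symmetric on the log scale, so
    log(upper) - log(lower) spans 2 * z_crit standard errors."""
    se = (math.log(upper) - math.log(lower)) / (2 * z_crit)
    return math.log(hr) / se

# Prolonged QTc adjusted for ECG-LVH: HR 1.71 (95% CI, 1.22-2.40)
z = z_from_hr_ci(1.71, 1.22, 2.40)
```

For this record the recovered z is a little above 3, consistent with the interval excluding 1 by a comfortable margin.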

  5. An Event Restriction Interval Theory of Tense

    ERIC Educational Resources Information Center

    Beamer, Brandon Robert

    2012-01-01

    This dissertation presents a novel theory of tense and tense-like constructions. It is named after a key theoretical component of the theory, the event restriction interval. In Event Restriction Interval (ERI) Theory, sentences are semantically evaluated relative to an index which contains two key intervals, the evaluation interval and the event…

  6. Using the Delta Method for Approximate Interval Estimation of Parameter Functions in SEM

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2004-01-01

    In applications of structural equation modeling, it is often desirable to obtain measures of uncertainty for special functions of model parameters. This article provides a didactic discussion of how a method widely used in applied statistics can be employed for approximate standard error and confidence interval evaluation of such functions. The…

  7. Meta-analytic interval estimation for standardized and unstandardized mean differences.

    PubMed

    Bonett, Douglas G

    2009-09-01

    The fixed-effects (FE) meta-analytic confidence intervals for unstandardized and standardized mean differences are based on an unrealistic assumption of effect-size homogeneity and perform poorly when this assumption is violated. The random-effects (RE) meta-analytic confidence intervals are based on an unrealistic assumption that the selected studies represent a random sample from a large superpopulation of studies. The RE approach cannot be justified in typical meta-analysis applications in which studies are nonrandomly selected. New FE meta-analytic confidence intervals for unstandardized and standardized mean differences are proposed that are easy to compute and perform properly under effect-size heterogeneity and nonrandomly selected studies. The proposed meta-analytic confidence intervals may be used to combine unstandardized or standardized mean differences from studies having either independent samples or dependent samples and may also be used to integrate results from previous studies into a new study. An alternative approach to assessing effect-size heterogeneity is presented.
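Bonett's proposed intervals are not reproduced here, but the standard fixed-effects inverse-variance interval that the abstract critiques can be sketched in a few lines (the study effects and variances below are illustrative values, not data from the paper):

```python
import math

def fixed_effects_ci(effects, variances, z_crit=1.96):
    """Classic inverse-variance fixed-effects pooled estimate with a
    95% CI.  Assumes effect-size homogeneity -- exactly the assumption
    the abstract argues is unrealistic in practice."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - z_crit * se, pooled + z_crit * se)

# Three hypothetical mean differences with their sampling variances
pooled, (lo, hi) = fixed_effects_ci([0.40, 0.55, 0.25], [0.04, 0.09, 0.02])
```

Under heterogeneity this interval is too narrow, which is the performance problem motivating the alternatives proposed in the paper.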

  8. Interval Estimation of Standardized Mean Differences in Paired-Samples Designs

    ERIC Educational Resources Information Center

    Bonett, Douglas G.

    2015-01-01

    Paired-samples designs are used frequently in educational and behavioral research. In applications where the response variable is quantitative, researchers are encouraged to supplement the results of a paired-samples t-test with a confidence interval (CI) for a mean difference or a standardized mean difference. Six CIs for standardized mean…

  9. Cancer mortality in workers exposed to 2,3,7,8-tetrachlorodibenzo-p-dioxin

    SciTech Connect

    Fingerhut, M.A.; Halperin, W.E.; Marlow, D.A.; Piacitelli, L.A.; Honchar, P.A.; Sweeney, M.H.; Greife, A.L.; Dill, P.A.; Steenland, K.; Suruda, A.J.

    1991-01-24

    In both animal and epidemiologic studies, exposure to dioxin (2,3,7,8-tetrachlorodibenzo-p-dioxin, or TCDD) has been associated with an increased risk of cancer. We conducted a retrospective cohort study of mortality among the 5172 workers at 12 plants in the United States that produced chemicals contaminated with TCDD. Occupational exposure was documented by reviewing job descriptions and by measuring TCDD in serum from a sample of 253 workers. Causes of death were taken from death certificates. Mortality from several cancers previously associated with TCDD (stomach, liver, and nasal cancers, Hodgkin's disease, and non-Hodgkin's lymphoma) was not significantly elevated in this cohort. Mortality from soft-tissue sarcoma was increased, but not significantly (4 deaths; standardized mortality ratio (SMR), 338; 95 percent confidence interval, 92 to 865). In the subcohort of 1520 workers with greater than or equal to 1 year of exposure and greater than or equal to 20 years of latency, however, mortality was significantly increased for soft-tissue sarcoma (3 deaths; SMR, 922; 95 percent confidence interval, 190 to 2695) and for cancers of the respiratory system (SMR, 142; 95 percent confidence interval, 103 to 192). Mortality from all cancers combined was slightly but significantly elevated in the overall cohort (SMR, 115; 95 percent confidence interval, 102 to 130) and was higher in the subcohort with greater than or equal to 1 year of exposure and greater than or equal to 20 years of latency (SMR, 146; 95 percent confidence interval, 121 to 176). This study of mortality among workers with occupational exposure to TCDD does not confirm the high relative risks reported for many cancers in previous studies. Conclusions about an increase in the risk of soft-tissue sarcoma are limited by small numbers and misclassification on death certificates.
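The SMRs in this record carry exact Poisson confidence intervals on the observed death counts. Byar's approximation reproduces such intervals closely using only the observed count, the expected count, and the normal quantile; a sketch (the function name is an assumption, and the paper itself reports exact rather than approximate limits):

```python
import math

def byar_smr_ci(observed, expected, z=1.96):
    """Approximate 95% CI for an SMR (x100) via Byar's formula for a
    Poisson count.  Close to the exact interval even for small counts."""
    o = observed
    lower = o * (1 - 1 / (9 * o) - z / (3 * math.sqrt(o))) ** 3
    op1 = o + 1
    upper = op1 * (1 - 1 / (9 * op1) + z / (3 * math.sqrt(op1))) ** 3
    return 100 * lower / expected, 100 * upper / expected

# 4 soft-tissue sarcoma deaths with SMR 338 implies ~4/3.38 expected
lo, hi = byar_smr_ci(4, 4 / 3.38)
```

The result lands within about one unit of the paper's exact interval of 92 to 865, illustrating why wide intervals from only 4 deaths limit the conclusions, as the abstract notes.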

  10. Chaotic dynamics from interspike intervals.

    PubMed

    Pavlov, A N; Sosnovtseva, O V; Mosekilde, E; Anishchenko, V S

    2001-03-01

    Considering two different mathematical models describing chaotic spiking phenomena, namely, an integrate-and-fire and a threshold-crossing model, we discuss the problem of extracting dynamics from interspike intervals (ISIs) and show that the possibilities of computing the largest Lyapunov exponent (LE) from point processes differ between the two models. We also consider the problem of estimating the second LE and the possibility to diagnose hyperchaotic behavior by processing spike trains. Since the second exponent is quite sensitive to the structure of the ISI series, we investigate the problem of its computation. PMID:11308739

  11. Chaotic dynamics from interspike intervals

    NASA Astrophysics Data System (ADS)

    Pavlov, Alexey N.; Sosnovtseva, Olga V.; Mosekilde, Erik; Anishchenko, Vadim S.

    2001-03-01

    Considering two different mathematical models describing chaotic spiking phenomena, namely, an integrate-and-fire and a threshold-crossing model, we discuss the problem of extracting dynamics from interspike intervals (ISIs) and show that the possibilities of computing the largest Lyapunov exponent (LE) from point processes differ between the two models. We also consider the problem of estimating the second LE and the possibility to diagnose hyperchaotic behavior by processing spike trains. Since the second exponent is quite sensitive to the structure of the ISI series, we investigate the problem of its computation.

  12. Nearly 95 Percent of Low-Income Uninsured Children Now Are Eligible for Medicaid or SCHIP: Measures Need To Increase Enrollment among Eligible but Uninsured Children.

    ERIC Educational Resources Information Center

    Broaddus, Matthew; Ku, Leighton

    Recent expansions in Medicaid coverage for children and state health insurance programs for children mean that the majority of low-income children in the United States now are eligible for health insurance. A new analysis of Census data, presented in this report, finds that 94% of all uninsured children with family incomes below twice the poverty…

  13. Information and Communication: Tools for Increasing Confidence in the Schools.

    ERIC Educational Resources Information Center

    Achilles, C. M.; Lintz, M. N.

    Beginning with a review of signs and signals of public attitudes toward American education over the last 15 years, this paper analyzes some concerns regarding public confidence in public schools. Following a brief introduction, issues involved in the definition and behavioral attributes of confidence are mentioned. A synopsis of three approaches…

  14. Confidence and memory: assessing positive and negative correlations.

    PubMed

    Roediger, Henry L; DeSoto, K Andrew

    2014-01-01

    The capacity to learn and remember surely evolved to help animals solve problems in their quest to reproduce and survive. In humans we assume that metacognitive processes also evolved, so that we know when to trust what we remember (i.e., when we have high confidence in our memories) and when not to (when we have low confidence). However, this latter feature has been questioned by researchers, with some finding a high correlation between confidence and accuracy in reports from memory and others finding little to no correlation. In two experiments we report a recognition memory paradigm that, using the same materials (categorised lists), permits the study of positive correlations, zero correlations, and negative correlations between confidence and accuracy within the same procedure. We had subjects study words from semantic categories, with the five items most frequently produced in norms omitted from the list; later, subjects were given an old/new recognition test and made confidence ratings on their judgements. Although the correlation between confidence and accuracy for studied items was generally positive, the correlation for the five omitted items was negative in some methods of analysis. We pinpoint the similarity between lures and targets as creating inversions between confidence and accuracy in memory. We argue that, while confidence is generally a useful indicant of accuracy in reports from memory, in certain environmental circumstances even adaptive processes can foster illusions of memory. Thus understanding memory illusions is similar to understanding perceptual illusions: processes that are usually adaptive can go awry under certain circumstances.

  15. The Metamemory Approach to Confidence: A Test Using Semantic Memory

    ERIC Educational Resources Information Center

    Brewer, William F.; Sampaio, Cristina

    2012-01-01

    The metamemory approach to memory confidence was extended and elaborated to deal with semantic memory tasks. The metamemory approach assumes that memory confidence is based on the products and processes of a completed memory task, as well as metamemory beliefs that individuals have about how their memory products and processes relate to memory…

  16. True and False Memories, Parietal Cortex, and Confidence Judgments

    ERIC Educational Resources Information Center

    Urgolites, Zhisen J.; Smith, Christine N.; Squire, Larry R.

    2015-01-01

    Recent studies have asked whether activity in the medial temporal lobe (MTL) and the neocortex can distinguish true memory from false memory. A frequent complication has been that the confidence associated with correct memory judgments (true memory) is typically higher than the confidence associated with incorrect memory judgments (false memory).…

  17. Confidence Sharing in the Vocational Counselling Interview: Emergence and Repercussions

    ERIC Educational Resources Information Center

    Olry-Louis, Isabelle; Bremond, Capucine; Pouliot, Manon

    2012-01-01

    Confidence sharing is an asymmetrical dialogic episode to which both parties consent, in which one reveals something personal to the other who participates in the emergence and unfolding of the confidence. We describe how this is achieved at a discursive level within vocational counselling interviews. Based on a corpus of 64 interviews, we analyse…

  18. Utilitarian Model of Measuring Confidence within Knowledge-Based Societies

    ERIC Educational Resources Information Center

    Jack, Brady Michael; Hung, Kuan-Ming; Liu, Chia Ju; Chiu, Houn Lin

    2009-01-01

    This paper introduces a utilitarian confidence testing statistic called Risk Inclination Model (RIM) which indexes all possible confidence wagering combinations within the confines of a defined symmetrically point-balanced test environment. This paper presents the theoretical underpinnings, a formal derivation, a hypothetical application, and…

  19. Confidence Scoring of Speaking Performance: How Does Fuzziness become Exact?

    ERIC Educational Resources Information Center

    Jin, Tan; Mak, Barley; Zhou, Pei

    2012-01-01

    The fuzziness of assessing second language speaking performance raises two difficulties in scoring speaking performance: "indistinction between adjacent levels" and "overlap between scales". To address these two problems, this article proposes a new approach, "confidence scoring", to deal with such fuzziness, leading to "confidence" scores between…

  20. Confidence vs. Authority: Visions of the Writer in Rhetorical Theory.

    ERIC Educational Resources Information Center

    Perdue, Virginia

    By building up the confidence of student writers, writing teachers hope to reduce the hostility and anxiety so often found in authoritarian introductory college composition classes. Process oriented writing theory implicitly defines confidence as a wholly personal quality resulting from students' discovery that they do have "something to say" to…

  1. A Rasch Analysis of the Teachers Music Confidence Scale

    ERIC Educational Resources Information Center

    Yim, Hoi Yin Bonnie; Abd-El-Fattah, Sabry; Lee, Lai Wan Maria

    2007-01-01

    This article presents a new measure of teachers' confidence to conduct musical activities with young children: the Teachers Music Confidence Scale (TMCS). The TMCS was developed using a sample of 284 in-service and pre-service early childhood teachers in Hong Kong Special Administrative Region (HKSAR). The TMCS consisted of 10 musical activities.…

  2. Music Education Preservice Teachers' Confidence in Resolving Behavior Problems

    ERIC Educational Resources Information Center

    Hedden, Debra G.

    2015-01-01

    The purpose of this study was to investigate whether there would be a change in preservice teachers' (a) confidence concerning the resolution of behavior problems, (b) tactics for resolving them, (c) anticipation of problems, (d) fears about management issues, and (e) confidence in methodology and pedagogy over the time period of a one-semester…

  3. The Self-Consistency Model of Subjective Confidence

    ERIC Educational Resources Information Center

    Koriat, Asher

    2012-01-01

    How do people monitor the correctness of their answers? A self-consistency model is proposed for the process underlying confidence judgments and their accuracy. In answering a 2-alternative question, participants are assumed to retrieve a sample of representations of the question and base their confidence on the consistency with which the chosen…

  4. Prospective Teachers' Problem Solving Skills and Self-Confidence Levels

    ERIC Educational Resources Information Center

    Gursen Otacioglu, Sena

    2008-01-01

    The basic objective of the research is to determine whether the education that prospective teachers in different fields receive is related to their levels of problem solving skills and self-confidence. Within the mentioned framework, the prospective teachers' problem solving and self-confidence levels have been examined under several variables.…

  5. A (revised) confidence index for the forecasting of meteor showers

    NASA Astrophysics Data System (ADS)

    Vaubaillon, J.

    2016-01-01

    A confidence index for the forecasting of meteor showers is presented. The goal is to provide users with information regarding how the forecasting is performed, so that several degrees of confidence can be distinguished. This paper presents the meaning of the index coding system.

  6. RIASEC Interest and Confidence Cutoff Scores: Implications for Career Counseling

    ERIC Educational Resources Information Center

    Bonitz, Verena S.; Armstrong, Patrick Ian; Larson, Lisa M.

    2010-01-01

    One strategy commonly used to simplify the joint interpretation of interest and confidence inventories is the use of cutoff scores to classify individuals dichotomously as having high or low levels of confidence and interest, respectively. The present study examined the adequacy of cutoff scores currently recommended for the joint interpretation…

  7. Modeling Confidence and Response Time in Recognition Memory

    ERIC Educational Resources Information Center

    Ratcliff, Roger; Starns, Jeffrey J.

    2009-01-01

    A new model for confidence judgments in recognition memory is presented. In the model, the match between a single test item and memory produces a distribution of evidence, with better matches corresponding to distributions with higher means. On this match dimension, confidence criteria are placed, and the areas between the criteria under the…

  8. Confidence and Gender Differences on the Mental Rotations Test

    ERIC Educational Resources Information Center

    Cooke-Simpson, Amanda; Voyer, Daniel

    2007-01-01

    The present study examined the relation between self-reported confidence ratings, performance on the Mental Rotations Test (MRT), and guessing behavior on the MRT. Eighty undergraduate students (40 males, 40 females) completed the MRT while rating their confidence in the accuracy of their answers for each item. As expected, gender differences in…

  9. Understanding public confidence in government to prevent terrorist attacks.

    SciTech Connect

    Baldwin, T. E.; Ramaprasad, A,; Samsa, M. E.; Decision and Information Sciences; Univ. of Illinois at Chicago

    2008-04-02

    A primary goal of terrorism is to instill a sense of fear and vulnerability in a population and to erode its confidence in government and law enforcement agencies to protect citizens against future attacks. In recognition of its importance, the Department of Homeland Security includes public confidence as one of the principal metrics used to assess the consequences of terrorist attacks. Hence, a detailed understanding of the variations in public confidence among individuals, terrorist event types, and as a function of time is critical to developing this metric. In this exploratory study, a questionnaire was designed, tested, and administered to small groups of individuals to measure public confidence in the ability of federal, state, and local governments and their public safety agencies to prevent acts of terrorism. Data was collected from three groups before and after they watched mock television news broadcasts portraying a smallpox attack, a series of suicide bomber attacks, a refinery explosion attack, and cyber intrusions on financial institutions, resulting in identity theft. Our findings are: (a) although the aggregate confidence level is low, there are optimists and pessimists; (b) the subjects are discriminating in interpreting the nature of a terrorist attack, the time horizon, and its impact; (c) confidence recovery after a terrorist event has an incubation period; and (d) the patterns of recovery of confidence of the optimists and the pessimists are different. These findings can affect the strategy and policies to manage public confidence after a terrorist event.

  10. Confidence set inference with a prior quadratic bound. [in geophysics

    NASA Technical Reports Server (NTRS)

    Backus, George E.

    1989-01-01

    Neyman's (1937) theory of confidence sets is developed as a replacement for Bayesian inference (BI) and stochastic inversion (SI) when the prior information is a hard quadratic bound. It is recommended that BI and SI be replaced by confidence set inference (CSI) only in certain circumstances. The geomagnetic problem is used to illustrate the general theory of CSI.

  11. High resolution time interval counter

    DOEpatents

    Condreva, Kenneth J.

    1994-01-01

    A high resolution counter circuit measures the time interval between the occurrence of an initial and a subsequent electrical pulse to two nanoseconds resolution using an eight megahertz clock. The circuit includes a main counter for receiving electrical pulses and generating a binary word--a measure of the number of eight megahertz clock pulses occurring between the signals. A pair of first and second pulse stretchers receive the signal and generate a pair of output signals whose widths are approximately sixty-four times the time between the receipt of the signals by the respective pulse stretchers and the receipt by the respective pulse stretchers of a second subsequent clock pulse. Output signals are thereafter supplied to a pair of start and stop counters operable to generate a pair of binary output words representative of the measure of the width of the pulses to a resolution of two nanoseconds. Errors associated with the pulse stretchers are corrected by providing calibration data to both stretcher circuits, and recording start and stop counter values. Stretched initial and subsequent signals are combined with autocalibration data and supplied to an arithmetic logic unit to determine the time interval in nanoseconds between the pair of electrical pulses being measured.

  12. High resolution time interval counter

    DOEpatents

    Condreva, K.J.

    1994-07-26

    A high resolution counter circuit measures the time interval between the occurrence of an initial and a subsequent electrical pulse to two nanoseconds resolution using an eight megahertz clock. The circuit includes a main counter for receiving electrical pulses and generating a binary word--a measure of the number of eight megahertz clock pulses occurring between the signals. A pair of first and second pulse stretchers receive the signal and generate a pair of output signals whose widths are approximately sixty-four times the time between the receipt of the signals by the respective pulse stretchers and the receipt by the respective pulse stretchers of a second subsequent clock pulse. Output signals are thereafter supplied to a pair of start and stop counters operable to generate a pair of binary output words representative of the measure of the width of the pulses to a resolution of two nanoseconds. Errors associated with the pulse stretchers are corrected by providing calibration data to both stretcher circuits, and recording start and stop counter values. Stretched initial and subsequent signals are combined with autocalibration data and supplied to an arithmetic logic unit to determine the time interval in nanoseconds between the pair of electrical pulses being measured. 3 figs.
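The interpolation arithmetic described in these two patent records can be modeled in a few lines. This is a simplified sketch: the 8 MHz clock and the roughly 64x pulse stretching come from the abstracts, while the function signature and the counter sign conventions are assumptions for illustration.

```python
def interval_ns(main_count, start_stretch_count, stop_stretch_count,
                f_clk_hz=8e6, stretch=64):
    """Simplified interpolating-counter arithmetic.  The main counter
    tallies whole 125 ns clock periods; each pulse stretcher expands
    the sub-clock residue ~64x so an ordinary counter can digitize it,
    giving 125 / 64 ~= 2 ns resolution.  Which residue is added and
    which subtracted is an illustrative assumption."""
    period_ns = 1e9 / f_clk_hz          # 125 ns per clock tick
    fine_ns = period_ns / stretch       # ~1.95 ns per stretcher count
    return (main_count * period_ns
            + (start_stretch_count - stop_stretch_count) * fine_ns)

# Equal start/stop residues cancel: 3 whole periods -> 375 ns
t = interval_ns(3, 10, 10)
```

The patent's autocalibration step corresponds to correcting the effective stretch factor per channel before this arithmetic is applied.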

  13. Postexercise Hypotension After Continuous, Aerobic Interval, and Sprint Interval Exercise.

    PubMed

    Angadi, Siddhartha S; Bhammar, Dharini M; Gaesser, Glenn A

    2015-10-01

    We examined the effects of 3 exercise bouts, differing markedly in intensity, on postexercise hypotension (PEH). Eleven young adults (age: 24.6 ± 3.7 years) completed 4 randomly assigned experimental conditions: (a) control, (b) 30-minute steady-state exercise (SSE) at 75-80% maximum heart rate (HRmax), (c) aerobic interval exercise (AIE): four 4-minute bouts at 90-95% HRmax, separated by 3 minutes of active recovery, and (d) sprint interval exercise (SIE): six 30-second Wingate sprints, separated by 4 minutes of active recovery. Exercise was performed on a cycle ergometer. Blood pressure (BP) was measured before exercise and every 15 minutes postexercise for 3 hours. Linear mixed models were used to compare BP between trials. During the 3-hour postexercise, systolic BP (SBP) was lower (p < 0.001) after AIE (118 ± 10 mm Hg), SSE (121 ± 10 mm Hg), and SIE (121 ± 11 mm Hg) compared with control (124 ± 8 mm Hg). Diastolic BP (DBP) was also lower (p < 0.001) after AIE (66 ± 7 mm Hg), SSE (69 ± 6 mm Hg), and SIE (68 ± 8 mm Hg) compared with control (71 ± 7 mm Hg). Only AIE resulted in sustained (>2 hours) PEH, with SBP (120 ± 9 mm Hg) and DBP (68 ± 7 mm Hg) during the third-hour postexercise being lower (p ≤ 0.05) than control (124 ± 8 and 70 ± 7 mm Hg). Although all exercise bouts produced similar reductions in BP at 1-hour postexercise, the duration of PEH was greatest after AIE.

  14. Can nursing students' confidence levels increase with repeated simulation activities?

    PubMed

    Cummings, Cynthia L; Connelly, Linda K

    2016-01-01

    In 2014, nursing faculty conducted a study with undergraduate nursing students on their satisfaction, confidence, and educational practice levels, as related to simulation activities throughout the curriculum. The study was a voluntary survey conducted on junior and senior year nursing students. It consisted of 30 items based on the Student Satisfaction and Self-Confidence in Learning and the Educational Practices Questionnaire (Jeffries, 2012). Means were obtained for each of the 30 items from both groups and were compared using t-scores for unpaired means. The results showed that 8 of the items reached the 95% confidence level, and when combined the items were significant at p < .001. The items identified were those related to self-confidence and active learning. Based on these findings, it can be assumed that repeated simulation experiences can lead to an increase in student confidence and active learning. PMID:26599594

  16. Linking learning and confidence in developing expert practice.

    PubMed

    Currie, Kay

    2008-01-01

    This paper presents findings from a recent PhD grounded theory study exploring the practice development role of graduate specialist practitioners. A key finding within this theory is the influence of learning and confidence on the practitioner journey. The concept of confidence emerged repeatedly throughout the analysis and can be characterized as a motivational driver, a consequence of learning and gaining respect, and a condition for graduate specialist practitioners' moving on to impact in practice development. Analysis of the concept of confidence as it influences practice is limited in existing literature. This article seeks to address this gap by illustrating the centrality of learning and confidence in the development of expert specialist practices. It is anticipated that these findings will resonate with the experiences of clinicians and faculty internationally and heightened awareness of consequences of developing confidence can be utilized to strengthen the impact of a wide range of nursing programs.

  17. Development of a core confidence-higher order construct.

    PubMed

    Stajkovic, Alexander D

    2006-11-01

    The author develops core confidence as a higher order construct and suggests that a core confidence-higher order construct--not addressed by extant work motivation theories--is helpful in better understanding employee motivation in today's rapidly changing organizations. Drawing from psychology (social, clinical, and developmental) and social anthropology, the author develops propositions regarding the relationships between core confidence and performance, attitudes, and subjective well-being. The core confidence-higher order construct is proposed to be manifested by hope, self-efficacy, optimism, and resilience. The author reasons that these four variables share a common confidence core (a higher order construct) and may be considered as its manifestations. Suggestions for future research and implications of the work are discussed. PMID:17100479

  18. Orders on Intervals Over Partially Ordered Sets: Extending Allen's Algebra and Interval Graph Results

    SciTech Connect

    Zapata, Francisco; Kreinovich, Vladik; Joslyn, Cliff A.; Hogan, Emilie A.

    2013-08-01

    To make a decision, we need to compare the values of quantities. In many practical situations, we know the values only with interval uncertainty; in such situations, we need to compare intervals. Allen’s algebra describes all possible relations between intervals on the real line, and ordering relations between such intervals are well studied. In this paper, we extend this description to intervals in an arbitrary partially ordered set (poset). In particular, we explicitly describe ordering relations between intervals that generalize the relations between points. As auxiliary results, we provide a logical interpretation of relations between intervals, and extend the results about interval graphs to intervals over posets.
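For intervals on the real line, the Allen relations mentioned in the abstract can be classified directly from endpoint comparisons; extending this to posets, where endpoints may be incomparable, is the paper's contribution. A minimal real-line sketch (the function and the relation names follow the usual Allen terminology, but the details are assumptions, not code from the paper):

```python
def allen_relation(x, y):
    """Classify the Allen relation between two proper real intervals
    x = (a, b) and y = (c, d), with a < b and c < d.  Seven base
    relations are returned by name; the remaining six configurations
    are converses of a base relation."""
    a, b = x
    c, d = y
    if b < c:
        return "before"
    if b == c:
        return "meets"
    if a < c < b < d:
        return "overlaps"
    if a == c and b < d:
        return "starts"
    if c < a and b < d:
        return "during"
    if c < a and b == d:
        return "finishes"
    if a == c and b == d:
        return "equal"
    # every other configuration is the converse of a base relation
    return allen_relation(y, x) + "-inverse"
```

On a general poset this endpoint case analysis breaks down precisely where endpoints are incomparable, which is why the generalized ordering relations require separate treatment.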

  19. The Confidence Factor: Some Results of the Phi Delta Kappa (PDK) Commission on Public Confidence in Education. A Research Report.

    ERIC Educational Resources Information Center

    Wayson, W. W.; And Others

    This study sought to determine characteristics of schools and districts that enjoy the public's strong confidence and to explore how these characteristics are created and retained. A screening procedure produced useable data from 181 "high-confidence" public schools, 30 private schools, and 45 school districts. As part of a preliminary pilot…

  20. Playing with confidence: the relationship between imagery use and self-confidence and self-efficacy in youth soccer players.

    PubMed

    Munroe-Chandler, Krista; Hall, Craig; Fishburne, Graham

    2008-12-01

    Confidence has been one of the most consistent factors in distinguishing the successful from the unsuccessful athletes (Gould, Weiss, & Weinberg, 1981) and Bandura (1997) proposed that imagery is one way to enhance confidence. Therefore, the purpose of the present study was to examine the relationship between imagery use and confidence in soccer (football) players. The participants included 122 male and female soccer athletes ages 11-14 years participating in both house/recreation (n = 72) and travel/competitive (n = 50) levels. Athletes completed three questionnaires; one measuring the frequency of imagery use, one assessing generalised self-confidence, and one assessing self-efficacy in soccer. A series of regression analyses found that Motivational General-Mastery (MG-M) imagery was a significant predictor of self-confidence and self-efficacy in both recreational and competitive youth soccer players. More specifically, MG-M imagery accounted for between 40 and 57% of the variance for both self-confidence and self-efficacy, with two other functions (MG-A and MS) contributing marginally in the self-confidence regression for recreational athletes. These findings suggest that if a youth athlete, regardless of competitive level, wants to increase his/her self-confidence or self-efficacy through the use of imagery, the MG-M function should be emphasised. PMID:18949659

  1. What Are Confidence Judgments Made of? Students' Explanations for Their Confidence Ratings and What that Means for Calibration

    ERIC Educational Resources Information Center

    Dinsmore, Daniel L.; Parkinson, Meghan M.

    2013-01-01

    Although calibration has been widely studied, questions remain about how best to capture confidence ratings, how to calculate continuous variable calibration indices, and on what exactly students base their reported confidence ratings. Undergraduates in a research methods class completed a prior knowledge assessment, two sets of readings and…

  2. Pigeons' Choices between Fixed-Interval and Random-Interval Schedules: Utility of Variability?

    ERIC Educational Resources Information Center

    Andrzejewski, Matthew E.; Cardinal, Claudia D.; Field, Douglas P.; Flannery, Barbara A.; Johnson, Michael; Bailey, Kathleen; Hineline, Philip N.

    2005-01-01

    Pigeons' choosing between fixed-interval and random-interval schedules of reinforcement was investigated in three experiments using a discrete-trial procedure. In all three experiments, the random-interval schedule was generated by sampling a probability distribution at an interval (and in multiples of the interval) equal to that of the…

  3. Intuitive Feelings of Warmth and Confidence in Insight and Noninsight Problem Solving of Magic Tricks

    PubMed Central

    Hedne, Mikael R.; Norman, Elisabeth; Metcalfe, Janet

    2016-01-01

    The focus of the current study is on intuitive feelings of insight during problem solving and the extent to which such feelings are predictive of successful problem solving. We report the results from an experiment (N = 51) that applied a procedure where the to-be-solved problems were 32 short (15 s) video recordings of magic tricks. The procedure included metacognitive ratings similar to the “warmth ratings” previously used by Metcalfe and colleagues, as well as confidence ratings. At regular intervals during problem solving, participants indicated the perceived closeness to the correct solution. Participants also indicated directly whether each problem was solved by insight or not. Problems that people claimed were solved by insight were characterized by higher accuracy and higher confidence than noninsight solutions. There was no difference between the two types of solution in warmth ratings, however. Confidence ratings were more strongly associated with solution accuracy for noninsight than insight trials. Moreover, for insight trials the participants were more likely to repeat their incorrect solutions on a subsequent recognition test. The results have implications for understanding people's metacognitive awareness of the cognitive processes involved in problem solving. They also have general implications for our understanding of how intuition and insight are related. PMID:27630598

  7. Notes on interval estimation of the generalized odds ratio under stratified random sampling.

    PubMed

    Lui, Kung-Jong; Chang, Kuang-Chao

    2013-05-01

    It is not rare to encounter patient responses on an ordinal scale in a randomized clinical trial (RCT). Under the assumption that the generalized odds ratio (GOR) is homogeneous across strata, we consider four asymptotic interval estimators for the GOR under stratified random sampling. These include the interval estimator using the weighted-least-squares (WLS) approach with the logarithmic transformation (WLSL), the interval estimator using the Mantel-Haenszel (MH) type of estimator with the logarithmic transformation (MHL), the interval estimator using Fieller's theorem with the MH weights (FTMH), and the interval estimator using Fieller's theorem with the WLS weights (FTWLS). We employ Monte Carlo simulation to evaluate the performance of these interval estimators by calculating the coverage probability and the average length. To study the bias of these interval estimators, we also calculate and compare the noncoverage probabilities in the two tails of the resulting confidence intervals. We find that WLSL and MHL generally perform well, while FTMH and FTWLS can lose either precision or accuracy. We further find that MHL is likely the least biased. Finally, we use data taken from a study of smoking status and breathing test results among workers in certain industrial plants in Houston, Texas, during 1974 to 1975 to illustrate the use of these interval estimators.
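
    The GOR point estimate itself is simple: the ratio of concordant to discordant pairs between two ordinal samples. The sketch below is a minimal single-stratum illustration with a percentile-bootstrap interval on the log scale, not the paper's WLSL/MHL/Fieller estimators; all names and data are hypothetical.

```python
import math, random

def gor(x, y):
    """Generalized odds ratio for two ordinal samples: P(X > Y) / P(Y > X)."""
    conc = sum(xi > yj for xi in x for yj in y)
    disc = sum(yj > xi for xi in x for yj in y)
    return conc / disc

def gor_ci(x, y, alpha=0.05, n_boot=2000, seed=0):
    """Percentile-bootstrap CI for the GOR, computed on the log scale."""
    rng = random.Random(seed)
    logs = []
    for _ in range(n_boot):
        bx = [rng.choice(x) for _ in x]
        by = [rng.choice(y) for _ in y]
        c = sum(xi > yj for xi in bx for yj in by)
        d = sum(yj > xi for xi in bx for yj in by)
        if c and d:                       # skip degenerate resamples
            logs.append(math.log(c / d))
    logs.sort()
    lo = logs[int(alpha / 2 * len(logs))]
    hi = logs[int((1 - alpha / 2) * len(logs)) - 1]
    return math.exp(lo), math.exp(hi)
```

    Working on the log scale before exponentiating mirrors why the log-transformed estimators (WLSL, MHL) in the study behave well: the sampling distribution of log(GOR) is closer to normal than that of the GOR itself.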

  8. Microsatellite Instability Status of Interval Colorectal Cancers in a Korean Population

    PubMed Central

    Lee, Kil Woo; Park, Soo-Kyung; Yang, Hyo-Joon; Jung, Yoon Suk; Choi, Kyu Yong; Kim, Kyung Eun; Jung, Kyung Uk; Kim, Hyung Ook; Kim, Hungdai; Chun, Ho-Kyung; Park, Dong Il

    2016-01-01

    Background/Aims A subset of patients may develop colorectal cancer after a colonoscopy that is negative for malignancy. These missed or de novo lesions are referred to as interval cancers. The aim of this study was to determine whether interval colon cancers are more likely to result from the loss of function of mismatch repair genes than sporadic cancers and to demonstrate microsatellite instability (MSI). Methods Interval cancer was defined as a cancer that was diagnosed within 5 years of a negative colonoscopy. Among the patients who underwent an operation for colorectal cancer from January 2013 to December 2014, archived cancer specimens were evaluated for MSI by sequencing microsatellite loci. Results Of the 286 colon cancers diagnosed during the study period, 25 (8.7%) represented interval cancer. MSI was found in eight of the 25 patients (32%) that presented interval cancers compared with 22 of the 261 patients (8.4%) that presented sporadic cancers (p=0.002). In the multivariable logistic regression model, MSI was associated with interval cancer (OR, 3.91; 95% confidence interval, 1.38 to 11.05). Conclusions Interval cancers were approximately four times more likely to show high MSI than sporadic cancers. Our findings indicate that certain interval cancers may occur because of distinct biological features. PMID:27114419
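
    The abstract's raw counts (8 of 25 interval cancers MSI-high vs. 22 of 261 sporadic cancers) allow a quick unadjusted check with a Woolf-type confidence interval on the log odds ratio. This is a crude two-by-two calculation, not the paper's multivariable estimate (adjusted OR 3.91), so the numbers differ.

```python
import math

# 2x2 counts from the abstract: MSI-high vs. MSS, interval vs. sporadic cancers
a, b = 8, 25 - 8        # interval cancers: MSI-high, MSS
c, d = 22, 261 - 22     # sporadic cancers: MSI-high, MSS

or_hat = (a * d) / (b * c)             # unadjusted odds ratio
se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # Woolf standard error of log(OR)
lo = math.exp(math.log(or_hat) - 1.96 * se)
hi = math.exp(math.log(or_hat) + 1.96 * se)
print(f"OR = {or_hat:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")  # → OR = 5.11, 95% CI (1.98, 13.18)
```

    The unadjusted estimate is larger than the adjusted one, as expected when covariates absorb part of the association, but both intervals exclude 1.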

  9. Stochasticity and the limits to confidence when estimating R0 of Ebola and other emerging infectious diseases.

    PubMed

    Taylor, Bradford P; Dushoff, Jonathan; Weitz, Joshua S

    2016-11-01

    Dynamic models - often deterministic in nature - were used to estimate the basic reproductive number, R0, of the 2014-2015 Ebola virus disease (EVD) epidemic outbreak in West Africa. Estimates of R0 were then used to project the likelihood of large outbreak sizes, e.g., exceeding hundreds of thousands of cases. Yet fitting deterministic models can lead to over-confidence in the confidence intervals of the fitted R0 and, in turn, in the type and scope of necessary interventions. In this manuscript we propose a hybrid stochastic-deterministic method to estimate R0 and associated confidence intervals (CIs). The core idea is that stochastic realizations of an underlying deterministic model can be used to evaluate the compatibility of candidate values of R0 with observed epidemic curves. The compatibility is based on comparing the distribution of expected epidemic growth rates with the observed epidemic growth rate given "process noise", i.e., noise arising due to stochastic transmission, recovery and death events. By applying our method to reported EVD case counts from Guinea, Liberia and Sierra Leone, we show that prior estimates of R0 based on deterministic fits appear to be more confident than analysis of stochastic trajectories suggests is warranted. Moving forward, we recommend including process noise among the other sources of noise considered when estimating the CIs of R0 for emerging epidemics. Our hybrid procedure represents an adaptable and easy-to-implement approach for such estimation. PMID:27524644
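
    The core compatibility idea can be sketched in a few lines: simulate stochastic epidemic realizations at a candidate R0 and ask whether the observed growth rate falls inside the central 95% of the simulated growth rates. All model choices and parameter values below (discrete-time SIR, gamma, N, I0) are illustrative assumptions, not the paper's calibrated model.

```python
import math, random

def simulate_incidence(R0, gamma=0.1, N=2000, I0=20, days=30, rng=None):
    """One stochastic realization of a discrete-time SIR model; returns daily new cases."""
    rng = rng or random.Random()
    beta = R0 * gamma
    S, I, cases = N - I0, I0, []
    for _ in range(days):
        p_inf = 1 - math.exp(-beta * I / N)   # per-susceptible daily infection probability
        new = sum(rng.random() < p_inf for _ in range(S))
        rec = sum(rng.random() < 1 - math.exp(-gamma) for _ in range(I))
        S, I = S - new, I + new - rec
        cases.append(new)
    return cases

def growth_rate(cases):
    """Least-squares slope of log(daily cases) against time."""
    pts = [(t, math.log(c)) for t, c in enumerate(cases) if c > 0]
    mt = sum(t for t, _ in pts) / len(pts)
    ml = sum(l for _, l in pts) / len(pts)
    return sum((t - mt) * (l - ml) for t, l in pts) / sum((t - mt) ** 2 for t, _ in pts)

def compatible(r_obs, R0, n_sim=100, seed=0):
    """Is r_obs inside the central 95% of simulated growth rates at this candidate R0?"""
    rng = random.Random(seed)
    rates = sorted(growth_rate(simulate_incidence(R0, rng=rng)) for _ in range(n_sim))
    return rates[int(0.025 * n_sim)] <= r_obs <= rates[int(0.975 * n_sim) - 1]
```

    Scanning a grid of candidate R0 values and keeping those judged compatible yields a CI that reflects process noise, which is the effect the deterministic fits in the abstract are said to ignore.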

  10. The QT Interval and Risk of Incident Atrial Fibrillation

    PubMed Central

    Mandyam, Mala C.; Soliman, Elsayed Z.; Alonso, Alvaro; Dewland, Thomas A.; Heckbert, Susan R.; Vittinghoff, Eric; Cummings, Steven R.; Ellinor, Patrick T.; Chaitman, Bernard R.; Stocke, Karen; Applegate, William B.; Arking, Dan E.; Butler, Javed; Loehr, Laura R.; Magnani, Jared W.; Murphy, Rachel A.; Satterfield, Suzanne; Newman, Anne B.; Marcus, Gregory M.

    2013-01-01

    BACKGROUND Abnormal atrial repolarization is important in the development of atrial fibrillation (AF), but no direct measurement is available in clinical medicine. OBJECTIVE To determine whether the QT interval, a marker of ventricular repolarization, could be used to predict incident AF. METHODS We examined a prolonged QT corrected by the Framingham formula (QTFram) as a predictor of incident AF in the Atherosclerosis Risk in Communities (ARIC) study. The Cardiovascular Health Study (CHS) and Health, Aging, and Body Composition (Health ABC) study were used for validation. Secondary predictors included QT duration as a continuous variable, a short QT interval, and QT intervals corrected by other formulae. RESULTS Among 14,538 ARIC participants, a prolonged QTFram predicted a roughly two-fold increased risk of AF (hazard ratio [HR] 2.05, 95% confidence interval [CI] 1.42–2.96, p<0.001). No substantive attenuation was observed after adjustment for age, race, sex, study center, body mass index, hypertension, diabetes, coronary disease, and heart failure. The findings were validated in CHS and Health ABC and were similar across various QT correction methods. Also in ARIC, each 10-ms increase in QTFram was associated with an increased unadjusted (HR 1.14, 95%CI 1.10–1.17, p<0.001) and adjusted (HR 1.11, 95%CI 1.07–1.14, p<0.001) risk of AF. Findings regarding a short QT were inconsistent across cohorts. CONCLUSIONS A prolonged QT interval is associated with an increased risk of incident AF. PMID:23872693

  11. Confidence Region Estimation for Groundwater Parameter Identification Problems

    NASA Astrophysics Data System (ADS)

    Vugrin, K. W.; Swiler, L. P.; Roberts, R. M.

    2007-12-01

    This presentation focuses on different methods to generate confidence regions for nonlinear parameter identification problems. Three methods for confidence region estimation are considered: a linear approximation method, an F-test method, and a log-likelihood method. Each of these methods is applied to three case studies. One case study is a problem with synthetic data, and the other two case studies identify hydraulic parameters in groundwater flow problems based on experimental well-test results. The confidence regions for each case study are analyzed and compared. All three methods produce similar and reasonable confidence regions for the case study using synthetic data. The linear approximation method grossly overestimates the confidence region for the first groundwater parameter identification case study. The F-test and log-likelihood methods result in similar reasonable regions for this test case. For the second groundwater parameter identification case study, the linear approximation method produces a confidence region of reasonable size. In this test case, the F-test and log-likelihood methods generate disjoint confidence regions of reasonable size. The differing results, capabilities, and drawbacks of all three methods are discussed. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000. This research is funded by WIPP programs administered by the Office of Environmental Management (EM) of the U.S. Department of Energy.

  12. Disconnections Between Teacher Expectations and Student Confidence in Bioethics

    NASA Astrophysics Data System (ADS)

    Hanegan, Nikki L.; Price, Laura; Peterson, Jeremy

    2008-09-01

    This study examines how student practice of scientific argumentation using socioscientific bioethics issues affects both teacher expectations of students’ general performance and student confidence in their own work. When teachers use bioethical issues in the classroom, students can gain not only biology content knowledge but also important decision-making skills. Learning bioethics through scientific argumentation gives students opportunities to express their ideas, formulate educated opinions, and value others’ viewpoints. Research has shown that science teachers’ expectations of student success and knowledge directly influence student achievement and confidence levels. Our study analyzes pre-course and post-course surveys completed by students enrolled in a university-level bioethics course (n = 111) and by faculty in the College of Biology and Agriculture (n = 34) based on their perceptions of student confidence. Additionally, student data were collected from classroom observations and interviews. Data analysis showed a disconnect between faculty and student perceptions of confidence in both knowledge and the use of scientific argumentation. Student reports of their confidence levels regarding various bioethical issues were higher than faculty reports. A further disconnect appeared between students’ preferred learning styles and the faculty’s common teaching methods; students learned more by practicing scientific argumentation than by listening to traditional lectures. Students who completed a bioethics course that included practice in scientific argumentation significantly increased their confidence levels. This study suggests that professors’ expectations and teaching styles influence student confidence levels in both knowledge and scientific argumentation.

  13. A Poisson process approximation for generalized K-5 confidence regions

    NASA Technical Reports Server (NTRS)

    Arsham, H.; Miller, D. R.

    1982-01-01

    One-sided confidence regions for continuous cumulative distribution functions are constructed using empirical cumulative distribution functions and the generalized Kolmogorov-Smirnov distance. The band width of such regions becomes narrower in the right or left tail of the distribution. To avoid tedious computation of confidence levels and critical values, an approximation based on the Poisson process is introduced. This approximation provides a conservative confidence region; moreover, the approximation error decreases monotonically to 0 as sample size increases. Critical values necessary for implementation are given. Applications are made to the areas of risk analysis, investment modeling, reliability assessment, and analysis of fault-tolerant systems.
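
    As a baseline for comparison, the classical one-sided Dvoretzky-Kiefer-Wolfowitz (DKW) inequality gives a conservative band of constant width, in contrast to the tail-narrowing bands described in the record. The sketch below is a generic illustration, not the record's Poisson-process construction.

```python
import math

def dkw_band(sample, alpha=0.05):
    """One-sided lower confidence band for the true CDF, from the one-sided
    DKW inequality  P( sup_x [F_n(x) - F(x)] > eps ) <= exp(-2 n eps^2)."""
    n = len(sample)
    eps = math.sqrt(math.log(1 / alpha) / (2 * n))
    xs = sorted(sample)
    band = [max((i + 1) / n - eps, 0.0) for i in range(n)]  # F_n at order stats, minus eps
    return xs, band, eps
```

    For n = 100 and alpha = 0.05 this gives eps of roughly 0.122 at every x; the Poisson-process approximation in the record instead lets the band tighten in the tail, at the cost of more involved critical values.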

  14. Validation, Uncertainty, and Quantitative Reliability at Confidence (QRC)

    SciTech Connect

    Logan, R W; Nitta, C K

    2002-12-06

    This paper represents a summary of our methodology for Verification and Validation and Uncertainty Quantification. A graded scale methodology is presented and related to other concepts in the literature. We describe the critical nature of quantified Verification and Validation with Uncertainty Quantification at specified Confidence levels in evaluating system certification status. Only after Verification and Validation has contributed to Uncertainty Quantification at specified confidence can rational tradeoffs of various scenarios be made. Verification and Validation methods for various scenarios and issues are applied in assessments of Quantified Reliability at Confidence and we summarize briefly how this can lead to a Value Engineering methodology for investment strategy.

  15. Scaling of light and dark time intervals.

    PubMed

    Marinova, J

    1978-01-01

    Scaling of light and dark time intervals of 0.1 to 1.1 s is performed by the method of magnitude estimation with respect to a given standard. The standards differ in duration and type (light and dark). The light intervals are subjectively estimated as longer than the dark ones. The relation between the mean interval estimations and their magnitude is linear for both light and dark intervals.

  16. Permutations and topological entropy for interval maps

    NASA Astrophysics Data System (ADS)

    Misiurewicz, Michal

    2003-05-01

    Recently Bandt, Keller and Pompe (2002, Entropy of interval maps via permutations, Nonlinearity 15 1595-602) introduced a method of computing the entropy of piecewise monotone interval maps by counting permutations exhibited by initial pieces of orbits. We show that for topological entropy this method does not work for arbitrary continuous interval maps. We also show that for piecewise monotone interval maps topological entropy can be computed by counting permutations exhibited by periodic orbits.
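
    The counting procedure is easy to sketch: slide a window of length m along an orbit, record each window's ordinal pattern (the rank order of its values), and count the distinct patterns; log(count)/m then crudely approximates topological entropy. Below this is applied to the full logistic map (true topological entropy log 2) as an illustrative sketch, not the authors' construction; convergence in m is slow.

```python
import math

def ordinal_pattern(window):
    """Rank order of the values in the window, e.g. (1, 0, 2) for a down-up wiggle."""
    return tuple(sorted(range(len(window)), key=window.__getitem__))

def distinct_patterns(orbit, m):
    """Number of distinct ordinal patterns of length m along the orbit."""
    return len({ordinal_pattern(orbit[i:i + m]) for i in range(len(orbit) - m + 1)})

# Orbit of the full logistic map x -> 4x(1-x)
x, orbit = 0.2, []
for _ in range(5000):
    orbit.append(x)
    x = 4 * x * (1 - x)

# Crude entropy estimates: log(number of patterns) / pattern length
ests = {m: math.log(distinct_patterns(orbit, m)) / m for m in range(3, 8)}
```

    For this map the pattern "decrease, then decrease again" never occurs, so only five of the six length-3 patterns appear: an example of the forbidden patterns that make the counting informative.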

  17. Petroleum distillate solvents as risk factors for undifferentiated connective tissue disease (UCTD).

    PubMed

    Lacey, J V; Garabrant, D H; Laing, T J; Gillespie, B W; Mayes, M D; Cooper, B C; Schottenfeld, D

    1999-04-15

    Occupational solvent exposure may increase the risk of connective tissue disease (CTD). The objective of this case-control study was to investigate the relation between undifferentiated connective tissue disease (UCTD) and solvent exposure in Michigan and Ohio. Women were considered to have UCTD if they did not meet the American College of Rheumatology classification criteria for any CTD but had at least two documented signs, symptoms, or laboratory abnormalities suggestive of a CTD. Detailed information on solvent exposure was ascertained from 205 cases, diagnosed between 1980 and 1992, and 2,095 population-based controls. Age-adjusted odds ratios (OR) and 95 percent confidence intervals (CI) were calculated for all exposures. Among 16 self-reported occupational activities with potential solvent exposure, furniture refinishing (OR = 9.73, 95 percent CI 1.48-63.90), perfume, cosmetic, or drug manufacturing (OR = 7.71, 95 percent CI 2.24-26.56), rubber product manufacturing (OR = 4.70, 95 percent CI 1.75-12.61), work in a medical diagnostic or pathology laboratory (OR = 4.52, 95 percent CI 2.27-8.97), and painting or paint manufacturing (OR = 2.87, 95 percent CI 1.06-7.76) were significantly associated with UCTD. After expert review of self-reported exposure to ten specific solvents, paint thinners or removers (OR = 2.73, 95 percent CI 1.80-4.16) and mineral spirits (OR = 1.81, 95 percent CI 1.09-3.02) were associated with UCTD. These results suggest that exposure to petroleum distillates increases the risk of developing UCTD.

  18. Confidence and the Stock Market: An Agent-Based Approach

    PubMed Central

    Bertella, Mario A.; Pires, Felipe R.; Feng, Ling; Stanley, Harry Eugene

    2014-01-01

    Using a behavioral finance approach we study the impact of behavioral bias. We construct an artificial market consisting of fundamentalists and chartists to model the decision-making process of various agents. The agents differ in their strategies for evaluating stock prices, and exhibit differing memory lengths and confidence levels. When we increase the heterogeneity of the strategies used by the agents, in particular the memory lengths, we observe excess volatility and kurtosis, in agreement with real market fluctuations—indicating that agents in real-world financial markets exhibit widely differing memory lengths. We incorporate the behavioral traits of adaptive confidence and observe a positive correlation between average confidence and return rate, indicating that market sentiment is an important driver in price fluctuations. The introduction of market confidence increases price volatility, reflecting the negative effect of irrationality in market behavior. PMID:24421888

  19. Measurement of tag confidence in user generated contents retrieval

    NASA Astrophysics Data System (ADS)

    Lee, Sihyoung; Min, Hyun-Seok; Lee, Young Bok; Ro, Yong Man

    2009-01-01

    As online image sharing services become popular, the importance of correctly annotated tags is being emphasized for precise search and retrieval. Tags created by users along with user-generated contents (UGC) are often ambiguous because some tags are highly subjective and visually unrelated to the image. Such tags cause unwanted results for users when image search engines rely on them. In this paper, we propose a method of measuring tag confidence so that one can differentiate confidence tags from noisy tags. The proposed tag confidence is measured from the visual semantics of the image. To verify the usefulness of the proposed method, experiments were performed with a UGC database from social network sites. Experimental results showed that image retrieval performance with confidence tags increased.

  20. 78 FR 56621 - Draft Waste Confidence Generic Environmental Impact Statement

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-13

    ... 2, 2013 (77 FR 65137). Results of that scoping process are documented in the ``Waste Confidence... Place, 8629 J.M. Keynes Drive, Charlotte, North Carolina 28262. November 6, 2013: Hyatt Regency...

  2. The Sense of Confidence during Probabilistic Learning: A Normative Account

    PubMed Central

    Meyniel, Florent; Schlunegger, Daniel; Dehaene, Stanislas

    2015-01-01

    Learning in a stochastic environment consists of estimating a model from a limited amount of noisy data, and is therefore inherently uncertain. However, many classical models reduce the learning process to the updating of parameter estimates and neglect the fact that learning is also frequently accompanied by a variable “feeling of knowing” or confidence. The characteristics and the origin of these subjective confidence estimates thus remain largely unknown. Here we investigate whether, during learning, humans not only infer a model of their environment, but also derive an accurate sense of confidence from their inferences. In our experiment, humans estimated the transition probabilities between two visual or auditory stimuli in a changing environment, and reported their mean estimate and their confidence in this report. To formalize the link between both kinds of estimate and assess their accuracy in comparison to a normative reference, we derive the optimal inference strategy for our task. Our results indicate that subjects accurately track the likelihood that their inferences are correct. Learning and estimating confidence in what has been learned appear to be two intimately related abilities, suggesting that they arise from a single inference process. We show that human performance matches several properties of the optimal probabilistic inference. In particular, subjective confidence is impacted by environmental uncertainty, both at the first level (uncertainty in stimulus occurrence given the inferred stochastic characteristics) and at the second level (uncertainty due to unexpected changes in these stochastic characteristics). Confidence also increases appropriately with the number of observations within stable periods. Our results support the idea that humans possess a quantitative sense of confidence in their inferences about abstract non-sensory parameters of the environment. This ability cannot be reduced to simple heuristics, it seems instead a core
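
    A drastically simplified, stationary version of such a normative model (no changepoints, hypothetical data) tracks a single transition probability with a Beta posterior and reads confidence out of the posterior's precision; the paper's full model additionally handles volatility in the environment.

```python
import math

def update(a, b, obs):
    """Beta(a, b) posterior after one Bernoulli observation (1 or 0)."""
    return (a + 1, b) if obs else (a, b + 1)

a, b = 1.0, 1.0                      # uniform prior on the transition probability
history = [1, 1, 0, 1, 1, 1, 0, 1]  # hypothetical sequence of observed transitions
for obs in history:
    a, b = update(a, b, obs)

mean = a / (a + b)                            # point estimate: 0.7
var = a * b / ((a + b) ** 2 * (a + b + 1))    # posterior variance
confidence = -0.5 * math.log(var)             # a simple log-precision readout
```

    As more observations accumulate in a stable period, the posterior variance shrinks and this confidence readout rises, matching the abstract's observation that subjective confidence grows with the number of observations.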

  3. The self-assessment of confidence, by one vocational trainee

    PubMed Central

    Leonard, Colin

    1979-01-01

    A list of important topics in general practice was constructed and a trainee was asked to indicate his confidence about each topic on a one to five scale. Repeated use showed different confidence ratings by the same trainee, and an attempt was made to correlate factual knowledge by using a multiple choice questionnaire. Despite important limitations, which are described, this method may be useful in identifying suitable topics for teaching during the trainee year. PMID:541789

  4. Leaders, self-confidence, and hubris: what's the difference?

    PubMed

    Kerfoot, Karlene M

    2010-01-01

    Success can easily breed hubris. As leaders become more confident, their success can limit their learning because they develop repetitive patterns of filtering information based on past successes and discount information that does not agree with their patterns of success. It is important for leaders to stay grounded in reality and effective as their success grows. Humility, gratitude, and appreciation will avoid the overconfidence that leads to hubris. Building confidence in others is the mark of a great leader. Hubris is not.

  5. Interval approach to braneworld gravity

    NASA Astrophysics Data System (ADS)

    Carena, Marcela; Lykken, Joseph; Park, Minjoon

    2005-10-01

    Gravity in five-dimensional braneworld backgrounds may exhibit extra scalar degrees of freedom with problematic features, including kinetic ghosts and strong coupling behavior. Analysis of such effects is hampered by the standard heuristic approaches to braneworld gravity, which use the equations of motion as the starting point, supplemented by orbifold projections and junction conditions. Here we develop the interval approach to braneworld gravity, which begins with an action principle. This shows how to implement general covariance, despite allowing metric fluctuations that do not vanish on the boundaries. We reproduce simple Z2 orbifolds of gravity, even though in this approach we never perform a Z2 projection. We introduce a family of “straight gauges”, which are bulk coordinate systems in which both branes appear as straight slices in a single coordinate patch. Straight gauges are extremely useful for analyzing metric fluctuations in braneworld models. By explicit gauge-fixing, we show that a general AdS5/AdS4 setup with two branes has at most a radion, but no physical “brane-bending” modes.

  6. Decision-related cortical potentials during an auditory signal detection task with cued observation intervals

    NASA Technical Reports Server (NTRS)

    Squires, K. C.; Squires, N. K.; Hillyard, S. A.

    1975-01-01

    Cortical-evoked potentials were recorded from human subjects performing an auditory detection task with confidence rating responses. Unlike earlier studies that used similar procedures, the observation interval during which the auditory signal could occur was clearly marked by a visual cue light. By precisely defining the observation interval and, hence, synchronizing all perceptual decisions to the evoked potential averaging epoch, it was possible to demonstrate that high-confidence false alarms are accompanied by late-positive P3 components equivalent to those for equally confident hits. Moreover the hit and false alarm evoked potentials were found to covary similarly with variations in confidence rating and to have similar amplitude distributions over the scalp. In a second experiment, it was demonstrated that correct rejections can be associated with a P3 component larger than that for hits. Thus it was possible to show, within the signal detection paradigm, how the two major factors of decision confidence and expectancy are reflected in the P3 component of the cortical-evoked potential.

  7. Learning to Make Collective Decisions: The Impact of Confidence Escalation

    PubMed Central

    Mahmoodi, Ali; Bang, Dan; Ahmadabadi, Majid Nili; Bahrami, Bahador

    2013-01-01

    Little is known about how people learn to take into account others’ opinions in joint decisions. To address this question, we combined computational and empirical approaches. Human dyads made individual and joint visual perceptual decisions and rated their confidence in those decisions (data previously published). We trained a reinforcement learning (temporal difference) agent that received the participants’ confidence levels and learned to arrive at a dyadic decision by finding the policy that either maximized the accuracy of the model decisions or maximally conformed to the empirical dyadic decisions. When confidences were shared visually without verbal interaction, the RL agents successfully captured social learning. When participants exchanged confidences visually and interacted verbally, no collective benefit was achieved and the model failed to predict the dyadic behaviour. Behaviourally, dyad members’ confidence increased progressively, and verbal interaction accelerated this escalation. The success of the model in drawing collective benefit from dyad members was inversely related to the confidence escalation rate. The findings show that an automated learning agent can, in principle, combine individual opinions and achieve collective benefit, but the same agent cannot discount the escalation, suggesting that one cognitive component of collective decision making in humans may involve discounting of overconfidence arising from interactions. PMID:24324677

  8. Confidence-based somatic mutation evaluation and prioritization.

    PubMed

    Löwer, Martin; Renard, Bernhard Y; de Graaf, Jos; Wagner, Meike; Paret, Claudia; Kneip, Christoph; Türeci, Ozlem; Diken, Mustafa; Britten, Cedrik; Kreiter, Sebastian; Koslowski, Michael; Castle, John C; Sahin, Ugur

    2012-01-01

    Next-generation sequencing (NGS) has enabled high-throughput discovery of somatic mutations. Detection depends on experimental design, lab platforms, parameters and analysis algorithms. However, NGS-based somatic mutation detection is prone to erroneous calls, with reported validation rates near 54% and congruence between algorithms less than 50%. Here, we developed an algorithm to assign a single statistic, a false discovery rate (FDR), to each somatic mutation identified by NGS. This FDR confidence value accurately discriminates true mutations from erroneous calls. Using sequencing data generated from triplicate exome profiling of C57BL/6 mice and B16-F10 melanoma cells, we used the existing algorithms GATK, SAMtools and SomaticSniper to identify somatic mutations. For each identified mutation, our algorithm assigned an FDR. We selected 139 mutations for validation, including 50 somatic mutations assigned a low FDR (high confidence) and 44 mutations assigned a high FDR (low confidence). All 50 of the high-confidence somatic mutations validated, none of the 44 low-confidence somatic mutations validated, and 15 of 45 mutations with an intermediate FDR validated. Furthermore, the assignment of a single FDR to individual mutations enables statistical comparisons of lab and computational methodologies, including ROC curves and AUC metrics. Using the HiSeq 2000, single-end 50-nt reads from replicates generate the highest-confidence somatic mutation call set.
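
The record above assigns each mutation call a single FDR-style confidence value. The paper's algorithm is specific to its replicate exome design, so as a generic illustration of the underlying idea (converting per-call p-values into FDR-adjusted confidence values), here is a minimal sketch of the standard Benjamini-Hochberg procedure, which is not the paper's method; the example p-values are invented:

```python
def bh_qvalues(pvals):
    """Benjamini-Hochberg adjusted p-values (q-values).

    q[i] is the smallest FDR level at which call i would be accepted;
    a small q-value plays the role of a high-confidence call.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices by ascending p
    q = [0.0] * m
    running_min = 1.0
    # walk from the largest p-value down, enforcing monotone q-values
    for k, i in enumerate(reversed(order)):
        rank = m - k                      # 1-based rank in ascending order
        running_min = min(running_min, pvals[i] * m / rank)
        q[i] = running_min
    return q

print(bh_qvalues([0.01, 0.04, 0.03, 0.5]))  # q-values: 0.04, ~0.0533, ~0.0533, 0.5
```

Calls can then be binned into high/intermediate/low confidence by thresholding the q-values, mirroring the three FDR tiers validated in the study.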

  9. The Asteroid Identification Problem. II. Target Plane Confidence Boundaries

    NASA Astrophysics Data System (ADS)

    Milani, Andrea; Valsecchi, Giovanni B.

    1999-08-01

    The nominal orbit solution for an asteroid/comet resulting from a least squares fit to astrometric observations is surrounded by a region containing solutions equally compatible with the data, the confidence region. If the observed arc is not too short, and for an epoch close to the observations, the confidence region in the six-dimensional space of orbital elements is well approximated by an ellipsoid. This uncertainty of the orbital elements maps to a position uncertainty at close approach, which can be represented on a Modified Target Plane (MTP), a modification of the one used by Öpik. The MTP is orthogonal to the geocentric velocity at the closest approach point along the nominal orbit. In the linear approximation, the confidence ellipsoids are mapped on the MTP into concentric ellipses, computed by solving the variational equation. For an object observed at only one opposition, however, if the close approach is expected after many revolutions, the ellipses on the MTP become extremely elongated, therefore the linear approximation may fail, and the confidence boundaries on the MTP, by definition the nonlinear images of the confidence ellipsoids, may not be well approximated by the ellipses. In theory the Monte Carlo method by Muinonen and Bowell (1993, Icarus 104, 255-279) can be used to compute the nonlinear confidence boundaries, but in practice the computational load is very heavy. We propose a new method to compute semilinear confidence boundaries on the MTP, based on the theory developed by Milani (1999, Icarus 137, 269-292) to efficiently compute confidence boundaries for predicted observations. This method is a reasonable compromise between reliability and computational load, and can be used for real time risk assessment. These arguments can be applied to any small body approaching any planet, but in the case of a potentially hazardous object (PHO), either an asteroid or a comet whose orbit comes very close to that of the Earth, the application is most

  10. How Much Confidence Can We Have in EU-SILC? Complex Sample Designs and the Standard Error of the Europe 2020 Poverty Indicators

    ERIC Educational Resources Information Center

    Goedeme, Tim

    2013-01-01

    If estimates are based on samples, they should be accompanied by appropriate standard errors and confidence intervals. This is true for scientific research in general, and is even more important if estimates are used to inform and evaluate policy measures such as those aimed at attaining the Europe 2020 poverty reduction target. In this article I…

  11. Frequentist evaluation of intervals estimated for a binomial parameter and for the ratio of Poisson means

    NASA Astrophysics Data System (ADS)

    Cousins, Robert D.; Hymes, Kathryn E.; Tucker, Jordan

    2010-01-01

    Confidence intervals for a binomial parameter or for the ratio of Poisson means are commonly desired in high energy physics (HEP) applications such as measuring a detection efficiency or branching ratio. Due to the discreteness of the data, in both of these problems the frequentist coverage probability unfortunately depends on the unknown parameter. Trade-offs among desiderata have led to numerous sets of intervals in the statistics literature, while in HEP one typically encounters only the classic intervals of Clopper-Pearson (central intervals with no undercoverage but substantial overcoverage) or a few approximate methods which perform rather poorly. If strict coverage is relaxed, some sort of averaging is needed to compare intervals. In most of the statistics literature, this averaging is over different values of the unknown parameter, which is conceptually problematic from the frequentist point of view in which the unknown parameter is typically fixed. In contrast, we perform an (unconditional) average over observed data in the ratio-of-Poisson-means problem. If strict conditional coverage is desired, we recommend Clopper-Pearson intervals and intervals from inverting the likelihood ratio test (for central and non-central intervals, respectively). Lancaster's mid-P modification to either provides excellent unconditional average coverage in the ratio-of-Poisson-means problem.
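
The coverage problem this abstract describes is easy to reproduce by direct enumeration. The sketch below, a minimal illustration rather than any method from the paper, computes the exact frequentist coverage of the common Wald (normal-approximation) interval for a binomial parameter and shows how coverage depends on the unknown p, dipping far below the nominal 95% for small p:

```python
import math

def wald_interval(k, n, z=1.96):
    """Normal-approximation ("Wald") 95% interval for a binomial proportion."""
    p_hat = k / n
    half = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return max(0.0, p_hat - half), min(1.0, p_hat + half)

def coverage(p, n, interval=wald_interval):
    """Exact coverage at true parameter p: total binomial probability of
    the outcomes k whose interval contains p."""
    total = 0.0
    for k in range(n + 1):
        lo, hi = interval(k, n)
        if lo <= p <= hi:
            total += math.comb(n, k) * p**k * (1 - p)**(n - k)
    return total

# coverage oscillates with p and falls far below the nominal 95% for small p
for p in (0.05, 0.1, 0.5):
    print(f"p={p}: exact coverage = {coverage(p, 20):.3f}")
```

Swapping `wald_interval` for a Clopper-Pearson implementation would show the opposite behavior the abstract mentions: no undercoverage, but substantial overcoverage.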

  12. Intervals in evolutionary algorithms for global optimization

    SciTech Connect

    Patil, R.B.

    1995-05-01

    Optimization is of central concern to a number of disciplines. Interval arithmetic methods for global optimization provide guaranteed, verified results. These methods are mainly restricted to classes of objective functions that are twice differentiable, and they use a simple strategy of eliminating and splitting larger regions of the search space in the global optimization process. An efficient approach is proposed that combines the efficient strategy of interval global optimization methods with the robustness of evolutionary algorithms. In the proposed approach, search begins with randomly created interval vectors with interval widths equal to the whole domain. Before the beginning of the evolutionary process, the fitness of these interval parameter vectors is defined by evaluating the objective function at the center of the initial interval vectors. In the subsequent evolutionary process, the local optimization process returns an estimate of the bounds of the objective function over the interval vectors. Though these bounds may not be correct at the beginning, due to large interval widths and complicated function properties, the process of reducing interval widths over time and a selection approach similar to simulated annealing help in estimating reasonably correct bounds as the population evolves. The interval parameter vectors at these estimated bounds (local optima) are then subjected to crossover and mutation operators. This evolutionary process continues for a predetermined number of generations in search of the global optimum.
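
The eliminate-and-split strategy of interval global optimization that this abstract builds on can be sketched in a few lines. This is a plain one-dimensional interval branch-and-bound for the toy objective f(x) = (x - 1)^2, not the paper's hybrid evolutionary method; the interval-arithmetic helpers cover only the operations this toy objective needs:

```python
import heapq

def isub(iv, c):
    a, b = iv
    return (a - c, b - c)

def isqr(iv):
    """Enclosure of x**2 for x in the interval iv."""
    a, b = iv
    if a >= 0:
        return (a * a, b * b)
    if b <= 0:
        return (b * b, a * a)
    return (0.0, max(a * a, b * b))

def f_bounds(iv):
    """Interval enclosure of f(x) = (x - 1)**2 over iv."""
    return isqr(isub(iv, 1.0))

def minimize(domain, tol=1e-6):
    """Best-first interval branch-and-bound: eliminate boxes whose lower
    bound exceeds the best upper bound found so far, split the rest."""
    best_ub = float("inf")
    heap = [(f_bounds(domain)[0], domain)]
    while heap:
        lo, (a, b) = heapq.heappop(heap)
        if lo > best_ub:              # eliminate: cannot contain the minimum
            continue
        best_ub = min(best_ub, f_bounds((a, b))[1])
        if b - a < tol:               # narrow enough: keep its bound
            continue
        m = 0.5 * (a + b)             # split and re-enqueue both halves
        for half in ((a, m), (m, b)):
            heapq.heappush(heap, (f_bounds(half)[0], half))
    return best_ub

print(minimize((-4.0, 6.0)))  # converges to the global minimum 0 at x = 1
```

The paper replaces the exhaustive split schedule with evolutionary operators over interval vectors, but the verified-bound bookkeeping is the same.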

  13. Emotor control: computations underlying bodily resource allocation, emotions, and confidence

    PubMed Central

    Kepecs, Adam; Mensh, Brett D.

    2015-01-01

    Emotional processes are central to behavior, yet their deeply subjective nature has been a challenge for neuroscientific study as well as for psychiatric diagnosis. Here we explore the relationships between subjective feelings and their underlying brain circuits from a computational perspective. We apply recent insights from systems neuroscience—approaching subjective behavior as the result of mental computations instantiated in the brain—to the study of emotions. We develop the hypothesis that emotions are the product of neural computations whose motor role is to reallocate bodily resources mostly gated by smooth muscles. This “emotor” control system is analogous to the more familiar motor control computations that coordinate skeletal muscle movements. To illustrate this framework, we review recent research on “confidence.” Although familiar as a feeling, confidence is also an objective statistical quantity: an estimate of the probability that a hypothesis is correct. This model-based approach helped reveal the neural basis of decision confidence in mammals and provides a bridge to the subjective feeling of confidence in humans. These results have important implications for psychiatry, since disorders of confidence computations appear to contribute to a number of psychopathologies. More broadly, this computational approach to emotions resonates with the emerging view that psychiatric nosology may be best parameterized in terms of disorders of the cognitive computations underlying complex behavior. PMID:26869840

  14. Market Confidence Predicts Stock Price: Beyond Supply and Demand.

    PubMed

    Sun, Xiao-Qian; Shen, Hua-Wei; Cheng, Xue-Qi; Zhang, Yuqing

    2016-01-01

    Stock price prediction is an important and challenging problem in stock market analysis. Existing prediction methods either exploit the autocorrelation of stock price and its correlation with the supply and demand of stock, or explore predictive indicators exogenous to the stock market. In this paper, using transaction records of stocks with trader identifiers, we introduce an index to characterize market confidence, i.e., the ratio of the number of traders who are active on two successive trading days to the number of active traders on a given trading day. Strong Granger causality is found between the index of market confidence and stock price. We further predict stock price by incorporating the index of market confidence into a neural network based on the time series of stock price. Experimental results on 50 stocks in two Chinese stock exchanges demonstrate that the accuracy of stock price prediction is significantly improved by the inclusion of the market confidence index. This study sheds light on using cross-day trading behavior to characterize market confidence and to predict stock price. PMID:27391816
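
The market-confidence index defined in this abstract, the fraction of traders active on day t who are active again on day t+1, is straightforward to compute from per-day sets of trader identifiers; the toy data below are invented:

```python
def confidence_index(active_by_day):
    """active_by_day: one set of trader identifiers per trading day.
    Returns, for each day t, the share of day-t traders still active on day t+1."""
    index = []
    for today, tomorrow in zip(active_by_day, active_by_day[1:]):
        index.append(len(today & tomorrow) / len(today) if today else 0.0)
    return index

days = [{1, 2, 3, 4}, {2, 3, 4, 5}, {5, 6}]
print(confidence_index(days))  # -> [0.75, 0.25]
```

In the paper, this per-day series is then fed alongside the price series into a neural network; the index itself is just this cross-day overlap ratio.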

  18. Characteristics of successful opinion leaders in a bounded confidence model

    NASA Astrophysics Data System (ADS)

    Chen, Shuwei; Glass, David H.; McCartney, Mark

    2016-05-01

    This paper analyses the impact of competing opinion leaders on attracting followers in a social group based on a bounded confidence model in terms of four characteristics: reputation, stubbornness, appeal and extremeness. In the model, reputation differs among leaders and normal agents based on the weights assigned to them, stubbornness of leaders is reflected by their confidence towards normal agents, appeal of the leaders is represented by the confidence of followers towards them, and extremeness is captured by the opinion values of leaders. Simulations show that increasing reputation, stubbornness or extremeness makes it more difficult for the group to achieve consensus, but increasing the appeal will make it easier. The results demonstrate that successful opinion leaders should generally be less stubborn, have greater appeal and be less extreme in order to attract more followers in a competing environment. Furthermore, the number of followers can be very sensitive to small changes in these characteristics. On the other hand, reputation has a more complicated impact: higher reputation helps the leader to attract more followers when the group bound of confidence is high, but can hinder the leader from attracting followers when the group bound of confidence is low.
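
A minimal version of the bounded confidence dynamics underlying this model can be sketched as follows. This is the basic Hegselmann-Krause update with a single fully stubborn leader, so the paper's reputation, appeal and extremeness parameters are collapsed into the leader's fixed opinion; all numbers are illustrative:

```python
def hk_step(opinions, eps, leader=0, stubborn=True):
    """One synchronous Hegselmann-Krause update: each agent moves to the
    mean opinion of all agents within its confidence bound eps."""
    new = []
    for i, x in enumerate(opinions):
        if stubborn and i == leader:
            new.append(x)                      # a fully stubborn leader never moves
            continue
        near = [y for y in opinions if abs(y - x) <= eps]
        new.append(sum(near) / len(near))      # 'near' always contains x itself
    return new

# leader holds opinion 0.9; normal agents start spread over [0, 0.6]
opinions = [0.9] + [k / 20 for k in range(13)]
for _ in range(50):
    opinions = hk_step(opinions, eps=0.35)
followers = sum(abs(x - opinions[0]) < 1e-3 for x in opinions[1:])
print(f"agents within 0.001 of the leader after 50 steps: {followers}")
```

Varying `eps` (the group bound of confidence), the leader's opinion (extremeness) and its stubbornness reproduces the kinds of sensitivity experiments the abstract describes.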

  19. The interval order polytope of a digraph

    SciTech Connect

    Mueller, R.; Schulz, A.

    1994-12-31

    Interval orders and their cocomparability graphs, the interval graphs, are of significant importance as structures of solutions for several combinatorial optimization problems. This is due to the fact that each element is associated with an interval, which may be interpreted as a time interval, for example in a schedule, or as a substring in a string of items, for example a substring of a DNA string in molecular biology. In the talk we show that the interval order polytope of a digraph may serve as a basis for a polyhedral combinatorial approach to this class of problems. We present results on odd cycle and clique based valid inequalities and discuss the complexity of their separation problem. We show that well-known valid inequalities of the linear ordering polytope, such as Möbius ladder inequalities and fence inequalities, obtain a natural interpretation in terms of these inequalities of the interval order polytope.

  20. Selecting accurate statements from the cognitive interview using confidence ratings.

    PubMed

    Roberts, Wayne T; Higham, Philip A

    2002-03-01

    Participants viewed a videotape of a simulated murder, and their recall (and confidence) was tested 1 week later with the cognitive interview. Results indicated that (a) the subset of statements assigned high confidence was more accurate than the full set of statements; (b) the accuracy benefit was limited to information that forensic experts considered relevant to an investigation, whereas peripheral information showed the opposite pattern; (c) the confidence-accuracy relationship was higher for relevant than for peripheral information; (d) the focused-retrieval phase was associated with a greater proportion of peripheral and a lesser proportion of relevant information than the other phases; and (e) only about 50% of the relevant information was elicited, and most of this was elicited in Phase 1.

  1. Confidence and certainty: distinct probabilistic quantities for different goals.

    PubMed

    Pouget, Alexandre; Drugowitsch, Jan; Kepecs, Adam

    2016-03-01

    When facing uncertainty, adaptive behavioral strategies demand that the brain performs probabilistic computations. In this probabilistic framework, the notion of certainty and confidence would appear to be closely related, so much so that it is tempting to conclude that these two concepts are one and the same. We argue that there are computational reasons to distinguish between these two concepts. Specifically, we propose that confidence should be defined as the probability that a decision or a proposition, overt or covert, is correct given the evidence, a critical quantity in complex sequential decisions. We suggest that the term certainty should be reserved to refer to the encoding of all other probability distributions over sensory and cognitive variables. We also discuss strategies for studying the neural codes for confidence and certainty and argue that clear definitions of neural codes are essential to understanding the relative contributions of various cortical areas to decision making. PMID:26906503

  2. Confidence as a Common Currency between Vision and Audition

    PubMed Central

    de Gardelle, Vincent; Le Corre, François; Mamassian, Pascal

    2016-01-01

    The idea of a common currency underlying our choice behaviour has played an important role in sciences of behaviour, from neurobiology to psychology and economics. However, while it has been mainly investigated in terms of values, with a common scale on which goods would be evaluated and compared, the question of a common scale for subjective probabilities and confidence in particular has received only little empirical investigation so far. The present study extends previous work addressing this question, by showing that confidence can be compared across visual and auditory decisions, with the same precision as for the comparison of two trials within the same task. We discuss the possibility that confidence could serve as a common currency when describing our choices to ourselves and to others. PMID:26808061

  3. Confidence region estimation techniques for nonlinear regression: three case studies.

    SciTech Connect

    Swiler, Laura Painton (Sandia National Laboratories, Albuquerque, NM); Sullivan, Sean P. (University of Texas, Austin, TX); Stucky-Mack, Nicholas J. (Harvard University, Cambridge, MA); Roberts, Randall Mark; Vugrin, Kay White

    2005-10-01

    This work focuses on different methods to generate confidence regions for nonlinear parameter identification problems. Three methods for confidence region estimation are considered: a linear approximation method, an F-test method, and a Log-Likelihood method. Each of these methods is applied to three case studies. One case study is a problem with synthetic data, and the other two case studies identify hydraulic parameters in groundwater flow problems based on experimental well-test results. The confidence regions for each case study are analyzed and compared. Although the F-test and Log-Likelihood methods result in similar regions, there are differences between these regions and the regions generated by the linear approximation method for nonlinear problems. The differing results, capabilities, and drawbacks of all three methods are discussed.

  4. Does mood influence the realism of confidence judgments?

    PubMed

    Allwood, Carl Martin; Granhag, Pär Anders; Jonsson, Anna-Carin

    2002-07-01

    Previous research has shown that mood affects cognition, but the extent to which mood affects meta-cognitive judgments is a relatively overlooked issue. In the current study we investigated how mood influences the degree of realism in participants' confidence judgments (based on an episodic memory task). Using music and film in combination, we successfully induced an elated mood in half of the participants, but failed to induce a sad mood in the other half. In line with previous research, the participants in both conditions were overconfident in their judgments. However, and contrary to our prediction, our data indicated that there was no difference in the realism of confidence between the conditions. When relating this result to previous research, our conclusion is that there is no, or very little, influence of mood of moderate intensity on the realism of confidence judgments.

  5. Stimulus properties of fixed-interval responses.

    PubMed

    Buchman, I B; Zeiler, M D

    1975-11-01

    Responses in the first component of a chained schedule produced a change to the terminal component according to a fixed-interval schedule. The number of responses emitted in the fixed interval determined whether a variable-interval schedule of food presentation or extinction prevailed in the terminal component. In one condition, the variable-interval schedule was in effect only if the number of responses during the fixed interval was less than that specified; in another condition, the number of responses had to exceed that specified. The number of responses emitted in the fixed interval did not shift markedly in the direction required for food presentation. Instead, responding often tended to change in the opposite direction. Such an effect indicated that differential food presentation did not modify the reference behavior in accord with the requirement, but it was consistent with other data on fixed-interval schedule performance. Behavior in the terminal component, however, did reveal sensitivity to the relation between total responses emitted in the fixed interval and the availability of food. Response rate in the terminal component was a function of the proximity of the response number emitted in the fixed interval to that required for food presentation. Thus, response number served as a discriminative stimulus controlling subsequent performance.

  6. A note on the path interval distance.

    PubMed

    Coons, Jane Ivy; Rusinko, Joseph

    2016-06-01

    The path interval distance accounts for global congruence between locally incongruent trees. We show that the path interval distance provides a lower bound for the nearest neighbor interchange distance. In contrast to the Robinson-Foulds distance, random pairs of trees are unlikely to be maximally distant from one another under the path interval distance. These features indicate that the path interval distance should play a role in phylogenomics where the comparison of trees on a fixed set of taxa is becoming increasingly important. PMID:27040521

  8. Golfers have better balance control and confidence than healthy controls.

    PubMed

    Gao, Kelly L; Hui-Chan, Christina W Y; Tsang, William W N

    2011-11-01

    In a well-executed golf swing, golfers must maintain good balance and precise control of posture. Golfing also requires prolonged walking over uneven ground such as a hilly course. Therefore, repeated golf practice may enhance balance control and confidence in golfers. The objective is to investigate whether older golfers had better balance control and confidence than non-golfing older, healthy adults. This is a cross-sectional study, conducted at a university-based rehabilitation center. Eleven golfers and 12 control subjects (all male; mean age: 66.2 ± 6.8 and 71.3 ± 6.6 years, respectively) were recruited. Two balance control tests were administered: (1) the functional reach test, which measured subjects' maximum forward distance in standing; (2) the sensory organization test (SOT), which examined subjects' abilities to use somatosensory, visual, and vestibular inputs to control body sway during stance. The modified Activities-specific Balance Confidence (ABC) scale determined subjects' balance confidence in daily activities. The golfers were found to achieve significantly longer distance in the functional reach test than controls. They manifested significantly better balance than controls in the visual ratio and vestibular ratio, but not the somatosensory ratio of the SOT. The golfers also reported significantly higher balance confidence score ratios. Furthermore, older adults' modified ABC score ratios showed positive correlations with functional reach, visual and vestibular ratios, but not with the somatosensory ratio. Golfing is an activity which may enhance both the physical and psychological aspects of balance control. Significant correlations between these measures reveal the importance of balance control under reduced or conflicting sensory conditions in older adults' balance confidence in their daily activities. Since cause-and-effect could not be established in the present cross-sectional study, further prospective intervention design is warranted.

  9. Variance and bias confidence criteria for ERA modal parameter identification. [Eigensystem Realization Algorithm

    NASA Technical Reports Server (NTRS)

    Longman, Richard W.; Bergmann, Martin; Juang, Jer-Nan

    1988-01-01

    For the ERA system identification algorithm, perturbation methods are used to develop expressions for variance and bias of the identified modal parameters. Based on the statistics of the measurement noise, the variance results serve as confidence criteria by indicating how likely the true parameters are to lie within any chosen interval about their identified values. This replaces the use of expensive and time-consuming Monte Carlo computer runs to obtain similar information. The bias estimates help guide the ERA user in his choice of which data points to use and how much data to use in order to obtain the best results, performing the trade-off between the bias and scatter. Also, when the uncertainty in the bias is sufficiently small, the bias information can be used to correct the ERA results. In addition, expressions for the variance and bias of the singular values serve as tools to help the ERA user decide the proper modal order.

  10. The effect of terrorism on public confidence : an exploratory study.

    SciTech Connect

    Berry, M. S.; Baldwin, T. E.; Samsa, M. E.; Ramaprasad, A.; Decision and Information Sciences

    2008-10-31

    A primary goal of terrorism is to instill a sense of fear and vulnerability in a population and to erode confidence in government and law enforcement agencies to protect citizens against future attacks. In recognition of its importance, the Department of Homeland Security includes public confidence as one of the metrics it uses to assess the consequences of terrorist attacks. Hence, several factors--including a detailed understanding of the variations in public confidence among individuals, by type of terrorist event, and as a function of time--are critical to developing this metric. In this exploratory study, a questionnaire was designed, tested, and administered to small groups of individuals to measure public confidence in the ability of federal, state, and local governments and their public safety agencies to prevent acts of terrorism. Data were collected from the groups before and after they watched mock television news broadcasts portraying a smallpox attack, a series of suicide bomber attacks, a refinery bombing, and cyber intrusions on financial institutions that resulted in identity theft and financial losses. Our findings include the following: (a) the subjects can be classified into at least three distinct groups on the basis of their baseline outlook--optimistic, pessimistic, and unaffected; (b) the subjects make discriminations in their interpretations of an event on the basis of the nature of a terrorist attack, the time horizon, and its impact; (c) the recovery of confidence after a terrorist event has an incubation period and typically does not return to its initial level in the long-term; (d) the patterns of recovery of confidence differ between the optimists and the pessimists; and (e) individuals are able to associate a monetary value with a loss or gain in confidence, and the value associated with a loss is greater than the value associated with a gain. These findings illustrate the importance the public places on their confidence in government.

  11. Using spatially explicit surveillance models to provide confidence in the eradication of an invasive ant

    PubMed Central

    Ward, Darren F.; Anderson, Dean P.; Barron, Mandy C.

    2016-01-01

    Effective detection plays an important role in the surveillance and management of invasive species. Invasive ants are very difficult to eradicate and are prone to imperfect detection because of their small size and cryptic nature. Here we demonstrate the use of spatially explicit surveillance models to estimate the probability that Argentine ants (Linepithema humile) have been eradicated from an offshore island site, given their absence across four surveys, using three surveillance methods, conducted since ant control was applied. The probability of eradication increased sharply with each successive survey. Using all surveys and surveillance methods combined, the overall median probability of eradication of Argentine ants was 0.96, with a credible interval value of 0.87 indicating high confidence in this result. Our results demonstrate the value of spatially explicit surveillance models for estimating the likelihood of eradication of Argentine ants. We argue that such models are vital for giving confidence in eradication programs, especially in highly valued conservation areas such as offshore islands. PMID:27721491
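    The core logic of absence-based surveillance can be illustrated with a much simpler, non-spatial Bayesian sketch: each survey that fails to detect the ant raises the posterior probability of eradication. The per-survey detection probability and prior below are purely illustrative, not values from the study, and the study's actual models are spatially explicit.

    ```python
    def prob_eradicated(prior, detect_prob, n_negative_surveys):
        """Posterior P(eradicated) after n surveys that all failed to
        detect the ant, via Bayes' rule. If truly eradicated, negative
        surveys occur with probability 1; if ants persist, all surveys
        miss them with probability (1 - detect_prob)**n."""
        p_neg_given_present = (1.0 - detect_prob) ** n_negative_surveys
        return prior / (prior + (1.0 - prior) * p_neg_given_present)

    # Illustrative: 50% prior belief in eradication, 55% per-survey detection
    for n in range(1, 5):
        print(n, round(prob_eradicated(0.5, 0.55, n), 3))
    ```

    The sharp rise in posterior probability with each additional negative survey mirrors the pattern the authors report, even though their models additionally account for where the surveillance effort was placed.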

  12. Assessing confidence in Pliocene sea surface temperatures to evaluate predictive models

    USGS Publications Warehouse

    Dowsett, Harry J.; Robinson, Marci M.; Haywood, Alan M.; Hill, Daniel J.; Dolan, Aisling M.; Stoll, Danielle K.; Chan, Wing-Le; Abe-Ouchi, Ayako; Chandler, Mark A.; Rosenbloom, Nan A.; Otto-Bliesner, Bette L.; Bragg, Fran J.; Lunt, Daniel J.; Foley, Kevin M.; Riesselman, Christina R.

    2012-01-01

    In light of mounting empirical evidence that planetary warming is well underway, the climate research community looks to palaeoclimate research for a ground-truthing measure with which to test the accuracy of future climate simulations. Model experiments that attempt to simulate climates of the past serve to identify both similarities and differences between two climate states and, when compared with simulations run by other models and with geological data, to identify model-specific biases. Uncertainties associated with both the data and the models must be considered in such an exercise. The most recent period of sustained global warmth similar to what is projected for the near future occurred about 3.3–3.0 million years ago, during the Pliocene epoch. Here, we present Pliocene sea surface temperature data, newly characterized in terms of level of confidence, along with initial experimental results from four climate models. We conclude that, in terms of sea surface temperature, models are in good agreement with estimates of Pliocene sea surface temperature in most regions except the North Atlantic. Our analysis indicates that the discrepancy between the Pliocene proxy data and model simulations in the mid-latitudes of the North Atlantic, where models underestimate warming shown by our highest-confidence data, may provide a new perspective and insight into the predictive abilities of these models in simulating a past warm interval in Earth history. This is important because the Pliocene has a number of parallels to present predictions of late twenty-first century climate.

  13. Assessing Confidence in Pliocene Sea Surface Temperatures to Evaluate Predictive Models

    NASA Technical Reports Server (NTRS)

    Dowsett, Harry J.; Robinson, Marci M.; Haywood, Alan M.; Hill, Daniel J.; Dolan, Aisling. M.; Chan, Wing-Le; Abe-Ouchi, Ayako; Chandler, Mark A.; Rosenbloom, Nan A.; Otto-Bliesner, Bette L.; Bragg, Fran J.; Lunt, Daniel J.; Stoll, Danielle K.; Foley, Kevin M.; Riesselman, Christina

    2012-01-01

    In light of mounting empirical evidence that planetary warming is well underway, the climate research community looks to palaeoclimate research for a ground-truthing measure with which to test the accuracy of future climate simulations. Model experiments that attempt to simulate climates of the past serve to identify both similarities and differences between two climate states and, when compared with simulations run by other models and with geological data, to identify model-specific biases. Uncertainties associated with both the data and the models must be considered in such an exercise. The most recent period of sustained global warmth similar to what is projected for the near future occurred about 3.3–3.0 million years ago, during the Pliocene epoch. Here, we present Pliocene sea surface temperature data, newly characterized in terms of level of confidence, along with initial experimental results from four climate models. We conclude that, in terms of sea surface temperature, models are in good agreement with estimates of Pliocene sea surface temperature in most regions except the North Atlantic. Our analysis indicates that the discrepancy between the Pliocene proxy data and model simulations in the mid-latitudes of the North Atlantic, where models underestimate warming shown by our highest-confidence data, may provide a new perspective and insight into the predictive abilities of these models in simulating a past warm interval in Earth history. This is important because the Pliocene has a number of parallels to present predictions of late twenty-first century climate.

  14. Assessment of cross-frequency coupling with confidence using generalized linear models

    PubMed Central

    Kramer, M. A.; Eden, U. T.

    2013-01-01

    Background Brain voltage activity displays distinct neuronal rhythms spanning a wide frequency range. How rhythms of different frequency interact – and the function of these interactions – remains an active area of research. Many methods have been proposed to assess the interactions between different frequency rhythms, in particular measures that characterize the relationship between the phase of a low frequency rhythm and the amplitude envelope of a high frequency rhythm. However, an optimal analysis method to assess this cross-frequency coupling (CFC) does not yet exist. New Method Here we describe a new procedure to assess CFC that utilizes the generalized linear modeling (GLM) framework. Results We illustrate the utility of this procedure in three synthetic examples. The proposed GLM-CFC procedure allows a rapid and principled assessment of CFC with confidence bounds, scales with the intensity of the CFC, and accurately detects biphasic coupling. Comparison with Existing Methods Compared to existing methods, the proposed GLM-CFC procedure is easily interpretable, possesses confidence intervals that are easy and efficient to compute, and accurately detects biphasic coupling. Conclusions The GLM-CFC statistic provides a method for accurate and statistically rigorous assessment of CFC. PMID:24012829
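    The GLM idea behind this kind of CFC statistic can be sketched compactly: regress the high-frequency amplitude envelope on sinusoidal functions of the low-frequency phase. The sketch below simulates phase and amplitude directly rather than extracting them from voltage data (where one would typically band-pass filter and apply a Hilbert transform), and it uses an identity-link Gaussian GLM (ordinary least squares) rather than the paper's exact formulation; all parameter values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000
    phase = rng.uniform(-np.pi, np.pi, n)            # low-frequency phase
    # Amplitude envelope modulated by phase, plus noise (coupling present):
    amp = 2.0 + 1.0 * np.cos(phase) + rng.normal(0, 0.2, n)

    # GLM design matrix: intercept, cos(phase), sin(phase).
    X = np.column_stack([np.ones(n), np.cos(phase), np.sin(phase)])
    beta, *_ = np.linalg.lstsq(X, amp, rcond=None)

    # Coupling strength: modulation depth relative to mean amplitude.
    strength = np.hypot(beta[1], beta[2]) / beta[0]
    print(f"betas={np.round(beta, 2)}, coupling strength={strength:.2f}")
    ```

    Because the model is fitted by regression, confidence bounds on the coupling coefficients follow from standard GLM machinery, which is the property the abstract highlights.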

  15. Signatures of a Statistical Computation in the Human Sense of Confidence.

    PubMed

    Sanders, Joshua I; Hangya, Balázs; Kepecs, Adam

    2016-05-01

    Human confidence judgments are thought to originate from metacognitive processes that provide a subjective assessment about one's beliefs. Alternatively, confidence is framed in mathematics as an objective statistical quantity: the probability that a chosen hypothesis is correct. Despite similar terminology, it remains unclear whether the subjective feeling of confidence is related to the objective, statistical computation of confidence. To address this, we collected confidence reports from humans performing perceptual and knowledge-based psychometric decision tasks. We observed two counterintuitive patterns relating confidence to choice and evidence: apparent overconfidence in choices based on uninformative evidence, and decreasing confidence with increasing evidence strength for erroneous choices. We show that these patterns lawfully arise from statistical confidence, and therefore occur even for perfectly calibrated confidence measures. Furthermore, statistical confidence quantitatively accounted for human confidence in our tasks without necessitating heuristic operations. Accordingly, we suggest that the human feeling of confidence originates from a mental computation of statistical confidence. PMID:27151640

  16. Variation in Cancer Incidence among Patients with ESRD during Kidney Function and Nonfunction Intervals.

    PubMed

    Yanik, Elizabeth L; Clarke, Christina A; Snyder, Jon J; Pfeiffer, Ruth M; Engels, Eric A

    2016-05-01

    Among patients with ESRD, cancer risk is affected by kidney dysfunction and by immunosuppression after transplant. Assessing patterns across periods of dialysis and kidney transplantation may inform cancer etiology. We evaluated 202,195 kidney transplant candidates and recipients from a linkage between the Scientific Registry of Transplant Recipients and cancer registries, and compared incidence in kidney function intervals (time with a transplant) with incidence in nonfunction intervals (waitlist or time after transplant failure), adjusting for demographic factors. Incidence of infection-related and immune-related cancer was higher during kidney function intervals than during nonfunction intervals. Incidence was most elevated for Kaposi sarcoma (hazard ratio [HR], 9.1; 95% confidence interval (95% CI), 4.7 to 18), non-Hodgkin's lymphoma (HR, 3.2; 95% CI, 2.8 to 3.7), Hodgkin's lymphoma (HR, 3.0; 95% CI, 1.7 to 5.3), lip cancer (HR, 3.4; 95% CI, 2.0 to 6.0), and nonepithelial skin cancers (HR, 3.8; 95% CI, 2.5 to 5.8). Conversely, ESRD-related cancer incidence was lower during kidney function intervals (kidney cancer: HR, 0.8; 95% CI, 0.7 to 0.8 and thyroid cancer: HR, 0.7; 95% CI, 0.6 to 0.8). With each successive interval, incidence changed in alternating directions for non-Hodgkin's lymphoma, melanoma, and lung, pancreatic, and nonepithelial skin cancers (higher during function intervals), and kidney and thyroid cancers (higher during nonfunction intervals). For many cancers, incidence remained higher than in the general population across all intervals. These data indicate strong short-term effects of kidney dysfunction and immunosuppression on cancer incidence in patients with ESRD, suggesting a need for persistent cancer screening and prevention. PMID:26563384
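    The hazard ratios above are reported with 95% confidence intervals; the generic construction is to form the interval on the log-hazard scale and exponentiate. A minimal sketch, with illustrative coefficient and standard-error values (not numbers from the study):

    ```python
    import math

    def hazard_ratio_ci(log_hr, se, z=1.96):
        """Return (HR, lower, upper) for a 95% CI from a Cox-model
        log-hazard coefficient and its standard error."""
        hr = math.exp(log_hr)
        lower = math.exp(log_hr - z * se)
        upper = math.exp(log_hr + z * se)
        return hr, lower, upper

    # Illustrative: log-HR of 1.16 with SE 0.07
    hr, lo, hi = hazard_ratio_ci(1.16, 0.07)
    print(f"HR={hr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
    ```

    Exponentiating makes the interval asymmetric around the hazard ratio, which is why published CIs such as those quoted above are wider on the upper side.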

  17. The dose delivery effect of the different Beam ON interval in FFF SBRT: TrueBEAM

    NASA Astrophysics Data System (ADS)

    Tawonwong, T.; Suriyapee, S.; Oonsiri, S.; Sanghangthum, T.; Oonsiri, P.

    2016-03-01

    The purpose of this study is to determine the dose delivery effect of different Beam ON intervals in Flattening Filter Free Stereotactic Body Radiation Therapy (FFF-SBRT). Three 10 MV-FFF SBRT plans (2 half-rotating RapidArc, 9 to 10 Gray/fraction) were selected and irradiated at three different intervals (100%, 50% and 25%) using the RPM gating system. Plan verification was performed with the ArcCHECK for gamma analysis and an ionization chamber for point-dose measurement. The dose delivery time for each interval was recorded. For gamma analysis (2%/2 mm criteria), the average percent pass of all plans for the 100%, 50% and 25% intervals was 86.1±3.3%, 86.0±3.0% and 86.1±3.3%, respectively. For point-dose measurement, the average ratios of each interval to the treatment planning dose were 1.012±0.015, 1.011±0.014 and 1.011±0.013 for the 100%, 50% and 25% intervals, respectively. The average dose delivery time increased from 74.3±5.0 seconds for the 100% interval to 154.3±12.6 and 347.9±20.3 seconds for the 50% and 25% intervals, respectively. Dose delivery quality was the same across the different Beam ON intervals in FFF-SBRT on TrueBEAM. While the 100% interval represents the breath-hold treatment technique, free-breathing treatments using the RPM gating system can therefore be delivered with confidence.
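    The reported time scaling follows a simple duty-cycle argument: if the beam is on for only a fraction f of the time, delivery must take at least t_100 / f. A minimal sketch using the times quoted in the abstract shows the measured values sit slightly above this floor, consistent with gating overhead:

    ```python
    # Duty-cycle floor on delivery time versus the abstract's measured times.
    t_100 = 74.3  # seconds at the 100% Beam ON interval (from the abstract)
    measured = {0.50: 154.3, 0.25: 347.9}

    for f, t_obs in measured.items():
        t_min = t_100 / f  # minimum time if the only cost were beam-off gaps
        print(f"interval {f:.0%}: floor {t_min:.1f} s, measured {t_obs:.1f} s")
    ```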

  18. Interval and Contour Processing in Autism

    ERIC Educational Resources Information Center

    Heaton, Pamela

    2005-01-01

    High functioning children with autism and age and intelligence matched controls participated in experiments testing perception of pitch intervals and musical contours. The finding from the interval study showed superior detection of pitch direction over small pitch distances in the autism group. On the test of contour discrimination no group…

  19. SINGLE-INTERVAL GAS PERMEABILITY ESTIMATION

    EPA Science Inventory

    Single-interval, steady-state gas permeability testing requires estimation of pressure at a screened interval, which in turn requires measurement of friction factors as a function of mass flow rate. Friction factors can be obtained by injecting air through a length of pipe...

  20. Dealing confidently with IRS, Part I: Preparing for IRS audits.

    PubMed

    Holub, S F; Walker, S R

    1978-10-01

    With the IRS apparently making health care institutions the focus of a nationwide audit emphasis, hospital administrators will want to prepare themselves for confident handling of audits. Four types of audit procedures are explained, suggestions are made for getting a hospital ready for an audit, and strategies are suggested for maintaining control over the audit's progress.

  1. Effects of parental divorce on marital commitment and confidence.

    PubMed

    Whitton, Sarah W; Rhoades, Galena K; Stanley, Scott M; Markman, Howard J

    2008-10-01

    Research on the intergenerational transmission of divorce has demonstrated that compared with offspring of nondivorced parents, those of divorced parents generally have more negative attitudes toward marriage as an institution and are less optimistic about the feasibility of a long-lasting, healthy marriage. It is also possible that when entering marriage themselves, adults whose parents divorced have less personal relationship commitment to their own marriages and less confidence in their own ability to maintain a happy marriage with their spouse. However, this prediction has not been tested. In the current study, we assessed relationship commitment and relationship confidence, as well as parental divorce and retrospectively reported interparental conflict, in a sample of 265 engaged couples prior to their first marriage. Results demonstrated that women's, but not men's, parental divorce was associated with lower relationship commitment and lower relationship confidence. These effects persisted when controlling for the influence of recalled interparental conflict and premarital relationship adjustment. The current findings suggest that women whose parents divorced are more likely to enter marriage with relatively lower commitment to, and confidence in, the future of those marriages, potentially raising their risk for divorce.

  2. Analyzing Student Confidence in Classroom Voting with Multiple Choice Questions

    ERIC Educational Resources Information Center

    Stewart, Ann; Storm, Christopher; VonEpps, Lahna

    2013-01-01

    The purpose of this paper is to present results of a recent study in which students voted on multiple choice questions in mathematics courses of varying levels. Students used clickers to select the best answer among the choices given; in addition, they were also asked whether they were confident in their answer. In this paper we analyze data…

  3. Unexpected arousal modulates the influence of sensory noise on confidence

    PubMed Central

    Allen, Micah; Frank, Darya; Schwarzkopf, D Samuel; Fardo, Francesca; Winston, Joel S; Hauser, Tobias U; Rees, Geraint

    2016-01-01

    Human perception is invariably accompanied by a graded feeling of confidence that guides metacognitive awareness and decision-making. It is often assumed that this arises solely from the feed-forward encoding of the strength or precision of sensory inputs. In contrast, interoceptive inference models suggest that confidence reflects a weighted integration of sensory precision and expectations about internal states, such as arousal. Here we test this hypothesis using a novel psychophysical paradigm, in which unseen disgust-cues induced unexpected, unconscious arousal just before participants discriminated motion signals of variable precision. Across measures of perceptual bias, uncertainty, and physiological arousal we found that arousing disgust cues modulated the encoding of sensory noise. Furthermore, the degree to which trial-by-trial pupil fluctuations encoded this nonlinear interaction correlated with trial level confidence. Our results suggest that unexpected arousal regulates perceptual precision, such that subjective confidence reflects the integration of both external sensory and internal, embodied states. DOI: http://dx.doi.org/10.7554/eLife.18103.001 PMID:27776633

  4. The Dark and Bloody Mystery: Building Basic Writers' Confidence.

    ERIC Educational Resources Information Center

    Sledd, Robert

    While the roots of students' fear of writing go deep, students fear most the surface of writing. They fear that a person's language indicates the state not only of the mind but of the soul--thus their writing can make them look stupid and morally depraved. This fear of error and lack of confidence prevent students from developing a command of the…

  5. Utility of de-escalatory confidence-building measures

    SciTech Connect

    Nation, J.

    1989-06-01

    This paper evaluates the utility of specific confidence-building de-escalatory measures and pays special attention to the evaluation of measures which place restrictions on or establish procedures for strategic forces. Some measures appear more promising than others. Potentially useful confidence-building measures largely satisfy defined criteria and include the phased return of strategic nuclear forces to peacetime bases and operations, the termination of interference with communications and NTMs (National Technical Means) and the termination of civil defense preparations. Less-promising CBMs include the standing down of supplemental early warning systems, the establishment of SSBN keep-out zones, and decreases in bomber alert rates. Establishment of SSBN keep-out zones and reduction in bomber rates are difficult to verify, while the standing-down of early warning systems provides little benefit at potentially large costs. Particular confidence-building measures (CBMs) may be most useful in building superpower confidence at specific points in the crisis termination phase. For example, a decrease in strategic bomber alert rates may provide some decrease in perception of the likelihood of war, but its potential costs, particularly in increasing bomber vulnerability, may limit its utility and implementation to the final crisis stages when the risks of re-escalation and surprise attack are lower.

  6. Measuring Academic Behavioural Confidence: The ABC Scale Revisited

    ERIC Educational Resources Information Center

    Sander, Paul; Sanders, Lalage

    2009-01-01

    The Academic Behavioural Confidence (ABC) scale has been shown to be valid and can be useful to teachers in understanding their students, enabling the design of more effective teaching sessions with large cohorts. However, some of the between-group differences have been smaller than expected, leading to the hypothesis that the ABC scale may not…

  7. North Dakota Leadership Training Boosts Confidence and Involvement

    ERIC Educational Resources Information Center

    Flage, Lynette; Hvidsten, Marie; Vettern, Rachelle

    2012-01-01

    Effective leadership is critical for communities as they work to maintain their vitality and sustainability for years to come. The purpose of the study reported here was to assess confidence levels and community engagement of community leadership program participants in North Dakota State University Extension programs. Through a survey…

  8. Using Confidence as Feedback in Multi-Sized Learning Environments

    ERIC Educational Resources Information Center

    Hench, Thomas L.

    2014-01-01

    This paper describes the use of existing confidence and performance data to provide feedback by first demonstrating the data's fit to a simple linear model. The paper continues by showing how the model's use as a benchmark provides feedback to allow current or future students to infer either the difficulty or the degree of under or over…

  9. Highly Confident Wrong Answering--And How to Detect It.

    ERIC Educational Resources Information Center

    Yule, George

    1988-01-01

    A Confidence-rating scale accompanying answers on a listening test helps distinguish between learners who select answers based on effective self-monitoring and those whose answers are based on poor self-monitoring. The latter are more likely to do so subsequently as well. Test items and a rating scale are illustrated. (Author/LMO)

  10. Gender Difference of Confidence in Using Technology for Learning

    ERIC Educational Resources Information Center

    Yau, Hon Keung; Cheng, Alison Lai Fong

    2012-01-01

    Past studies have found male students to have more confidence in using technology for learning than do female students. Males tend to have more positive attitudes about the use of technology for learning than do females. According to the Women's Foundation (2006), few studies examined gender relevant research in Hong Kong. It also appears that no…

  11. Building and Encouraging Confidence and Creativity in Science.

    ERIC Educational Resources Information Center

    Ryan, Lynnette J.

    The focus of this study is an eight-week science enrichment mentorship program for elementary and middle school girls (ages 8 to 13) at Coleson Village, a public housing community, in an urban area of western Washington. The goal of the program was to build confidence and encourage creativity as the participants discovered themselves as competent…

  12. 37 CFR 1.14 - Patent applications preserved in confidence.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2012-07-01 2012-07-01 false Patent applications preserved in confidence. 1.14 Section 1.14 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE, DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES General...

  13. 37 CFR 1.14 - Patent applications preserved in confidence.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2011-07-01 2011-07-01 false Patent applications preserved in confidence. 1.14 Section 1.14 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE, DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES General...

  14. 37 CFR 1.14 - Patent applications preserved in confidence.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2013-07-01 2013-07-01 false Patent applications preserved in confidence. 1.14 Section 1.14 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE, DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES General...

  15. 37 CFR 1.14 - Patent applications preserved in confidence.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Patent applications preserved in confidence. 1.14 Section 1.14 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE, DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES General...

  16. 37 CFR 1.14 - Patent applications preserved in confidence.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2014-07-01 2014-07-01 false Patent applications preserved in confidence. 1.14 Section 1.14 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE, DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES General...

  17. Hypercorrection of high confidence errors in lexical representations.

    PubMed

    Iwaki, Nobuyoshi; Matsushima, Hiroko; Kodaira, Kazumasa

    2013-08-01

    Memory errors associated with higher confidence are more likely to be corrected than errors made with lower confidence, a phenomenon called the hypercorrection effect. This study investigated whether the hypercorrection effect occurs with phonological information of lexical representations. In Experiment 1, 15 participants performed a Japanese Kanji word-reading task, in which the words had several possible pronunciations. In the initial task, participants were required to read aloud each word and indicate their confidence in their response; this was followed by receipt of visual feedback of the correct response. A hypercorrection effect was observed, indicating generality of this effect beyond previous observations in memories based upon semantic or episodic representations. This effect was replicated in Experiment 2, in which 40 participants performed the same task as in Experiment 1. When the participant's ratings of the practical value of the words were controlled, a partial correlation between confidence and likelihood of later correcting the initial mistaken response was reduced. This suggests that the hypercorrection effect may be partially caused by an individual's recognition of the practical value of reading the words correctly. PMID:24422352

  18. Disconnections between Teacher Expectations and Student Confidence in Bioethics

    ERIC Educational Resources Information Center

    Hanegan, Nikki L.; Price, Laura; Peterson, Jeremy

    2008-01-01

    This study examines how student practice of scientific argumentation using socioscientific bioethics issues affects both teacher expectations of students' general performance and student confidence in their own work. When teachers use bioethical issues in the classroom students can gain not only biology content knowledge but also important…

  19. Family Background, Self-Confidence and Economic Outcomes

    ERIC Educational Resources Information Center

    Filippin, Antonio; Paccagnella, Marco

    2012-01-01

    In this paper we analyze the role played by self-confidence, modeled as beliefs about one's ability, in shaping task choices. We propose a model in which fully rational agents exploit all the available information to update their beliefs using Bayes' rule, eventually learning their true type. We show that when the learning process does not…

  20. 21 CFR 26.37 - Confidence building activities.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 1 2014-04-01 2014-04-01 false Confidence building activities. 26.37 Section 26.37 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL MUTUAL RECOGNITION OF PHARMACEUTICAL GOOD MANUFACTURING PRACTICE REPORTS, MEDICAL DEVICE QUALITY...

  1. Confidence bands for measured economically optimal nitrogen rates

    Technology Transfer Automated Retrieval System (TEKTRAN)

    While numerous researchers have computed economically optimal N rate (EONR) values from measured yield – N rate data, nearly all have neglected to compute or estimate the statistical reliability of these EONR values. In this study, a simple method for computing EONR and its confidence bands is descr...
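    Although the abstract is truncated, the standard EONR calculation it builds on is well established: fit a quadratic yield response Y = b0 + b1*N + b2*N², then maximize profit where marginal yield equals the price ratio r = (N price)/(grain price), giving EONR = (r - b1)/(2*b2). A hedged sketch with purely illustrative coefficients and prices (not values from the study, which additionally derives confidence bands around this estimate):

    ```python
    def eonr(b1, b2, n_price, grain_price):
        """Economically optimal N rate for a quadratic yield response
        Y = b0 + b1*N + b2*N^2 (b2 < 0), maximizing profit where the
        marginal yield equals the N-to-grain price ratio."""
        r = n_price / grain_price
        return (r - b1) / (2.0 * b2)

    # Illustrative response Y = 5000 + 30*N - 0.08*N^2 (kg/ha), prices in $/kg
    print(f"EONR = {eonr(30.0, -0.08, 0.9, 0.15):.1f} kg N/ha")
    ```

    Because EONR is a ratio of fitted regression coefficients, its sampling uncertainty is not symmetric, which is why confidence bands (rather than a simple ± standard error) are the natural way to express its reliability.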

  2. Panel Discussion and the Development of Students' Self Confidence

    ERIC Educational Resources Information Center

    Anwar, Khoirul

    2016-01-01

    This study analyzes the use of panel discussion to develop students' self-confidence in learning the content subject of qualitative research concepts. The study uses a mixed-method design in which a questionnaire and interviews were conducted in a sixth-semester qualitative research class consisting of twenty students, especially…

  3. On Pupils' Self-Confidence in Mathematics: Gender Comparisons

    ERIC Educational Resources Information Center

    Nurmi, Anu; Hannula, Markku; Maijala, Hanna; Pehkonen, Erkki

    2003-01-01

    In this paper we will concentrate on pupils' self-confidence in mathematics, which belongs to pupils' mathematical beliefs in themselves, and beliefs on achievement in mathematics. Research described consists of a survey of more than 3000 fifth-graders and seventh-graders. Furthermore, 40 pupils participated in a qualitative follow-up study…

  4. Hypercorrection of high confidence errors in lexical representations.

    PubMed

    Iwaki, Nobuyoshi; Matsushima, Hiroko; Kodaira, Kazumasa

    2013-08-01

    Memory errors associated with higher confidence are more likely to be corrected than errors made with lower confidence, a phenomenon called the hypercorrection effect. This study investigated whether the hypercorrection effect occurs with phonological information of lexical representations. In Experiment 1, 15 participants performed a Japanese Kanji word-reading task, in which the words had several possible pronunciations. In the initial task, participants were required to read aloud each word and indicate their confidence in their response; this was followed by receipt of visual feedback of the correct response. A hypercorrection effect was observed, indicating generality of this effect beyond previous observations in memories based upon semantic or episodic representations. This effect was replicated in Experiment 2, in which 40 participants performed the same task as in Experiment 1. When the participant's ratings of the practical value of the words were controlled, a partial correlation between confidence and likelihood of later correcting the initial mistaken response was reduced. This suggests that the hypercorrection effect may be partially caused by an individual's recognition of the practical value of reading the words correctly.

  5. Multiple Confidence Estimates as Indices of Eyewitness Memory

    ERIC Educational Resources Information Center

    Sauer, James D.; Brewer, Neil; Weber, Nathan

    2008-01-01

    Eyewitness identification decisions are vulnerable to various influences on witnesses' decision criteria that contribute to false identifications of innocent suspects and failures to choose perpetrators. An alternative procedure using confidence estimates to assess the degree of match between novel and previously viewed faces was investigated.…

  6. State FFA Officers' Confidence and Trustworthiness of Biotechnology Information Sources

    ERIC Educational Resources Information Center

    Wingenbach, Gary J.; Rutherford, Tracy A.

    2007-01-01

    Are state FFA officers' awareness levels of agricultural topics reported in mass media superior to those who do not serve in leadership roles? The purpose of this study was to determine elected state FFA officers' awareness of biotechnology, and their confidence and trust of biotechnology information sources. Descriptive survey methods were used…

  7. Building Confident Teachers: Preservice Physical Education Teachers' Efficacy Beliefs

    ERIC Educational Resources Information Center

    Hand, Karen E.

    2014-01-01

    Understanding teachers' perceptions of their abilities across a variety of teaching strategies can provide insight for understanding teaching effectiveness and program review. Teaching efficacy reflects the degrees of confidence individuals have in their ability to successfully perform specific teaching proficiencies (Bandura, 1986). Additional…

  8. Does Students' Confidence in Their Ability in Mathematics Matter?

    ERIC Educational Resources Information Center

    Parsons, Sarah; Croft, Tony; Harrison, Martin

    2009-01-01

    Research was conducted into first-year engineering students' learning of mathematics in a university college during 2005-2007. The aims were to better understand students' confidence and to explore which factors affected performance and how these were inter-related. Questionnaires were administered which posed questions regarding previous…

  9. Building Academic Confidence in English Language Learners in Elementary School

    ERIC Educational Resources Information Center

    Vazquez, Alejandra

    2014-01-01

    Non-English-speaking students often lack the confidence and preparation to be actively engaged verbally in the classroom. Students may frequently display hesitation in learning to speak English, and may also lack a teacher's guidance in becoming proficient English speakers. The purpose of this research is to examine how teachers build academic…

  10. Test Anxiety Reduction and Confidence Training: A Replication

    ERIC Educational Resources Information Center

    Bowman, Noah; Driscoll, Richard

    2013-01-01

    This study was undertaken to replicate prior research in which a brief counter-conditioning and confidence training program was found to reduce anxiety and raise test scores. First-semester college students were screened with the Westside Test Anxiety Scale, and the 25 identified as having high or moderately-high anxiety were randomly divided…

  11. Expanding Horizons--Into the Future with Confidence!

    ERIC Educational Resources Information Center

    Volk, Valerie

    2006-01-01

    Gifted students often show a deep interest in and profound concern for the complex issues of society. Given the leadership potential of these students and their likely responsibility for solving future social problems, they need to develop this awareness and also a sense of confidence in dealing with future issues. The Future Problem Solving…

  12. Knowledge Surveys in General Chemistry: Confidence, Overconfidence, and Performance

    ERIC Educational Resources Information Center

    Bell, Priscilla; Volckmann, David

    2011-01-01

    Knowledge surveys have been used in a number of fields to assess changes in students' understanding of their own learning and to assist students in review. This study compares metacognitive confidence ratings of students faced with problems on the surveys with their actual knowledge as shown on the final exams in two courses of general chemistry…

  13. Locus of Control and Perceived Confidence in Problem Solving Abilities

    ERIC Educational Resources Information Center

    Johnson, Barry L.; Kilmann, Peter R.

    1975-01-01

    Butterfield found that internal Ss tended to make more constructive responses to frustration-type situations than did external Ss. Therefore, this study predicted that internal Ss would rate themselves as more confident with regard to problem-solving abilities than would external Ss. (Author)

  14. 21 CFR 26.37 - Confidence building activities.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 1 (2010-04-01) Confidence building activities. 26.37 Section 26.37 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL... during the transition period; (4) Joint training exercises; and (5) Observed inspections. (c) During...

  15. Precision Interval Estimation of the Response Surface by Means of an Integrated Algorithm of Neural Network and Linear Regression

    NASA Technical Reports Server (NTRS)

    Lo, Ching F.

    1999-01-01

    The integration of Radial Basis Function Networks and Back Propagation Neural Networks with Multiple Linear Regression has been accomplished to map nonlinear response surfaces over a wide range of independent variables in the process of the Modern Design of Experiments. The integrated method is capable of estimating precision intervals, including confidence and prediction intervals. The power of the innovative method has been demonstrated by applying it to a set of wind tunnel test data in the construction of a response surface and the estimation of precision intervals.
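    The abstract distinguishes confidence intervals (for the mean response) from prediction intervals (for a new observation). As a minimal sketch of that distinction, here is the classical ordinary-least-squares version on invented data, not the paper's neural-network/regression hybrid; all names and data are illustrative:

    ```python
    import numpy as np
    from scipy import stats

    # Illustrative data: hypothetical wind-tunnel-style measurements.
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 10.0, 30)
    y = 2.0 + 0.5 * x + rng.normal(scale=0.4, size=x.size)

    n = x.size
    X = np.column_stack([np.ones(n), x])          # design matrix [1, x]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS fit
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)                  # residual variance

    x0 = 5.0                                      # point where we want intervals
    x0v = np.array([1.0, x0])
    leverage = x0v @ np.linalg.inv(X.T @ X) @ x0v
    t = stats.t.ppf(0.975, df=n - 2)              # 95% two-sided critical value

    yhat = x0v @ beta
    ci_half = t * np.sqrt(s2 * leverage)          # confidence interval: mean response
    pi_half = t * np.sqrt(s2 * (1.0 + leverage))  # prediction interval: new observation

    print(f"fit at x0: {yhat:.3f}")
    print(f"95% CI: [{yhat - ci_half:.3f}, {yhat + ci_half:.3f}]")
    print(f"95% PI: [{yhat - pi_half:.3f}, {yhat + pi_half:.3f}]")
    ```

    The prediction interval is always wider than the confidence interval because it adds the variance of a fresh observation to the uncertainty of the fitted mean.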

  16. Interval colorectal carcinoma: An unsolved debate

    PubMed Central

    Benedict, Mark; Neto, Antonio Galvao; Zhang, Xuchen

    2015-01-01

    Colorectal carcinoma (CRC), as the third most common new cancer diagnosis, poses a significant health risk to the population. Interval CRCs are those that appear after a negative screening test or examination. The development of interval CRCs has been shown to be multifactorial: location of the exam (academic institution versus community hospital), experience of the endoscopist, quality of the procedure, age of the patient, flat versus polypoid neoplasia, genetics, hereditary gastrointestinal neoplasia, and, most significantly, missed or incompletely excised lesions. The rate of interval CRCs has decreased in the last decade, which has been ascribed to an increased understanding of interval disease and technological advances in the screening of high-risk individuals. In this article, we aim to review the literature with regard to the multifactorial nature of interval CRCs and provide the most recent developments regarding this important gastrointestinal entity. PMID:26668498

  17. Confidence bands for distribution functions when parameters are estimated from the data: a non-Monte-Carlo approach.

    PubMed

    Rosenkrantz, Walter A

    2013-05-01

    A method is given for computing simultaneous confidence intervals for order statistics obtained from a distribution depending on one or more parameters that must be estimated from the data. This produces a confidence band for the distribution itself and may be regarded as an extension of Kolmogorov's goodness-of-fit test to the case where the distribution depends on parameters that must be estimated from the data. The method works whenever the joint confidence set for the parameters is convex and the quantile function is linear in the parameters. Two special cases are treated in some detail: the normal and exponential distributions. Graphical representations and comparisons with results obtained by Lilliefors and Stephens via Monte Carlo methods are discussed. An unusual feature of this paper is that we found it necessary to first prove that the joint confidence set for the mean and variance of the normal distribution based on the Wald statistic is convex and compact. Our proof relies on an elementary theorem from differential geometry in the large due to Hopf and is of independent interest. PMID:23180516
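    For context, the classical fixed-parameter Kolmogorov band that this paper extends can be sketched as follows. This is the known-parameter case only, not the paper's estimated-parameter method; the data are simulated and SciPy's `kstwo` distribution supplies the critical value:

    ```python
    import numpy as np
    from scipy import stats

    # Simulated sample from a fully specified N(0,1) distribution.
    rng = np.random.default_rng(1)
    sample = np.sort(rng.normal(size=100))
    n = sample.size

    # 95% critical value for the two-sided KS supremum distance.
    eps = stats.kstwo.ppf(0.95, n)

    # Band: ECDF +/- eps, clipped to [0, 1].
    ecdf = np.arange(1, n + 1) / n
    lower = np.clip(ecdf - eps, 0.0, 1.0)
    upper = np.clip(ecdf + eps, 0.0, 1.0)

    # The true N(0,1) CDF should lie inside the band with ~95% probability.
    true_cdf = stats.norm.cdf(sample)
    print("true CDF inside band:", np.all((lower <= true_cdf) & (true_cdf <= upper)))
    ```

    When the parameters are instead estimated from the same data, this band is no longer valid at the nominal level, which is precisely the problem the paper's non-Monte-Carlo construction addresses.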

  18. Improving maternal confidence in neonatal care through a checklist intervention.

    PubMed

    Radenkovic, Dina; Kotecha, Shrinal; Patel, Shreena; Lakhani, Anjali; Reimann-Dubbers, Katharina; Shah, Shreya; Jafree, Daniyal; Mitrasinovic, Stefan; Whitten, Melissa

    2016-01-01

    Previous qualitative studies suggest a lack of maternal confidence in caring for a newborn child upon discharge into the community. This observation was supported by discussion with healthcare professionals and mothers at University College London Hospital (UCLH), highlighting specific areas of concern, in particular identifying and managing common neonatal presentations. The aim of this study was to design and introduce a checklist, addressing these concerns, to increase maternal confidence in care of their newborn child. Based on market research, an 8-question checklist was designed, assessing maternal confidence in: feeding, jaundice, nappy care, rashes and dry skin, umbilical cord care, choking, bowel movements, and vomiting. Mothers were assessed as per the checklist and received a score representative of their confidence in neonatal care. Mothers were followed up with a telephone call and were reassessed after a 7-day period. Checklist scores before and after the follow-up period were compared. This process was repeated for three study cycles, with information posters placed on the ward prior to the second study cycle, and the checklist stapled to the mother's personal child health record (PCHR) prior to the third study cycle. A total of 99 mothers on the Maternity Care Unit at UCLH were enrolled in the study, and 92 were contactable after a 7-day period. During all study cycles, a significant increase in median checklist score was observed after, as compared to before, the 7-day follow-up period (p < 0.001). The median difference in checklist score from baseline was greatest for the third cycle. These results suggest that the introduction of a simple checklist can successfully improve mothers' confidence in being able to care for their newborn child. Further investigation is indicated, but this intervention has the potential for routine application in postnatal care. PMID:27335642

  19. Relating the Content and Confidence of Recognition Judgments

    PubMed Central

    Selmeczy, Diana; Dobbins, Ian G.

    2014-01-01

    The Remember/Know procedure, developed by Tulving (1985) to capture the distinction between the conscious correlates of episodic and semantic retrieval, has spurred considerable research and debate. However, only a handful of reports have examined the recognition content beyond this dichotomous simplification. To address this, we collected participants’ written justifications in support of ordinary old/new recognition decisions accompanied by confidence ratings using a 3-point scale (high/medium/low). Unlike prior research, we did not provide the participants with any descriptions of Remembering or Knowing and thus, if the justifications mapped well onto theory, they would do so spontaneously. Word frequency analysis (unigrams, bigrams, and trigrams), independent ratings, and machine learning techniques (support vector machine, SVM) converged in demonstrating that the linguistic content of high- and medium-confidence recognition differs in a manner consistent with dual process theories of recognition. For example, the use of ‘I remember’, particularly when combined with temporal or perceptual information (e.g., ‘when’, ‘saw’, ‘distinctly’), was heavily associated with high confidence recognition. Conversely, participants also used the absence of remembering for personally distinctive materials as support for high confidence new reports (‘would have remembered’). Thus, participants afford a special status to the presence or absence of remembering and use this actively as a basis for high confidence during recognition judgments. Additionally, the pattern of classification successes and failures of a SVM was well anticipated by the Dual Process Signal Detection model of recognition and inconsistent with a single process, strictly unidimensional approach. “One might think that memory should have something to do with remembering, and remembering is a conscious experience.”(Tulving, 1985, p. 1) PMID:23957366

  20. Confidence bounds for the estimation of the volume phase fraction from a single image in a nickel base superalloy.

    PubMed

    Blanc, Rémi; Baylou, Pierre; Germain, Christian; Da Costa, Jean-Pierre

    2010-06-01

    We propose an image-based framework to evaluate the uncertainty in the estimation of the volume fraction of specific microstructures based on the observation of a single section. These microstructures consist of cubes organized on a cubic mesh, such as monocrystalline nickel base superalloys. The framework is twofold: a model-based stereological analysis allows relating two-dimensional image observations to three-dimensional microstructure features, and a spatial statistical analysis allows computing approximate confidence bounds while assessing the representativeness of the image. The reliability of the method is assessed on synthetic models. Volume fraction estimation variances and approximate confidence intervals are computed on real superalloy images in the context of material characterization. PMID:20350338
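    To illustrate the simplest form of such a bound, here is a Wilson score interval for a phase fraction estimated from pixel counts. This assumes independent pixels, an assumption the paper explicitly improves upon with spatial statistics; the counts and the function name are invented for illustration:

    ```python
    import math

    def wilson_interval(k, n, z=1.96):
        """95% Wilson score interval for a proportion k/n (independent samples)."""
        p = k / n
        denom = 1.0 + z * z / n
        center = (p + z * z / (2 * n)) / denom
        half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
        return center - half, center + half

    # Hypothetical count: 42,000 of 100,000 sampled pixels fall in the cuboidal phase.
    lo, hi = wilson_interval(42_000, 100_000)
    print(f"area fraction 0.420, 95% CI: [{lo:.4f}, {hi:.4f}]")
    ```

    For spatially correlated microstructures, neighbouring pixels carry far less information than independent draws, so an interval like this one is much too narrow; accounting for that correlation is exactly why the paper's spatial statistical analysis is needed.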