Science.gov

Sample records for 95-percent confidence interval

  1. Explorations in Statistics: Confidence Intervals

    ERIC Educational Resources Information Center

    Curran-Everett, Douglas

    2009-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This third installment of "Explorations in Statistics" investigates confidence intervals. A confidence interval is a range that we expect, with some level of confidence, to include the true value of a population parameter…
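
    As a concrete illustration of the interval described above, here is a minimal sketch (not taken from the article) of the textbook t-based 95% confidence interval for a population mean; the data and parameter values are hypothetical.

    ```python
    # Minimal sketch: textbook 95% confidence interval for a mean (hypothetical data).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    sample = rng.normal(loc=10.0, scale=2.0, size=25)      # hypothetical sample

    mean = sample.mean()
    sem = sample.std(ddof=1) / np.sqrt(sample.size)        # standard error of the mean
    t_crit = stats.t.ppf(0.975, df=sample.size - 1)        # two-sided 95% critical value

    print(f"95% CI for the mean: ({mean - t_crit * sem:.2f}, {mean + t_crit * sem:.2f})")
    ```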

  2. Effect Sizes, Confidence Intervals, and Confidence Intervals for Effect Sizes

    ERIC Educational Resources Information Center

    Thompson, Bruce

    2007-01-01

    The present article provides a primer on (a) effect sizes, (b) confidence intervals, and (c) confidence intervals for effect sizes. Additionally, various admonitions for reformed statistical practice are presented. For example, a very important implication of the realization that there are dozens of effect size statistics is that "authors must…

  3. Teaching Confidence Intervals Using Simulation

    ERIC Educational Resources Information Center

    Hagtvedt, Reidar; Jones, Gregory Todd; Jones, Kari

    2008-01-01

    Confidence intervals are difficult to teach, in part because most students appear to believe they understand how to interpret them intuitively. They rarely do. To help them abandon their misconception and achieve understanding, we have developed a simulation tool that encourages experimentation with multiple confidence intervals derived from the…

  4. A Review of Confidence Intervals.

    ERIC Educational Resources Information Center

    Mauk, Anne-Marie Kimbell

    This paper summarizes information leading to the recommendation that statistical significance testing be replaced, or at least accompanied by, the reporting of effect sizes and confidence intervals. It discusses the use of confidence intervals, noting that the recent report of the American Psychological Association Task Force on Statistical…

  3. Confidence Intervals in QTL Mapping by Bootstrapping

    PubMed Central

    Visscher, P. M.; Thompson, R.; Haley, C. S.

    1996-01-01

    The determination of empirical confidence intervals for the location of quantitative trait loci (QTLs) was investigated using simulation. Empirical confidence intervals were calculated using a bootstrap resampling method for a backcross population derived from inbred lines. Sample sizes were either 200 or 500 individuals, and the QTL explained 1, 5, or 10% of the phenotypic variance. The method worked well in that the proportion of empirical confidence intervals that contained the simulated QTL was close to expectation. In general, the confidence intervals were slightly conservatively biased. Correlations between the test statistic and the width of the confidence interval were strongly negative, so that the stronger the evidence for a QTL segregating, the smaller the empirical confidence interval for its location. The size of the average confidence interval depended heavily on the population size and the effect of the QTL. Marker spacing had only a small effect on the average empirical confidence interval. The LOD drop-off method to calculate empirical support intervals gave confidence intervals that generally were too small, in particular if confidence intervals were calculated only for samples above a certain significance threshold. The bootstrap method is easy to implement and is useful in the analysis of experimental data. PMID:8725246

  6. 40 CFR Appendix A to Subpart Kk of... - Data Quality Objective and Lower Confidence Limit Approaches for Alternative Capture Efficiency...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... average measured CE value to the endpoints of the 95-percent (two-sided) confidence interval for the... measured CE value to the endpoints of the 95-percent (two-sided) confidence interval, expressed as...

  7. 40 CFR Appendix A to Subpart Kk of... - Data Quality Objective and Lower Confidence Limit Approaches for Alternative Capture Efficiency...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... average measured CE value to the endpoints of the 95-percent (two-sided) confidence interval for the... measured CE value to the endpoints of the 95-percent (two-sided) confidence interval, expressed as...

  8. 40 CFR Appendix A to Subpart Kk of... - Data Quality Objective and Lower Confidence Limit Approaches for Alternative Capture Efficiency...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... average measured CE value to the endpoints of the 95-percent (two-sided) confidence interval for the... measured CE value to the endpoints of the 95-percent (two-sided) confidence interval, expressed as...

  9. Constructing Confidence Intervals for QTL Location

    PubMed Central

    Mangin, B.; Goffinet, B.; Rebai, A.

    1994-01-01

    We describe a method for constructing the confidence interval of the QTL location parameter. This method is developed in the local asymptotic framework, leading to a linear model at each position of the putative QTL. The idea is to construct a likelihood ratio test, using statistics whose asymptotic distribution does not depend on the nuisance parameters and in particular on the effect of the QTL. We show theoretical properties of the confidence interval built with this test, and compare it with the classical confidence interval using simulations. We show, in particular, that our confidence interval has the correct probability of containing the true map location of the QTL for almost all QTLs, whereas the classical confidence interval can be very biased for QTLs having small effect. PMID:7896108

  10. Coefficient Omega Bootstrap Confidence Intervals: Nonnormal Distributions

    ERIC Educational Resources Information Center

    Padilla, Miguel A.; Divers, Jasmin

    2013-01-01

    The performance of the normal theory bootstrap (NTB), the percentile bootstrap (PB), and the bias-corrected and accelerated (BCa) bootstrap confidence intervals (CIs) for coefficient omega was assessed through a Monte Carlo simulation under conditions not previously investigated. Of particular interests were nonnormal Likert-type and binary items.…

  11. Coefficient Alpha Bootstrap Confidence Interval under Nonnormality

    ERIC Educational Resources Information Center

    Padilla, Miguel A.; Divers, Jasmin; Newton, Matthew

    2012-01-01

    Three different bootstrap methods for estimating confidence intervals (CIs) for coefficient alpha were investigated. In addition, the bootstrap methods were compared with the most promising coefficient alpha CI estimation methods reported in the literature. The CI methods were assessed through a Monte Carlo simulation utilizing conditions…

  12. Efficient computation of parameter confidence intervals

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.

    1987-01-01

    An important step in system identification of aircraft is the estimation of stability and control derivatives from flight data along with an assessment of parameter accuracy. When the maximum likelihood estimation technique is used, parameter accuracy is commonly assessed by the Cramer-Rao lower bound. It is known, however, that in some cases the lower bound can be substantially different from the parameter variance. Under these circumstances the Cramer-Rao bounds may be misleading as an accuracy measure. This paper discusses the confidence interval estimation problem based on likelihood ratios, which offers a more general estimate of the error bounds. Four approaches are considered for computing confidence intervals of maximum likelihood parameter estimates. Each approach is applied to real flight data and compared.
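
    As a rough illustration of the likelihood-ratio idea discussed above (a generic sketch, not the flight-data implementation), the following snippet inverts the likelihood-ratio test to obtain a 95% confidence interval for the rate of an exponential sample; the data are simulated and hypothetical.

    ```python
    # Sketch: likelihood-ratio (profile likelihood) CI for an exponential rate.
    import numpy as np
    from scipy import stats, optimize

    rng = np.random.default_rng(4)
    x = rng.exponential(scale=2.0, size=50)       # hypothetical data, true rate = 0.5

    def loglik(rate):
        return x.size * np.log(rate) - rate * x.sum()

    mle = x.size / x.sum()                        # maximum likelihood estimate of the rate
    cut = loglik(mle) - stats.chi2.ppf(0.95, df=1) / 2   # LR cutoff for 95% confidence

    lo = optimize.brentq(lambda r: loglik(r) - cut, 1e-6, mle)
    hi = optimize.brentq(lambda r: loglik(r) - cut, mle, 10 * mle)
    print(f"95% likelihood-ratio CI for the rate: ({lo:.3f}, {hi:.3f})")
    ```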

  13. Generalized Confidence Intervals and Fiducial Intervals for Some Epidemiological Measures.

    PubMed

    Bebu, Ionut; Luta, George; Mathew, Thomas; Agan, Brian K

    2016-01-01

    For binary outcome data from epidemiological studies, this article investigates the interval estimation of several measures of interest in the absence or presence of categorical covariates. When covariates are present, the logistic regression model as well as the log-binomial model are investigated. The measures considered include the common odds ratio (OR) from several studies, the number needed to treat (NNT), and the prevalence ratio. For each parameter, confidence intervals are constructed using the concepts of generalized pivotal quantities and fiducial quantities. Numerical results show that the confidence intervals so obtained exhibit satisfactory performance in terms of maintaining the coverage probabilities even when the sample sizes are not large. An appealing feature of the proposed solutions is that they are not based on maximization of the likelihood, and hence are free from convergence issues associated with the numerical calculation of the maximum likelihood estimators, especially in the context of the log-binomial model. The results are illustrated with a number of examples. The overall conclusion is that the proposed methodologies based on generalized pivotal quantities and fiducial quantities provide an accurate and unified approach for the interval estimation of the various epidemiological measures in the context of binary outcome data with or without covariates. PMID:27322305

  14. Generalized Confidence Intervals and Fiducial Intervals for Some Epidemiological Measures

    PubMed Central

    Bebu, Ionut; Luta, George; Mathew, Thomas; Agan, Brian K.

    2016-01-01

    For binary outcome data from epidemiological studies, this article investigates the interval estimation of several measures of interest in the absence or presence of categorical covariates. When covariates are present, the logistic regression model as well as the log-binomial model are investigated. The measures considered include the common odds ratio (OR) from several studies, the number needed to treat (NNT), and the prevalence ratio. For each parameter, confidence intervals are constructed using the concepts of generalized pivotal quantities and fiducial quantities. Numerical results show that the confidence intervals so obtained exhibit satisfactory performance in terms of maintaining the coverage probabilities even when the sample sizes are not large. An appealing feature of the proposed solutions is that they are not based on maximization of the likelihood, and hence are free from convergence issues associated with the numerical calculation of the maximum likelihood estimators, especially in the context of the log-binomial model. The results are illustrated with a number of examples. The overall conclusion is that the proposed methodologies based on generalized pivotal quantities and fiducial quantities provide an accurate and unified approach for the interval estimation of the various epidemiological measures in the context of binary outcome data with or without covariates. PMID:27322305

  15. Confidence Intervals Make a Difference: Effects of Showing Confidence Intervals on Inferential Reasoning

    ERIC Educational Resources Information Center

    Hoekstra, Rink; Johnson, Addie; Kiers, Henk A. L.

    2012-01-01

    The use of confidence intervals (CIs) as an addition or as an alternative to null hypothesis significance testing (NHST) has been promoted as a means to make researchers more aware of the uncertainty that is inherent in statistical inference. Little is known, however, about whether presenting results via CIs affects how readers judge the…

  16. CONFIDENCE INTERVALS AND STANDARD ERROR INTERVALS: WHAT DO THEY MEAN IN TERMS OF STATISTICAL SIGNIFICANCE?

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We investigate the use of confidence intervals and standard error intervals to draw conclusions regarding tests of hypotheses about normal population means. Mathematical expressions and algebraic manipulations are given, and computer simulations are performed to assess the usefulness of confidence ...

  17. IET. Aerial view of project, 95 percent complete. Camera facing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    IET. Aerial view of project, 95 percent complete. Camera facing east. Left to right: stack, duct, mobile test cell building (TAN-624), four-rail track, dolly. Retaining wall between mobile test building and shielded control building (TAN-620) just beyond. North of control building are tank building (TAN-627) and fuel-transfer pump building (TAN-625). Guard house at upper right along exclusion fence. Construction vehicles and temporary warehouse in view near guard house. Date: June 6, 1955. INEEL negative no. 55-1462 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  18. Reporting Confidence Intervals and Effect Sizes: Collecting the Evidence

    ERIC Educational Resources Information Center

    Zientek, Linda Reichwein; Ozel, Z. Ebrar Yetkiner; Ozel, Serkan; Allen, Jeff

    2012-01-01

    Confidence intervals (CIs) and effect sizes are essential to encourage meta-analytic thinking and to accumulate research findings. CIs provide a range of plausible values for population parameters with a degree of confidence that the parameter is in that particular interval. CIs also give information about how precise the estimates are. Comparison…

  19. Sample Size for the "Z" Test and Its Confidence Interval

    ERIC Educational Resources Information Center

    Liu, Xiaofeng Steven

    2012-01-01

    The statistical power of a significance test is closely related to the length of the confidence interval (i.e. estimate precision). In the case of a "Z" test, the length of the confidence interval can be expressed as a function of the statistical power. (Contains 1 figure and 1 table.)
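
    The power-length relationship can be made concrete with a short sketch (assumed design values, not the article's): for a two-sided Z test, the sample size implied by a power calculation fixes the confidence interval half-width at delta * z_(1-alpha/2) / (z_(1-alpha/2) + z_(1-beta)).

    ```python
    # Sketch: CI half-width implied by a Z-test power calculation (assumed values).
    import numpy as np
    from scipy import stats

    alpha, power = 0.05, 0.80              # hypothetical design values
    sigma, delta = 1.0, 0.5                # hypothetical SD and effect size to detect

    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)

    n = ((z_a + z_b) * sigma / delta) ** 2           # sample size from the power calculation
    half_width = z_a * sigma / np.sqrt(n)            # resulting CI half-width

    print(f"n = {np.ceil(n):.0f}, CI half-width = {half_width:.3f}")
    print(f"check: delta * z_a / (z_a + z_b) = {delta * z_a / (z_a + z_b):.3f}")
    ```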

  20. Bootstrapping Confidence Intervals for Robust Measures of Association.

    ERIC Educational Resources Information Center

    King, Jason E.

    A Monte Carlo simulation study was conducted to determine the bootstrap correction formula yielding the most accurate confidence intervals for robust measures of association. Confidence intervals were generated via the percentile, adjusted, BC, and BC(a) bootstrap procedures and applied to the Winsorized, percentage bend, and Pearson correlation…

  1. Confidence Intervals for Effect Sizes: Applying Bootstrap Resampling

    ERIC Educational Resources Information Center

    Banjanovic, Erin S.; Osborne, Jason W.

    2016-01-01

    Confidence intervals for effect sizes (CIES) provide readers with an estimate of the strength of a reported statistic as well as the relative precision of the point estimate. These statistics offer more information and context than null hypothesis statistic testing. Although confidence intervals have been recommended by scholars for many years,…

  2. A Note on Confidence Interval Estimation and Margin of Error

    ERIC Educational Resources Information Center

    Gilliland, Dennis; Melfi, Vince

    2010-01-01

    Confidence interval estimation is a fundamental technique in statistical inference. Margin of error is used to delimit the error in estimation. Dispelling misinterpretations that teachers and students give to these terms is important. In this note, we give examples of the confusion that can arise in regard to confidence interval estimation and…

  3. Alternative Confidence Interval Methods Used in the Diagnostic Accuracy Studies

    PubMed Central

    Gülhan, Orekıcı Temel

    2016-01-01

    Background/Aim. It is necessary to decide whether the newly improved methods are better than the standard or reference test or not. To decide whether the new diagnostics test is better than the gold standard test/imperfect standard test, the differences of estimated sensitivity/specificity are calculated with the help of information obtained from samples. However, to generalize this value to the population, it should be given with the confidence intervals. The aim of this study is to evaluate the confidence interval methods developed for the differences between the two dependent sensitivity/specificity values on a clinical application. Materials and Methods. In this study, confidence interval methods like Asymptotic Intervals, Conditional Intervals, Unconditional Interval, Score Intervals, and Nonparametric Methods Based on Relative Effects Intervals are used. Besides, as clinical application, data used in diagnostics study by Dickel et al. (2010) has been taken as a sample. Results. The results belonging to the alternative confidence interval methods for Nickel Sulfate, Potassium Dichromate, and Lanolin Alcohol are given as a table. Conclusion. While preferring the confidence interval methods, the researchers have to consider whether the case to be compared is single ratio or dependent binary ratio differences, the correlation coefficient between the rates in two dependent ratios and the sample sizes. PMID:27478491

  4. Improved central confidence intervals for the ratio of Poisson means

    NASA Astrophysics Data System (ADS)

    Cousins, R. D.

    The problem of confidence intervals for the ratio of two unknown Poisson means was "solved" decades ago, but a closer examination reveals that the standard solution is far from optimal from the frequentist point of view. We construct a more powerful set of central confidence intervals, each of which is a (typically proper) subinterval of the corresponding standard interval. They also provide upper and lower confidence limits which are more restrictive than the standard limits. The construction follows Neyman's original prescription, though discreteness of the Poisson distribution and the presence of a nuisance parameter (one of the unknown means) lead to slightly conservative intervals. Philosophically, the issue of the appropriateness of the construction method is similar to the issue of conditioning on the margins in 2×2 contingency tables. From a frequentist point of view, the new set maintains (over) coverage of the unknown true value of the ratio of means at each stated confidence level, even though the new intervals are shorter than the old intervals by any measure (except for two cases where they are identical). As an example, when the number 2 is drawn from each Poisson population, the 90% CL central confidence interval on the ratio of means is (0.169, 5.196), rather than (0.108, 9.245). In the cited literature, such confidence intervals have applications in numerous branches of pure and applied science, including agriculture, wildlife studies, manufacturing, medicine, reliability theory, and elementary particle physics.
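
    For reference, the sketch below reproduces the standard central interval quoted above (the Clopper-Pearson interval obtained by conditioning on the total count), not Cousins's improved construction; with counts of 2 and 2 it returns approximately (0.108, 9.245).

    ```python
    # Sketch: standard (conditional-binomial, Clopper-Pearson) central interval
    # for the ratio of two Poisson means. Not the improved construction above.
    from scipy import stats

    def poisson_ratio_ci(n1, n2, cl=0.90):
        """Central CL interval for lambda1/lambda2, conditioning on n1 + n2."""
        n = n1 + n2
        a = (1 - cl) / 2
        p_lo = 0.0 if n1 == 0 else stats.beta.ppf(a, n1, n - n1 + 1)
        p_hi = 1.0 if n1 == n else stats.beta.ppf(1 - a, n1 + 1, n - n1)
        lo = p_lo / (1 - p_lo)
        hi = float("inf") if p_hi == 1.0 else p_hi / (1 - p_hi)
        return lo, hi

    print(poisson_ratio_ci(2, 2))   # approximately (0.108, 9.245)
    ```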

  5. Estimation of confidence intervals for federal waterfowl harvest surveys

    USGS Publications Warehouse

    Geissler, P.H.

    1990-01-01

    I developed methods of estimating confidence intervals for the federal waterfowl harvest surveys conducted by the U.S. Fish and Wildlife Service (USFWS). I estimated flyway harvest confidence intervals for mallards (Anas platyrhynchos) (95% CI are ±8% of the estimate), Canada geese (Branta canadensis) (±11%), black ducks (Anas rubripes) (±16%), canvasbacks (Aythya valisineria) (±32%), snow geese (Chen caerulescens) (±43%), and brant (Branta bernicla) (±46%). Differences between annual estimates of 10, 13, 22, 42, 43, and 58% could be detected for mallards, Canada geese, black ducks, canvasbacks, snow geese, and brant, respectively. Estimated confidence intervals for state harvests tended to be much larger than those for the flyway estimates.

  6. Inference by Eye: Pictures of Confidence Intervals and Thinking about Levels of Confidence

    ERIC Educational Resources Information Center

    Cumming, Geoff

    2007-01-01

    A picture of a 95% confidence interval (CI) implicitly contains pictures of CIs of all other levels of confidence, and information about the "p"-value for testing a null hypothesis. This article discusses pictures, taken from interactive software, that suggest several ways to think about the level of confidence of a CI, "p"-values, and what…

  7. Confidence Intervals for Error Rates Observed in Coded Communications Systems

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2015-05-01

    We present methods to compute confidence intervals for the codeword error rate (CWER) and bit error rate (BER) of a coded communications link. We review several methods to compute exact and approximate confidence intervals for the CWER, and specifically consider the situation in which the true CWER is so low that only a handful, if any, codeword errors are able to be simulated. In doing so, we answer the question of how long an error-free simulation must be run in order to certify that a given CWER requirement is met with a given level of confidence, and discuss the bias introduced by aborting a simulation after observing the first codeword error. Next, we turn to the lesser studied problem of determining confidence intervals for the BER of coded systems. Since bit errors in systems that use coding or higher-order modulation do not occur independently, blind application of a method that assumes independence leads to inappropriately narrow confidence intervals. We present a new method to compute the confidence interval properly, using the first and second sample moments of the number of bit errors per codeword. This is the first method we know of to compute a confidence interval for the BER of a coded or higher-order modulation system.
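
    A small hedged sketch of the error-free-simulation question raised above, assuming independent codeword errors (the article's own methods are not reproduced here): the exact one-sided bound gives the familiar "rule of three", roughly 3/CWER codewords for 95% confidence.

    ```python
    # Sketch: how long an error-free simulation must run to certify a CWER
    # requirement, assuming independent codeword errors.
    import numpy as np

    def trials_needed(cwer_req, conf=0.95):
        """Error-free codewords needed to certify CWER <= cwer_req with confidence conf."""
        return int(np.ceil(np.log(1 - conf) / np.log(1 - cwer_req)))

    def cwer_upper_bound(n_trials, conf=0.95):
        """Upper confidence bound on CWER after n_trials error-free codewords."""
        return 1 - (1 - conf) ** (1 / n_trials)

    print(trials_needed(1e-5))       # about 3e5 codewords ("rule of three": ~3/CWER)
    print(cwer_upper_bound(3e5))     # about 1e-5
    ```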

  8. Analysis of regression confidence intervals and Bayesian credible intervals for uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Lu, Dan; Ye, Ming; Hill, Mary C.

    2012-09-01

    Confidence intervals based on classical regression theories augmented to include prior information and credible intervals based on Bayesian theories are conceptually different ways to quantify parametric and predictive uncertainties. Because both confidence and credible intervals are used in environmental modeling, we seek to understand their differences and similarities. This is of interest in part because calculating confidence intervals typically requires tens to thousands of model runs, while Bayesian credible intervals typically require tens of thousands to millions of model runs. Given multi-Gaussian distributed observation errors, our theoretical analysis shows that, for linear or linearized-nonlinear models, confidence and credible intervals are always numerically identical when consistent prior information is used. For nonlinear models, nonlinear confidence and credible intervals can be numerically identical if parameter confidence regions defined using the approximate likelihood method and parameter credible regions estimated using Markov chain Monte Carlo realizations are numerically identical and predictions are a smooth, monotonic function of the parameters. Both occur if intrinsic model nonlinearity is small. While the conditions of Gaussian errors and small intrinsic model nonlinearity are violated by many environmental models, heuristic tests using analytical and numerical models suggest that linear and nonlinear confidence intervals can be useful approximations of uncertainty even under significantly nonideal conditions. In the context of epistemic model error for a complex synthetic nonlinear groundwater problem, the linear and nonlinear confidence and credible intervals for individual models performed similarly enough to indicate that the computationally frugal confidence intervals can be useful in many circumstances. Experiences with these groundwater models are expected to be broadly applicable to many environmental models. We suggest that for

  9. Fast and Accurate Construction of Confidence Intervals for Heritability.

    PubMed

    Schweiger, Regev; Kaufman, Shachar; Laaksonen, Reijo; Kleber, Marcus E; März, Winfried; Eskin, Eleazar; Rosset, Saharon; Halperin, Eran

    2016-06-01

    Estimation of heritability is fundamental in genetic studies. Recently, heritability estimation using linear mixed models (LMMs) has gained popularity because these estimates can be obtained from unrelated individuals collected in genome-wide association studies. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. Existing methods for the construction of confidence intervals and estimators of SEs for REML rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals. Here, we show that the estimation of confidence intervals by state-of-the-art methods is inaccurate, especially when the true heritability is relatively low or relatively high. We further show that these inaccuracies occur in datasets including thousands of individuals. Such biases are present, for example, in estimates of heritability of gene expression in the Genotype-Tissue Expression project and of lipid profiles in the Ludwigshafen Risk and Cardiovascular Health study. We also show that often the probability that the genetic component is estimated as 0 is high even when the true heritability is bounded away from 0, emphasizing the need for accurate confidence intervals. We propose a computationally efficient method, ALBI (accurate LMM-based heritability bootstrap confidence intervals), for estimating the distribution of the heritability estimator and for constructing accurate confidence intervals. Our method can be used as an add-on to existing methods for estimating heritability and variance components, such as GCTA, FaST-LMM, GEMMA, or EMMAX. PMID:27259052

  10. Confidence intervals for concentration and brightness from fluorescence fluctuation measurements.

    PubMed

    Pryse, Kenneth M; Rong, Xi; Whisler, Jordan A; McConnaughey, William B; Jiang, Yan-Fei; Melnykov, Artem V; Elson, Elliot L; Genin, Guy M

    2012-09-01

    The theory of photon count histogram (PCH) analysis describes the distribution of fluorescence fluctuation amplitudes due to populations of fluorophores diffusing through a focused laser beam and provides a rigorous framework through which the brightnesses and concentrations of the fluorophores can be determined. In practice, however, the brightnesses and concentrations of only a few components can be identified. Brightnesses and concentrations are determined by a nonlinear least-squares fit of a theoretical model to the experimental PCH derived from a record of fluorescence intensity fluctuations. The χ² hypersurface in the neighborhood of the optimum parameter set can have varying degrees of curvature, due to the intrinsic curvature of the model, the specific parameter values of the system under study, and the relative noise in the data. Because of this varying curvature, parameters estimated from the least-squares analysis have varying degrees of uncertainty associated with them. There are several methods for assigning confidence intervals to the parameters, but these methods have different efficacies for PCH data. Here, we evaluate several approaches to confidence interval estimation for PCH data, including asymptotic standard error, likelihood joint-confidence region, likelihood confidence intervals, skew-corrected and accelerated bootstrap (BCa), and Monte Carlo residual resampling methods. We study these with a model two-dimensional membrane system for simplicity, but the principles are applicable as well to fluorophores diffusing in three-dimensional solution. Using simulated fluorescence fluctuation data, we find the BCa method to be particularly well-suited for estimating confidence intervals in PCH analysis, and several other methods to be less so. Using the BCa method and additional simulated fluctuation data, we find that confidence intervals can be reduced dramatically for a specific non-Gaussian beam profile. PMID:23009839

  11. Researchers Misunderstand Confidence Intervals and Standard Error Bars

    ERIC Educational Resources Information Center

    Belia, Sarah; Fidler, Fiona; Williams, Jennifer; Cumming, Geoff

    2005-01-01

    Little is known about researchers' understanding of confidence intervals (CIs) and standard error (SE) bars. Authors of journal articles in psychology, behavioral neuroscience, and medicine were invited to visit a Web site where they adjusted a figure until they judged 2 means, with error bars, to be just statistically significantly different (p…

  12. Confidence Interval Coverage for Cohen's Effect Size Statistic

    ERIC Educational Resources Information Center

    Algina, James; Keselman, H. J.; Penfield, Randall D.

    2006-01-01

    Kelley compared three methods for setting a confidence interval (CI) around Cohen's standardized mean difference statistic: the noncentral-"t"-based, percentile (PERC) bootstrap, and bias-corrected and accelerated (BCA) bootstrap methods under three conditions of nonnormality, eight cases of sample size, and six cases of population effect size…

  13. Constructing Approximate Confidence Intervals for Parameters with Structural Equation Models

    ERIC Educational Resources Information Center

    Cheung, Mike W. -L.

    2009-01-01

    Confidence intervals (CIs) for parameters are usually constructed based on the estimated standard errors. These are known as Wald CIs. This article argues that likelihood-based CIs (CIs based on likelihood ratio statistics) are often preferred to Wald CIs. It shows how the likelihood-based CIs and the Wald CIs for many statistics and psychometric…

  14. Likelihood-Based Confidence Intervals in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Oort, Frans J.

    2011-01-01

    In exploratory or unrestricted factor analysis, all factor loadings are free to be estimated. In oblique solutions, the correlations between common factors are free to be estimated as well. The purpose of this article is to show how likelihood-based confidence intervals can be obtained for rotated factor loadings and factor correlations, by…

  15. Confidence Intervals and Replication: Where Will the Next Mean Fall?

    ERIC Educational Resources Information Center

    Cumming, Geoff; Maillardet, Robert

    2006-01-01

    Confidence intervals (CIs) give information about replication, but many researchers have misconceptions about this information. One problem is that the percentage of future replication means captured by a particular CI varies markedly, depending on where in relation to the population mean that CI falls. The authors investigated the distribution of…

  16. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Wagler, Amy E.

    2014-01-01

    Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

  17. Finite sampling corrected 3D noise with confidence intervals.

    PubMed

    Haefner, David P; Burks, Stephen D

    2015-05-20

    When evaluated with a spatially uniform irradiance, an imaging sensor exhibits both spatial and temporal variations, which can be described as a three-dimensional (3D) random process considered as noise. In the 1990s, NVESD engineers developed an approximation to the 3D power spectral density for noise in imaging systems known as 3D noise. The goal was to decompose the 3D noise process into spatial and temporal components to identify potential sources of origin. To characterize a sensor in terms of its 3D noise values, a finite number of samples in each of the three dimensions (two spatial, one temporal) is collected. In this correspondence, we developed the full sampling corrected 3D noise measurement and the corresponding confidence bounds. The accuracy of these methods was demonstrated through Monte Carlo simulations. Both the sampling correction as well as the confidence intervals can be applied a posteriori to the classic 3D noise calculation. The Matlab functions associated with this work can be found on the Mathworks file exchange ["Finite sampling corrected 3D noise with confidence intervals," https://www.mathworks.com/matlabcentral/fileexchange/49657-finite-sampling-corrected-3d-noise-with-confidence-intervals.]. PMID:26192530

  18. Confidence intervals in Flow Forecasting by using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Panagoulia, Dionysia; Tsekouras, George

    2014-05-01

    One of the major inadequacies in implementation of Artificial Neural Networks (ANNs) for flow forecasting is the development of confidence intervals, because the relevant estimation cannot be implemented directly, in contrast to the classical forecasting methods. The variation in the ANN output is a measure of uncertainty in the model predictions based on the training data set. Different methods for uncertainty analysis, such as bootstrap, Bayesian, Monte Carlo, have already been proposed for hydrologic and geophysical models, while methods for confidence intervals, such as error output, re-sampling, multi-linear regression adapted to ANN have been used for power load forecasting [1-2]. The aim of this paper is to present the re-sampling method for ANN prediction models and to develop this for flow forecasting of the next day. The re-sampling method is based on the ascending sorting of the errors between real and predicted values for all input vectors. The cumulative sample distribution function of the prediction errors is calculated and the confidence intervals are estimated by keeping the intermediate value, rejecting the extreme values according to the desired confidence levels, and holding the intervals symmetrical in probability. For application of the confidence intervals issue, input vectors are used from the Mesochora catchment in western-central Greece. The ANN's training algorithm is the stochastic training back-propagation process with decreasing functions of learning rate and momentum term, for which an optimization process is conducted regarding the crucial parameters values, such as the number of neurons, the kind of activation functions, the initial values and time parameters of learning rate and momentum term etc. Input variables are historical data of previous days, such as flows, nonlinearly weather related temperatures and nonlinearly weather related rainfalls based on correlation analysis between the flow under prediction and each implicit input
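
    A minimal sketch of the re-sampling idea described above, under the assumption that the sorted training errors are representative of future errors; function names and data are hypothetical.

    ```python
    # Sketch: empirical prediction interval from sorted training-set errors.
    import numpy as np

    def empirical_interval(errors, forecast, level=0.95):
        """Symmetric-in-probability interval built from empirical error quantiles."""
        lo_q = (1 - level) / 2
        lo, hi = np.quantile(np.sort(errors), [lo_q, 1 - lo_q])
        return forecast + lo, forecast + hi

    rng = np.random.default_rng(1)
    errors = rng.normal(0, 5.0, size=500)       # stand-in for ANN residuals (m3/s)
    print(empirical_interval(errors, forecast=120.0))
    ```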

  19. An Empirical Method for Establishing Positional Confidence Intervals Tailored for Composite Interval Mapping of QTL

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Improved genetic resolution and availability of sequenced genomes have made positional cloning of moderate-effect QTL (quantitative trait loci) realistic in several systems, emphasizing the need for precise and accurate derivation of positional confidence intervals (CIs). Support interval (SI) meth...

  20. Flood frequency analysis: Confidence interval estimation by test inversion bootstrapping

    NASA Astrophysics Data System (ADS)

    Schendel, Thomas; Thongwichian, Rossukon

    2015-09-01

    A common approach to estimate extreme flood events is the annual block maxima approach, where for each year the peak streamflow is determined and a distribution (usually the generalized extreme value distribution (GEV)) is fitted to this series of maxima. Eventually this distribution is used to estimate the return level for a defined return period. However, due to the finite sample size, the estimated return levels are associated with a range of uncertainty, usually expressed via confidence intervals. Previous publications have shown that existing bootstrapping methods for estimating the confidence intervals of the GEV yield too narrow estimates of these uncertainty ranges. Therefore, we present in this article a novel approach based on the less known test inversion bootstrapping, which we adapted especially for complex quantities like the return level. The reliability of this approach is studied and its performance is compared to other bootstrapping methods as well as the Profile Likelihood technique. It is shown that the new approach significantly improves the coverage of confidence intervals compared to other bootstrapping methods and for small sample sizes should even be favoured over the Profile Likelihood.

  1. Confidence intervals for expected moments algorithm flood quantile estimates

    USGS Publications Warehouse

    Cohn, T.A.; Lane, W.L.; Stedinger, J.R.

    2001-01-01

    Historical and paleoflood information can substantially improve flood frequency estimates if appropriate statistical procedures are properly applied. However, the Federal guidelines for flood frequency analysis, set forth in Bulletin 17B, rely on an inefficient "weighting" procedure that fails to take advantage of historical and paleoflood information. This has led researchers to propose several more efficient alternatives including the Expected Moments Algorithm (EMA), which is attractive because it retains Bulletin 17B's statistical structure (method of moments with the Log Pearson Type 3 distribution) and thus can be easily integrated into flood analyses employing the rest of the Bulletin 17B approach. The practical utility of EMA, however, has been limited because no closed-form method has been available for quantifying the uncertainty of EMA-based flood quantile estimates. This paper addresses that concern by providing analytical expressions for the asymptotic variance of EMA flood-quantile estimators and confidence intervals for flood quantile estimates. Monte Carlo simulations demonstrate the properties of such confidence intervals for sites where a 25- to 100-year streamgage record is augmented by 50 to 150 years of historical information. The experiments show that the confidence intervals, though not exact, should be acceptable for most purposes.

  2. On Some Confidence Intervals for Estimating the Mean of a Skewed Population

    ERIC Educational Resources Information Center

    Shi, W.; Kibria, B. M. Golam

    2007-01-01

    A number of methods are available in the literature to measure confidence intervals. Here, confidence intervals for estimating the population mean of a skewed distribution are considered. This note proposes two alternative confidence intervals, namely, Median t and Mad t, which are simple adjustments to the Student's t confidence interval. In…

  3. On Efficient Confidence Intervals for the Log-Normal Mean

    NASA Astrophysics Data System (ADS)

    Chami, Peter; Antoine, Robin; Sahai, Ashok

    Data obtained in biomedical research is often skewed. Examples include the incubation period of diseases like HIV/AIDS and the survival times of cancer patients. Such data, especially when they are positive and skewed, are often modeled by the log-normal distribution. If this model holds, then the log transformation produces a normal distribution. We consider the problem of constructing confidence intervals for the mean of the log-normal distribution. Several methods for doing this are known, including at least one estimator that performed better than Cox's method for small sample sizes. We also construct a modified version of Cox's method. Using simulation, we show that, when the sample size exceeds 30, it leads to confidence intervals that have good overall properties and are better than Cox's method. More precisely, the actual coverage probability of our method is closer to the nominal coverage probability than is the case with Cox's method. In addition, the new method is computationally much simpler than other well-known methods.
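
    For orientation, here is a sketch of the classical Cox interval that the article takes as its baseline (the authors' modified version is not reproduced here); the data are simulated and hypothetical.

    ```python
    # Sketch: classical Cox confidence interval for a log-normal mean.
    import numpy as np
    from scipy import stats

    def cox_ci_lognormal_mean(x, level=0.95):
        """Classical Cox CI for E[X] when log(X) is normally distributed."""
        y = np.log(x)
        n = y.size
        ybar, s2 = y.mean(), y.var(ddof=1)
        z = stats.norm.ppf(0.5 + level / 2)
        half = z * np.sqrt(s2 / n + s2 ** 2 / (2 * (n - 1)))
        centre = ybar + s2 / 2                 # log of the log-normal mean
        return np.exp(centre - half), np.exp(centre + half)

    rng = np.random.default_rng(2)
    x = rng.lognormal(mean=1.0, sigma=0.8, size=40)    # hypothetical skewed data
    print(cox_ci_lognormal_mean(x))                    # true mean = exp(1 + 0.32) ≈ 3.74
    ```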

  4. Covariate-adjusted confidence interval for the intraclass correlation coefficient.

    PubMed

    Shoukri, Mohamed M; Donner, Allan; El-Dali, Abdelmoneim

    2013-09-01

    A crucial step in designing a new study is to estimate the required sample size. For a design involving cluster sampling, the appropriate sample size depends on the so-called design effect, which is a function of the average cluster size and the intracluster correlation coefficient (ICC). It is well-known that under the framework of hierarchical and generalized linear models, a reduction in residual error may be achieved by including risk factors as covariates. In this paper we show that the covariate design, indicating whether the covariates are measured at the cluster level or at the within-cluster subject level affects the estimation of the ICC, and hence the design effect. Therefore, the distinction between these two types of covariates should be made at the design stage. In this paper we use the nested-bootstrap method to assess the accuracy of the estimated ICC for continuous and binary response variables under different covariate structures. The codes of two SAS macros are made available by the authors for interested readers to facilitate the construction of confidence intervals for the ICC. Moreover, using Monte Carlo simulations we evaluate the relative efficiency of the estimators and evaluate the accuracy of the coverage probabilities of a 95% confidence interval on the population ICC. The methodology is illustrated using a published data set of blood pressure measurements taken on family members. PMID:23871746

  5. Comparing Simultaneous and Pointwise Confidence Intervals for Hydrological Processes

    PubMed Central

    2016-01-01

    Distribution function estimation of the random variable of river flow is an important problem in hydrology. This issue is directly related to quantile estimation, and consequently to return level prediction. The estimation process can be complemented with the construction of confidence intervals (CIs) to perform a probabilistic assessment of the different variables and/or estimated functions. In this work, several methods for constructing CIs using bootstrap techniques, and parametric and nonparametric procedures in the estimation process are studied and compared. In the case that the target is the joint estimation of a vector of values, some new corrections to obtain joint coverage probabilities closer to the corresponding nominal values are also presented. A comprehensive simulation study compares the different approaches, and the application of the different procedures to real data sets from four rivers in the United States and one in Spain completes the paper. PMID:26828651

  6. Concept of a (1 - α) performance confidence interval

    SciTech Connect

    Leong, H.H.; Johnson, G.R.; Bechtel, T.N.

    1980-01-01

    A multi-input, single-output system is assumed to be represented by some model. The distribution functions of the input and the output variables are considered to be at least obtainable through experimental data. Associated with the computer response of the model corresponding to given inputs, a conditional pseudoresponse set is generated. This response can be constructed by means of the model by using the simulated pseudorandom input variates from a neighborhood defined by a preassigned probability allowance. A pair of such pseudoresponse values can then be computed by a procedure corresponding to a (1 - α) probability for the conditional pseudoresponse set. The range defined by such a pair is called a (1 - α) performance confidence interval with respect to the model. The application of this concept can allow comparison of the merit of two models describing the same system, or it can detect a system change when the current response is out of the performance interval with respect to the previously identified model. 6 figures.

  7. Constructing Confidence Intervals for Reliability Coefficients Using Central and Noncentral Distributions.

    ERIC Educational Resources Information Center

    Weber, Deborah A.

    Greater understanding and use of confidence intervals is central to changes in statistical practice (G. Cumming and S. Finch, 2001). Reliability coefficients and confidence intervals for reliability coefficients can be computed using a variety of methods. Estimating confidence intervals includes both central and noncentral distribution approaches.…

  8. Introduction to Sample Size Choice for Confidence Intervals Based on "t" Statistics

    ERIC Educational Resources Information Center

    Liu, Xiaofeng Steven; Loudermilk, Brandon; Simpson, Thomas

    2014-01-01

    Sample size can be chosen to achieve a specified width in a confidence interval. The probability of obtaining a narrow width given that the confidence interval includes the population parameter is defined as the power of the confidence interval, a concept unfamiliar to many practitioners. This article shows how to utilize the Statistical Analysis…

  9. Behavior Detection using Confidence Intervals of Hidden Markov Models

    SciTech Connect

    Griffin, Christopher H

    2009-01-01

    Markov models are commonly used to analyze real-world problems. Their combination of discrete states and stochastic transitions is suited to applications with deterministic and stochastic components. Hidden Markov Models (HMMs) are a class of Markov model commonly used in pattern recognition. Currently, HMMs recognize patterns using a maximum likelihood approach. One major drawback with this approach is that data observations are mapped to HMMs without considering the number of data samples available. Another problem is that this approach is only useful for choosing between HMMs. It does not provide a criterion for determining whether or not a given HMM adequately matches the data stream. In this work, we recognize complex behaviors using HMMs and confidence intervals. The certainty of a data match increases with the number of data samples considered. Receiver Operating Characteristic curves are used to find the optimal threshold for either accepting or rejecting an HMM description. We present one example using a family of HMMs to show the utility of the proposed approach. A second example using models extracted from a database of consumer purchases provides additional evidence that this approach can perform better than existing techniques.

  10. Bootstrap confidence intervals in multi-level simultaneous component analysis.

    PubMed

    Timmerman, Marieke E; Kiers, Henk A L; Smilde, Age K; Ceulemans, Eva; Stouten, Jeroen

    2009-05-01

    Multi-level simultaneous component analysis (MLSCA) was designed for the exploratory analysis of hierarchically ordered data. MLSCA specifies a component model for each level in the data, where appropriate constraints express possible similarities between groups of objects at a certain level, yielding four MLSCA variants. The present paper discusses different bootstrap strategies for estimating confidence intervals (CIs) on the individual parameters. In selecting a proper strategy, the main issues to address are the resampling scheme and the non-uniqueness of the parameters. The resampling scheme depends on which level(s) in the hierarchy are considered random, and which fixed. The degree of non-uniqueness depends on the MLSCA variant, and, in two variants, the extent to which the user exploits the transformational freedom. A comparative simulation study examines the quality of bootstrap CIs of different MLSCA parameters. Generally, the quality of bootstrap CIs appears to be good, provided the sample sizes are sufficient at each level that is considered to be random. The latter implies that if more than a single level is considered random, the total number of observations necessary to obtain reliable inferential information increases dramatically. An empirical example illustrates the use of bootstrap CIs in MLSCA. PMID:18086338

  11. Exact and Best Confidence Intervals for the Ability Parameter of the Rasch Model.

    ERIC Educational Resources Information Center

    Klauer, Karl Christoph

    1991-01-01

    Smallest exact confidence intervals for the ability parameter of the Rasch model are derived and compared to the traditional asymptotically valid intervals based on Fisher information. Tables of exact confidence intervals, termed Clopper-Pearson intervals, can be drawn up with a computer program developed by K. Klauer. (SLD)
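
    As a generic illustration of the Clopper-Pearson construction named above (not the Rasch-specific computation), the sketch below gives the exact central interval for a binomial proportion.

    ```python
    # Sketch: generic Clopper-Pearson exact interval for a binomial proportion.
    from scipy import stats

    def clopper_pearson(k, n, level=0.95):
        """Exact central interval for a proportion with k successes in n trials."""
        a = (1 - level) / 2
        lo = 0.0 if k == 0 else stats.beta.ppf(a, k, n - k + 1)
        hi = 1.0 if k == n else stats.beta.ppf(1 - a, k + 1, n - k)
        return lo, hi

    print(clopper_pearson(17, 20))   # e.g. 17 correct responses out of 20 items
    ```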

  12. An Introduction to Confidence Intervals for Both Statistical Estimates and Effect Sizes.

    ERIC Educational Resources Information Center

    Capraro, Mary Margaret

    This paper summarizes methods of estimating confidence intervals, including classical intervals and intervals for effect sizes. The recent American Psychological Association (APA) Task Force on Statistical Inference report suggested that confidence intervals should always be reported, and the fifth edition of the APA "Publication Manual" (2001)…

  13. Using Confidence Intervals and Recurrence Intervals to Determine Precipitation Delivery Mechanisms Responsible for Mass Wasting Events.

    NASA Astrophysics Data System (ADS)

    Ulizio, T. P.; Bilbrey, C.; Stoyanoff, N.; Dixon, J. L.

    2015-12-01

    Mass wasting events are geologic hazards that impact human life and property across a variety of landscapes. These movements can be triggered by tectonic activity, anomalous precipitation events, or both; acting to decrease the factor of safety ratio on a hillslope to the point of failure. There exists an active hazard landscape in the West Boulder River drainage of Park Co., MT in which the mechanisms of slope failure are unknown. It is known that the region has not seen significant tectonic activity within the last decade, leaving anomalous precipitation events as the likely trigger for slope failures in the landscape. Precipitation can be delivered to a landscape via rainfall or snow; it was the aim of this study to determine the precipitation delivery mechanism most likely responsible for movements in the West Boulder drainage following the Jungle Wildfire of 2006. Data was compiled from four SNOTEL sites in the surrounding area, spanning 33 years, focusing on, but not limited to, maximum snow water equivalent (SWE) values in a water year, median SWE values on the date which maximum SWE was recorded in a water year, the total precipitation accumulated in a water year, etc. Means were computed and 99% confidence intervals were constructed around these means. Recurrence intervals and exceedance probabilities were computed for maximum SWE values and total precipitation accumulated in a water year to determine water years with anomalous precipitation. It was determined that the water year 2010-2011 received an anomalously high amount of SWE, and snow melt in the spring of this water year likely triggered recent mass wasting movements. This finding is further supported by Google Earth imagery, showing movements between 2009 and 2011. The return interval for the maximum SWE value in 2010-11 at the Placer Basin SNOTEL site was 34 years, while return intervals for the Box Canyon and Monument Peak SNOTEL sites were 17.5 and 17 years respectively. Max SWE values lie outside the

  14. Bootstrap Confidence Intervals for Ordinary Least Squares Factor Loadings and Correlations in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.; Luo, Shanhong

    2010-01-01

    This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis. Coverage performances of "SE"-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile intervals, and…

  15. Simultaneous confidence intervals for a steady-state leaky aquifer groundwater flow model

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    1996-01-01

    Using the optimization method of Vecchia & Cooley (1987), nonlinear Scheffé-type confidence intervals were calculated for the parameters and the simulated heads of a steady-state groundwater flow model covering 450 km² of a leaky aquifer. The nonlinear confidence intervals are compared to corresponding linear intervals. As suggested by the significant nonlinearity of the regression model, linear confidence intervals are often not accurate. The commonly made assumption that widths of linear confidence intervals always underestimate the actual (nonlinear) widths was not correct for the head intervals. Results show that nonlinear effects can cause the nonlinear intervals to be offset from, and either larger or smaller than, the linear approximations. Prior information on some transmissivities helps reduce and stabilize the confidence intervals, with the most notable effects occurring for the parameters on which there is prior information and for head values in parameter zones for which there is prior information on the parameters.

  16. Confidence Intervals about Score Reliability Coefficients, Please: An "EPM" Guidelines Editorial.

    ERIC Educational Resources Information Center

    Fan, Xitao; Thompson, Bruce

    2001-01-01

    Illustrates a number of ways in which confidence intervals for reliability coefficients can be estimated. Suggests that authors who submit articles to "Educational and Psychological Measurement" report confidence intervals for reliability estimates whenever they report score reliabilities and that they note the interval estimation methods used.…

  17. Fixed-Width Confidence Intervals in Linear Regression with Applications to the Johnson-Neyman Technique.

    ERIC Educational Resources Information Center

    Aitkin, Murray A.

    Fixed-width confidence intervals for a population regression line over a finite interval of x have recently been derived by Gafarian. The method is extended to provide fixed-width confidence intervals for the difference between two population regression lines, resulting in a simple procedure analogous to the Johnson-Neyman technique. (Author)

  18. Improved confidence intervals when the sample is counted an integer times longer than the blank.

    PubMed

    Potter, William Edward; Strzelczyk, Jadwiga Jodi

    2011-05-01

    Past computer solutions for confidence intervals in paired counting are extended to the case where the ratio of the sample count time to the blank count time is taken to be an integer, IRR. Previously, confidence intervals have been named Neyman-Pearson confidence intervals; more correctly they should have been named Neyman confidence intervals or simply confidence intervals. The technique utilized mimics a technique used by Pearson and Hartley to tabulate confidence intervals for the expected value of the discrete Poisson and Binomial distributions. The blank count and the contribution of the sample to the gross count are assumed to be Poisson distributed. The expected value of the blank count, in the sample count time, is assumed known. The net count, OC, is taken to be the gross count minus the product of IRR with the blank count. The probability density function (PDF) for the net count can be determined in a straightforward manner. PMID:21451310
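
    A minimal sketch of the setup described above, assuming independent Poisson gross and blank counts with a known blank mean; parameter values are hypothetical, and this is not the authors' tabulation code.

    ```python
    # Sketch: PMF of the net count OC = G - IRR * B with independent Poisson G and B.
    import numpy as np
    from scipy import stats

    def net_count_pmf(oc, mu_signal, mu_blank, irr, bmax=200):
        """P(G - irr*B = oc); G ~ Poisson(mu_signal + irr*mu_blank), B ~ Poisson(mu_blank)."""
        b = np.arange(bmax)
        g = oc + irr * b                   # gross count implied by each blank count
        ok = g >= 0
        return np.sum(stats.poisson.pmf(g[ok], mu_signal + irr * mu_blank)
                      * stats.poisson.pmf(b[ok], mu_blank))

    print(net_count_pmf(3, mu_signal=5.0, mu_blank=2.0, irr=2))
    ```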

  19. A Comparison of Methods for Estimating Confidence Intervals for Omega-Squared Effect Size

    ERIC Educational Resources Information Center

    Finch, W. Holmes; French, Brian F.

    2012-01-01

    Effect size use has been increasing in the past decade in many research areas. Confidence intervals associated with effect sizes are encouraged to be reported. Prior work has investigated the performance of confidence interval estimation with Cohen's d. This study extends this line of work to the analysis of variance case with more than two…

  20. "Confidence Intervals for Gamma-family Measures of Ordinal Association": Correction

    ERIC Educational Resources Information Center

    Psychological Methods, 2008

    2008-01-01

    Reports an error in "Confidence intervals for gamma-family measures of ordinal association" by Carol M. Woods (Psychological Methods, 2007[Jun], Vol 12[2], 185-204). The note corrects simulation results presented in the article concerning the performance of confidence intervals (CIs) for Spearman's r-sub(s). An error in the author's C++ code…

  1. Using Asymptotic Results to Obtain a Confidence Interval for the Population Median

    ERIC Educational Resources Information Center

    Jamshidian, M.; Khatoonabadi, M.

    2007-01-01

    Almost all introductory and intermediate level statistics textbooks include the topic of confidence interval for the population mean. Almost all these texts introduce the median as a robust measure of central tendency. Only a few of these books, however, cover inference on the population median and in particular confidence interval for the median.…

  2. Evaluating Independent Proportions for Statistical Difference, Equivalence, Indeterminacy, and Trivial Difference Using Inferential Confidence Intervals

    ERIC Educational Resources Information Center

    Tryon, Warren W.; Lewis, Charles

    2009-01-01

    Tryon presented a graphic inferential confidence interval (ICI) approach to analyzing two independent and dependent means for statistical difference, equivalence, replication, indeterminacy, and trivial difference. Tryon and Lewis corrected the reduction factor used to adjust descriptive confidence intervals (DCIs) to create ICIs and introduced…

  3. Confidence Intervals for the Mean: To Bootstrap or Not to Bootstrap

    ERIC Educational Resources Information Center

    Calzada, Maria E.; Gardner, Holly

    2011-01-01

    The results of a simulation conducted by a research team involving undergraduate and high school students indicate that when data is symmetric, the Student's "t" confidence interval for a mean is superior to the studied non-parametric bootstrap confidence intervals. When data is skewed and for sample sizes n greater than or equal to 10, the results…
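
    A minimal sketch of the comparison under study, in Python: a Student's t interval next to a percentile bootstrap interval for the mean of a small, roughly symmetric sample. The sample values, sample size, and number of resamples are made up for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=10.0, scale=2.0, size=15)       # small, roughly symmetric sample

# Student's t confidence interval for the mean
t_lo, t_hi = stats.t.interval(0.95, df=len(x) - 1, loc=x.mean(), scale=stats.sem(x))

# Nonparametric percentile bootstrap interval
boot_means = np.array([rng.choice(x, size=len(x), replace=True).mean()
                       for _ in range(10_000)])
b_lo, b_hi = np.percentile(boot_means, [2.5, 97.5])

print(f"t interval:         ({t_lo:.2f}, {t_hi:.2f})")
print(f"bootstrap interval: ({b_lo:.2f}, {b_hi:.2f})")
```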

  4. Using Screencast Videos to Enhance Undergraduate Students' Statistical Reasoning about Confidence Intervals

    ERIC Educational Resources Information Center

    Strazzeri, Kenneth Charles

    2013-01-01

    The purposes of this study were to investigate (a) undergraduate students' reasoning about the concepts of confidence intervals (b) undergraduate students' interactions with "well-designed" screencast videos on sampling distributions and confidence intervals, and (c) how screencast videos improve undergraduate students'…

  5. What Confidence Intervals "Really" Do and Why They Are So Important for Middle Grades Educational Research

    ERIC Educational Resources Information Center

    Skidmore, Susan Troncoso

    2009-01-01

    Recommendations made by major educational and psychological organizations (American Educational Research Association, 2006; American Psychological Association, 2001) call for researchers to regularly report confidence intervals. The purpose of the present paper is to provide support for the use of confidence intervals. To contextualize this…

  6. Publication Bias in Meta-Analysis: Confidence Intervals for Rosenthal's Fail-Safe Number

    PubMed Central

    Fragkos, Konstantinos C.; Tsagris, Michail; Frangos, Christos C.

    2014-01-01

    The purpose of the present paper is to assess the efficacy of confidence intervals for Rosenthal's fail-safe number. Although Rosenthal's estimator is highly used by researchers, its statistical properties are largely unexplored. First of all, we developed statistical theory which allowed us to produce confidence intervals for Rosenthal's fail-safe number. This was produced by discerning whether the number of studies analysed in a meta-analysis is fixed or random. Each case produces different variance estimators. For a given number of studies and a given distribution, we provided five variance estimators. Confidence intervals are examined with a normal approximation and a nonparametric bootstrap. The accuracy of the different confidence interval estimates was then tested by methods of simulation under different distributional assumptions. The half normal distribution variance estimator has the best probability coverage. Finally, we provide a table of lower confidence intervals for Rosenthal's estimator. PMID:27437470
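
    For orientation, Rosenthal's fail-safe number itself is straightforward to compute from study z-scores. The interval below is a plain nonparametric bootstrap over studies, shown only as a rough stand-in; it does not reproduce the variance estimators or the fixed/random distinction developed in the paper, and the z-scores are invented.

```python
import numpy as np
from scipy.stats import norm

def fail_safe_n(z, alpha=0.05):
    """Rosenthal's fail-safe number from study z-scores (one-tailed alpha)."""
    k = len(z)
    z_alpha = norm.ppf(1 - alpha)
    return np.sum(z) ** 2 / z_alpha ** 2 - k

rng = np.random.default_rng(0)
z_scores = rng.normal(0.4, 1.0, size=20)           # hypothetical study z-scores

# Crude nonparametric bootstrap over studies, purely for illustration
boot = np.array([fail_safe_n(rng.choice(z_scores, size=len(z_scores), replace=True))
                 for _ in range(5000)])
print(round(fail_safe_n(z_scores), 1), np.percentile(boot, [2.5, 97.5]).round(1))
```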

  7. Publication Bias in Meta-Analysis: Confidence Intervals for Rosenthal's Fail-Safe Number.

    PubMed

    Fragkos, Konstantinos C; Tsagris, Michail; Frangos, Christos C

    2014-01-01

    The purpose of the present paper is to assess the efficacy of confidence intervals for Rosenthal's fail-safe number. Although Rosenthal's estimator is highly used by researchers, its statistical properties are largely unexplored. First of all, we developed statistical theory which allowed us to produce confidence intervals for Rosenthal's fail-safe number. This was produced by discerning whether the number of studies analysed in a meta-analysis is fixed or random. Each case produces different variance estimators. For a given number of studies and a given distribution, we provided five variance estimators. Confidence intervals are examined with a normal approximation and a nonparametric bootstrap. The accuracy of the different confidence interval estimates was then tested by methods of simulation under different distributional assumptions. The half normal distribution variance estimator has the best probability coverage. Finally, we provide a table of lower confidence intervals for Rosenthal's estimator. PMID:27437470

  8. Multiplicative scale uncertainties in the unified approach for constructing confidence intervals

    SciTech Connect

    Smith, Elton

    2009-01-01

    We have investigated how uncertainties in the estimation of the detection efficiency affect the 90% confidence intervals in the unified approach for constructing confidence intervals. The study has been conducted for experiments where the number of detected events is large and can be described by a Gaussian probability density function. We also assume the detection efficiency has a Gaussian probability density and study the range of the relative uncertainties σ_ε between 0 and 30%. We find that the confidence intervals provide proper coverage and increase smoothly and continuously from the intervals that ignore scale uncertainties, with a quadratic dependence on σ_ε.

  9. Evaluation of confidence intervals for a steady-state leaky aquifer model

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    1999-01-01

    The fact that dependent variables of groundwater models are generally nonlinear functions of model parameters is shown to be a potentially significant factor in calculating accurate confidence intervals for both model parameters and functions of the parameters, such as the values of dependent variables calculated by the model. The Lagrangian method of Vecchia and Cooley [Vecchia, A.V. and Cooley, R.L., Water Resources Research, 1987, 23(7), 1237-1250] was used to calculate nonlinear Scheffé-type confidence intervals for the parameters and the simulated heads of a steady-state groundwater flow model covering 450 km² of a leaky aquifer. The nonlinear confidence intervals are compared to corresponding linear intervals. As suggested by the significant nonlinearity of the regression model, linear confidence intervals are often not accurate. The commonly made assumption that widths of linear confidence intervals always underestimate the actual (nonlinear) widths was not correct. Results show that nonlinear effects can cause the nonlinear intervals to be asymmetric and either larger or smaller than the linear approximations. Prior information on transmissivities helps reduce the size of the confidence intervals, with the most notable effects occurring for the parameters on which there is prior information and for head values in parameter zones for which there is prior information on the parameters.

  10. Neutron multiplicity counting: Confidence intervals for reconstruction parameters

    DOE PAGESBeta

    Verbeke, Jerome M.

    2016-03-09

    From nuclear materials accountability to homeland security, the need for improved nuclear material detection, assay, and authentication has grown over the past decades. Starting in the 1940s, neutron multiplicity counting techniques have enabled quantitative evaluation of masses and multiplications of fissile materials. In this paper, we propose a new method to compute uncertainties on these parameters using a model-based sequential Bayesian processor, resulting in credible regions in the fissile material mass and multiplication space. These uncertainties will enable us to evaluate quantitatively proposed improvements to the theoretical fission chain model. Additionally, because the processor can calculate uncertainties in real time, it is a useful tool in applications such as portal monitoring: monitoring can stop as soon as a preset confidence of non-threat is reached.

  11. Population forecasts and confidence intervals for Sweden: a comparison of model-based and empirical approaches.

    PubMed

    Cohen, J E

    1986-02-01

    This paper compares several methods of generating confidence intervals for forecasts of population size. Two rest on a demographic model for age-structured populations with stochastic fluctuations in vital rates. Two rest on empirical analyses of past forecasts of population sizes of Sweden at five-year intervals from 1780 to 1980 inclusive. Confidence intervals produced by the different methods vary substantially. The relative sizes differ in the various historical periods. The narrowest intervals offer a lower bound on uncertainty about the future. Procedures for estimating a range of confidence intervals are tentatively recommended. A major lesson is that finitely many observations of the past and incomplete theoretical understanding of the present and future can justify at best a range of confidence intervals for population projections. Uncertainty attaches not only to the point forecasts of future population, but also to the estimates of those forecasts' uncertainty. PMID:3484356

  12. Estimation and confidence intervals for empirical mixing distributions

    USGS Publications Warehouse

    Link, W.A.; Sauer, J.R.

    1995-01-01

    Questions regarding collections of parameter estimates can frequently be expressed in terms of an empirical mixing distribution (EMD). This report discusses empirical Bayes estimation of an EMD, with emphasis on the construction of interval estimates. Estimation of the EMD is accomplished by substitution of estimates of prior parameters in the posterior mean of the EMD. This procedure is examined in a parametric model (the normal-normal mixture) and in a semi-parametric model. In both cases, the empirical Bayes bootstrap of Laird and Louis (1987, Journal of the American Statistical Association 82, 739-757) is used to assess the variability of the estimated EMD arising from the estimation of prior parameters. The proposed methods are applied to a meta-analysis of population trend estimates for groups of birds.

  13. Confidence-interval construction for rate ratio in matched-pair studies with incomplete data.

    PubMed

    Li, Hui-Qiong; Chan, Ivan S F; Tang, Man-Lai; Tian, Guo-Liang; Tang, Nian-Sheng

    2014-01-01

    Matched-pair design is often used in clinical trials to increase the efficiency of establishing equivalence between two treatments with binary outcomes. In this article, we consider such a design based on rate ratio in the presence of incomplete data. The rate ratio is one of the most frequently used indices in comparing efficiency of two treatments in clinical trials. In this article, we propose 10 confidence-interval estimators for the rate ratio in incomplete matched-pair designs. A hybrid method that recovers variance estimates required for the rate ratio from the confidence limits for single proportions is proposed. It is noteworthy that confidence intervals based on this hybrid method have closed-form solution. The performance of the proposed confidence intervals is evaluated with respect to their exact coverage probability, expected confidence interval width, and distal and mesial noncoverage probability. The results show that the hybrid Agresti-Coull confidence interval based on Fieller's theorem performs satisfactorily for small to moderate sample sizes. Two real examples from clinical trials are used to illustrate the proposed confidence intervals. PMID:24697611

  14. Bayesian methods of confidence interval construction for the population attributable risk from cross-sectional studies.

    PubMed

    Pirikahu, Sarah; Jones, Geoffrey; Hazelton, Martin L; Heuer, Cord

    2016-08-15

    Population attributable risk measures the public health impact of the removal of a risk factor. To apply this concept to epidemiological data, the calculation of a confidence interval to quantify the uncertainty in the estimate is desirable. However, perhaps because of the confusion surrounding the attributable risk measures, there is no standard confidence interval or variance formula given in the literature. In this paper, we implement a fully Bayesian approach to confidence interval construction of the population attributable risk for cross-sectional studies. We show that, in comparison with a number of standard Frequentist methods for constructing confidence intervals (i.e. delta, jackknife and bootstrap methods), the Bayesian approach is superior in terms of percent coverage in all except a few cases. This paper also explores the effect of the chosen prior on the coverage and provides alternatives for particular situations. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26799685

  15. Simple table for estimating confidence interval of discrepancy frequencies in microbiological safety evaluation.

    PubMed

    Lamy, Brigitte; Delignette-Muller, Marie Laure; Baty, Florent; Carret, Gerard

    2004-01-01

    We provide a simple tool to determine confidence intervals (CIs) for discrepancy frequencies in microbiology validation studies, such as the technical accuracy of a qualitative test result. This tool makes it possible to determine an exact confidence interval (binomial CI) from an observed frequency when the normal approximation is inadequate, that is, in the case of rare events. This tool has daily applications in microbiology, and we present an example of its application to the evaluation of antimicrobial susceptibility systems. PMID:14706759
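
    A minimal sketch of the exact (Clopper-Pearson) binomial interval used when the normal approximation fails for rare events; the discrepancy count and sample size below are hypothetical.

```python
from scipy.stats import beta

def clopper_pearson(x, n, conf=0.95):
    """Exact (Clopper-Pearson) binomial confidence interval for x events in n trials."""
    alpha = 1 - conf
    lo = beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
    return lo, hi

# e.g. 2 discrepant results observed among 300 paired susceptibility tests (made up)
print(clopper_pearson(2, 300))
```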

  16. Evaluation of Jackknife and Bootstrap for Defining Confidence Intervals for Pairwise Agreement Measures

    PubMed Central

    Severiano, Ana; Carriço, João A.; Robinson, D. Ashley; Ramirez, Mário; Pinto, Francisco R.

    2011-01-01

    Several research fields frequently deal with the analysis of diverse classification results of the same entities. This should imply an objective detection of overlaps and divergences between the formed clusters. The congruence between classifications can be quantified by clustering agreement measures, including pairwise agreement measures. Several measures have been proposed and the importance of obtaining confidence intervals for the point estimate in the comparison of these measures has been highlighted. A broad range of methods can be used for the estimation of confidence intervals. However, evidence is lacking about what are the appropriate methods for the calculation of confidence intervals for most clustering agreement measures. Here we evaluate the resampling techniques of bootstrap and jackknife for the calculation of the confidence intervals for clustering agreement measures. Contrary to what has been shown for some statistics, simulations showed that the jackknife performs better than the bootstrap at accurately estimating confidence intervals for pairwise agreement measures, especially when the agreement between partitions is low. The coverage of the jackknife confidence interval is robust to changes in cluster number and cluster size distribution. PMID:21611165
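
    A small sketch of a delete-one jackknife interval for a pairwise agreement measure, here a plain Rand index on two hypothetical partitions of the same items. This illustrates only the generic jackknife recipe, not the authors' simulation design or their set of agreement measures.

```python
import numpy as np
from scipy.stats import norm

def rand_index(a, b):
    """Plain Rand index between two label vectors for the same items."""
    a, b = np.asarray(a), np.asarray(b)
    same_a = a[:, None] == a[None, :]
    same_b = b[:, None] == b[None, :]
    iu = np.triu_indices(len(a), k=1)              # each unordered pair once
    return np.mean(same_a[iu] == same_b[iu])

def jackknife_ci(a, b, stat=rand_index, conf=0.95):
    """Delete-one jackknife interval based on pseudovalues."""
    a, b = np.asarray(a), np.asarray(b)
    n = len(a)
    theta = stat(a, b)
    leave_one = np.array([stat(np.delete(a, i), np.delete(b, i)) for i in range(n)])
    pseudo = n * theta - (n - 1) * leave_one
    se = pseudo.std(ddof=1) / np.sqrt(n)
    z = norm.ppf(0.5 + conf / 2)
    return pseudo.mean() - z * se, pseudo.mean() + z * se

part1 = [0, 0, 1, 1, 1, 2, 2, 0, 1, 2]             # hypothetical clusterings
part2 = [0, 0, 1, 1, 2, 2, 2, 0, 1, 1]
print(rand_index(part1, part2), jackknife_ci(part1, part2))
```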

  17. Confidence Intervals for True Scores Using the Skew-Normal Distribution

    ERIC Educational Resources Information Center

    Garcia-Perez, Miguel A.

    2010-01-01

    A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…

  18. Quantifying uncertainty in modelled estimates of annual maximum precipitation: confidence intervals

    NASA Astrophysics Data System (ADS)

    Panagoulia, Dionysia; Economou, Polychronis; Caroni, Chrys

    2016-04-01

    The possible nonstationarity of the GEV distribution fitted to annual maximum precipitation under climate change is a topic of active investigation. Of particular significance is how best to construct confidence intervals for items of interest arising from stationary/nonstationary GEV models. We are usually not only interested in parameter estimates but also in quantiles of the GEV distribution, and it might be expected that estimates of extreme upper quantiles are far from being normally distributed even for moderate sample sizes. Therefore, we consider constructing confidence intervals for all quantities of interest by bootstrap methods based on resampling techniques. To this end, we examined three bootstrapping approaches to constructing confidence intervals for parameters and quantiles: random-t resampling, fixed-t resampling and the parametric bootstrap. Each approach was used in combination with the normal approximation method, percentile method, basic bootstrap method and bias-corrected method for constructing confidence intervals. We found that all the confidence intervals for the stationary model parameters have similar coverage and mean length. Confidence intervals for the more extreme quantiles tend to become very wide for all bootstrap methods. For nonstationary GEV models with linear time dependence of location or log-linear time dependence of scale, confidence interval coverage probabilities are reasonably accurate for the parameters. For the extreme percentiles, the bias-corrected and accelerated method is best overall, and the fixed-t method also has good average coverage probabilities. Reference: Panagoulia D., Economou P. and Caroni C., Stationary and non-stationary GEV modeling of extreme precipitation over a mountainous area under climate change, Environmetrics, 25 (1), 29-43, 2014.
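
    A compact sketch of the parametric-bootstrap route for a quantile of a stationary GEV model, the simplest of the combinations examined above. The data, sample size, return period, and number of bootstrap replicates are invented, and scipy's shape convention (c = −ξ) is assumed.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
# Hypothetical annual maximum precipitation series (mm), 50 years
data = genextreme.rvs(c=-0.1, loc=60, scale=15, size=50, random_state=rng)

c_hat, loc_hat, scale_hat = genextreme.fit(data)
T = 100                                            # return period in years
q_hat = genextreme.ppf(1 - 1 / T, c_hat, loc_hat, scale_hat)

# Parametric bootstrap: simulate from the fitted model, refit, recompute the quantile
boot_q = []
for _ in range(500):
    sim = genextreme.rvs(c_hat, loc_hat, scale_hat, size=len(data), random_state=rng)
    cb, lb, sb = genextreme.fit(sim)
    boot_q.append(genextreme.ppf(1 - 1 / T, cb, lb, sb))
lo, hi = np.percentile(boot_q, [2.5, 97.5])
print(f"100-year quantile {q_hat:.1f} mm, percentile-bootstrap 95% CI ({lo:.1f}, {hi:.1f})")
```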

  19. Confidence intervals for the selected population in randomized trials that adapt the population enrolled

    PubMed Central

    Rosenblum, Michael

    2014-01-01

    It is a challenge to design randomized trials when it is suspected that a treatment may benefit only certain subsets of the target population. In such situations, trial designs have been proposed that modify the population enrolled based on an interim analysis, in a preplanned manner. For example, if there is early evidence during the trial that the treatment only benefits a certain subset of the population, enrollment may then be restricted to this subset. At the end of such a trial, it is desirable to draw inferences about the selected population. We focus on constructing confidence intervals for the average treatment effect in the selected population. Confidence interval methods that fail to account for the adaptive nature of the design may fail to have the desired coverage probability. We provide a new procedure for constructing confidence intervals having at least 95% coverage probability, uniformly over a large class Q of possible data generating distributions. Our method involves computing the minimum factor c by which a standard confidence interval must be expanded in order to have, asymptotically, at least 95% coverage probability, uniformly over Q. Computing the expansion factor c is not trivial, since it is not a priori clear, for a given decision rule, which data generating distribution leads to the worst-case coverage probability. We give an algorithm that computes c, and prove an optimality property for the resulting confidence interval procedure. PMID:23553577

  20. The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution

    NASA Astrophysics Data System (ADS)

    Shin, H.; Heo, J.; Kim, T.; Jung, Y.

    2007-12-01

    The generalized logistic (GL) distribution has been widely used for frequency analysis. However, few studies have addressed the confidence intervals that indicate the prediction accuracy of the fitted GL distribution. In this paper, the estimation of the confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM), and the asymptotic variances of each quantile estimator are derived as functions of the sample sizes, return periods, and parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived confidence intervals of quantiles. The results show that the relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase as the return period increases and decrease as the sample size increases. PWM performs better than the other methods in terms of RRMSE when the data are nearly symmetric, while ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was applied to fit the distribution of annual maximum rainfall data. The results show that there are little differences in the estimated quantiles between ML and PWM, while MOM shows distinct differences.

  1. CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.

    USGS Publications Warehouse

    Cooley, Richard L.; Vecchia, Aldo V.

    1987-01-01

    A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.
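
    The toy sketch below conveys only the general flavor of using simple Monte Carlo quantiles to set probability levels for confidence- and prediction-type intervals; the stand-in model, parameter ranges, and error standard deviation are all invented and this is not the authors' actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def model_head(transmissivity, recharge):
    """Stand-in for a calibrated ground-water model output (purely hypothetical)."""
    return 100.0 + 5.0 * np.log10(transmissivity) + 20.0 * recharge

# Draw parameters from assumed extreme ranges
T_draws = rng.uniform(1e-4, 1e-2, size=20_000)     # transmissivity, assumed range
R_draws = rng.uniform(0.1, 0.5, size=20_000)       # recharge factor, assumed range

heads = model_head(T_draws, R_draws)
conf_lo, conf_hi = np.percentile(heads, [2.5, 97.5])   # parameter uncertainty only

# Adding random error in the dependent variable widens the interval (prediction-type)
pred = heads + rng.normal(0.0, 2.0, size=heads.size)
pred_lo, pred_hi = np.percentile(pred, [2.5, 97.5])
print((round(conf_lo, 1), round(conf_hi, 1)), (round(pred_lo, 1), round(pred_hi, 1)))
```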

  2. Effective confidence interval estimation of fault-detection process of software reliability growth models

    NASA Astrophysics Data System (ADS)

    Fang, Chih-Chiang; Yeh, Chun-Wu

    2016-09-01

    The quantitative evaluation of a software reliability growth model is frequently accompanied by a confidence interval for its fault-detection process. This provides helpful information to software developers and testers when undertaking software development and software quality control. However, the variance estimation of software fault detection is not explained transparently in previous studies, and this affects the derivation of the confidence interval for the mean value function, which the current study addresses. In such a case, software engineers cannot evaluate the potential hazard based on the stochasticity of the mean value function, and this might reduce the practicability of the estimation. Hence, stochastic differential equations are utilised for confidence interval estimation of the software fault-detection process. The proposed model is estimated and validated using real data-sets to show its flexibility.

  3. A self-normalized confidence interval for the mean of a class of nonstationary processes.

    PubMed

    Zhao, Zhibiao

    2011-01-01

    We construct an asymptotic confidence interval for the mean of a class of nonstationary processes with constant mean and time-varying variances. Due to the large number of unknown parameters, traditional approaches based on consistent estimation of the limiting variance of sample mean through moving block or non-overlapping block methods are not applicable. Under a block-wise asymptotically equal cumulative variance assumption, we propose a self-normalized confidence interval that is robust against the nonstationarity and dependence structure of the data. We also apply the same idea to construct an asymptotic confidence interval for the mean difference of nonstationary processes with piecewise constant means. The proposed methods are illustrated through simulations and an application to global temperature series. PMID:24319293

  4. A self-normalized confidence interval for the mean of a class of nonstationary processes

    PubMed Central

    ZHAO, ZHIBIAO

    2013-01-01

    Summary We construct an asymptotic confidence interval for the mean of a class of nonstationary processes with constant mean and time-varying variances. Due to the large number of unknown parameters, traditional approaches based on consistent estimation of the limiting variance of sample mean through moving block or non-overlapping block methods are not applicable. Under a block-wise asymptotically equal cumulative variance assumption, we propose a self-normalized confidence interval that is robust against the nonstationarity and dependence structure of the data. We also apply the same idea to construct an asymptotic confidence interval for the mean difference of nonstationary processes with piecewise constant means. The proposed methods are illustrated through simulations and an application to global temperature series. PMID:24319293

  5. Exact confidence intervals for channelized Hotelling observer performance in image quality studies.

    PubMed

    Wunderlich, Adam; Noo, Frederic; Gallas, Brandon D; Heilbrun, Marta E

    2015-02-01

    Task-based assessments of image quality constitute a rigorous, principled approach to the evaluation of imaging system performance. To conduct such assessments, it has been recognized that mathematical model observers are very useful, particularly for purposes of imaging system development and optimization. One type of model observer that has been widely applied in the medical imaging community is the channelized Hotelling observer (CHO), which is well-suited to known-location discrimination tasks. In the present work, we address the need for reliable confidence interval estimators of CHO performance. Specifically, we show that the bias associated with point estimates of CHO performance can be overcome by using confidence intervals proposed by Reiser for the Mahalanobis distance. In addition, we find that these intervals are well-defined with theoretically-exact coverage probabilities, which is a new result not proved by Reiser. The confidence intervals are tested with Monte Carlo simulation and demonstrated with two examples comparing X-ray CT reconstruction strategies. Moreover, commonly-used training/testing approaches are discussed and compared to the exact confidence intervals. MATLAB software implementing the estimators described in this work is publicly available at http://code.google.com/p/iqmodelo/. PMID:25265629

  6. Comparison of Approaches to Constructing Confidence Intervals for Mediating Effects Using Structural Equation Models

    ERIC Educational Resources Information Center

    Cheung, Mike W. L.

    2007-01-01

    Mediators are variables that explain the association between an independent variable and a dependent variable. Structural equation modeling (SEM) is widely used to test models with mediating effects. This article illustrates how to construct confidence intervals (CIs) of the mediating effects for a variety of models in SEM. Specifically, mediating…

  7. Assessing Conformance with Benford's Law: Goodness-Of-Fit Tests and Simultaneous Confidence Intervals.

    PubMed

    Lesperance, M; Reed, W J; Stephens, M A; Tsao, C; Wilton, B

    2016-01-01

    Benford's Law is a probability distribution for the first significant digits of numbers, for example, the first significant digits of the numbers 871 and 0.22 are 8 and 2 respectively. The law is particularly remarkable because many types of data are considered to be consistent with Benford's Law and scientists and investigators have applied it in diverse areas, for example, diagnostic tests for mathematical models in Biology, Genomics, Neuroscience, image analysis and fraud detection. In this article we present and compare statistically sound methods for assessing conformance of data with Benford's Law, including discrete versions of Cramér-von Mises (CvM) statistical tests and simultaneous confidence intervals. We demonstrate that the common use of many binomial confidence intervals leads to rejection of Benford too often for truly Benford data. Based on our investigation, we recommend that the CvM statistic U_d^2, Pearson's chi-square statistic and 100(1 - α)% Goodman's simultaneous confidence intervals be computed when assessing conformance with Benford's Law. Visual inspection of the data with simultaneous confidence intervals is useful for understanding departures from Benford and the influence of sample size. PMID:27018999
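
    A small sketch of a conformance check in this spirit: a Pearson chi-square test against the Benford proportions plus Bonferroni-adjusted Wilson intervals for each digit, offered only as an approximation to the simultaneous (Goodman-type) intervals recommended in the article. The simulated data are a stand-in.

```python
import numpy as np
from scipy.stats import chisquare, norm

def benford_check(first_digits, alpha=0.05):
    """Chi-square test against Benford plus Bonferroni-adjusted Wilson intervals."""
    digits = np.arange(1, 10)
    p_benford = np.log10(1 + 1 / digits)
    obs = np.array([np.sum(first_digits == d) for d in digits])
    n = obs.sum()
    chi2, pval = chisquare(obs, f_exp=n * p_benford)

    z = norm.ppf(1 - alpha / (2 * len(digits)))    # Bonferroni adjustment across digits
    phat = obs / n
    denom = 1 + z ** 2 / n
    centre = (phat + z ** 2 / (2 * n)) / denom
    half = z * np.sqrt(phat * (1 - phat) / n + z ** 2 / (4 * n ** 2)) / denom
    return chi2, pval, np.column_stack([centre - half, centre + half])

# Heavy-tailed simulated data as a stand-in for "roughly Benford" values
values = np.random.default_rng(3).pareto(1.0, size=2000) + 1
first = np.array([int(f"{v:.6e}"[0]) for v in values])
chi2, pval, cis = benford_check(first)
print(round(chi2, 2), round(pval, 3))
```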

  8. Sample Size Planning for the Standardized Mean Difference: Accuracy in Parameter Estimation via Narrow Confidence Intervals

    ERIC Educational Resources Information Center

    Kelley, Ken; Rausch, Joseph R.

    2006-01-01

    Methods for planning sample size (SS) for the standardized mean difference so that a narrow confidence interval (CI) can be obtained via the accuracy in parameter estimation (AIPE) approach are developed. One method plans SS so that the expected width of the CI is sufficiently narrow. A modification adjusts the SS so that the obtained CI is no…

  9. Making Subjective Judgments in Quantitative Studies: The Importance of Using Effect Sizes and Confidence Intervals

    ERIC Educational Resources Information Center

    Callahan, Jamie L.; Reio, Thomas G., Jr.

    2006-01-01

    At least twenty-three journals in the social sciences purportedly require authors to report effect sizes and, to a much lesser extent, confidence intervals; yet these requirements are rarely clear in the information for contributors. This article reviews some of the literature criticizing the exclusive use of null hypothesis significance testing…

  10. Assessing Conformance with Benford’s Law: Goodness-Of-Fit Tests and Simultaneous Confidence Intervals

    PubMed Central

    Lesperance, M.; Reed, W. J.; Stephens, M. A.; Tsao, C.; Wilton, B.

    2016-01-01

    Benford’s Law is a probability distribution for the first significant digits of numbers, for example, the first significant digits of the numbers 871 and 0.22 are 8 and 2 respectively. The law is particularly remarkable because many types of data are considered to be consistent with Benford’s Law and scientists and investigators have applied it in diverse areas, for example, diagnostic tests for mathematical models in Biology, Genomics, Neuroscience, image analysis and fraud detection. In this article we present and compare statistically sound methods for assessing conformance of data with Benford’s Law, including discrete versions of Cramér-von Mises (CvM) statistical tests and simultaneous confidence intervals. We demonstrate that the common use of many binomial confidence intervals leads to rejection of Benford too often for truly Benford data. Based on our investigation, we recommend that the CvM statistic U_d^2, Pearson’s chi-square statistic and 100(1 − α)% Goodman’s simultaneous confidence intervals be computed when assessing conformance with Benford’s Law. Visual inspection of the data with simultaneous confidence intervals is useful for understanding departures from Benford and the influence of sample size. PMID:27018999

  11. SIMREL: Software for Coefficient Alpha and Its Confidence Intervals with Monte Carlo Studies

    ERIC Educational Resources Information Center

    Yurdugul, Halil

    2009-01-01

    This article describes SIMREL, a software program designed for the simulation of alpha coefficients and the estimation of its confidence intervals. SIMREL runs on two alternatives. In the first one, if SIMREL is run for a single data file, it performs descriptive statistics, principal components analysis, and variance analysis of the item scores…

  12. Characterizing the Mathematics Anxiety Literature Using Confidence Intervals as a Literature Review Mechanism

    ERIC Educational Resources Information Center

    Zientek, Linda Reichwein; Yetkiner, Z. Ebrar; Thompson, Bruce

    2010-01-01

    The authors report the contextualization of effect sizes within mathematics anxiety research, and more specifically within research using the Mathematics Anxiety Rating Scale (MARS) and the MARS for Adolescents (MARS-A). The effect sizes from 45 studies were characterized by graphing confidence intervals (CIs) across studies involving (a) adults…

  13. Point Estimates and Confidence Intervals for Variable Importance in Multiple Linear Regression

    ERIC Educational Resources Information Center

    Thomas, D. Roland; Zhu, PengCheng; Decady, Yves J.

    2007-01-01

    The topic of variable importance in linear regression is reviewed, and a measure first justified theoretically by Pratt (1987) is examined in detail. Asymptotic variance estimates are used to construct individual and simultaneous confidence intervals for these importance measures. A simulation study of their coverage properties is reported, and an…

  14. Confidence Intervals for an Effect Size Measure in Multiple Linear Regression

    ERIC Educational Resources Information Center

    Algina, James; Keselman, H. J.; Penfield, Randall D.

    2007-01-01

    The increase in the squared multiple correlation coefficient (ΔR²) associated with a variable in a regression equation is a commonly used measure of importance in regression analysis. The coverage probability that an asymptotic and percentile bootstrap confidence interval includes Δρ² was investigated. As expected,…
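
    A minimal percentile-bootstrap sketch for the increase in R² when a predictor is added; the simulated data and bootstrap settings are arbitrary, and the asymptotic interval discussed in the abstract is not reproduced here.

```python
import numpy as np

def r2(X, y):
    """R-squared of an ordinary least-squares fit (intercept included in X)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

def delta_r2_boot_ci(X_small, X_full, y, n_boot=5000, conf=0.95, seed=0):
    """Percentile bootstrap CI for the increase in R^2 when predictors are added."""
    rng = np.random.default_rng(seed)
    n = len(y)
    diffs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)
        diffs.append(r2(X_full[idx], y[idx]) - r2(X_small[idx], y[idx]))
    alpha = 1 - conf
    return np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)])

rng = np.random.default_rng(1)
n = 120
x1, x2 = rng.normal(size=(2, n))
y = 0.5 * x1 + 0.3 * x2 + rng.normal(size=n)
X_small = np.column_stack([np.ones(n), x1])
X_full = np.column_stack([np.ones(n), x1, x2])
print(delta_r2_boot_ci(X_small, X_full, y))
```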

  15. A Method for Obtaining Standard Errors and Confidence Intervals of Composite Reliability for Congeneric Items.

    ERIC Educational Resources Information Center

    Raykov, Tenko

    1998-01-01

    Proposes a method for obtaining standard errors and confidence intervals of composite reliability coefficients based on bootstrap methods and using a structural-equation-modeling framework for estimating the composite reliability of congeneric measures (T. Raykov, 1997). Demonstrates the approach with simulated data. (SLD)

  16. Sample Size for Confidence Interval of Covariate-Adjusted Mean Difference

    ERIC Educational Resources Information Center

    Liu, Xiaofeng Steven

    2010-01-01

    This article provides a way to determine adequate sample size for the confidence interval of covariate-adjusted mean difference in randomized experiments. The standard error of adjusted mean difference depends on covariate variance and balance, which are two unknown quantities at the stage of planning sample size. If covariate observations are…

  17. Multivariate Effect Size Estimation: Confidence Interval Construction via Latent Variable Modeling

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2010-01-01

    A latent variable modeling method is outlined for constructing a confidence interval (CI) of a popular multivariate effect size measure. The procedure uses the conventional multivariate analysis of variance (MANOVA) setup and is applicable with large samples. The approach provides a population range of plausible values for the proportion of…

  18. Optimal and Most Exact Confidence Intervals for Person Parameters in Item Response Theory Models

    ERIC Educational Resources Information Center

    Doebler, Anna; Doebler, Philipp; Holling, Heinz

    2013-01-01

    The common way to calculate confidence intervals for item response theory models is to assume that the standardized maximum likelihood estimator for the person parameter θ is normally distributed. However, this approximation is often inadequate for short and medium test lengths. As a result, the coverage probabilities fall below the given…

  19. Exact confidence intervals for the average causal effect on a binary outcome.

    PubMed

    Li, Xinran; Ding, Peng

    2016-03-15

    Based on the physical randomization of completely randomized experiments, in a recent article in Statistics in Medicine, Rigdon and Hudgens propose two approaches to obtaining exact confidence intervals for the average causal effect on a binary outcome. They construct the first confidence interval by combining, with the Bonferroni adjustment, the prediction sets for treatment effects among treatment and control groups, and the second one by inverting a series of randomization tests. With sample size n, their second approach requires performing O(n^4) randomization tests. We demonstrate that the physical randomization also justifies other ways of constructing exact confidence intervals that are more computationally efficient. By exploiting recent advances in hypergeometric confidence intervals and the stochastic order information of randomization tests, we propose approaches that either do not need to invoke Monte Carlo or require performing at most O(n^2) randomization tests. We provide technical details and R code in the Supporting Information. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26833798

  20. Confidence intervals for intraclass correlation coefficients in a nonlinear dose-response meta-analysis.

    PubMed

    Demetrashvili, Nino; Van den Heuvel, Edwin R

    2015-06-01

    This work is motivated by a meta-analysis case study on antipsychotic medications. The Michaelis-Menten curve is employed to model the nonlinear relationship between the dose and D2 receptor occupancy across multiple studies. An intraclass correlation coefficient (ICC) is used to quantify the heterogeneity across studies. To interpret the size of heterogeneity, an accurate estimate of the ICC and its confidence interval is required. The goal is to apply a recently proposed generic beta-approach for constructing confidence intervals on ICCs for linear mixed effects models to nonlinear mixed effects models using four estimation methods. These estimation methods are the maximum likelihood, second-order generalized estimating equations and two two-step procedures. The beta-approach is compared with a large sample normal approximation (delta method) and bootstrapping. The confidence intervals based on the delta method and the nonparametric percentile bootstrap with various resampling strategies failed in our settings. The beta-approach demonstrates good coverages with both two-step estimation methods and consequently, it is recommended for the computation of confidence intervals for ICCs in nonlinear mixed effects models for small studies. PMID:25703393

  1. ADEQUACY OF CONFIDENCE INTERVAL ESTIMATES OF YIELD RESPONSES TO OZONE ESTIMATED FROM NCLAN DATA

    EPA Science Inventory

    Three methods of estimating confidence intervals for the parameters of Weibull nonlinear models are examined. These methods are based on linear approximation theory (Wald), the likelihood ratio test, and Clarke's (1987) procedures. Analyses are based on Weibull dose-response equati...

  2. A Note on Confidence Intervals for Two-Group Latent Mean Effect Size Measures

    ERIC Educational Resources Information Center

    Choi, Jaehwa; Fan, Weihua; Hancock, Gregory R.

    2009-01-01

    This note suggests delta method implementations for deriving confidence intervals for a latent mean effect size measure for the case of 2 independent populations. A hypothetical kindergarten reading example using these implementations is provided, as is supporting LISREL syntax. (Contains 1 table.)

  3. Note on a Confidence Interval for the Squared Semipartial Correlation Coefficient

    ERIC Educational Resources Information Center

    Algina, James; Keselman, Harvey J.; Penfield, Randall J.

    2008-01-01

    A squared semipartial correlation coefficient (ΔR²) is the increase in the squared multiple correlation coefficient that occurs when a predictor is added to a multiple regression model. Prior research has shown that coverage probability for a confidence interval constructed by using a modified percentile bootstrap method with…

  4. The Distribution of the Product Explains Normal Theory Mediation Confidence Interval Estimation

    PubMed Central

    Kisbu-Sakarya, Yasemin; MacKinnon, David P.; Miočević, Milica

    2014-01-01

    The distribution of the product has several useful applications. One of these applications is its use to form confidence intervals for the indirect effect as the product of 2 regression coefficients. The purpose of this article is to investigate how the moments of the distribution of the product explain normal theory mediation confidence interval coverage and imbalance. Values of the critical ratio for each random variable are used to demonstrate how the moments of the distribution of the product change across values of the critical ratio observed in research studies. Results of the simulation study showed that as skewness in absolute value increases, coverage decreases. And as skewness in absolute value and kurtosis increases, imbalance increases. The difference between testing the significance of the indirect effect using the normal theory versus the asymmetric distribution of the product is further illustrated with a real data example. This article is the first study to show the direct link between the distribution of the product and indirect effect confidence intervals and clarifies the results of previous simulation studies by showing why normal theory confidence intervals for indirect effects are often less accurate than those obtained from the asymmetric distribution of the product or from resampling methods. PMID:25554711
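
    A small Monte Carlo sketch built on the same idea: simulate the two coefficient estimates from their approximate normal sampling distributions and take percentiles of the product. The path estimates and standard errors below are hypothetical.

```python
import numpy as np

def product_mc_ci(a_hat, se_a, b_hat, se_b, conf=0.95, n_sim=100_000, seed=0):
    """Monte Carlo interval for the indirect effect a*b, based on the
    distribution of the product of two (approximately) normal estimates."""
    rng = np.random.default_rng(seed)
    prod = rng.normal(a_hat, se_a, n_sim) * rng.normal(b_hat, se_b, n_sim)
    alpha = 1 - conf
    return np.percentile(prod, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Hypothetical path estimates from a mediation model
print(product_mc_ci(a_hat=0.38, se_a=0.12, b_hat=0.45, se_b=0.15))
```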

  5. The Direct Integral Method for Confidence Intervals for the Ratio of Two Location Parameters

    PubMed Central

    Wang, Yanqing; Wang, Suojin; Carroll, Raymond J.

    2015-01-01

    Summary In a relative risk analysis of colorectal cancer on nutrition intake scores across genders, we show that, surprisingly, when comparing the relative risks for men and women based on the index of a weighted sum of various nutrition scores, the problem reduces to forming a confidence interval for the ratio of two (asymptotically) normal random variables. The latter is an old problem, with a substantial literature. However, our simulation results suggest that existing methods often either give inaccurate coverage probabilities or have a positive probability to produce confidence intervals with infinite length. Motivated by such a problem, we develop a new methodology which we call the Direct Integral Method for Ratios (DIMER), which, unlike the other methods, is based directly on the distribution of the ratio. In simulations, we compare this method to many others. These simulations show that, generally, DIMER more closely achieves the nominal confidence level, and in those cases where the other methods achieve the nominal levels, DIMER has comparable confidence interval lengths. The methodology is then applied to a real data set, with follow-up simulations. PMID:25939421

  6. Finite sample pointwise confidence intervals for a survival distribution with right-censored data.

    PubMed

    Fay, Michael P; Brittain, Erica H

    2016-07-20

    We review and develop pointwise confidence intervals for a survival distribution with right-censored data for small samples, assuming only independence of censoring and survival. When there is no censoring, at each fixed time point, the problem reduces to making inferences about a binomial parameter. In this case, the recently developed beta product confidence procedure (BPCP) gives the standard exact central binomial confidence intervals of Clopper and Pearson. Additionally, the BPCP has been shown to be exact (gives guaranteed coverage at the nominal level) for progressive type II censoring and has been shown by simulation to be exact for general independent right censoring. In this paper, we modify the BPCP to create a 'mid-p' version, which reduces to the mid-p confidence interval for a binomial parameter when there is no censoring. We perform extensive simulations on both the standard and mid-p BPCP using a method of moments implementation that enforces monotonicity over time. All simulated scenarios suggest that the standard BPCP is exact. The mid-p BPCP, like other mid-p confidence intervals, has simulated coverage closer to the nominal level but may not be exact for all survival times, especially in very low censoring scenarios. In contrast, the two asymptotically-based approximations have lower than nominal coverage in many scenarios. This poor coverage is due to the extreme inflation of the lower error rates, although the upper limits are very conservative. Both the standard and the mid-p BPCP methods are available in our bpcp R package. Published 2016. This article is US Government work and is in the public domain in the USA. PMID:26891706

  7. The use of latin hypercube sampling for the efficient estimation of confidence intervals

    SciTech Connect

    Grabaskas, D.; Denning, R.; Aldemir, T.; Nakayama, M. K.

    2012-07-01

    Latin hypercube sampling (LHS) has long been used as a way of assuring adequate sampling of the tails of distributions in a Monte Carlo analysis and provided the framework for the uncertainty analysis performed in the NUREG-1150 risk assessment. However, this technique has not often been used in the performance of regulatory analyses due to the inability to establish confidence levels on the quantiles of the output distribution. Recent work has demonstrated a method that makes this possible. This method is compared to the procedure of crude Monte Carlo using order statistics, which is currently used to establish confidence levels. The results of several statistical examples demonstrate that the LHS confidence interval method can provide a more accurate and precise solution, but issues remain when applying the technique generally. (authors)
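
    A brief sketch contrasting a Latin hypercube sample with crude Monte Carlo for estimating an upper quantile of a toy response; the response function and sample size are invented, and the confidence-level machinery discussed in the abstract is not implemented here.

```python
import numpy as np
from scipy.stats import qmc, norm

def model(u):
    """Toy response: a nonlinear function of two standard-normal inputs."""
    u = np.clip(u, 1e-12, 1 - 1e-12)               # guard against exact 0 or 1
    x = norm.ppf(u)                                # map uniforms to standard normals
    return x[:, 0] ** 2 + 0.5 * x[:, 1]

n = 1000

# Latin hypercube sample of the two inputs (one point per stratum in each dimension)
lhs = qmc.LatinHypercube(d=2, seed=1).random(n)
y_lhs = model(lhs)

# Crude Monte Carlo sample for comparison
mc = np.random.default_rng(1).random((n, 2))
y_mc = model(mc)

# Compare estimates of the 95th percentile of the output
print(np.percentile(y_lhs, 95), np.percentile(y_mc, 95))
```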

  8. Pointwise confidence intervals for a survival distribution with small samples or heavy censoring.

    PubMed

    Fay, Michael P; Brittain, Erica H; Proschan, Michael A

    2013-09-01

    We propose a beta product confidence procedure (BPCP) that is a non-parametric confidence procedure for the survival curve at a fixed time for right-censored data assuming independent censoring. In such situations, the Kaplan-Meier estimator is typically used with an asymptotic confidence interval (CI) that can have coverage problems when the number of observed failures is not large, and/or when testing the latter parts of the curve where there are few remaining subjects at risk. The BPCP guarantees central coverage (i.e. ensures that both one-sided error rates are no more than half of the total nominal rate) when there is no censoring (in which case it reduces to the Clopper-Pearson interval) or when there is progressive type II censoring (i.e. when censoring only occurs immediately after failures on fixed proportions of the remaining individuals). For general independent censoring, simulations show that the BPCP maintains central coverage in many situations where competing methods can have very substantial error rate inflation for the lower limit. The BPCP gives asymptotically correct coverage and is asymptotically equivalent to the CI on the Kaplan-Meier estimator using Greenwood's variance. The BPCP may be inverted to create confidence procedures for a quantile of the underlying survival distribution. Because the BPCP is easy to implement, offers protection in settings when other methods fail, and essentially matches other methods when they succeed, it should be the method of choice. PMID:23632624

  9. Estimation of confidence intervals of global horizontal irradiance obtained from a weather prediction model

    NASA Astrophysics Data System (ADS)

    Ohtake, Hideaki; Gari da Silva Fonseca, Joao, Jr.; Takashima, Takumi; Oozeki, Takashi; Yamada, Yoshinori

    2014-05-01

    Many photovoltaic (PV) systems have been installed in Japan after the introduction of the Feed-in Tariff. For the energy management of electric power systems that include many PV systems, forecasts of PV power production are a useful technology. Recently, numerical weather predictions have been applied to forecast PV power production, but the forecasted values invariably carry forecast errors for each modeling system, so the forecast data must be used with its error taken into account. In this study, we attempted to estimate confidence intervals for hourly forecasts of global horizontal irradiance (GHI) values obtained from a mesoscale model (MSM) developed by the Japan Meteorological Agency. In a recent study, we found that the forecasted GHI values of the MSM have two systematic forecast errors: first, the forecast values of the GHI depend on the clearness indices, which are defined as the GHI values divided by the extraterrestrial solar irradiance; second, the forecast errors have seasonal variations, with overestimation of the GHI forecasts in winter and underestimation in summer. Information on the errors of the hourly GHI forecasts, that is, confidence intervals of the forecasts, is of great significance to an electric company planning the energy management of a system that includes many PV systems. For the PV systems, confidence intervals of the GHI forecasts are required either for a pinpoint area or for a relatively large area controlling the power system. For the relatively large area, a spatial-smoothing method of the GHI values is performed for both the observations and the forecasts. The spatial-smoothing method reduced the confidence intervals of the hourly GHI forecasts in an extreme event of the GHI forecast (a case of large forecast error) over the relatively large area of the Tokyo electric company (by approximately 68% compared with a pinpoint forecast). For more credible estimation of the confidence

  10. Accuracy in Parameter Estimation for Targeted Effects in Structural Equation Modeling: Sample Size Planning for Narrow Confidence Intervals

    ERIC Educational Resources Information Center

    Lai, Keke; Kelley, Ken

    2011-01-01

    In addition to evaluating a structural equation model (SEM) as a whole, often the model parameters are of interest and confidence intervals for those parameters are formed. Given a model with a good overall fit, it is entirely possible for the targeted effects of interest to have very wide confidence intervals, thus giving little information about…

  11. Accuracy in Parameter Estimation for the Root Mean Square Error of Approximation: Sample Size Planning for Narrow Confidence Intervals

    ERIC Educational Resources Information Center

    Kelley, Ken; Lai, Keke

    2011-01-01

    The root mean square error of approximation (RMSEA) is one of the most widely reported measures of misfit/fit in applications of structural equation modeling. When the RMSEA is of interest, so too should be the accompanying confidence interval. A narrow confidence interval reveals that the plausible parameter values are confined to a relatively…

  12. Students' Conceptual Metaphors Influence Their Statistical Reasoning about Confidence Intervals. WCER Working Paper No. 2008-5

    ERIC Educational Resources Information Center

    Grant, Timothy S.; Nathan, Mitchell J.

    2008-01-01

    Confidence intervals are beginning to play an increasing role in the reporting of research findings within the social and behavioral sciences and, consequently, are becoming more prevalent in beginning classes in statistics and research methods. Confidence intervals are an attractive means of conveying experimental results, as they contain a…

  13. A Comparison of Various Stress Rupture Life Models for Orbiter Composite Pressure Vessels and Confidence Intervals

    NASA Technical Reports Server (NTRS)

    Grimes-Ledesma, Lorie; Murthy, Pappu L. N.; Phoenix, S. Leigh; Glaser, Ronald

    2007-01-01

    In conjunction with a recent NASA Engineering and Safety Center (NESC) investigation of flight worthiness of Kevlar Overwrapped Composite Pressure Vessels (COPVs) on board the Orbiter, two stress rupture life prediction models were proposed independently by Phoenix and by Glaser. In this paper, the use of these models to determine the system reliability of 24 COPVs currently in service on board the Orbiter is discussed. The models are briefly described, compared to each other, and model parameters and parameter uncertainties are also reviewed to understand confidence in reliability estimation as well as the sensitivities of these parameters in influencing overall predicted reliability levels. Differences and similarities in the various models will be compared via stress rupture reliability curves (stress ratio vs. lifetime plots). Also outlined will be the differences in the underlying model premises, and predictive outcomes. Sources of error and sensitivities in the models will be examined and discussed based on sensitivity analysis and confidence interval determination. Confidence interval results and their implications will be discussed for the models by Phoenix and Glaser.

  14. A Comparison of Various Stress Rupture Life Models for Orbiter Composite Pressure Vessels and Confidence Intervals

    NASA Technical Reports Server (NTRS)

    Grimes-Ledesma, Lorie; Murthy, Pappu L. N.; Phoenix, S. Leigh; Glaser, Ronald

    2006-01-01

    In conjunction with a recent NASA Engineering and Safety Center (NESC) investigation of flight worthiness of Kevlar Overwrapped Composite Pressure Vessels (COPVs) on board the Orbiter, two stress rupture life prediction models were proposed independently by Phoenix and by Glaser. In this paper, the use of these models to determine the system reliability of 24 COPVs currently in service on board the Orbiter is discussed. The models are briefly described, compared to each other, and model parameters and parameter error are also reviewed to understand confidence in reliability estimation as well as the sensitivities of these parameters in influencing overall predicted reliability levels. Differences and similarities in the various models will be compared via stress rupture reliability curves (stress ratio vs. lifetime plots). Also outlined will be the differences in the underlying model premises, and predictive outcomes. Sources of error and sensitivities in the models will be examined and discussed based on sensitivity analysis and confidence interval determination. Confidence interval results and their implications will be discussed for the models by Phoenix and Glaser.

  15. Amplitude estimation of a sine function based on confidence intervals and Bayes' theorem

    NASA Astrophysics Data System (ADS)

    Eversmann, D.; Pretz, J.; Rosenthal, M.

    2016-05-01

    This paper discusses the amplitude estimation using data originating from a sine-like function as probability density function. If a simple least squares fit is used, a significant bias is observed if the amplitude is small compared to its error. It is shown that a proper treatment using the Feldman-Cousins algorithm of likelihood ratios allows one to construct improved confidence intervals. Using Bayes' theorem a probability density function is derived for the amplitude. It is used in an application to show that it leads to better estimates compared to a simple least squares fit.

  16. Neural network based load and price forecasting and confidence interval estimation in deregulated power markets

    NASA Astrophysics Data System (ADS)

    Zhang, Li

    With the deregulation of the electric power market in New England, an independent system operator (ISO) has been separated from the New England Power Pool (NEPOOL). The ISO provides a regional spot market, with bids on various electricity-related products and services submitted by utilities and independent power producers. A utility can bid on the spot market and buy or sell electricity via bilateral transactions. Good estimation of market clearing prices (MCP) will help utilities and independent power producers determine bidding and transaction strategies with low risks, and this is crucial for utilities to compete in the deregulated environment. MCP prediction, however, is difficult since bidding strategies used by participants are complicated and MCP is a non-stationary process. The main objective of this research is to provide efficient short-term load and MCP forecasting and corresponding confidence interval estimation methodologies. In this research, the complexity of load and MCP with other factors is investigated, and neural networks are used to model the complex relationship between input and output. With improved learning algorithm and on-line update features for load forecasting, a neural network based load forecaster was developed, and has been in daily industry use since summer 1998 with good performance. MCP is volatile because of the complexity of market behaviors. In practice, neural network based MCP predictors usually have a cascaded structure, as several key input factors need to be estimated first. In this research, the uncertainties involved in a cascaded neural network structure for MCP prediction are analyzed, and prediction distribution under the Bayesian framework is developed. A fast algorithm to evaluate the confidence intervals by using the memoryless Quasi-Newton method is also developed. The traditional back-propagation algorithm for neural network learning needs to be improved since MCP is a non-stationary process. The extended Kalman

  17. [Abnormally broad confidence intervals in logistic regression: interpretation of results of statistical programs].

    PubMed

    de Irala, J; Fernandez-Crehuet Navajas, R; Serrano del Castillo, A

    1997-03-01

    This study describes the behavior of eight statistical programs (BMDP, EGRET, JMP, SAS, SPSS, STATA, STATISTIX, and SYSTAT) when performing a logistic regression with a simulated data set that contains a numerical problem created by the presence of a cell value equal to zero. The programs respond in different ways to this problem. Most of them give a warning, although many simultaneously present incorrect results, among which are confidence intervals that tend toward infinity. Such results can mislead the user. Various guidelines are offered for detecting these problems in actual analyses, and users are reminded of the importance of critical interpretation of the results of statistical programs. PMID:9162592
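
    The numerical problem described above can be illustrated independently of any of the packages tested. For a 2x2 table (equivalent to a logistic regression with a single binary covariate), the Wald interval for the odds ratio uses the standard error sqrt(1/a + 1/b + 1/c + 1/d); a zero cell makes this undefined, and the common 0.5 continuity correction merely turns an unbounded interval into an implausibly wide one. The counts below are hypothetical.

        import numpy as np

        def wald_ci_odds_ratio(a, b, c, d, z=1.96):
            """Wald 95% CI for the odds ratio of a 2x2 table [[a, b], [c, d]]."""
            if min(a, b, c, d) == 0:
                return 0.0, float("inf")        # zero cell: the interval is unbounded
            log_or = np.log((a * d) / (b * c))
            se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
            return float(np.exp(log_or - z * se)), float(np.exp(log_or + z * se))

        # hypothetical counts with a zero cell
        print(wald_ci_odds_ratio(12, 0, 30, 45))             # (0.0, inf): degenerate interval
        print(wald_ci_odds_ratio(12.5, 0.5, 30.5, 45.5))     # 0.5 correction: huge upper limit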

  18. An Algorithm for Efficient Maximum Likelihood Estimation and Confidence Interval Determination in Nonlinear Estimation Problems

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick Charles

    1985-01-01

    An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The algorithm was developed for airplane parameter estimation problems but is well suited for most nonlinear, multivariable, dynamic systems. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort. MNRES determines the sensitivities with less computational effort than using either a finite-difference method or integrating the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, thus eliminating algorithm reformulation with each new model and providing flexibility to use model equations in any format that is convenient. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. It is observed that the degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. The CR bounds were found to be close to the bounds determined by the search when the degree of nonlinearity was small. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels for the parameter confidence limits. The primary utility of the measure, however, was found to be in predicting the degree of agreement between Cramer-Rao bounds and search estimates.

  19. Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations.

    PubMed

    Greenland, Sander; Senn, Stephen J; Rothman, Kenneth J; Carlin, John B; Poole, Charles; Goodman, Steven N; Altman, Douglas G

    2016-04-01

    Misinterpretation and abuse of statistical tests, confidence intervals, and statistical power have been decried for decades, yet remain rampant. A key problem is that there are no interpretations of these concepts that are at once simple, intuitive, correct, and foolproof. Instead, correct use and interpretation of these statistics requires an attention to detail which seems to tax the patience of working scientists. This high cognitive demand has led to an epidemic of shortcut definitions and interpretations that are simply wrong, sometimes disastrously so-and yet these misinterpretations dominate much of the scientific literature. In light of this problem, we provide definitions and a discussion of basic statistics that are more general and critical than typically found in traditional introductory expositions. Our goal is to provide a resource for instructors, researchers, and consumers of statistics whose knowledge of statistical theory and technique may be limited but who wish to avoid and spot misinterpretations. We emphasize how violation of often unstated analysis protocols (such as selecting analyses for presentation based on the P values they produce) can lead to small P values even if the declared test hypothesis is correct, and can lead to large P values even if that hypothesis is incorrect. We then provide an explanatory list of 25 misinterpretations of P values, confidence intervals, and power. We conclude with guidelines for improving statistical interpretation and reporting. PMID:27209009

  20. Confidence intervals for demographic projections based on products of random matrices.

    PubMed

    Heyde, C C; Cohen, J E

    1985-04-01

    This work is concerned with the growth of age-structured populations whose vital rates vary stochastically in time and with the provision of confidence intervals. In this paper a model Y_{t+1}(omega) = X_{t+1}(omega) Y_t(omega) is considered, where Y_t is the (column) vector of the numbers of individuals in each age class at time t, X is a matrix of vital rates, and omega refers to a particular realization of the process that produces the vital rates. It is assumed that (X_i) is a stationary sequence of random matrices with nonnegative elements and that there is an integer n_0 such that any product X_{j+n_0}...X_{j+1}X_j has all its elements positive with probability one. Then, under mild additional conditions, strong laws of large numbers and central limit results are obtained for the logarithms of the components of Y_t. Large-sample estimators of the parameters in these limit results are derived. From these, confidence intervals on population growth and growth rates can be constructed. Various finite-sample estimators are studied numerically. The estimators are then used to study the growth of the striped bass population breeding in the Potomac River of the eastern United States. PMID:4023951
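
    A minimal simulation of the model Y_{t+1} = X_{t+1} Y_t makes the central-limit construction concrete. The sketch below (hypothetical two-age-class vital rates, numpy only, not the paper's estimators) draws random Leslie-type matrices, tracks the log of total population size, and forms a normal-theory confidence interval for the long-run growth rate from the per-step increments of the log population.

        import numpy as np

        rng = np.random.default_rng(1)
        T = 1000
        y = np.array([100.0, 100.0])                 # individuals in two age classes
        log_total = [np.log(y.sum())]

        for _ in range(T):
            X = np.array([[rng.uniform(0.1, 0.5), rng.uniform(0.6, 1.4)],
                          [rng.uniform(0.6, 0.9), 0.0]])   # random Leslie-type vital rates
            y = X @ y
            log_total.append(np.log(y.sum()))

        inc = np.diff(log_total)                     # per-step increments of log population size
        rate = inc.mean()                            # long-run growth rate estimate
        half = 1.96 * inc.std(ddof=1) / np.sqrt(inc.size)  # normal-theory CI (i.i.d. approximation)
        print(f"log growth rate: {rate:.4f} +/- {half:.4f} (approximate 95% CI)")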

  1. Accurate estimation of normal incidence absorption coefficients with confidence intervals using a scanning laser Doppler vibrometer

    NASA Astrophysics Data System (ADS)

    Vuye, Cedric; Vanlanduit, Steve; Guillaume, Patrick

    2009-06-01

    When using optical measurements of the sound fields inside a glass tube, near the material under test, to estimate the reflection and absorption coefficients, not only these acoustical parameters but also confidence intervals can be determined. The sound fields are visualized using a scanning laser Doppler vibrometer (SLDV). In this paper the influence of different test signals on the quality of the results, obtained with this technique, is examined. The amount of data gathered during one measurement scan makes a thorough statistical analysis possible, leading to knowledge of confidence intervals. The use of a multi-sine, constructed on the resonance frequencies of the test tube, proves to be a very good alternative to the traditional periodic chirp. This signal offers the ability to obtain data for multiple frequencies in one measurement, without the danger of a low signal-to-noise ratio. The variability analysis in this paper clearly shows the advantages of the proposed multi-sine compared to the periodic chirp. The measurement procedure and the statistical analysis are validated by measuring the reflection ratio at a closed end and comparing the results with the theoretical value. Results of the testing of two building materials (an acoustic ceiling tile and linoleum) are presented and compared to supplier data.

  2. Another look at confidence intervals: Proposal for a more relevant and transparent approach

    NASA Astrophysics Data System (ADS)

    Biller, Steven D.; Oser, Scott M.

    2015-02-01

    The behaviors of various confidence/credible interval constructions are explored, particularly in the region of low event numbers where methods diverge most. We highlight a number of challenges, such as the treatment of nuisance parameters, and common misconceptions associated with such constructions. An informal survey of the literature suggests that confidence intervals are not always defined in relevant ways and are too often misinterpreted and/or misapplied. This can lead to seemingly paradoxical behaviors and flawed comparisons regarding the relevance of experimental results. We therefore conclude that there is a need for a more pragmatic strategy which recognizes that, while it is critical to objectively convey the information content of the data, there is also a strong desire to derive bounds on model parameter values and a natural instinct to interpret things this way. Accordingly, we attempt to put aside philosophical biases in favor of a practical view to propose a more transparent and self-consistent approach that better addresses these issues.

  3. Statistical variability and confidence intervals for planar dose QA pass rates

    SciTech Connect

    Bailey, Daniel W.; Nelms, Benjamin E.; Attwood, Kristopher; Kumaraswamy, Lalith; Podgorsak, Matthew B.

    2011-11-15

    Purpose: The most common metric for comparing measured to calculated dose, such as for pretreatment quality assurance of intensity-modulated photon fields, is a pass rate (%) generated using percent difference (%Diff), distance-to-agreement (DTA), or some combination of the two (e.g., gamma evaluation). For many dosimeters, the grid of analyzed points corresponds to an array with a low areal density of point detectors. In these cases, the pass rates for any given comparison criteria are not absolute but exhibit statistical variability that is a function, in part, of the detector sampling geometry. In this work, the authors analyze the statistics of various methods commonly used to calculate pass rates and propose methods for establishing confidence intervals for pass rates obtained with low-density arrays. Methods: Dose planes were acquired for 25 prostate and 79 head and neck intensity-modulated fields via diode array and electronic portal imaging device (EPID), and matching calculated dose planes were created via a commercial treatment planning system. Pass rates for each dose plane pair (both centered to the beam central axis) were calculated with several common comparison methods: %Diff/DTA composite analysis and gamma evaluation, using absolute dose comparison with both local and global normalization. Specialized software was designed to selectively sample the measured EPID response (very high data density) down to discrete points to simulate low-density measurements. The software was used to realign the simulated detector grid at many simulated positions with respect to the beam central axis, thereby altering the low-density sampled grid. Simulations were repeated with 100 positional iterations using a 1 detector/cm^2 uniform grid, a 2 detector/cm^2 uniform grid, and similar random detector grids. For each simulation, %/DTA composite pass rates were calculated with various %Diff/DTA criteria and for both local and global %Diff normalization

  4. Safety evaluation and confidence intervals when the number of observed events is small or zero.

    PubMed

    Jovanovic, B D; Zalenski, R J

    1997-09-01

    A common objective in many clinical studies is to determine the safety of a diagnostic test or therapeutic intervention. In these evaluations, serious adverse effects are either rare or not encountered. In this setting, the estimation of the confidence interval (CI) for the unknown proportion of adverse events has special importance. When no adverse events are encountered, commonly used approximate methods for calculating CIs cannot be applied, and such information is not commonly reported. Furthermore, when only a few adverse events are encountered, the approximate methods for calculation of CIs can be applied, but are neither appropriate nor accurate. In both situations, CIs should be computed with the use of the exact binomial distribution. We discuss the need for such estimation and provide correct methods and rules of thumb for quick computations of accurate approximations of the 95% and 99.9% CIs when the observed number of adverse events is zero. PMID:9287891
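
    For the zero-event case, the exact binomial upper bound has a closed form, 1 - alpha^(1/n), and the familiar 'rule of three' approximates the one-sided 95% bound by 3/n. A short sketch, assuming n independent observations with no adverse events:

        def exact_upper_zero_events(n, conf=0.95):
            """Exact one-sided upper confidence bound on p when 0 events occur in n trials."""
            alpha = 1.0 - conf
            return 1.0 - alpha ** (1.0 / n)        # solves (1 - p)^n = alpha

        for n in (10, 30, 100, 300):
            print(f"n = {n:3d}  exact upper bound = {exact_upper_zero_events(n):.4f}"
                  f"  rule of three 3/n = {3 / n:.4f}")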

  5. Confidence intervals for the symmetry point: an optimal cutpoint in continuous diagnostic tests.

    PubMed

    López-Ratón, Mónica; Cadarso-Suárez, Carmen; Molanes-López, Elisa M; Letón, Emilio

    2016-01-01

    Continuous diagnostic tests are often used for discriminating between healthy and diseased populations. For this reason, it is useful to select an appropriate discrimination threshold. There are several optimality criteria: the North-West corner, the Youden index, the concordance probability and the symmetry point, among others. In this paper, we focus on the symmetry point that maximizes simultaneously the two types of correct classifications. We construct confidence intervals for this optimal cutpoint and its associated specificity and sensitivity indexes using two approaches: one based on the generalized pivotal quantity and the other on empirical likelihood. We perform a simulation study to check the practical behaviour of both methods and illustrate their use by means of three real biomedical datasets on melanoma, prostate cancer and coronary artery disease. PMID:26756550

  6. BootES: an R package for bootstrap confidence intervals on effect sizes.

    PubMed

    Kirby, Kris N; Gerlanc, Daniel

    2013-12-01

    Bootstrap Effect Sizes (bootES; Gerlanc & Kirby, 2012) is a free, open-source software package for R (R Development Core Team, 2012), which is a language and environment for statistical computing. BootES computes both unstandardized and standardized effect sizes (such as Cohen's d, Hedges's g, and Pearson's r) and makes easily available for the first time the computation of their bootstrap confidence intervals (CIs). In this article, we illustrate how to use bootES to find effect sizes for contrasts in between-subjects, within-subjects, and mixed factorial designs and to find bootstrap CIs for correlations and differences between correlations. An appendix gives a brief introduction to R that will allow readers to use bootES without having prior knowledge of R. PMID:23519455
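
    bootES itself is an R package; the sketch below is only a language-neutral illustration of the underlying idea, a percentile bootstrap confidence interval for a standardized effect size (here Cohen's d for two hypothetical independent groups), and is not the bootES implementation.

        import numpy as np

        rng = np.random.default_rng(2)
        group1 = rng.normal(0.5, 1.0, 40)            # hypothetical treatment scores
        group2 = rng.normal(0.0, 1.0, 40)            # hypothetical control scores

        def cohens_d(x, y):
            nx, ny = x.size, y.size
            pooled = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2))
            return (x.mean() - y.mean()) / pooled

        boots = [cohens_d(rng.choice(group1, group1.size),    # resample each group
                          rng.choice(group2, group2.size))    # with replacement
                 for _ in range(5000)]
        lo, hi = np.percentile(boots, [2.5, 97.5])
        print(f"d = {cohens_d(group1, group2):.2f}, 95% bootstrap CI [{lo:.2f}, {hi:.2f}]")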

  7. Maximum likelihood algorithm using an efficient scheme for computing sensitivities and parameter confidence intervals

    NASA Technical Reports Server (NTRS)

    Murphy, P. C.; Klein, V.

    1984-01-01

    Improved techniques for estimating airplane stability and control derivatives and their standard errors are presented. A maximum likelihood estimation algorithm is developed which relies on an optimization scheme referred to as a modified Newton-Raphson scheme with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort compared to integrating the analytically-determined sensitivity equations or using a finite difference scheme. An aircraft estimation problem is solved using real flight data to compare MNRES with the commonly used modified Newton-Raphson technique; MNRES is found to be faster and more generally applicable. Parameter standard errors are determined using a random search technique. The confidence intervals obtained are compared with Cramer-Rao lower bounds at the same confidence level. It is observed that the nonlinearity of the cost function is an important factor in the relationship between Cramer-Rao bounds and the error bounds determined by the search technique.

  8. Analysis of accuracy of approximate, simultaneous, nonlinear confidence intervals on hydraulic heads in analytical and numerical test cases

    USGS Publications Warehouse

    Hill, M.C.

    1989-01-01

    Inaccuracies in parameter values, parameterization, stresses, and boundary conditions of analytical solutions and numerical models of groundwater flow produce errors in simulated hydraulic heads. These errors can be quantified in terms of approximate, simultaneous, nonlinear confidence intervals presented in the literature. Approximate confidence intervals can be applied in both error and sensitivity analysis and can be used prior to calibration or when calibration was accomplished by trial and error. The method is expanded for use in numerical problems, and the accuracy of the approximate intervals is evaluated using Monte Carlo runs. Four test cases are reported. -from Author

  9. Confidence interval in estimating solute loads from a small forested catchment

    NASA Astrophysics Data System (ADS)

    Tada, A.; Tanakamaru, H.

    2007-12-01

    The evaluation of uncertainty in estimating mass flux (load) from catchments plays an important role in the evaluation of chemical weathering, TMDL implementation, and so on. Loads from catchments are estimated with many methods such as weighted average, rating curve, regression model, ratio estimator, and composite method, considering the appropriate sampling strategy. Total solute loads for 10 months from a small forested catchment were calculated based on high-temporal-resolution data and used in evaluating the validity of 95% confidence intervals (CIs) of estimated loads. The effect of employing random and flow-stratified sampling methods on the 95% CIs was also evaluated. Water quality data of the small forested catchment (12.8 ha) in Japan were collected every 15 minutes during 10 months in 2004 to acquire the 'true values' of solute loads. Those data were measured by monitoring equipment using the FIP (flow injection potentiometry) method with ion-selective electrodes. Measured indices were sodium, potassium, and chloride ions in the stream water. Water quantity (discharge rate) data were measured continuously by the V-notch weir at the catchment outlet. The Beale ratio estimator was employed as the estimation method for solute loads because it is known to be an unbiased estimator. The bootstrap method was also used to calculate the 95% confidence intervals of solute loads, with 2,000 bootstrap replications. Both flow-stratified and random sampling were adopted as sampling strategies, extracting sample data sets from the entire set of observations. Discharge rate seemed to be a dominant factor in solute concentration because the catchment was almost undisturbed. The validity of the 95% CIs was evaluated using the number of times the 'true value' fell inside the CI out of 1,000 estimations derived from independently and iteratively extracted sample data sets. The number of samples in each data set was set to 5,500, 950, 470, 230, 40, and 20, equivalent to hourly, 6-hourly, 12

  10. Confidence intervals after multiple imputation: combining profile likelihood information from logistic regressions.

    PubMed

    Heinze, Georg; Ploner, Meinhard; Beyea, Jan

    2013-12-20

    In the logistic regression analysis of a small-sized, case-control study on Alzheimer's disease, some of the risk factors exhibited missing values, motivating the use of multiple imputation. Usually, Rubin's rules (RR) for combining point estimates and variances would then be used to estimate (symmetric) confidence intervals (CIs), on the assumption that the regression coefficients were distributed normally. Yet, rarely is this assumption tested, with or without transformation. In analyses of small, sparse, or nearly separated data sets, such symmetric CIs may not be reliable. Thus, RR alternatives have been considered, for example, Bayesian sampling methods, but not yet those that combine profile likelihoods, particularly penalized profile likelihoods, which can remove first-order biases and guarantee convergence of parameter estimation. To fill the gap, we consider the combination of penalized likelihood profiles (CLIP) by expressing them as posterior cumulative distribution functions (CDFs) obtained via a chi-squared approximation to the penalized likelihood ratio statistic. CDFs from multiple imputations can then easily be averaged into a combined CDF_c, allowing confidence limits for a parameter β at level 1 - α to be identified as those β* and β** that satisfy CDF_c(β*) = α/2 and CDF_c(β**) = 1 - α/2. We demonstrate that the CLIP method outperforms RR in analyzing both simulated data and data from our motivating example. CLIP can also be useful as a confirmatory tool, should it show that the simpler RR are adequate for extended analysis. We also compare the performance of CLIP to Bayesian sampling methods using Markov chain Monte Carlo. CLIP is available in the R package logistf. PMID:23873477
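
    The combination step of CLIP, averaging per-imputation CDFs and reading confidence limits off the combined CDF, can be sketched separately from the penalized-likelihood machinery. In the toy example below, ordinary normal CDFs stand in for the chi-squared-approximated profile-likelihood CDFs, and the limits β* and β** are found by root-finding on the averaged CDF; the means and standard errors are hypothetical.

        import numpy as np
        from scipy.optimize import brentq
        from scipy.stats import norm

        # stand-ins for the per-imputation CDFs (normal instead of profile-likelihood based)
        means = np.array([0.8, 1.1, 0.9, 1.3, 1.0])      # hypothetical estimates, 5 imputations
        ses = np.array([0.30, 0.35, 0.28, 0.40, 0.32])   # hypothetical standard errors

        def cdf_combined(beta):
            """Average the per-imputation CDFs at a candidate parameter value."""
            return float(np.mean(norm.cdf(beta, means, ses)))

        alpha = 0.05
        lower = brentq(lambda b: cdf_combined(b) - alpha / 2, -10.0, 10.0)
        upper = brentq(lambda b: cdf_combined(b) - (1 - alpha / 2), -10.0, 10.0)
        print(f"combined 95% CI: [{lower:.3f}, {upper:.3f}]")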

  11. A Confidence Interval for the Wallace Coefficient of Concordance and Its Application to Microbial Typing Methods

    PubMed Central

    Pinto, Francisco R.; Melo-Cristino, José; Ramirez, Mário

    2008-01-01

    Very diverse research fields frequently deal with the analysis of multiple clustering results, which should imply an objective detection of overlaps and divergences between the formed groupings. The congruence between these multiple results can be quantified by clustering comparison measures such as the Wallace coefficient (W). Since the measured congruence is dependent on the particular sample taken from the population, there is variability in the estimated values relative to those of the true population. In the present work we propose the use of a confidence interval (CI) to account for this variability when W is used. The CI analytical formula is derived assuming a Gaussian sampling distribution and using the algebraic relationship between W and Simpson's index of diversity. This relationship also allows the estimation of the expected Wallace value under the assumption of independence of classifications. We evaluated the CI performance using simulated and published microbial typing data sets. The simulations showed that the CI has the desired 95% coverage when W is greater than 0.5. This behaviour is robust to changes in cluster number, cluster size distributions and sample size. The analysis of the published data sets demonstrated the usefulness of the new CI by objectively validating some of the previous interpretations, while showing that other conclusions lacked statistical support. PMID:19002246

  12. Performance analysis of complex repairable industrial systems using PSO and fuzzy confidence interval based methodology.

    PubMed

    Garg, Harish

    2013-03-01

    The main objective of the present paper is to propose a methodology for analyzing the behavior of the complex repairable industrial systems. In real-life situations, it is difficult to find the most optimal design policies for MTBF (mean time between failures), MTTR (mean time to repair) and related costs by utilizing available resources and uncertain data. For this, the availability-cost optimization model has been constructed for determining the optimal design parameters for improving the system design efficiency. The uncertainties in the data related to each component of the system are estimated with the help of fuzzy and statistical methodology in the form of the triangular fuzzy numbers. Using these data, the various reliability parameters, which affects the system performance, are obtained in the form of the fuzzy membership function by the proposed confidence interval based fuzzy Lambda-Tau (CIBFLT) methodology. The computed results by CIBFLT are compared with the existing fuzzy Lambda-Tau methodology. Sensitivity analysis on the system MTBF has also been addressed. The methodology has been illustrated through a case study of washing unit, the main part of the paper industry. PMID:23098922

  13. Reliability and Confidence Interval Analysis of a CMC Turbine Stator Vane

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.; Gyekenyesi, John P.; Mital, Subodh K.

    2008-01-01

    an economical manner. The methods to accurately determine the service life of an engine component with associated variability have become increasingly difficult. This results, in part, from the complex missions which are now routinely considered during the design process. These missions include large variations of multi-axial stresses and temperatures experienced by critical engine parts. There is a need for a convenient design tool that can accommodate various loading conditions induced by engine operating environments, and material data with their associated uncertainties to estimate the minimum predicted life of a structural component. A probabilistic composite micromechanics technique in combination with woven composite micromechanics, structural analysis and Fast Probability Integration (FPI) techniques has been used to evaluate the maximum stress and its probabilistic distribution in a CMC turbine stator vane. Furthermore, input variables causing scatter are identified and ranked based upon their sensitivity magnitude. Since the measured data for the ceramic matrix composite properties is very limited, obtaining a probabilistic distribution with their corresponding parameters is difficult. In case of limited data, confidence bounds are essential to quantify the uncertainty associated with the distribution. Usually 90 and 95% confidence intervals are computed for material properties. Failure properties are then computed with the confidence bounds. Best estimates and the confidence bounds on the best estimate of the cumulative probability function for R-S (strength - stress) are plotted. The methodologies and the results from these analyses will be discussed in the presentation.

  14. A method for establishing absolute full-energy peak efficiency and its confidence interval for HPGe detectors

    NASA Astrophysics Data System (ADS)

    Rizwan, U.; Chester, A.; Domingo, T.; Starosta, K.; Williams, J.; Voss, P.

    2015-12-01

    A method is proposed for establishing the absolute efficiency calibration of a HPGe detector including the confidence interval in the energy range of 79.6-3451.2 keV. The calibrations were accomplished with the 133Ba, 60Co, 56Co and 152Eu point-like radioactive sources with only the 60Co source being activity calibrated to an accuracy of 2% at the 90% confidence level. All data sets measured from activity calibrated and uncalibrated sources were fit simultaneously using the linearized least squares method. The proposed fit function accounts for scaling of the data taken with activity uncalibrated sources to the data taken with the high accuracy activity calibrated source. The confidence interval for the fit was found analytically using the covariance matrix. Accuracy of the fit was below 3.5% at the 90% confidence level in the 79.6-3451.2 keV energy range.
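
    The two ingredients named in the abstract, a linearized least-squares fit and a confidence band propagated through the covariance matrix, can be sketched as follows. The calibration points below are hypothetical (they are not the measured data), the fit is a polynomial in log energy, and the band at each energy is z * sqrt(g^T C g) with g the gradient of the fit with respect to the coefficients.

        import numpy as np

        # hypothetical full-energy peak efficiency points (energy in keV)
        energy = np.array([81.0, 121.8, 344.3, 778.9, 1173.2, 1332.5, 2598.5, 3451.2])
        eff = np.array([0.0120, 0.0150, 0.0085, 0.0047, 0.0035, 0.0032, 0.0019, 0.0015])

        deg = 2
        x, y = np.log(energy), np.log(eff)
        coef, cov = np.polyfit(x, y, deg, cov=True)      # linearized least-squares fit

        def efficiency_band(e_kev, z=1.645):             # ~90% confidence band
            g = np.vander(np.log(e_kev), deg + 1)        # gradient of the fit w.r.t. coefficients
            mid = g @ coef
            var = np.einsum("ij,jk,ik->i", g, cov, g)    # g^T C g at each energy
            return np.exp(mid), np.exp(mid - z * np.sqrt(var)), np.exp(mid + z * np.sqrt(var))

        e_eval = np.array([150.0, 1000.0, 3000.0])
        fit, lo, hi = efficiency_band(e_eval)
        for e, f, l, h in zip(e_eval, fit, lo, hi):
            print(f"E = {e:6.1f} keV  efficiency ~ {f:.4f}  90% band [{l:.4f}, {h:.4f}]")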

  15. Constructing bootstrap confidence intervals for principal component loadings in the presence of missing data: a multiple-imputation approach.

    PubMed

    van Ginkel, Joost R; Kiers, Henk A L

    2011-11-01

    Earlier research has shown that bootstrap confidence intervals from principal component loadings give a good coverage of the population loadings. However, this only applies to complete data. When data are incomplete, missing data have to be handled before analysing the data. Multiple imputation may be used for this purpose. The question is how bootstrap confidence intervals for principal component loadings should be corrected for multiply imputed data. In this paper, several solutions are proposed. Simulations show that the proposed corrections for multiply imputed data give a good coverage of the population loadings in various situations. PMID:21973098

  16. A comparison study of modal parameter confidence intervals computed using the Monte Carlo and Bootstrap techniques

    SciTech Connect

    Doebling, S.W.; Farrar, C.R.; Cornwell, P.J.

    1998-02-01

    This paper presents a comparison of two techniques used to estimate the statistical confidence intervals on modal parameters identified from measured vibration data. The first technique is Monte Carlo simulation, which involves the repeated simulation of random data sets based on the statistics of the measured data and an assumed distribution of the variability in the measured data. A standard modal identification procedure is repeatedly applied to the randomly perturbed data sets to form a statistical distribution on the identified modal parameters. The second technique is the Bootstrap approach, where individual Frequency Response Function (FRF) measurements are randomly selected with replacement to form an ensemble average. This procedure, in effect, randomly weights the various FRF measurements. These weighted averages of the FRFs are then put through the modal identification procedure. The modal parameters identified from each randomly weighted data set are then used to define a statistical distribution for these parameters. The basic difference in the two techniques is that the Monte Carlo technique requires the assumption on the form of the distribution of the variability in the measured data, while the bootstrap technique does not. Also, the Monte Carlo technique can only estimate random errors, while the bootstrap statistics represent both random and bias (systematic) variability such as that arising from changing environmental conditions. However, the bootstrap technique requires that every frequency response function be saved for each average during the data acquisition process. Neither method can account for bias introduced during the estimation of the FRFs. This study has been motivated by a program to develop vibration-based damage identification procedures.
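
    The bootstrap half of the comparison can be sketched compactly: individual FRF measurements are resampled with replacement, averaged, and a modal parameter (here simply the frequency of the peak magnitude) is extracted from each bootstrap average. The FRFs below are synthetic single-mode responses, not measured data.

        import numpy as np

        rng = np.random.default_rng(3)
        freqs = np.linspace(5.0, 15.0, 400)              # Hz
        fn, zeta = 10.0, 0.02                            # true natural frequency and damping

        def one_frf():
            """One noisy FRF measurement of a single-degree-of-freedom system."""
            r = freqs / fn
            h = 1.0 / (1.0 - r**2 + 2j * zeta * r)
            noise = rng.normal(0, 0.3, freqs.size) + 1j * rng.normal(0, 0.3, freqs.size)
            return h + noise

        frfs = np.array([one_frf() for _ in range(30)])  # ensemble of 30 measurements

        peaks = []
        for _ in range(1000):                            # bootstrap: resample FRFs with replacement
            sample = frfs[rng.integers(0, len(frfs), len(frfs))]
            avg = np.abs(sample.mean(axis=0))            # randomly weighted average FRF
            peaks.append(freqs[np.argmax(avg)])          # modal frequency from the averaged FRF

        lo, hi = np.percentile(peaks, [2.5, 97.5])
        print(f"natural frequency: {np.mean(peaks):.3f} Hz, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")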

  17. Adjusted Wald Confidence Interval for a Difference of Binomial Proportions Based on Paired Data

    ERIC Educational Resources Information Center

    Bonett, Douglas G.; Price, Robert M.

    2012-01-01

    Adjusted Wald intervals for binomial proportions in one-sample and two-sample designs have been shown to perform about as well as the best available methods. The adjusted Wald intervals are easy to compute and have been incorporated into introductory statistics courses. An adjusted Wald interval for paired binomial proportions is proposed here and…

  18. Sample Size Planning for the Squared Multiple Correlation Coefficient: Accuracy in Parameter Estimation via Narrow Confidence Intervals

    ERIC Educational Resources Information Center

    Kelley, Ken

    2008-01-01

    Methods of sample size planning are developed from the accuracy in parameter approach in the multiple regression context in order to obtain a sufficiently narrow confidence interval for the population squared multiple correlation coefficient when regressors are random. Approximate and exact methods are developed that provide necessary sample size…

  19. Confidence Intervals for the Probability of Superiority Effect Size Measure and the Area under a Receiver Operating Characteristic Curve

    ERIC Educational Resources Information Center

    Ruscio, John; Mullen, Tara

    2012-01-01

    It is good scientific practice to report an appropriate estimate of effect size and a confidence interval (CI) to indicate the precision with which a population effect was estimated. For comparisons of 2 independent groups, a probability-based effect size estimator (A) that is equal to the area under a receiver operating characteristic curve…
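
    The A statistic referred to above equals the area under the ROC curve for two independent groups, P(X > Y) + 0.5 P(X = Y). A short percentile-bootstrap sketch with hypothetical data (not the authors' procedure):

        import numpy as np

        rng = np.random.default_rng(4)
        x = rng.normal(1.0, 1.0, 35)                     # hypothetical scores, group 1
        y = rng.normal(0.4, 1.0, 40)                     # hypothetical scores, group 2

        def prob_superiority(x, y):
            """A = P(X > Y) + 0.5 * P(X = Y), computed over all pairs (equals the AUC)."""
            diff = x[:, None] - y[None, :]
            return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

        boots = [prob_superiority(rng.choice(x, x.size), rng.choice(y, y.size))
                 for _ in range(5000)]
        lo, hi = np.percentile(boots, [2.5, 97.5])
        print(f"A = {prob_superiority(x, y):.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")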

  20. On the appropriateness of applying chi-square distribution based confidence intervals to spectral estimates of helicopter flyover data

    NASA Technical Reports Server (NTRS)

    Rutledge, Charles K.

    1988-01-01

    The validity of applying chi-square based confidence intervals to far-field acoustic flyover spectral estimates was investigated. Simulated data using a Kendall series, and experimental acoustic data from the NASA/McDonnell Douglas 500E acoustics test, were analyzed. Statistical significance tests to determine the equality of distributions of the simulated and experimental data relative to theoretical chi-square distributions were performed. Bias and uncertainty errors associated with the spectral estimates were easily identified from the data sets. A model relating the uncertainty and bias errors to the estimates resulted, which aided in determining the appropriateness of the chi-square distribution based confidence intervals. Such confidence intervals were appropriate for nontonally associated frequencies of the experimental data but were inappropriate for tonally associated estimate distributions. The inappropriateness at the tonally associated frequencies was indicated by the presence of bias error and nonconformity of the distributions to the theoretical chi-square distribution. A technique for determining appropriate confidence intervals at the tonally associated frequencies was suggested.
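
    The chi-square interval in question follows from treating an averaged spectral estimate as approximately distributed as P * chi2_k / k, with k the equivalent degrees of freedom (roughly 2 per averaged segment for Gaussian-like, nontonal data). A minimal sketch under those assumptions:

        from scipy.stats import chi2

        def psd_confidence_interval(p_hat, n_averages, conf=0.95):
            """Chi-square CI for a PSD estimate formed by averaging independent segments."""
            k = 2 * n_averages                           # ~2 degrees of freedom per segment
            alpha = 1.0 - conf
            lower = k * p_hat / chi2.ppf(1 - alpha / 2, k)
            upper = k * p_hat / chi2.ppf(alpha / 2, k)
            return lower, upper

        print(psd_confidence_interval(p_hat=1.0, n_averages=16))   # band around a unit estimate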

  1. Confidence Intervals for Effect Sizes: Compliance and Clinical Significance in the "Journal of Consulting and Clinical Psychology"

    ERIC Educational Resources Information Center

    Odgaard, Eric C.; Fowler, Robert L.

    2010-01-01

    Objective: In 2005, the "Journal of Consulting and Clinical Psychology" ("JCCP") became the first American Psychological Association (APA) journal to require statistical measures of clinical significance, plus effect sizes (ESs) and associated confidence intervals (CIs), for primary outcomes (La Greca, 2005). As this represents the single largest…

  2. Population Validity and Cross-Validity: Applications of Distribution Theory for Testing Hypotheses, Setting Confidence Intervals, and Determining Sample Size

    ERIC Educational Resources Information Center

    Algina, James; Keselman, H. J.

    2008-01-01

    Applications of distribution theory for the squared multiple correlation coefficient and the squared cross-validation coefficient are reviewed, and computer programs for these applications are made available. The applications include confidence intervals, hypothesis testing, and sample size selection. (Contains 2 tables.)

  3. Effect size, confidence interval and statistical significance: a practical guide for biologists.

    PubMed

    Nakagawa, Shinichi; Cuthill, Innes C

    2007-11-01

    Null hypothesis significance testing (NHST) is the dominant statistical approach in biology, although it has many, frequently unappreciated, problems. Most importantly, NHST does not provide us with two crucial pieces of information: (1) the magnitude of an effect of interest, and (2) the precision of the estimate of the magnitude of that effect. All biologists should be ultimately interested in biological importance, which may be assessed using the magnitude of an effect, but not its statistical significance. Therefore, we advocate presentation of measures of the magnitude of effects (i.e. effect size statistics) and their confidence intervals (CIs) in all biological journals. Combined use of an effect size and its CIs enables one to assess the relationships within data more effectively than the use of p values, regardless of statistical significance. In addition, routine presentation of effect sizes will encourage researchers to view their results in the context of previous research and facilitate the incorporation of results into future meta-analysis, which has been increasingly used as the standard method of quantitative review in biology. In this article, we extensively discuss two dimensionless (and thus standardised) classes of effect size statistics: d statistics (standardised mean difference) and r statistics (correlation coefficient), because these can be calculated from almost all study designs and also because their calculations are essential for meta-analysis. However, our focus on these standardised effect size statistics does not mean unstandardised effect size statistics (e.g. mean difference and regression coefficient) are less important. We provide potential solutions for four main technical problems researchers may encounter when calculating effect size and CIs: (1) when covariates exist, (2) when bias in estimating effect size is possible, (3) when data have non-normal error structure and/or variances, and (4) when data are non

  4. CONFIDENCE INTERVALS FOR A CROP YIELD LOSS FUNCTION IN NONLINEAR REGRESSION

    EPA Science Inventory

    Quantifying the relationship between chronic pollutant exposure and the ensuing biological response requires consideration of nonlinear functions that are flexible enough to generate a wide range of response curves. The linear approximation (i.e., Wald's) interval estimates for oz...

  5. A numerical approach to 14C wiggle-match dating of organic deposits: best fits and confidence intervals

    NASA Astrophysics Data System (ADS)

    Blaauw, Maarten; Heuvelink, Gerard B. M.; Mauquoy, Dmitri; van der Plicht, Johannes; van Geel, Bas

    2003-06-01

    14C wiggle-match dating (WMD) of peat deposits uses the non-linear relationship between 14C age and calendar age to match the shape of a sequence of closely spaced peat 14C dates with the 14C calibration curve. A numerical approach to WMD enables the quantitative assessment of various possible wiggle-match solutions and of calendar year confidence intervals for sequences of 14C dates. We assess the assumptions, advantages, and limitations of the method. Several case-studies show that WMD results in more precise chronologies than when individual 14C dates are calibrated. WMD is most successful during periods with major excursions in the 14C calibration curve (e.g., in one case WMD could narrow down confidence intervals from 230 to 36 yr).

  6. Monte Carlo simulation of parameter confidence intervals for non-linear regression analysis of biological data using Microsoft Excel.

    PubMed

    Lambert, Ronald J W; Mytilinaios, Ioannis; Maitland, Luke; Brown, Angus M

    2012-08-01

    This study describes a method to obtain parameter confidence intervals from the fitting of non-linear functions to experimental data, using the SOLVER and Analysis ToolPaK Add-In of the Microsoft Excel spreadsheet. Previously we have shown that Excel can fit complex multiple functions to biological data, obtaining values equivalent to those returned by more specialized statistical or mathematical software. However, a disadvantage of using the Excel method was the inability to return confidence intervals for the computed parameters or the correlations between them. Using a simple Monte-Carlo procedure within the Excel spreadsheet (without recourse to programming), SOLVER can provide parameter estimates (up to 200 at a time) for multiple 'virtual' data sets, from which the required confidence intervals and correlation coefficients can be obtained. The general utility of the method is exemplified by applying it to the analysis of the growth of Listeria monocytogenes, the growth inhibition of Pseudomonas aeruginosa by chlorhexidine and the further analysis of the electrophysiological data from the compound action potential of the rodent optic nerve. PMID:21764476
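
    The same Monte Carlo idea carries over directly to a scripting environment. The sketch below (hypothetical growth-curve data, scipy's curve_fit in place of Excel's SOLVER) fits a logistic function, then repeatedly refits synthetic data built from the fitted curve plus resampled residuals and takes percentiles of the refitted parameters as confidence intervals.

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(t, a, k, t0):
            return a / (1.0 + np.exp(-k * (t - t0)))

        rng = np.random.default_rng(5)
        t = np.linspace(0.0, 24.0, 25)                           # hypothetical sampling times (h)
        y = logistic(t, 9.0, 0.45, 10.0) + rng.normal(0.0, 0.3, t.size)

        p_hat, _ = curve_fit(logistic, t, y, p0=[8.0, 0.5, 9.0])
        resid = y - logistic(t, *p_hat)

        samples = []
        for _ in range(500):                                     # Monte Carlo 'virtual' data sets
            y_sim = logistic(t, *p_hat) + rng.choice(resid, resid.size)
            p_sim, _ = curve_fit(logistic, t, y_sim, p0=p_hat)
            samples.append(p_sim)

        lo, hi = np.percentile(samples, [2.5, 97.5], axis=0)
        for name, est, l, h in zip(["a", "k", "t0"], p_hat, lo, hi):
            print(f"{name}: {est:.3f}  95% CI [{l:.3f}, {h:.3f}]")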

  7. Confidence Intervals for Squared Semipartial Correlation Coefficients: The Effect of Nonnormality

    ERIC Educational Resources Information Center

    Algina, James; Keselman, H. J.; Penfield, Randall D.

    2010-01-01

    The increase in the squared multiple correlation coefficient ([delta]R[superscript 2]) associated with a variable in a regression equation is a commonly used measure of importance in regression analysis. Algina, Keselman, and Penfield found that intervals based on asymptotic principles were typically very inaccurate, even though the sample size…

  8. Replication, "p[subscript rep]," and Confidence Intervals: Comment Prompted by Iverson, Wagenmakers, and Lee (2010); Lecoutre, Lecoutre, and Poitevineau (2010); and Maraun and Gabriel (2010)

    ERIC Educational Resources Information Center

    Cumming, Geoff

    2010-01-01

    This comment offers three descriptions of "p[subscript rep]" that start with a frequentist account of confidence intervals, draw on R. A. Fisher's fiducial argument, and do not make Bayesian assumptions. Links are described among "p[subscript rep]," "p" values, and the probability a confidence interval will capture the mean of a replication…

  9. Statistical damage detection method for frame structures using a confidence interval

    NASA Astrophysics Data System (ADS)

    Li, Weiming; Zhu, Hongping; Luo, Hanbin; Xia, Yong

    2010-03-01

    A novel damage detection method is applied to a 3-story frame structure to obtain statistical quantification control criteria for the existence, location, and identification of damage. The mean, standard deviation, and exponentially weighted moving average (EWMA) are applied to detect damage information according to statistical process control (SPC) theory. It is concluded that detection based on the mean and the EWMA is insignificant because the structural response is neither independent nor normally distributed. On the other hand, the damage information is detected well with the standard deviation, because this parameter is less affected by the data distribution. A suitable moderate confidence level is explored for more significant damage location and quantification detection, and the impact of noise is investigated to illustrate the robustness of the method.
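
    As a concrete illustration of the SPC machinery involved, the sketch below computes an EWMA chart and its asymptotic control limits for a hypothetical sequence of damage-sensitive features; in the study summarized above it was the standard deviation of the response, rather than the mean or the EWMA, that detected damage reliably.

        import numpy as np

        rng = np.random.default_rng(6)
        baseline = rng.normal(0.0, 1.0, 200)             # feature values, undamaged state
        feature = np.concatenate([rng.normal(0.0, 1.0, 100),
                                  rng.normal(0.8, 1.0, 100)])   # mean shift after 'damage'

        lam = 0.2                                        # EWMA smoothing constant
        mu0, sigma0 = baseline.mean(), baseline.std(ddof=1)

        z = np.empty(feature.size)
        z[0] = lam * feature[0] + (1 - lam) * mu0
        for i in range(1, feature.size):
            z[i] = lam * feature[i] + (1 - lam) * z[i - 1]

        limit = 3 * sigma0 * np.sqrt(lam / (2 - lam))    # asymptotic 3-sigma control limits
        out = np.where(np.abs(z - mu0) > limit)[0]
        print("first out-of-control sample:", int(out[0]) if out.size else "none")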

  10. Experimental optimization of the number of blocks by means of algorithms parameterized by confidence interval in popcorn breeding.

    PubMed

    Paula, T O M; Marinho, C D; Amaral Júnior, A T; Peternelli, L A; Gonçalves, L S A

    2013-01-01

    The objective of this study was to determine the optimal number of repetitions to be used in competition trials of popcorn traits related to production and quality, including grain yield and expansion capacity. The experiments were conducted in 3 environments representative of the north and northwest regions of the State of Rio de Janeiro with 10 Brazilian genotypes of popcorn, consisting of 4 commercial hybrids (IAC 112, IAC 125, Zélia, and Jade), 4 improved varieties (BRS Ângela, UFVM-2 Barão de Viçosa, Beija-flor, and Viçosa) and 2 experimental populations (UNB2U-C3 and UNB2U-C4). The experimental design utilized was a randomized complete block design with 7 repetitions. The Bootstrap method was employed to obtain samples of all of the possible combinations within the 7 blocks. Subsequently, the confidence intervals of the parameters of interest were calculated for all simulated data sets. The optimal number of repetitions for each trait was considered reached when all of the estimates of the parameters in question fell within the confidence interval. The estimates of the number of repetitions varied according to the parameter estimated, variable evaluated, and environment cultivated, ranging from 2 to 7. It is believed that only the expansion capacity trait in the Colégio Agrícola environment (for residual variance and coefficient of variation) and the number of ears per plot in the Itaocara environment (for coefficient of variation) needed 7 repetitions to fall within the confidence interval. Thus, for the 3 studies conducted, we can conclude that 6 repetitions are optimal for obtaining high experimental precision. PMID:23913390

  11. Solar PV power generation forecasting using hybrid intelligent algorithms and uncertainty quantification based on bootstrap confidence intervals

    NASA Astrophysics Data System (ADS)

    AlHakeem, Donna Ibrahim

    This thesis focuses on short-term photovoltaic forecasting (STPVF) for the power generation of a solar PV system using probabilistic forecasts and deterministic forecasts. Uncertainty estimation, in the form of a probabilistic forecast, is emphasized in this thesis to quantify the uncertainties of the deterministic forecasts. Two hybrid intelligent models are proposed in two separate chapters to perform the STPVF. In Chapter 4, the framework of the deterministic proposed hybrid intelligent model is presented, which is a combination of wavelet transform (WT) that is a data filtering technique and a soft computing model (SCM) that is generalized regression neural network (GRNN). Additionally, this chapter proposes a model that is combined as WT+GRNN and is utilized to conduct the forecast of two random days in each season for 1-hour-ahead to find the power generation. The forecasts are analyzed utilizing accuracy measures equations to determine the model performance and compared with another SCM. In Chapter 5, the framework of the proposed model is presented, which is a combination of WT, a SCM based on radial basis function neural network (RBFNN), and a population-based stochastic particle swarm optimization (PSO). Chapter 5 proposes a model combined as a deterministic approach that is represented as WT+RBFNN+PSO, and then a probabilistic forecast is conducted utilizing bootstrap confidence intervals to quantify uncertainty from the output of WT+RBFNN+PSO. In Chapter 5, the forecasts are conducted by furthering the tests done in Chapter 4. Chapter 5 forecasts the power generation of two random days in each season for 1-hour-ahead, 3-hour-ahead, and 6-hour-ahead. Additionally, different types of days were also forecasted in each season such as a sunny day (SD), cloudy day (CD), and a rainy day (RD). These forecasts were further analyzed using accuracy measures equations, variance and uncertainty estimation. The literature that is provided supports that the proposed

  12. The Interpretation of Scholars' Interpretations of Confidence Intervals: Criticism, Replication, and Extension of Hoekstra et al. (2014).

    PubMed

    García-Pérez, Miguel A; Alcalá-Quintana, Rocío

    2016-01-01

    Hoekstra et al. (Psychonomic Bulletin & Review, 2014, 21:1157-1164) surveyed the interpretation of confidence intervals (CIs) by first-year students, master students, and researchers with six items expressing misinterpretations of CIs. They asked respondents to answer all items, computed the number of items endorsed, and concluded that misinterpretation of CIs is robust across groups. Their design may have produced this outcome artifactually for reasons that we describe. This paper discusses first the two interpretations of CIs and, hence, why misinterpretation cannot be inferred from endorsement of some of the items. Next, a re-analysis of Hoekstra et al.'s data reveals some puzzling differences between first-year and master students that demand further investigation. For that purpose, we designed a replication study with an extended questionnaire including two additional items that express correct interpretations of CIs (to compare endorsement of correct vs. nominally incorrect interpretations) and we asked master students to indicate which items they would have omitted had they had the option (to distinguish deliberate from uninformed endorsement caused by the forced-response format). Results showed that incognizant first-year students endorsed correct and nominally incorrect items identically, revealing that the two item types are not differentially attractive superficially; in contrast, master students were distinctively more prone to endorsing correct items when their uninformed responses were removed, although they admitted to nescience more often than might have been expected. Implications for teaching practices are discussed. PMID:27458424

  13. Temperature dependence of the rate and activation parameters for tert-butyl chloride solvolysis: Monte Carlo simulation of confidence intervals

    NASA Astrophysics Data System (ADS)

    Sung, Dae Dong; Kim, Jong-Youl; Lee, Ikchoon; Chung, Sung Sik; Park, Kwon Ha

    2004-07-01

    The solvolysis rate constants (k_obs) of tert-butyl chloride are measured in 20% (v/v) 2-PrOH-H2O mixture at 15 temperatures ranging from 0 to 39 °C. Examination of the temperature dependence of the rate constants by weighted least squares fitting to equations with two to four terms has led to the three-term form, ln k_obs = a1 + a2 T^(-1) + a3 ln T, as the best expression. The activation parameters, ΔH‡ and ΔS‡, calculated using the three constants a1, a2 and a3, reveal steady decreases of ≈1 kJ mol^-1 per degree and 3.5 J K^-1 mol^-1 per degree, respectively, as the temperature rises. The sign change of ΔS‡ at ≈20.0 °C and the large negative heat capacity of activation derived, ΔCp‡ = -1020 J K^-1 mol^-1, are interpreted to indicate an SN1 mechanism and a net change from water structure breaking to electrostrictive solvation due to the partially ionic transition state. Confidence intervals estimated by the Monte Carlo method are far more precise than those obtained by the conventional method.

  14. The Interpretation of Scholars' Interpretations of Confidence Intervals: Criticism, Replication, and Extension of Hoekstra et al. (2014)

    PubMed Central

    García-Pérez, Miguel A.; Alcalá-Quintana, Rocío

    2016-01-01

    Hoekstra et al. (Psychonomic Bulletin & Review, 2014, 21:1157–1164) surveyed the interpretation of confidence intervals (CIs) by first-year students, master students, and researchers with six items expressing misinterpretations of CIs. They asked respondents to answer all items, computed the number of items endorsed, and concluded that misinterpretation of CIs is robust across groups. Their design may have produced this outcome artifactually for reasons that we describe. This paper discusses first the two interpretations of CIs and, hence, why misinterpretation cannot be inferred from endorsement of some of the items. Next, a re-analysis of Hoekstra et al.'s data reveals some puzzling differences between first-year and master students that demand further investigation. For that purpose, we designed a replication study with an extended questionnaire including two additional items that express correct interpretations of CIs (to compare endorsement of correct vs. nominally incorrect interpretations) and we asked master students to indicate which items they would have omitted had they had the option (to distinguish deliberate from uninformed endorsement caused by the forced-response format). Results showed that incognizant first-year students endorsed correct and nominally incorrect items identically, revealing that the two item types are not differentially attractive superficially; in contrast, master students were distinctively more prone to endorsing correct items when their uninformed responses were removed, although they admitted to nescience more often than might have been expected. Implications for teaching practices are discussed. PMID:27458424

  15. Bootstrap Signal-to-Noise Confidence Intervals: An Objective Method for Subject Exclusion and Quality Control in ERP Studies

    PubMed Central

    Parks, Nathan A.; Gannon, Matthew A.; Long, Stephanie M.; Young, Madeleine E.

    2016-01-01

    Analysis of event-related potential (ERP) data includes several steps to ensure that ERPs meet an appropriate level of signal quality. One such step, subject exclusion, rejects subject data if ERP waveforms fail to meet an appropriate level of signal quality. Subject exclusion is an important quality control step in the ERP analysis pipeline as it ensures that statistical inference is based only upon those subjects exhibiting clear evoked brain responses. This critical quality control step is most often performed simply through visual inspection of subject-level ERPs by investigators. Such an approach is qualitative, subjective, and susceptible to investigator bias, as there are no standards as to what constitutes an ERP of sufficient signal quality. Here, we describe a standardized and objective method for quantifying waveform quality in individual subjects and establishing criteria for subject exclusion. The approach uses bootstrap resampling of ERP waveforms (from a pool of all available trials) to compute a signal-to-noise ratio confidence interval (SNR-CI) for individual subject waveforms. The lower bound of this SNR-CI (SNRLB) yields an effective and objective measure of signal quality as it ensures that ERP waveforms statistically exceed a desired signal-to-noise criterion. SNRLB provides a quantifiable metric of individual subject ERP quality and eliminates the need for subjective evaluation of waveform quality by the investigator. We detail the SNR-CI methodology, establish the efficacy of employing this approach with Monte Carlo simulations, and demonstrate its utility in practice when applied to ERP datasets. PMID:26903849

  16. Five-band microwave radiometer system for noninvasive brain temperature measurement in newborn babies: Phantom experiment and confidence interval

    NASA Astrophysics Data System (ADS)

    Sugiura, T.; Hirata, H.; Hand, J. W.; van Leeuwen, J. M. J.; Mizushina, S.

    2011-10-01

    Clinical trials of hypothermic brain treatment for newborn babies are currently hindered by the difficulty in measuring deep brain temperatures. One of the possible methods for noninvasive and continuous temperature monitoring that is completely passive and inherently safe is passive microwave radiometry (MWR). We have developed a five-band microwave radiometer system with a single dual-polarized, rectangular waveguide antenna operating within the 1-4 GHz range and a method for retrieving the temperature profile from five radiometric brightness temperatures. This paper addresses (1) the temperature calibration for five microwave receivers, (2) the measurement experiment using a phantom model that mimics the temperature profile in a newborn baby, and (3) the feasibility of noninvasive monitoring of deep brain temperatures. Temperature resolutions were 0.103, 0.129, 0.138, 0.105 and 0.111 K for the 1.2, 1.65, 2.3, 3.0 and 3.6 GHz receivers, respectively. The precision of temperature estimation (2σ confidence interval) was about 0.7°C at a 5-cm depth from the phantom surface. Accuracy, which is the difference between the temperature estimated using this system and the temperature measured by a thermocouple at a depth of 5 cm, was about 2°C. The current result is not satisfactory for clinical application because the clinical requirement is that both precision and accuracy be better than 1°C at a depth of 5 cm. Since a couple of possible causes for this inaccuracy have been identified, we believe that the system can take a step closer to the clinical application of MWR for hypothermic rescue treatment.

  17. Effect of Minimum Cell Sizes and Confidence Interval Sizes for Special Education Subgroups on School-Level AYP Determinations. Synthesis Report 61

    ERIC Educational Resources Information Center

    Simpson, Mary Ann; Gong, Brian; Marion, Scott

    2006-01-01

    This study addresses three questions: First, considering the full group of students and the special education subgroup, what is the likely effect of minimum cell size and confidence interval size on school-level Adequate Yearly Progress (AYP) determinations? Second, what effects do the changing minimum cell sizes have on inclusion of special…

  18. Confidence Interval Methods for Coefficient Alpha on the Basis of Discrete, Ordinal Response Items: Which One, If Any, Is the Best?

    ERIC Educational Resources Information Center

    Romano, Jeanine L.; Kromrey, Jeffrey D.; Owens, Corina M.; Scott, Heather M.

    2011-01-01

    In this study, the authors aimed to examine 8 of the different methods for computing confidence intervals around alpha that have been proposed to determine which of these, if any, is the most accurate and precise. Monte Carlo methods were used to simulate samples under known and controlled population conditions wherein the underlying item…

  19. Confidence Intervals, Power Calculation, and Sample Size Estimation for the Squared Multiple Correlation Coefficient under the Fixed and Random Regression Models: A Computer Program and Useful Standard Tables.

    ERIC Educational Resources Information Center

    Mendoza, Jorge L.; Stafford, Karen L.

    2001-01-01

    Introduces a computer package written for Mathematica, the purpose of which is to perform a number of difficult iterative functions with respect to the squared multiple correlation coefficient under the fixed and random models. These functions include computation of the confidence interval upper and lower bounds, power calculation, calculation of…

  20. Confidence interval estimation for an empirical model quantifying the effect of soil moisture and plant development on soybean (Glycine max (L.) Merr.) leaf conductance

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In this work, we address uncertainty analysis for a model, presented in a companion paper, quantifying the effect of soil moisture and plant development on soybean (Glycine max (L.) Merr.) leaf conductance. To achieve this we present several methods for confidence interval estimation. Estimation ...

  1. Corn stover semi-mechanistic enzymatic hydrolysis model with tight parameter confidence intervals for model-based process design and optimization.

    PubMed

    Scott, Felipe; Li, Muyang; Williams, Daniel L; Conejeros, Raúl; Hodge, David B; Aroca, Germán

    2015-02-01

    Uncertainty associated with the estimated values of the parameters in a model is a key piece of information for decision makers and model users. However, this information is typically not reported, or the confidence intervals are too large to be useful. A semi-mechanistic model for the enzymatic saccharification of dilute acid pretreated corn stover is proposed in this work; the model is a modification of an existing one and provides a statistically significant improvement in fit to a set of experimental data that includes varying initial solid loadings (10-25% w/w) and the use of the pretreatment liquor and washed solids with or without supplementation of key inhibitors. A subset of 8 out of 17 parameters was identified with confidence intervals sufficiently tight to be used in uncertainty propagation and model analysis, without requiring interval truncation via expert judgment. PMID:25496946

  2. CONFIDENCE INTERVALS AND CURVATURE MEASURES IN NONLINEAR REGRESSION USING THE IML AND NLIN PROCEDURES IN SAS SOFTWARE

    EPA Science Inventory

    Interval estimates for nonlinear parameters using the linear approximation are sensitive to parameter curvature effects. The adequacy of the linear approximation (Wald) interval is determined using the nonlinearity measures of Bates and Watts (1980), and Clarke (1987b), and the pr...

  3. The Confidence-Accuracy Relationship for Eyewitness Identification Decisions: Effects of Exposure Duration, Retention Interval, and Divided Attention

    ERIC Educational Resources Information Center

    Palmer, Matthew A.; Brewer, Neil; Weber, Nathan; Nagesh, Ambika

    2013-01-01

    Prior research points to a meaningful confidence-accuracy (CA) relationship for positive identification decisions. However, there are theoretical grounds for expecting that different aspects of the CA relationship (calibration, resolution, and over/underconfidence) might be undermined in some circumstances. This research investigated whether the…

  4. Evaluating the Impact of Guessing and Its Interactions with Other Test Characteristics on Confidence Interval Procedures for Coefficient Alpha

    ERIC Educational Resources Information Center

    Paek, Insu

    2016-01-01

    The effect of guessing on the point estimate of coefficient alpha has been studied in the literature, but the impact of guessing and its interactions with other test characteristics on the interval estimators for coefficient alpha has not been fully investigated. This study examined the impact of guessing and its interactions with other test…

  5. Using a Nonparametric Bootstrap to Obtain a Confidence Interval for Pearson's "r" with Cluster Randomized Data: A Case Study

    ERIC Educational Resources Information Center

    Wagstaff, David A.; Elek, Elvira; Kulis, Stephen; Marsiglia, Flavio

    2009-01-01

    A nonparametric bootstrap was used to obtain an interval estimate of Pearson's "r," and test the null hypothesis that there was no association between 5th grade students' positive substance use expectancies and their intentions to not use substances. The students were participating in a substance use prevention program in which the unit of…

  6. On the Proper Estimation of the Confidence Interval for the Design Formula of Blast-Induced Vibrations with Site Records

    NASA Astrophysics Data System (ADS)

    Yan, W. M.; Yuen, Ka-Veng

    2015-01-01

    Blast-induced ground vibration has received much engineering and public attention. The vibration is often represented by the peak particle velocity (PPV) and the empirical approach is employed to describe the relationship between the PPV and the scaled distance. Different statistical methods are often used to obtain the confidence level of the prediction. With a known scaled distance, the amount of explosives in a planned blast can then be determined by a blast engineer when the PPV limit and the confidence level of the vibration magnitude are specified. This paper shows that these current approaches do not incorporate the posterior uncertainty of the fitting coefficients. In order to resolve this problem, a Bayesian method is proposed to derive the site-specific fitting coefficients based on a small amount of data collected at an early stage of a blasting project. More importantly, uncertainty of both the fitting coefficients and the design formula can be quantified. Data collected from a site formation project in Hong Kong are used to illustrate the performance of the proposed method. It is shown that the proposed method resolves the underestimation problem in one of the conventional approaches. The proposed approach can be easily conducted using spreadsheet calculation without the need for any additional tools, so it will be particularly welcomed by practicing engineers.
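
    For context, the conventional empirical treatment that the paper improves upon can be sketched as a log-linear least-squares fit of PPV against scaled distance with a one-sided prediction bound (a rough frequentist sketch only; the scaled distances, PPV values and confidence level below are hypothetical, and the paper's Bayesian method additionally propagates posterior uncertainty in the fitting coefficients):

        import numpy as np
        from scipy import stats

        # Hypothetical site records: scaled distance SD (m/kg^0.5) and PPV (mm/s).
        sd = np.array([ 5.0,  8.0, 12.0, 20.0, 30.0, 45.0])
        ppv = np.array([55.0, 30.0, 18.0,  9.0,  5.5,  3.0])

        # Conventional empirical form PPV = K * SD**(-beta) is linear in log-log space.
        x, y = np.log(sd), np.log(ppv)
        n = len(x)
        slope, intercept = np.polyfit(x, y, 1)
        resid = y - (intercept + slope * x)
        s = np.sqrt(resid @ resid / (n - 2))          # residual standard error

        def ppv_upper(sd_new, conf=0.95):
            """One-sided upper prediction bound on PPV at a new scaled distance."""
            x0 = np.log(sd_new)
            se = s * np.sqrt(1 + 1/n + (x0 - x.mean())**2 / np.sum((x - x.mean())**2))
            return np.exp(intercept + slope * x0 + stats.t.ppf(conf, n - 2) * se)

        print(ppv_upper(25.0))   # e.g. check a planned blast against a PPV limit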

  7. A methodology for airplane parameter estimation and confidence interval determination in nonlinear estimation problems. Ph.D. Thesis - George Washington Univ., Apr. 1985

    NASA Technical Reports Server (NTRS)

    Murphy, P. C.

    1986-01-01

    An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. With the fitted surface, sensitivity information can be updated at each iteration with less computational effort than that required by either a finite-difference method or integration of the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, and thus provides flexibility to use model equations in any convenient format. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. The degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels and to predict the degree of agreement between CR bounds and search estimates.

  8. Prediction of the distillation temperatures of crude oils using ¹H NMR and support vector regression with estimated confidence intervals.

    PubMed

    Filgueiras, Paulo R; Terra, Luciana A; Castro, Eustáquio V R; Oliveira, Lize M S L; Dias, Júlio C M; Poppi, Ronei J

    2015-09-01

    This paper aims to estimate the temperature equivalent to 10% (T10%), 50% (T50%) and 90% (T90%) of distilled volume in crude oils using (1)H NMR and support vector regression (SVR). Confidence intervals for the predicted values were calculated using a boosting-type ensemble method in a procedure called ensemble support vector regression (eSVR). The estimated confidence intervals obtained by eSVR were compared with previously accepted calculations from partial least squares (PLS) models and a boosting-type ensemble applied in the PLS method (ePLS). By using the proposed boosting strategy, it was possible to identify outliers in the T10% property dataset. The eSVR procedure improved the accuracy of the distillation temperature predictions in relation to standard PLS, ePLS and SVR. For T10%, a root mean square error of prediction (RMSEP) of 11.6°C was obtained in comparison with 15.6°C for PLS, 15.1°C for ePLS and 28.4°C for SVR. The RMSEPs for T50% were 24.2°C, 23.4°C, 22.8°C and 14.4°C for PLS, ePLS, SVR and eSVR, respectively. For T90%, the values of RMSEP were 39.0°C, 39.9°C and 39.9°C for PLS, ePLS, SVR and eSVR, respectively. The confidence intervals calculated by the proposed boosting methodology presented acceptable values for the three properties analyzed; however, they were lower than those calculated by the standard methodology for PLS. PMID:26003712
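
    A bagging-style approximation of the ensemble idea can be sketched with scikit-learn (a sketch under stated assumptions only: the kernel and hyperparameters are placeholders, and the published eSVR uses a boosting-type resampling scheme rather than the plain bootstrap shown here):

        import numpy as np
        from sklearn.svm import SVR

        def ensemble_svr_ci(X_train, y_train, X_new, n_models=200, alpha=0.05, rng=None):
            """Fit SVR models on bootstrap resamples of the calibration set and return
            the mean prediction with percentile confidence limits for each new spectrum."""
            rng = np.random.default_rng(rng)
            n = len(y_train)
            preds = np.empty((n_models, len(X_new)))
            for m in range(n_models):
                idx = rng.integers(0, n, size=n)               # resample with replacement
                model = SVR(kernel="rbf", C=100.0, epsilon=1.0)
                model.fit(X_train[idx], y_train[idx])
                preds[m] = model.predict(X_new)
            lower = np.percentile(preds, 100 * alpha / 2, axis=0)
            upper = np.percentile(preds, 100 * (1 - alpha / 2), axis=0)
            return preds.mean(axis=0), lower, upper

        # X_train: NMR descriptors as a (n_samples, n_features) array (hypothetical);
        # y_train: measured distillation temperatures, e.g. T10% values.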

  9. Application of non-parametric bootstrap methods to estimate confidence intervals for QTL location in a beef cattle QTL experimental population.

    PubMed

    Jongjoo, Kim; Davis, Scott K; Taylor, Jeremy F

    2002-06-01

    Empirical confidence intervals (CIs) for the estimated quantitative trait locus (QTL) location from selective and non-selective non-parametric bootstrap resampling methods were compared for a genome scan involving an Angus x Brahman reciprocal fullsib backcross population. Genetic maps, based on 357 microsatellite markers, were constructed for 29 chromosomes using CRI-MAP V2.4. Twelve growth, carcass composition and beef quality traits (n = 527-602) were analysed to detect QTLs utilizing (composite) interval mapping approaches. CIs were investigated for 28 likelihood ratio test statistic (LRT) profiles for the one QTL per chromosome model. The CIs from the non-selective bootstrap method were largest (87.7 cM average, or 79.2% coverage of test chromosomes). The Selective II procedure produced the smallest CI size (42.3 cM average). However, CI sizes from the Selective II procedure were more variable than those produced by the two-LOD drop method. CI ranges from the Selective II procedure were also asymmetrical (relative to the most likely QTL position) due to the bias caused by the tendency for the estimated QTL position to be at a marker position in the bootstrap samples and due to monotonicity and asymmetry of the LRT curve in the original sample. PMID:12220133
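
    A much-simplified sketch of the non-selective bootstrap for QTL position is given below (illustration only: a single-marker regression scan stands in for full composite interval mapping, and the genotype coding, marker map and resampling counts are assumptions):

        import numpy as np

        def scan_position(genotypes, phenotype, positions):
            """Map position with the largest single-marker F statistic
            (a stand-in for the interval-mapping LRT profile)."""
            y = phenotype - phenotype.mean()
            f_stats = []
            for g in genotypes.T:                     # one column per marker
                x = g - g.mean()
                sxx = x @ x
                if sxx == 0:                          # monomorphic marker in this resample
                    f_stats.append(0.0)
                    continue
                b = (x @ y) / sxx
                ss_model = b * (x @ y)
                ss_resid = (y @ y) - ss_model
                f_stats.append(ss_model / (ss_resid / (len(y) - 2)))
            return positions[int(np.argmax(f_stats))]

        def qtl_position_ci(genotypes, phenotype, positions, n_boot=1000, alpha=0.05, rng=None):
            """Non-selective bootstrap CI for the estimated QTL position (cM)."""
            rng = np.random.default_rng(rng)
            n = len(phenotype)
            est = np.empty(n_boot)
            for b in range(n_boot):
                idx = rng.integers(0, n, size=n)      # resample individuals with replacement
                est[b] = scan_position(genotypes[idx], phenotype[idx], positions)
            return np.percentile(est, [100 * alpha / 2, 100 * (1 - alpha / 2)])

        # genotypes: (n_individuals, n_markers) 0/1 backcross codes for one chromosome;
        # positions: marker positions in cM.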

  10. Factorial-based response-surface modeling with confidence intervals for optimizing thermal-optical transmission analysis of atmospheric black carbon.

    PubMed

    Conny, J M; Norris, G A; Gould, T R

    2009-03-01

    Thermal-optical transmission (TOT) analysis measures black carbon (BC) in atmospheric aerosol on a fibrous filter. The method pyrolyzes organic carbon (OC) and employs laser light absorption to distinguish BC from the pyrolyzed OC; however, the instrument does not necessarily separate the two physically. In addition, a comprehensive temperature protocol for the analysis based on the Beer-Lambert Law remains elusive. Here, empirical response-surface modeling was used to show how the temperature protocol in TOT analysis can be modified to distinguish pyrolyzed OC from BC based on the Beer-Lambert Law. We determined the apparent specific absorption cross sections for pyrolyzed OC (sigma(Char)) and BC (sigma(BC)), which accounted for individual absorption enhancement effects within the filter. Response-surface models of these cross sections were derived from a three-factor central-composite factorial experimental design: temperature and duration of the high-temperature step in the helium phase, and the heating increase in the helium-oxygen phase. The response surface for sigma(BC), which varied with instrument conditions, revealed a ridge indicating the correct conditions for OC pyrolysis in helium. The intersection of the sigma(BC) and sigma(Char) surfaces indicated the conditions where the cross sections were equivalent, satisfying an important assumption upon which the method relies. 95% confidence interval surfaces defined a confidence region for a range of pyrolysis conditions. Analyses of wintertime samples from Seattle, WA revealed a temperature between 830 degrees C and 850 degrees C as most suitable for the helium high-temperature step lasting 150s. However, a temperature as low as 750 degrees C could not be rejected statistically. PMID:19216871

  11. Effect of initial seed and number of samples on simple-random and Latin-Hypercube Monte Carlo probabilities (confidence interval considerations)

    SciTech Connect

    ROMERO,VICENTE J.

    2000-05-04

    In order to devise an algorithm for autonomously terminating Monte Carlo sampling when sufficiently small and reliable confidence intervals (CI) are achieved on calculated probabilities, the behavior of CI estimators must be characterized. This knowledge is also required in comparing the accuracy of other probability estimation techniques to Monte Carlo results. Based on 100 trials in a hypothesis test, estimated 95% CI from classical approximate CI theory are empirically examined to determine if they behave as true 95% CI over spectrums of probabilities (population proportions) ranging from 0.001 to 0.99 in a test problem. Tests are conducted for population sizes of 500 and 10,000 samples where applicable. Significant differences between true and estimated 95% CI are found to occur at probabilities between 0.1 and 0.9, such that estimated 95% CI can be rejected as not being true 95% CI at less than a 40% chance of incorrect rejection. With regard to Latin Hypercube sampling (LHS), though no general theory has been verified for accurately estimating LHS CI, recent numerical experiments on the test problem have found LHS to be conservatively over an order of magnitude more efficient than SRS for similar sized CI on probabilities ranging between 0.25 and 0.75. The efficiency advantage of LHS vanishes, however, as the probability extremes of 0 and 1 are approached.
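
    The kind of empirical check described can be sketched as follows for simple random sampling (a minimal sketch; the true probabilities, sample size and trial count are placeholders, not the report's actual test problem):

        import numpy as np

        def wald_ci(k, n, z=1.96):
            """Classical approximate 95% CI for a probability estimated from n samples."""
            p = k / n
            half = z * np.sqrt(p * (1 - p) / n)
            return p - half, p + half

        def empirical_coverage(p_true, n_samples, n_trials=100, rng=None):
            """Fraction of trials whose estimated 95% CI actually contains p_true."""
            rng = np.random.default_rng(rng)
            hits = 0
            for _ in range(n_trials):
                k = rng.binomial(n_samples, p_true)
                lo, hi = wald_ci(k, n_samples)
                hits += (lo <= p_true <= hi)
            return hits / n_trials

        for p in (0.001, 0.01, 0.1, 0.5, 0.9, 0.99):
            print(p, empirical_coverage(p, n_samples=500))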

  12. Nuclear excitation by electron transition rate confidence interval in a Hg201 local thermodynamic equilibrium plasma

    NASA Astrophysics Data System (ADS)

    Comet, M.; Gosselin, G.; Méot, V.; Morel, P.; Pain, J.-C.; Denis-Petit, D.; Gobet, F.; Hannachi, F.; Tarisien, M.; Versteegen, M.

    2015-11-01

    Nuclear excitation by electron transition (NEET) is predicted to be the dominant excitation process of the first Hg201 isomeric state in a laser heated plasma. This process may occur when the energy difference between a nuclear transition and an atomic transition is close to zero, provided the quantum selection rules are fulfilled. At local thermodynamic equilibrium, an average atom model may be used, in a first approach, to evaluate the NEET rate in plasma. The statistical nature of the electronic transition spectrum is then described by means of a Gaussian distribution around the average atom configuration. However, using a continuous function to describe the electronic spectrum is questionable in the framework of a resonant process such as NEET. In order to assess when the average atom description can be relied upon to predict a NEET rate in plasma, we present in this paper a NEET rate calculation using a model derived from detailed configuration accounting. This calculation allows us to define a confidence interval for the NEET rate around its average atom mean value, which is a first step toward designing a future experiment.

  13. Zero- vs. one-dimensional, parametric vs. non-parametric, and confidence interval vs. hypothesis testing procedures in one-dimensional biomechanical trajectory analysis.

    PubMed

    Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A

    2015-05-01

    Biomechanical processes are often manifested as one-dimensional (1D) trajectories. It has been shown that 1D confidence intervals (CIs) are biased when based on 0D statistical procedures, and the non-parametric 1D bootstrap CI has emerged in the Biomechanics literature as a viable solution. The primary purpose of this paper was to clarify that, for 1D biomechanics datasets, the distinction between 0D and 1D methods is much more important than the distinction between parametric and non-parametric procedures. A secondary purpose was to demonstrate that a parametric equivalent to the 1D bootstrap exists in the form of a random field theory (RFT) correction for multiple comparisons. To emphasize these points we analyzed six datasets consisting of force and kinematic trajectories in one-sample, paired, two-sample and regression designs. Results showed, first, that the 1D bootstrap and other 1D non-parametric CIs were qualitatively identical to RFT CIs, and all were very different from 0D CIs. Second, 1D parametric and 1D non-parametric hypothesis testing results were qualitatively identical for all six datasets. Last, we highlight the limitations of 1D CIs by demonstrating that they are complex, design-dependent, and thus non-generalizable. These results suggest that (i) analyses of 1D data based on 0D models of randomness are generally biased unless one explicitly identifies 0D variables before the experiment, and (ii) parametric and non-parametric 1D hypothesis testing provide an unambiguous framework for analysis when one's hypothesis explicitly or implicitly pertains to whole 1D trajectories. PMID:25817475

  14. Confidence bounds on structural reliability

    NASA Technical Reports Server (NTRS)

    Mehta, S. R.; Cruse, T. A.; Mahadevan, S.

    1993-01-01

    Different approaches for quantifying physical, statistical, and model uncertainties associated with the distribution parameters which are aimed at determining structural reliability are described. Confidence intervals on the distribution parameters of the input random variables are estimated using four algorithms to evaluate uncertainty of the response. Design intervals are evaluated using either Monte Carlo simulation or an iterative approach. A first order approach can be used to compute a first approximation of the design interval, but its accuracy is not satisfactory. The regression approach which combines the iterative approach with Monte Carlo simulation is capable of providing good results if the performance function can be accurately represented using regression analysis. It is concluded that the design interval-based approach seems to be quite general and takes into account distribution and model uncertainties.

  15. Confidant Relations in Italy.

    PubMed

    Isaacs, Jenny; Soglian, Francesca; Hoffman, Edward

    2015-02-01

    Confidants are often described as the individuals with whom we choose to disclose personal, intimate matters. The presence of a confidant is associated with both mental and physical health benefits. In this study, 135 Italian adults responded to a structured questionnaire that asked if they had a confidant, and if so, to describe various features of the relationship. The vast majority of participants (91%) reported the presence of a confidant and regarded this relationship as personally important, high in mutuality and trust, and involving minimal lying. Confidants were significantly more likely to be of the opposite sex. Participants overall were significantly more likely to choose a spouse or other family member as their confidant, rather than someone outside of the family network. Familial confidants were generally seen as closer, and of greater value, than non-familial confidants. These findings are discussed within the context of Italian culture. PMID:27247641

  16. Confidant Relations in Italy

    PubMed Central

    Isaacs, Jenny; Soglian, Francesca; Hoffman, Edward

    2015-01-01

    Confidants are often described as the individuals with whom we choose to disclose personal, intimate matters. The presence of a confidant is associated with both mental and physical health benefits. In this study, 135 Italian adults responded to a structured questionnaire that asked if they had a confidant, and if so, to describe various features of the relationship. The vast majority of participants (91%) reported the presence of a confidant and regarded this relationship as personally important, high in mutuality and trust, and involving minimal lying. Confidants were significantly more likely to be of the opposite sex. Participants overall were significantly more likely to choose a spouse or other family member as their confidant, rather than someone outside of the family network. Familial confidants were generally seen as closer, and of greater value, than non-familial confidants. These findings are discussed within the context of Italian culture. PMID:27247641

  17. Interval Estimates of Multivariate Effect Sizes: Coverage and Interval Width Estimates under Variance Heterogeneity and Nonnormality

    ERIC Educational Resources Information Center

    Hess, Melinda R.; Hogarty, Kristine Y.; Ferron, John M.; Kromrey, Jeffrey D.

    2007-01-01

    Monte Carlo methods were used to examine techniques for constructing confidence intervals around multivariate effect sizes. Using interval inversion and bootstrapping methods, confidence intervals were constructed around the standard estimate of Mahalanobis distance (D[superscript 2]), two bias-adjusted estimates of D[superscript 2], and Huberty's…

  18. Application of Sequential Interval Estimation to Adaptive Mastery Testing

    ERIC Educational Resources Information Center

    Chang, Yuan-chin Ivan

    2005-01-01

    In this paper, we apply sequential one-sided confidence interval estimation procedures with beta-protection to adaptive mastery testing. The procedures of fixed-width and fixed proportional accuracy confidence interval estimation can be viewed as extensions of one-sided confidence interval procedures. It can be shown that the adaptive mastery…

  19. Interval estimates and their precision

    NASA Astrophysics Data System (ADS)

    Marek, Luboš; Vrabec, Michal

    2015-06-01

    A task very often met in practice is the computation of confidence interval bounds for a relative frequency under sampling without replacement. A typical situation includes pre-election estimates and similar tasks. In other words, we build the confidence interval for the parameter value M in the parent population of size N on the basis of a random sample of size n. There are many ways to build this interval. We can use a normal or binomial approximation. More accurate values can be looked up in tables. We consider one more method, based on MS Excel calculations. In our paper we compare these different methods for specific values of M and we discuss when the considered methods are suitable. The aim of the article is not to publish new theoretical methods. This article aims to show that there is a very simple way to compute the confidence interval bounds without approximations, without tables and without other software costs.
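
    The "no approximation" bounds can be obtained by inverting the hypergeometric distribution directly; a sketch in Python (rather than the paper's MS Excel worksheet) is given below, with the sample counts chosen purely for illustration:

        from scipy.stats import hypergeom

        def exact_ci_without_replacement(k, n, N, alpha=0.05):
            """Exact bounds on the number M of 'successes' in a population of size N,
            given k successes in a sample of n drawn without replacement.
            Inverts two one-sided hypergeometric tests; scipy's hypergeom takes
            (population size, successes in population, sample size)."""
            lower, upper = 0, N
            for M in range(0, N + 1):          # smallest M not rejected by the upper-tail test
                if hypergeom.sf(k - 1, N, M, n) > alpha / 2:
                    lower = M
                    break
            for M in range(N, -1, -1):         # largest M not rejected by the lower-tail test
                if hypergeom.cdf(k, N, M, n) > alpha / 2:
                    upper = M
                    break
            return lower, upper

        # Hypothetical pre-election poll: 120 of n=400 sampled voters support a candidate
        # in an electorate of N=10000 voters.
        print(exact_ci_without_replacement(k=120, n=400, N=10000))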

  20. Interval Training.

    ERIC Educational Resources Information Center

    President's Council on Physical Fitness and Sports, Washington, DC.

    Regardless of the type of physical activity used, interval training is simply repeated periods of physical stress interspersed with recovery periods during which activity of a reduced intensity is performed. During the recovery periods, the individual usually keeps moving and does not completely recover before the next exercise interval (e.g.,…

  1. Understanding Academic Confidence

    ERIC Educational Resources Information Center

    Sander, Paul; Sanders, Lalage

    2006-01-01

    This paper draws on the psychological theories of self-efficacy and the self-concept to understand students' self-confidence in academic study in higher education as measured by the Academic Behavioural Confidence scale (ABC). In doing this, expectancy-value theory and self-efficacy theory are considered and contrasted with self-concept and…

  2. Confidence Intervals for Standardized Linear Contrasts of Means

    ERIC Educational Resources Information Center

    Bonnett, Douglas G.

    2008-01-01

    Most psychology journals now require authors to report a sample value of effect size along with hypothesis testing results. The sample effect size value can be misleading because it contains sampling error. Authors often incorrectly interpret the sample effect size as if it were the population effect size. A simple solution to this problem is to…

  3. Technological Pedagogical Content Knowledge (TPACK) Literature Using Confidence Intervals

    ERIC Educational Resources Information Center

    Young, Jamaal R.; Young, Jemimah L.; Shaker, Ziad

    2012-01-01

    The validity and reliability of Technological Pedagogical Content Knowledge (TPACK) as a framework to measure the extent to which teachers can teach with technology hinges on the ability to aggregate results across empirical studies. The results of data collected using the survey of pre-service teacher knowledge of teaching with technology (TKTT)…

  4. Epidemiology and the law: courts and confidence intervals.

    PubMed Central

    Christoffel, T; Teret, S P

    1991-01-01

    Beginning with the swine flu litigation of the early 1980s, epidemiological evidence has played an increasingly prominent role in helping the nation's courts deal with alleged causal connections between plaintiffs' diseases or other harm and exposure to specific noxious agents (such as asbestos, toxic waste, radiation, and pharmaceuticals). Judicial reliance on epidemiology has highlighted the contrast between the nature of scientific proof and of legal proof. Epidemiologists need to recognize and understand the growing involvement of their profession in complex tort litigation. PMID:1746668

  5. Estimation of Confidence Intervals for Multiplication and Efficiency

    SciTech Connect

    Verbeke, J

    2009-07-17

    Helium-3 tubes are used to detect thermal neutrons by charge collection using the ³He(n,p) reaction. By analyzing the time sequence of neutrons detected by these tubes, one can determine important features about the constitution of a measured object: Some materials such as Cf-252 emit several neutrons simultaneously, while others such as uranium and plutonium isotopes multiply the number of neutrons to form bursts. This translates into unmistakable signatures. To determine the type of materials measured, one compares the measured count distribution with the one generated by a theoretical fission chain model. When the neutron background is negligible, the theoretical count distributions can be completely characterized by a pair of parameters, the multiplication M and the detection efficiency ε. While the optimal pair of M and ε can be determined by existing codes such as BigFit, the uncertainty on these parameters has not yet been fully studied. The purpose of this work is to precisely compute the uncertainties on the parameters M and ε, given the uncertainties in the count distribution. By considering different lengths of time tagged data, we will determine how the uncertainties on M and ε vary with the different count distributions.

  6. Combining one-sample confidence procedures for inference in the two-sample case.

    PubMed

    Fay, Michael P; Proschan, Michael A; Brittain, Erica

    2015-03-01

    We present a simple general method for combining two one-sample confidence procedures to obtain inferences in the two-sample problem. Some applications give striking connections to established methods; for example, combining exact binomial confidence procedures gives new confidence intervals on the difference or ratio of proportions that match inferences using Fisher's exact test, and numeric studies show the associated confidence intervals bound the type I error rate. Combining exact one-sample Poisson confidence procedures recreates standard confidence intervals on the ratio, and introduces new ones for the difference. Combining confidence procedures associated with one-sample t-tests recreates the Behrens-Fisher intervals. Other applications provide new confidence intervals with fewer assumptions than previously needed. For example, the method creates new confidence intervals on the difference in medians that do not require shift and continuity assumptions. We create a new confidence interval for the difference between two survival distributions at a fixed time point when there is independent censoring by combining the recently developed beta product confidence procedure for each single sample. The resulting interval is designed to guarantee coverage regardless of sample size or censoring distribution, and produces equivalent inferences to Fisher's exact test when there is no censoring. We show theoretically that when combining intervals asymptotically equivalent to normal intervals, our method has asymptotically accurate coverage. Importantly, all situations studied suggest guaranteed nominal coverage for our new interval whenever the original confidence procedures themselves guarantee coverage. PMID:25274182

  7. Combining One-Sample Confidence Procedures for Inference in the Two-Sample Case

    PubMed Central

    Fay, Michael P.; Proschan, Michael A.; Brittain, Erica

    2016-01-01

    Summary We present a simple general method for combining two one-sample confidence procedures to obtain inferences in the two-sample problem. Some applications give striking connections to established methods; for example, combining exact binomial confidence procedures gives new confidence intervals on the difference or ratio of proportions that match inferences using Fisher’s exact test, and numeric studies show the associated confidence intervals bound the type I error rate. Combining exact one-sample Poisson confidence procedures recreates standard confidence intervals on the ratio, and introduces new ones for the difference. Combining confidence procedures associated with one-sample t-tests recreates the Behrens-Fisher intervals. Other applications provide new confidence intervals with fewer assumptions than previously needed. For example, the method creates new confidence intervals on the difference in medians that do not require shift and continuity assumptions. We create a new confidence interval for the difference between two survival distributions at a fixed time point when there is independent censoring by combining the recently developed beta product confidence procedure for each single sample. The resulting interval is designed to guarantee coverage regardless of sample size or censoring distribution, and produces equivalent inferences to Fisher’s exact test when there is no censoring. We show theoretically that when combining intervals asymptotically equivalent to normal intervals, our method has asymptotically accurate coverage. Importantly, all situations studied suggest guaranteed nominal coverage for our new interval whenever the original confidence procedures themselves guarantee coverage. PMID:25274182

  8. Neurophysiology of perceived confidence.

    PubMed

    Graziano, Martin; Parra, Lucas C; Sigman, Mariano

    2010-01-01

    In a partial report paradigm, subjects briefly observe a cluttered field and after some time - typically ranging from 100 ms to a second - are asked to report a subset of the presented elements. A vast buffer of information is transiently available to be broadcast which, if not retrieved in time, fades rapidly without reaching consciousness. An interesting feature of this experiment is that objective performance and subjective confidence are decoupled. This makes the paradigm an ideal vehicle for understanding the brain dynamics of the construction of confidence. Here we report a high-density EEG experiment in which we infer elements of the EEG response that are indicative of subjective confidence. We find that an early response during encoding partially correlates with perceived confidence. However, the bulk of the weight of subjective confidence is determined during a late, N400-like waveform, during the retrieval stage. This shows that we can find markers of access to internal, subjective states that are uncoupled from the objective response and stimulus properties of the task, and we propose that this can be used with EEG decoding methods to infer subjective mental states. PMID:21096220

  9. Confidence Calculation with AMV+

    SciTech Connect

    Fossum, A.F.

    1999-02-19

    The iterative advanced mean value algorithm (AMV+), introduced nearly ten years ago, is now widely used as a cost-effective probabilistic structural analysis tool when the use of sampling methods is cost prohibitive (Wu et al., 1990). The need to establish confidence bounds on calculated probabilities arises because of the presence of uncertainties in measured means and variances of input random variables. In this paper an algorithm is proposed that makes use of the AMV+ procedure and analytically derived probability sensitivities to determine confidence bounds on calculated probabilities.

  10. Interbirth intervals

    PubMed Central

    Haig, David

    2014-01-01

    Background and objectives: Interbirth intervals (IBIs) mediate a trade-off between child number and child survival. Life history theory predicts that the evolutionarily optimal IBI differs for different individuals whose fitness is affected by how closely a mother spaces her children. The objective of the article is to clarify these conflicts and explore their implications for public health. Methodology: Simple models of inclusive fitness and kin conflict address the evolution of human birth-spacing. Results: Genes of infants generally favor longer intervals than genes of mothers, and infant genes of paternal origin generally favor longer IBIs than genes of maternal origin. Conclusions and implications: The colonization of maternal bodies by offspring cells (fetal microchimerism) raises the possibility that cells of older offspring could extend IBIs by interfering with the implantation of subsequent embryos. PMID:24480612

  11. Predicting Systemic Confidence

    ERIC Educational Resources Information Center

    Falke, Stephanie Inez

    2009-01-01

    Using a mixed method approach, this study explored which educational factors predicted systemic confidence in master's level marital and family therapy (MFT) students, and whether or not the impact of these factors was influenced by student beliefs and their perception of their supervisor's beliefs about the value of systemic practice. One hundred…

  12. SystemConfidence

    2012-09-25

    SystemConfidence is a benchmark developed at ORNL that measures statistical variation, which the user can then plot. The portions of the code that manage the collection of the histograms and compute statistics on the histograms were designed with the intent that these functions could be reused in other codes.

  13. Computing Graphical Confidence Bounds

    NASA Technical Reports Server (NTRS)

    Mezzacappa, M. A.

    1983-01-01

    Approximation for graphical confidence bounds is simple enough to run on programmable calculator. Approximation is used in lieu of numerical tables not always available, and exact calculations, which often require rather sizable computer resources. Approximation verified for collection of up to 50 data points. Method used to analyze tile-strength data on Space Shuttle thermal-protection system.

  14. Adding Confidence to Knowledge

    ERIC Educational Resources Information Center

    Goodson, Ludwika Aniela; Slater, Don; Zubovic, Yvonne

    2015-01-01

    A "knowledge survey" and a formative evaluation process led to major changes in an instructor's course and teaching methods over a 5-year period. Design of the survey incorporated several innovations, including: a) using "confidence survey" rather than "knowledge survey" as the title; b) completing an…

  15. Reliability and Confidence.

    ERIC Educational Resources Information Center

    Test Service Bulletin, 1952

    1952-01-01

    Some aspects of test reliability are discussed. Topics covered are: (1) how high should a reliability coefficient be?; (2) two factors affecting the interpretation of reliability coefficients--range of talent and interval between testings; (3) some common misconceptions--reliability of speed tests, part vs. total reliability, reliability for what…

  16. Reclaim your creative confidence.

    PubMed

    Kelley, Tom; Kelley, David

    2012-12-01

    Most people are born creative. But over time, a lot of us learn to stifle those impulses. We become warier of judgment, more cautious, more analytical. The world seems to divide into "creatives" and "noncreatives," and too many people resign themselves to the latter category. And yet we know that creativity is essential to success in any discipline or industry. The good news, according to authors Tom Kelley and David Kelley of IDEO, is that we all can rediscover our creative confidence. The trick is to overcome the four big fears that hold most of us back: fear of the messy unknown, fear of judgment, fear of the first step, and fear of losing control. The authors use an approach based on the work of psychologist Albert Bandura in helping patients get over their snake phobias: You break challenges down into small steps and then build confidence by succeeding on one after another. Creativity is something you practice, say the authors, not just a talent you are born with. PMID:23227579

  17. Confidence bounds for nonlinear dose-response relationships.

    PubMed

    Baayen, C; Hougaard, P

    2015-11-30

    An important aim of drug trials is to characterize the dose-response relationship of a new compound. Such a relationship can often be described by a parametric (nonlinear) function that is monotone in dose. If such a model is fitted, it is useful to know the uncertainty of the fitted curve. It is well known that Wald confidence intervals are based on linear approximations and are often unsatisfactory in nonlinear models. Apart from incorrect coverage rates, they can be unreasonable in the sense that the lower confidence limit of the difference to placebo can be negative, even when an overall test shows a significant positive effect. Bootstrap confidence intervals solve many of the problems of the Wald confidence intervals but are computationally intensive and prone to undercoverage for small sample sizes. In this work, we propose a profile likelihood approach to compute confidence intervals for the dose-response curve. These confidence bounds have better coverage than Wald intervals and are more precise and generally faster than bootstrap methods. Moreover, if monotonicity is assumed, the profile likelihood approach takes this automatically into account. The approach is illustrated using a public dataset and simulations based on the Emax and sigmoid Emax models. PMID:26112765
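
    A minimal sketch of a profile-likelihood CI for the Emax parameter of an Emax model with Gaussian errors is shown below (the doses, responses, starting values and grid are hypothetical, and the paper's implementation handles monotonicity constraints and other details that are omitted here):

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import chi2

        # Hypothetical dose-response data.
        dose = np.array([0, 0, 10, 10, 30, 30, 100, 100, 300, 300], dtype=float)
        resp = np.array([1.2, 0.8, 2.1, 1.7, 3.0, 3.4, 4.1, 4.5, 4.8, 5.2])
        n = len(resp)

        def emax_model(d, e0, emax, ed50):
            return e0 + emax * d / (ed50 + d)

        def ssr(params, emax=None):
            """Residual sum of squares; if emax is fixed, only (e0, ed50) are free.
            abs(ed50) keeps ED50 positive without a formal bound."""
            if emax is None:
                e0, emax_, ed50 = params
            else:
                (e0, ed50), emax_ = params, emax
            return np.sum((resp - emax_model(dose, e0, emax_, abs(ed50))) ** 2)

        full_fit = minimize(ssr, x0=[1.0, 4.0, 30.0], method="Nelder-Mead")
        ssr_hat = full_fit.fun

        def inside_profile_ci(emax_value, level=0.95):
            """True if emax_value lies inside the profile-likelihood CI."""
            prof = minimize(ssr, x0=[1.0, 30.0], args=(emax_value,), method="Nelder-Mead")
            lr = n * (np.log(prof.fun) - np.log(ssr_hat))   # likelihood ratio statistic
            return lr <= chi2.ppf(level, df=1)

        grid = np.linspace(1.0, 8.0, 141)
        inside = [v for v in grid if inside_profile_ci(v)]
        print("95% profile CI for Emax ~", min(inside), "to", max(inside))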

  18. Simulation integration with confidence

    NASA Astrophysics Data System (ADS)

    Strelich, Tom; Stalcup, Bruce W.

    1999-07-01

    Current financial, schedule and risk constraints mandate reuse of software components when building large-scale simulations. While integration of simulation components into larger systems is a well-understood process, it is extremely difficult to do while ensuring that the results are correct. Illgen Simulation Technologies Incorporated and Litton PRC have joined forces to provide tools to integrate simulations with confidence. Illgen Simulation Technologies has developed an extensible and scalable, n-tier, client-server, distributed software framework for integrating legacy simulations, models, tools, utilities, and databases. By utilizing the Internet, Java, and the Common Object Request Broker Architecture (CORBA) as the core implementation technologies, the framework provides built-in scalability and extensibility.

  19. Improved investor confidence

    SciTech Connect

    Anderson, J.

    1995-10-01

    Results of a financial ranking survey of power projects show reasonably strong activity when compared to previous surveys. Perhaps the most notable trend is the continued increase in the number of international deals being reported. Nearly 62 percent of the transactions reported were for non-US projects. This increase will likely expand with time as developers and lenders gain confidence in certain regions. For the remainder of 1995 and into 1996 it is likely that financial activity will continue at a steady pace. A number of projects in various markets are poised to reach financial close relatively soon. Developers, investment bankers, and governments are all gaining experience and becoming more comfortable with the process.

  20. Optimally combined confidence limits

    NASA Astrophysics Data System (ADS)

    Janot, P.; Le Diberder, F.

    1998-02-01

    An analytical and optimal procedure to combine statistically independent sets of confidence levels on a quantity is presented. This procedure does not impose any constraint on the methods followed by each analysis to derive its own limit. It incorporates the a priori statistical power of each of the analyses to be combined, in order to optimize the overall sensitivity. It can, in particular, be used to combine the mass limits obtained by several analyses searching for the Higgs boson in different decay channels, with different selection efficiencies, mass resolution and expected background. It can also be used to combine the mass limits obtained by several experiments (e.g. ALEPH, DELPHI, L3 and OPAL, at LEP 2) independently of the method followed by each of these experiments to derive their own limit. A method to derive the limit set by one analysis is also presented, along with an unbiased prescription to optimize the expected mass limit in the no-signal-hypothesis.

  1. Ellenore Flood's Skills Confidence Inventory.

    ERIC Educational Resources Information Center

    Subich, Linda Mezydlo

    1998-01-01

    Presents background information on the Skills Confidence Inventory (SCI) and the construct it assesses. Interprets the skills confidence and interest profiles of a 29-year-old female high school teacher using the SCI and the Strong Interest Inventory. (MKA)

  2. Confidence in Numerical Simulations

    SciTech Connect

    Hemez, Francois M.

    2015-02-23

    This PowerPoint presentation offers a high-level discussion of uncertainty, confidence and credibility in scientific Modeling and Simulation (M&S). It begins by briefly evoking M&S trends in computational physics and engineering. The first thrust of the discussion is to emphasize that the role of M&S in decision-making is either to support reasoning by similarity or to “forecast,” that is, make predictions about the future or extrapolate to settings or environments that cannot be tested experimentally. The second thrust is to explain that M&S-aided decision-making is an exercise in uncertainty management. The three broad classes of uncertainty in computational physics and engineering are variability and randomness, numerical uncertainty and model-form uncertainty. The last part of the discussion addresses how scientists “think.” This thought process parallels the scientific method, whereby a hypothesis is formulated, often accompanied by simplifying assumptions; then physical experiments and numerical simulations are performed to confirm or reject the hypothesis. “Confidence” derives not just from the levels of training and experience of analysts, but also from the rigor with which these assessments are performed, documented and peer-reviewed.

  3. Confidence and Cognitive Test Performance

    ERIC Educational Resources Information Center

    Stankov, Lazar; Lee, Jihyun

    2008-01-01

    This article examines the nature of confidence in relation to abilities, personality, and metacognition. Confidence scores were collected during the administration of Reading and Listening sections of the Test of English as a Foreign Language Internet-Based Test (TOEFL iBT) to 824 native speakers of English. Those confidence scores were correlated…

  4. Monitoring tigers with confidence.

    PubMed

    Linkie, Matthew; Guillera-Arroita, Gurutzeta; Smith, Joseph; Rayan, D Mark

    2010-12-01

    With only 5% of the world's wild tigers (Panthera tigris Linnaeus, 1758) remaining since the last century, conservationists urgently need to know whether or not the management strategies currently being employed are effectively protecting these tigers. This knowledge is contingent on the ability to reliably monitor tiger populations, or subsets, over space and time. In this paper, we focus on the 2 seminal methodologies (camera trap and occupancy surveys) that have enabled the monitoring of tiger populations with greater confidence. Specifically, we: (i) describe their statistical theory and application in the field; (ii) discuss issues associated with their survey designs and state variable modeling; and, (iii) discuss their future directions. These methods have had an unprecedented influence on increasing statistical rigor within tiger surveys and also surveys of other carnivore species. Nevertheless, only 2 published camera trap studies have gone beyond single baseline assessments and actually monitored population trends. For low density tiger populations (e.g. <1 adult tiger/100 km²) obtaining sufficient precision for state variable estimates from camera trapping remains a challenge because of insufficient detection probabilities and/or sample sizes. Occupancy surveys have overcome this problem by redefining the sampling unit (e.g. grid cells and not individual tigers). Current research is focusing on developing spatially explicit capture-mark-recapture models and estimating abundance indices from landscape-scale occupancy surveys, as well as the use of genetic information for identifying and monitoring tigers. The widespread application of these monitoring methods in the field now enables complementary studies on the impact of the different threats to tiger populations and their response to varying management intervention. PMID:21392352

  5. A comparison of approximate interval estimators for the Bernoulli parameter

    NASA Technical Reports Server (NTRS)

    Leemis, Lawrence; Trivedi, Kishor S.

    1993-01-01

    The goal of this paper is to compare the accuracy of two approximate confidence interval estimators for the Bernoulli parameter p. The approximate confidence intervals are based on the normal and Poisson approximations to the binomial distribution. Charts are given to indicate which approximation is appropriate for certain sample sizes and point estimators.
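
    The two approximations compared in the paper can be sketched as follows (a minimal illustration; the counts are arbitrary and the paper's charts should be consulted to decide which approximation applies for a given sample size and point estimate):

        import numpy as np
        from scipy.stats import chi2, norm

        def normal_ci(k, n, alpha=0.05):
            """Normal (Wald) approximation to the binomial for p = k/n."""
            p = k / n
            z = norm.ppf(1 - alpha / 2)
            half = z * np.sqrt(p * (1 - p) / n)
            return max(0.0, p - half), min(1.0, p + half)

        def poisson_ci(k, n, alpha=0.05):
            """Poisson approximation: exact Poisson CI for the mean n*p, divided by n
            (reasonable when p is small and n is large)."""
            lower = 0.0 if k == 0 else chi2.ppf(alpha / 2, 2 * k) / 2
            upper = chi2.ppf(1 - alpha / 2, 2 * (k + 1)) / 2
            return lower / n, upper / n

        for k, n in [(3, 50), (10, 200), (40, 100)]:
            print(k, n, normal_ci(k, n), poisson_ci(k, n))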

  6. Why Aren't They Called Probability Intervals?

    ERIC Educational Resources Information Center

    Devlin, Thomas F.

    2008-01-01

    This article offers suggestions for teaching confidence intervals, a fundamental statistical tool often misinterpreted by beginning students. A historical perspective presenting the interpretation given by their inventor is supported with examples and the use of technology. A method for determining confidence intervals for the seldom-discussed…

  7. Confidence limits and their errors

    SciTech Connect

    Rajendran Raja

    2002-03-22

    Confidence limits are commonplace in physics analysis. Great care must be taken in their calculation and use, especially in cases of limited statistics. We introduce the concept of statistical errors of confidence limits and argue that not only should limits be calculated but also their errors in order to represent the results of the analysis to the fullest. We show that comparison of two different limits from two different experiments becomes easier when their errors are also quoted. Use of errors of confidence limits will lead to abatement of the debate on which method is best suited to calculate confidence limits.

  8. Measuring Vaccine Confidence: Introducing a Global Vaccine Confidence Index

    PubMed Central

    Larson, Heidi J; Schulz, William S; Tucker, Joseph D; Smith, David M D

    2015-01-01

    Background. Public confidence in vaccination is vital to the success of immunisation programmes worldwide. Understanding the dynamics of vaccine confidence is therefore of great importance for global public health. Few published studies permit global comparisons of vaccination sentiments and behaviours against a common metric. This article presents the findings of a multi-country survey of confidence in vaccines and immunisation programmes in Georgia, India, Nigeria, Pakistan, and the United Kingdom (UK) – these being the first results of a larger project to map vaccine confidence globally. Methods. Data were collected from a sample of the general population and from those with children under 5 years old against a core set of confidence questions. All surveys were conducted in the relevant local-language in Georgia, India, Nigeria, Pakistan, and the UK. We examine confidence in immunisation programmes as compared to confidence in other government health services, the relationships between confidence in the system and levels of vaccine hesitancy, reasons for vaccine hesitancy, ultimate vaccination decisions, and their variation based on country contexts and demographic factors. Results. The numbers of respondents by country were: Georgia (n=1000); India (n=1259); Pakistan (n=2609); UK (n=2055); Nigerian households (n=12554); and Nigerian health providers (n=1272). The UK respondents with children under five years of age were more likely to hesitate to vaccinate, compared to other countries. Confidence in immunisation programmes was more closely associated with confidence in the broader health system in the UK (Spearman’s ρ=0.5990), compared to Nigeria (ρ=0.5477), Pakistan (ρ=0.4491), and India (ρ=0.4240), all of which ranked confidence in immunisation programmes higher than confidence in the broader health system. Georgia had the highest rate of vaccine refusals (6 %) among those who reported initial hesitation. In all other countries surveyed most

  9. Comparison of confidence procedures for type I censored exponential lifetimes.

    PubMed

    Sundberg, R

    2001-12-01

    In the model of type I censored exponential lifetimes, coverage probabilities are compared for a number of confidence interval constructions proposed in the literature. The coverage probabilities are calculated exactly for sample sizes up to 50 and for different degrees of censoring and different degrees of intended confidence. If not only a fair two-sided coverage is desired, but also fair one-sided coverages, only a few methods are quite satisfactory. A likelihood-based interval and a third-root transformation to normality work almost perfectly, but the chi-squared-based method, which is perfect under no censoring and under type II censoring, can also be advocated. PMID:11763546
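
    The chi-squared-based interval mentioned above, in the common conservative form used for time-truncated (type I) tests, can be sketched as follows (hypothetical numbers; under type II censoring the exact interval would instead use 2r degrees of freedom for both limits):

        from scipy.stats import chi2

        def exp_mean_ci_type1(total_time, n_failures, alpha=0.05):
            """Two-sided chi-squared CI for the exponential mean lifetime from a
            type I (time-) censored test with total time on test T and r observed
            failures; uses 2r and 2r+2 degrees of freedom (the common conservative form)."""
            r, T = n_failures, total_time
            upper = 2 * T / chi2.ppf(alpha / 2, 2 * r) if r > 0 else float("inf")
            lower = 2 * T / chi2.ppf(1 - alpha / 2, 2 * r + 2)
            return lower, upper

        # Example: 7 failures observed over 3200 hours of accumulated test time.
        print(exp_mean_ci_type1(total_time=3200.0, n_failures=7))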

  10. Confidant Relations of the Aged.

    ERIC Educational Resources Information Center

    Tigges, Leann M.; And Others

    The confidant relationship is a qualitatively distinct dimension of the emotional support system of the aged, yet the composition of the confidant network has been largely neglected in research on aging. Persons (N=940) 60 years of age and older were interviewed about their socio-environmental setting. From the enumeration of their relatives,…

  11. Predicting confidence in flashbulb memories.

    PubMed

    Day, Martin V; Ross, Michael

    2014-01-01

    Years after a shocking news event, many people confidently report details of their flashbulb memories (e.g., what they were doing). People's confidence is a defining feature of their flashbulb memories, but it is not well understood. We tested a model that predicted confidence in flashbulb memories. In particular, we examined whether people's social bond with the target of a news event predicts confidence. At a first session shortly after the death of Michael Jackson, participants reported their sense of attachment to Michael Jackson, as well as their flashbulb memories and emotional and other reactions to Jackson's death. At a second session approximately 18 months later, they reported their flashbulb memories and confidence in those memories. Results supported our proposed model. A stronger sense of attachment to Jackson was related to reports of more initial surprise, emotion, and rehearsal during the first session. Participants' bond with Michael Jackson predicted their confidence but not the consistency of their flashbulb memories 18 months later. We also examined whether participants' initial forecasts regarding the persistence of their flashbulb memories predicted the durability of their memories. Participants' initial forecasts were more strongly related to participants' subsequent confidence than to the actual consistency of their memories. PMID:23496003

  12. Confidence rating for eutrophication assessments.

    PubMed

    Brockmann, Uwe H; Topcu, Dilek H

    2014-05-15

    Confidence in monitoring data depends on their variability and on the representativeness of sampling in space and time. Whereas variability can be assessed via statistical confidence limits, representativeness is related to equidistant sampling, taking into account gradients or rates of change at sampling gaps. The proposed method combines both aspects, yielding balanced results for examples of total nitrogen concentrations in the German Bight/North Sea. To assess sampling representativeness, surface areas, vertical profiles and time periods are divided into regular sections for which the representativeness is calculated individually. The sums correspond to the overall representativeness of sampling in the defined area/time period. Effects of unsampled sections are estimated along parallel rows by reducing their confidence, considering their distances to the nearest sampled sections and the interrupted gradients/rates of change. Confidence ratings for time sections are based on maximum differences of sampling rates at regular time steps and the related mean concentrations. PMID:24680718

  13. Testing 40 Predictions from the Transtheoretical Model Again, with Confidence

    ERIC Educational Resources Information Center

    Velicer, Wayne F.; Brick, Leslie Ann D.; Fava, Joseph L.; Prochaska, James O.

    2013-01-01

    Testing Theory-based Quantitative Predictions (TTQP) represents an alternative to traditional Null Hypothesis Significance Testing (NHST) procedures and is more appropriate for theory testing. The theory generates explicit effect size predictions and these effect size estimates, with related confidence intervals, are used to test the predictions.…

  14. QT interval in anorexia nervosa.

    PubMed Central

    Cooke, R A; Chambers, J B; Singh, R; Todd, G J; Smeeton, N C; Treasure, J; Treasure, T

    1994-01-01

    OBJECTIVES--To determine the incidence of a long QT interval as a marker for sudden death in patients with anorexia nervosa and to assess the effect of refeeding. To define a long QT interval by linear regression analysis and estimation of the upper limit of the confidence interval (95% CI) and to compare this with the commonly used Bazett rate correction formula. DESIGN--Prospective case control study. SETTING--Tertiary referral unit for eating disorders. SUBJECTS--41 consecutive patients with anorexia nervosa admitted over an 18 month period. 28 age and sex matched normal controls. MAIN OUTCOME MEASURES--Maximum QT interval measured on 12 lead electrocardiograms. RESULTS--43.6% of the variability in the QT interval was explained by heart rate alone (p < 0.00001) and group analysis contributed a further 5.9% (p = 0.004). In 6 (15%) patients the QT interval was above the upper limit of the 95% CI for the prediction based on the control equation (NS). Two patients died suddenly; both had a QT interval at or above the upper limit of the 95% CI. In patients who reached their target weights the QT interval was significantly shorter (median 9.8 ms; p = 0.04) relative to the upper limit of the 60% CI of the control regression line, which best discriminated between patients and controls. The median Bazett rate corrected QT interval (QTc) in patients and controls was 435 v 405 ms.s-1/2 (p = 0.0004), and before and after refeeding it was 435 v 432 ms.s-1/2 (NS). In 14 (34%) patients and three (11%) controls the QTc was > 440 ms.s-1/2 (p = 0.053). CONCLUSIONS--The QT interval was longer in patients with anorexia nervosa than in age and sex matched controls, and there was a significant tendency to reversion to normal after refeeding. The Bazett rate correction formula overestimated the number of patients with QT prolongation and also did not show an improvement with refeeding. PMID:8068473
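
    The Bazett correction referred to above divides the measured QT by the square root of the preceding RR interval (in seconds), giving units of ms.s-1/2. A minimal Python sketch; the example QT, heart rate, and the 440 ms.s-1/2 cut-off quoted in the abstract are illustrative inputs, not patient data:

      import math

      def bazett_qtc(qt_ms, heart_rate_bpm):
          """Bazett rate-corrected QT: QTc = QT / sqrt(RR), with RR in seconds."""
          rr_s = 60.0 / heart_rate_bpm
          return qt_ms / math.sqrt(rr_s)        # units: ms.s^-1/2

      # Hypothetical reading: QT = 400 ms at 75 beats/min -> QTc of about 447 ms.s^-1/2
      qtc = bazett_qtc(400.0, 75.0)
      print(round(qtc, 1), "prolonged" if qtc > 440 else "normal")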

  15. What Confidence Should Boards Give No-Confidence Votes?

    ERIC Educational Resources Information Center

    MacTaggart, Terrence

    2012-01-01

    As boards and presidents are increasingly in the vanguard of change that disturbs the status quo, they may also find themselves the targets of expressions of concern, censure, and no confidence from faculty members who may be averse to a new order of things or to the manner of bringing it about. Since presidents or other chief executives are…

  16. Targeting Low Career Confidence Using the Career Planning Confidence Scale

    ERIC Educational Resources Information Center

    McAuliffe, Garrett; Jurgens, Jill C.; Pickering, Worth; Calliotte, James; Macera, Anthony; Zerwas, Steven

    2006-01-01

    The authors describe the development and validation of a test of career planning confidence that makes possible the targeting of specific problem issues in employment counseling. The scale, developed using a rational process and the authors' experience with clients, was tested for criterion-related validity against 2 other measures. The scale…

  17. Doubly Bayesian Analysis of Confidence in Perceptual Decision-Making

    PubMed Central

    Bahrami, Bahador; Latham, Peter E.

    2015-01-01

    Humans stand out from other animals in that they are able to explicitly report on the reliability of their internal operations. This ability, which is known as metacognition, is typically studied by asking people to report their confidence in the correctness of some decision. However, the computations underlying confidence reports remain unclear. In this paper, we present a fully Bayesian method for directly comparing models of confidence. Using a visual two-interval forced-choice task, we tested whether confidence reports reflect heuristic computations (e.g. the magnitude of sensory data) or Bayes optimal ones (i.e. how likely a decision is to be correct given the sensory data). In a standard design in which subjects were first asked to make a decision, and only then gave their confidence, subjects were mostly Bayes optimal. In contrast, in a less-commonly used design in which subjects indicated their confidence and decision simultaneously, they were roughly equally likely to use the Bayes optimal strategy or to use a heuristic but suboptimal strategy. Our results suggest that, while people’s confidence reports can reflect Bayes optimal computations, even a small unusual twist or additional element of complexity can prevent optimality. PMID:26517475

  18. Addressing the vaccine confidence gap.

    PubMed

    Larson, Heidi J; Cooper, Louis Z; Eskola, Juhani; Katz, Samuel L; Ratzan, Scott

    2011-08-01

    Vaccines--often lauded as one of the greatest public health interventions--are losing public confidence. Some vaccine experts have referred to this decline in confidence as a crisis. We discuss some of the characteristics of the changing global environment that are contributing to increased public questioning of vaccines, and outline some of the specific determinants of public trust. Public decision making related to vaccine acceptance is neither driven by scientific nor economic evidence alone, but is also driven by a mix of psychological, sociocultural, and political factors, all of which need to be understood and taken into account by policy and other decision makers. Public trust in vaccines is highly variable and building trust depends on understanding perceptions of vaccines and vaccine risks, historical experiences, religious or political affiliations, and socioeconomic status. Although provision of accurate, scientifically based evidence on the risk-benefit ratios of vaccines is crucial, it is not enough to redress the gap between current levels of public confidence in vaccines and levels of trust needed to ensure adequate and sustained vaccine coverage. We call for more research not just on individual determinants of public trust, but on what mix of factors are most likely to sustain public trust. The vaccine community demands rigorous evidence on vaccine efficacy and safety and technical and operational feasibility when introducing a new vaccine, but has been negligent in demanding equally rigorous research to understand the psychological, social, and political factors that affect public trust in vaccines. PMID:21664679

  19. Confidence-Based Feature Acquisition

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri L.; desJardins, Marie; MacGlashan, James

    2010-01-01

    Confidence-based Feature Acquisition (CFA) is a novel, supervised learning method for acquiring missing feature values when there is missing data at both training (learning) and test (deployment) time. To train a machine learning classifier, data is encoded with a series of input features describing each item. In some applications, the training data may have missing values for some of the features, which can be acquired at a given cost. A relevant JPL example is that of the Mars rover exploration in which the features are obtained from a variety of different instruments, with different power consumption and integration time costs. The challenge is to decide which features will lead to increased classification performance and are therefore worth acquiring (paying the cost). To solve this problem, CFA, which is made up of two algorithms (CFA-train and CFA-predict), has been designed to greedily minimize total acquisition cost (during training and testing) while aiming for a specific accuracy level (specified as a confidence threshold). With this method, it is assumed that there is a nonempty subset of features that are free; that is, every instance in the data set includes these features initially for zero cost. It is also assumed that the feature acquisition (FA) cost associated with each feature is known in advance, and that the FA cost for a given feature is the same for all instances. Finally, CFA requires that the base-level classifiers produce not only a classification, but also a confidence (or posterior probability).
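
    As a rough sketch of the acquisition loop described above (not the published CFA-train/CFA-predict algorithms), the following Python function buys the cheapest missing feature until the base classifier's confidence clears a threshold. The measure and predict_proba callables and the cost dictionary are placeholders for whatever instrument interface and classifier are actually used:

      import numpy as np

      def acquire_until_confident(x, missing, costs, measure, predict_proba, threshold=0.9):
          """Greedily buy the cheapest missing feature until the classifier's
          top-class confidence for instance x reaches `threshold` (or nothing is left).

          x             -- 1-D numpy array with np.nan for unacquired features
          missing       -- set of indices of features not yet acquired
          costs         -- dict: feature index -> acquisition cost
          measure       -- callable(index) -> feature value (stands in for the instrument)
          predict_proba -- callable(x) -> array of class posterior probabilities
          Returns (x, total_cost_spent).
          """
          spent = 0.0
          while missing and predict_proba(x).max() < threshold:
              j = min(missing, key=costs.get)   # cheapest feature still missing
              x[j] = measure(j)
              missing.remove(j)
              spent += costs[j]
          return x, spent

    The published method additionally plans acquisitions during training so that total cost over both phases is minimized; this sketch covers only a prediction-time loop.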

  20. Overconfidence in Interval Estimates: What Does Expertise Buy You?

    ERIC Educational Resources Information Center

    McKenzie, Craig R. M.; Liersch, Michael J.; Yaniv, Ilan

    2008-01-01

    People's 90% subjective confidence intervals typically contain the true value about 50% of the time, indicating extreme overconfidence. Previous results have been mixed regarding whether experts are as overconfident as novices. Experiment 1 examined interval estimates from information technology (IT) professionals and UC San Diego (UCSD) students…

  1. Test-Retest Reliability and Concurrent Validity of the Expanded Skills Confidence Inventory

    ERIC Educational Resources Information Center

    Robinson, Carrie H.; Betz, Nancy E.

    2004-01-01

    This study examined the test-retest reliability and the concurrent validity of the 17-scale Expanded Skills Confidence Inventory in samples of 321 and 175 college students. Retest values over a 3-week interval ranged from .77 to .89, with a median of .85. Using Brown and Gore's C-index, evidence for the concurrent validity of confidence score…

  2. Cultural Influences on Confidence: Country and Gender.

    ERIC Educational Resources Information Center

    Lundeberg, Mary A.; Fox, Paul W.; Brown, Amy C.; Elbedour, Salman

    2000-01-01

    Investigates gender differences in confidence judgments when they were correct and incorrect on exam items with postsecondary students (N=551) in five countries. Large and significant differences were found in overall confidence, confidence when correct, and confidence when wrong, associated primarily with country and culture. In contrast, gender…

  3. Programming with Intervals

    NASA Astrophysics Data System (ADS)

    Matsakis, Nicholas D.; Gross, Thomas R.

    Intervals are a new, higher-level primitive for parallel programming with which programmers directly construct the program schedule. Programs using intervals can be statically analyzed to ensure that they do not deadlock or contain data races. In this paper, we demonstrate the flexibility of intervals by showing how to use them to emulate common parallel control-flow constructs like barriers and signals, as well as higher-level patterns such as bounded-buffer producer-consumer. We have implemented intervals as a publicly available library for Java and Scala.

  4. The integrated model of sport confidence: a canonical correlation and mediational analysis.

    PubMed

    Koehn, Stefan; Pearce, Alan J; Morris, Tony

    2013-12-01

    The main purpose of the study was to examine crucial parts of Vealey's (2001) integrated framework hypothesizing that sport confidence is a mediating variable between sources of sport confidence (including achievement, self-regulation, and social climate) and athletes' affect in competition. The sample consisted of 386 athletes, who completed the Sources of Sport Confidence Questionnaire, Trait Sport Confidence Inventory, and Dispositional Flow Scale-2. Canonical correlation analysis revealed a confidence-achievement dimension underlying flow. Bias-corrected bootstrap confidence intervals in AMOS 20.0 were used in examining mediation effects between source domains and dispositional flow. Results showed that sport confidence partially mediated the relationship between achievement and self-regulation domains and flow, whereas no significant mediation was found for social climate. On a subscale level, full mediation models emerged for achievement and flow dimensions of challenge-skills balance, clear goals, and concentration on the task at hand. PMID:24334324
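
    The bias-corrected bootstrap intervals used here for mediation (indirect) effects can be sketched outside of AMOS as well. A rough Python illustration for a simple x -> m -> y model fitted with ordinary least squares; the variable names, the two-regression setup, and the handling of degenerate cases are my assumptions, not the authors' analysis code:

      import numpy as np
      from scipy.stats import norm

      def bc_bootstrap_indirect(x, m, y, n_boot=5000, alpha=0.05, seed=0):
          """Bias-corrected bootstrap CI for the indirect effect a*b, where a is the
          slope of m on x and b the slope of y on m (controlling for x).
          Degenerate cases (all bootstrap estimates on one side) are not handled."""
          x, m, y = (np.asarray(v, dtype=float) for v in (x, m, y))
          rng = np.random.default_rng(seed)
          n = len(x)

          def indirect(xi, mi, yi):
              a = np.polyfit(xi, mi, 1)[0]                      # m = a*x + const
              coefs = np.linalg.lstsq(np.column_stack([np.ones(n), xi, mi]),
                                      yi, rcond=None)[0]
              return a * coefs[2]                               # b is the slope on m

          theta = indirect(x, m, y)
          boot = np.array([indirect(x[idx], m[idx], y[idx])
                           for idx in (rng.integers(0, n, n) for _ in range(n_boot))])
          z0 = norm.ppf((boot < theta).mean())                  # bias-correction term
          lo, hi = norm.cdf(2 * z0 + norm.ppf([alpha / 2, 1 - alpha / 2]))
          return np.quantile(boot, [lo, hi])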

  5. Confidence in ASCI scientific simulations

    SciTech Connect

    Ang, J.A.; Trucano, T.G.; Luginbuhl, D.R.

    1998-06-01

    The US Department of Energy's (DOE) Accelerated Strategic Computing Initiative (ASCI) program calls for the development of high end computing and advanced application simulations as one component of a program to eliminate reliance upon nuclear testing in the US nuclear weapons program. This paper presents results from the ASCI program's examination of needs for focused validation and verification (V and V). These V and V activities will ensure that 100 TeraOP-scale ASCI simulation code development projects apply the appropriate means to achieve high confidence in the use of simulations for stockpile assessment and certification. The authors begin with an examination of the roles for model development and validation in the traditional scientific method. The traditional view is that the scientific method has two foundations, experimental and theoretical. While the traditional scientific method does not acknowledge the role for computing and simulation, this examination establishes a foundation for the extension of the traditional processes to include verification and scientific software development that results in the notional framework known as Sargent's Framework. This framework elucidates the relationships between the processes of scientific model development, computational model verification and simulation validation. This paper presents a discussion of the methodologies and practices that the ASCI program will use to establish confidence in large-scale scientific simulations. While the effort for a focused program in V and V is just getting started, the ASCI program has been underway for a couple of years. The authors discuss some V and V activities and preliminary results from the ALEGRA simulation code that is under development for ASCI. The breadth of physical phenomena and the advanced computational algorithms that are employed by ALEGRA make it a subject for V and V that should typify what is required for many ASCI simulations.

  6. Computing Confidence Bounds for Power and Sample Size of the General Linear Univariate Model

    PubMed Central

    Taylor, Douglas J.; Muller, Keith E.

    2013-01-01

    The power of a test, the probability of rejecting the null hypothesis in favor of an alternative, may be computed using estimates of one or more distributional parameters. Statisticians frequently fix mean values and calculate power or sample size using a variance estimate from an existing study. Hence computed power becomes a random variable for a fixed sample size. Likewise, the sample size necessary to achieve a fixed power varies randomly. Standard statistical practice requires reporting uncertainty associated with such point estimates. Previous authors studied an asymptotically unbiased method of obtaining confidence intervals for noncentrality and power of the general linear univariate model in this setting. We provide exact confidence intervals for noncentrality, power, and sample size. Such confidence intervals, particularly one-sided intervals, help in planning a future study and in evaluating existing studies. PMID:24039272
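
    The core idea, that computed power inherits the sampling uncertainty of the variance estimate, can be illustrated for a two-sided two-sample t-test: a chi-square confidence interval for sigma^2 is mapped through the (monotone) power function. This is a simplified sketch, not the exact noncentrality and power intervals derived in the paper; the planned effect size, prior-study variance, and sample sizes are hypothetical inputs:

      import numpy as np
      from scipy import stats

      def power_ci_two_sample_t(delta, s2_hat, df_var, n_per_group, alpha=0.05, conf=0.95):
          """CI for the power of a two-sided two-sample t-test with assumed mean
          difference `delta`, using a variance estimate s2_hat on df_var degrees of
          freedom from a prior study.  Power is monotone decreasing in sigma^2, so
          plugging the CI limits for sigma^2 into the power formula bounds the power."""
          def power(sigma2):
              nc = delta / np.sqrt(2 * sigma2 / n_per_group)    # noncentrality parameter
              df = 2 * n_per_group - 2
              tcrit = stats.t.ppf(1 - alpha / 2, df)
              return 1 - stats.nct.cdf(tcrit, df, nc) + stats.nct.cdf(-tcrit, df, nc)

          q = stats.chi2.ppf([(1 - conf) / 2, (1 + conf) / 2], df_var)
          s2_lo, s2_hi = df_var * s2_hat / q[::-1]              # CI for sigma^2
          return power(s2_hi), power(s2_lo)                     # (lower, upper) power bounds

      # Hypothetical planning inputs: delta = 5, s^2 = 100 on 30 df, n = 25 per group
      print(power_ci_two_sample_t(delta=5.0, s2_hat=100.0, df_var=30, n_per_group=25))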

  7. Interval polynomial positivity

    NASA Technical Reports Server (NTRS)

    Bose, N. K.; Kim, K. D.

    1989-01-01

    It is shown that a univariate interval polynomial is globally positive if and only if two extreme polynomials are globally positive. It is shown that the global positivity property of a bivariate interval polynomial is completely determined by four extreme bivariate polynomials. The cardinality of the determining set for k-variate interval polynomials is 2^k. One of many possible generalizations, where vertex implication for global positivity holds, is made by considering the parameter space to be the set dual of a boxed domain.

  8. A Mathematical Framework for Statistical Decision Confidence.

    PubMed

    Hangya, Balázs; Sanders, Joshua I; Kepecs, Adam

    2016-09-01

    Decision confidence is a forecast about the probability that a decision will be correct. From a statistical perspective, decision confidence can be defined as the Bayesian posterior probability that the chosen option is correct based on the evidence contributing to it. Here, we used this formal definition as a starting point to develop a normative statistical framework for decision confidence. Our goal was to make general predictions that do not depend on the structure of the noise or a specific algorithm for estimating confidence. We analytically proved several interrelations between statistical decision confidence and observable decision measures, such as evidence discriminability, choice, and accuracy. These interrelationships specify necessary signatures of decision confidence in terms of externally quantifiable variables that can be empirically tested. Our results lay the foundations for a mathematically rigorous treatment of decision confidence that can lead to a common framework for understanding confidence across different research domains, from human and animal behavior to neural representations. PMID:27391683
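
    Under this definition, confidence in the simplest symmetric two-alternative case has a closed form. A minimal sketch assuming equal-variance Gaussian evidence centered at +/-mu with equal priors; this specific generative model is my simplifying assumption for illustration, not the paper's general framework:

      import numpy as np

      def bayesian_confidence(evidence, mu=1.0, sigma=1.0):
          """Observer sees evidence x ~ N(+mu, sigma^2) or N(-mu, sigma^2), equal priors.
          The chosen option is sign(x); confidence is the posterior probability that
          this choice is correct: 1 / (1 + exp(-2*mu*|x| / sigma^2))."""
          x = np.asarray(evidence, dtype=float)
          choice = np.sign(x)
          confidence = 1.0 / (1.0 + np.exp(-2.0 * mu * np.abs(x) / sigma**2))
          return choice, confidence

      # Weak evidence gives confidence near 0.5, strong evidence near 1
      print(bayesian_confidence([0.1, 2.5]))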

  9. Reducing overconfidence in the interval judgments of experts.

    PubMed

    Speirs-Bridge, Andrew; Fidler, Fiona; McBride, Marissa; Flander, Louisa; Cumming, Geoff; Burgman, Mark

    2010-03-01

    Elicitation of expert opinion is important for risk analysis when only limited data are available. Expert opinion is often elicited in the form of subjective confidence intervals; however, these are prone to substantial overconfidence. We investigated the influence of elicitation question format, in particular the number of steps in the elicitation procedure. In a 3-point elicitation procedure, an expert is asked for a lower limit, upper limit, and best guess, the two limits creating an interval of some assigned confidence level (e.g., 80%). In our 4-step interval elicitation procedure, experts were also asked for a realistic lower limit, upper limit, and best guess, but no confidence level was assigned; the fourth step was to rate their anticipated confidence in the interval produced. In our three studies, experts made interval predictions of rates of infectious diseases (Study 1, n = 21 and Study 2, n = 24: epidemiologists and public health experts), or marine invertebrate populations (Study 3, n = 34: ecologists and biologists). We combined the results from our studies using meta-analysis, which found average overconfidence of 11.9%, 95% CI [3.5, 20.3] (a hit rate of 68.1% for 80% intervals)-a substantial decrease in overconfidence compared with previous studies. Studies 2 and 3 suggest that the 4-step procedure is more likely to reduce overconfidence than the 3-point procedure (Cohen's d = 0.61, [0.04, 1.18]). PMID:20030766
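
    Overconfidence in this literature is simply the assigned confidence level minus the empirical hit rate of the elicited intervals. A small Python sketch with made-up elicitation data (the interval bounds and true values below are invented for illustration):

      import numpy as np

      def overconfidence(lower, upper, truth, assigned_level=0.80):
          """Hit rate = fraction of true values falling inside the elicited intervals;
          overconfidence = assigned confidence level - hit rate."""
          lower, upper, truth = map(np.asarray, (lower, upper, truth))
          hit_rate = np.mean((truth >= lower) & (truth <= upper))
          return assigned_level - hit_rate, hit_rate

      # Hypothetical 80% intervals that capture 3 of 5 true values -> overconfidence 0.2
      print(overconfidence([1, 4, 2, 10, 5], [3, 9, 6, 20, 7], [2, 10, 5, 12, 9]))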

  10. Item-Specific Gender Differences in Confidence.

    ERIC Educational Resources Information Center

    Foote, Chandra J.

    Very little research has been performed which examines gender differences in confidence in highly specified situations. More generalized studies consistently suggest that women are less confident than men (i.e. Sadker and Sadker, 1994). The few studies of gender differences in item-specific conditions indicate that men tend to be more confident in…

  11. Confidence in Science: The Gender Gap.

    ERIC Educational Resources Information Center

    Fox, Mary Frank; Firebaugh, Glenn

    1992-01-01

    Analyses relationship between gender and confidence in science. Argues that, as women form larger part of labor force and tax base, scientific fields must seek to increase women's generally lower levels of confidence in science. Reports no change in trend of confidence in science between 1973 and 1989, but shows significant and widening gap…

  12. Interval neural networks

    SciTech Connect

    Patil, R.B.

    1995-05-01

    Traditional neural networks like multi-layered perceptrons (MLP) use example patterns, i.e., pairs of real-valued observation vectors (x, y), to approximate a function f such that f(x) = y. To determine the parameters of the approximation, a special version of the gradient descent method called back-propagation is widely used. In many situations, observations of the input and output variables are not precise; instead, we usually have intervals of possible values. The imprecision could be due to the limited accuracy of the measuring instrument or could reflect genuine uncertainty in the observed variables. In such situations the input and output data consist of mixed data types: intervals and precise numbers. Function approximation in interval domains is considered in this paper. We discuss a modification of the classical backpropagation learning algorithm for interval domains. Results are presented with simple examples demonstrating a few properties of nonlinear interval mapping, such as noise resistance and finding a set of solutions to the function approximation problem.

  13. Dynamics of postdecisional processing of confidence.

    PubMed

    Yu, Shuli; Pleskac, Timothy J; Zeigenfuse, Matthew D

    2015-04-01

    Most cognitive theories assume that confidence and choice happen simultaneously and are based on the same information. The 3 studies presented in this article instead show that confidence judgments can arise, at least in part, from a postdecisional evidence accumulation process. As a result of this process, increasing the time between making a choice and confidence judgment improves confidence resolution. This finding contradicts the notion that confidence judgments are biased by decision makers seeking confirmatory evidence. Further analysis reveals that the improved resolution is due to a reduction in confidence in incorrect responses, while confidence in correct responses remains relatively constant. These results are modeled with a sequential sampling process that allows evidence accumulation to continue after a choice is made and maps the amount of accumulated evidence onto a confidence rating. The cognitive modeling analysis reveals that the rate of evidence accumulation following a choice does slow relative to the rate preceding choice. The analysis also shows that the asymmetry between confidence in correct and incorrect choices is compatible with state-dependent decay in the accumulated evidence: Evidence consistent with the current state results in a deceleration of accumulated evidence and consequently evidence appears to have a decreasing impact on observed confidence. In contrast, evidence inconsistent with the current state results in an acceleration of accumulated evidence toward the opposite direction and consequently evidence appears to have an increasing impact on confidence. Taken together, this process-level understanding of confidence suggests a simple strategy for improving confidence accuracy: take a bit more time to make confidence judgments. PMID:25844627
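
    A toy version of the post-decisional accumulation account can be simulated directly. The drift, bound, number of post-decision samples, and the logistic mapping from evidence to confidence below are arbitrary choices for illustration, not the authors' fitted model:

      import numpy as np

      def simulate_trial(drift=0.1, noise=1.0, bound=2.0, post_steps=50, rng=None):
          """Accumulate noisy evidence to a bound to make a choice, keep accumulating
          for `post_steps` samples, then map the final evidence to a confidence rating."""
          rng = rng or np.random.default_rng()
          x, t = 0.0, 0
          while abs(x) < bound:                           # pre-decision accumulation
              x += drift + noise * rng.standard_normal()
              t += 1
          choice = 1 if x > 0 else -1
          for _ in range(post_steps):                     # post-decision accumulation
              x += drift + noise * rng.standard_normal()
          confidence = 1.0 / (1.0 + np.exp(-choice * x))  # logistic mapping to (0, 1)
          return choice, t, confidence

      print(simulate_trial(rng=np.random.default_rng(1)))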

  14. Proper Interval Vertex Deletion

    NASA Astrophysics Data System (ADS)

    Villanger, Yngve

    Deleting a minimum number of vertices from a graph to obtain a proper interval graph is an NP-complete problem. At WG 2010 van Bevern et al. gave an O((14k + 14)^(k+1) k n^6) time algorithm by combining iterative compression, branching, and a greedy algorithm. We show that there exists a simple greedy O(n + m) time algorithm that solves the Proper Interval Vertex Deletion problem on {claw, net, tent, C_4, C_5, C_6}-free graphs. Combining this with branching on the forbidden structures claw, net, tent, C_4, C_5, and C_6 enables us to get an O(6^k k n^6) time algorithm for Proper Interval Vertex Deletion, where k is the number of deleted vertices.

  15. A Generally Robust Approach for Testing Hypotheses and Setting Confidence Intervals for Effect Sizes

    ERIC Educational Resources Information Center

    Keselman, H. J.; Algina, James; Lix, Lisa M.; Wilcox, Rand R.; Deering, Kathleen N.

    2008-01-01

    Standard least squares analysis of variance methods suffer from poor power under arbitrarily small departures from normality and fail to control the probability of a Type I error when standard assumptions are violated. This article describes a framework for robust estimation and testing that uses trimmed means with an approximate degrees of…

  16. Considering Teaching History and Calculating Confidence Intervals in Student Evaluations of Teaching Quality

    ERIC Educational Resources Information Center

    Fraile, Rubén; Bosch-Morell, Francisco

    2015-01-01

    Lecturer promotion and tenure decisions are critical both for university management and for the affected lecturers. Therefore, they should be made cautiously and based on reliable information. Student evaluations of teaching quality are among the most used and analysed sources of such information. However, to date little attention has been paid in…

  17. Improving Content Validation Studies Using an Asymmetric Confidence Interval for the Mean of Expert Ratings

    ERIC Educational Resources Information Center

    Penfield, Randall D.; Miller, Jeffrey M.

    2004-01-01

    As automated scoring of complex constructed-response examinations reaches operational status, the process of evaluating the quality of resultant scores, particularly in contrast to scores of expert human graders, becomes as complex as the data itself. Using a vignette from the Architectural Registration Examination (ARE), this article explores the…

  18. A Comparison of Composite Reliability Estimators: Coefficient Omega Confidence Intervals in the Current Literature

    ERIC Educational Resources Information Center

    Padilla, Miguel A.; Divers, Jasmin

    2016-01-01

    Coefficient omega and alpha are both measures of the composite reliability for a set of items. Unlike coefficient alpha, coefficient omega remains unbiased with congeneric items with uncorrelated errors. Despite this ability, coefficient omega is not as widely used and cited in the literature as coefficient alpha. Reasons for coefficient omega's…

  19. Joint one-sided and two-sided simultaneous confidence intervals.

    PubMed

    Braat, S; Gerhard, D; Hothorn, L A

    2008-01-01

    For the analysis of multiarmed clinical trials often a set consisting of a mixture of one- and two-sided tests can be preferred over a set of common two-sided hypotheses settings. Here we show the straightforward application of existing multiple comparison procedures for the difference and ratio of normally distributed means to complex trial designs, involving one and two test directions. The proposed contrast tests provide a more flexible framework than the existing methods at nearly similar power. An application is illustrated for an example with multiple treatment doses and two active controls; statistical software codes are included for R and SAS System. PMID:18327722

  20. An Inferential Confidence Interval Method of Establishing Statistical Equivalence that Corrects Tryon's (2001) Reduction Factor

    ERIC Educational Resources Information Center

    Tryon, Warren W.; Lewis, Charles

    2008-01-01

    Evidence of group matching frequently takes the form of a nonsignificant test of statistical difference. Theoretical hypotheses of no difference are also tested in this way. These practices are flawed in that null hypothesis statistical testing provides evidence against the null hypothesis and failing to reject H[subscript 0] is not evidence…

  1. Bootstrap Standard Error and Confidence Intervals for the Correlation Corrected for Range Restriction: A Simulation Study

    ERIC Educational Resources Information Center

    Chan, Wai; Chan, Daniel W.-L.

    2004-01-01

    The standard Pearson correlation coefficient is a biased estimator of the true population correlation, ρ, when the predictor and the criterion are range restricted. To correct the bias, the correlation corrected for range restriction, r_c, has been recommended, and a standard formula based on asymptotic results for estimating its standard…
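
    A rough sketch of the bootstrap approach for r_c, using the standard correction for direct range restriction on the predictor (u is the ratio of unrestricted to restricted standard deviations). This percentile version is for illustration only under those assumptions and is not the authors' simulation code:

      import numpy as np

      def corrected_r(x, y, sd_unrestricted):
          """Correlation corrected for direct range restriction on x."""
          r = np.corrcoef(x, y)[0, 1]
          u = sd_unrestricted / np.std(x, ddof=1)   # unrestricted SD / restricted SD
          return r * u / np.sqrt(1.0 + r**2 * (u**2 - 1.0))

      def bootstrap_ci(x, y, sd_unrestricted, n_boot=2000, alpha=0.05, seed=0):
          """Percentile bootstrap CI for the corrected correlation r_c."""
          x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
          rng = np.random.default_rng(seed)
          n = len(x)
          boot = [corrected_r(x[i], y[i], sd_unrestricted)
                  for i in (rng.integers(0, n, n) for _ in range(n_boot))]
          return np.quantile(boot, [alpha / 2, 1 - alpha / 2])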

  2. Bootstrap Standard Error and Confidence Intervals for the Difference between Two Squared Multiple Correlation Coefficients

    ERIC Educational Resources Information Center

    Chan, Wai

    2009-01-01

    A typical question in multiple regression analysis is to determine if a set of predictors gives the same degree of predictor power in two different populations. Olkin and Finn (1995) proposed two asymptotic-based methods for testing the equality of two population squared multiple correlations, ρ₁² and…

  3. Reliability Generalization: The Importance of Considering Sample Specificity, Confidence Intervals, and Subgroup Differences.

    ERIC Educational Resources Information Center

    Onwuegbuzie, Anthony J.; Daniel, Larry G.

    The purposes of this paper are to identify common errors made by researchers when dealing with reliability coefficients and to outline best practices for reporting and interpreting reliability coefficients. Common errors that researchers make are: (1) stating that the instruments are reliable; (2) incorrectly interpreting correlation coefficients;…

  4. Confidence Intervals for a Semiparametric Approach to Modeling Nonlinear Relations among Latent Variables

    ERIC Educational Resources Information Center

    Pek, Jolynn; Losardo, Diane; Bauer, Daniel J.

    2011-01-01

    Compared to parametric models, nonparametric and semiparametric approaches to modeling nonlinearity between latent variables have the advantage of recovering global relationships of unknown functional form. Bauer (2005) proposed an indirect application of finite mixtures of structural equation models where latent components are estimated in the…

  5. Statistical Significance, Effect Size Reporting, and Confidence Intervals: Best Reporting Strategies

    ERIC Educational Resources Information Center

    Capraro, Robert M.

    2004-01-01

    With great interest the author read the May 2002 editorial in the "Journal for Research in Mathematics Education (JRME)" (King, 2002) regarding changes to the 5th edition of the "Publication Manual of the American Psychological Association" (APA, 2001). Of special note to him, and of great import to the field of mathematics education research, are…

  6. A recipe for the construction of confidence limits

    SciTech Connect

    Iain A Bertram et al.

    2000-04-12

    In this note, the authors present the recipe recommended by the Search Limits Committee for the construction of confidence intervals for the use of the D0 collaboration. In another note, currently in preparation, they present the rationale for this recipe, a critique of the current literature on this topic, and several examples of the use of the method. This note is intended to fill the need of the collaboration to have a reference available until the more complete note is finished. Section 2 introduces the notation used in this note, and Section 3 contains the suggested recipe.

  7. Preservice Educators' Confidence in Addressing Sexuality Education

    ERIC Educational Resources Information Center

    Wyatt, Tammy Jordan

    2009-01-01

    This study examined 328 preservice educators' level of confidence in addressing four sexuality education domains and 21 sexuality education topics. Significant differences in confidence levels across the four domains were found for gender, academic major, sexuality education philosophy, and sexuality education knowledge. Preservice educators…

  8. Gender, Family Structure, and Adolescents' Primary Confidants

    ERIC Educational Resources Information Center

    Nomaguchi, Kei M.

    2008-01-01

    Using data from the National Longitudinal Survey of Youth 1997 (N = 4,190), this study examined adolescents' reports of primary confidants. Results showed that nearly 30% of adolescents aged 16-18 nominated mothers as primary confidants, 25% nominated romantic partners, and 20% nominated friends. Nominating romantic partners or friends was related…

  9. Examining Response Confidence in Multiple Text Tasks

    ERIC Educational Resources Information Center

    List, Alexandra; Alexander, Patricia A.

    2015-01-01

    Students' confidence in their responses to a multiple text-processing task and their justifications for those confidence ratings were investigated. Specifically, 215 undergraduates responded to two academic questions, differing by type (i.e., discrete and open-ended) and by domain (i.e., developmental psychology and astrophysics), using a digital…

  10. Self-Confidence and Metacognitive Processes

    ERIC Educational Resources Information Center

    Kleitman, Sabina; Stankov, Lazar

    2007-01-01

    This paper examines the nature of the Self-confidence factor. In particular, we study the relationship between this factor and cognitive, metacognitive, and personality measures. Participants (N=296) were administered a battery of seven cognitive tests that assess three constructs: accuracy, speed, and confidence. Participants were also given the…

  11. Developing confidence decreases guessing and increases competency.

    PubMed

    Center, Deborah L; Adams, Timothy M

    2013-09-01

    Validating competency to meet accreditation and safety demands is a major challenge many organizations face. Traditional testing methods may only reflect guessing and may not capture the amount of misinformation nurses are using for critical decision making. Using a confidence-based learning methodology allows learners to correct misinformation and gain confidence and competency. PMID:24015795

  12. Decision Making and Confidence Given Uncertain Advice

    ERIC Educational Resources Information Center

    Lee, Michael D.; Dry, Matthew J.

    2006-01-01

    We study human decision making in a simple forced-choice task that manipulates the frequency and accuracy of available information. Empirically, we find that people make decisions consistent with the advice provided, but that their subjective confidence in their decisions shows 2 interesting properties. First, people's confidence does not depend…

  13. Confidence and Competence with Mathematical Procedures

    ERIC Educational Resources Information Center

    Foster, Colin

    2016-01-01

    Confidence assessment (CA), in which students state alongside each of their answers a confidence level expressing how certain they are, has been employed successfully within higher education. However, it has not been widely explored with school pupils. This study examined how school mathematics pupils (N = 345) in five different secondary schools…

  14. Confidence Wagering during Mathematics and Science Testing

    ERIC Educational Resources Information Center

    Jack, Brady Michael; Liu, Chia-Ju; Chiu, Hoan-Lin; Shymansky, James A.

    2009-01-01

    This proposal presents the results of a case study involving five 8th grade Taiwanese classes, two mathematics and three science classes. These classes used a new method of testing called confidence wagering. This paper advocates the position that confidence wagering can predict the accuracy of a student's test answer selection during…

  15. Hypercorrection of High Confidence Errors in Children

    ERIC Educational Resources Information Center

    Metcalfe, Janet; Finn, Bridgid

    2012-01-01

    Three experiments investigated whether the hypercorrection effect--the finding that errors committed with high confidence are easier, rather than more difficult, to correct than are errors committed with low confidence--occurs in grade school children as it does in young adults. All three experiments showed that Grade 3-6 children hypercorrected…

  16. The Role of Confidence in Lifelong Learning.

    ERIC Educational Resources Information Center

    Norman, Marie; Hyland, Terry

    2003-01-01

    Focuses on the concept of confidence stating that it is commonly misunderstood. Presents suggestions for managing and supporting learning after exploring research related to confidence. Offers information on original research on student teachers learning to teach in the post-school sector. (CMK)

  17. An informative confidence metric for ATR.

    SciTech Connect

    Bow, Wallace Johnston Jr.; Richards, John Alfred; Bray, Brian Kenworthy

    2003-03-01

    Automatic or assisted target recognition (ATR) is an important application of synthetic aperture radar (SAR). Most ATR researchers have focused on the core problem of declaration-that is, detection and identification of targets of interest within a SAR image. For ATR declarations to be of maximum value to an image analyst, however, it is essential that each declaration be accompanied by a reliability estimate or confidence metric. Unfortunately, the need for a clear and informative confidence metric for ATR has generally been overlooked or ignored. We propose a framework and methodology for evaluating the confidence in an ATR system's declarations and competing target hypotheses. Our proposed confidence metric is intuitive, informative, and applicable to a broad class of ATRs. We demonstrate that seemingly similar ATRs may differ fundamentally in the ability-or inability-to identify targets with high confidence.

  18. Interval arithmetic operations for uncertainty analysis with correlated interval variables

    NASA Astrophysics Data System (ADS)

    Jiang, Chao; Fu, Chun-Ming; Ni, Bing-Yu; Han, Xu

    2016-08-01

    A new interval arithmetic method is proposed to solve interval functions with correlated intervals through which the overestimation problem existing in interval analysis could be significantly alleviated. The correlation between interval parameters is defined by the multidimensional parallelepiped model which is convenient to describe the correlative and independent interval variables in a unified framework. The original interval variables with correlation are transformed into the standard space without correlation, and then the relationship between the original variables and the standard interval variables is obtained. The expressions of four basic interval arithmetic operations, namely addition, subtraction, multiplication, and division, are given in the standard space. Finally, several numerical examples and a two-step bar are used to demonstrate the effectiveness of the proposed method.
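
    For reference, the four basic operations of classical interval arithmetic that the paper generalizes to correlated inputs are easy to state. The sketch below also shows the overestimation the abstract refers to when one quantity is treated as two independent intervals:

      def i_add(a, b): return (a[0] + b[0], a[1] + b[1])
      def i_sub(a, b): return (a[0] - b[1], a[1] - b[0])
      def i_mul(a, b):
          p = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
          return (min(p), max(p))
      def i_div(a, b):
          if b[0] <= 0 <= b[1]:
              raise ZeroDivisionError("divisor interval contains zero")
          return i_mul(a, (1.0 / b[1], 1.0 / b[0]))

      # Overestimation example: x - x is not {0} when x is treated as independent
      x = (1.0, 2.0)
      print(i_sub(x, x))   # (-1.0, 1.0), wider than the true range {0}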

  19. Influence of Nodule Detection Software on Radiologists’ Confidence in Identifying Pulmonary Nodules With Computed Tomography

    PubMed Central

    Nietert, Paul J.; Ravenel, James G.; Taylor, Katherine K.; Silvestri, Gerard A.

    2011-01-01

    Purpose With advances in technology, detection of small pulmonary nodules is increasing. Nodule detection software (NDS) has been developed to assist radiologists with pulmonary nodule diagnosis. Although it may increase sensitivity for small nodules, often there is an accompanying increase in false-positive findings. We designed a study to examine the extent to which computed tomography (CT) NDS influences the confidence of radiologists in identifying small pulmonary nodules. Materials and Methods Eight radiologists (readers) with different levels of experience examined thoracic CT scans of 131 cases and identified all the clinically relevant pulmonary nodules. The reference standard was established by an expert, dedicated thoracic radiologist. For each nodule, the readers recorded nodule size, density, location, and confidence level. Two weeks (or more) later, the readers reinterpreted the same scans; however, this time they were provided marks, when present, as indicated by NDS and asked to reassess their level of confidence. The effect of NDS on changes in reader confidence was assessed using multivariable generalized linear regression models. Results A total of 327 unique nodules were identified. Declines in confidence were significantly (P<0.05) associated with the absence of an NDS mark and smaller nodules (odds ratio=71.0, 95% confidence interval =14.8–339.7). Among nodules with pre-NDS confidence less than 100%, increases in confidence were significantly (P<0.05) associated with the presence of an NDS mark (odds ratio=6.0, 95% confidence interval =2.7–13.6) and larger nodules. Secondary findings showed that NDS did not improve reader diagnostic accuracy. Conclusion Although in this study NDS does not seem to enhance reader accuracy, the confidence of the radiologists in identifying small pulmonary nodules with CT is greatly influenced by NDS. PMID:20498624

  20. Interval-valued random functions and the kriging of intervals

    SciTech Connect

    Diamond, P.

    1988-04-01

    Estimation procedures using data that include some values known to lie within certain intervals are usually regarded as problems of constrained optimization. A different approach is used here. Intervals are treated as elements of a positive cone, obeying the arithmetic of interval analysis, and positive interval-valued random functions are discussed. A kriging formalism for interval-valued data is developed. It provides estimates that are themselves intervals. In this context, the condition that kriging weights be positive is seen to arise in a natural way. A numerical example is given, and the extension to universal kriging is sketched.

  1. Self-Confidence of Selected Indian Students

    ERIC Educational Resources Information Center

    Martin, James C.

    1974-01-01

    The article discusses a study that determined if selected primary and junior high Indian students' self-confidence was related to grade level and to the number of years enrolled in a particular Bureau of Indian Affairs boarding school. (KM)

  2. Weighting Mean and Variability during Confidence Judgments

    PubMed Central

    de Gardelle, Vincent; Mamassian, Pascal

    2015-01-01

    Humans can not only perform some visual tasks with great precision, they can also judge how good they are in these tasks. However, it remains unclear how observers produce such metacognitive evaluations, and how these evaluations might be dissociated from the performance in the visual task. Here, we hypothesized that some stimulus variables could affect confidence judgments above and beyond their impact on performance. In a motion categorization task on moving dots, we manipulated the mean and the variance of the motion directions, to obtain a low-mean low-variance condition and a high-mean high-variance condition with matched performances. Critically, in terms of confidence, observers were not indifferent between these two conditions. Observers exhibited marked preferences, which were heterogeneous across individuals, but stable within each observer when assessed one week later. Thus, confidence and performance are dissociable and observers’ confidence judgments put different weights on the stimulus variables that limit performance. PMID:25793275

  3. Experimenting with musical intervals

    NASA Astrophysics Data System (ADS)

    Lo Presto, Michael C.

    2003-07-01

    When two tuning forks of different frequency are sounded simultaneously the result is a complex wave with a repetition frequency that is the fundamental of the harmonic series to which both frequencies belong. The ear perceives this 'musical interval' as a single musical pitch with a sound quality produced by the harmonic spectrum responsible for the waveform. This waveform can be captured and displayed with data collection hardware and software. The fundamental frequency can then be calculated and compared with what would be expected from the frequencies of the tuning forks. Also, graphing software can be used to determine equations for the waveforms and predict their shapes. This experiment could be used in an introductory physics or musical acoustics course as a practical lesson in superposition of waves, basic Fourier series and the relationship between some of the ear's subjective perceptions of sound and the physical properties of the waves that cause them.
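
    The repetition frequency described here is the greatest common divisor of the two fork frequencies, assuming both are integer multiples of a common fundamental in whole hertz. A short Python check with example frequencies:

      import math

      def repetition_frequency(f1_hz, f2_hz):
          """Fundamental of the harmonic series containing both frequencies,
          assuming each is (approximately) an integer number of hertz."""
          return math.gcd(round(f1_hz), round(f2_hz))

      # A perfect fifth: 330 Hz and 220 Hz share the 110 Hz fundamental (3rd and 2nd harmonics)
      print(repetition_frequency(330, 220))   # 110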

  4. Developing Confidence Limits For Reliability Of Software

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.

    1991-01-01

    Technique developed for estimating reliability of software by use of Moranda geometric de-eutrophication model. Pivotal method enables straightforward construction of exact bounds with associated degree of statistical confidence about reliability of software. Confidence limits thus derived provide precise means of assessing quality of software. Limits take into account number of bugs found while testing and effects of sampling variation associated with random order of discovering bugs.

  5. Confidence regions of planar cardiac vectors

    NASA Technical Reports Server (NTRS)

    Dubin, S.; Herr, A.; Hunt, P.

    1980-01-01

    A method for plotting the confidence regions of vectorial data obtained in electrocardiology is presented. The 90%, 95% and 99% confidence regions of cardiac vectors represented in a plane are obtained in the form of an ellipse centered at coordinates corresponding to the means of a sample selected at random from a bivariate normal distribution. An example of such a plot for the frontal plane QRS mean electrical axis for 80 horses is also presented.
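
    A comparable ellipse can be computed from a sample mean and covariance matrix. The sketch below treats the fitted bivariate-normal parameters as known and scales the covariance eigenvalues by the chi-square quantile with 2 degrees of freedom; it may differ in detail from the 1980 procedure:

      import numpy as np
      from scipy.stats import chi2

      def confidence_ellipse(points, level=0.95):
          """Ellipse covering `level` of a fitted bivariate normal: returns the centre,
          semi-axis lengths, and orientation (radians) of the major axis."""
          pts = np.asarray(points, dtype=float)
          centre = pts.mean(axis=0)
          cov = np.cov(pts, rowvar=False)
          eigvals, eigvecs = np.linalg.eigh(cov)            # ascending eigenvalues
          scale = chi2.ppf(level, df=2)                     # 5.991 for 95%, 2 d.o.f.
          semi_axes = np.sqrt(scale * eigvals)
          angle = np.arctan2(eigvecs[1, 1], eigvecs[0, 1])  # direction of major axis
          return centre, semi_axes, angle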

  6. Credible Intervals for Precision and Recall Based on a K-Fold Cross-Validated Beta Distribution.

    PubMed

    Wang, Yu; Li, Jihong

    2016-08-01

    In typical machine learning applications such as information retrieval, precision and recall are two commonly used measures for assessing an algorithm's performance. Symmetrical confidence intervals based on K-fold cross-validated t distributions are widely used for the inference of precision and recall measures. As we confirmed through simulated experiments, however, these confidence intervals often exhibit lower degrees of confidence, which may easily lead to liberal inference results. Thus, it is crucial to construct faithful confidence (credible) intervals for precision and recall with a high degree of confidence and a short interval length. In this study, we propose two posterior credible intervals for precision and recall based on K-fold cross-validated beta distributions. The first credible interval for precision (or recall) is constructed based on the beta posterior distribution inferred by all K data sets corresponding to K confusion matrices from a K-fold cross-validation. Second, considering that each data set corresponding to a confusion matrix from a K-fold cross-validation can be used to infer a beta posterior distribution of precision (or recall), the second proposed credible interval for precision (or recall) is constructed based on the average of K beta posterior distributions. Experimental results on simulated and real data sets demonstrate that the first credible interval proposed in this study almost always resulted in degrees of confidence greater than 95%. With an acceptable degree of confidence, both of our two proposed credible intervals have shorter interval lengths than those based on a corrected K-fold cross-validated t distribution. Meanwhile, the average ranks of these two credible intervals are superior to that of the confidence interval based on a K-fold cross-validated t distribution for the degree of confidence and are superior to that of the confidence interval based on a corrected K-fold cross-validated t distribution for the
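
    As a simplified illustration of the beta-posterior idea for a single confusion matrix (uniform Beta(1, 1) prior; the paper's intervals pool the K cross-validation folds and differ in detail):

      from scipy.stats import beta

      def precision_credible_interval(tp, fp, level=0.95, prior=(1.0, 1.0)):
          """Equal-tailed credible interval for precision = TP / (TP + FP) under a
          Beta prior; the posterior is Beta(prior[0] + TP, prior[1] + FP)."""
          a, b = prior[0] + tp, prior[1] + fp
          tail = (1.0 - level) / 2.0
          return beta.ppf(tail, a, b), beta.ppf(1.0 - tail, a, b)

      # Hypothetical counts: prints the 2.5% and 97.5% posterior quantiles of precision
      print(precision_credible_interval(tp=45, fp=5))

    The same construction applies to recall by replacing FP with FN.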

  7. Confidence in biopreparedness authorities among Finnish conscripts.

    PubMed

    Vartti, Anne-Marie; Aro, Arja R; Jormanainen, Vesa; Henriksson, Markus; Nikkari, Simo

    2010-08-01

    A large sample of Finnish military conscripts of the armored brigade were questioned on the extent to which they trusted the information given by biopreparedness authorities (such as the police, military, health care, and public health institutions) and how confident they were in the authority's ability to protect the public during a potential infectious disease outbreak, from either natural or deliberate causes. Participants answered a written questionnaire during their initial health inspection in July 2007. From a total of 1,000 conscripts, 953 male conscripts returned the questionnaire. The mean sum scores for confidence in the information given by biopreparedness authorities and the media on natural and bioterrorism-related outbreaks (range = 0-30) were 20.14 (SD = 7.79) and 20.12 (SD = 7.69), respectively. Mean sum scores for the respondents' confidence in the ability of the biopreparedness authorities to protect the public during natural and bioterrorism-related outbreaks (range = 0-25) were 16.04 (SD = 5.78) and 16.17 (SD = 5.89). Most respondents indicated that during a natural outbreak, they would have confidence in information provided by a health care institution such as central hospitals and primary health care centers, whereas in the case of bioterrorism, the respondents indicated that they would have confidence in the defense forces and central hospitals. PMID:20731266

  8. Chiropractic Interns' Perceptions of Stress and Confidence

    PubMed Central

    Spegman, Adele Mattinat; Herrin, Sean

    2007-01-01

    Objective: Psychological stress has been shown to influence learning and performance among medical and graduate students. Few studies have examined psychological stress in chiropractic students and interns. This preliminary study explored interns' perceptions around stress and confidence at the midpoint of professional training. Methods: This pilot study used a mixed-methods approach, combining rating scales and modified qualitative methods, to explore interns' lived experience. Eighty-eight interns provided ratings of stress and confidence and narrative responses to broad questions. Results: Participants reported multiple sources of stress; stress and confidence ratings were inversely related. Interns described stress as forced priorities, inadequate time, and perceptions of weak performance. Two themes, “convey respect” and “guide real-world learning,” describe faculty actions that minimized stress and promoted confidence. Conclusion: Chiropractic interns experience varying degrees of stress, which is managed with diverse strategies. The development of confidence appears to be influenced by the consistency and manner in which feedback is provided. Although faculty cannot control the amount or sources of stress, awareness of interns' perceptions can strengthen our effectiveness as educators. PMID:18483584

  9. Adaptive Confidence Bands for Nonparametric Regression Functions

    PubMed Central

    Cai, T. Tony; Low, Mark; Ma, Zongming

    2014-01-01

    A new formulation for the construction of adaptive confidence bands in non-parametric function estimation problems is proposed. Confidence bands are constructed which have size that adapts to the smoothness of the function while guaranteeing that both the relative excess mass of the function lying outside the band and the measure of the set of points where the function lies outside the band are small. It is shown that the bands adapt over a maximum range of Lipschitz classes. The adaptive confidence band can be easily implemented in standard statistical software with wavelet support. Numerical performance of the procedure is investigated using both simulated and real datasets. The numerical results agree well with the theoretical analysis. The procedure can be easily modified and used for other nonparametric function estimation models. PMID:26269661

  10. Computation of the intervals of uncertainties about the parameters found for identification

    NASA Technical Reports Server (NTRS)

    Mereau, P.; Raymond, J.

    1982-01-01

    A modeling method to calculate the intervals of uncertainty for parameters found by identification is described. The region of confidence and the general approach to the calculation of these intervals are discussed. The general subprograms for determination of dimensions are described. They provide the organizational charts for the subprograms, the tests carried out and the listings of the different subprograms.

  11. An interval model updating strategy using interval response surface models

    NASA Astrophysics Data System (ADS)

    Fang, Sheng-En; Zhang, Qiu-Hu; Ren, Wei-Xin

    2015-08-01

    Stochastic model updating provides an effective way of handling uncertainties existing in real-world structures. In general, probabilistic theories, fuzzy mathematics or interval analyses are involved in the solution of inverse problems. However in practice, probability distributions or membership functions of structural parameters are often unavailable due to insufficient information of a structure. At this moment an interval model updating procedure shows its superiority in the aspect of problem simplification since only the upper and lower bounds of parameters and responses are sought. To this end, this study develops a new concept of interval response surface models for the purpose of efficiently implementing the interval model updating procedure. The frequent interval overestimation due to the use of interval arithmetic can be maximally avoided leading to accurate estimation of parameter intervals. Meanwhile, the establishment of an interval inverse problem is highly simplified, accompanied by a saving of computational costs. By this means a relatively simple and cost-efficient interval updating process can be achieved. Lastly, the feasibility and reliability of the developed method have been verified against a numerical mass-spring system and also against a set of experimentally tested steel plates.

  12. Experimental uncertainty estimation and statistics for data having interval uncertainty.

    SciTech Connect

    Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)

    2007-05-01

    This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
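
    For the simplest statistic mentioned, the sample mean, the interval bounds are just the means of the endpoints; the variance, percentiles, and other quantities discussed in the report require more careful algorithms. A minimal Python sketch with made-up measurements:

      import numpy as np

      def interval_mean(intervals):
          """Bounds on the sample mean for data known only up to intervals [lo, hi]."""
          lo, hi = np.asarray(intervals, dtype=float).T
          return lo.mean(), hi.mean()

      # Three measurements with interval (epistemic) uncertainty
      print(interval_mean([(1.0, 2.0), (2.5, 3.5), (0.5, 4.0)]))   # (1.333..., 3.166...)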

  13. Comparing interval estimates for small sample ordinal CFA models.

    PubMed

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis models (CFA) for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positive biased than negatively biased, that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading

  14. Comparing interval estimates for small sample ordinal CFA models

    PubMed Central

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more often positively biased than negatively biased, that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading

  15. The Confidence Factor in Liberal Education

    ERIC Educational Resources Information Center

    Gordon, Daniel

    2012-01-01

    With the US unemployment rate at 9 percent, it's rational for college students to lose confidence in the liberal arts and to opt for a vocational major. Or is it? There is a compelling economic case for the liberal arts. Against those who call for more professional training, liberal educators should concede nothing. However, they do have a…

  16. Mixed Confidence Estimation for Iterative CT Reconstruction.

    PubMed

    Perlmutter, David S; Kim, Soo Mee; Kinahan, Paul E; Alessio, Adam M

    2016-09-01

    Dynamic (4D) CT imaging is used in a variety of applications, but the two major drawbacks of the technique are its increased radiation dose and longer reconstruction time. Here we present a statistical analysis of our previously proposed Mixed Confidence Estimation (MCE) method that addresses both these issues. This method, where framed iterative reconstruction is only performed on the dynamic regions of each frame while static regions are fixed across frames to a composite image, was proposed to reduce computation time. In this work, we generalize the previous method to describe any application where a portion of the image is known with higher confidence (static, composite, lower-frequency content, etc.) and a portion of the image is known with lower confidence (dynamic, targeted, etc.). We show that by splitting the image space into higher and lower confidence components, MCE can lower the estimator variance in both regions compared to conventional reconstruction. We present a theoretical argument for this reduction in estimator variance and verify this argument with proof-of-principle simulations. We also propose a fast approximation of the variance of images reconstructed with MCE and confirm that this approximation is accurate compared with both analytic calculations and multi-realization estimates of image variance. This MCE method requires less computation time and provides reduced image variance for imaging scenarios where portions of the image are known with more certainty than others, allowing for potentially reduced radiation dose and/or improved dynamic imaging. PMID:27008663

  17. Detecting Disease in Radiographs with Intuitive Confidence

    PubMed Central

    Jaeger, Stefan

    2015-01-01

    This paper argues in favor of a specific type of confidence for use in computer-aided diagnosis and disease classification, namely, sine/cosine values of angles represented by points on the unit circle. The paper shows how this confidence is motivated by Chinese medicine and how sine/cosine values are directly related with the two forces Yin and Yang. The angle for which sine and cosine are equal (45°) represents the state of equilibrium between Yin and Yang, which is a state of nonduality that indicates neither normality nor abnormality in terms of disease classification. The paper claims that the proposed confidence is intuitive and can be readily understood by physicians. The paper underpins this thesis with theoretical results in neural signal processing, stating that a sine/cosine relationship between the actual input signal and the perceived (learned) input is key to neural learning processes. As a practical example, the paper shows how to use the proposed confidence values to highlight manifestations of tuberculosis in frontal chest X-rays. PMID:26495433
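
    A toy sketch of the idea described above, under the assumption that a classifier's abnormality score in [0, 1] is mapped linearly to an angle in the first quadrant; the function name and the mapping are illustrative, not the paper's implementation.

      import math

      def yin_yang_confidence(score):
          """Map a hypothetical abnormality score in [0, 1] to an angle on the unit
          circle and return (sine, cosine) as paired confidences for 'abnormal' and
          'normal'.  At score = 0.5 the angle is 45 degrees and the two values are
          equal, the equilibrium state described in the paper."""
          theta = score * math.pi / 2.0
          return math.sin(theta), math.cos(theta)

      abnormal_conf, normal_conf = yin_yang_confidence(0.8)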

  18. Sources of Confidence in School Community Councils

    ERIC Educational Resources Information Center

    Nygaard, Richard

    2010-01-01

    Three Utah middle level school community councils participated in a qualitative strengths-based process evaluation. Two of the school community councils were identified as exemplary, and the third was just beginning to function. One aspect of the evaluation was the source of school community council members' confidence. Each school had unique…

  19. Observed Consultation: Confidence and Accuracy of Assessors

    ERIC Educational Resources Information Center

    Tweed, Mike; Ingham, Christopher

    2010-01-01

    Judgments made by the assessors observing consultations are widely used in the assessment of medical students. The aim of this research was to study judgment accuracy and confidence and the relationship between these. Assessors watched recordings of consultations, scoring the students on: a checklist of items; attributes of consultation; a…

  20. Confident Communication: Speaking Tips for Educators.

    ERIC Educational Resources Information Center

    Parker, Douglas A.

    This resource book seeks to provide the building blocks needed for public speaking while eliminating the fear factor. The book explains how educators can perfect their oratorical capabilities as well as enjoy the security, confidence, and support needed to create and deliver dynamic speeches. Following an Introduction: A Message for Teachers,…

  1. Evaluating Measures of Optimism and Sport Confidence

    ERIC Educational Resources Information Center

    Fogarty, Gerard J.; Perera, Harsha N.; Furst, Andrea J.; Thomas, Patrick R.

    2016-01-01

    The psychometric properties of the Life Orientation Test-Revised (LOT-R), the Sport Confidence Inventory (SCI), and the Carolina SCI (CSCI) were examined in a study involving 260 athletes. The study aimed to test the dimensional structure, convergent and divergent validity, and invariance over competition level of scores generated by these…

  2. Current Developments in Measuring Academic Behavioural Confidence

    ERIC Educational Resources Information Center

    Sander, Paul

    2009-01-01

    Using published findings and by further analyses of existing data, the structure, validity and utility of the Academic Behavioural Confidence scale (ABC) is critically considered. Validity is primarily assessed through the scale's relationship with other existing scales as well as by looking for predicted differences. The utility of the ABC scale…

  3. The Effect of Adaptive Confidence Strategies in Computer-Assisted Instruction on Learning and Learner Confidence

    ERIC Educational Resources Information Center

    Warren, Richard Daniel

    2012-01-01

    The purpose of this research was to investigate the effects of including adaptive confidence strategies in instructionally sound computer-assisted instruction (CAI) on learning and learner confidence. Seventy-one general educational development (GED) learners recruited from various GED learning centers at community colleges in the southeast United…

  4. Highly Confident but Wrong: Gender Differences and Similarities in Confidence Judgments.

    ERIC Educational Resources Information Center

    Lundeberg, Mary A.; And Others

    1994-01-01

    Gender differences in item-specific confidence judgments were studied for 70 male and 181 female college students. Gender differences in confidence were dependent on context and the domain being tested. Both men and women were overconfident, but men were especially overconfident when incorrect. (SLD)

  5. Building Public Confidence in Nuclear Activities

    SciTech Connect

    Isaacs, T

    2002-03-27

    Achieving public acceptance has become a central issue in discussions regarding the future of nuclear power and associated nuclear activities. Effective public communication and public participation are often put forward as the key building blocks in garnering public acceptance. A recent international workshop in Finland provided insights into other features that might also be important to building and sustaining public confidence in nuclear activities. The workshop was held in Finland in close cooperation with Finnish stakeholders. This was most appropriate because of the recent successes in achieving positive decisions at the municipal, governmental, and Parliamentary levels, allowing the Finnish high-level radioactive waste repository program to proceed, including the identification and approval of a proposed candidate repository site. Much of the workshop discussion appropriately focused on the roles of public participation and public communications in building public confidence. It was clear that well constructed and implemented programs of public involvement and communication and a sense of fairness were essential in building the extent of public confidence needed to allow the repository program in Finland to proceed. It was also clear that there were a number of other elements beyond public involvement that contributed substantially to the success in Finland to date. And, in fact, it appeared that these other factors were also necessary to achieving the Finnish public acceptance. In other words, successful public participation and communication were necessary but not sufficient. What else was important? Culture, politics, and history vary from country to country, providing differing contexts for establishing and maintaining public confidence. What works in one country will not necessarily be effective in another. Nonetheless, there appear to be certain elements that might be common to programs that are successful in sustaining public confidence and some of

  6. Building Public Confidence in Nuclear Activities

    SciTech Connect

    Isaacs, T

    2002-02-13

    Achieving public acceptance has become a central issue in discussions regarding the future of nuclear power and associated nuclear activities. Effective public communication and public participation are often put forward as the key building blocks in garnering public acceptance. A recent international workshop in Finland provided insights into other features that might also be important to building and sustaining public confidence in nuclear activities. The workshop was held in Finland in close cooperation with Finnish stakeholders. This was most appropriate because of the recent successes in achieving positive decisions at the municipal, governmental, and Parliamentary levels, allowing the Finnish high-level radioactive waste repository program to proceed, including the identification and approval of a proposed candidate repository site. Much of the workshop discussion appropriately focused on the roles of public participation and public communications in building public confidence. It was clear that well constructed and implemented programs of public involvement and communication and a sense of fairness were essential in building the extent of public confidence needed to allow the repository program in Finland to proceed. It was also clear that there were a number of other elements beyond public involvement that contributed substantially to the success in Finland to date. And, in fact, it appeared that these other factors were also necessary to achieving the Finnish public acceptance. In other words, successful public participation and communication were necessary but not sufficient. What else was important? Culture, politics, and history vary from country to country, providing differing contexts for establishing and maintaining public confidence. What works in one country will not necessarily be effective in another. Nonetheless, there appear to be certain elements that might be common to programs that are successful in sustaining public confidence, and some of

  7. Random selection as a confidence building tool

    SciTech Connect

    Macarthur, Duncan W; Hauck, Danielle; Langner, Diana; Thron, Jonathan; Smith, Morag; Williams, Richard

    2010-01-01

    Any verification measurement performed on potentially classified nuclear material must satisfy two seemingly contradictory constraints. First and foremost, no classified information can be released. At the same time, the monitoring party must have confidence in the veracity of the measurement. The first concern can be addressed by performing the measurements within the host facility using instruments under the host's control. Because the data output in this measurement scenario is also under host control, it is difficult for the monitoring party to have confidence in that data. One technique for addressing this difficulty is random selection. The concept of random selection can be thought of as four steps: (1) The host presents several 'identical' copies of a component or system to the monitor. (2) One (or more) of these copies is randomly chosen by the monitors for use in the measurement system. (3) Similarly, one or more is randomly chosen to be validated further at a later date in a monitor-controlled facility. (4) Because the two components or systems are identical, validation of the 'validation copy' is equivalent to validation of the measurement system. This procedure sounds straightforward, but effective application may be quite difficult. Although random selection is often viewed as a panacea for confidence building, the amount of confidence generated depends on the monitor's continuity of knowledge for both validation and measurement systems. In this presentation, we will discuss the random selection technique, as well as where and how this technique might be applied to generate maximum confidence. In addition, we will discuss the role of modular measurement-system design in facilitating random selection and describe a simple modular measurement system incorporating six small ³He neutron detectors and a single high-purity germanium gamma detector.
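
    A minimal sketch of the selection step only (steps 1-3 above), assuming four hypothetical 'identical' copies; the host/monitor negotiation and continuity-of-knowledge aspects are not modelled here.

      import random

      copies = ["unit_A", "unit_B", "unit_C", "unit_D"]  # hypothetical identical copies presented by the host
      measurement_unit, validation_unit = random.sample(copies, 2)  # monitor's random draw
      # measurement_unit goes into the measurement system; validation_unit is set
      # aside for later authentication in a monitor-controlled facility.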

  8. [Birth interval differentials in Rwanda].

    PubMed

    Ilinigumugabo, A

    1992-01-01

    Data from the 1983 Rwanda Fertility Survey are the basis for this study of variations in birth intervals. An analysis of the quality of the Rwandan birth data showed it to be relatively good. The life table technique utilized in this study is explained in a section on methodology, which also describes the Rwanda Fertility Survey questionnaires. A comparison of birth intervals in which live born children died before their first birthday or survived the first birthday shows that infant mortality shortens birth intervals by an average of 5 months. The first birth interval was almost 28 months when the oldest child survived, but declined to 23 months when the oldest child died before age 1. The effect of mortality on birth intervals increased with parity, from 5 months for the first birth interval to 5.5 months for the second and third and 6.4 months for subsequent intervals. The differences amounted to 9 or 10 months for women separating at parities under 4 and over 14 months for women separating at parities of 4 or over. Birth intervals generally increased with parity, maternal age, and the duration of the union. But women entering into unions at higher ages had shorter birth intervals. In the absence of infant mortality and dissolution of the union, women attending school beyond the primary level had first birth intervals 6 months shorter on average than other women. Controlling for infant mortality and marital dissolution, women working for wages had average birth intervals of under 2 years for the first 5 births. Father's occupation had a less marked influence on birth intervals. Urban residence was associated with a shortening of the average birth interval by 6 months between the first and second birth and 5 months between the second and third births. In the first 5 births, Tutsi women had birth intervals 1.5 months longer on average than Hutu women. Women in polygamous unions did not have significantly different birth intervals except perhaps among older women

  9. Lower confidence bound on the percentage improvement in comparing two failure rates

    NASA Astrophysics Data System (ADS)

    Angus, John E.

    1992-06-01

    It is often necessary to determine whether a design change in a product has actually improved its failure rate, and to compute a lower confidence bound on the percentage of failure-rate improvement effected by the change. This paper shows how such a bound can be computed based on certain test data. The main result of the paper is a special case of an equivalent result derived in Lehmann for hypothesis testing and used extensively in applied statistics. However, it is not well-known in its confidence interval form, nor is it extensively reported in reliability methods books, and its derivation is important in reliability testing.
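
    The abstract does not reproduce the formula, but a common route to such a bound (a hedged sketch, assuming failures in each design follow a Poisson process over known test exposures) is to condition on the total failure count, so that the new-design count is binomial, and then invert a Clopper-Pearson bound:

      from scipy.stats import beta

      def improvement_lower_bound(n_old, t_old, n_new, t_new, conf=0.95):
          """Lower confidence bound on the improvement 1 - lambda_new/lambda_old,
          given failure counts n_old, n_new over test exposures t_old, t_new."""
          if n_old == 0:
              return float("-inf")  # no old-design failures: no informative bound from this route
          # Clopper-Pearson upper bound on p = lambda_new*t_new / (lambda_old*t_old + lambda_new*t_new)
          p_upper = beta.ppf(conf, n_new + 1, n_old)
          rate_ratio_upper = (t_old / t_new) * p_upper / (1.0 - p_upper)
          return 1.0 - rate_ratio_upper

      # e.g. 20 failures in 1000 h before the design change, 5 failures in 1000 h after
      print(improvement_lower_bound(20, 1000.0, 5, 1000.0))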

  10. A variance based confidence criterion for ERA identified modal parameters. [Eigensystem Realization Algorithm

    NASA Technical Reports Server (NTRS)

    Longman, Richard W.; Juang, Jer-Nan

    1988-01-01

    The realization theory is developed in a systematic manner for the Eigensystem Realization Algorithm (ERA) used for system identification. First, perturbation results are obtained which describe the linearized changes in the identified parameters resulting from small changes in the data. Formulas are then derived that can be used to evaluate the variance of each of the identified parameters, assuming that the noise level is sufficiently low to allow the application of linearized results. These variances can be converted to give confidence intervals for each of the parameters for any chosen confidence level.
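
    The last step, converting an identified parameter's variance into a confidence interval, is a normal-approximation calculation; a minimal sketch (assuming the linearized, low-noise regime described above, and hypothetical numbers) is:

      import math
      from scipy.stats import norm

      def parameter_ci(estimate, variance, conf=0.95):
          """Two-sided confidence interval for an identified modal parameter,
          assuming the linearized estimate is approximately normally distributed."""
          z = norm.ppf(0.5 + conf / 2.0)
          half_width = z * math.sqrt(variance)
          return estimate - half_width, estimate + half_width

      # e.g. an identified natural frequency of 12.40 Hz with variance 2.5e-3 Hz^2
      print(parameter_ci(12.40, 2.5e-3))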

  11. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…

  12. Children's Discrimination of Melodic Intervals.

    ERIC Educational Resources Information Center

    Schellenberg, E. Glenn; Trehub, Sandra E.

    1996-01-01

    Adults and children listened to tone sequences and were required to detect changes either from intervals with simple frequency ratios to intervals with complex ratios or vice versa. Adults performed better on changes from simple to complex ratios than on the reverse changes. Similar performance was observed for 6-year olds who had never taken…

  13. VARIABLE TIME-INTERVAL GENERATOR

    DOEpatents

    Gross, J.E.

    1959-10-31

    This patent relates to a pulse generator and more particularly to a time interval generator wherein the time interval between pulses is precisely determined. The variable time generator comprises two oscillators with one having a variable frequency output and the other a fixed frequency output. A frequency divider is connected to the variable oscillator for dividing its frequency by a selected factor and a counter is used for counting the periods of the fixed oscillator occurring during a cycle of the divided frequency of the variable oscillator. This defines the period of the variable oscillator in terms of that of the fixed oscillator. A circuit is provided for selecting as a time interval a predetermined number of periods of the variable oscillator. The output of the generator consists of a first pulse produced by a trigger circuit at the start of the time interval and a second pulse marking the end of the time interval produced by the same trigger circuit.

  14. On the Confidence Limit of Hilbert Spectrum

    NASA Technical Reports Server (NTRS)

    Huang, Norden

    2003-01-01

    A confidence limit is a routine requirement for Fourier spectral analysis. But this confidence limit is established based on ergodic theory: for a stationary process, the temporal average equals the ensemble average. Therefore, one can divide the data into n sections and treat each section as an independent realization. Most natural processes in general, and climate data in particular, are not stationary; therefore, there is a need for Hilbert spectral analysis for such processes. Here ergodic theory is no longer applicable. We propose to use various adjustable parameters in the sifting process of the Empirical Mode Decomposition (EMD) method to obtain an ensemble of Intrinsic Mode Function (IMF) sets. Based on such an ensemble, we introduce a statistical measure in the form of confidence limits for the Intrinsic Mode Functions and, consequently, the Hilbert spectra. The criterion for selecting the various adjustable parameters is based on the orthogonality test of the resulting IMF sets. Length-of-day data from 1962 to 2001 will be used to illustrate this new approach. Its implications for climate data analysis will also be discussed.
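
    A schematic sketch of the ensemble idea, assuming a user-supplied EMD routine `emd(signal, **params)` (hypothetical here) that returns the same number of IMFs for every sifting-parameter setting; the ensemble mean plus or minus a chosen number of standard deviations serves as the confidence limit.

      import numpy as np

      def imf_confidence_limits(signal, emd, sift_settings, k=1.0):
          """Run the (hypothetical) EMD routine once per sifting-parameter setting,
          stack the resulting IMF sets, and return (lower, upper) confidence limits
          as ensemble mean -/+ k standard deviations for each IMF sample."""
          ensemble = np.array([emd(signal, **params) for params in sift_settings])
          mean_imfs = ensemble.mean(axis=0)
          spread = ensemble.std(axis=0)
          return mean_imfs - k * spread, mean_imfs + k * spread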

  15. Confidence-Based Learning in Investment Analysis

    NASA Astrophysics Data System (ADS)

    Serradell-Lopez, Enric; Lara-Navarra, Pablo; Castillo-Merino, David; González-González, Inés

    The aim of this study is to determine the effectiveness of using multiple-choice tests in subjects related to business administration and management. To this end we used a multiple-choice test with specific questions to verify the extent of knowledge gained and the confidence and trust in the answers. The tests were given to a group of 200 students in the bachelor's degree in Business Administration and Management. The analyses were carried out in one subject within the scope of investment analysis and measured the level of knowledge gained and the degree of trust and security in the responses at two different points in the course. The measurements took into account different levels of difficulty in the questions asked and the time students spent completing the test. The results confirm that students generally gain more knowledge along the way and increase their degree of trust and confidence in their answers. They also confirm that the difficulty levels of the questions, set a priori by the subject leaders, are related to the levels of security and confidence in the answers. It is estimated that the improvement in the skills learned is viewed favourably by businesses and is especially important for the job placement of students.

  16. Image magnification using interval information.

    PubMed

    Jurio, Aranzazu; Pagola, Miguel; Mesiar, Radko; Beliakov, Gleb; Bustince, Humberto

    2011-11-01

    In this paper, a simple and effective image-magnification algorithm based on intervals is proposed. A low-resolution image is magnified to form a high-resolution image using a block-expanding method. Our proposed method associates each pixel with an interval obtained by a weighted aggregation of the pixels in its neighborhood. From the interval and with a linear K(α) operator, we obtain the magnified image. Experimental results show that our algorithm provides a magnified image with better quality (peak signal-to-noise ratio) than several existing methods. PMID:21632304
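
    A coarse sketch of the block-expanding idea only: each pixel's interval is taken here as the min/max of its 3x3 neighbourhood (a crude stand-in for the paper's weighted aggregation), and the linear operator K_alpha([a, b]) = a + alpha*(b - a) fills the magnified block.

      import numpy as np

      def magnify_with_intervals(img, factor=2, alpha=0.5):
          """Interval-based block-expanding magnification (illustrative sketch)."""
          h, w = img.shape
          out = np.zeros((h * factor, w * factor), dtype=float)
          padded = np.pad(img.astype(float), 1, mode="edge")
          for i in range(h):
              for j in range(w):
                  neigh = padded[i:i + 3, j:j + 3]   # 3x3 neighbourhood of pixel (i, j)
                  lo, hi = neigh.min(), neigh.max()  # interval associated with the pixel
                  out[i * factor:(i + 1) * factor,
                      j * factor:(j + 1) * factor] = lo + alpha * (hi - lo)
          return out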

  17. Engineering Student Self-Assessment through Confidence-Based Scoring

    ERIC Educational Resources Information Center

    Yuen-Reed, Gigi; Reed, Kyle B.

    2015-01-01

    A vital aspect of an answer is the confidence that goes along with it. Misstating the level of confidence one has in the answer can have devastating outcomes. However, confidence assessment is rarely emphasized during typical engineering education. The confidence-based scoring method described in this study encourages students to both think about…

  18. TIME-INTERVAL MEASURING DEVICE

    DOEpatents

    Gross, J.E.

    1958-04-15

    An electronic device for measuring the time interval between two control pulses is presented. The device incorporates part of a previous approach for time measurement, in that pulses from a constant-frequency oscillator are counted during the interval between the control pulses. To reduce the possible error in counting caused by the operation of the counter gating circuit at various points in the pulse cycle, the described device provides means for successively delaying the pulses for a fraction of the pulse period so that a final delay of one period is obtained, and means for counting the pulses before and after each stage of delay during the time interval, whereby a plurality of totals is obtained which may be averaged and multiplied by the pulse period to obtain an accurate time-interval measurement.

  19. Simple Interval Timers for Microcomputers.

    ERIC Educational Resources Information Center

    McInerney, M.; Burgess, G.

    1985-01-01

    Discusses simple interval timers for microcomputers, including (1) the Jiffy clock; (2) CPU count timers; (3) screen count timers; (4) light pen timers; and (5) chip timers. Also examines some of the general characteristics of all types of timers. (JN)

  20. European security, nuclear weapons and public confidence

    SciTech Connect

    Gutteridge, W.

    1982-01-01

    This book presents papers on nuclear arms control in Europe. Topics considered include political aspects, the balance of power, nuclear disarmament in Europe, the implications of new conventional technologies, the neutron bomb, theater nuclear weapons, arms control in Northern Europe, naval confidence-building measures in the Baltic, the strategic balance in the Arctic Ocean, Arctic resources, threats to European stability, developments in South Africa, economic cooperation in Europe, European collaboration in science and technology after Helsinki, European cooperation in the area of electric power, and economic cooperation as a factor for the development of European security and cooperation.

  1. Confidence and conflicts of duty in surgery.

    PubMed

    Coggon, John; Wheeler, Robert

    2010-03-01

    This paper offers an exploration of the right to confidentiality, considering the moral importance of private information. It is shown that the legitimate value that individuals derive from confidentiality stems from the public interest. It is reassuring, therefore, that public interest arguments must be made to justify breaches of confidentiality. The General Medical Council's guidance gives very high importance to duties to maintain confidences, but also rightly acknowledges that, at times, there are more important duties that must be met. Nevertheless, this potential conflict of obligations may place the surgeon in difficult clinical situations, and examples of these are described, together with suggestions for resolution. PMID:20353640

  2. Informing Decisions with Climate Information at Different Levels of Confidence

    NASA Astrophysics Data System (ADS)

    Lempert, R. J.; Kalra, N.

    2012-12-01

    As one important purpose, uncertainty quantification aims to provide information in a way that can usefully inform decisions. But many actual decisions may prove sensitive to information at different levels of confidence, which poses challenges for the uncertainty quantification task. For instance, some salient information may be well represented by pdfs while other information may only be supported by a scattering of studies. This talk will demonstrate a decision analytic framework that can usefully employ information at different levels of confidence. The framework is based on the idea of identifying thresholds in various combinations of system properties that would suggest switching from one decision to another, and then gathering scientific evidence relevant to those thresholds that can help decision makers adjudicate their choices. The talk will demonstrate this approach with an example analysis that considers how the Port of Los Angeles might consider the potential for extreme sea level rise in its investment plans. This study uses a robust decision making (RDM) analysis to address two questions: (1) under what future conditions would a Port of Los Angeles decision to harden its facilities against extreme sea level rise at the next upgrade pass a cost-benefit test, and (2) does current science and other available information suggest such conditions are sufficiently likely to justify such an investment? To answer this second question, we use information expressed as a combination of probabilistic climate forecasts, interval probabilities, and non-probabilistic information. We find that a decision to harden at the next upgrade would merit serious consideration for only one of the four Port facilities considered and hardening costs would have to be 5 to 250 times smaller than current estimates to warrant consideration for the other three facilities. This study also compares and contrasts a robust decision making analysis with a full probabilistic analysis. This

  3. Vaccination Confidence and Parental Refusal/Delay of Early Childhood Vaccines

    PubMed Central

    Gilkey, Melissa B.; McRee, Annie-Laurie; Magnus, Brooke E.; Reiter, Paul L.; Dempsey, Amanda F.; Brewer, Noel T.

    2016-01-01

    Objective To support efforts to address parental hesitancy towards early childhood vaccination, we sought to validate the Vaccination Confidence Scale using data from a large, population-based sample of U.S. parents. Methods We used weighted data from 9,354 parents who completed the 2011 National Immunization Survey. Parents reported on the immunization history of a 19- to 35-month-old child in their households. Healthcare providers then verified children’s vaccination status for vaccines including measles, mumps, and rubella (MMR), varicella, and seasonal flu. We used separate multivariable logistic regression models to assess associations between parents’ mean scores on the 8-item Vaccination Confidence Scale and vaccine refusal, vaccine delay, and vaccination status. Results A substantial minority of parents reported a history of vaccine refusal (15%) or delay (27%). Vaccination confidence was negatively associated with refusal of any vaccine (odds ratio [OR] = 0.58, 95% confidence interval [CI], 0.54–0.63) as well as refusal of MMR, varicella, and flu vaccines specifically. Negative associations between vaccination confidence and measures of vaccine delay were more moderate, including delay of any vaccine (OR = 0.81, 95% CI, 0.76–0.86). Vaccination confidence was positively associated with having received vaccines, including MMR (OR = 1.53, 95% CI, 1.40–1.68), varicella (OR = 1.54, 95% CI, 1.42–1.66), and flu vaccines (OR = 1.32, 95% CI, 1.23–1.42). Conclusions Vaccination confidence was consistently associated with early childhood vaccination behavior across multiple vaccine types. Our findings support expanding the application of the Vaccination Confidence Scale to measure vaccination beliefs among parents of young children. PMID:27391098

  4. Towards Measurement of Confidence in Safety Cases

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Pai, Ganesh J.; Habli, Ibrahim

    2011-01-01

    Arguments in safety cases are predominantly qualitative. This is partly attributed to the lack of sufficient design and operational data necessary to measure the achievement of high-dependability targets, particularly for safety-critical functions implemented in software. The subjective nature of many forms of evidence, such as expert judgment and process maturity, also contributes to the overwhelming dependence on qualitative arguments. However, where data for quantitative measurements is systematically collected, quantitative arguments provide far greater benefits than qualitative arguments in assessing confidence in the safety case. In this paper, we propose a basis for developing and evaluating integrated qualitative and quantitative safety arguments based on the Goal Structuring Notation (GSN) and Bayesian Networks (BN). The approach we propose identifies structures within GSN-based arguments where uncertainties can be quantified. BN are then used to provide a means to reason about confidence in a probabilistic way. We illustrate our approach using a fragment of a safety case for an unmanned aerial system and conclude with some preliminary observations.

  5. Diagnosing Anomalous Network Performance with Confidence

    SciTech Connect

    Settlemyer, Bradley W; Hodson, Stephen W; Kuehn, Jeffery A; Poole, Stephen W

    2011-04-01

    Variability in network performance is a major obstacle in effectively analyzing the throughput of modern high performance computer systems. High performance interconnection networks offer excellent best-case network latencies; however, highly parallel applications running on parallel machines typically require consistently high levels of performance to adequately leverage the massive amounts of available computing power. Performance analysts have usually quantified network performance using traditional summary statistics that assume the observational data is sampled from a normal distribution. In our examinations of network performance, we have found this method of analysis often provides too little data to understand anomalous network performance. Our tool, Confidence, instead uses an empirically derived probability distribution to characterize network performance. In this paper we describe several instances where the Confidence toolkit allowed us to understand and diagnose network performance anomalies that we could not adequately explore with the simple summary statistics provided by traditional measurement tools. In particular, we examine a multi-modal performance scenario encountered with an Infiniband interconnection network and we explore the performance repeatability on the custom Cray SeaStar2 interconnection network after a set of software and driver updates.

  6. Demonstrating disease freedom-combining confidence levels.

    PubMed

    Cannon, R M

    2002-01-22

    Part of the requirements for demonstrating disease freedom usually will be that sufficient testing be done to give a specified confidence of detecting the disease if it were present at a specified level. Often, this requirement is translated into a fixed testing regime that must be followed (an inflexible approach that might not be the most economic or practical solution). A more flexible approach is to specify the capabilities of the various tests that can be used to detect the disease, and let the party hoping to demonstrate disease freedom decide upon the testing regime. The question then arises as to how to combine information that can come from a variety of sources over a period of time to give an overall level of confidence. Two methods are given. The first, an exact method based on multiplying probabilities, would be more appropriate for a survey of an area in which no disease is thought to be present. The second method (more appropriate for a herd-assurance program within an infected area) is a point-based system that takes into account the different sensitivities of the methods used to detect disease and the change in prevalence over time. It allocates points for each test done proportional to the sensitivity of the test and the prevalence at the time of testing. PMID:11849719
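
    A minimal sketch of the first ('multiplying probabilities') method, assuming the testing activities are independent and each would detect the disease at the design prevalence with probability c_i; the numbers are hypothetical.

      def combined_confidence(confidences):
          """Overall confidence of detection: one minus the chance that every
          independent testing activity misses the disease."""
          miss = 1.0
          for c in confidences:
              miss *= (1.0 - c)
          return 1.0 - miss

      print(combined_confidence([0.50, 0.60, 0.70]))  # -> 0.94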

  7. Intraclass Correlation Coefficients in Hierarchical Design Studies with Discrete Response Variables: A Note on a Direct Interval Estimation Procedure

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2015-01-01

    A latent variable modeling procedure that can be used to evaluate intraclass correlation coefficients in two-level settings with discrete response variables is discussed. The approach is readily applied when the purpose is to furnish confidence intervals at prespecified confidence levels for these coefficients in setups with binary or ordinal…

  8. Confidence and rejection in automatic speech recognition

    NASA Astrophysics Data System (ADS)

    Colton, Larry Don

    Automatic speech recognition (ASR) is performed imperfectly by computers. For some designated part (e.g., word or phrase) of the ASR output, rejection is deciding (yes or no) whether it is correct, and confidence is the probability (0.0 to 1.0) of it being correct. This thesis presents new methods of rejecting errors and estimating confidence for telephone speech. These are also called word or utterance verification and can be used in wordspotting or voice-response systems. Open-set or out-of-vocabulary situations are a primary focus. Language models are not considered. In vocabulary-dependent rejection all words in the target vocabulary are known in advance and a strategy can be developed for confirming each word. A word-specific artificial neural network (ANN) is shown to discriminate well, and scores from such ANNs are shown on a closed-set recognition task to reorder the N-best hypothesis list (N=3) for improved recognition performance. Segment-based duration and perceptual linear prediction (PLP) features are shown to perform well for such ANNs. The majority of the thesis concerns vocabulary- and task-independent confidence and rejection based on phonetic word models. These can be computed for words even when no training examples of those words have been seen. New techniques are developed using phoneme ranks instead of probabilities in each frame. These are shown to perform as well as the best other methods examined despite the data reduction involved. Certain new weighted averaging schemes are studied but found to give no performance benefit. Hierarchical averaging is shown to improve performance significantly: frame scores combine to make segment (phoneme state) scores, which combine to make phoneme scores, which combine to make word scores. Use of intermediate syllable scores is shown to not affect performance. Normalizing frame scores by an average of the top probabilities in each frame is shown to improve performance significantly. Perplexity of the wrong
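
    A toy sketch of the hierarchical averaging described above, with hypothetical frame scores: frame scores average into segment (phoneme-state) scores, segments into phoneme scores, and phonemes into a word score.

      def average(xs):
          return sum(xs) / len(xs)

      def word_confidence(phonemes):
          """phonemes: list of phonemes, each a list of segments, each a list of frame scores."""
          phoneme_scores = []
          for segments in phonemes:
              segment_scores = [average(frames) for frames in segments]
              phoneme_scores.append(average(segment_scores))
          return average(phoneme_scores)

      # two phonemes, each with two segments of per-frame scores
      print(word_confidence([[[0.90, 0.80], [0.70, 0.75]], [[0.60, 0.65], [0.80, 0.85]]]))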

  9. Sample sizes for confidence limits for reliability.

    SciTech Connect

    Darby, John L.

    2010-02-01

    We recently performed an evaluation of the implications of a reduced stockpile of nuclear weapons for surveillance to support estimates of reliability. We found that one technique developed at Sandia National Laboratories (SNL) under-estimates the required sample size for systems-level testing. For a large population the discrepancy is not important, but for a small population it is important. We found that another technique used by SNL provides the correct required sample size. For systems-level testing of nuclear weapons, samples are selected without replacement, and the hypergeometric probability distribution applies. Both of the SNL techniques focus on samples without defects from sampling without replacement. We generalized the second SNL technique to cases with defects in the sample. We created a computer program in Mathematica to automate the calculation of confidence for reliability. We also evaluated sampling with replacement where the binomial probability distribution applies.
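
    Neither SNL technique is reproduced in the abstract, but the flavour of the calculation can be sketched under simple assumptions (zero observed defects; a binomial model for a large population, a hypergeometric model for a small population sampled without replacement); the numbers are hypothetical.

      import math
      from scipy.stats import hypergeom

      def binomial_zero_failure_n(reliability, confidence):
          """Sample size needed to claim reliability >= R with confidence C when
          no failures are observed (large-population / binomial model)."""
          return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

      def hypergeometric_confidence(pop_size, sample_size, max_defects):
          """Confidence that at most max_defects units in a population of pop_size
          are defective, given zero defects in a sample drawn without replacement:
          one minus the chance of seeing zero defects if max_defects + 1 were present."""
          return 1.0 - hypergeom.pmf(0, pop_size, max_defects + 1, sample_size)

      print(binomial_zero_failure_n(0.95, 0.90))   # about 45 units
      print(hypergeometric_confidence(50, 20, 2))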

  10. High resolution time interval meter

    DOEpatents

    Martin, A.D.

    1986-05-09

    Method and apparatus are provided for measuring the time interval between two events to a higher resolution than is reliably available from conventional circuits and components. An internal clock pulse is provided at a frequency compatible with conventional component operating frequencies for reliable operation. Lumped constant delay circuits are provided for generating outputs at delay intervals corresponding to the desired high resolution. An initiation START pulse is input to generate first high resolution data. A termination STOP pulse is input to generate second high resolution data. Internal counters count at the low frequency internal clock pulse rate between the START and STOP pulses. The first and second high resolution data are logically combined to directly provide high resolution data to one counter and correct the count in the low resolution counter to obtain a high resolution time interval measurement.

  11. The 2009 Retirement Confidence Survey: economy drives confidence to record lows; many looking to work longer.

    PubMed

    Helman, Ruth; Copeland, Craig; VanDerhei, Jack

    2009-04-01

    RECORD LOW CONFIDENCE LEVELS: Workers who say they are very confident about having enough money for a comfortable retirement this year hit the lowest level in 2009 (13 percent) since the Retirement Confidence Survey started asking the question in 1993, continuing a two-year decline. Retirees also posted a new low in confidence about having a financially secure retirement, with only 20 percent now saying they are very confident (down from 41 percent in 2007). THE ECONOMY, INFLATION, COST OF LIVING ARE THE BIG CONCERNS: Not surprisingly, workers overall who have lost confidence over the past year about affording a comfortable retirement most often cite the recent economic uncertainty, inflation, and the cost of living as primary factors. In addition, certain negative experiences, such as job loss or a pay cut, loss of retirement savings, or an increase in debt, almost always contribute to loss of confidence among those who experience them. RETIREMENT EXPECTATIONS DELAYED: Workers apparently expect to work longer because of the economic downturn: 28 percent of workers in the 2009 RCS say the age at which they expect to retire has changed in the past year. Of those, the vast majority (89 percent) say that they have postponed retirement with the intention of increasing their financial security. Nevertheless, the median (mid-point) worker expects to retire at age 65, with 21 percent planning to push on into their 70s. The median retiree actually retired at age 62, and 47 percent of retirees say they retired sooner than planned. WORKING IN RETIREMENT: More workers are also planning to supplement their income in retirement by working for pay. The percentage of workers planning to work after they retire has increased to 72 percent in 2009 (up from 66 percent in 2007). This compares with 34 percent of retirees who report they actually worked for pay at some time during their retirement. GREATER WORRY ABOUT BASIC AND HEALTH EXPENSES: Workers who say they very confident in

  12. Updating representations of temporal intervals.

    PubMed

    Danckert, James; Anderson, Britt

    2015-12-01

    Effectively engaging with the world depends on accurate representations of the regularities that make up that world-what we call mental models. The success of any mental model depends on the ability to adapt to changes-to 'update' the model. In prior work, we have shown that damage to the right hemisphere of the brain impairs the ability to update mental models across a range of tasks. Given the disparate nature of the tasks we have employed in this prior work (i.e. statistical learning, language acquisition, position priming, perceptual ambiguity, strategic game play), we propose that a cognitive module important for updating mental representations should be generic, in the sense that it is invoked across multiple cognitive and perceptual domains. To date, the majority of our tasks have been visual in nature. Given the ubiquity and import of temporal information in sensory experience, we examined the ability to build and update mental models of time. We had healthy individuals complete a temporal prediction task in which intervals were initially drawn from one temporal range before an unannounced switch to a different range of intervals. Separate groups had the second range of intervals switch to one that contained either longer or shorter intervals than the first range. Both groups showed significant positive correlations between perceptual and prediction accuracy. While each group updated mental models of temporal intervals, those exposed to shorter intervals did so more efficiently. Our results support the notion of generic capacity to update regularities in the environment-in this instance based on temporal information. The task developed here is well suited to investigations in neurological patients and in neuroimaging settings. PMID:26303026

  13. Parameter Interval Estimation of System Reliability for Repairable Multistate Series-Parallel System with Fuzzy Data

    PubMed Central

    2014-01-01

    The purpose of this paper is to create an interval estimation of the fuzzy system reliability for the repairable multistate series-parallel system (RMSS). A two-sided fuzzy confidence interval for the fuzzy system reliability is constructed. The performance of the fuzzy confidence interval is considered based on the coverage probability and the expected length. In order to obtain the fuzzy system reliability, fuzzy set theory is applied to the system reliability problem when dealing with uncertainties in the RMSS. A fuzzy number with a triangular membership function is used for constructing the fuzzy failure rate and the fuzzy repair rate in the fuzzy reliability for the RMSS. The results show that a good interval estimator for the fuzzy confidence interval is one whose obtained coverage probability attains the expected confidence coefficient with the narrowest expected length. The model presented herein is an effective estimation method when the sample size is n ≥ 100. In addition, the optimal α-cut for the narrowest lower expected length and the narrowest upper expected length is considered. PMID:24987728
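
    As a small sketch of the machinery involved (not the paper's RMSS model), an alpha-cut turns a triangular fuzzy rate into an interval, and monotonicity lets that interval be pushed through a simple steady-state availability formula mu/(mu + lambda); the rates below are hypothetical.

      def alpha_cut(a, m, b, alpha):
          """Interval [lo, hi] of the triangular fuzzy number (a, m, b) at level alpha."""
          return a + alpha * (m - a), b - alpha * (b - m)

      lam_lo, lam_hi = alpha_cut(0.01, 0.02, 0.03, alpha=0.5)  # fuzzy failure rate
      mu_lo, mu_hi = alpha_cut(0.40, 0.50, 0.60, alpha=0.5)    # fuzzy repair rate

      # availability mu/(mu + lambda) is increasing in mu and decreasing in lambda
      avail_lo = mu_lo / (mu_lo + lam_hi)
      avail_hi = mu_hi / (mu_hi + lam_lo)
      print(avail_lo, avail_hi)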

  14. Uniform Continuity on Unbounded Intervals

    ERIC Educational Resources Information Center

    Pouso, Rodrigo Lopez

    2008-01-01

    We present a teaching approach to uniform continuity on unbounded intervals which, hopefully, may help to meet the following pedagogical objectives: (i) To provide students with efficient and simple criteria to decide whether a continuous function is also uniformly continuous; and (ii) To provide students with skill to recognize graphically…

  15. Mathematical Foundations for a Theory of Confidence Structures

    PubMed Central

    Balch, Michael Scott

    2012-01-01

    This paper introduces a new mathematical object: the confidence structure. A confidence structure represents inferential uncertainty in an unknown parameter by defining a belief function whose output is commensurate with Neyman-Pearson confidence. Confidence structures on a group of input variables can be propagated through a function to obtain a valid confidence structure on the output of that function. The theory of confidence structures is created by enhancing the extant theory of confidence distributions with the mathematical generality of Dempster-Shafer evidence theory. Mathematical proofs grounded in random set theory demonstrate the operative properties of confidence structures. The result is a new theory which achieves the holistic goals of Bayesian inference while maintaining the empirical rigor of frequentist inference. PMID:25190904

  16. A confidence parameter for seismic moment tensors

    NASA Astrophysics Data System (ADS)

    Tape, Walter; Tape, Carl

    2016-02-01

    Given a moment tensor m inferred from seismic data for an earthquake, we define P(V) to be the probability that the true moment tensor for the earthquake lies in the neighborhood of m that has fractional volume V. The average value of P(V) is then a measure of our confidence in m. The calculation of P(V) requires knowing both the probability P̂(ω) and the fractional volume V̂(ω) of the set of moment tensors within a given angular radius ω of m. We explain how to construct P̂(ω) from a misfit function derived from seismic data, and we show how to calculate V̂(ω), which depends on the set M of moment tensors under consideration. The two most important instances of M are where M is the set of all moment tensors of fixed norm, and where M is the set of all double couples of fixed norm.

  17. Germany and America: Crisis of confidence

    SciTech Connect

    Asmus, R.D.

    1991-02-01

    The paper examines the deterioration in German-American relations. The reasons for this downturn in German-American relations are quite simple. Washington views the Persian Gulf crisis as a defining moment in European-American relations and in the creation of a new world order. It is also the first diplomatic test of a unified Germany and a new German-American relationship. It is a test that Germany is thus far seen as having failed for three reasons. First, from the outset many Americans sensed that Germans did not comprehend what this crisis meant for the United States. A second and, in many ways, more worrying factor was the growing sense that the Germans were not being good Europeans. The third and most serious American concern, however, was the unsettling appearance of a very selective German definition of collective defense and common security. The result has been a crisis of confidence in the performance of the German political elite that goes beyond the problems in German-American relations during the early 1980s and the INF debate.

  18. Modal confidence factor in vibration testing

    NASA Technical Reports Server (NTRS)

    Ibrahim, S. R.

    1978-01-01

    The theory and applications of a time domain modal test technique are presented. The method uses free decay of random responses from a structure under test to identify its modal characteristics, namely natural frequencies, damping factors, and mode shapes. The method can identify multimodal (highly coupled) systems and modes that have very small contributions to the responses. A method is presented to decrease the effects of high levels of noise in the data and thus improve the accuracy of identified parameters. This is accomplished using an oversized mathematical model. The concept of modal confidence factor (MCF) is developed. The MCF is a number calculated for every identified mode for a structure under test. The MCF varies from 0.000 for a distorted, nonlinear, or noise mode to 100.0 for a pure structural mode. The theory of the MCF is based on the correlation that exists between the modal deflection at a certain station and the modal deflection at the same station delayed in time. The theory and application of the MCF are illustrated by two experiments. The first experiment deals with simulated responses from a two-degree-of-freedom system with 20 percent, 40 percent, and 100 percent noise added. The second experiment was run on a generalized payload model. The free decay response from the payload model contained about 22 percent noise.
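
    A heavily simplified sketch of the correlation idea behind the MCF (not Ibrahim's exact formula): for a true structural mode with eigenvalue lambda, the modal vector identified from responses delayed by dt should equal the undelayed vector scaled by exp(lambda*dt), so the magnitude of their normalized correlation can be scored from 0 to 100.

      import numpy as np

      def modal_confidence_factor(phi, phi_delayed, eigenvalue, dt):
          """Compare the identified modal vector from time-delayed responses with the
          value expected for a genuine mode, phi * exp(eigenvalue * dt); return a
          0-100 score from the magnitude of their normalized (complex) correlation."""
          expected = phi * np.exp(eigenvalue * dt)
          corr = np.vdot(expected, phi_delayed) / (
              np.linalg.norm(expected) * np.linalg.norm(phi_delayed))
          return 100.0 * abs(corr)

      # a noisy but genuine mode scores near 100; a noise mode drifts toward 0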

  19. Towards better counselling. Keeping confidences. Training activities.

    PubMed

    1996-01-01

    Presented are two training exercises for health personnel who counsel individuals about the results of blood tests for human immunodeficiency virus (HIV). The first exercise is preceded by remarks on the importance of trust and confidentiality in the clinical encounter. Then, participants are divided into pairs and instructed to think of a person they trust and to list 10 characteristics of that person. These attributes are compiled for the entire group. Next, small groups of 3-4 participants discuss the following questions: What do you need to say and do when you are counseling someone to help them have confidence in you? What do you need to do to enable them to keep trusting you? What might happen when confidentiality is broken? What are the benefits of maintaining confidentiality? Finally, the small groups are given case scenarios of breaches of client confidentiality and asked to imagine both how they would feel in such a situation and how it could have been prevented. The second exercise seeks to increase counselors' understanding of clients' risk-taking behaviors and their ability to suspend personal judgment by having them describe incidents from their own lives when they took a risk related to sex, relationships, or money. PMID:12291931

  20. A confidence parameter for seismic moment tensors

    NASA Astrophysics Data System (ADS)

    Tape, Walter; Tape, Carl

    2016-05-01

    Given a moment tensor m inferred from seismic data for an earthquake, we define P(V) to be the probability that the true moment tensor for the earthquake lies in the neighbourhood of m that has fractional volume V. The average value of P(V) is then a measure of our confidence in m. The calculation of P(V) requires knowing both the probability P̂(ω) and the fractional volume V̂(ω) of the set of moment tensors within a given angular radius ω of m. We explain how to construct P̂(ω) from a misfit function derived from seismic data, and we show how to calculate V̂(ω), which depends on the set M of moment tensors under consideration. The two most important instances of M are where M is the set of all moment tensors of fixed norm, and where M is the set of all double couples of fixed norm.

  1. Assessing Undergraduate Students' Conceptual Understanding and Confidence of Electromagnetics

    ERIC Educational Resources Information Center

    Leppavirta, Johanna

    2012-01-01

    The study examines how students' conceptual understanding changes from high confidence with incorrect conceptions to high confidence with correct conceptions when reasoning about electromagnetics. The Conceptual Survey of Electricity and Magnetism test is weighted with students' self-rated confidence on each item in order to infer how strongly…

  2. Girls and Women, Sport, and Self-Confidence.

    ERIC Educational Resources Information Center

    Lirgg, Cathy D.

    1992-01-01

    Analyzes research on females' self-confidence in sport and physical activity. The article compares three models that link confidence to achievement, examines research variables that may influence female self-confidence, discusses sex differences, and offers enhancement strategies and future research directions. (SM)

  3. Does Consumer Confidence Measure Up to the Hype?

    ERIC Educational Resources Information Center

    Griffitts, Dawn

    2003-01-01

    This economic education publication features an article, "Does Consumer Confidence Measure Up to the Hype?," which defines consumer confidence and describes how it is measured. The article also explores why people might pay so much attention to consumer confidence indexes. The document also contains a question and answer section about deflation as…

  4. Contrasting Academic Behavioural Confidence in Mexican and European Psychology Students

    ERIC Educational Resources Information Center

    Ochoa, Alma Rosa Aguila; Sander, Paul

    2012-01-01

    Introduction: Research with the Academic Behavioural Confidence scale using European students has shown that students have high levels of confidence in their academic abilities. It is generally accepted that people in more collectivist cultures have more realistic confidence levels in contrast to the overconfidence seen in individualistic European…

  5. Predicting Postfeedback Performance from Students' Confidence in Their Responses.

    ERIC Educational Resources Information Center

    Bender, Timothy A.

    The model of feedback processing proposed by R. W. Kulhavy and W. A. Stock (1989) was studied in a traditional classroom setting in which methods of assessing students' response confidence as predictors of postfeedback performance were also examined. The relationship between confidence ratings at the time of the test and confidence assessed prior…

  6. 49 CFR 1103.23 - Confidences of a client.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    49 CFR § 1103.23, Responsibilities Toward a Client: Confidences of a client. (a) The practitioner's duty to preserve his client's confidence outlasts the practitioner's employment by the client, and this duty extends to...

  7. The 2012 Retirement Confidence Survey: job insecurity, debt weigh on retirement confidence, savings.

    PubMed

    Helman, Ruth; Copeland, Craig; VanDerhei, Jack

    2012-03-01

    Americans' confidence in their ability to retire comfortably is stagnant at historically low levels. Just 14 percent are very confident they will have enough money to live comfortably in retirement (statistically equivalent to the low of 13 percent measured in 2011 and 2009). Employment insecurity looms large: Forty-two percent identify job uncertainty as the most pressing financial issue facing most Americans today. Worker confidence about having enough money to pay for medical expenses and long-term care expenses in retirement remains well below their confidence levels for paying basic expenses. Many workers report they have virtually no savings and investments. In total, 60 percent of workers report that the total value of their household's savings and investments, excluding the value of their primary home and any defined benefit plans, is less than $25,000. Twenty-five percent of workers in the 2012 Retirement Confidence Survey say the age at which they expect to retire has changed in the past year. In 1991, 11 percent of workers said they expected to retire after age 65, and by 2012 that has grown to 37 percent. Regardless of those retirement age expectations, and consistent with prior RCS findings, half of current retirees surveyed say they left the work force unexpectedly due to health problems, disability, or changes at their employer, such as downsizing or closure. Those already in retirement tend to express higher levels of confidence than current workers about several key financial aspects of retirement. Retirees report they are significantly more reliant on Social Security as a major source of their retirement income than current workers expect to be. Although 56 percent of workers expect to receive benefits from a defined benefit plan in retirement, only 33 percent report that they and/or their spouse currently have such a benefit with a current or previous employer. More than half of workers (56 percent) report they and/or their spouse have not tried

  8. Happiness Scale Interval Study. Methodological Considerations.

    PubMed

    Kalmijn, W M; Arends, L R; Veenhoven, R

    2011-07-01

    as a beta distribution on the interval [0,10] with two shape parameters (α and β). From their estimates on the basis of the primary information, the mean value and the variance of the happiness distribution in the population can be estimated. An illustration is given in which the method is applied to existing measurement results of 20 surveys in The Netherlands in the period 1990-2008. The results support our recommendation to apply the model with a uniform distribution within each of the category intervals, despite the better validity of the beta-distribution alternative. The reason is that the recommended model allows one to construct a confidence interval for the true but unknown population happiness distribution. The paper ends with a listing of actual and potential merits of this approach, which has been described here for verbal happiness questions, but which is also applicable to phenomena that are measured along similar lines. PMID:21765582
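
    The beta-distribution step mentioned above reduces to two closed-form moments once the shape parameters are known. The sketch below assumes the two shape parameters (alpha, beta) have already been estimated from the primary survey information; the numerical values are hypothetical.

      # Mean and variance of a beta distribution rescaled to the happiness interval [0, 10].
      alpha, beta = 4.2, 1.8   # hypothetical shape parameters
      scale = 10.0             # happiness is modeled on [0, 10]

      mean_happiness = scale * alpha / (alpha + beta)
      var_happiness = scale**2 * alpha * beta / ((alpha + beta)**2 * (alpha + beta + 1.0))

      print(f"estimated mean happiness: {mean_happiness:.2f}")
      print(f"estimated variance:       {var_happiness:.2f}")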

  9. Influences of the Tamarisk Leaf Beetle (Diorhabda carinulata) on the diet of insectivorous birds along the Dolores River in Southwestern Colorado

    USGS Publications Warehouse

    Puckett, Sarah L.; van Riper, Charles, III

    2014-01-01

    We examined the effects of a biologic control agent, the tamarisk leaf beetle (Diorhabda carinulata), on native avifauna in southwestern Colorado, specifically, addressing whether and to what degree birds eat tamarisk leaf beetles. In 2010, we documented avian foraging behavior, characterized the arthropod community, sampled bird diets, and undertook an experiment to determine whether tamarisk leaf beetles are palatable to birds. We observed that tamarisk leaf beetles compose 24.0 percent (95-percent-confidence interval, 19.9-27.4 percent) and 35.4 percent (95-percent-confidence interval, 32.4-45.1 percent) of arthropod abundance and biomass in the study area, respectively. Birds ate few tamarisk leaf beetles, despite a superabundance of D. carinulata in the environment. The frequency of occurrence of tamarisk leaf beetles in bird diets was 2.1 percent (95-percent-confidence interval, 1.3-2.9 percent) by abundance and 3.4 percent (95-percent-confidence interval, 2.6-4.2 percent) by biomass. Thus, tamarisk leaf beetles probably do not contribute significantly to the diets of birds in areas where biologic control of tamarisk is being applied.

  10. Standard Errors and Confidence Intervals from Bootstrapping for Ramsay-Curve Item Response Theory Model Item Parameters

    ERIC Educational Resources Information Center

    Gu, Fei; Skorupski, William P.; Hoyle, Larry; Kingston, Neal M.

    2011-01-01

    Ramsay-curve item response theory (RC-IRT) is a nonparametric procedure that estimates the latent trait using splines, and no distributional assumption about the latent trait is required. For item parameters of the two-parameter logistic (2-PL), three-parameter logistic (3-PL), and polytomous IRT models, RC-IRT can provide more accurate estimates…

  11. Factorial Based Response Surface Modeling with Confidence Intervals for Optimizing Thermal Optical Transmission Analysis of Atmospheric Black Carbon

    EPA Science Inventory

    We demonstrate how thermal-optical transmission analysis (TOT) for refractory light-absorbing carbon in atmospheric particulate matter was optimized with empirical response surface modeling. TOT employs pyrolysis to distinguish the mass of black carbon (BC) from organic carbon (...

  12. A Direct Method for Obtaining Approximate Standard Error and Confidence Interval of Maximal Reliability for Composites with Congeneric Measures

    ERIC Educational Resources Information Center

    Raykov, Tenko; Penev, Spiridon

    2006-01-01

    Unlike a substantial part of reliability literature in the past, this article is concerned with weighted combinations of a given set of congeneric measures with uncorrelated errors. The relationship between maximal coefficient alpha and maximal reliability for such composites is initially dealt with, and it is shown that the former is a lower…

  13. Confidence Intervals and "F" Tests for Intraclass Correlation Coefficients Based on Three-Way Mixed Effects Models

    ERIC Educational Resources Information Center

    Zhou, Hong; Muellerleile, Paige; Ingram, Debra; Wong, Seok P.

    2011-01-01

    Intraclass correlation coefficients (ICCs) are commonly used in behavioral measurement and psychometrics when a researcher is interested in the relationship among variables of a common class. The formulas for deriving ICCs, or generalizability coefficients, vary depending on which models are specified. This article gives the equations for…

  14. Meta-analysis to refine map position and reduce confidence intervals for delayed canopy wilting QTLs in soybean

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Slow canopy wilting in soybean has been identified as a potentially beneficial trait for ameliorating drought effects on yield. Previous research identified QTLs for slow wilting from two different bi-parental populations and this information was combined with data from three other populations to id...

  15. Guide for Calculating and Interpreting Effect Sizes and Confidence Intervals in Intellectual and Developmental Disability Research Studies

    ERIC Educational Resources Information Center

    Dunst, Carl J.; Hamby, Deborah W.

    2012-01-01

    This paper includes a nontechnical description of methods for calculating effect sizes in intellectual and developmental disability studies. Different hypothetical studies are used to illustrate how null hypothesis significance testing (NHST) and effect size findings can result in quite different outcomes and therefore conflicting results. Whereas…

  16. Robust Coefficients Alpha and Omega and Confidence Intervals with Outlying Observations and Missing Data: Methods and Software

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Yuan, Ke-Hai

    2016-01-01

    Cronbach's coefficient alpha is a widely used reliability measure in social, behavioral, and education sciences. It is reported in nearly every study that involves measuring a construct through multiple items. With non-tau-equivalent items, McDonald's omega has been used as a popular alternative to alpha in the literature. Traditional estimation…

  17. On how the brain decodes vocal cues about speaker confidence.

    PubMed

    Jiang, Xiaoming; Pell, Marc D

    2015-05-01

    In speech communication, listeners must accurately decode vocal cues that refer to the speaker's mental state, such as their confidence or 'feeling of knowing'. However, the time course and neural mechanisms associated with online inferences about speaker confidence are unclear. Here, we used event-related potentials (ERPs) to examine the temporal neural dynamics underlying a listener's ability to infer speaker confidence from vocal cues during speech processing. We recorded listeners' real-time brain responses while they evaluated statements wherein the speaker's tone of voice conveyed one of three levels of confidence (confident, close-to-confident, unconfident) or were spoken in a neutral manner. Neural responses time-locked to event onset show that the perceived level of speaker confidence could be differentiated at distinct time points during speech processing: unconfident expressions elicited a weaker P2 than all other expressions of confidence (or neutral-intending utterances), whereas close-to-confident expressions elicited a reduced negative response in the 330-500 msec and 550-740 msec time window. Neutral-intending expressions, which were also perceived as relatively confident, elicited a more delayed, larger sustained positivity than all other expressions in the 980-1270 msec window for this task. These findings provide the first piece of evidence of how quickly the brain responds to vocal cues signifying the extent of a speaker's confidence during online speech comprehension; first, a rough dissociation between unconfident and confident voices occurs as early as 200 msec after speech onset. At a later stage, further differentiation of the exact level of speaker confidence (i.e., close-to-confident, very confident) is evaluated via an inferential system to determine the speaker's meaning under current task settings. These findings extend three-stage models of how vocal emotion cues are processed in speech comprehension (e.g., Schirmer & Kotz, 2006) by

  18. Fourier Analysis of Musical Intervals

    NASA Astrophysics Data System (ADS)

    LoPresto, Michael C.

    2008-11-01

    Use of a microphone attached to a computer to capture musical sounds and software to display their waveforms and harmonic spectra has become somewhat commonplace. A recent article in The Physics Teacher aptly demonstrated the use of MacScope in just such a manner as a way to teach Fourier analysis. A logical continuation of this project is to use MacScope not just to analyze the Fourier composition of musical tones but also musical intervals.

  19. Relating confidence to information uncertainty in qualitative reasoning

    SciTech Connect

    Chavez, Gregory M; Zerkle, David K; Key, Brian P; Shevitz, Daniel W

    2010-12-02

    Qualitative reasoning makes use of qualitative assessments provided by subject matter experts to model factors such as security risk. Confidence in a result is important and useful when comparing competing security risk results. Quantifying the confidence in an evidential reasoning result must be consistent and based on the available information. A novel method is proposed to determine a qualitative measure of confidence in a qualitative reasoning result from the available information uncertainty in the result using membership values in the fuzzy sets of confidence. In this study, information uncertainty is quantified through measures of non-specificity and conflict. Fuzzy values for confidence are established from information uncertainty values that lie between the measured minimum and maximum information uncertainty values. Measured values of information uncertainty in each result are used to obtain the confidence. The determined confidence values are used to compare competing scenarios and understand the influences on the desired result.
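
    As a rough illustration of the mapping described above, the sketch below assigns membership values in three fuzzy confidence sets from a measured information uncertainty that lies between the minimum and maximum observed values. The triangular three-set partition and all numerical values are assumptions made for illustration; the record does not specify the exact membership functions used.

      def triangular(x, a, b, c):
          # Triangular membership function on [a, c] with peak at b.
          return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

      def confidence_memberships(u, u_min, u_max):
          # Normalize the measured information uncertainty to [0, 1], then read off
          # memberships in 'high', 'medium', and 'low' confidence fuzzy sets
          # (low uncertainty maps to high confidence).
          x = (u - u_min) / (u_max - u_min)
          return {
              "high confidence":   triangular(x, -0.5, 0.0, 0.5),
              "medium confidence": triangular(x,  0.0, 0.5, 1.0),
              "low confidence":    triangular(x,  0.5, 1.0, 1.5),
          }

      print(confidence_memberships(u=0.3, u_min=0.1, u_max=0.9))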

  20. Effects of postidentification feedback on eyewitness identification and nonidentification confidence.

    PubMed

    Semmler, Carolyn; Brewer, Neil; Wells, Gary L

    2004-04-01

    Two experiments investigated new dimensions of the effect of confirming feedback on eyewitness identification confidence using target-absent and target-present lineups and (previously unused) unbiased witness instructions (i.e., "offender not present" option highlighted). In Experiment 1, participants viewed a crime video and were later asked to try to identify the thief from an 8-person target-absent photo array. Feedback inflated witness confidence for both mistaken identifications and correct lineup rejections. With target-present lineups in Experiment 2, feedback inflated confidence for correct and mistaken identifications and lineup rejections. Although feedback had no influence on the confidence-accuracy correlation, it produced clear overconfidence. Confidence inflation varied with the confidence measure reference point (i.e., retrospective vs. current confidence) and identification response latency. PMID:15065979

  1. Confidence through consensus: a neural mechanism for uncertainty monitoring

    PubMed Central

    Paz, Luciano; Insabato, Andrea; Zylberberg, Ariel; Deco, Gustavo; Sigman, Mariano

    2016-01-01

    Models that integrate sensory evidence to a threshold can explain task accuracy, response times and confidence, yet it is still unclear how confidence is encoded in the brain. Classic models assume that confidence is encoded in some form of balance between the evidence integrated in favor of and against the selected option. However, recent experiments that measure the sensory evidence’s influence on choice and confidence contradict these classic models. We propose that the decision is taken by many loosely coupled modules, each of which represents a stochastic sample of the sensory evidence integral. Confidence is then encoded in the dispersion between modules. We show that our proposal can account for the well-established relations between confidence, stimulus discriminability, and reaction times, as well as the fluctuations' influence on choice and confidence. PMID:26907162

  2. An Event Restriction Interval Theory of Tense

    ERIC Educational Resources Information Center

    Beamer, Brandon Robert

    2012-01-01

    This dissertation presents a novel theory of tense and tense-like constructions. It is named after a key theoretical component of the theory, the event restriction interval. In Event Restriction Interval (ERI) Theory, sentences are semantically evaluated relative to an index which contains two key intervals, the evaluation interval and the event…

  3. Interval Estimation of Standardized Mean Differences in Paired-Samples Designs

    ERIC Educational Resources Information Center

    Bonett, Douglas G.

    2015-01-01

    Paired-samples designs are used frequently in educational and behavioral research. In applications where the response variable is quantitative, researchers are encouraged to supplement the results of a paired-samples t-test with a confidence interval (CI) for a mean difference or a standardized mean difference. Six CIs for standardized mean…

  4. Measuring Patterns of Surgeon Confidence Using a Novel Assessment Tool.

    PubMed

    Farrell, Timothy M; Ghaderi, Iman; McPhail, Lindsee E; Alger, Amy R; Meyers, Michael O; Meyer, Anthony A

    2016-01-01

    Confidence should increase during surgical training and practice. However, few data exist regarding confidence of surgeons across this continuum. Confidence may develop differently in clinical and personal domains, or may erode as specialization or age restricts practice. A reliable scale of confidence is needed to track this competency. A novel survey was distributed to surgeons in private and academic settings. One hundred and thirty-four respondents completed this cross-sectional survey. Surgeons reported anticipated reactions to clinical scenarios within three patient care domains (acute inpatient, nonacute inpatient, and outpatient) and in personal spheres. Confidence scores were plotted against years of experience. Curves of best fit were generated and trends assessed. A subgroup completed a second survey after four years to assess the survey's reliability over time. During residency, there is steep improvement in confidence reported by surgeons in all clinical domains, with further increase for inpatient domains during transition into practice. Confidence in personal spheres also increases quickly during residency and thereafter. The surgeon confidence scale captures the expected acquisition of confidence during early surgical experience, and will have value in following trends in surgeon confidence as training and practice patterns change. PMID:26802851

  5. Cancer mortality in workers exposed to 2,3,7,8-tetrachlorodibenzo-p-dioxin

    SciTech Connect

    Fingerhut, M.A.; Halperin, W.E.; Marlow, D.A.; Piacitelli, L.A.; Honchar, P.A.; Sweeney, M.H.; Greife, A.L.; Dill, P.A.; Steenland, K.; Suruda, A.J. )

    1991-01-24

    In both animal and epidemiologic studies, exposure to dioxin (2,3,7,8-tetrachlorodibenzo-p-dioxin, or TCDD) has been associated with an increased risk of cancer. We conducted a retrospective cohort study of mortality among the 5172 workers at 12 plants in the United States that produced chemicals contaminated with TCDD. Occupational exposure was documented by reviewing job descriptions and by measuring TCDD in serum from a sample of 253 workers. Causes of death were taken from death certificates. Mortality from several cancers previously associated with TCDD (stomach, liver, and nasal cancers, Hodgkin's disease, and non-Hodgkin's lymphoma) was not significantly elevated in this cohort. Mortality from soft-tissue sarcoma was increased, but not significantly (4 deaths; standardized mortality ratio (SMR), 338; 95 percent confidence interval, 92 to 865). In the subcohort of 1520 workers with greater than or equal to 1 year of exposure and greater than or equal to 20 years of latency, however, mortality was significantly increased for soft-tissue sarcoma (3 deaths; SMR, 922; 95 percent confidence interval, 190 to 2695) and for cancers of the respiratory system (SMR, 142; 95 percent confidence interval, 103 to 192). Mortality from all cancers combined was slightly but significantly elevated in the overall cohort (SMR, 115; 95 percent confidence interval, 102 to 130) and was higher in the subcohort with greater than or equal to 1 year of exposure and greater than or equal to 20 years of latency (SMR, 146; 95 percent confidence interval, 121 to 176). This study of mortality among workers with occupational exposure to TCDD does not confirm the high relative risks reported for many cancers in previous studies. Conclusions about an increase in the risk of soft-tissue sarcoma are limited by small numbers and misclassification on death certificates.
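
    Intervals like "92 to 865" for 4 observed deaths can be reproduced with the exact Poisson (chi-square-based) formula commonly used for standardized mortality ratios. The sketch below is that generic textbook calculation, not the paper's own computation, and the expected count is back-calculated from the reported SMR of 338, so it is an assumption rather than a figure from the abstract.

      from scipy.stats import chi2

      def smr_exact_ci(observed, expected, level=0.95):
          # Exact Poisson confidence interval for an SMR, expressed per 100.
          alpha = 1.0 - level
          lower = chi2.ppf(alpha / 2.0, 2 * observed) / 2.0 if observed > 0 else 0.0
          upper = chi2.ppf(1.0 - alpha / 2.0, 2 * (observed + 1)) / 2.0
          return 100.0 * lower / expected, 100.0 * upper / expected

      # 4 soft-tissue sarcoma deaths; expected count inferred from SMR = 338.
      print(smr_exact_ci(4, 4 / 3.38))   # roughly (92, 865)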

  6. Reflex Project: Using Model-Data Fusion to Characterize Confidence in Analyses and Forecasts of Terrestrial C Dynamics

    NASA Astrophysics Data System (ADS)

    Fox, A. M.; Williams, M.; Richardson, A.; Cameron, D.; Gove, J. H.; Ricciuto, D. M.; Tomalleri, E.; Trudinger, C.; van Wijk, M.; Quaife, T.; Li, Z.

    2008-12-01

    The Regional Flux Estimation Experiment, REFLEX, is a model-data fusion inter-comparison project, aimed at comparing the strengths and weaknesses of various model-data fusion techniques for estimating carbon model parameters and predicting carbon fluxes and states. The key question addressed here is: what are the confidence intervals on (a) model parameters calibrated from eddy covariance (EC) and leaf area index (LAI) data and (b) on model analyses and predictions of net ecosystem C exchange (NEE) and carbon stocks? The experiment has an explicit focus on how different algorithms and protocols quantify the confidence intervals on parameter estimates and model forecasts, given the same model and data. Nine participants contributed results using Metropolis algorithms, Kalman filters and a genetic algorithm. Both observed daily NEE data from FluxNet sites and synthetic NEE data, generated by a model, were used to estimate the parameters and states of a simple C dynamics model. The results of the analyses supported the hypothesis that parameters linked to fast-response processes that mostly determine net ecosystem exchange of CO2 (NEE) were well constrained and well characterised. Parameters associated with turnover of wood and allocation to roots, only indirectly related to NEE, were poorly characterised. There was only weak agreement on estimations of uncertainty on NEE and its components, photosynthesis and ecosystem respiration, with some algorithms successfully locating the true values of these fluxes from synthetic experiments within relatively narrow 90% confidence intervals. This exercise has demonstrated that a range of techniques exist that can generate useful estimates of parameter probability density functions for C models from eddy covariance time series data. When these parameter PDFs are propagated to generate estimates of annual C fluxes there was a wide variation in size of the 90% confidence intervals. However, some algorithms were able to make

  7. High resolution time interval counter

    DOEpatents

    Condreva, Kenneth J.

    1994-01-01

    A high resolution counter circuit measures the time interval between the occurrence of an initial and a subsequent electrical pulse to two nanoseconds resolution using an eight megahertz clock. The circuit includes a main counter for receiving electrical pulses and generating a binary word--a measure of the number of eight megahertz clock pulses occurring between the signals. A pair of first and second pulse stretchers receive the signal and generate a pair of output signals whose widths are approximately sixty-four times the time between the receipt of the signals by the respective pulse stretchers and the receipt by the respective pulse stretchers of a second subsequent clock pulse. Output signals are thereafter supplied to a pair of start and stop counters operable to generate a pair of binary output words representative of the measure of the width of the pulses to a resolution of two nanoseconds. Errors associated with the pulse stretchers are corrected by providing calibration data to both stretcher circuits, and recording start and stop counter values. Stretched initial and subsequent signals are combined with autocalibration data and supplied to an arithmetic logic unit to determine the time interval in nanoseconds between the pair of electrical pulses being measured.

  8. High resolution time interval counter

    DOEpatents

    Condreva, K.J.

    1994-07-26

    A high resolution counter circuit measures the time interval between the occurrence of an initial and a subsequent electrical pulse to two nanoseconds resolution using an eight megahertz clock. The circuit includes a main counter for receiving electrical pulses and generating a binary word--a measure of the number of eight megahertz clock pulses occurring between the signals. A pair of first and second pulse stretchers receive the signal and generate a pair of output signals whose widths are approximately sixty-four times the time between the receipt of the signals by the respective pulse stretchers and the receipt by the respective pulse stretchers of a second subsequent clock pulse. Output signals are thereafter supplied to a pair of start and stop counters operable to generate a pair of binary output words representative of the measure of the width of the pulses to a resolution of two nanoseconds. Errors associated with the pulse stretchers are corrected by providing calibration data to both stretcher circuits, and recording start and stop counter values. Stretched initial and subsequent signals are combined with autocalibration data and supplied to an arithmetic logic unit to determine the time interval in nanoseconds between the pair of electrical pulses being measured. 3 figs.
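
    A minimal arithmetic sketch of the interpolation principle described in these two records: a coarse count of eight-megahertz clock periods is combined with start and stop residuals that have been stretched by a nominal factor of sixty-four and counted against the same clock. The variable names, the calibration handling, and the exact sign convention are assumptions made for illustration; the patent's arithmetic logic unit may differ in detail.

      T_CLK_NS = 1e9 / 8_000_000   # 125 ns per period of the eight-megahertz clock
      STRETCH = 64.0               # nominal pulse-stretcher gain (~2 ns resolution)

      def interval_ns(main_count, start_stretch_count, stop_stretch_count,
                      start_gain=STRETCH, stop_gain=STRETCH):
          # main_count:       clock pulses counted between the two input pulses
          # *_stretch_count:  clock pulses counted over each stretched residual
          # *_gain:           stretcher gains measured during autocalibration
          start_residual = start_stretch_count * T_CLK_NS / start_gain
          stop_residual = stop_stretch_count * T_CLK_NS / stop_gain
          return main_count * T_CLK_NS + start_residual - stop_residual

      print(interval_ns(main_count=10, start_stretch_count=40, stop_stretch_count=12))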

  9. Variance misperception explains illusions of confidence in simple perceptual decisions.

    PubMed

    Zylberberg, Ariel; Roelfsema, Pieter R; Sigman, Mariano

    2014-07-01

    Confidence in a perceptual decision is a judgment about the quality of the sensory evidence. The quality of the evidence depends not only on its strength ('signal') but also, critically, on its reliability ('noise'); the separate contribution of these quantities to the formation of confidence judgments has not been investigated before in the context of perceptual decisions. We studied subjective confidence reports in a multi-element perceptual task where evidence strength and reliability could be manipulated independently. Our results reveal a confidence paradox: confidence is higher for stimuli of lower reliability that are associated with a lower accuracy. We show that the subjects' overconfidence in trials with unreliable evidence is caused by a reduced sensitivity to stimulus variability. Our results bridge the investigation of misattributions of confidence in behavioral economics and the domain of simple perceptual decisions amenable to neuroscience research. PMID:24951943

  10. Confidence to cook vegetables and the buying habits of Australian households.

    PubMed

    Winkler, Elisabeth; Turrell, Gavin

    2009-10-01

    Cooking skills are emphasized in nutrition promotion but their distribution among population subgroups and relationship to dietary behavior is researched by few population-based studies. This study examined the relationships between confidence to cook, sociodemographic characteristics, and household vegetable purchasing. This cross-sectional study of 426 randomly selected households in Brisbane, Australia, used a validated questionnaire to assess household vegetable purchasing habits and the confidence to cook of the person who most often prepares food for these households. The mutually adjusted odds ratios (ORs) of lacking confidence to cook were assessed across a range of demographic subgroups using multiple logistic regression models. Similarly, mutually adjusted mean vegetable purchasing scores were calculated using multiple linear regression for different population groups and for respondents with varying confidence levels. Lacking confidence to cook using a variety of techniques was more common among respondents with less education (OR 3.30; 95% confidence interval [CI] 1.01 to 10.75) and was less common among respondents who lived with minors (OR 0.22; 95% CI 0.09 to 0.53) and other adults (OR 0.43; 95% CI 0.24 to 0.78). Lack of confidence to prepare vegetables was associated with being male (OR 2.25; 95% CI 1.24 to 4.08), low education (OR 6.60; 95% CI 2.08 to 20.91), lower household income (OR 2.98; 95% CI 1.02 to 8.72) and living with other adults (OR 0.53; 95% CI 0.29 to 0.98). Households bought a greater variety of vegetables on a regular basis when the main chef was confident to prepare them (difference: 18.60; 95% CI 14.66 to 22.54), older (difference: 8.69; 95% CI 4.92 to 12.47), lived with at least one other adult (difference: 5.47; 95% CI 2.82 to 8.12) or at least one minor (difference: 2.86; 95% CI 0.17 to 5.55). Cooking skills may contribute to socioeconomic dietary differences, and may be a useful strategy for promoting fruit and vegetable
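
    The odds ratios and 95% confidence intervals reported above have the form produced by exponentiating a logistic-regression coefficient and its Wald interval. The sketch below shows only that generic calculation; the coefficient and standard error are hypothetical values chosen to roughly reproduce the first odds ratio quoted, not the study's estimates.

      import math

      def odds_ratio_ci(coef, se, z=1.96):
          # Odds ratio and Wald 95% confidence interval from a logistic-regression
          # coefficient and its standard error.
          return math.exp(coef), math.exp(coef - z * se), math.exp(coef + z * se)

      or_, lo, hi = odds_ratio_ci(coef=1.19, se=0.60)
      print(f"OR {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")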

  11. Relating confidence to measured information uncertainty in qualitative reasoning

    SciTech Connect

    Chavez, Gregory M; Zerkle, David K; Key, Brian P; Shevitz, Daniel W

    2010-10-07

    Qualitative reasoning makes use of qualitative assessments provided by subject matter experts to model factors such as security risk. Confidence in a result is important and useful when comparing competing results. Quantifying the confidence in an evidential reasoning result must be consistent and based on the available information. A novel method is proposed to relate confidence to the available information uncertainty in the result using fuzzy sets. Information uncertainty can be quantified through measures of non-specificity and conflict. Fuzzy values for confidence are established from information uncertainty values that lie between the measured minimum and maximum information uncertainty values.

  12. Cortical alpha activity predicts the confidence in an impending action

    PubMed Central

    Kubanek, Jan; Hill, N. Jeremy; Snyder, Lawrence H.; Schalk, Gerwin

    2015-01-01

    When we make a decision, we experience a degree of confidence that our choice may lead to a desirable outcome. Recent studies in animals have probed the subjective aspects of the choice confidence using confidence-reporting tasks. These studies showed that estimates of the choice confidence substantially modulate neural activity in multiple regions of the brain. Building on these findings, we investigated the neural representation of the confidence in a choice in humans who explicitly reported the confidence in their choice. Subjects performed a perceptual decision task in which they decided between choosing a button press or a saccade while we recorded EEG activity. Following each choice, subjects indicated whether they were sure or unsure about the choice. We found that alpha activity strongly encodes a subject's confidence level in a forthcoming button press choice. The neural effect of the subjects' confidence was independent of the reaction time and independent of the sensory input modeled as a decision variable. Furthermore, the effect is not due to a general cognitive state, such as reward expectation, because the effect was specifically observed during button press choices and not during saccade choices. The neural effect of the confidence in the ensuing button press choice was strong enough that we could predict, from independent single trial neural signals, whether a subject was going to be sure or unsure of an ensuing button press choice. In sum, alpha activity in human cortex provides a window into the commitment to make a hand movement. PMID:26283892

  13. The antecedents and belief-polarized effects of thought confidence.

    PubMed

    Chou, Hsuan-Yi; Lien, Nai-Hwa; Liang, Kuan-Yu

    2011-01-01

    This article investigates 2 possible antecedents of thought confidence and explores the effects of confidence induced before or during ad exposure. The results of the experiments indicate that both consumers' dispositional optimism and spokesperson attractiveness have significant effects on consumers' confidence in thoughts that are generated after viewing the advertisement. Higher levels of thought confidence will influence the quality of the thoughts that people generate, lead to either positively or negatively polarized message processing, and therefore induce better or worse advertising effectiveness, depending on the valence of thoughts. The authors posit the belief-polarization hypothesis to explain these findings. PMID:21902013

  14. Orders on Intervals Over Partially Ordered Sets: Extending Allen's Algebra and Interval Graph Results

    SciTech Connect

    Zapata, Francisco; Kreinovich, Vladik; Joslyn, Cliff A.; Hogan, Emilie A.

    2013-08-01

    To make a decision, we need to compare the values of quantities. In many practical situations, we know the values with interval uncertainty. In such situations, we need to compare intervals. Allen’s algebra describes all possible relations between intervals on the real line, and ordering relations between such intervals are well studied. In this paper, we extend this description to intervals in an arbitrary partially ordered set (poset). In particular, we explicitly describe ordering relations between intervals that generalize relation between points. As auxiliary results, we provide a logical interpretation of the relation between intervals, and extend the results about interval graphs to intervals over posets.

  15. Pigeons' Choices between Fixed-Interval and Random-Interval Schedules: Utility of Variability?

    ERIC Educational Resources Information Center

    Andrzejewski, Matthew E.; Cardinal, Claudia D.; Field, Douglas P.; Flannery, Barbara A.; Johnson, Michael; Bailey, Kathleen; Hineline, Philip N.

    2005-01-01

    Pigeons' choosing between fixed-interval and random-interval schedules of reinforcement was investigated in three experiments using a discrete-trial procedure. In all three experiments, the random-interval schedule was generated by sampling a probability distribution at an interval (and in multiples of the interval) equal to that of the…

  16. The use of simultaneous confidence bands for comparison of single parameter fluorescent intensity data.

    PubMed

    Kim, Dongha; Donnenberg, Vera S; Wilson, John W; Donnenberg, Albert D

    2016-01-01

    Despite the utility of multiparameter flow cytometry for a wide variety of biological applications, comparing single parameter histograms of fluorescence intensity remains a mainstay of flow cytometric analysis. Even comparisons requiring multiparameter gating strategies often end with single parameter histograms as the final readout. When histograms overlap, analysis relies on comparison of mean or median fluorescence intensities, or determination of percent positive based on an arbitrary cutoff. Earlier attempts to address this problem utilized either simple channel-by-channel subtraction without statistical evaluation, or the Kolmogorov-Smirnov (KS) or Chi-square test statistics, both of which proved to be overly sensitive to small and biologically insignificant differences. Here we present a method for the comparison of two single-parameter histograms based on difference curves and their simultaneous confidence bands generated by bootstrapping raw channel data. Bootstrapping is a nonparametric statistical approach that can be used to generate confidence intervals without distributional assumptions about the data. We have constructed simultaneous confidence bands and show them to be superior to KS and Cox methods. The method constructs 95% confidence bands about the difference curves, provides a P value for the comparison and calculates the area under the difference curve (AUC) as an estimate of percent positive and the area under the confidence band (AUCSCB95), providing a lower estimate of the percent positive. To demonstrate the utility of this new approach we have examined single-color fluorescence intensity data taken from a cell surface proteomic survey of a lung cancer cell line (A549) and a published fluorescence intensity data from a rhodamine efflux assay of P-glycoprotein activity, comparing rhodamine 123 loading and efflux in CD4 and CD8 T-cell populations. SAS source code is provided as supplementary material. PMID:26407241
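
    The construction described above can be sketched as follows: bootstrap both samples, recompute the histogram difference for each resample, and widen the channel-wise spread until the whole bootstrap curve stays inside the band with the desired probability. This max-deviation construction is one common way to obtain a simultaneous band; the paper's SAS implementation may differ in detail, and the intensity samples below are hypothetical.

      import numpy as np

      rng = np.random.default_rng(0)

      def difference_band(sample_a, sample_b, bins, n_boot=2000, level=0.95):
          def hist(x):
              h, _ = np.histogram(x, bins=bins, density=True)
              return h

          diff = hist(sample_a) - hist(sample_b)
          boot = np.empty((n_boot, diff.size))
          for i in range(n_boot):
              a = rng.choice(sample_a, size=sample_a.size, replace=True)
              b = rng.choice(sample_b, size=sample_b.size, replace=True)
              boot[i] = hist(a) - hist(b)

          # Scale the channel-wise spread so the entire bootstrap difference curve
          # lies inside the band with probability `level` (simultaneous coverage).
          se = boot.std(axis=0, ddof=1)
          safe_se = np.where(se > 0, se, np.inf)
          max_dev = np.max(np.abs(boot - diff) / safe_se, axis=1)
          c = np.quantile(max_dev, level)
          return diff, diff - c * se, diff + c * se

      # Hypothetical log-intensity samples standing in for two single-parameter histograms.
      a = rng.normal(2.0, 0.5, 5000)
      b = rng.normal(2.2, 0.5, 5000)
      diff, lower, upper = difference_band(a, b, bins=np.linspace(0, 4, 65))
      print("channels where the band excludes zero:", int(np.sum((lower > 0) | (upper < 0))))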

  17. Microsatellite Instability Status of Interval Colorectal Cancers in a Korean Population

    PubMed Central

    Lee, Kil Woo; Park, Soo-Kyung; Yang, Hyo-Joon; Jung, Yoon Suk; Choi, Kyu Yong; Kim, Kyung Eun; Jung, Kyung Uk; Kim, Hyung Ook; Kim, Hungdai; Chun, Ho-Kyung; Park, Dong Il

    2016-01-01

    Background/Aims A subset of patients may develop colorectal cancer after a colonoscopy that is negative for malignancy. These missed or de novo lesions are referred to as interval cancers. The aim of this study was to determine whether interval colon cancers are more likely to result from the loss of function of mismatch repair genes than sporadic cancers and to demonstrate microsatellite instability (MSI). Methods Interval cancer was defined as a cancer that was diagnosed within 5 years of a negative colonoscopy. Among the patients who underwent an operation for colorectal cancer from January 2013 to December 2014, archived cancer specimens were evaluated for MSI by sequencing microsatellite loci. Results Of the 286 colon cancers diagnosed during the study period, 25 (8.7%) represented interval cancer. MSI was found in eight of the 25 patients (32%) that presented interval cancers compared with 22 of the 261 patients (8.4%) that presented sporadic cancers (p=0.002). In the multivariable logistic regression model, MSI was associated with interval cancer (OR, 3.91; 95% confidence interval, 1.38 to 11.05). Conclusions Interval cancers were approximately four times more likely to show high MSI than sporadic cancers. Our findings indicate that certain interval cancers may occur because of distinct biological features. PMID:27114419

  18. True and false memories, parietal cortex, and confidence judgments.

    PubMed

    Urgolites, Zhisen J; Smith, Christine N; Squire, Larry R

    2015-11-01

    Recent studies have asked whether activity in the medial temporal lobe (MTL) and the neocortex can distinguish true memory from false memory. A frequent complication has been that the confidence associated with correct memory judgments (true memory) is typically higher than the confidence associated with incorrect memory judgments (false memory). Accordingly, it has often been difficult to know whether a finding is related to memory confidence or memory accuracy. In the current study, participants made recognition memory judgments with confidence ratings in response to previously studied scenes and novel scenes. The left hippocampus and 16 other brain regions distinguished true and false memories when confidence ratings were different for the two conditions. Only three regions (all in the parietal cortex) distinguished true and false memories when confidence ratings were equated. These findings illustrate the utility of taking confidence ratings into account when identifying brain regions associated with true and false memories. Neural correlates of true and false memories are most easily interpreted when confidence ratings are similar for the two kinds of memories. PMID:26472645

  19. Producing "Confident" Children: Negotiating Childhood in Fijian Kindergartens

    ERIC Educational Resources Information Center

    Brison, Karen J.

    2011-01-01

    Kindergartens in Fiji contribute to incipient class-based identities in a society traditionally structured by ethnicity. Teachers emphasize making children confident, but define confidence differently with varying student groups, building class-based orientations toward person and society. Parental expectations also differ with many upwardly…

  20. The Confident Learner: Help Your Child Succeed in School.

    ERIC Educational Resources Information Center

    Simic, Marjorie R.; And Others

    This book is intended to assist parents in helping their children become confident learners and self-reliant individuals who succeed in school. The book maintains that children become confident learners by developing high self-esteem, strong motivation, self-discipline, good health and fitness, and the ability to deal with stress. Following an…

  1. True and False Memories, Parietal Cortex, and Confidence Judgments

    ERIC Educational Resources Information Center

    Urgolites, Zhisen J.; Smith, Christine N.; Squire, Larry R.

    2015-01-01

    Recent studies have asked whether activity in the medial temporal lobe (MTL) and the neocortex can distinguish true memory from false memory. A frequent complication has been that the confidence associated with correct memory judgments (true memory) is typically higher than the confidence associated with incorrect memory judgments (false memory).…

  2. Utilitarian Model of Measuring Confidence within Knowledge-Based Societies

    ERIC Educational Resources Information Center

    Jack, Brady Michael; Hung, Kuan-Ming; Liu, Chia Ju; Chiu, Houn Lin

    2009-01-01

    This paper introduces a utilitarian confidence testing statistic called Risk Inclination Model (RIM) which indexes all possible confidence wagering combinations within the confines of a defined symmetrically point-balanced test environment. This paper presents the theoretical underpinnings, a formal derivation, a hypothetical application, and…

  3. Influence of Achievement Beliefs on Adolescent Girls' Sport Confidence Sources.

    ERIC Educational Resources Information Center

    Magyar, T. Michelle; Feltz, Deborah L.

    A study was conducted on the influence of female athletes' dispositional and situational tendencies on the selection of sources of sport confidence. It hypothesized that task orientation and perceptions of mastery climate would be positively associated with the selection of maladaptive or normative sources of confidence. Participants were 180…

  4. Information and Communication: Tools for Increasing Confidence in the Schools.

    ERIC Educational Resources Information Center

    Achilles, C. M.; Lintz, M. N.

    Beginning with a review of signs and signals of public attitudes toward American education over the last 15 years, this paper analyzes some concerns regarding public confidence in public schools. Following a brief introduction, issues involved in the definition and behavioral attributes of confidence are mentioned. A synopsis of three approaches…

  5. Confidence and Gender Differences on the Mental Rotations Test

    ERIC Educational Resources Information Center

    Cooke-Simpson, Amanda; Voyer, Daniel

    2007-01-01

    The present study examined the relation between self-reported confidence ratings, performance on the Mental Rotations Test (MRT), and guessing behavior on the MRT. Eighty undergraduate students (40 males, 40 females) completed the MRT while rating their confidence in the accuracy of their answers for each item. As expected, gender differences in…

  6. Subjective Confidence in One's Answers: The Consensuality Principle

    ERIC Educational Resources Information Center

    Koriat, Asher

    2008-01-01

    In answering general-information questions, a within-person confidence-accuracy (C-A) correlation is typically observed, suggesting that people can monitor the correctness of their knowledge. However, because the correct answer is generally the consensual answer--the one endorsed by most participants--confidence judgment may actually monitor the…

  7. The Self-Consistency Model of Subjective Confidence

    ERIC Educational Resources Information Center

    Koriat, Asher

    2012-01-01

    How do people monitor the correctness of their answers? A self-consistency model is proposed for the process underlying confidence judgments and their accuracy. In answering a 2-alternative question, participants are assumed to retrieve a sample of representations of the question and base their confidence on the consistency with which the chosen…

  8. A Rasch Analysis of the Teachers Music Confidence Scale

    ERIC Educational Resources Information Center

    Yim, Hoi Yin Bonnie; Abd-El-Fattah, Sabry; Lee, Lai Wan Maria

    2007-01-01

    This article presents a new measure of teachers' confidence to conduct musical activities with young children; Teachers Music Confidence Scale (TMCS). The TMCS was developed using a sample of 284 in-service and pre-service early childhood teachers in Hong Kong Special Administrative Region (HKSAR). The TMCS consisted of 10 musical activities.…

  9. Confidence set inference with a prior quadratic bound. [in geophysics]

    NASA Technical Reports Server (NTRS)

    Backus, George E.

    1989-01-01

    Neyman's (1937) theory of confidence sets is developed as a replacement for Bayesian inference (BI) and stochastic inversion (SI) when the prior information is a hard quadratic bound. It is recommended that BI and SI be replaced by confidence set inference (CSI) only in certain circumstances. The geomagnetic problem is used to illustrate the general theory of CSI.

  10. Supplementary Eye Field Encodes Confidence in Decisions Under Risk.

    PubMed

    So, NaYoung; Stuphorn, Veit

    2016-02-01

    Choices are made with varying degrees of confidence, a cognitive signal representing the subjective belief in the optimality of the choice. Confidence has been mostly studied in the context of perceptual judgments, in which choice accuracy can be measured using objective criteria. Here, we study confidence in subjective value-based decisions. We recorded in the supplementary eye field (SEF) of monkeys performing a gambling task, where they had to use subjective criteria for placing bets. We found neural signals in the SEF that explicitly represent choice confidence independent from reward expectation. This confidence signal appeared after the choice and diminished before the choice outcome. Most of this neuronal activity was negatively correlated with confidence, and was strongest in trials on which the monkey spontaneously withdrew his choice. Such confidence-related activity indicates that the SEF not only guides saccade selection, but also evaluates the likelihood that the choice was optimal. This internal evaluation influences decisions concerning the willingness to bear later costs that follow from the choice or to avoid them. More generally, our findings indicate that choice confidence is an integral component of all forms of decision-making, whether they are based on perceptual evidence or on value estimations. PMID:25750256

  11. The Metamemory Approach to Confidence: A Test Using Semantic Memory

    ERIC Educational Resources Information Center

    Brewer, William F.; Sampaio, Cristina

    2012-01-01

    The metamemory approach to memory confidence was extended and elaborated to deal with semantic memory tasks. The metamemory approach assumes that memory confidence is based on the products and processes of a completed memory task, as well as metamemory beliefs that individuals have about how their memory products and processes relate to memory…

  12. Recognition confidence under violated and confirmed memory expectations

    PubMed Central

    Jaeger, Antonio; Cox, Justin C.; Dobbins, Ian G.

    2011-01-01

    Our memory experiences typically covary with those of others around us, and on average, an item is more likely to be familiar than not if a companion recommends it as such. Although it would be ideal if observers could use the external recommendations of others as statistical priors during recognition decisions, it is currently unclear how or if they do so. Furthermore, understanding the sensitivity of recognition judgments to such external cues is critical for understanding memory conformity and eyewitness suggestibility phenomena. To address this, we examined recognition accuracy and confidence following cues from an external source (e.g., “Likely old”) that forecast the likely status of upcoming memory probes. Three regularities emerged. First, hit and correct rejection rates expectedly fell when subjects were invalidly versus validly cued. Second, hit confidence was generally higher than correct rejection confidence, regardless of cue validity. Finally, and most noteworthy, cue validity interacted with judgment confidence such that validity heavily influenced the confidence of correct rejections, but had no discernable influence on the confidence of hits. Bootstrap-informed Monte Carlo simulation supported a dual-process recognition model under which familiarity and recollection processes counteract to heavily dampen the influence of external cues on average reported confidence. A third experiment tested this model using source memory. As predicted, because source memory is heavily governed by contextual recollection, cue validity again did not affect confidence, although as with recognition, it clearly altered accuracy. PMID:21967231

  13. Feedback Dependence Among Low Confidence Preadolescent Boys and Girls.

    ERIC Educational Resources Information Center

    Stewart, Michael J.; Corbin, Charles B.

    1988-01-01

    Investigation of differences between male and female students' reactions to receiving or not receiving performance feedback indicated that both sexes showed lower self-confidence when they did not receive feedback and that lack of self-confidence impaired the performance of males more than females. Participants were 111 fifth- and sixth-grade…

  14. An Application of the Poisson Race Model to Confidence Calibration

    ERIC Educational Resources Information Center

    Merkle, Edgar C.; Van Zandt, Trisha

    2006-01-01

    In tasks as diverse as stock market predictions and jury deliberations, a person's feelings of confidence in the appropriateness of different choices often impact that person's final choice. The current study examines the mathematical modeling of confidence calibration in a simple dual-choice task. Experiments are motivated by an accumulator…

  15. Music Education Preservice Teachers' Confidence in Resolving Behavior Problems

    ERIC Educational Resources Information Center

    Hedden, Debra G.

    2015-01-01

    The purpose of this study was to investigate whether there would be a change in preservice teachers' (a) confidence concerning the resolution of behavior problems, (b) tactics for resolving them, (c) anticipation of problems, (d) fears about management issues, and (e) confidence in methodology and pedagogy over the time period of a one-semester…

  16. Confidence Sharing in the Vocational Counselling Interview: Emergence and Repercussions

    ERIC Educational Resources Information Center

    Olry-Louis, Isabelle; Bremond, Capucine; Pouliot, Manon

    2012-01-01

    Confidence sharing is an asymmetrical dialogic episode to which both parties consent, in which one reveals something personal to the other who participates in the emergence and unfolding of the confidence. We describe how this is achieved at a discursive level within vocational counselling interviews. Based on a corpus of 64 interviews, we analyse…

  17. Understanding public confidence in government to prevent terrorist attacks.

    SciTech Connect

    Baldwin, T. E.; Ramaprasad, A.; Samsa, M. E.; Decision and Information Sciences; Univ. of Illinois at Chicago

    2008-04-02

    A primary goal of terrorism is to instill a sense of fear and vulnerability in a population and to erode its confidence in government and law enforcement agencies to protect citizens against future attacks. In recognition of its importance, the Department of Homeland Security includes public confidence as one of the principal metrics used to assess the consequences of terrorist attacks. Hence, a detailed understanding of the variations in public confidence among individuals, terrorist event types, and as a function of time is critical to developing this metric. In this exploratory study, a questionnaire was designed, tested, and administered to small groups of individuals to measure public confidence in the ability of federal, state, and local governments and their public safety agencies to prevent acts of terrorism. Data was collected from three groups before and after they watched mock television news broadcasts portraying a smallpox attack, a series of suicide bomber attacks, a refinery explosion attack, and cyber intrusions on financial institutions, resulting in identity theft. Our findings are: (a) although the aggregate confidence level is low, there are optimists and pessimists; (b) the subjects are discriminating in interpreting the nature of a terrorist attack, the time horizon, and its impact; (c) confidence recovery after a terrorist event has an incubation period; and (d) the patterns of recovery of confidence of the optimists and the pessimists are different. These findings can affect the strategy and policies to manage public confidence after a terrorist event.

  18. Confidence Scoring of Speaking Performance: How Does Fuzziness become Exact?

    ERIC Educational Resources Information Center

    Jin, Tan; Mak, Barley; Zhou, Pei

    2012-01-01

    The fuzziness of assessing second language speaking performance raises two difficulties in scoring speaking performance: "indistinction between adjacent levels" and "overlap between scales". To address these two problems, this article proposes a new approach, "confidence scoring", to deal with such fuzziness, leading to "confidence" scores between…

  19. RIASEC Interest and Confidence Cutoff Scores: Implications for Career Counseling

    ERIC Educational Resources Information Center

    Bonitz, Verena S.; Armstrong, Patrick Ian; Larson, Lisa M.

    2010-01-01

    One strategy commonly used to simplify the joint interpretation of interest and confidence inventories is the use of cutoff scores to classify individuals dichotomously as having high or low levels of confidence and interest, respectively. The present study examined the adequacy of cutoff scores currently recommended for the joint interpretation…

  20. Prospective Teachers' Problem Solving Skills and Self-Confidence Levels

    ERIC Educational Resources Information Center

    Gursen Otacioglu, Sena

    2008-01-01

    The basic objective of the research is to determine whether the education that prospective teachers in different fields receive is related to their levels of problem solving skills and self-confidence. Within the mentioned framework, the prospective teachers' problem solving and self-confidence levels have been examined under several variables.…

  1. Self-Confidence in Women's Education: A Feminist Critique.

    ERIC Educational Resources Information Center

    Davis, Fran; Steiger, Arlene

    While acknowledging the research that suggests that women approach their education with lower levels of self-confidence than men, this paper raises fundamental questions about how self-confidence has been described and measured during the last two decades. The validity of work on women's attitudes toward academic success is shown to be undercut by…

  2. Development of Confidence in Child Behavior Management through Role Playing.

    ERIC Educational Resources Information Center

    Kress, Gerard C., Jr.; Ehrlichs, Melvin A.

    1990-01-01

    In a preclinical course in pediatric dentistry, 76 students were taught child behavior management through role playing of 7-10 common management situations. Pre- and postcourse measures of student confidence found that, although older students were more confident, all gained significantly from the training. Other student characteristics were also…

  3. A (revised) confidence index for the forecasting of meteor showers

    NASA Astrophysics Data System (ADS)

    Vaubaillon, J.

    2016-01-01

    A confidence index for the forecasting of meteor showers is presented. The goal is to provide users with information about how the forecasting is performed, so that different degrees of confidence can be conveyed. This paper presents the meaning of the index coding system.

  4. Automatic integration of confidence in the brain valuation signal.

    PubMed

    Lebreton, Maël; Abitbol, Raphaëlle; Daunizeau, Jean; Pessiglione, Mathias

    2015-08-01

    A key process in decision-making is estimating the value of possible outcomes. Growing evidence suggests that different types of values are automatically encoded in the ventromedial prefrontal cortex (VMPFC). Here we extend this idea by suggesting that any overt judgment is accompanied by a second-order valuation (a confidence estimate), which is also automatically incorporated in VMPFC activity. In accordance with the predictions of our normative model of rating tasks, two behavioral experiments showed that confidence levels were quadratically related to first-order judgments (age, value, or probability ratings). The analysis of three functional magnetic resonance imaging data sets using similar rating tasks confirmed that the quadratic extension of first-order ratings (our proxy for confidence) was encoded in VMPFC activity, even if no confidence judgment was required of the participants. Such an automatic aggregation of value and confidence in the same brain region might provide insight into many distortions of judgment and choice. PMID:26192748

  5. Can nursing students' confidence levels increase with repeated simulation activities?

    PubMed

    Cummings, Cynthia L; Connelly, Linda K

    2016-01-01

    In 2014, nursing faculty conducted a study with undergraduate nursing students on their satisfaction, confidence, and educational practice levels, as they related to simulation activities throughout the curriculum. The study was a voluntary survey conducted on junior and senior year nursing students. It consisted of 30 items based on the Student Satisfaction and Self-Confidence in Learning and the Educational Practices Questionnaire (Jeffries, 2012). Mean averages were obtained for each of the 30 items from both groups and were compared using t scores for unpaired means. The results showed that 8 of the items reached the 95% confidence level and, when combined, the items were significant at p < .001. The items identified were those related to self-confidence and active learning. Based on these findings, it can be assumed that repeated simulation experiences can lead to an increase in student confidence and active learning. PMID:26599594
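
    The group comparison described above (unpaired means compared item by item) can be reproduced with a standard two-sample t test. A minimal sketch in Python, using illustrative item scores rather than the study's data:

```python
# Hypothetical scores for one survey item from two student groups
# (juniors vs. seniors); the numbers are illustrative, not the study's data.
from scipy import stats

juniors = [4.2, 3.8, 4.5, 4.0, 3.9, 4.4, 4.1, 3.7]
seniors = [4.6, 4.4, 4.8, 4.5, 4.3, 4.7, 4.9, 4.2]

# Welch's unpaired t test (does not assume equal variances).
t_stat, p_value = stats.ttest_ind(juniors, seniors, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# An item would be flagged at the 95% confidence level when p < 0.05.
```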

  6. Pre-competitive confidence, coping, and subjective performance in sport.

    PubMed

    Levy, A R; Nicholls, A R; Polman, R C J

    2011-10-01

    The primary aim of this study was to investigate the relationship between confidence and subjective performance in addition to exploring whether coping mediated this relationship. A sample of 414 athletes completed a measure of confidence before performance. Athletes also completed a measure of coping and subjective performance after competing. Correlational findings revealed that confidence was positively and significantly associated with subjective performance. Furthermore, mediational analysis found that coping partly mediated this relationship. In particular, task-oriented coping (i.e., mental imagery) and disengagement-oriented coping (i.e., resignation) had positive and negative mediational effects, respectively. Additionally, athletes who employed mental imagery generally coped more effectively than those using resignation. These findings imply mental imagery has the potential not only to improve confidence, but also subsequent performance, while resignation coping may have the opposite effect. Overall, these results lend some credence to Vealey's integrated sports confidence model. PMID:20459476

  7. Min and Max Extreme Interval Values

    ERIC Educational Resources Information Center

    Jance, Marsha L.; Thomopoulos, Nick T.

    2011-01-01

    The paper shows how to find the min and max extreme interval values for the exponential and triangular distributions from the min and max uniform extreme interval values. Tables are provided to show the min and max extreme interval values for the uniform, exponential, and triangular distributions for different probabilities and observation sizes.

  8. Familiarity-Frequency Ratings of Melodic Intervals

    ERIC Educational Resources Information Center

    Jeffries, Thomas B.

    1972-01-01

    Objective of this study was to determine subjects' reliability in rating randomly played ascending and descending melodic intervals within the octave on the basis of their familiarity with each type of interval and the frequency of their having experienced each type of interval in music. (Author/CB)

  9. Playing with confidence: the relationship between imagery use and self-confidence and self-efficacy in youth soccer players.

    PubMed

    Munroe-Chandler, Krista; Hall, Craig; Fishburne, Graham

    2008-12-01

    Confidence has been one of the most consistent factors in distinguishing the successful from the unsuccessful athletes (Gould, Weiss, & Weinberg, 1981) and Bandura (1997) proposed that imagery is one way to enhance confidence. Therefore, the purpose of the present study was to examine the relationship between imagery use and confidence in soccer (football) players. The participants included 122 male and female soccer athletes ages 11-14 years participating in both house/recreation (n = 72) and travel/competitive (n = 50) levels. Athletes completed three questionnaires: one measuring the frequency of imagery use, one assessing generalised self-confidence, and one assessing self-efficacy in soccer. A series of regression analyses found that Motivational General-Mastery (MG-M) imagery was a significant predictor of self-confidence and self-efficacy in both recreational and competitive youth soccer players. More specifically, MG-M imagery accounted for between 40 and 57% of the variance for both self-confidence and self-efficacy with two other functions (MG-A and MS) contributing marginally in the self-confidence regression for recreational athletes. These findings suggest that if a youth athlete, regardless of competitive level, wants to increase his/her self-confidence or self-efficacy through the use of imagery, the MG-M function should be emphasised. PMID:18949659

  10. Confidence-accuracy resolution in the misinformation paradigm is influenced by the availability of source cues.

    PubMed

    Horry, Ruth; Colton, Lisa-Marie; Williamson, Paul

    2014-09-01

    After witnessing an event, people often report having seen details that were merely suggested to them. Evidence is mixed regarding how well participants can use confidence judgments to discriminate between their correct and misled memory reports. We tested the prediction that the confidence-accuracy relationship for misled details depends upon the availability of source cues at retrieval. In Experiment 1, participants (N=77) viewed a videotaped staged crime before reading a misleading narrative. After seven minutes or one week, the participants completed a cued recall test for the details of the original event. Prior to completing the test, all participants were warned that the narrative contained misleading details to encourage source monitoring. The results showed that the strength of the confidence-accuracy relationship declined significantly over the delay. We interpret our results in the source monitoring framework. After an extended delay, fewer diagnostic source details were available to participants, increasing reliance on retrieval fluency as a basis for memory and metamemory decisions. We tested this interpretation in a second experiment, in which participants (N=42) completed a source monitoring test instead of a cued recall test. We observed a large effect of retention interval on source monitoring, and no significant effect on item memory. This research emphasizes the importance of securing eyewitness statements as soon as possible after an event, when witnesses are most able to discriminate between information that was personally seen and information obtained from secondary sources. PMID:24983514

  11. Intuitive Feelings of Warmth and Confidence in Insight and Noninsight Problem Solving of Magic Tricks

    PubMed Central

    Hedne, Mikael R.; Norman, Elisabeth; Metcalfe, Janet

    2016-01-01

    The focus of the current study is on intuitive feelings of insight during problem solving and the extent to which such feelings are predictive of successful problem solving. We report the results from an experiment (N = 51) that applied a procedure where the to-be-solved problems were 32 short (15 s) video recordings of magic tricks. The procedure included metacognitive ratings similar to the “warmth ratings” previously used by Metcalfe and colleagues, as well as confidence ratings. At regular intervals during problem solving, participants indicated the perceived closeness to the correct solution. Participants also indicated directly whether each problem was solved by insight or not. Problems that people claimed were solved by insight were characterized by higher accuracy and higher confidence than noninsight solutions. There was no difference between the two types of solution in warmth ratings, however. Confidence ratings were more strongly associated with solution accuracy for noninsight than insight trials. Moreover, for insight trials the participants were more likely to repeat their incorrect solutions on a subsequent recognition test. The results have implications for understanding people's metacognitive awareness of the cognitive processes involved in problem solving. They also have general implications for our understanding of how intuition and insight are related.

  12. Disconnections Between Teacher Expectations and Student Confidence in Bioethics

    NASA Astrophysics Data System (ADS)

    Hanegan, Nikki L.; Price, Laura; Peterson, Jeremy

    2008-09-01

    This study examines how student practice of scientific argumentation using socioscientific bioethics issues affects both teacher expectations of students’ general performance and student confidence in their own work. When teachers use bioethical issues in the classroom, students can gain not only biology content knowledge but also important decision-making skills. Learning bioethics through scientific argumentation gives students opportunities to express their ideas, formulate educated opinions and value others’ viewpoints. Research has shown that science teachers’ expectations of student success and knowledge directly influence student achievement and confidence levels. Our study analyzes pre-course and post-course surveys completed by students enrolled in a university level bioethics course (n = 111) and by faculty in the College of Biology and Agriculture (n = 34) based on their perceptions of student confidence. Additionally, student data were collected from classroom observations and interviews. Data analysis showed a disconnect between faculty and students’ perceptions of confidence for both knowledge and the use of science argumentation. Student reports of their confidence levels regarding various bioethical issues were higher than faculty reports. A further disconnect showed up between students’ preferred learning styles and the general faculty’s common teaching methods; students learned more by practicing scientific argumentation than listening to traditional lectures. Students who completed a bioethics course that included practice in scientific argumentation significantly increased their confidence levels. This study suggests that professors’ expectations and teaching styles influence student confidence levels in both knowledge and scientific argumentation.

  13. Neural correlates of perceived confidence in a partial report paradigm.

    PubMed

    Graziano, Martín; Parra, Lucas C; Sigman, Mariano

    2015-06-01

    Confidence judgments are often severely distorted: People may feel underconfident when responding correctly or, conversely, overconfident in erred responses. Our aim here was to identify the timing of brain processes that lead to variations in objective performance and subjective judgments of confidence. We capitalized on the Partial Report Paradigm [Sperling, G. The information available in brief visual presentations. Psychological Monographs: General and Applied, 74, 1, 1960], which allowed us to separate experimentally the moment of encoding of information from that of its retrieval [Zylberberg, A., Dehaene, S., Mindlin, G. B., & Sigman, M. Neurophysiological bases of exponential sensory decay and top-down memory retrieval: A model. Frontiers in Computational Neuroscience, 3, 2009]. We observed that the level of subjective confidence is indexed by two very specific evoked potentials at latencies of about 400 and 600 msec during the retrieval stage and by a stationary measure of intensity of the alpha band during the encoding period. When factoring out the effect of confidence, objective performance shows a weak effect during the encoding and retrieval periods. These results have relevant implications for theories of decision-making and confidence, suggesting that confidence is not constructed online as evidence is accumulated toward a decision. Instead, confidence attributions are more consistent with a retrospective mechanism that monitors the entire decision process. PMID:25390193

  14. Public eyewitness confidence ratings can differ from those held privately.

    PubMed

    Shaw, J S; Zerr, T K; Woythaler, K A

    2001-04-01

    Despite much research on eyewitness confidence, we know very little about whether confidence ratings given in public might differ from those held privately. This study tested a prediction derived from self-presentation theory that eyewitnesses will give lower confidence ratings in public when there is a possibility of their account being contradicted by other witnesses as compared to when they report their confidence in private. In groups of 3 or 4 people, 96 participants watched a videotape of a simulated robbery and then answered 16 forced-choice questions about details from the videotape. In half of the experimental sessions, the participants shared their answers and confidence ratings aloud with the other participants (public condition), and in the other half, the answers and ratings were not shared (private). As predicted, confidence ratings were significantly lower in the public condition than in the private condition, but the privacy manipulation had no effect on response accuracy. These results are consistent with a self-presentation explanation, and they highlight the need to examine public confidence ratings more thoroughly. PMID:11419379

  15. Decision-related cortical potentials during an auditory signal detection task with cued observation intervals

    NASA Technical Reports Server (NTRS)

    Squires, K. C.; Squires, N. K.; Hillyard, S. A.

    1975-01-01

    Cortical-evoked potentials were recorded from human subjects performing an auditory detection task with confidence rating responses. Unlike earlier studies that used similar procedures, the observation interval during which the auditory signal could occur was clearly marked by a visual cue light. By precisely defining the observation interval and, hence, synchronizing all perceptual decisions to the evoked potential averaging epoch, it was possible to demonstrate that high-confidence false alarms are accompanied by late-positive P3 components equivalent to those for equally confident hits. Moreover the hit and false alarm evoked potentials were found to covary similarly with variations in confidence rating and to have similar amplitude distributions over the scalp. In a second experiment, it was demonstrated that correct rejections can be associated with a P3 component larger than that for hits. Thus it was possible to show, within the signal detection paradigm, how the two major factors of decision confidence and expectancy are reflected in the P3 component of the cortical-evoked potential.

  16. A Poisson process approximation for generalized K-S confidence regions

    NASA Technical Reports Server (NTRS)

    Arsham, H.; Miller, D. R.

    1982-01-01

    One-sided confidence regions for continuous cumulative distribution functions are constructed using empirical cumulative distribution functions and the generalized Kolmogorov-Smirnov distance. The band width of such regions becomes narrower in the right or left tail of the distribution. To avoid tedious computation of confidence levels and critical values, an approximation based on the Poisson process is introduced. This approximation provides a conservative confidence region; moreover, the approximation error decreases monotonically to 0 as sample size increases. Critical values necessary for implementation are given. Applications are made to the areas of risk analysis, investment modeling, reliability assessment, and analysis of fault tolerant systems.
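
    The paper's Poisson-process approximation is not reproduced here, but a generic one-sided confidence region for a distribution function, built from the empirical CDF via the one-sided Dvoretzky-Kiefer-Wolfowitz bound, illustrates the kind of region being approximated:

```python
# Generic one-sided confidence band for a CDF from the empirical CDF,
# using the one-sided Dvoretzky-Kiefer-Wolfowitz inequality.  This is NOT
# the Poisson-process approximation of the paper, only an illustration of
# a one-sided confidence region for a distribution function.
import numpy as np

def one_sided_band(sample, alpha=0.05):
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    ecdf = np.arange(1, n + 1) / n
    # One-sided DKW: P(sup_x (F_n(x) - F(x)) > eps) <= exp(-2 n eps^2)
    eps = np.sqrt(np.log(1.0 / alpha) / (2.0 * n))
    lower = np.clip(ecdf - eps, 0.0, 1.0)   # F(x) >= lower, simultaneously, with conf. 1 - alpha
    return x, ecdf, lower

rng = np.random.default_rng(0)
x, ecdf, lower = one_sided_band(rng.exponential(size=200))
print(lower[:5])
```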

  17. Validation, Uncertainty, and Quantitative Reliability at Confidence (QRC)

    SciTech Connect

    Logan, R W; Nitta, C K

    2002-12-06

    This paper represents a summary of our methodology for Verification and Validation and Uncertainty Quantification. A graded scale methodology is presented and related to other concepts in the literature. We describe the critical nature of quantified Verification and Validation with Uncertainty Quantification at specified Confidence levels in evaluating system certification status. Only after Verification and Validation has contributed to Uncertainty Quantification at specified confidence can rational tradeoffs of various scenarios be made. Verification and Validation methods for various scenarios and issues are applied in assessments of Quantified Reliability at Confidence and we summarize briefly how this can lead to a Value Engineering methodology for investment strategy.

  18. Intervals in evolutionary algorithms for global optimization

    SciTech Connect

    Patil, R.B.

    1995-05-01

    Optimization is of central concern to a number of disciplines. Interval Arithmetic methods for global optimization provide us with (guaranteed) verified results. These methods are mainly restricted to the classes of objective functions that are twice differentiable and use a simple strategy of eliminating and splitting larger regions of the search space in the global optimization process. An efficient approach that combines the efficient strategy from Interval Global Optimization Methods and the robustness of Evolutionary Algorithms is proposed. In the proposed approach, search begins with randomly created interval vectors with interval widths equal to the whole domain. Before the beginning of the evolutionary process, fitness of these interval parameter vectors is defined by evaluating the objective function at the center of the initial interval vectors. In the subsequent evolutionary process, the local optimization process returns an estimate of the bounds of the objective function over the interval vectors. Though these bounds may not be correct at the beginning due to large interval widths and complicated function properties, the process of reducing interval widths over time and a selection approach similar to simulated annealing helps in estimating reasonably correct bounds as the population evolves. The interval parameter vectors at these estimated bounds (local optima) are then subjected to crossover and mutation operators. This evolutionary process continues for a predetermined number of generations in search of the global optimum.
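
    A heavily simplified sketch of the combination described above (interval vectors whose widths shrink over generations, scored at their midpoints) is given below. It is an illustration only, not the author's algorithm, and it computes no verified bounds:

```python
# Toy illustration: a population of 1-D intervals whose widths shrink over
# generations, with fitness taken at interval midpoints.  Not the author's
# algorithm; no guaranteed interval bounds are computed here.
import random

def objective(x):
    return (x - 1.3) ** 2 + 0.5          # toy 1-D function to minimize

DOMAIN = (-10.0, 10.0)
POP_SIZE, GENERATIONS, SHRINK = 20, 60, 0.9

# start with intervals covering the whole domain
population = [list(DOMAIN) for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    candidates = []
    for lo, hi in population:
        width = (hi - lo) * SHRINK                               # narrow over time
        mid = 0.5 * (lo + hi) + random.gauss(0.0, 0.1 * width)   # mutate midpoint
        candidates.append([mid - width / 2.0, mid + width / 2.0])
    # selection: keep the better half (scored at midpoints), refill by cloning
    candidates.sort(key=lambda iv: objective(0.5 * (iv[0] + iv[1])))
    population = candidates[:POP_SIZE // 2]
    population += [list(random.choice(population)) for _ in range(POP_SIZE - len(population))]

best_lo, best_hi = population[0]
print("best interval:", (round(best_lo, 3), round(best_hi, 3)),
      "midpoint value:", round(objective(0.5 * (best_lo + best_hi)), 4))
```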

  19. Erratum: Action-Specific Disruption of Perceptual Confidence

    PubMed Central

    2015-01-01

    Fleming, S. M., Maniscalco, B., Ko, Y., Amendi, N., Ro, T., & Lau, H. (2015). Action-specific disruption of perceptual confidence. Psychological Science, 26, 89–98. (Original DOI: 10.1177/0956797614557697) PMID:25814500

  20. Confidence and the stock market: an agent-based approach.

    PubMed

    Bertella, Mario A; Pires, Felipe R; Feng, Ling; Stanley, Harry Eugene

    2014-01-01

    Using a behavioral finance approach we study the impact of behavioral bias. We construct an artificial market consisting of fundamentalists and chartists to model the decision-making process of various agents. The agents differ in their strategies for evaluating stock prices, and exhibit differing memory lengths and confidence levels. When we increase the heterogeneity of the strategies used by the agents, in particular the memory lengths, we observe excess volatility and kurtosis, in agreement with real market fluctuations--indicating that agents in real-world financial markets exhibit widely differing memory lengths. We incorporate the behavioral traits of adaptive confidence and observe a positive correlation between average confidence and return rate, indicating that market sentiment is an important driver in price fluctuations. The introduction of market confidence increases price volatility, reflecting the negative effect of irrationality in market behavior. PMID:24421888

  1. Confidence and the Stock Market: An Agent-Based Approach

    PubMed Central

    Bertella, Mario A.; Pires, Felipe R.; Feng, Ling; Stanley, Harry Eugene

    2014-01-01

    Using a behavioral finance approach we study the impact of behavioral bias. We construct an artificial market consisting of fundamentalists and chartists to model the decision-making process of various agents. The agents differ in their strategies for evaluating stock prices, and exhibit differing memory lengths and confidence levels. When we increase the heterogeneity of the strategies used by the agents, in particular the memory lengths, we observe excess volatility and kurtosis, in agreement with real market fluctuations—indicating that agents in real-world financial markets exhibit widely differing memory lengths. We incorporate the behavioral traits of adaptive confidence and observe a positive correlation between average confidence and return rate, indicating that market sentiment is an important driver in price fluctuations. The introduction of market confidence increases price volatility, reflecting the negative effect of irrationality in market behavior. PMID:24421888

  2. 78 FR 56621 - Draft Waste Confidence Generic Environmental Impact Statement

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-13

    ... Impact Statement AGENCY: Nuclear Regulatory Commission. ACTION: Draft generic environmental impact... issuing for public comment the draft generic environmental impact statement (DGEIS), NUREG-2157, ``Waste Confidence Generic Environmental Impact Statement,'' that forms the regulatory basis for the...

  3. The Sense of Confidence during Probabilistic Learning: A Normative Account

    PubMed Central

    Meyniel, Florent; Schlunegger, Daniel; Dehaene, Stanislas

    2015-01-01

    Learning in a stochastic environment consists of estimating a model from a limited amount of noisy data, and is therefore inherently uncertain. However, many classical models reduce the learning process to the updating of parameter estimates and neglect the fact that learning is also frequently accompanied by a variable “feeling of knowing” or confidence. The characteristics and the origin of these subjective confidence estimates thus remain largely unknown. Here we investigate whether, during learning, humans not only infer a model of their environment, but also derive an accurate sense of confidence from their inferences. In our experiment, humans estimated the transition probabilities between two visual or auditory stimuli in a changing environment, and reported their mean estimate and their confidence in this report. To formalize the link between both kinds of estimate and assess their accuracy in comparison to a normative reference, we derive the optimal inference strategy for our task. Our results indicate that subjects accurately track the likelihood that their inferences are correct. Learning and estimating confidence in what has been learned appear to be two intimately related abilities, suggesting that they arise from a single inference process. We show that human performance matches several properties of the optimal probabilistic inference. In particular, subjective confidence is impacted by environmental uncertainty, both at the first level (uncertainty in stimulus occurrence given the inferred stochastic characteristics) and at the second level (uncertainty due to unexpected changes in these stochastic characteristics). Confidence also increases appropriately with the number of observations within stable periods. Our results support the idea that humans possess a quantitative sense of confidence in their inferences about abstract non-sensory parameters of the environment. This ability cannot be reduced to simple heuristics, it seems instead a core

  4. Improved reproducibility by assuring confidence in measurements in biomedical research.

    PubMed

    Plant, Anne L; Locascio, Laurie E; May, Willie E; Gallagher, Patrick D

    2014-09-01

    ‘Irreproducibility’ is symptomatic of a broader challenge in measurement in biomedical research. From the US National Institute of Standards and Technology (NIST) perspective of rigorous metrology, reproducibility is only one aspect of establishing confidence in measurements. Appropriate controls, reference materials, statistics and informatics are required for a robust measurement process. Research is required to establish these tools for biological measurements, which will lead to greater confidence in research results. PMID:25166868

  5. The self-assessment of confidence, by one vocational trainee

    PubMed Central

    Leonard, Colin

    1979-01-01

    A list of important topics in general practice was constructed and a trainee was asked to indicate his confidence about each topic on a one to five scale. Repeated use showed different confidence ratings by the same trainee, and an attempt was made to correlate factual knowledge by using a multiple choice questionnaire. Despite important limitations, which are described, this method may be useful in identifying suitable topics for teaching during the trainee year. PMID:541789

  6. How Much Confidence Can We Have in EU-SILC? Complex Sample Designs and the Standard Error of the Europe 2020 Poverty Indicators

    ERIC Educational Resources Information Center

    Goedeme, Tim

    2013-01-01

    If estimates are based on samples, they should be accompanied by appropriate standard errors and confidence intervals. This is true for scientific research in general, and is even more important if estimates are used to inform and evaluate policy measures such as those aimed at attaining the Europe 2020 poverty reduction target. In this article I…

  7. Practical Scheffe-type credibility intervals for variables of a groundwater model

    USGS Publications Warehouse

    Cooley, R.L.

    1999-01-01

    Simultaneous Scheffe-type credibility intervals (the Bayesian version of confidence intervals) for variables of a groundwater flow model calibrated using a Bayesian maximum a posteriori procedure were derived by Cooley [1993b]. It was assumed that variances reflecting the expected differences between observed and model-computed quantities used to calibrate the model are known, whereas they would often be unknown for an actual model. In this study the variances are regarded as unknown, and variance variability from observation to observation is approximated by grouping the data so that each group is characterized by a uniform variance. The credibility intervals are calculated from the posterior distribution, which was developed by considering each group variance to be a random variable about which nothing is known a priori, then eliminating it by integration. Numerical experiments using two test problems illustrate some characteristics of the credibility intervals. Nonlinearity of the statistical model greatly affected some of the credibility intervals, indicating that credibility intervals computed using the standard linear model approximation may often be inadequate to characterize uncertainty for actual field problems. The parameter characterizing the probability level for the credibility intervals was, however, accurately computed using a linear model approximation, as compared with values calculated using second-order and fully nonlinear formulations. This allows the credibility intervals to be computed very efficiently.

  8. Confidence-based somatic mutation evaluation and prioritization.

    PubMed

    Löwer, Martin; Renard, Bernhard Y; de Graaf, Jos; Wagner, Meike; Paret, Claudia; Kneip, Christoph; Türeci, Ozlem; Diken, Mustafa; Britten, Cedrik; Kreiter, Sebastian; Koslowski, Michael; Castle, John C; Sahin, Ugur

    2012-01-01

    Next generation sequencing (NGS) has enabled high throughput discovery of somatic mutations. Detection depends on experimental design, lab platforms, parameters and analysis algorithms. However, NGS-based somatic mutation detection is prone to erroneous calls, with reported validation rates near 54% and congruence between algorithms less than 50%. Here, we developed an algorithm to assign a single statistic, a false discovery rate (FDR), to each somatic mutation identified by NGS. This FDR confidence value accurately discriminates true mutations from erroneous calls. Using sequencing data generated from triplicate exome profiling of C57BL/6 mice and B16-F10 melanoma cells, we used the existing algorithms GATK, SAMtools and SomaticSNiPer to identify somatic mutations. For each identified mutation, our algorithm assigned an FDR. We selected 139 mutations for validation, including 50 somatic mutations assigned a low FDR (high confidence) and 44 mutations assigned a high FDR (low confidence). All of the high confidence somatic mutations validated (50 of 50), none of the 44 low confidence somatic mutations validated, and 15 of 45 mutations with an intermediate FDR validated. Furthermore, the assignment of a single FDR to individual mutations enables statistical comparisons of lab and computation methodologies, including ROC curves and AUC metrics. Using the HiSeq 2000, single end 50 nt reads from replicates generate the highest confidence somatic mutation call set. PMID:23028300
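
    The paper derives its own per-mutation FDR statistic; as a stand-in, the sketch below applies the generic Benjamini-Hochberg adjustment to hypothetical p-values and keeps only the high-confidence calls, which conveys the filtering idea without reproducing the authors' method:

```python
# Generic Benjamini-Hochberg FDR adjustment and filtering.  The paper
# assigns its own FDR statistic per mutation; this sketch only illustrates
# ranking candidate calls by an FDR-style confidence value and keeping the
# high-confidence ones.  The p-values are hypothetical.
import numpy as np

def bh_qvalues(pvalues):
    p = np.asarray(pvalues, dtype=float)
    n = p.size
    order = np.argsort(p)
    ranked = p[order] * n / np.arange(1, n + 1)
    # enforce monotonicity from the largest p-value downwards
    q_sorted = np.minimum.accumulate(ranked[::-1])[::-1]
    q = np.empty(n)
    q[order] = np.clip(q_sorted, 0.0, 1.0)
    return q

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.36]
q = bh_qvalues(pvals)
high_confidence = [i for i, qi in enumerate(q) if qi <= 0.05]
print(q.round(3), high_confidence)
```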

  9. Confidence-based Somatic Mutation Evaluation and Prioritization

    PubMed Central

    de Graaf, Jos; Wagner, Meike; Paret, Claudia; Kneip, Christoph; Türeci, Özlem; Diken, Mustafa; Britten, Cedrik; Kreiter, Sebastian; Koslowski, Michael; Castle, John C.; Sahin, Ugur

    2012-01-01

    Next generation sequencing (NGS) has enabled high throughput discovery of somatic mutations. Detection depends on experimental design, lab platforms, parameters and analysis algorithms. However, NGS-based somatic mutation detection is prone to erroneous calls, with reported validation rates near 54% and congruence between algorithms less than 50%. Here, we developed an algorithm to assign a single statistic, a false discovery rate (FDR), to each somatic mutation identified by NGS. This FDR confidence value accurately discriminates true mutations from erroneous calls. Using sequencing data generated from triplicate exome profiling of C57BL/6 mice and B16-F10 melanoma cells, we used the existing algorithms GATK, SAMtools and SomaticSNiPer to identify somatic mutations. For each identified mutation, our algorithm assigned an FDR. We selected 139 mutations for validation, including 50 somatic mutations assigned a low FDR (high confidence) and 44 mutations assigned a high FDR (low confidence). All of the high confidence somatic mutations validated (50 of 50), none of the 44 low confidence somatic mutations validated, and 15 of 45 mutations with an intermediate FDR validated. Furthermore, the assignment of a single FDR to individual mutations enables statistical comparisons of lab and computation methodologies, including ROC curves and AUC metrics. Using the HiSeq 2000, single end 50 nt reads from replicates generate the highest confidence somatic mutation call set. PMID:23028300

  10. The Development of Confidence Limits for Fatigue Strength Data

    SciTech Connect

    SUTHERLAND,HERBERT J.; VEERS,PAUL S.

    1999-11-09

    Over the past several years, extensive databases have been developed for the S-N behavior of various materials used in wind turbine blades, primarily fiberglass composites. These data are typically presented both in their raw form and curve fit to define their average properties. For design, confidence limits must be placed on these descriptions. In particular, most designs call for the 95/95 design values; namely, with a 95% level of confidence, the designer is assured that 95% of the material will meet or exceed the design value. For such material properties as the ultimate strength, the procedures for estimating its value at a particular confidence level are well defined if the measured values follow a normal or a log-normal distribution. Namely, based upon the number of sample points and their standard deviation, a commonly-found table may be used to determine the survival percentage at a particular confidence level with respect to its mean value. The same is true for fatigue data at a constant stress level (the number of cycles to failure N at stress level S_1). However, when the stress level is allowed to vary, as with a typical S-N fatigue curve, the procedures for determining confidence limits are not as well defined. This paper outlines techniques for determining confidence limits of fatigue data. Different approaches to estimating the 95/95 level are compared. Data from the MSU/DOE and the FACT fatigue databases are used to illustrate typical results.
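
    At a single stress level, the 95/95 value under a normal (or log-normal) assumption is the classical one-sided tolerance limit. A sketch using the exact noncentral-t tolerance factor, with illustrative fatigue lives rather than MSU/DOE or FACT data:

```python
# One-sided (95% content / 95% confidence) lower tolerance limit for
# normally distributed data, i.e. the classical "95/95" value at a single
# stress level.  Sample values are illustrative, not database data.
import numpy as np
from scipy import stats

def k_factor(n, content=0.95, confidence=0.95):
    # exact one-sided tolerance factor via the noncentral t distribution
    delta = stats.norm.ppf(content) * np.sqrt(n)
    return stats.nct.ppf(confidence, df=n - 1, nc=delta) / np.sqrt(n)

log_cycles = np.log10([1.2e6, 8.5e5, 2.1e6, 1.6e6, 9.7e5, 1.1e6, 1.8e6, 1.4e6])
n = log_cycles.size
lower_95_95 = log_cycles.mean() - k_factor(n) * log_cycles.std(ddof=1)
print(f"95/95 lower bound on log10(N): {lower_95_95:.3f}")
```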

  11. Interval velocity analysis using wave field continuation

    SciTech Connect

    Zhusheng, Z. )

    1992-01-01

    In this paper, the author proposes a new interval velocity inversion method which, based on wave field continuation theory and fuzzy decision theory, uses CMP seismic gathers to automatically estimate interval velocity and two-way travel time in a layered medium. The interval velocity calculated directly from wave field continuation is not fully consistent with that derived from VSP data; the former is usually higher than the latter. Three major factors that influence the accuracy of interval velocity from wave field continuation are corrected, so that the two kinds of interval velocity agree well. This method yields better interval velocities, handles weak reflection waves, and resists noise well; it is a feasible method.

  12. Capacitated max-Batching with Interval Graph Compatibilities

    NASA Astrophysics Data System (ADS)

    Nonner, Tim

    We consider the problem of partitioning interval graphs into cliques of bounded size. Each interval has a weight, and the weight of a clique is the maximum weight of any interval in the clique. This natural graph problem can be interpreted as a batch scheduling problem. Solving a long-standing open problem, we show NP-hardness, even if the bound on the clique sizes is constant. Moreover, we give a PTAS based on a novel dynamic programming technique for this case.

  13. A note on the path interval distance.

    PubMed

    Coons, Jane Ivy; Rusinko, Joseph

    2016-06-01

    The path interval distance accounts for global congruence between locally incongruent trees. We show that the path interval distance provides a lower bound for the nearest neighbor interchange distance. In contrast to the Robinson-Foulds distance, random pairs of trees are unlikely to be maximally distant from one another under the path interval distance. These features indicate that the path interval distance should play a role in phylogenomics where the comparison of trees on a fixed set of taxa is becoming increasingly important. PMID:27040521

  14. Parenting Confidence and Needs for Parents of Newborns in Taiwan

    PubMed Central

    Kuo, Ching-Pyng; Chuang, Hsiao-Ling; Lee, Shu-Hsin; Liao, Wen-Chun; Chang, Li-Yu; Lee, Meng-Chih

    2012-01-01

    Objective Parenting confidence with regard to caring for their infants is crucial for the healthy adaptation to parenthood and the development of positive parent-infant relationships. The postpartum period is a tremendous transitional time for parents, so their unique needs should be considered. This study explored parenting confidence and needs in parents when their newborns are discharged from hospital, and explored the best predictors of parenting confidence and needs. Methods A cross-sectional design with a questionnaire survey was used in this study. The questionnaire included three parts: Demographic, Parenting Needs and Parenting Confidence Questionnaire. We surveyed a convenience sample of 96 parents from a postnatal ward and a neonatal intermediate care unit of the medical central hospital in Taichung, Taiwan. Findings The mean age of the subjects was 32 years and 67.7% of the subjects’ education level was college or above. Approximately one half of the subjects were multiparous, had a vaginal delivery, and had a planned pregnancy. The mean gestational age and birth weight of the newborns were 37.7 weeks and 2902 g, respectively. Parents who had a planned pregnancy (t=2.1, P=0.04) or preterm infants (t=2.0, P=0.046) and those whose infants were delivered by cesarean section (t=2.2, P=0.03) had higher parenting needs. In addition, parents of low birth weight infants had higher parenting needs (r=-0.23, P=0.02). Regarding parenting confidence, multipara parents perceived higher confidence than primipara parents (t=2.9, P=0.005). Needs in psychosocial support were significantly correlated with parenting confidence (r=0.21, P<0.05). The stepwise multiple regression analysis showed that parity and needs in psychosocial support predicted 13.8% of the variance in parenting confidence. Conclusion The findings of this study help care providers to identify parents with low parenting confidence at an early postpartum stage. Health care teams should provide appropriate psychosocial

  15. Interval and Contour Processing in Autism

    ERIC Educational Resources Information Center

    Heaton, Pamela

    2005-01-01

    High functioning children with autism and age and intelligence matched controls participated in experiments testing perception of pitch intervals and musical contours. The finding from the interval study showed superior detection of pitch direction over small pitch distances in the autism group. On the test of contour discrimination no group…

  16. Optimal Approximation of Quadratic Interval Functions

    NASA Technical Reports Server (NTRS)

    Koshelev, Misha; Taillibert, Patrick

    1997-01-01

    Measurements are never absolutely accurate; as a result, after each measurement we do not get the exact value of the measured quantity, but at best an interval of its possible values. For dynamically changing quantities x, the additional problem is that we cannot measure them continuously; we can only measure them at certain discrete moments of time t_1, t_2, ... If we know that the value x(t_j) at the moment t_j of the last measurement was in the interval [x-(t_j), x+(t_j)], and if we know an upper bound D on the rate with which x changes, then, for any given moment of time t, we can conclude that x(t) belongs to the interval [x-(t_j) - D (t - t_j), x+(t_j) + D (t - t_j)]. This interval changes linearly with time and is therefore called a linear interval function. When we process these intervals, we get expressions that are quadratic and higher order with respect to time t. Such "quadratic" intervals are difficult to process, and it is therefore necessary to approximate them by linear ones. In this paper, we describe an algorithm that gives the optimal approximation of quadratic interval functions by linear ones.
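
    The linear interval function described above can be written down directly. A small sketch, assuming a measured interval at time t_j and a rate bound D:

```python
# Linear interval function from the description above: given the measured
# interval [x_lo, x_hi] at time t_j and a bound D on the rate of change,
# the value at time t lies in [x_lo - D*(t - t_j), x_hi + D*(t - t_j)].
def linear_interval(x_lo, x_hi, t_j, D, t):
    dt = abs(t - t_j)
    return (x_lo - D * dt, x_hi + D * dt)

# Example: measurement at t_j = 0 gave x in [9.8, 10.2], with |dx/dt| <= 0.5
print(linear_interval(9.8, 10.2, 0.0, 0.5, 2.0))   # -> (8.8, 11.2)
```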

  17. SINGLE-INTERVAL GAS PERMEABILITY ESTIMATION

    EPA Science Inventory

    Single-interval, steady-state gas permeability testing requires estimation of pressure at a screened interval, which in turn requires measurement of friction factors as a function of mass flow rate. Friction factors can be obtained by injecting air through a length of pipe...

  18. Market Confidence Predicts Stock Price: Beyond Supply and Demand.

    PubMed

    Sun, Xiao-Qian; Shen, Hua-Wei; Cheng, Xue-Qi; Zhang, Yuqing

    2016-01-01

    Stock price prediction is an important and challenging problem in stock market analysis. Existing prediction methods either exploit autocorrelation of stock price and its correlation with the supply and demand of stock, or explore predictive indicators exogenous to the stock market. In this paper, using transaction records of stocks with trader identifiers, we introduce an index to characterize market confidence, i.e., the ratio of the number of traders who are active in two successive trading days to the number of active traders in a certain trading day. Strong Granger causality is found between the index of market confidence and stock price. We further predict stock price by incorporating the index of market confidence into a neural network based on time series of stock price. Experimental results on 50 stocks in two Chinese Stock Exchanges demonstrate that the accuracy of stock price prediction is significantly improved by the inclusion of the market confidence index. This study sheds light on using cross-day trading behavior to characterize market confidence and to predict stock price. PMID:27391816
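
    The market-confidence index defined above is straightforward to compute from per-day sets of active traders. In the sketch below the trader IDs are hypothetical, and normalizing by the earlier of the two days is an assumption about the exact definition:

```python
# Market-confidence index as described above: traders active on two
# successive trading days, divided by the traders active on one of those
# days.  Which day normalizes the ratio is an assumption here; trader IDs
# are hypothetical.
def confidence_index(active_day1, active_day2):
    active_day1, active_day2 = set(active_day1), set(active_day2)
    if not active_day1:
        return 0.0
    return len(active_day1 & active_day2) / len(active_day1)

day1 = ["t01", "t02", "t03", "t04", "t05"]
day2 = ["t02", "t03", "t05", "t08"]
print(confidence_index(day1, day2))   # 3 of 5 traders returned -> 0.6
```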

  19. Emotor control: computations underlying bodily resource allocation, emotions, and confidence.

    PubMed

    Kepecs, Adam; Mensh, Brett D

    2015-12-01

    Emotional processes are central to behavior, yet their deeply subjective nature has been a challenge for neuroscientific study as well as for psychiatric diagnosis. Here we explore the relationships between subjective feelings and their underlying brain circuits from a computational perspective. We apply recent insights from systems neuroscience (approaching subjective behavior as the result of mental computations instantiated in the brain) to the study of emotions. We develop the hypothesis that emotions are the product of neural computations whose motor role is to reallocate bodily resources mostly gated by smooth muscles. This "emotor" control system is analogous to the more familiar motor control computations that coordinate skeletal muscle movements. To illustrate this framework, we review recent research on "confidence." Although familiar as a feeling, confidence is also an objective statistical quantity: an estimate of the probability that a hypothesis is correct. This model-based approach helped reveal the neural basis of decision confidence in mammals and provides a bridge to the subjective feeling of confidence in humans. These results have important implications for psychiatry, since disorders of confidence computations appear to contribute to a number of psychopathologies. More broadly, this computational approach to emotions resonates with the emerging view that psychiatric nosology may be best parameterized in terms of disorders of the cognitive computations underlying complex behavior. PMID:26869840

  20. Characteristics of successful opinion leaders in a bounded confidence model

    NASA Astrophysics Data System (ADS)

    Chen, Shuwei; Glass, David H.; McCartney, Mark

    2016-05-01

    This paper analyses the impact of competing opinion leaders on attracting followers in a social group based on a bounded confidence model in terms of four characteristics: reputation, stubbornness, appeal and extremeness. In the model, reputation differs among leaders and normal agents based on the weights assigned to them, stubbornness of leaders is reflected by their confidence towards normal agents, appeal of the leaders is represented by the confidence of followers towards them, and extremeness is captured by the opinion values of leaders. Simulations show that increasing reputation, stubbornness or extremeness makes it more difficult for the group to achieve consensus, but increasing the appeal will make it easier. The results demonstrate that successful opinion leaders should generally be less stubborn, have greater appeal and be less extreme in order to attract more followers in a competing environment. Furthermore, the number of followers can be very sensitive to small changes in these characteristics. On the other hand, reputation has a more complicated impact: higher reputation helps the leader to attract more followers when the group bound of confidence is high, but can hinder the leader from attracting followers when the group bound of confidence is low.
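
    A minimal bounded-confidence update (Hegselmann-Krause style) with one fully stubborn leader illustrates the kind of dynamics studied; the paper's specific reputation and appeal weights are not reproduced:

```python
# Minimal bounded-confidence (Hegselmann-Krause style) dynamics with one
# stubborn opinion leader.  The paper's reputation/appeal weighting scheme
# is not reproduced; this only illustrates the basic update rule.
import numpy as np

rng = np.random.default_rng(1)
opinions = rng.uniform(0, 1, size=50)
leader_opinion = 0.9        # "extremeness" of the leader (never updated)
eps = 0.25                  # bound of confidence of the normal agents

for _ in range(100):
    new = opinions.copy()
    for i, x in enumerate(opinions):
        # neighbours within the confidence bound; include the leader if close enough
        pool = list(opinions[np.abs(opinions - x) <= eps])
        if abs(leader_opinion - x) <= eps:
            pool.append(leader_opinion)
        new[i] = np.mean(pool)
    opinions = new          # the leader is fully stubborn: its opinion never moves

followers = np.sum(np.abs(opinions - leader_opinion) <= 0.05)
print(f"agents that ended up near the leader: {followers} of {opinions.size}")
```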

  1. Assessment of biomedical knowledge according to confidence criteria.

    PubMed

    Jilani, Ines; Grabar, Natalia; Meneton, Pierre; Jaulent, Marie-Christine

    2008-01-01

    The characterisation of biomedical knowledge, taking into account the degree of confidence expressed in texts, is an important issue in the biomedical domain. The authors of scientific texts use grammatical and lexical devices to qualify their assertions. We named these markers of qualification "confidence markers". We present here the results of our efforts to collect confidence markers from full texts and abstracts, to classify them on the basis of semantics, and to use them within our knowledge extraction system. We propose in this study an implementation of these confidence markers for functional annotation of the human gene Apolipoprotein (APOE), thought to be involved in Alzheimer's disease. As a result, we obtain, through the extraction system, triplets (G, F, PMID), in which G is the gene APOE, F is a function of the gene found in the texts, and PMID identifies the article from which this knowledge was extracted. Moreover, a 3D spatial arrangement of the triplets relative to each other is proposed, depending on their respective degrees of confidence. PMID:18487731

  2. Emotor control: computations underlying bodily resource allocation, emotions, and confidence

    PubMed Central

    Kepecs, Adam; Mensh, Brett D.

    2015-01-01

    Emotional processes are central to behavior, yet their deeply subjective nature has been a challenge for neuroscientific study as well as for psychiatric diagnosis. Here we explore the relationships between subjective feelings and their underlying brain circuits from a computational perspective. We apply recent insights from systems neuroscience—approaching subjective behavior as the result of mental computations instantiated in the brain—to the study of emotions. We develop the hypothesis that emotions are the product of neural computations whose motor role is to reallocate bodily resources mostly gated by smooth muscles. This “emotor” control system is analogous to the more familiar motor control computations that coordinate skeletal muscle movements. To illustrate this framework, we review recent research on “confidence.” Although familiar as a feeling, confidence is also an objective statistical quantity: an estimate of the probability that a hypothesis is correct. This model-based approach helped reveal the neural basis of decision confidence in mammals and provides a bridge to the subjective feeling of confidence in humans. These results have important implications for psychiatry, since disorders of confidence computations appear to contribute to a number of psychopathologies. More broadly, this computational approach to emotions resonates with the emerging view that psychiatric nosology may be best parameterized in terms of disorders of the cognitive computations underlying complex behavior. PMID:26869840

  3. New graduate nurses' experiences about lack of professional confidence.

    PubMed

    Ortiz, Jennifer

    2016-07-01

    Professional confidence is an essential trait for new graduate nurses to possess in order to provide quality patient care in today's complex hospital setting. However, many new graduates are entering the workforce without it and this remains to be explored. This study describes how new graduate nurses accounted for their lack of professional confidence upon entry into professional practice and how it developed during their first year of practice in the hospital setting. Two face-to-face, individual interviews of 12 participants were utilized to capture the lived experiences of new graduate nurses to gain an understanding of this phenomenon. After manual content analysis seven themes emerged: communication is huge, making mistakes, disconnect between school and practice, independence, relationship building, positive feedback is important, and gaining experience. The findings indicate that the development of professional confidence is a dynamic process that occurs throughout the first year of practice. New graduate nurses must experience both positive and negative circumstances in order to move toward the attainment of professional confidence. Knowing this, nurse educators in academia as well as in the hospital setting may better support the development of professional confidence both before and during the first year of practice. PMID:27428687

  4. Market Confidence Predicts Stock Price: Beyond Supply and Demand

    PubMed Central

    Sun, Xiao-Qian; Shen, Hua-Wei; Cheng, Xue-Qi; Zhang, Yuqing

    2016-01-01

    Stock price prediction is an important and challenging problem in stock market analysis. Existing prediction methods either exploit autocorrelation of stock price and its correlation with the supply and demand of stock, or explore predictive indicators exogenous to the stock market. In this paper, using transaction records of stocks with trader identifiers, we introduce an index to characterize market confidence, i.e., the ratio of the number of traders who are active in two successive trading days to the number of active traders in a certain trading day. Strong Granger causality is found between the index of market confidence and stock price. We further predict stock price by incorporating the index of market confidence into a neural network based on time series of stock price. Experimental results on 50 stocks in two Chinese Stock Exchanges demonstrate that the accuracy of stock price prediction is significantly improved by the inclusion of the market confidence index. This study sheds light on using cross-day trading behavior to characterize market confidence and to predict stock price. PMID:27391816

  5. The dose delivery effect of the different Beam ON interval in FFF SBRT: TrueBEAM

    NASA Astrophysics Data System (ADS)

    Tawonwong, T.; Suriyapee, S.; Oonsiri, S.; Sanghangthum, T.; Oonsiri, P.

    2016-03-01

    The purpose of this study is to determine the dose delivery effect of different Beam ON intervals in Flattening Filter Free Stereotactic Body Radiation Therapy (FFF-SBRT). Three 10MV-FFF SBRT plans (two half-rotating Rapid Arc arcs, 9 to 10 Gy/fraction) were selected and irradiated with three different intervals (100%, 50% and 25%) using the RPM gating system. Plan verification was performed with the ArcCHECK for gamma analysis and an ionization chamber for point dose measurement. The dose delivery time for each interval was recorded. For gamma analysis (2%/2 mm criteria), the average percent pass of all plans for the 100%, 50% and 25% intervals was 86.1±3.3%, 86.0±3.0% and 86.1±3.3%, respectively. For point dose measurement, the average ratios of each interval to the treatment plan were 1.012±0.015, 1.011±0.014 and 1.011±0.013 for the 100%, 50% and 25% intervals, respectively. The average dose delivery time increased from 74.3±5.0 seconds for the 100% interval to 154.3±12.6 and 347.9±20.3 seconds for the 50% and 25% intervals, respectively. The dose delivery quality was the same across the different Beam ON intervals in FFF-SBRT on the TrueBEAM. While the 100% interval represents the breath-hold treatment technique, the results indicate that free-breathing treatments using the RPM gating system can be delivered with confidence.

  6. Interval colorectal carcinoma: An unsolved debate.

    PubMed

    Benedict, Mark; Galvao Neto, Antonio; Zhang, Xuchen

    2015-12-01

    Colorectal carcinoma (CRC), as the third most common new cancer diagnosis, poses a significant health risk to the population. Interval CRCs are those that appear after a negative screening test or examination. The development of interval CRCs has been shown to be multifactorial: location of the exam (academic institution versus community hospital), experience of the endoscopist, quality of the procedure, age of the patient, flat versus polypoid neoplasia, genetics, hereditary gastrointestinal neoplasia, and most significantly missed or incompletely excised lesions. The rate of interval CRCs has decreased in the last decade, which has been ascribed to an increased understanding of interval disease and technological advances in the screening of high risk individuals. In this article, we aim to review the literature with regard to the multifactorial nature of interval CRCs and provide the most recent developments regarding this important gastrointestinal entity. PMID:26668498

  7. Interval colorectal carcinoma: An unsolved debate

    PubMed Central

    Benedict, Mark; Neto, Antonio Galvao; Zhang, Xuchen

    2015-01-01

    Colorectal carcinoma (CRC), as the third most common new cancer diagnosis, poses a significant health risk to the population. Interval CRCs are those that appear after a negative screening test or examination. The development of interval CRCs has been shown to be multifactorial: location of the exam (academic institution versus community hospital), experience of the endoscopist, quality of the procedure, age of the patient, flat versus polypoid neoplasia, genetics, hereditary gastrointestinal neoplasia, and most significantly missed or incompletely excised lesions. The rate of interval CRCs has decreased in the last decade, which has been ascribed to an increased understanding of interval disease and technological advances in the screening of high risk individuals. In this article, we aim to review the literature with regard to the multifactorial nature of interval CRCs and provide the most recent developments regarding this important gastrointestinal entity. PMID:26668498

  8. Precision Interval Estimation of the Response Surface by Means of an Integrated Algorithm of Neural Network and Linear Regression

    NASA Technical Reports Server (NTRS)

    Lo, Ching F.

    1999-01-01

    The integration of Radial Basis Function Networks and Back Propagation Neural Networks with Multiple Linear Regression has been accomplished to map nonlinear response surfaces over a wide range of independent variables in the process of the Modern Design of Experiments. The integrated method is capable of estimating precision intervals, including confidence and prediction intervals. The power of the innovative method has been demonstrated by applying it to a set of wind tunnel test data to construct a response surface and estimate precision intervals.
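
    The two kinds of precision interval mentioned above (a confidence interval for the mean response and a prediction interval for a new observation) can be illustrated on a plain linear fit; the neural-network part of the integrated method is not reproduced, and the data are illustrative:

```python
# Confidence and prediction intervals for a simple linear fit, i.e. the two
# kinds of "precision interval" mentioned above.  Data are illustrative.
import numpy as np
from scipy import stats

x = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
y = np.array([1.1, 1.9, 2.4, 3.2, 3.4, 4.1, 4.4, 5.2])

X = np.column_stack([np.ones_like(x), x])          # design matrix
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
n, p = X.shape
s2 = resid @ resid / (n - p)                       # residual variance
XtX_inv = np.linalg.inv(X.T @ X)
t = stats.t.ppf(0.975, df=n - p)

x0 = np.array([1.0, 2.2])                          # new point (intercept term, x = 2.2)
leverage = x0 @ XtX_inv @ x0
y0 = x0 @ beta
ci = t * np.sqrt(s2 * leverage)                    # CI half-width for the mean response
pi = t * np.sqrt(s2 * (1.0 + leverage))            # PI half-width for a new observation
print(f"fit at x=2.2: {y0:.2f}, 95% CI +/- {ci:.2f}, 95% PI +/- {pi:.2f}")
```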

  9. Confidence region estimation techniques for nonlinear regression :three case studies.

    SciTech Connect

    Swiler, Laura Painton (Sandia National Laboratories, Albuquerque, NM); Sullivan, Sean P. (University of Texas, Austin, TX); Stucky-Mack, Nicholas J. (Harvard University, Cambridge, MA); Roberts, Randall Mark; Vugrin, Kay White

    2005-10-01

    This work focuses on different methods to generate confidence regions for nonlinear parameter identification problems. Three methods for confidence region estimation are considered: a linear approximation method, an F-test method, and a Log-Likelihood method. Each of these methods is applied to three case studies. One case study is a problem with synthetic data, and the other two case studies identify hydraulic parameters in groundwater flow problems based on experimental well-test results. The confidence regions for each case study are analyzed and compared. Although the F-test and Log-Likelihood methods result in similar regions, there are differences between these regions and the regions generated by the linear approximation method for nonlinear problems. The differing results, capabilities, and drawbacks of all three methods are discussed.
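
    The F-test method mentioned above defines the region as the parameter values whose residual sum of squares stays below a threshold tied to the F distribution. A sketch of that standard formulation on a toy exponential-decay fit, not the case studies themselves:

```python
# F-test style confidence region for a nonlinear least-squares fit: keep the
# parameter values theta whose residual sum of squares satisfies
#   SSR(theta) <= SSR(theta_hat) * (1 + p/(n-p) * F_{1-alpha}(p, n-p)).
# A sketch of the standard formulation on synthetic data, not the paper's
# groundwater case studies.
import numpy as np
from scipy import optimize, stats

def model(theta, t):
    a, k = theta
    return a * np.exp(-k * t)

t_obs = np.linspace(0, 5, 12)
rng = np.random.default_rng(3)
y_obs = model((2.0, 0.7), t_obs) + rng.normal(0, 0.05, t_obs.size)

ssr = lambda theta: np.sum((y_obs - model(theta, t_obs)) ** 2)
fit = optimize.minimize(ssr, x0=[1.0, 1.0])
n, p, alpha = t_obs.size, 2, 0.05
threshold = fit.fun * (1 + p / (n - p) * stats.f.ppf(1 - alpha, p, n - p))

# evaluate region membership on a grid around the estimate
a_grid = np.linspace(fit.x[0] - 0.3, fit.x[0] + 0.3, 61)
k_grid = np.linspace(fit.x[1] - 0.3, fit.x[1] + 0.3, 61)
in_region = [(a, k) for a in a_grid for k in k_grid if ssr((a, k)) <= threshold]
print(f"grid points inside the 95% confidence region: {len(in_region)}")
```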

  10. Does mood influence the realism of confidence judgments?

    PubMed

    Allwood, Carl Martin; Granhag, Pär Anders; Jonsson, Anna-Carin

    2002-07-01

    Previous research has shown that mood affects cognition, but the extent to which mood affects metacognitive judgments is a relatively overlooked issue. In the current study, we investigated how mood influences the degree of realism in participants' confidence judgments (based on an episodic memory task). Using music and film in combination, we successfully induced an elated mood in half of the participants, but failed to induce a sad mood in the other half. In line with previous research, the participants in both conditions were overconfident in their judgments. However, and contrary to our prediction, our data indicated that there was no difference in the realism of the confidence between the conditions. When relating this result to previous research, our conclusion is that there is no, or very little, influence of mood of moderate intensity on the realism of confidence judgments. PMID:12184480

  11. Confidence as a Common Currency between Vision and Audition

    PubMed Central

    de Gardelle, Vincent; Le Corre, François; Mamassian, Pascal

    2016-01-01

    The idea of a common currency underlying our choice behaviour has played an important role in sciences of behaviour, from neurobiology to psychology and economics. However, while it has been mainly investigated in terms of values, with a common scale on which goods would be evaluated and compared, the question of a common scale for subjective probabilities, and confidence in particular, has received little empirical investigation so far. The present study extends previous work addressing this question, by showing that confidence can be compared across visual and auditory decisions, with the same precision as for the comparison of two trials within the same task. We discuss the possibility that confidence could serve as a common currency when describing our choices to ourselves and to others. PMID:26808061

  12. Assessing recognition memory using confidence ratings and response times.

    PubMed

    Weidemann, Christoph T; Kahana, Michael J

    2016-04-01

    Classification of stimuli into categories (such as 'old' and 'new' in tests of recognition memory or 'present' versus 'absent' in signal detection tasks) requires the mapping of internal signals to discrete responses. Introspective judgements about a given choice response are regularly employed in research, legal and clinical settings in an effort to measure the signal that is thought to be the basis of the classification decision. Correlations between introspective judgements and task performance suggest that such ratings often do convey information about internal states that are relevant for a given task, but well-known limitations of introspection call the fidelity of this information into question. We investigated to what extent response times can reveal information usually assessed with explicit confidence ratings. We quantitatively compared response times to confidence ratings in their ability to qualify recognition memory decisions and found convergent results suggesting that much of the information from confidence ratings can be obtained from response times. PMID:27152209

  13. Are effect sizes and confidence levels problems for or solutions to the null hypothesis test?

    PubMed

    Riopelle, A J

    2000-04-01

    Some have proposed that the null hypothesis significance test, as usually conducted using the t test of the difference between means, is an impediment to progress in psychology. To improve its prospects, using Neyman-Pearson confidence intervals and Cohen's standardized effect sizes, d, is recommended. The purpose of these approaches is to enable us to understand what can appropriately be said about the distances between the means and their reliability. Others have written extensively that these recommended strategies are highly interrelated and use identical information. This essay was written to remind us that the t test, based on the sample--not the true--standard deviation, does not apply solely to distance between means. The t test pertains to a much more ambiguous specification: the difference between samples, including sampling variations of the standard deviation. PMID:10843262
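
    For concreteness, a small sketch of the quantities under discussion, computed on synthetic data: the two-sample t statistic, the confidence interval for the difference between means, and Cohen's standardized effect size d. A confidence interval for d itself requires noncentral-t methods that are not shown here.

```python
# Two-sample t test, 95% CI for the mean difference, and Cohen's d (pooled SD).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
g1 = rng.normal(100, 15, 40)
g2 = rng.normal(92, 15, 40)

t_stat, p_val = stats.ttest_ind(g1, g2)
n1, n2 = g1.size, g2.size
sp = np.sqrt(((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2))
diff = g1.mean() - g2.mean()
d = diff / sp                                              # Cohen's d
half = stats.t.ppf(0.975, n1 + n2 - 2) * sp * np.sqrt(1 / n1 + 1 / n2)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}, d = {d:.2f}")
print(f"95% CI for mean difference: [{diff - half:.2f}, {diff + half:.2f}]")
```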

  14. Variance and bias confidence criteria for ERA modal parameter identification. [Eigensystem Realization Algorithm

    NASA Technical Reports Server (NTRS)

    Longman, Richard W.; Bergmann, Martin; Juang, Jer-Nan

    1988-01-01

    For the ERA system identification algorithm, perturbation methods are used to develop expressions for variance and bias of the identified modal parameters. Based on the statistics of the measurement noise, the variance results serve as confidence criteria by indicating how likely the true parameters are to lie within any chosen interval about their identified values. This replaces the use of expensive and time-consuming Monte Carlo computer runs to obtain similar information. The bias estimates help guide the ERA user in his choice of which data points to use and how much data to use in order to obtain the best results, performing the trade-off between the bias and scatter. Also, when the uncertainty in the bias is sufficiently small, the bias information can be used to correct the ERA results. In addition, expressions for the variance and bias of the singular values serve as tools to help the ERA user decide the proper modal order.

  15. Golfers have better balance control and confidence than healthy controls.

    PubMed

    Gao, Kelly L; Hui-Chan, Christina W Y; Tsang, William W N

    2011-11-01

    In a well-executed golf swing, golfers must maintain good balance and precise control of posture. Golfing also requires prolonged walking over uneven ground such as a hilly course. Therefore, repeated golf practice may enhance balance control and confidence in golfers. The objective was to investigate whether older golfers have better balance control and confidence than non-golfing older, healthy adults. This cross-sectional study was conducted at a university-based rehabilitation center. Eleven golfers and 12 control subjects (all male; mean age: 66.2 ± 6.8 and 71.3 ± 6.6 years, respectively) were recruited. Two balance control tests were administered: (1) the functional reach test, which measured subjects' maximum forward reach distance in standing; and (2) the sensory organization test (SOT), which examined subjects' abilities to use somatosensory, visual, and vestibular inputs to control body sway during stance. The modified Activities-specific Balance Confidence (ABC) scale determined subjects' balance confidence in daily activities. The golfers were found to achieve significantly longer distances in the functional reach test than controls. They manifested significantly better balance than controls in the visual ratio and vestibular ratio, but not the somatosensory ratio, of the SOT. The golfers also reported significantly higher balance confidence score ratios. Furthermore, older adults' modified ABC score ratios showed positive correlations with functional reach, visual and vestibular ratios, but not with the somatosensory ratio. Golfing is an activity which may enhance both the physical and psychological aspects of balance control. Significant correlations between these measures reveal the importance of balance control under reduced or conflicting sensory conditions to older adults' balance confidence in their daily activities. Since cause-and-effect could not be established in the present cross-sectional study, further study with a prospective intervention design is warranted. PMID

  16. Sparse Multidimensional Patient Modeling using Auxiliary Confidence Labels

    PubMed Central

    Heim, Eric; Hauskrecht, Milos

    2016-01-01

    In this work, we focus on the problem of learning a classification model that performs inference on patient Electronic Health Records (EHRs). Often, a large amount of costly expert supervision is required to learn such a model. To reduce this cost, we obtain confidence labels that indicate how sure an expert is in the class labels she provides. If meaningful confidence information can be incorporated into a learning method, fewer patient instances may need to be labeled to learn an accurate model. In addition, while accuracy of predictions is important for any inference model, a model of patients must be interpretable so that clinicians can understand how the model is making decisions. To these ends, we develop a novel metric learning method called Confidence bAsed MEtric Learning (CAMEL) that supports inclusion of confidence labels, but also emphasizes interpretability in three ways. First, our method induces sparsity, thus producing simple models that use only a few features from patient EHRs. Second, CAMEL naturally produces confidence scores that can be taken into consideration when clinicians make treatment decisions. Third, the metrics learned by CAMEL induce multidimensional spaces where each dimension represents a different “factor” that clinicians can use to assess patients. In our experimental evaluation, we show on a real-world clinical data set that our CAMEL methods are able to learn models that are as accurate as, or more accurate than, other methods that use the same supervision. Furthermore, we show that when CAMEL uses confidence scores it is able to learn models as accurate as, or more accurate than, the others we tested while using only 10% of the training instances. Finally, we perform qualitative assessments on the metrics learned by CAMEL and show that they identify and clearly articulate important factors in how the model performs inference. PMID:26949568

  17. Calibrated Peer Review Essays Increase Confidence in Self-assessment

    NASA Astrophysics Data System (ADS)

    Likkel, Lauren

    2006-12-01

    We studied the effect of the web-based tool “Calibrated Peer Review”™ on student confidence in their ability to recognize the quality of their own work. CPR can be used in large-enrollment classes to allow a controlled peer review of moderate-length student essays. We expected that teaching students how to grade an essay and having them grade their own work would increase confidence in assessing the quality of their own essays, and the results support this. Three introductory astronomy classes participated in this study during 2005 at the University of Wisconsin Eau Claire, a four-year university. Four essays were assigned in both the experimental class (104 students) and the control classes (34 students). In the control classes, the student was given a score on the essay and perhaps a few written comments. The experimental group used the CPR tool, in which they were taught how to evaluate the essay, evaluate assignments written by peers, and evaluate their own essay. Three survey questions were used to characterize the change in confidence level in ability to assess their own work. Results from a survey at the end of the semester were compared to results from the same survey administered at the beginning of the semester. A measurable effect on the average confidence level of the experimental class was found. By the final survey, significantly more of the CPR students had changed to a more positive statement in indicating their confidence in evaluating their own written work. There was no effect seen on the classes that wrote essays but did not use the CPR system, showing that this result is due to using the CPR system for the essays, not just writing essays or becoming more confident during the course of the semester.

  18. Extended score interval in the assessment of basic surgical skills

    PubMed Central

    Acosta, Stefan; Sevonius, Dan; Beckman, Anders

    2015-01-01

    Introduction The Basic Surgical Skills course uses an assessment score interval of 0–3. An extended score interval, 1–6, was proposed by the Swedish steering committee of the course. The aim of this study was to analyze the trainee scores in the current 0–3 scored version compared to a proposed 1–6 scored version. Methods Sixteen participants, seven females and nine males, were evaluated in the current and proposed assessment forms by instructors, observers, and learners themselves during the first and second day. In each assessment form, 17 tasks were assessed. The inter-rater reliability between the current and the proposed score sheets was evaluated with intraclass correlation (ICC) with 95% confidence intervals (CI). Results The distribution of scores for ‘knot tying’ at the last time point and ‘bowel anastomosis side to side’ given by the instructors in the current assessment form showed that the highest score was given in 31% and 62% of cases, respectively. No ceiling effects were found in the proposed assessment form. The overall ICC between the current and proposed score sheets after assessment by the instructors increased from 0.38 (95% CI 0.77–0.78) on Day 1 to 0.83 (95% CI 0.51–0.94) on Day 2. Discussion A clear ceiling effect of scores was demonstrated in the current assessment form, questioning its validity. The proposed score sheet provides more accurate scores and seems to be a better feedback instrument for learning technical surgical skills in the Basic Surgical Skills course. PMID:25636607
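
    The sketch below illustrates one way such a reliability analysis can be set up; it assumes an ICC(2,1) form (two-way random effects, absolute agreement, single measures) and a bootstrap confidence interval, neither of which is stated in the abstract, and it uses synthetic scores rather than the course data.

```python
# ICC(2,1) from two-way ANOVA mean squares, with a percentile-bootstrap 95% CI
# obtained by resampling trainees (rows). Scores are synthetic.
import numpy as np

def icc_2_1(scores):                       # scores: (n subjects) x (k raters/forms)
    n, k = scores.shape
    grand = scores.mean()
    msr = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # rows
    msc = n * ((scores.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # columns
    sse = ((scores - scores.mean(axis=1, keepdims=True)
                   - scores.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(3)
true = rng.normal(4, 1, 16)                                   # 16 hypothetical trainees
scores = np.column_stack([true + rng.normal(0, 0.5, 16) for _ in range(2)])

boot = [icc_2_1(scores[rng.integers(0, 16, 16)]) for _ in range(2000)]
print("ICC(2,1):", round(icc_2_1(scores), 2),
      "95% CI:", np.round(np.percentile(boot, [2.5, 97.5]), 2))
```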

  19. The effect of terrorism on public confidence : an exploratory study.

    SciTech Connect

    Berry, M. S.; Baldwin, T. E.; Samsa, M. E.; Ramaprasad, A.; Decision and Information Sciences

    2008-10-31

    A primary goal of terrorism is to instill a sense of fear and vulnerability in a population and to erode confidence in government and law enforcement agencies to protect citizens against future attacks. In recognition of its importance, the Department of Homeland Security includes public confidence as one of the metrics it uses to assess the consequences of terrorist attacks. Hence, several factors--including a detailed understanding of the variations in public confidence among individuals, by type of terrorist event, and as a function of time--are critical to developing this metric. In this exploratory study, a questionnaire was designed, tested, and administered to small groups of individuals to measure public confidence in the ability of federal, state, and local governments and their public safety agencies to prevent acts of terrorism. Data were collected from the groups before and after they watched mock television news broadcasts portraying a smallpox attack, a series of suicide bomber attacks, a refinery bombing, and cyber intrusions on financial institutions that resulted in identity theft and financial losses. Our findings include the following: (a) the subjects can be classified into at least three distinct groups on the basis of their baseline outlook--optimistic, pessimistic, and unaffected; (b) the subjects make discriminations in their interpretations of an event on the basis of the nature of a terrorist attack, the time horizon, and its impact; (c) the recovery of confidence after a terrorist event has an incubation period and typically does not return to its initial level in the long-term; (d) the patterns of recovery of confidence differ between the optimists and the pessimists; and (e) individuals are able to associate a monetary value with a loss or gain in confidence, and the value associated with a loss is greater than the value associated with a gain. These findings illustrate the importance the public places in their confidence in government

  20. Physiology and its Importance for Reference Intervals

    PubMed Central

    Sikaris, Kenneth A

    2014-01-01

    Reference intervals are ideally defined on apparently healthy individuals and should be distinguished from clinical decision limits that are derived from known diseased patients. Knowledge of physiological changes is a prerequisite for understanding and developing reference intervals. Reference intervals may differ for various subpopulations because of differences in their physiology, most obviously between men and women, but also in childhood, pregnancy and the elderly. Changes in laboratory measurements may be due to various physiological factors starting at birth, including weaning, the active toddler, immunological learning, puberty, pregnancy, menopause and ageing. Partitioning of reference intervals is required when there are significant physiological changes that need to be recognised. It is important that laboratorians are aware of these changes; otherwise, reference intervals that attempt to cover a widened inter-individual variability may lose their usefulness. It is virtually impossible for any laboratory to directly develop reference intervals for each of the physiological changes that are currently known; however, indirect techniques can be used to develop or validate reference intervals in some difficult situations, such as those for children. Physiology describes our life’s journey, and it is only when we are familiar with that journey that we can appreciate a pathological departure. PMID:24659833

  1. Assessing Confidence in Pliocene Sea Surface Temperatures to Evaluate Predictive Models

    NASA Technical Reports Server (NTRS)

    Dowsett, Harry J.; Robinson, Marci M.; Haywood, Alan M.; Hill, Daniel J.; Dolan, Aisling. M.; Chan, Wing-Le; Abe-Ouchi, Ayako; Chandler, Mark A.; Rosenbloom, Nan A.; Otto-Bliesner, Bette L.; Bragg, Fran J.; Lunt, Daniel J.; Stoll, Danielle K.; Foley, Kevin M.; Riesselman, Christina

    2012-01-01

    In light of mounting empirical evidence that planetary warming is well underway, the climate research community looks to palaeoclimate research for a ground-truthing measure with which to test the accuracy of future climate simulations. Model experiments that attempt to simulate climates of the past serve to identify both similarities and differences between two climate states and, when compared with simulations run by other models and with geological data, to identify model-specific biases. Uncertainties associated with both the data and the models must be considered in such an exercise. The most recent period of sustained global warmth similar to what is projected for the near future occurred about 3.3–3.0 million years ago, during the Pliocene epoch. Here, we present Pliocene sea surface temperature data, newly characterized in terms of level of confidence, along with initial experimental results from four climate models. We conclude that, in terms of sea surface temperature, models are in good agreement with estimates of Pliocene sea surface temperature in most regions except the North Atlantic. Our analysis indicates that the discrepancy between the Pliocene proxy data and model simulations in the mid-latitudes of the North Atlantic, where models underestimate warming shown by our highest-confidence data, may provide a new perspective and insight into the predictive abilities of these models in simulating a past warm interval in Earth history. This is important because the Pliocene has a number of parallels to present predictions of late twenty-first century climate.

  2. Assessing confidence in Pliocene sea surface temperatures to evaluate predictive models

    USGS Publications Warehouse

    Dowsett, Harry J.; Robinson, Marci M.; Haywood, Alan M.; Hill, Daniel J.; Dolan, Aisling M.; Stoll, Danielle K.; Chan, Wing-Le; Abe-Ouchi, Ayako; Chandler, Mark A.; Rosenbloom, Nan A.; Otto-Bliesner, Bette L.; Bragg, Fran J.; Lunt, Daniel J.; Foley, Kevin M.; Riesselman, Christina R.

    2012-01-01

    In light of mounting empirical evidence that planetary warming is well underway, the climate research community looks to palaeoclimate research for a ground-truthing measure with which to test the accuracy of future climate simulations. Model experiments that attempt to simulate climates of the past serve to identify both similarities and differences between two climate states and, when compared with simulations run by other models and with geological data, to identify model-specific biases. Uncertainties associated with both the data and the models must be considered in such an exercise. The most recent period of sustained global warmth similar to what is projected for the near future occurred about 3.3–3.0 million years ago, during the Pliocene epoch. Here, we present Pliocene sea surface temperature data, newly characterized in terms of level of confidence, along with initial experimental results from four climate models. We conclude that, in terms of sea surface temperature, models are in good agreement with estimates of Pliocene sea surface temperature in most regions except the North Atlantic. Our analysis indicates that the discrepancy between the Pliocene proxy data and model simulations in the mid-latitudes of the North Atlantic, where models underestimate warming shown by our highest-confidence data, may provide a new perspective and insight into the predictive abilities of these models in simulating a past warm interval in Earth history. This is important because the Pliocene has a number of parallels to present predictions of late twenty-first century climate.

  3. Importance of QT interval in clinical practice.

    PubMed

    Ambhore, Anand; Teo, Swee-Guan; Bin Omar, Abdul Razakjr; Poh, Kian-Keong

    2014-12-01

    Long QT interval is an important finding that is often missed by electrocardiogram interpreters. Long QT syndrome (inherited and acquired) is a potentially lethal cardiac channelopathy that is frequently mistaken for epilepsy. We present a case of long QT syndrome with multiple cardiac arrests presenting as syncope and seizures. The long QTc interval was aggravated by hypomagnesaemia and drugs, including clarithromycin and levofloxacin. Multiple drugs can cause prolongation of the QT interval, and all physicians should bear this in mind when prescribing these drugs. PMID:25630313

  4. Short Interval Leaf Movements of Cotton 12

    PubMed Central

    Miller, Charles S.

    1975-01-01

    Gossypium hirsutum L. cv. Lankart plants exhibited three different types of independent short interval leaf movements which were superimposed on the circadian movements. The different types were termed SIRV (short interval rhythmical vertical), SIHM (short interval horizontal movements), and SHAKE (short stroked SIRV). The 36-minute period SIRV movements occurred at higher moisture levels. The 176-minute period SIHM occurred at lower moisture levels and ceased as the stress increased. The SHAKE movements were initiated with further stresses. The SLEEP (circadian, diurnal) movements ceased with further stress. The last to cease just prior to permanent wilting were the SHAKE movements. PMID:16659123

  5. Signatures of a Statistical Computation in the Human Sense of Confidence.

    PubMed

    Sanders, Joshua I; Hangya, Balázs; Kepecs, Adam

    2016-05-01

    Human confidence judgments are thought to originate from metacognitive processes that provide a subjective assessment about one's beliefs. Alternatively, confidence is framed in mathematics as an objective statistical quantity: the probability that a chosen hypothesis is correct. Despite similar terminology, it remains unclear whether the subjective feeling of confidence is related to the objective, statistical computation of confidence. To address this, we collected confidence reports from humans performing perceptual and knowledge-based psychometric decision tasks. We observed two counterintuitive patterns relating confidence to choice and evidence: apparent overconfidence in choices based on uninformative evidence, and decreasing confidence with increasing evidence strength for erroneous choices. We show that these patterns lawfully arise from statistical confidence, and therefore occur even for perfectly calibrated confidence measures. Furthermore, statistical confidence quantitatively accounted for human confidence in our tasks without necessitating heuristic operations. Accordingly, we suggest that the human feeling of confidence originates from a mental computation of statistical confidence. PMID:27151640
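
    A toy simulation, not the authors' model, of how the two counterintuitive signatures can arise when confidence is a statistical quantity: an observer sees evidence e drawn from N(+/-mu, 1) with varying strength mu, but reports the posterior probability of being correct computed as if mu had a fixed assumed value.

```python
# Statistical-confidence toy model: confidence = P(chosen hypothesis correct | e),
# evaluated with an assumed evidence strength while the true strength varies.
import numpy as np

rng = np.random.default_rng(4)
mu_assumed = 1.0
for mu in [0.0, 0.5, 1.0, 2.0]:                          # true evidence strength
    s = rng.choice([-1, 1], size=200_000)                # true hypothesis
    e = rng.normal(mu * s, 1.0)
    choice = np.where(e >= 0, 1, -1)
    conf = 1.0 / (1.0 + np.exp(-2.0 * mu_assumed * np.abs(e)))
    err = choice != s
    print(f"mu={mu:.1f}  accuracy={np.mean(~err):.2f}  "
          f"mean conf={conf.mean():.2f}  conf on errors={conf[err].mean():.2f}")
# mu=0.0: accuracy is at chance yet mean confidence stays well above 0.5
# (apparent overconfidence for uninformative evidence); as mu grows, confidence
# on the remaining error trials falls, since errors arise only from weak evidence.
```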

  6. Building Academic Confidence in English Language Learners in Elementary School

    ERIC Educational Resources Information Center

    Vazquez, Alejandra

    2014-01-01

    Non-English-speaking students lack the confidence and preparation to be actively engaged verbally in the classroom. Students may frequently display hesitation in learning to speak English, and may also lack a teacher's guidance in becoming proficient English speakers. The purpose of this research is to examine how teachers build academic…

  7. Technology in Teaching: Just How Confident Are Preservice Teachers?

    ERIC Educational Resources Information Center

    Molebash, Philip; Milman, Natalie

    This paper examines the effectiveness of increasing the confidence of preservice teachers in using technology for personal and instructional purposes as a result of participating in an introductory educational technology course offered at the University of Virginia's Curry School of Education. Course participants attended sections designed for…

  8. Confidence bands for measured economically optimal nitrogen rates

    Technology Transfer Automated Retrieval System (TEKTRAN)

    While numerous researchers have computed economically optimal N rate (EONR) values from measured yield – N rate data, nearly all have neglected to compute or estimate the statistical reliability of these EONR values. In this study, a simple method for computing EONR and its confidence bands is descr...

  9. Gender Difference of Confidence in Using Technology for Learning

    ERIC Educational Resources Information Center

    Yau, Hon Keung; Cheng, Alison Lai Fong

    2012-01-01

    Past studies have found male students to have more confidence in using technology for learning than do female students. Males tend to have more positive attitudes about the use of technology for learning than do females. According to the Women's Foundation (2006), few studies examined gender relevant research in Hong Kong. It also appears that no…

  10. Knowledge and Confidence of Speech-Language Pathologists Regarding Autism

    ERIC Educational Resources Information Center

    Ray, Julie M.

    2010-01-01

    The increased prevalence rate of autism has immense implications for speech language pathologists (SLPs) who are directly involved in the education and service delivery for students with autism. However, few studies have documented the effectiveness of the knowledge and confidence of SLPs regarding autism. The purpose of this study was to measure…

  11. Disconnections between Teacher Expectations and Student Confidence in Bioethics

    ERIC Educational Resources Information Center

    Hanegan, Nikki L.; Price, Laura; Peterson, Jeremy

    2008-01-01

    This study examines how student practice of scientific argumentation using socioscientific bioethics issues affects both teacher expectations of students' general performance and student confidence in their own work. When teachers use bioethical issues in the classroom students can gain not only biology content knowledge but also important…

  12. Biased but in Doubt: Conflict and Decision Confidence

    PubMed Central

    De Neys, Wim; Cromheeke, Sofie; Osman, Magda

    2011-01-01

    Human reasoning is often biased by intuitive heuristics. A central question is whether the bias results from a failure to detect that the intuitions conflict with traditional normative considerations or from a failure to discard the tempting intuitions. The present study addressed this unresolved debate by using people's decision confidence as a nonverbal index of conflict detection. Participants were asked to indicate how confident they were after solving classic base-rate (Experiment 1) and conjunction fallacy (Experiment 2) problems in which a cued intuitive response could be inconsistent or consistent with the traditional correct response. Results indicated that reasoners showed a clear confidence decrease when they gave an intuitive response that conflicted with the normative response. Contrary to popular belief, this establishes that people seem to acknowledge that their intuitive answers are not fully warranted. Experiment 3 established that younger reasoners did not yet show the confidence decrease, which points to the role of improved bias awareness in our reasoning development. Implications for the long standing debate on human rationality are discussed. PMID:21283574

  13. Analyzing Student Confidence in Classroom Voting with Multiple Choice Questions

    ERIC Educational Resources Information Center

    Stewart, Ann; Storm, Christopher; VonEpps, Lahna

    2013-01-01

    The purpose of this paper is to present results of a recent study in which students voted on multiple choice questions in mathematics courses of varying levels. Students used clickers to select the best answer among the choices given; in addition, they were also asked whether they were confident in their answer. In this paper we analyze data…

  14. Panel Discussion and the Development of Students' Self Confidence

    ERIC Educational Resources Information Center

    Anwar, Khoirul

    2016-01-01

    This study analyzes the use of panel discussion in developing students' self-confidence in learning the content subject of qualitative research concepts. The study uses a mixed-methods approach in which a questionnaire and interviews were conducted in the sixth-semester qualitative research class consisting of twenty students, especially…

  15. Expanding Horizons--Into the Future with Confidence!

    ERIC Educational Resources Information Center

    Volk, Valerie

    2006-01-01

    Gifted students often show a deep interest in and profound concern for the complex issues of society. Given the leadership potential of these students and their likely responsibility for solving future social problems, they need to develop this awareness and also a sense of confidence in dealing with future issues. The Future Problem Solving…

  16. Using Online EFL Interaction to Increase Confidence, Motivation, and Ability

    ERIC Educational Resources Information Center

    Wu, Wen-chi Vivian; Yen, Ling Ling; Marek, Michael

    2011-01-01

    Teachers of English as a Foreign Language (EFL) in Taiwan often use an outdated lecture-memorization methodology resulting in low motivation, confidence, and ability on the part of students. Innovative educators are exploring use of technology, such as videoconferences with native speakers, to enrich the classroom; however few guidelines have been…

  17. Family Background, Self-Confidence and Economic Outcomes

    ERIC Educational Resources Information Center

    Filippin, Antonio; Paccagnella, Marco

    2012-01-01

    In this paper we analyze the role played by self-confidence, modeled as beliefs about one's ability, in shaping task choices. We propose a model in which fully rational agents exploit all the available information to update their beliefs using Bayes' rule, eventually learning their true type. We show that when the learning process does not…

  18. Confidence Testing for Knowledge-Based Global Communities

    ERIC Educational Resources Information Center

    Jack, Brady Michael; Liu, Chia-Ju; Chiu, Houn-Lin; Shymansky, James A.

    2009-01-01

    This proposal advocates the position that the use of confidence wagering (CW) during testing can predict the accuracy of a student's test answer selection during between-subject assessments. Data revealed female students were more favorable to taking risks when making CW and less inclined toward risk aversion than their male counterparts. Student…

  19. Multiple Confidence Estimates as Indices of Eyewitness Memory

    ERIC Educational Resources Information Center

    Sauer, James D.; Brewer, Neil; Weber, Nathan

    2008-01-01

    Eyewitness identification decisions are vulnerable to various influences on witnesses' decision criteria that contribute to false identifications of innocent suspects and failures to choose perpetrators. An alternative procedure using confidence estimates to assess the degree of match between novel and previously viewed faces was investigated.…

  20. 21 CFR 26.37 - Confidence building activities.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 1 2014-04-01 2014-04-01 false Confidence building activities. 26.37 Section 26.37 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL MUTUAL RECOGNITION OF PHARMACEUTICAL GOOD MANUFACTURING PRACTICE REPORTS, MEDICAL DEVICE QUALITY...

  1. 21 CFR 26.37 - Confidence building activities.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 1 2013-04-01 2013-04-01 false Confidence building activities. 26.37 Section 26.37 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL MUTUAL RECOGNITION OF PHARMACEUTICAL GOOD MANUFACTURING PRACTICE REPORTS, MEDICAL DEVICE QUALITY...

  2. Confidence and Preparedness to Teach: Conflicting Perspectives from Multiple Stakeholders

    ERIC Educational Resources Information Center

    Carter, Pamala J.; Cowan, Kay W.

    2013-01-01

    This article, "Confidence and Preparedness to Teach" is a quantitative study that examines the level of preparedness for the classroom of fifty-seven student teachers. The student teachers, their cooperating teachers, and the professor-in-residence who monitored the placement completed a twenty-four item survey that rated the prospective…

  3. Conquering Confidence: Reflections on a Women's Winter Expedition.

    ERIC Educational Resources Information Center

    Dudley, Catherine; Ferren, Sue; Glackmeyer, Heidi

    1999-01-01

    Three women undertook a four-day winter camping trip in the Adirondacks as their practicum in an outdoor and experiential-education course. Their description of the confidence gained and the balance between self-doubt and overconfidence is compared to walking in snowshoes. (TD)

  4. "The Confidence-Man" and Jobs beyond Academe.

    ERIC Educational Resources Information Center

    Johnson, Mark

    1999-01-01

    Notes that the hopeful threads binding Herman Melville's book "The Confidence-Man" also weave a valuable lesson for literature PhDs considering their career options. Suggests that the values crucial to a teacher or scholar are also crucial to those who work outside the university: willingness to entertain other points of view and an ability to…

  5. Knowledge Surveys in General Chemistry: Confidence, Overconfidence, and Performance

    ERIC Educational Resources Information Center

    Bell, Priscilla; Volckmann, David

    2011-01-01

    Knowledge surveys have been used in a number of fields to assess changes in students' understanding of their own learning and to assist students in review. This study compares metacognitive confidence ratings of students faced with problems on the surveys with their actual knowledge as shown on the final exams in two courses of general chemistry…

  6. 21 CFR 26.37 - Confidence building activities.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 1 2012-04-01 2012-04-01 false Confidence building activities. 26.37 Section 26.37 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL MUTUAL RECOGNITION OF PHARMACEUTICAL GOOD MANUFACTURING PRACTICE REPORTS, MEDICAL DEVICE QUALITY...

  7. North Dakota Leadership Training Boosts Confidence and Involvement

    ERIC Educational Resources Information Center

    Flage, Lynette; Hvidsten, Marie; Vettern, Rachelle

    2012-01-01

    Effective leadership is critical for communities as they work to maintain their vitality and sustainability for years to come. The purpose of the study reported here was to assess confidence levels and community engagement of community leadership program participants in North Dakota State University Extension programs. Through a survey…

  8. On Pupils' Self-Confidence in Mathematics: Gender Comparisons

    ERIC Educational Resources Information Center

    Nurmi, Anu; Hannula, Markku; Maijala, Hanna; Pehkonen, Erkki

    2003-01-01

    In this paper we will concentrate on pupils' self-confidence in mathematics, which belongs to pupils' mathematical beliefs in themselves, and beliefs on achievement in mathematics. Research described consists of a survey of more than 3000 fifth-graders and seventh-graders. Furthermore, 40 pupils participated in a qualitative follow-up study…

  9. State FFA Officers' Confidence and Trustworthiness of Biotechnology Information Sources

    ERIC Educational Resources Information Center

    Wingenbach, Gary J.; Rutherford, Tracy A.

    2007-01-01

    Are state FFA officers' awareness levels of agricultural topics reported in mass media superior to those who do not serve in leadership roles? The purpose of this study was to determine elected state FFA officers' awareness of biotechnology, and their confidence and trust of biotechnology information sources. Descriptive survey methods were used…

  10. Test Anxiety Reduction and Confidence Training: A Replication

    ERIC Educational Resources Information Center

    Bowman, Noah; Driscoll, Richard

    2013-01-01

    This study was undertaken to replicate prior research in which a brief counter-conditioning and confidence training program was found to reduce anxiety and raise test scores. First-semester college students were screened with the Westside Test Anxiety Scale, and the 25 identified as having high or moderately-high anxiety were randomly divided…

  11. Building Confident Teachers: Preservice Physical Education Teachers' Efficacy Beliefs

    ERIC Educational Resources Information Center

    Hand, Karen E.

    2014-01-01

    Understanding teachers' perceptions of their abilities across a variety of teaching strategies can provide insight for understanding teaching effectiveness and program review. Teaching efficacy reflects the degrees of confidence individuals have in their ability to successfully perform specific teaching proficiencies (Bandura, 1986). Additional…

  12. Building and Encouraging Confidence and Creativity in Science.

    ERIC Educational Resources Information Center

    Ryan, Lynnette J.

    The focus of this study is an eight-week science enrichment mentorship program for elementary and middle school girls (ages 8 to 13) at Coleson Village, a public housing community, in an urban area of western Washington. The goal of the program was to build confidence and encourage creativity as the participants discovered themselves as competent…

  13. Acceptance and confidence of central and peripheral misinformation.

    PubMed

    Luna, Karlos; Migueles, Malen

    2009-11-01

    We examined the memory for central and peripheral information concerning a crime and the acceptance of false information. We also studied eyewitnesses' confidence in their memory. Participants were shown a video depicting a bank robbery and a questionnaire was used to introduce false central and peripheral information. The next day the participants completed a recognition task in which they rated the confidence of their responses. Performance was better for central information and participants registered more false alarms for peripheral contents. The cognitive system's limited attentional capacity and the greater information capacity of central elements may facilitate processing the more important information. The presentation of misinformation seriously impaired eyewitness memory by prompting a more lenient response criterion. Participants were more confident with central than with peripheral information. Eyewitness memory is easily distorted in peripheral aspects but it is more difficult to make mistakes with central information. However, when false information is introduced, errors in central information can be accompanied by high confidence, thus rendering them credible and legally serious. PMID:19899643

  14. Utility of de-escalatory confidence-building measures

    SciTech Connect

    Nation, J.

    1989-06-01

    This paper evaluates the utility of specific confidence-building de-escalatory measures and pays special attention to the evaluation of measures which place restrictions on or establish procedures for strategic forces. Some measures appear more promising than others. Potentially useful confidence-building measures largely satisfy defined criteria and include the phased return of strategic nuclear forces to peacetime bases and operations, the termination of interference with communications and NTMs (National Technical Means) and the termination of civil defense preparations. Less-promising CBMs include the standing down of supplemental early warning systems, the establishment of SSBN keep-out zones, and decreases in bomber alert rates. Establishment of SSBN keep-out zones and reduction in bomber rates are difficult to verify, while the standing-down of early warning systems provides little benefit at potentially large costs. Particular confidence-building measures (CBMs) may be most useful in building superpower confidence at specific points in the crisis termination phase. For example, a decrease in strategic bomber alert rates may provide some decrease in perception of the likelihood of war, but its potential costs, particularly in increasing bomber vulnerability, may limit its utility and implementation to the final crisis stages when the risks of re-escalation and surprise attack are lower.

  15. 37 CFR 1.14 - Patent applications preserved in confidence.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2012-07-01 2012-07-01 false Patent applications preserved in confidence. 1.14 Section 1.14 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE, DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES General...

  16. 37 CFR 1.14 - Patent applications preserved in confidence.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2011-07-01 2011-07-01 false Patent applications preserved in confidence. 1.14 Section 1.14 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE, DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES General...

  17. 37 CFR 1.14 - Patent applications preserved in confidence.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2013-07-01 2013-07-01 false Patent applications preserved in confidence. 1.14 Section 1.14 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE, DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES General...

  18. 37 CFR 1.14 - Patent applications preserved in confidence.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2014-07-01 2014-07-01 false Patent applications preserved in confidence. 1.14 Section 1.14 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE, DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES General...

  19. 37 CFR 1.14 - Patent applications preserved in confidence.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Patent applications preserved in confidence. 1.14 Section 1.14 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE, DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES General...

  20. Academic Behavioural Confidence: A Comparison of Medical and Psychology Students

    ERIC Educational Resources Information Center

    Sanders, Lalage; Sander, Paul

    2007-01-01

    Introduction. Sander, Stevenson, King and Coates (2000) identified differences between medical students in a conventional university and psychology students in a post-1992 university in their responses to different styles of learning and teaching. Method. It had been hypothesised that differing levels of confidence explained why the former felt…

  1. Confidence in Teaching Mathematics among Malaysian Pre-Service Teachers

    ERIC Educational Resources Information Center

    Yunus, Aida Suraya Md.; Hamzah, Ramlah; Ismail, Habsah; Husain, Sharifah Kartini Said; Ismail, Mat Rofa

    2006-01-01

    This study focuses on the confidence level of mathematics education students in teaching school mathematics. Respondents were 165 final year students from four Malaysian universities. It was found that the respondents showed a strong foundation in mathematics upon entrance to the university. In spite of their strong background in school…

  2. Measuring Academic Behavioural Confidence: The ABC Scale Revisited

    ERIC Educational Resources Information Center

    Sander, Paul; Sanders, Lalage

    2009-01-01

    The Academic Behavioural Confidence (ABC) scale has been shown to be valid and can be useful to teachers in understanding their students, enabling the design of more effective teaching sessions with large cohorts. However, some of the between-group differences have been smaller than expected, leading to the hypothesis that the ABC scale may not…

  3. Intact Interval Timing in Circadian CLOCK Mutants

    PubMed Central

    Cordes, Sara; Gallistel, C. R.

    2008-01-01

    While progress has been made in determining the molecular basis for the circadian clock, the mechanism by which mammalian brains time intervals measured in seconds to minutes remains a mystery. An obvious question is whether the interval timing mechanism shares molecular machinery with the circadian timing mechanism. In the current study, we trained circadian CLOCK +/− and −/− mutant male mice in a peak-interval procedure with 10 and 20-s criteria. The mutant mice were more active than their wild-type littermates, but there were no reliable deficits in the accuracy or precision of their timing as compared with wild-type littermates. This suggests that expression of the CLOCK protein is not necessary for normal interval timing. PMID:18602902

  4. Calibration intervals at Bendix Kansas City

    SciTech Connect

    James, R.T.

    1980-01-01

    The calibration interval evaluation methods and controls in each calibrating department of the Bendix Corp., Kansas City Division are described, and a more detailed description of those employed in metrology is provided.

  5. Combination of structural reliability and interval analysis

    NASA Astrophysics Data System (ADS)

    Qiu, Zhiping; Yang, Di; Elishakoff, Isaac

    2008-02-01

    In engineering applications, probabilistic reliability theory is presently the most important method; in many cases, however, precise probabilistic reliability theory cannot be considered an adequate and credible model of the real state of affairs. In this paper, we develop a hybrid of probabilistic and non-probabilistic reliability theory, which describes the uncertain structural parameters as interval variables when statistical data are insufficient. Using interval analysis, a new method for calculating the interval of the structural reliability, as well as the reliability index, is introduced, and the traditional probabilistic theory is combined with the interval analysis. Moreover, the new method preserves the useful part of traditional probabilistic reliability theory but removes its strict requirement on data acquisition. An example is presented to demonstrate the feasibility and validity of the proposed theory.
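
    A minimal sketch of the basic idea, not the paper's formulation: when the means of resistance and load are known only to lie within intervals, the reliability index itself becomes an interval, obtained here from the monotonicity of the index in those endpoints. All numbers are assumed values.

```python
# Interval bounds on the reliability index beta = (mu_R - mu_S)/sqrt(sR^2 + sS^2)
# when the two means are interval-valued and the standard deviations are crisp.
import math

mu_R = (95.0, 105.0)         # interval for mean resistance (assumed)
mu_S = (58.0, 62.0)          # interval for mean load (assumed)
sigma_R, sigma_S = 8.0, 6.0  # standard deviations taken as crisp

denom = math.sqrt(sigma_R**2 + sigma_S**2)
beta_lo = (mu_R[0] - mu_S[1]) / denom   # worst case: low resistance, high load
beta_hi = (mu_R[1] - mu_S[0]) / denom   # best case: high resistance, low load
print(f"reliability index interval: [{beta_lo:.2f}, {beta_hi:.2f}]")
```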

  6. Almost primes in almost all short intervals

    NASA Astrophysics Data System (ADS)

    TERÄVÄINEN, JONI

    2016-09-01

    Let $E_k$ be the set of positive integers having exactly $k$ prime factors. We show that almost all intervals $[x, x+\log^{1+\varepsilon} x]$ contain $E_3$ numbers, and almost all intervals $[x, x+\log^{3.51} x]$ contain $E_2$ numbers. By this we mean that there are only $o(X)$ integers $1\leq x\leq X$ for which the mentioned intervals do not contain such numbers. The result for $E_3$ numbers is optimal up to the $\varepsilon$ in the exponent. The theorem on $E_2$ numbers improves a result of Harman, which had the exponent $7+\varepsilon$ in place of $3.51$. We will also consider general $E_k$ numbers, and find them in intervals whose lengths approach $\log x$ as $k\to \infty$.
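
    A quick numerical illustration of the statement, counting prime factors with multiplicity; because the theorem is asymptotic, a few exceptional intervals are expected at such small x.

```python
# Count how many intervals [x, x + log(x)**1.1] fail to contain an E_3 number,
# i.e. an integer with exactly three prime factors counted with multiplicity.
import math

def big_omega(n):
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    return count + (1 if n > 1 else 0)

misses = 0
for x in range(10_000, 20_000):
    h = int(math.log(x) ** 1.1)
    if not any(big_omega(n) == 3 for n in range(x, x + h + 1)):
        misses += 1
print("intervals without an E_3 number:", misses, "out of 10000")
```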

  7. Confidence in facial emotion recognition in borderline personality disorder.

    PubMed

    Thome, Janine; Liebke, Lisa; Bungert, Melanie; Schmahl, Christian; Domes, Gregor; Bohus, Martin; Lis, Stefanie

    2016-04-01

    Dysfunctions of social-cognitive processes such as the recognition of emotions have been suggested to contribute to the severe impairments of interpersonal functioning in borderline personality disorder (BPD). By investigating how patients with BPD experience the intensity of different emotions in a facial expression and how confident they are in their own judgments, the current study aimed at identifying subtle alterations of emotion processing in BPD. Female patients with BPD (N = 36) and 36 healthy controls were presented with faces that displayed low-intense anger and happiness or ambiguous expressions of anger and happiness blends. Subjects were asked to rate (a) the intensity of anger and happiness in each facial expression and (b) their confidence in their judgments. Patients with BPD rated the intensity of happiness in happy faces lower than did controls, but did not differ in regard to the assessment of angry or ambiguous facial stimuli or the rating of anger. They reported lower confidence in their judgments, which was particularly pronounced for the assessment of happy facial expressions. The reduced rating of happiness was linked to higher state anger, whereas the reduced confidence in the assessment of happy faces was related to stronger feelings of loneliness and the expectation of social rejection. Our findings suggest alterations in the processing of positive social stimuli that affect both the experience of the emotional intensity and the confidence subjects experience during their assessment. The link to loneliness and social rejection sensitivity points to the necessity to target these alterations in psychotherapeutical interventions. PMID:26389624

  8. Improving maternal confidence in neonatal care through a checklist intervention

    PubMed Central

    Radenkovic, Dina; Kotecha, Shrinal; Patel, Shreena; Lakhani, Anjali; Reimann-Dubbers, Katharina; Shah, Shreya; Jafree, Daniyal; Mitrasinovic, Stefan; Whitten, Melissa

    2016-01-01

    Previous qualitative studies suggest a lack of maternal confidence in caring for their newborn child upon discharge into the community. This observation was supported by discussion with healthcare professionals and mothers at University College London Hospital (UCLH), highlighting specific areas of concern, in particular identifying and managing common neonatal presentations. The aim of this study was to design and introduce a checklist, addressing these concerns, to increase maternal confidence in care of their newborn child. Based on market research, an 8-question checklist was designed, assessing maternal confidence in: feeding, jaundice, nappy care, rashes and dry skin, umbilical cord care, choking, bowel movements, and vomiting. Mothers were assessed as per the checklist, and received a score representative of their confidence in neonatal care. Mothers were followed up with a telephone call, and were assessed again after a 7-day period. Checklist scores before and after the follow-up period were compared. This process was repeated for three study cycles, with the placement of information posters on the ward prior to the second study cycle, and the stapling of the checklist to the mother's personal child health record (PCHR) prior to the third study cycle. A total of 99 mothers on the Maternity Care Unit at UCLH were enrolled in the study, and 92 were contactable after a 7-day period. During all study cycles, a significant increase in median checklist score was observed after, as compared to before, the 7-day follow-up period (p < 0.001). The median difference in checklist score from baseline was greatest for the third cycle. These results suggest that introduction of a simple checklist can be successfully utilised to improve mothers' confidence in being able to care for their newborn child. Further investigation is indicated, but this intervention has the potential for routine application in postnatal care. PMID:27335642

  9. Confidence in one's social beliefs: implications for belief justification.

    PubMed

    Koriat, Asher; Adiv, Shiri

    2012-12-01

    Philosophers commonly define knowledge as justified true beliefs. A heated debate exists, however, about what makes a belief justified. In this article, we examine the question of belief justification from a psychological perspective, focusing on the subjective confidence in a belief that the person has just formed. Participants decided whether to accept or reject a proposition depicting a social belief, and indicated their confidence in their choice. The task was repeated six times, and choice latency was measured. The results were analyzed within a Self-Consistency Model (SCM) of subjective confidence. According to SCM, the decision to accept or reject a proposition is based on the on-line sampling of representations from a pool of representations associated with the proposition. Respondents behave like intuitive statisticians who infer the central tendency of a population based on a small sample. Confidence depends on the consistency with which the belief was supported across the sampled representations, and reflects the likelihood that a new sample will yield the same decision. The results supported the assumption of a commonly shared population of representations associated with each proposition. Based on this assumption, analyses of within-person consistency and cross-person consensus provided support for the model. As expected, choices that deviated from the person's own modal judgment or from the consensually held judgment took relatively longer to form and were associated with relatively lower confidence, presumably because they were based on non-representative samples. The results were discussed in relation to major epistemological theories--foundationalism, coherentism and reliabilism. PMID:22995400
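
    A toy simulation of the sampling idea behind the Self-Consistency Model as summarized here: each judgment draws a handful of representations from a proposition-specific pool, the majority determines the choice, and confidence tracks the consistency of the sample. The pool proportion and sample size are assumptions for illustration.

```python
# Non-modal (minority) choices arise from unrepresentative samples and come with
# lower sample consistency, the model's proxy for confidence.
import numpy as np

rng = np.random.default_rng(5)
p_accept = 0.7        # proportion of pool representations favouring "accept" (assumed)
k = 7                 # representations sampled per judgment (assumed)
trials = 100_000

samples = rng.binomial(k, p_accept, size=trials)       # accept-supporting draws
choice_accept = samples > k / 2
consistency = np.maximum(samples, k - samples) / k     # confidence proxy

modal = choice_accept == (p_accept > 0.5)               # agrees with the modal judgment
print(f"mean confidence, modal choices    : {consistency[modal].mean():.3f}")
print(f"mean confidence, non-modal choices: {consistency[~modal].mean():.3f}")
```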

  10. Inferring high-confidence human protein-protein interactions

    PubMed Central

    2012-01-01

    Background As numerous experimental factors drive the acquisition, identification, and interpretation of protein-protein interactions (PPIs), aggregated assemblies of human PPI data invariably contain experiment-dependent noise. Ascertaining the reliability of PPIs collected from these diverse studies and scoring them to infer high-confidence networks is a non-trivial task. Moreover, a large number of PPIs share the same number of reported occurrences, making it impossible to distinguish the reliability of these PPIs and rank-order them. For example, for the data analyzed here, we found that the majority (>83%) of currently available human PPIs have been reported only once. Results In this work, we proposed an unsupervised statistical approach to score a set of diverse, experimentally identified PPIs from nine primary databases to create subsets of high-confidence human PPI networks. We evaluated this ranking method by comparing it with other methods and assessing their ability to retrieve protein associations from a number of diverse and independent reference sets. These reference sets contain known biological data that are either directly or indirectly linked to interactions between proteins. We quantified the average effect of using ranked protein interaction data to retrieve this information and showed that, when compared to randomly ranked interaction data sets, the proposed method created a larger enrichment (~134%) than either ranking based on the hypergeometric test (~109%) or occurrence ranking (~46%). Conclusions From our evaluations, it was clear that ranked interactions were always of value because higher-ranked PPIs had a higher likelihood of retrieving high-confidence experimental data. Reducing the noise inherent in aggregated experimental PPIs via our ranking scheme further increased the accuracy and enrichment of PPIs derived from a number of biologically relevant data sets. These results suggest that using our high-confidence protein interactions
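
    For flavour only, the sketch below scores edges of a toy aggregated network with a generic neighbour-overlap hypergeometric p-value; this is not the scoring method proposed in the study, merely an example of the style of unsupervised PPI ranking being compared.

```python
# Rank toy protein-protein interactions by the improbability of their shared
# neighbours under a hypergeometric null (smaller p = higher confidence).
from scipy.stats import hypergeom

# toy aggregated network: protein -> set of reported interaction partners
net = {
    "A": {"B", "C", "D"},
    "B": {"A", "C", "D", "E"},
    "C": {"A", "B"},
    "D": {"A", "B", "E"},
    "E": {"B", "D"},
}
N = len(net)   # crude protein universe; a real analysis would exclude u and v

def overlap_pvalue(u, v):
    shared = len((net[u] & net[v]) - {u, v})
    # P(at least `shared` common partners by chance)
    return hypergeom.sf(shared - 1, N, len(net[u]), len(net[v]))

pairs = sorted({tuple(sorted((u, v))) for u in net for v in net[u]},
               key=lambda e: overlap_pvalue(*e))
for u, v in pairs:
    print(u, v, f"p = {overlap_pvalue(u, v):.3f}")
```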

  11. Challenge to Increase Confidence in Geological Evolution Models

    NASA Astrophysics Data System (ADS)

    Mizuno, T.; Iwatsuki, T.; Saegusa, H.; Kato, T.; Matsuoka, T.; Yasue, K.; Ohyama, T.; Sasao, E.

    2014-12-01

    Geological evolution models (GEMs), as well as site descriptive models (SDMs), are used to integrate investigation results and to support safety assessment. Moreover, enhancing confidence in the long-term stability of the geological environment is required for geological disposal in Japan, which lies in a tectonically active region. The aim of this study is to provide future directions for increasing confidence in GEMs based on a review of current GEMs. GEMs have been constructed in the following three steps: 1) Features, Events and Processes (FEP) analysis, 2) scenario development, and 3) numerical modeling. Based on the current status, we examined the issues involved in developing GEMs with a higher level of confidence. As a result, the development of techniques and methodologies for 1) validation of GEMs, 2) handling uncertainty, and 3) digitalization/visualization were identified as open issues. To solve these issues, we specified three approaches. The first approach is using multiple lines of evidence: consistency between various study fields will be important information for validating the GEMs. The second is revealing the argument behind the GEMs: the confidence and uncertainty of GEMs can be confirmed by synthesizing the basic information behind them, because GEMs are built on many pieces of evidence, hypotheses and assumptions; in addition, optional cases will be needed to demonstrate the level of understanding. The third is the development of elemental technologies, such as an integrated numerical simulation and visualization system that can handle large models and composite phenomena. In the future, we will focus on increasing confidence in GEMs in keeping with this notion. This study was carried out under a contract with METI (Ministry of Economy, Trade and Industry) as part of its R&D supporting program for developing geological disposal technology.

  12. Relating the Content and Confidence of Recognition Judgments

    PubMed Central

    Selmeczy, Diana; Dobbins, Ian G.

    2014-01-01

    The Remember/Know procedure, developed by Tulving (1985) to capture the distinction between the conscious correlates of episodic and semantic retrieval, has spurred considerable research and debate. However, only a handful of reports have examined recognition content beyond this dichotomous simplification. To address this, we collected participants’ written justifications in support of ordinary old/new recognition decisions accompanied by confidence ratings on a 3-point scale (high/medium/low). Unlike prior research, we did not provide participants with any descriptions of Remembering or Knowing; thus, if the justifications mapped well onto theory, they would do so spontaneously. Word frequency analysis (unigrams, bigrams, and trigrams), independent ratings, and machine learning techniques (Support Vector Machine - SVM) converged in demonstrating that the linguistic content of high- and medium-confidence recognition differs in a manner consistent with dual process theories of recognition. For example, the use of ‘I remember’, particularly when combined with temporal or perceptual information (e.g., ‘when’, ‘saw’, ‘distinctly’), was heavily associated with high confidence recognition. Conversely, participants also used the absence of remembering for personally distinctive materials as support for high confidence new reports (‘would have remembered’). Thus, participants afford a special status to the presence or absence of remembering and actively use this as a basis for high confidence during recognition judgments. Additionally, the pattern of classification successes and failures of an SVM was well anticipated by the Dual Process Signal Detection model of recognition and inconsistent with a single-process, strictly unidimensional approach. “One might think that memory should have something to do with remembering, and remembering is a conscious experience.” (Tulving, 1985, p. 1) PMID:23957366
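
    The classification step described above can be illustrated with a minimal scikit-learn pipeline that pairs n-gram counts (unigrams through trigrams) with a linear SVM. This is a hedged sketch rather than the authors' actual pipeline; the example sentences and confidence labels are invented for illustration.

    ```python
    # Minimal sketch, not the authors' pipeline: classifying written recognition
    # justifications by confidence level with n-gram features and a linear SVM.
    # The toy sentences and labels below are invented for illustration.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    texts = [
        "I distinctly remember seeing this word when it appeared",
        "It just feels familiar but I cannot say why",
        "I remember when I saw it, near the start of the list",
        "Not sure, it simply looks like something I might have seen",
    ]
    labels = ["high", "medium", "high", "medium"]  # confidence of the old/new judgment

    # Unigrams through trigrams, as in the reported word-frequency analysis.
    clf = make_pipeline(CountVectorizer(ngram_range=(1, 3)), LinearSVC())
    clf.fit(texts, labels)
    print(clf.predict(["I clearly remember exactly where I saw it"]))
    ```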

  13. Prediction Interval Development for Wind-Tunnel Balance Check-Loading

    NASA Technical Reports Server (NTRS)

    Landman, Drew; Toro, Kenneth G.; Commo, Sean A.; Lynn, Keith C.

    2014-01-01

    Results from the Facility Analysis Verification and Operational Reliability project revealed a critical capability gap in ground-based aeronautics research applications. Without a standardized process for check-loading the wind-tunnel balance or the model system, the quality of the aerodynamic force data collected varied significantly between facilities. A prediction interval is required to confirm a check-loading: it provides expected upper and lower bounds on the balance load prediction at a given confidence level. A method has been developed that accounts for sources of variability due to calibration and check-load application. The prediction interval calculation method and a case study demonstrating its use are provided. Validation of the method is demonstrated for the case study based on the probability of capture of confirmation points.
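
    A minimal sketch of a standard regression prediction interval is shown below, assuming an ordinary least-squares calibration model; the method described in the abstract additionally combines calibration and check-load variance sources, which this sketch does not reproduce. The simulated load data are illustrative only.

    ```python
    # Minimal sketch of a standard regression prediction interval, assuming an
    # ordinary least-squares calibration model; the NASA method additionally
    # accounts for calibration and check-load variability, which is not shown.
    import numpy as np
    from scipy import stats

    def prediction_interval(X, y, x0, confidence=0.95):
        """Two-sided prediction interval for a new observation at x0."""
        X = np.column_stack([np.ones(len(X)), X])      # add intercept column
        x0 = np.concatenate(([1.0], np.atleast_1d(x0)))
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        dof = X.shape[0] - X.shape[1]
        s2 = resid @ resid / dof                        # residual variance
        XtX_inv = np.linalg.inv(X.T @ X)
        se = np.sqrt(s2 * (1.0 + x0 @ XtX_inv @ x0))    # prediction std. error
        t = stats.t.ppf(0.5 + confidence / 2.0, dof)
        y0 = x0 @ beta
        return y0 - t * se, y0 + t * se

    # Example with simulated check-load data (illustrative values only).
    rng = np.random.default_rng(0)
    loads = np.linspace(0, 100, 30)
    readings = 2.0 * loads + rng.normal(0, 1.5, size=loads.size)
    print(prediction_interval(loads, readings, x0=55.0))
    ```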

  14. A nonparametric fiducial interval for the Youden index in multi-state diagnostic settings.

    PubMed

    Batterton, Katherine A; Schubert, Christine M

    2016-01-15

    The Youden index is a commonly employed metric to characterize the performance of a diagnostic test at its optimal point. For tests with three or more outcome classes, the Youden index has been extended; however, there are limited methods to compute a confidence interval (CI) about its value. Often, outcome classes are assumed to be normally distributed, which facilitates computational formulas for the CI bounds; however, many scenarios exist for which these assumptions cannot be made. In addition, many of these existing CI methods do not work well for small sample sizes. We propose a method to compute a nonparametric interval about the Youden index utilizing the fiducial argument. This fiducial interval ensures that CI coverage is met regardless of sample size, underlying distributional assumptions, or use of a complex classifier for diagnosis. Two alternate fiducial intervals are also considered. A simulation was conducted, which demonstrates the coverage and interval length for the proposed methods. Comparisons were made using no distributional assumptions on the outcome classes and for when outcomes were assumed to be normally distributed. In general, coverage probability was consistently met, and interval length was reasonable. The proposed fiducial method was also demonstrated on data examining biomarkers in subjects to predict diagnostic stages ranging from normal kidney function to chronic allograft nephropathy. Published 2015. This article is a U.S. Government work and is in the public domain in the USA. PMID:26278275
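
    For orientation, the sketch below computes the familiar two-class Youden index (J = sensitivity + specificity - 1) maximized over candidate cutoffs, together with a simple percentile-bootstrap interval. It is an illustrative baseline, not the fiducial interval proposed in the paper, and the biomarker values are simulated.

    ```python
    # Minimal sketch: two-class Youden index maximized over cutoffs, with a
    # simple percentile-bootstrap CI. This is an illustrative baseline, not
    # the fiducial interval proposed in the paper; the data are simulated.
    import numpy as np

    def youden_index(neg, pos):
        """Maximum of sensitivity + specificity - 1 over candidate cutoffs."""
        cuts = np.unique(np.concatenate([neg, pos]))
        best = 0.0
        for c in cuts:
            sens = np.mean(pos > c)
            spec = np.mean(neg <= c)
            best = max(best, sens + spec - 1.0)
        return best

    def bootstrap_ci(neg, pos, n_boot=2000, alpha=0.05, seed=1):
        rng = np.random.default_rng(seed)
        stats_ = [youden_index(rng.choice(neg, neg.size, replace=True),
                               rng.choice(pos, pos.size, replace=True))
                  for _ in range(n_boot)]
        return np.quantile(stats_, [alpha / 2, 1 - alpha / 2])

    rng = np.random.default_rng(0)
    healthy = rng.normal(0.0, 1.0, 40)    # simulated biomarker, non-diseased
    diseased = rng.normal(1.2, 1.0, 40)   # simulated biomarker, diseased
    print(youden_index(healthy, diseased), bootstrap_ci(healthy, diseased))
    ```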

  15. Haematological and biochemical reference intervals for free-ranging brown bears (Ursus arctos) in Sweden

    PubMed Central

    2014-01-01

    Background Establishment of haematological and biochemical reference intervals is important for assessing the health of animals at the individual and population levels. Reference intervals for 13 haematological and 34 biochemical variables were established based on 88 apparently healthy free-ranging brown bears (39 males and 49 females) in Sweden. The animals were chemically immobilised by darting from a helicopter with a combination of medetomidine, tiletamine and zolazepam in April and May 2006–2012 in the county of Dalarna, Sweden. Venous blood samples were collected during anaesthesia for radio collaring and marking for ecological studies. For each of the variables, the reference interval was described based on the 95% confidence interval, and differences due to the host characteristics sex and age were included where detected. To our knowledge, this is the first report of reference intervals for free-ranging brown bears in Sweden. Results The following variables were not affected by host characteristics: red blood cell, white blood cell, monocyte and platelet counts, alanine transaminase, amylase, bilirubin, free fatty acids, glucose, calcium, chloride, potassium, and cortisol. Age differences were seen for the majority of the haematological variables, whereas sex influenced only mean corpuscular haemoglobin concentration, aspartate aminotransferase, lipase, lactate dehydrogenase, β-globulin, bile acids, triglycerides and sodium. Conclusions The biochemical and haematological reference intervals provided, together with the differences due to the host factors age and sex, can be useful for evaluating the health status of free-ranging European brown bears. PMID:25139149
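
    A common nonparametric way to obtain such limits is to take the central 95% of observed values (2.5th to 97.5th percentiles) and bootstrap the uncertainty of each limit. The sketch below illustrates this general approach, not necessarily the study's exact procedure, and uses simulated values rather than bear data.

    ```python
    # Minimal sketch of a nonparametric 95% reference interval (2.5th-97.5th
    # percentiles) with percentile-bootstrap confidence intervals on each limit.
    # A common approach, not necessarily the one used in the study; the values
    # below are simulated, not bear data.
    import numpy as np

    def reference_interval(values, n_boot=2000, seed=0):
        rng = np.random.default_rng(seed)
        lower, upper = np.percentile(values, [2.5, 97.5])
        boots = np.array([
            np.percentile(rng.choice(values, values.size, replace=True), [2.5, 97.5])
            for _ in range(n_boot)
        ])
        lo_ci = np.percentile(boots[:, 0], [2.5, 97.5])   # CI of the lower limit
        hi_ci = np.percentile(boots[:, 1], [2.5, 97.5])   # CI of the upper limit
        return (lower, upper), lo_ci, hi_ci

    # 88 simulated measurements of a haemoglobin-like variable.
    measurements = np.random.default_rng(1).normal(140, 12, 88)
    print(reference_interval(measurements))
    ```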

  16. Probability Distribution for Flowing Interval Spacing

    SciTech Connect

    S. Kuzio

    2004-09-22

    Fracture spacing is a key hydrologic parameter in analyses of matrix diffusion. Although the individual fractures that transmit flow in the saturated zone (SZ) cannot be identified directly, it is possible to determine the fractured zones that transmit flow from flow meter survey observations. The fractured zones that transmit flow as identified through borehole flow meter surveys have been defined in this report as flowing intervals. The flowing interval spacing is measured between the midpoints of each flowing interval. The determination of flowing interval spacing is important because the flowing interval spacing parameter is a key hydrologic parameter in SZ transport modeling, which impacts the extent of matrix diffusion in the SZ volcanic matrix. The output of this report is input to the ''Saturated Zone Flow and Transport Model Abstraction'' (BSC 2004 [DIRS 170042]). Specifically, the analysis of data and development of a data distribution reported herein is used to develop the uncertainty distribution for the flowing interval spacing parameter for the SZ transport abstraction model. Figure 1-1 shows the relationship of this report to other model reports that also pertain to flow and transport in the SZ. Figure 1-1 also shows the flow of key information among the SZ reports. It should be noted that Figure 1-1 does not contain a complete representation of the data and parameter inputs and outputs of all SZ reports, nor does it show inputs external to this suite of SZ reports. Use of the developed flowing interval spacing probability distribution is subject to the limitations of the assumptions discussed in Sections 5 and 6 of this analysis report. The number of fractures in a flowing interval is not known. Therefore, the flowing intervals are assumed to be composed of one flowing zone in the transport simulations. This analysis may overestimate the flowing interval spacing because the number of fractures that contribute to a flowing interval cannot be

  17. What is your savings personality? The 1998 Retirement Confidence Survey.

    PubMed

    Yakoboski, P; Ostuw, P; Hicks, J

    1998-08-01

    This Issue Brief presents the findings of the 1998 Retirement Confidence Survey (RCS). The survey tracks Americans' retirement planning and saving behavior and their confidence regarding various aspects of their retirement. It also categorizes workers and retirees into six distinct groups, based on their very different views on retirement, retirement planning, and saving. The six personality types identified in the RCS are Deniers (10 percent of the population), Strugglers (9 percent), Impulsives (20 percent), Cautious Savers (21 percent), Planners (23 percent), and Retiring Savers (17 percent). The survey shows that working Americans have become more focused on retirement; 45 percent have tried to determine how much they need to save before they retire, up from 32 percent in 1996. Americans' growing attention to their retirement has not increased their retirement income confidence. Since 1993, the portion of working Americans who are very confident that they will have enough money to live comfortably throughout retirement has consistently ranged from 20 percent to 25 percent. Sixty-three percent of Americans have begun to save for retirement. Fifty-five percent of those not saving for retirement say it is reasonably possible for them to save $20 per week (over $1,000 per year). In addition, 57 percent of workers who have begun to save say that it is reasonably possible for them to save an additional $20 per week. The findings demonstrate the continuing need for broad-based educational efforts designed to make retirement savings a priority for individuals. The good news is the evidence that education can have a real impact at the individual level. For the first time the 1998 RCS examined retirement planning, saving, and attitudes across ethnic groups (African-Americans, Hispanic-Americans, Asian-Americans, and whites). African-Americans are the least confident that they will have enough money to live comfortably in retirement. African-Americans and Hispanic

  18. Replication, falsification, and the crisis of confidence in social psychology.

    PubMed

    Earp, Brian D; Trafimow, David

    2015-01-01

    The (latest) crisis in confidence in social psychology has generated much heated discussion about the importance of replication, including how it should be carried out as well as interpreted by scholars in the field. For example, what does it mean if a replication attempt "fails"-does it mean that the original results, or the theory that predicted them, have been falsified? And how should "failed" replications affect our belief in the validity of the original research? In this paper, we consider the replication debate from a historical and philosophical perspective, and provide a conceptual analysis of both replication and falsification as they pertain to this important discussion. Along the way, we highlight the importance of auxiliary assumptions (for both testing theories and attempting replications), and introduce a Bayesian framework for assessing "failed" replications in terms of how they should affect our confidence in original findings. PMID:26042061
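
    The kind of Bayesian updating alluded to can be illustrated with a toy calculation: given a prior confidence that the original effect is real and assumed probabilities of a replication "failing" under each hypothesis, Bayes' rule gives the revised confidence. The numbers below are purely illustrative; this is not the authors' actual framework.

    ```python
    # Toy sketch, not the authors' framework: updating confidence that the
    # original effect is real after a "failed" replication, given assumed
    # probabilities of a failed replication under each hypothesis.
    def posterior_effect_is_real(prior, p_fail_if_real, p_fail_if_null):
        """Bayes' rule: P(effect is real | failed replication)."""
        num = p_fail_if_real * prior
        return num / (num + p_fail_if_null * (1.0 - prior))

    # Illustrative numbers: 70% prior confidence, an underpowered replication
    # that fails 40% of the time even when the effect is real, and 90% of the
    # time when it is not.
    print(posterior_effect_is_real(0.70, 0.40, 0.90))  # approx. 0.51
    ```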

  19. An Overview of Space Exploration Simulation (Basis of Confidence) Documentation

    NASA Technical Reports Server (NTRS)

    Bray, Alleen; Hale, Joseph P.

    2006-01-01

    Models and simulations (M&S) are critical resources in the exploration of space. They support program management, systems engineering, integration, analysis, test, and operations by providing critical information that supports key analyses and decisions (technical, cost, and schedule). Consequently, there is a clear need to establish a solid understanding of M&S strengths and weaknesses, and the bounds within which they can credibly support decision making. In this presentation we will describe how development of simulation capability documentation will be used to form a Basis of Confidence (BOC) for National Aeronautics and Space Administration (NASA) M&S. The process by which BOC documentation is developed will be addressed, as well as the structure and critical concepts that are essential for establishing credibility of NASA's Exploration Systems Mission Directorate (ESMD) legacy M&S. We will illustrate the significance of BOC documentation in supporting decision makers and Accreditation Authorities in M&S risk management.

  20. Simulator effects on cognitive skills and confidence levels.

    PubMed

    Brannan, Jane D; White, Anne; Bezanson, Judy L

    2008-11-01

    Use of a human patient simulator (HPS) as a tool for experiential learning provides a mechanism by which students can participate in clinical decision making, practice skills, and observe outcomes from clinical decisions. The purpose of this study was to compare the effectiveness of two instructional methods to teach specific nursing education content, acute myocardial infarction, on junior-level nursing students' cognitive skills and confidence. The instructional methods included an interactive approach using the HPS method, compared with traditional classroom lecture. Results of this study suggest that use of a teaching strategy involving the HPS method made a positive difference in the nursing students' ability to answer questions on a test of cognitive skills. Confidence levels were not found to be significantly enhanced by use of the HPS method. PMID:19010047